# SVM Viability Kernel Approximation (2/3)

## SVM Viability kernel approximation algorithm

### Approximating viability kernel with a classification procedure

Consider a grid *Kh* covering the viability constraint set, with *h* the maximal distance between neighbouring points. At the first iteration, all the points *xh* of the grid are labelled viable. We also suppose that the dynamics *G* is *mu*-Lipschitz.

At iteration *n+1*, the algorithm removes the points that clearly go outside the generalisation of the approximation of the viability kernel at step *n*. A point that is still viable at iteration *n+1* keeps the label *+1*; otherwise it is relabelled *-1*. We obtain a classification function *l* that defines the current approximation of the viability kernel:

**Algorithm**
We iteratively define the sets such that *K0h = Kh* and, for each *n*, a point *xh* belongs to *Kn+1h* when at least one of its successors *G(xh)* lies within the distance *mu.h* of the generalisation *L(Knh)* given by the classification function *ln*.

Then, if the classification procedure respects some conditions, the approximation defined by the classification function converges to the viability kernel when the grid resolution *h* tends to 0.

The algorithm thus provides an outer approximation of viability kernels.
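The iterative procedure above can be sketched on a toy one-dimensional system. Everything below (the dynamics *x' = r.x + u*, the constraint set *K = [0, 1]*, the controls and the discretisation) is an illustrative assumption, and the grid itself stands in for the classification step, so there is no generalisation between grid points:

```python
import numpy as np

# Toy dynamics (an assumption, not from the text): x_{k+1} = x + (r*x + u)*dt,
# controls u in U, constraint set K = [0, 1]; the true kernel here is [0, 0.4].
r, dt = 0.5, 0.1
U = np.linspace(-0.2, 0.2, 5)            # admissible controls
grid = np.linspace(0.0, 1.0, 101)        # grid K_h over the constraint set
h = grid[1] - grid[0]                    # resolution of the grid

viable = np.ones(grid.size, dtype=bool)  # iteration 0: every point labelled +1

changed = True
while changed:                           # refine until the set stops shrinking
    changed = False
    for i, x in enumerate(grid):
        if not viable[i]:
            continue
        # keep x only if some successor stays in K near a viable grid point
        succ = x + (r * x + U) * dt      # images G(x) under each control
        ok = any(0.0 <= y <= 1.0 and viable[int(round(y / h))] for y in succ)
        if not ok:
            viable[i] = False            # relabel -1: removed at this iteration
            changed = True

kernel = grid[viable]
print(kernel.min(), kernel.max())        # an outer approximation of [0, 0.4]
```

As the text notes, the result is an outer approximation: the grid rounding keeps some points slightly above the true upper bound of the kernel, and the approximation tightens as *h* decreases.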

### Approximating viability kernel with support vector machines

We consider SVMs as a relevant classification procedure in this context. The main advantage of choosing SVMs is that they provide a kind of barrier function of the viability kernel boundary, which enables one to use gradient techniques to find a viable control.

**Initialization**: discretisation of the state space; the points outside the constraint set provide the initial non-viable examples

**Iteration n+1**, given *SVMn*:

- use a gradient method to find a viable control for each point (possibly extended over several time steps)
- update the labels from *SVMn* and train *SVMn+1*
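One SVM iteration can be sketched as follows. The use of scikit-learn's `SVC`, the 2-D disc-shaped training set and the toy dynamics *x(k+1) = x + u.dt* are all assumptions for illustration; the point is that the decision function of *SVMn* plays the role of the barrier function, on which a viable control is searched by (numerical) gradient ascent:

```python
import numpy as np
from sklearn.svm import SVC

# Labelled grid at iteration n (a hypothetical 2-D example, not from the text):
# points inside the unit disc are currently viable (+1), the rest are -1.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 2))
y = np.where(np.linalg.norm(X, axis=1) <= 1.0, 1, -1)

# SVM_n: a large C so that no training point is misclassified
svm_n = SVC(kernel="rbf", C=1e6, gamma=1.0).fit(X, y)
f = svm_n.decision_function          # acts as a barrier function of the boundary

# Toy controlled dynamics (an assumption): x_{k+1} = x + u*dt, |u_i| <= u_max
dt, u_max, eps = 0.1, 1.0, 1e-4

def viable_control(x, steps=50, lr=0.5):
    """Numerical gradient ascent on u -> f(x + u*dt): push the successor
    towards the inside of SVM_n."""
    u = np.zeros(2)
    for _ in range(steps):
        base = f([x + u * dt])[0]
        g = np.array([(f([x + (u + eps * e) * dt])[0] - base) / eps
                      for e in np.eye(2)])
        u = np.clip(u + lr * g, -u_max, u_max)
    return u

x = np.array([1.05, 0.0])            # a point close to the current boundary
u = viable_control(x)
print(f([x])[0], f([x + u * dt])[0])
```

If no admissible control brings the successor inside *SVMn*, the point is relabelled *-1* and *SVMn+1* is trained on the updated labels.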

## SVM Viability controller

The idea is to keep the same control *u0* until the point at the next time step reaches the boundary of the approximation reduced by a security distance *Delta* (the algorithm provides an outer approximation, so it is necessary to shrink it in order to stay within the kernel). When the point reaches the security distance, a viable control is found using the gradient descent of the SVM function *f*.

It is possible to define more or less cautious controllers by anticipating over several time steps.
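The control rule above can be sketched in one dimension. All numbers below are assumptions, and the stand-in barrier `f` replaces the SVM decision function (it is positive inside a kernel approximation taken to be *[0, 0.4]*); in 1-D the gradient step on *f* reduces to picking the control that pushes the state back towards the interior:

```python
# Toy 1-D sketch of the Delta-cautious controller (all numbers are assumptions):
# dynamics x_{k+1} = x + (r*x + u)*dt on K = [0, 1], admissible |u| <= u_max.
r, dt, u_max, delta = 0.5, 0.1, 0.2, 0.05
f = lambda x: 0.4 - x                  # distance to the upper kernel boundary

def controller(x, u0=0.1, steps=40):
    """Apply u0 while the successor stays at distance > delta inside {f >= 0};
    otherwise switch to the control that increases f fastest (here u = -u_max,
    the 1-D analogue of a gradient step on f)."""
    traj = [x]
    for _ in range(steps):
        u = u0
        if f(x + (r * x + u) * dt) <= delta:   # security distance reached
            u = -u_max
        x = x + (r * x + u) * dt
        traj.append(x)
    return traj

traj = controller(0.1)
print(min(traj), max(traj))            # the trajectory stays inside [0, 0.4]
```

A more cautious variant would test the condition several time steps ahead before committing to *u0*.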

## Fulfilment of algorithm convergence condition

One condition the SVMs must respect in order to prove the convergence of the algorithm is that they produce no misclassified training points. In practice, one must choose a high C-value in the SVM settings and a "good" kernel parameter value (when using a Gaussian kernel). An algorithm has been developed at LISC to set the SVM kernel bandwidth automatically:
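The effect of the C-value on this condition can be checked directly. The labelled set below is a hypothetical example (deterministic labels on random points, hence perfectly consistent training data), using scikit-learn's `SVC`:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical labelled grid (not from the text): deterministic labels on
# random points, so a perfectly consistent training set.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.where(X[:, 0] + 0.3 * np.sin(3 * X[:, 1]) > 0, 1, -1)

errs = {}
for C in (1.0, 1e6):
    clf = SVC(kernel="rbf", C=C, gamma=50.0).fit(X, y)
    errs[C] = int((clf.predict(X) != y).sum())  # misclassified training points

print(errs)  # with a large C, the count should drop to zero
```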

- Loosli, G., Deffuant, G., Canu, S. (2008) BALK Bandwidth Autosetting for SVM with Local Kernels. Application to data on incomplete grids. CAP08: Conférence francophone sur l'apprentissage automatique.
