Getting started
The following piece of code introduces essentially everything you ever need to learn. It defines variables using sdpvar; constraints; an objective; and options, including solver options, via sdpsettings. It then solves the problem using optimize, checks the result, and extracts the solution. (Note that the code selects the solver QUADPROG. If you don't have QUADPROG installed, simply remove that solver selection from the definition of options; YALMIP will pick some other solver it finds installed.)
% It's good practice to start by clearing YALMIP's internal database
% Every time you call sdpvar etc., the internal database grows larger
yalmip('clear')
% Define variables
x = sdpvar(10,1);
% Define constraints
Constraints = [sum(x) <= 10, x(1) == 0, 0.5 <= x(2) <= 1.5];
for i = 1 : 7
Constraints = [Constraints, x(i) + x(i+1) <= x(i+2) + x(i+3)];
end
% Define an objective
Objective = x'*x+norm(x,1);
% Set some options for YALMIP and solver
options = sdpsettings('verbose',1,'solver','quadprog','quadprog.maxiter',100);
% Solve the problem
sol = optimize(Constraints,Objective,options);
% Analyze error flags
if sol.problem == 0
% Extract and display value
solution = value(x)
else
disp('Hmm, something went wrong!');
sol.info
yalmiperror(sol.problem)
end
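If you do not have QUADPROG, a minimal variant of the options line is the same sdpsettings call with the solver selection removed, letting YALMIP pick any installed solver:
options = sdpsettings('verbose',1); % no solver specified, YALMIP chooses one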
Having seen that, let us start from the beginning.
YALMIP's symbolic variable
The most important command in YALMIP is sdpvar. This command is used to define the decision variables. To define a matrix (or scalar) P with n rows and m columns, we write
P = sdpvar(n,m);
Note that a square matrix is symmetric by default! To obtain a fully parameterized (i.e. not necessarily symmetric) square matrix, a third argument is needed.
P = sdpvar(3,3,'full');
Tip: If you omit the semicolon, or simply type the name of the variable, the expression is displayed and you can check its properties
P
Linear matrix variable 3x3 (full, real, 9 variables)
The third argument can be used to obtain a number of pre-defined types of variables, such as Toeplitz, Hankel, diagonal, symmetric and skew-symmetric matrices. See the help text on sdpvar for details. Alternatively, standard MATLAB commands can be applied to a vector.
x = sdpvar(n,1);
D = diag(x) ; % Diagonal matrix
H = hankel(x); % Hankel matrix
T = toeplitz(x); % Toeplitz matrix
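Returning to the third-argument flags mentioned above, a short sketch follows (the flag names 'toeplitz', 'hankel' and 'skew' are taken from the sdpvar help text; check help sdpvar if they differ in your version):
X = sdpvar(n,n,'toeplitz'); % Toeplitz matrix
Y = sdpvar(n,n,'hankel');   % Hankel matrix
Z = sdpvar(n,n,'skew');     % skew-symmetric matrix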
Scalars can be defined in three different ways.
x = sdpvar(1,1); y = sdpvar(1,1);
x = sdpvar(1); y = sdpvar(1);
sdpvar x y
The sdpvar objects are manipulated in MATLAB like any other variables, and most functions are overloaded. Hence, the following commands are valid
P = sdpvar(3,3) + diag(sdpvar(3,1));
X = [P P;P eye(length(P))] + 2*trace(P);
Y = X + sum(sum(P*rand(length(P)))) + P(end,end)+hankel(X(:,1));
When developing code, and you are not yet a YALMIP pro, make it a habit to look at expressions and variables to check that they actually have the properties you expect. Here, for instance, X should be a real, symmetric 6x6 matrix unless we have made some mistake, and indeed it is: it is constructed from the 9 variables that define the symmetric 3x3 matrix P.
>> X
Linear matrix variable 6x6 (symmetric, real, 9 variables)
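If you prefer a programmatic check over visual inspection, something along the following lines can be used (a small sketch; the is command with the 'symmetric' tag is assumed from the sdpvar help text, so adjust if your version differs):
size(X)           % should return 6 6
is(X,'symmetric') % assumed YALMIP query, should return 1 (true)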
In some situations, coding is simplified with a multi-dimensional variable. This is supported in YALMIP with two different constructs, cell arrays and multi-dimensional sdpvar objects.
The cell array format is nothing but an abstraction of the following code
for i = 1:5
X{i} = sdpvar(2,3);
end
By using vector dimensions in sdpvar, the same cell array can be set up as follows
X = sdpvar([2 2 2 2 2],[3 3 3 3 3]);
The cell array can now be used as usual in MATLAB.
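As a small sketch of that, using the X{i} objects defined above, the blocks can be indexed and combined just like any other cell contents:
Objective = 0;
Constraints = [];
for i = 1:5
Objective = Objective + sum(sum(X{i}));    % sum of all entries in every block
Constraints = [Constraints, X{i}(:) >= 0]; % element-wise non-negativity per block
end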
The drawback with the approach above is that the variable X cannot be used directly as a standard sdpvar object (operations such as plus etc. are not overloaded for cells in MATLAB). As an alternative, a completely general multi-dimensional sdpvar is available. We can create an essentially equivalent object with this call.
X = sdpvar(2,3,5);
The difference is that we can operate directly on this object, using standard MATLAB code.
Y = sum(X,3)
X(:,:,2)
Note that the slices along the first two dimensions, i.e. X(:,:,i), are symmetric (if the first two dimensions are equal), according to standard YALMIP syntax. To create a fully parameterized higher-dimensional variable, use trailing flags as in the standard case.
X = sdpvar(2,2,2,2,'full');
For an illustration of multi-dimensional variables, check out the Sudoku example.
Constraints
To define a collection of constraints, we simply create and concatenate them. The meaning of a constraint is context-dependent. If the left-hand side and right-hand side are Hermitian, the constraint is interpreted in terms of positive definiteness, otherwise element-wise. Hence, declaring a symmetric matrix and a positive definiteness constraint is done with
n = 3;
P = sdpvar(n,n);
C = [P>=0];
while a symmetric matrix with positive elements is defined with, e.g.,
P = sdpvar(n,n);
C = [P(:)>=0];
Note that this defines the off-diagonal constraints twice. A good SDP solver will perhaps detect this during preprocessing and reduce the model, but we can of course define only the unique elements manually using standard MATLAB code
C = [triu(P)>=0];
or
C = [P(find(triu(ones(n))))>=0];
According to the rules above, a non-square matrix (or, more generally, a non-symmetric matrix) with positive elements can be defined using the >= operator immediately
P = sdpvar(n,2*n);
C = [P>=0];
and so can a fully parameterized square matrix with positive elements
P = sdpvar(n,n,'full');
C = [P>=0];
A list of several constraints is defined by simply adding them or, alternatively, concatenating them.
P = sdpvar(n,n);
C = [P>=0] + [P(1,1)>=2];
C = [P>=0, P(1,1)>=2];
Tip: The constraint list can be displayed, so you can check that you actually have defined the type of constraints you intended to. Just as we make it a habit to look at expressions as we develop code, we should also look at constraint lists to see that they match our intention
C
++++++++++++++++++++++++++++++++++++++
| ID| Constraint|
++++++++++++++++++++++++++++++++++++++
| #1| Matrix inequality 3x3|
| #2| Element-wise inequality 1x1|
++++++++++++++++++++++++++++++++++++++
Of course, the involved expressions can be arbitrary sdpvar objects, and equality constraints (==) can be defined, as well as constraints using <=.
C = [P>=0, P(1,1)<=2, sum(sum(P))==10]
++++++++++++++++++++++++++++++++++++++
| ID| Constraint|
++++++++++++++++++++++++++++++++++++++
| #1| Matrix inequality 3x3|
| #2| Element-wise inequality 1x1|
| #3| Equality constraint 1x1|
++++++++++++++++++++++++++++++++++++++
A convenient way to define several constraints is to use double-sided constraints.
F = [0 <= P(1,1) <= 2];
Strict inequalities cannot be used (YALMIP will warn you), as they have no meaning when optimizing with numerical solvers, which always work with numerical tolerances anyway. Hence, if you need the upper bound to be strict, you have to select a margin. Note that selecting this margin is a tricky issue: too small and it makes no difference, as it drowns in the general tolerances solvers use to decide that a constraint is close enough to feasible; too large and it might reduce your feasible space too much.
my_tolerance_for_strict = 1e-5;
F = [0 <= P(1,1) <= 2-my_tolerance_for_strict,
P >= eye(n)*my_tolerance_for_strict];
Note, though, that strict inequalities are often part of a homogeneous problem; such a problem should be dehomogenized by adding a single constraint such as P>=eye(n) and replacing all other strict constraints with non-strict ones, as sketched below.
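As a sketch of that idea on a hypothetical Lyapunov-type feasibility problem (the matrix A below is made-up data, not from the text above): the strict constraints P > 0 and A'*P + P*A < 0 are invariant to positive scaling of P, so they can be replaced by
A = [-1 2;0 -2];                    % hypothetical stable system matrix
P = sdpvar(2,2);
F = [P >= eye(2), A'*P + P*A <= 0]; % dehomogenized, non-strict formulation
optimize(F);                        % feasibility problem, no objective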
Several constraints can be appended as usual in MATLAB, in for-loops etc.
F = [0 <= P(1,1) <= 2];
for i = 2:n-1
F = [F, P(i,1) <= P(2,i) - P(i,i)];
end
After having defined variables and constraints, you are ready to solve problems. Check out the remaining tutorials to learn more about this.