Simple Distributionally Robust Optimization
A simple distributionally robust optimization example is presented to demonstrate how to model an ambiguity set and implement a generalized decision rule in XProg. Suppose that there is one random variable \(\tilde{z}\), and the adjustable decision made after observing the uncertainty realization \(z\) is denoted by \(y(z)\). The distributionally robust optimization model is expressed as the following problem.
\begin{align}
\min~&\max\limits_{\mathbb{P}\in\mathbb{F}} \mathbb{E}_{\mathbb{P}}\left\{y(\tilde{z})\right\} \\
\text{s.t.}~&y(z)\geq z, ~~~~~\forall z\in \mathcal{Z}\\
&y(z)\geq -z, ~~\forall z\in \mathcal{Z} \\
\end{align} where \(\mathbb{P}\) is the distribution of random variable \(\tilde{z}\), \(\mathcal{Z}\) is the support set of \(\tilde{z}\), which is set to be \([-2,~2]\), and \(\mathbb{F}\) is the ambiguity set that characterizes a collection of distributions, which is expressed in the equation below.
\begin{align}
\mathbb{F}=\left\{
\mathbb{P}\in\mathcal{P}_0(\mathbb{R}):
\begin{array}{l}
\tilde{z}\in\mathbb{R}\\
\mathbb{E}_{\mathbb{P}}\left(\tilde{z}\right)=0 \\
\mathbb{E}_{\mathbb{P}}\left(\tilde{z}^2\right)\leq 1 \\
\mathbb{P}\left\{\tilde{z}\in\mathcal{Z}\right\}=
\mathbb{P}\left\{-2\leq \tilde{z}\leq 2\right\}=1
\end{array}
\right\}
\end{align} For this simple case, it is apparent that the optimal adjustable decision is \(y(z)=|z|\), and the objective value is \(1\). For a general problem, however, such adjustable decisions are intractable to identify, so a linear decision rule is applied, which approximates the actual adjustable decision by an affine function of the random variable \(\tilde{z}\) and some auxiliary variables. Following reference (Bertsimas, Sim and Zhang 2014), the ambiguity set is extended to the following lifted form by introducing an auxiliary variable \(\tilde{u}\) into the set.
\begin{align}
\mathbb{G}=\left\{
\mathbb{Q}\in\mathcal{P}_0(\mathbb{R}^2):
\begin{array}{l}
\tilde{z}\in\mathbb{R},~\tilde{u}\in\mathbb{R}\\
\mathbb{E}_{\mathbb{Q}}\left(\tilde{z}\right)=0 \\
\mathbb{E}_{\mathbb{Q}}\left(\tilde{u}\right)\leq 1 \\
\mathbb{Q}\left\{
\begin{array}{l}
-2 \leq \tilde{z} \leq 2 \\
\tilde{z}^2\leq \tilde{u} \leq 4\\
\end{array}
\right\}=1
\end{array}
\right\}
\end{align} The decision rule is a function of the uncertainty \(z\) and the auxiliary variable \(u\), expressed as \(\bar{y}(z,u)\) in the equation below.
\begin{align}
\bar{y}(z,u)=y^0+y^zz+y^uu
\end{align} After the recourse decision is replaced by the linear decision rule, XProg automatically transforms the distributionally robust model into a tractable robust counterpart and solves it with the external solver CPLEX. Details of defining the extended ambiguity set and the generalized decision rule are explained in the following code.
model=xprog('simple_dro');     % create a model named 'simple_dro'
y=model.recourse(1,1);         % define decision rule y for the adjustable decision
z=model.random(1);             % define random variable z
u=model.random(1);             % define auxiliary variable u
y.depend(z);                   % define dependency of decision rule y on z
y.depend(u);                   % define dependency of decision rule y on u
model.uncertain(expect(z)==0); % 2nd line of set G: mean value of z
model.uncertain(expect(u)<=1); % 3rd line of set G: variance of z
model.uncertain(z<= 2);        % 4th line of set G: support set of z
model.uncertain(z>=-2);        % 4th line of set G: support set of z
model.uncertain(z.^2<=u);      % 4th line of set G: u is the upper bound of z^2
model.uncertain(u<=2^2);       % 4th line of set G: the upper limit of u is 2^2
model.min(expect(y));          % objective function is the expected value of y
model.add(y>=z);               % 1st constraint
model.add(y>=-z);              % 2nd constraint
model.solve;                   % solve the problem
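As an independent cross-check of the objective value, the inner worst-case expectation \(\max_{\mathbb{P}\in\mathbb{F}}\mathbb{E}_{\mathbb{P}}|\tilde{z}|\) can be approximated by discretizing the support set and solving a small linear program over the probability masses. The following Python sketch (using NumPy and SciPy, not part of XProg) illustrates this idea:

```python
# Sketch: approximate max E[|z|] over the ambiguity set F by a finite LP.
# The support [-2, 2] is gridded; the decision variables are the masses p_i.
import numpy as np
from scipy.optimize import linprog

z = np.round(np.linspace(-2, 2, 401), 6)  # grid of the support set Z

c = -np.abs(z)                      # maximize sum p_i |z_i| -> minimize -|z| . p
A_eq = np.vstack([np.ones_like(z),  # probabilities sum to 1
                  z])               # E[z] = 0
b_eq = [1.0, 0.0]
A_ub = z[None, :] ** 2              # E[z^2] <= 1
b_ub = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(-res.fun)  # worst-case E[|z|]; matches the objective value of 1
```

The LP places probability mass \(1/2\) at \(z=\pm 1\), recovering the optimal value of \(1\) obtained by the XProg model.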
The solution suggests that the optimal decision rule is \(\bar{y}=\frac{1+z^2}{2}\). The actual optimal recourse decision and the optimal decision rule, together with the worst-case distribution, are displayed by the figure below.
It can be seen that although \(\bar{y}(z,u)\) is a conservative approximation, it coincides with the optimal recourse decision wherever the worst-case distribution places nonzero probability, so the worst-case expected value of \(\bar{y}(z,u)\) is still \(1\).
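This tightness can be verified directly: \((1+z^2)/2-|z|=(|z|-1)^2/2\geq 0\), so the rule dominates \(|z|\) on the whole support, while a two-point distribution placing probability \(1/2\) at \(z=\pm 1\) (our construction for illustration, not XProg output) lies in \(\mathbb{F}\) and attains the expectation \(1\). A short Python sketch:

```python
import numpy as np

z = np.linspace(-2, 2, 4001)
y_bar = (1 + z**2) / 2                   # optimal decision rule (with u = z^2)

# Feasibility: the rule dominates |z| on the whole support,
# since (1 + z^2)/2 - |z| = (|z| - 1)^2 / 2 >= 0.
assert np.all(y_bar >= np.abs(z) - 1e-12)

# A candidate worst-case distribution (our construction): mass 1/2 at z = -1
# and z = +1. It satisfies E[z] = 0 and E[z^2] = 1, so it belongs to F.
pts, prob = np.array([-1.0, 1.0]), np.array([0.5, 0.5])
assert abs(prob @ pts) < 1e-12           # E[z] = 0
assert abs(prob @ pts**2 - 1) < 1e-12    # E[z^2] = 1
print(prob @ np.abs(pts))                # E[|z|] = 1.0, matching the objective
```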
Now consider a case with two confidence sets incorporated into the ambiguity set to better capture the ambiguous distributional information. Details of including confidence sets are provided in reference (Wiesemann, Kuhn, and Sim 2014). The updated ambiguity set is expressed as follows.
\begin{align}
\mathbb{F}=\left\{
\mathbb{P}\in\mathcal{P}_0(\mathbb{R}):
\begin{array}{l}
\tilde{z}\in\mathbb{R}\\
\mathbb{E}_{\mathbb{P}}\left(\tilde{z}\right)=0 \\
\mathbb{E}_{\mathbb{P}}\left(\tilde{z}^2\right)\leq 1 \\
\mathbb{P}\left\{\tilde{z}\in\mathcal{Z}\right\}=
\mathbb{P}\left\{-2\leq \tilde{z}\leq 2\right\}=1 \\
\mathbb{P}\left\{\tilde{z}\in\mathcal{Z}_1\right\}=
\mathbb{P}\left\{-1\leq \tilde{z}\leq 1\right\}=0.9 \\
\mathbb{P}\left\{\tilde{z}\in\mathcal{Z}_2\right\}=
\mathbb{P}\left\{-0.5\leq \tilde{z}\leq 0.5\right\}\in[0.6,~0.7]
\end{array}
\right\}
\end{align} Note that so far XProg can only deal with the case where the confidence sets are proper subsets of the support set. Similarly, the extended ambiguity set can be formulated as follows.
\begin{align}
\mathbb{G}=\left\{
\mathbb{Q}\in\mathcal{P}_0(\mathbb{R}^2):
\begin{array}{l}
\tilde{z}\in\mathbb{R},~\tilde{u}\in\mathbb{R}\\
\mathbb{E}_{\mathbb{Q}}\left(\tilde{z}\right)=0 \\
\mathbb{E}_{\mathbb{Q}}\left(\tilde{u}\right)\leq 1 \\
\mathbb{Q}\left\{
\begin{array}{l}
-2 \leq \tilde{z} \leq 2 \\
\tilde{z}^2\leq \tilde{u} \leq 4\\
\end{array}
\right\}=1 \\
\mathbb{Q}\left\{
\begin{array}{l}
-1 \leq \tilde{z} \leq 1 \\
\tilde{z}^2\leq \tilde{u} \leq 1\\
\end{array}
\right\}=0.9 \\
\mathbb{Q}\left\{
\begin{array}{l}
-0.5 \leq \tilde{z} \leq 0.5 \\
\tilde{z}^2\leq \tilde{u} \leq 0.25\\
\end{array}
\right\}\in [0.6,~0.7]
\end{array}
\right\}
\end{align} The code for the updated problem is given below.
model=xprog('simple_dro');     % create a model named 'simple_dro'
y=model.recourse(1,1);         % define decision rule y for the adjustable decision
z=model.random(1);             % define random variable z
u=model.random(1);             % define auxiliary variable u
y.depend(z);                   % define dependency of y on z
y.depend(u);                   % define dependency of y on u
model.uncertain(expect(z)==0); % 2nd line of set G: mean value of z
model.uncertain(expect(u)<=1); % 3rd line of set G: variance of z
model.uncertain(z<= 2);        % 4th line of set G: support set Z
model.uncertain(z>=-2);        % 4th line of set G: support set Z
model.uncertain(z.^2<=u);      % 4th line of set G: u is the upper bound of z^2
model.uncertain(u<=2^2);       % 4th line of set G: upper limit of u
Z1=model.subset(0.9);          % define Z1 to be a proper subset of the support set, P{Z1} is 0.9
model.uncertain(z<= 1,Z1);     % 5th line of set G: confidence set Z1
model.uncertain(z>=-1,Z1);     % 5th line of set G: confidence set Z1
model.uncertain(z.^2<=u,Z1);   % 5th line of set G: u is the upper bound of z^2
model.uncertain(u<=1^2,Z1);    % 5th line of set G: upper limit of u
Z2=model.subset([0.6 0.7],Z1); % define Z2 to be a proper subset of Z1, P{Z2} is between 0.6 and 0.7
model.uncertain(z<= 0.5,Z2);   % 6th line of set G: confidence set Z2
model.uncertain(z>=-0.5,Z2);   % 6th line of set G: confidence set Z2
model.uncertain(z.^2<=u,Z2);   % 6th line of set G: u is the upper bound of z^2
model.uncertain(u<=0.5^2,Z2);  % 6th line of set G: upper limit of u
model.min(expect(y));          % the objective function is the expected value of y
model.add(y>=z);               % 1st constraint
model.add(y>=-z);              % 2nd constraint
model.solve;                   % solve the problem
The updated objective value becomes 0.9920. The optimal recourse decision and the decision rule, together with the worst-case distribution, are depicted in the figure below.
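For reference, the exact worst-case expectation of the fully adjustable decision \(y(z)=|z|\) under the updated ambiguity set can again be approximated by a discretized linear program (a Python sketch independent of XProg, with the confidence-set probabilities added as extra mass constraints). The linear-decision-rule objective of 0.9920 is an upper bound on this exact value, so comparing the two indicates how conservative the approximation is under the richer ambiguity set.

```python
# Sketch: benchmark the decision-rule objective against the exact worst-case
# expectation max E[|z|] over the updated ambiguity set, via a finite LP.
import numpy as np
from scipy.optimize import linprog

z = np.round(np.linspace(-2, 2, 401), 6)     # grid of the support set Z
in_Z1 = (np.abs(z) <= 1.0).astype(float)     # confidence set Z1 = [-1, 1]
in_Z2 = (np.abs(z) <= 0.5).astype(float)     # confidence set Z2 = [-0.5, 0.5]

c = -np.abs(z)                               # maximize E[|z|]
A_eq = np.vstack([np.ones_like(z), z, in_Z1])
b_eq = [1.0, 0.0, 0.9]                       # total mass, E[z] = 0, P{Z1} = 0.9
A_ub = np.vstack([z**2, in_Z2, -in_Z2])
b_ub = [1.0, 0.7, -0.6]                      # E[z^2] <= 1, 0.6 <= P{Z2} <= 0.7

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(-res.fun)  # exact worst-case E[|z|]; the value 0.9920 bounds it from above
```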