Robust Optimization Harvard Case Solution & Analysis


The mathematical programming community has become increasingly focused on the effects of parameter uncertainty on optimization. Solutions to optimization problems can display notable sensitivity to perturbations in the problem's parameters, often rendering a computed solution highly infeasible, suboptimal, or both (in short, practically meaningless).

In science and engineering, robustness is hardly a new idea. From the perspective of optimization, robust control is the most closely related field. There are several similarities at a high level, and there is no doubt that the motivation to build robust optimization came from the robust control community. Robust optimization is nevertheless a distinct field, concentrated on the typical concerns of optimization theory, particularly algorithms, tractability, and geometry, along with the modelling power and structural results that arise more broadly in the treatment of robustness. In contrast, stochastic optimization starts by assuming a probabilistic description of the uncertainty, whereas robust optimization treats uncertainty with a deterministic, set-based description.
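The stochastic-versus-robust distinction above can be made concrete with a toy sketch (the decisions, scenarios, and cost values below are all hypothetical, chosen only to illustrate the two criteria): a stochastic approach ranks decisions by expected cost over scenarios, while a robust approach ranks them by worst-case cost.

```python
import numpy as np

# Hypothetical cost table: rows = candidate decisions, columns = scenarios.
# Decision A is cheap on average but has one very bad scenario;
# decision B costs slightly more on average but is stable.
costs = np.array([
    [1.0, 2.0, 6.0],   # decision A
    [3.0, 3.0, 3.5],   # decision B
])

stochastic_pick = np.argmin(costs.mean(axis=1))  # expected-cost criterion
robust_pick = np.argmin(costs.max(axis=1))       # worst-case criterion

print(stochastic_pick, robust_pick)  # 0 1
```

The expected-cost criterion prefers decision A (mean 3.0 vs. 3.17), while the worst-case criterion prefers decision B (max 3.5 vs. 6.0), which is exactly the trade-off robust optimization is designed to control.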

For any given optimization problem, there can be multiple robust versions, depending on the composition of the uncertainty set. When formulating a robust counterpart of an optimization problem, the main issue that arises is maintaining tractability.
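As a minimal sketch of how an uncertainty set shapes the robust counterpart (the numbers here are illustrative assumptions, not from the text): for a linear constraint a·x ≤ b whose coefficients each lie in a box [a_i − δ_i, a_i + δ_i], the worst case over the box has a closed form, so the robust counterpart stays tractable.

```python
import numpy as np

def robust_constraint_satisfied(x, a_nom, delta, b):
    """Check the robust counterpart of a . x <= b, where each coefficient
    a_i ranges over the interval [a_nom_i - delta_i, a_nom_i + delta_i]
    (a box uncertainty set). The worst case over the box is
    sum_i (a_nom_i * x_i + delta_i * |x_i|), so the robust constraint
    reduces to a single deterministic inequality."""
    worst_case = a_nom @ x + delta @ np.abs(x)
    return bool(worst_case <= b)

# Nominal constraint 2*x1 + 3*x2 <= 10, each coefficient uncertain by +/- 0.5.
a_nom = np.array([2.0, 3.0])
delta = np.array([0.5, 0.5])

print(robust_constraint_satisfied(np.array([1.0, 2.0]), a_nom, delta, 10.0))  # True
print(robust_constraint_satisfied(np.array([2.0, 2.0]), a_nom, delta, 10.0))  # False
```

The first point is feasible even in the worst case (9.5 ≤ 10), while the second is nominally feasible (10 ≤ 10) but fails under the worst-case coefficients (12 > 10), illustrating how a nominally acceptable solution can be highly infeasible under uncertainty.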

When we describe data uncertainty, we specify the level of confidence we have in our data. There are many reasons why we might not be 100% confident that our data truly reflects the ground reality. Before looking at those reasons in more detail, it is useful to consider the two facets of uncertainty: accuracy and precision.

Accuracy refers to how close a measured value comes to the exact value, while precision is best understood through two aspects:

  • For a group of measurements, the extent to which the measured values are clustered around the mean value.
  • The resolution of the data, i.e. the smallest measurement distinction that can be recorded with a given tool or method.



There are many sources of data uncertainty. A few of them are listed below:

  • Biased sampling by the observer: while sampling the original data, the observer uses tools or procedures that do not select the data in an unbiased way.
  • Accuracy of the measuring device: the device used for measurement may be inaccurate or out of date, which introduces errors into the results.
  • Precision of the measuring device: the degree to which the measuring device is exactly calibrated.
  • Processing of the data: the steps used to clean the data can also affect the results; flawed data-cleaning processes can themselves introduce uncertainty.
  • Phenomenal variation: the nature of the data also matters. There are many kinds of data, so each must be handled according to its nature to reduce data uncertainty.
