I was wondering if there is a nice way (preferably using JuMP) to get all optimal solutions of a linear program (in case there are multiple optimal solutions).
An example
minimize the statistical distance (Kolmogorov distance) between two probability distributions.
min sum_{i=1}^{4} |P[i] - Q[i]| over free variable Q
P = [0.25,0.25,0.25,0.25]
sum_i P[i] = 1
Q[1] + Q[4] = 1
sum_i Q[i] = 1, Q[i] >= 0 -> Q[2] = Q[3] = 0
Note we can phrase the optimization as a linear program; the objective becomes
min sum_i S[i]
subject to
S[i] >= P[i] - Q[i]
S[i] >= Q[i] - P[i]
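For concreteness, here is a minimal JuMP sketch of this reformulation (assuming the HiGHS solver, though any LP solver works; I add Q[i] >= 0 explicitly since Q is a distribution):

    using JuMP, HiGHS

    P = [0.25, 0.25, 0.25, 0.25]

    model = Model(HiGHS.Optimizer)
    @variable(model, Q[1:4] >= 0)        # Q is a probability distribution
    @variable(model, S[1:4])             # S[i] models |P[i] - Q[i]|
    @constraint(model, sum(Q) == 1)
    @constraint(model, Q[1] + Q[4] == 1)
    @constraint(model, [i = 1:4], S[i] >= P[i] - Q[i])
    @constraint(model, [i = 1:4], S[i] >= Q[i] - P[i])
    @objective(model, Min, sum(S))

    optimize!(model)
    value.(Q)                # one optimal vertex, e.g. [0.75, 0, 0, 0.25]
    objective_value(model)   # = 1.0; the statistical distance is half of this, 0.5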
There is no unique solution to this problem; instead, the set of optimal solutions is spanned by
Q1 = [0.75,0,0,0.25]
Q2 = [0.25,0,0,0.75]
Both have the minimal distance of 0.5, and any convex combination of these two solutions is optimal.
I was wondering if there is a nice way to find all these optimal extreme points (the points that span the optimal set).
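One approach I'm considering (a sketch; it only probes coordinate directions, so it need not find every vertex of a higher-dimensional optimal face, but it suffices here) is to pin the objective at its optimum and then minimize and maximize each Q[i] over the optimal face:

    # Reusing `model`, `Q`, `S` from the sketch above.
    opt = objective_value(model)
    @constraint(model, sum(S) <= opt + 1e-8)   # small tolerance to stay on the optimal face

    points = Vector{Float64}[]
    for i in 1:4
        @objective(model, Min, Q[i]); optimize!(model); push!(points, value.(Q))
        @objective(model, Max, Q[i]); optimize!(model); push!(points, value.(Q))
    end
    unique(p -> round.(p; digits = 6), points)   # here: [0.75,0,0,0.25] and [0.25,0,0,0.75]

For a complete vertex enumeration of the optimal face one would presumably need a polyhedral tool such as Polyhedra.jl.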
Why am I interested in this: the point that gives the maximal Bhattacharyya coefficient (a concave function) lies somewhere in the middle of the optimal set of the statistical distance.
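Given that end goal, one alternative I can think of (a sketch, assuming Ipopt; recent JuMP versions accept a nonlinear objective directly, older ones need @NLobjective) is to skip vertex enumeration entirely: pin the distance at its optimal value as a constraint and maximize the concave Bhattacharyya coefficient over the optimal face:

    using JuMP, Ipopt

    P = [0.25, 0.25, 0.25, 0.25]
    model = Model(Ipopt.Optimizer)
    @variable(model, Q[1:4] >= 0)
    @variable(model, S[1:4])
    @constraint(model, sum(Q) == 1)
    @constraint(model, Q[1] + Q[4] == 1)
    @constraint(model, [i = 1:4], S[i] >= P[i] - Q[i])
    @constraint(model, [i = 1:4], S[i] >= Q[i] - P[i])
    @constraint(model, sum(S) <= 1.0 + 1e-6)    # pin sum_i |P[i]-Q[i]| at its optimum (1.0)
    # The tiny offset keeps the gradient of sqrt finite at Q[i] = 0.
    @objective(model, Max, sum(sqrt(P[i] * Q[i] + 1e-12) for i in 1:4))
    optimize!(model)
    value.(Q)   # lands in the middle of the optimal face, approximately [0.5, 0, 0, 0.5]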
So far I've tried to find optimal P,Q pairs (referring to the example I gave) by making the algorithm favor minimizing the distance between P[i] and Q[i], by adding a weight of 1.001 to this term in the sum. It seems to work to some extent, although I can hardly know for sure.
min sum A[i]
subject to A[i] >= P[i] - Q[i]
and A[i] >= Q[i] - P[i]
for each 1 <= i <= 4
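Concretely, that attempt is just a reweighted objective on the first LP sketch above (S plays the role of A; putting the 1.001 on the i = 1 term is a hypothetical choice, since it is left open above which term gets it):

    # Weighted tie-break: slightly up-weight one |P[i] - Q[i]| term.
    w = [1.001, 1.0, 1.0, 1.0]   # hypothetical placement of the extra weight
    @objective(model, Min, sum(w[i] * S[i] for i in 1:4))
    optimize!(model)
    value.(Q)

A perturbation like this tends to push the solver toward one vertex of the optimal face rather than its middle, which may be why it only works to some extent.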
– Remembrancer