e_0, movement costs are a dimension-dependently scaled Manhattan distance.

SSCO - Fractional - Uni-Dimensional
Stays within bounds on the optimal solution while moving backwards in time.
SCO - Fractional
Finds the minimizer of a convex objective.
SSCO - Integral - Uni-Dimensional
Finds a shortest path in a graph using dynamic programming (in polynomial time).
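As a hedged sketch of such a dynamic program (assuming uni-dimensional integral configurations 0..m, movement cost beta * |j - i|, per-step hitting cost functions f_t, and a starting configuration of 0; the function name and this toy cost model are illustrative, not from the source):

```python
# Dynamic program for uni-dimensional integral SSCO (illustrative sketch).
# States are the configurations 0..m; an edge from state i at time t-1 to
# state j at time t costs beta * |j - i| (movement) plus f_t(j) (hitting),
# so the optimal schedule is a shortest path through this layered graph.

def optimal_offline(hitting_costs, m, beta):
    """hitting_costs: list of functions f_t(j); returns (min cost, schedule)."""
    n = len(hitting_costs)
    INF = float("inf")
    cost = [0.0] + [INF] * m          # cheapest cost ending in state j; start in 0
    parent = []
    for f in hitting_costs:
        new_cost = [INF] * (m + 1)
        choice = [0] * (m + 1)
        for j in range(m + 1):
            for i in range(m + 1):
                c = cost[i] + beta * abs(j - i) + f(j)
                if c < new_cost[j]:
                    new_cost[j], choice[j] = c, i
        parent.append(choice)
        cost = new_cost
    # backtrack the shortest path
    j = min(range(m + 1), key=lambda k: cost[k])
    best = cost[j]
    schedule = [j]
    for choice in reversed(parent):
        j = choice[j]
        schedule.append(j)
    schedule.reverse()
    return best, schedule[1:]  # drop the initial state
```

The runtime is O(n * m^2), i.e. polynomial in the number of time steps and the number of configurations, which is why this only works when the decision space is uni-dimensional (or otherwise small).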
SSCO - Integral
Finds a shortest path in a graph using dynamic programming (not in polynomial time).
SSCO - Integral
Extends the graph-based optimal algorithm by only considering a subset of the decision space to achieve better performance.
SCO - Fractional
Solves a smaller problem than Convex Optimization to obtain an optimal static solution.
SCO - Integral
Cycles through all possible configurations to find the optimal static integral solution. Convexity is used to stop the search of the decision space quickly in practice; the worst-case runtime, however, is exponential.
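A uni-dimensional illustration of this early stopping: the total cost of a static configuration is convex in the configuration, so the scan can stop at the first increase. The function name and the cost model (a single move from configuration 0, then per-step hitting costs) are assumptions of this sketch; in higher dimensions the search space, and hence the worst case, grows exponentially.

```python
# Static integral optimum by scanning configurations (uni-dimensional sketch).
# total(j) is convex in j, so once the cost increases it never decreases again
# and the scan can terminate early.

def optimal_static(hitting_costs, m, beta):
    def total(j):
        # move to configuration j once, then pay its hitting cost every step
        return beta * j + sum(f(j) for f in hitting_costs)

    best_j, best = 0, total(0)
    for j in range(1, m + 1):
        c = total(j)
        if c >= best:       # convexity: cost only increases from here on
            break
        best_j, best = j, c
    return best_j, best
```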
SSCO - Fractional - Uni-Dimensional - 3-competitive
Lazily stays within fractional bounds on the decision space.
SSCO - Integral - Uni-Dimensional - 3-competitive (optimal for a deterministic algorithm)
Lazily stays within integral bounds on the decision space.
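Both lazy variants share the same skeleton: at each step, clamp the previous configuration into the current bounds, moving only as far as necessary. A sketch, assuming the per-step bounds (whose derivation from the offline optimum is omitted here) are given as input; names are illustrative:

```python
# Lazy provisioning skeleton (sketch): move only when the previous
# configuration leaves the current bounds [lower, upper], and then only
# to the nearest boundary.

def lazy_step(x_prev, lower, upper):
    return min(max(x_prev, lower), upper)

def lazy_schedule(x0, bounds):
    """bounds: iterable of (lower, upper) pairs; their computation from
    bounds on the offline optimum is problem-specific and omitted."""
    x, schedule = x0, []
    for lower, upper in bounds:
        x = lazy_step(x, lower, upper)
        schedule.append(x)
    return schedule
```

For the fractional variant the bounds are real-valued; for the integral variant they are integral, but the clamping step is identical.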
SSCO - Fractional - Uni-Dimensional - 3-competitive (optimal for a memoryless algorithm)
Moves towards the minimizer of the hitting cost balancing the paid movement cost. Special case of Primal Online Balanced Descent.
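A one-dimensional sketch of such a balancing step: move from the previous point towards the minimizer of the current hitting cost, stopping where the paid movement cost matches the remaining hitting cost. The exact balance constant used below (movement cost equal to half the hitting cost at the stopping point) and the bisection-based implementation are assumptions of this sketch:

```python
# Memoryless balancing step (1-D sketch). Move from x_prev towards the
# minimizer v of the hitting cost f, stopping at the point x where
# |x - x_prev| == f(x) / 2 (the balance constant 1/2 is an assumption).

def memoryless_step(f, x_prev, v, iters=60):
    if f(x_prev) / 2 <= 0:          # nothing to gain by moving
        return x_prev

    def gap(t):
        # movement cost minus half the hitting cost, t in [0, 1] of the way to v
        x = x_prev + t * (v - x_prev)
        return abs(x - x_prev) - f(x) / 2

    if gap(1.0) <= 0:               # even the full step is cheap enough
        return v
    lo, hi = 0.0, 1.0               # bisection on the balance condition:
    for _ in range(iters):          # gap(0) < 0 <= gap(hi), gap is increasing
        mid = (lo + hi) / 2
        if gap(mid) < 0:
            lo = mid
        else:
            hi = mid
    return x_prev + lo * (v - x_prev)
```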
SSCO - Fractional - Uni-Dimensional - 2-competitive (optimal)
Constructs a probability distribution of well-performing configurations over time.
SCO - Fractional - Uni-Dimensional - 2-competitive (optimal)
Uses a work function to balance hitting and movement costs.
SSCO - Integral - Uni-Dimensional - 2-competitive (optimal)
Randomly rounds solutions of any 2-competitive fractional algorithm.
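A minimal sketch of such a rounding, preserving the fractional solution in expectation. Using one shared random threshold for all time steps correlates consecutive roundings, which keeps the extra movement cost small; this particular coupling is an assumption of the sketch, not necessarily the scheme meant here:

```python
import random

# Randomized rounding sketch: round each fractional configuration x to
# floor(x) or ceil(x) with probability equal to its fractional part, so
# that E[rounded] == x. The shared threshold u couples the roundings of
# consecutive time steps (an assumption of this sketch).

def round_schedule(xs, u=None):
    if u is None:
        u = random.random()          # one threshold shared across all steps
    rounded = []
    for x in xs:
        frac = x - int(x)
        rounded.append(int(x) + (1 if frac > u else 0))
    return rounded
```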
SLO - Integral - 2d-competitive (optimal for a deterministic algorithm)
Keeps servers active for some minimum time delta even when they are not used, to balance hitting costs and movement costs.
SLO - Integral - (e / (e - 1))d-competitive
Keeps servers active for some minimum time delta even when they are not used, to balance hitting costs and movement costs.
SBLO - Integral - (2d + 1 + \epsilon)-competitive
Keeps servers active for some minimum time delta even when they are not used, to balance hitting costs and movement costs.
SCO - Fractional
Achieves sublinear regret by learning the static offline optimum.
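A standard way to obtain sublinear regret against the static offline optimum is online gradient descent with decaying step sizes; a sketch with illustrative costs and names (the concrete step-size schedule 1/sqrt(t) is a common choice, not necessarily the one meant here):

```python
# Online gradient descent sketch: with step sizes eta_t = 1/sqrt(t), the
# average hitting cost approaches that of the best static configuration
# in hindsight, i.e. regret grows sublinearly in the number of steps.

def ogd(gradients, x0, lo, hi):
    """gradients: list of gradient functions of the per-step hitting costs."""
    x, schedule = x0, []
    for t, grad in enumerate(gradients, start=1):
        schedule.append(x)
        x = x - grad(x) / t ** 0.5          # eta_t = 1 / sqrt(t)
        x = min(max(x, lo), hi)             # project back onto the domain
    return schedule
```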
SCO - Fractional - (3 + \mathcal{O}(1/\alpha))-competitive for the \ell_2 norm and locally \alpha-polyhedral hitting costs, \mathcal{O}(\sqrt{d})-competitive for the \ell_1 norm; mirror map must be m-strongly convex and M-Lipschitz smooth in the switching cost norm
Takes a gradient step orthogonal to some landing level set, balancing costs in the primal space.
SCO - Fractional - mirror map must be m-strongly convex and M-Lipschitz smooth in the switching cost norm
Takes a gradient step orthogonal to some landing level set, balancing costs in the dual space. Achieves sublinear regret.
SCO - Fractional - \mathcal{O}(1 / \sqrt{m})-competitive for m-quasiconvex hitting costs and \ell_2-squared switching costs
Takes a normal OBD step and then an additional step directly towards the minimizer of the hitting cost, depending on the convexity parameter m.
SCO - Fractional - \mathcal{O}(1 / \sqrt{m})-competitive (optimal) for m-strongly convex and differentiable hitting costs and switching costs modeled as the Bregman divergence where the potential function is \alpha-strongly convex, \beta-strongly smooth, differentiable, and its Fenchel conjugate is differentiable; \Omega(1/m)-competitive for m-quasiconvex hitting costs and \ell_2-squared switching costs
Uses a computationally simpler local view.
SSCO - Fractional - (1 + \Omega(\beta/e_0))-competitive, where e_0 is the idle cost and \beta is the scaling of the Manhattan distance; when uni-dimensional, the competitive ratio is 1 + \mathcal{O}(1/w)
SSCO - Fractional - (1 + \mathcal{O}(1/w))-competitive