### Create reference, different formats (copy and paste)

**Harvard**

Larsson, T., Patriksson, M. and Strömberg, A. (1996) 'Ergodic results and bounds on the optimal value in subgradient optimization', in *Operations Research Proceedings 1995*, pp. 30–35.

**BibTeX**

@conference{Larsson1996,
  author    = {Larsson, Torbjörn and Patriksson, Michael and Strömberg, Ann-Brith},
  title     = {Ergodic results and bounds on the optimal value in subgradient optimization},
  booktitle = {Operations Research Proceedings 1995},
  isbn      = {978-3540608066},
  pages     = {30--35},
  year      = {1996},
  keywords  = {Nonsmooth minimization, Conditional subgradient optimization, Ergodic sequences, Lagrange multipliers},
  abstract  = {Subgradient methods are popular tools for nonsmooth, convex minimization, especially in the context of Lagrangean relaxation; their simplicity has been a main contribution to their success. As a consequence of the nonsmoothness, it is not straightforward to monitor the progress of a subgradient method in terms of the approximate fulfilment of optimality conditions, since the subgradients used in the method will, in general, not accumulate to subgradients that verify optimality of a solution obtained in the limit. Further, certain supplementary information, such as convergent estimates of Lagrange multipliers, is not directly available in subgradient schemes. As a means for overcoming these weaknesses of subgradient optimization methods, we introduce the computation of an ergodic (averaged) sequence of subgradients. Specifically, we consider a nonsmooth, convex program solved by a conditional subgradient optimization scheme (of which the traditional subgradient optimization method is a special case) with divergent series step lengths, which generates a sequence of iterates that converges to an optimal solution. We show that the elements of the ergodic sequence of subgradients in the limit fulfill the optimality conditions at this optimal solution. Further, we use the convergence properties of the ergodic sequence of subgradients to establish convergence of an ergodic sequence of Lagrange multipliers. Finally, some potential applications of these ergodic results are briefly discussed.},
}

**RefWorks**

RT Conference Proceedings
SR Print
ID 141740
A1 Larsson, Torbjörn
A1 Patriksson, Michael
A1 Strömberg, Ann-Brith
T1 Ergodic results and bounds on the optimal value in subgradient optimization
YR 1996
T2 Operations Research Proceedings 1995
SN 978-3540608066
SP 30
OP 35
AB Subgradient methods are popular tools for nonsmooth, convex minimization, especially in the context of Lagrangean relaxation; their simplicity has been a main contribution to their success. As a consequence of the nonsmoothness, it is not straightforward to monitor the progress of a subgradient method in terms of the approximate fulfilment of optimality conditions, since the subgradients used in the method will, in general, not accumulate to subgradients that verify optimality of a solution obtained in the limit. Further, certain supplementary information, such as convergent estimates of Lagrange multipliers, is not directly available in subgradient schemes. As a means for overcoming these weaknesses of subgradient optimization methods, we introduce the computation of an ergodic (averaged) sequence of subgradients. Specifically, we consider a nonsmooth, convex program solved by a conditional subgradient optimization scheme (of which the traditional subgradient optimization method is a special case) with divergent series step lengths, which generates a sequence of iterates that converges to an optimal solution. We show that the elements of the ergodic sequence of subgradients in the limit fulfill the optimality conditions at this optimal solution. Further, we use the convergence properties of the ergodic sequence of subgradients to establish convergence of an ergodic sequence of Lagrange multipliers. Finally, some potential applications of these ergodic results are briefly discussed.
LA eng
OL 30
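
The idea in the abstract, averaging the subgradients used along the iterates so that the average, unlike the individual subgradients, approaches a certificate of optimality, can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: the test function f(x) = |x − 2| + |x + 1| and the step rule 1/(k + 1) are my own choices, and the one-dimensional setting omits the conditional (projected) variant the paper analyzes.

```python
def f(x):
    """Nonsmooth convex test function; its optimal set is the interval [-1, 2]."""
    return abs(x - 2) + abs(x + 1)

def subgrad(x):
    """One subgradient of f at x: sign(x - 2) + sign(x + 1), with sign(0) taken as 0."""
    g = 0.0
    g += 1.0 if x > 2 else (-1.0 if x < 2 else 0.0)
    g += 1.0 if x > -1 else (-1.0 if x < -1 else 0.0)
    return g

def ergodic_subgradient(x0, iters=20000):
    """Subgradient iteration with divergent-series step lengths s_k = 1/(k+1)
    (sum s_k diverges, s_k -> 0), returning the final iterate together with
    the step-length-weighted (ergodic) average of the subgradients used."""
    x = x0
    weighted_sum = 0.0  # running sum of s_k * g_k
    weight = 0.0        # running sum of s_k
    for k in range(iters):
        g = subgrad(x)
        s = 1.0 / (k + 1)
        weighted_sum += s * g
        weight += s
        x -= s * g
    return x, weighted_sum / weight

x, g_bar = ergodic_subgradient(5.0)
```

Once the iterate enters the optimal set [-1, 2], every subgradient it selects is 0, so the ergodic average `g_bar` drifts toward 0, i.e. toward an element of the subdifferential that verifies optimality, which is the behavior the paper establishes in general.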