### Create reference, different formats (copy and paste)

**Harvard**

Thrampoulidis, C., Panahi, A. and Hassibi, B. (2015) 'Asymptotically exact error analysis for the generalized ℓ₂²-LASSO', *2015 IEEE International Symposium on Information Theory (ISIT)*, pp. 2021-2025. doi: 10.1109/ISIT.2015.7282810.

**BibTeX**

```bibtex
@inproceedings{Thrampoulidis2015,
  author    = {Thrampoulidis, C. and Panahi, Ashkan and Hassibi, B.},
  title     = {Asymptotically exact error analysis for the generalized $\ell_2^2$-LASSO},
  booktitle = {2015 IEEE International Symposium on Information Theory (ISIT)},
  year      = {2015},
  pages     = {2021--2025},
  isbn      = {9781467377041},
  doi       = {10.1109/ISIT.2015.7282810},
  abstract  = {Given an unknown signal x₀ ∈ ℝⁿ and linear noisy measurements y = Ax₀ + σv ∈ ℝᵐ, the generalized ℓ₂²-LASSO solves x̂ = arg minₓ ½‖y − Ax‖₂² + σλf(x). Here, f is a convex regularization function (e.g. ℓ₁-norm, nuclear norm) aiming to promote the structure of x₀ (e.g. sparse, low-rank), and λ ≥ 0 is the regularizer parameter. A related optimization problem, though not as popular or well known, is often referred to as the generalized ℓ₂-LASSO and takes the form x̂ = arg minₓ ‖y − Ax‖₂ + λf(x); it has been analyzed by Oymak, Thrampoulidis and Hassibi. Oymak et al. further made conjectures about the performance of the generalized ℓ₂²-LASSO. This paper establishes these conjectures rigorously. We measure performance with the normalized squared error NSE(σ) := ‖x̂ − x₀‖₂²/σ². Assuming the entries of A are i.i.d. Gaussian N(0, 1/m) and those of v are i.i.d. N(0, 1), we precisely characterize the 'asymptotic NSE' aNSE := lim_{σ→0} NSE(σ) when the problem dimensions tend to infinity in a proportional manner. The role of λ, f and x₀ is explicitly captured in the derived expression by means of a single geometric quantity, the Gaussian distance to the subdifferential. We conjecture that aNSE = sup_{σ>0} NSE(σ). We include detailed discussions on the interpretation of our result, make connections to relevant literature and perform computational experiments that validate our theoretical findings.},
  keywords  = {Computation theory, Optimization, Computational experiment, Convex regularizations, Generalized equations, Geometric quantities, Noisy measurements, Optimization problems, Squared errors, Subdifferentials, Information theory},
}
```
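To use the BibTeX entry, save it in a `.bib` file and cite it by its key. A minimal LaTeX sketch (the file name `refs.bib` is illustrative):

```latex
% Minimal usage sketch: the entry above is assumed to be saved in refs.bib.
\documentclass{article}
\begin{document}
The generalized $\ell_2^2$-LASSO error analysis appears
in~\cite{Thrampoulidis2015}.
\bibliographystyle{plain}
\bibliography{refs}
\end{document}
```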

**RefWorks**

```
RT Conference Proceedings
SR Electronic
ID 237561
A1 Thrampoulidis, C.
A1 Panahi, Ashkan
A1 Hassibi, B.
T1 Asymptotically exact error analysis for the generalized ℓ₂²-LASSO
YR 2015
T2 2015 IEEE International Symposium on Information Theory (ISIT)
SN 9781467377041
SP 2021
OP 2025
AB Given an unknown signal x₀ ∈ ℝⁿ and linear noisy measurements y = Ax₀ + σv ∈ ℝᵐ, the generalized ℓ₂²-LASSO solves x̂ = arg minₓ ½‖y − Ax‖₂² + σλf(x). Here, f is a convex regularization function (e.g. ℓ₁-norm, nuclear norm) aiming to promote the structure of x₀ (e.g. sparse, low-rank), and λ ≥ 0 is the regularizer parameter. A related optimization problem, though not as popular or well known, is often referred to as the generalized ℓ₂-LASSO and takes the form x̂ = arg minₓ ‖y − Ax‖₂ + λf(x); it has been analyzed by Oymak, Thrampoulidis and Hassibi. Oymak et al. further made conjectures about the performance of the generalized ℓ₂²-LASSO. This paper establishes these conjectures rigorously. We measure performance with the normalized squared error NSE(σ) := ‖x̂ − x₀‖₂²/σ². Assuming the entries of A are i.i.d. Gaussian N(0, 1/m) and those of v are i.i.d. N(0, 1), we precisely characterize the 'asymptotic NSE' aNSE := lim_{σ→0} NSE(σ) when the problem dimensions tend to infinity in a proportional manner. The role of λ, f and x₀ is explicitly captured in the derived expression by means of a single geometric quantity, the Gaussian distance to the subdifferential. We conjecture that aNSE = sup_{σ>0} NSE(σ). We include detailed discussions on the interpretation of our result, make connections to relevant literature and perform computational experiments that validate our theoretical findings.
LA eng
DO 10.1109/ISIT.2015.7282810
LK http://dx.doi.org/10.1109/ISIT.2015.7282810
OL 30
```
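The RefWorks record above uses a simple tagged layout: each line starts with a two-letter field code followed by a value, and repeated codes (such as the `A1` author tag) accumulate. An illustrative sketch (not an official RefWorks tool) of reading such a record into a dictionary, with field values abridged from the record above:

```python
# Illustrative parser for the RefWorks tagged format: each line is a
# two-letter tag, a space, and a value; repeated tags (e.g. A1) accumulate.
record = """\
RT Conference Proceedings
A1 Thrampoulidis, C.
A1 Panahi, Ashkan
A1 Hassibi, B.
T1 Asymptotically exact error analysis for the generalized ℓ₂²-LASSO
YR 2015
SP 2021
OP 2025
DO 10.1109/ISIT.2015.7282810"""

def parse_refworks(text):
    """Map each two-letter tag to the list of values it carries."""
    fields = {}
    for line in text.splitlines():
        tag, _, value = line.partition(" ")
        fields.setdefault(tag, []).append(value.strip())
    return fields

rec = parse_refworks(record)
print(rec["A1"])  # → ['Thrampoulidis, C.', 'Panahi, Ashkan', 'Hassibi, B.']
print(rec["YR"][0])  # → 2015
```

This keeps every field as a list, so single-valued tags like `YR` are accessed with `[0]` while repeated tags come back in record order.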