### Create a reference in different formats (copy and paste)

**Harvard**

Zou, Z., Gidmark, A., Charalambous, T. and Johansson, M. (2016) 'Optimal Radio Frequency Energy Harvesting With Limited Energy Arrival Knowledge', *IEEE Journal on Selected Areas in Communications*, 34(12), pp. 3528–3539. doi: 10.1109/JSAC.2016.2600364.

**BibTeX**

@article{Zou2016,

author={Zou, Z. H. and Gidmark, A. and Charalambous, Themistoklis and Johansson, M.},

title={Optimal Radio Frequency Energy Harvesting With Limited Energy Arrival Knowledge},

journal={IEEE Journal on Selected Areas in Communications},

issn={0733-8716},

volume={34},

number={12},

pages={3528--3539},

abstract={We develop optimal sleeping and harvesting policies for radio frequency (RF) energy harvesting devices, formalizing the following intuition: when the ambient RF energy is low, devices consume more energy being awake than what can be harvested and should enter sleep mode; when the ambient RF energy is high, on the other hand, it is essential to wake up and harvest. Toward this end, we consider a scenario with intermittent energy arrivals described by a two-state Gilbert-Elliott Markov chain model. The challenge is that the state of the Markov chain can only be observed during the harvesting action, and not while in sleep mode. Two scenarios are studied under this model. In the first scenario, we assume that the transition probabilities of the Markov chain are known and formulate the problem as a partially observable Markov decision process (POMDP). We prove that the optimal policy has a threshold structure and derive the optimal decision parameters. In the practical scenario where the ratio between the reward and the penalty is neither too large nor too small, the POMDP framework and the threshold-based optimal policies are very useful for finding non-trivial optimal sleeping times. In the second scenario, we assume that the Markov chain parameters are unknown and formulate the problem as a Bayesian adaptive POMDP and propose a heuristic posterior sampling algorithm to reduce the computational complexity. The performance of our approaches is demonstrated via numerical examples.},

year={2016},

keywords={Energy harvesting, ambient radio frequency energy, partially observable Markov decision process, MARKOV DECISION-PROCESSES, MANAGEMENT POLICIES, TRANSMISSION, NETWORKS, CHANNELS, DEVICES, NODES},

doi={10.1109/JSAC.2016.2600364},

}
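The abstract describes a two-state Gilbert-Elliott Markov chain driving energy arrivals, where the state is only observable while harvesting and the optimal policy has a threshold structure. As a rough illustration only of that setting (not the authors' actual algorithm; every numeric value below, the fixed sleep duration, and the simple "sleep after a bad observation" rule are invented assumptions), a simulation might look like:

```python
import random

# Illustrative sketch of the setting in the abstract: a two-state
# Gilbert-Elliott chain drives ambient RF energy, and a device chooses
# between harvesting (which observes the state but costs energy to stay
# awake) and sleeping (no cost, but the chain evolves unobserved).
# All parameters are made up for illustration.

P_GOOD_TO_BAD = 0.1   # assumed transition probabilities
P_BAD_TO_GOOD = 0.2
HARVEST_GAIN = 1.0    # assumed energy harvested per slot in the GOOD state
AWAKE_COST = 0.3      # assumed energy spent per slot while awake
SLEEP_SLOTS = 3       # assumed fixed sleep duration after a BAD observation

def step(state, rng):
    """Advance the Gilbert-Elliott chain by one slot."""
    if state == "GOOD":
        return "BAD" if rng.random() < P_GOOD_TO_BAD else "GOOD"
    return "GOOD" if rng.random() < P_BAD_TO_GOOD else "BAD"

def simulate(n_slots, seed=0):
    """Net energy after n_slots under a naive sleep-after-bad policy."""
    rng = random.Random(seed)
    state = "GOOD"
    energy = 0.0
    t = 0
    while t < n_slots:
        # Harvest: pay the awake cost and observe the current state.
        energy -= AWAKE_COST
        if state == "GOOD":
            energy += HARVEST_GAIN
            elapsed = 1
        else:
            # A BAD observation triggers sleep; the chain keeps evolving
            # but is not observed during the sleep slots.
            elapsed = 1 + SLEEP_SLOTS
        for _ in range(elapsed):
            state = step(state, rng)
        t += elapsed
    return energy

print(round(simulate(10_000), 1))
```

The interesting trade-off the paper formalizes as a POMDP is hidden in `SLEEP_SLOTS`: sleeping too briefly wastes wake-up energy probing a channel that is likely still bad, while sleeping too long misses good slots.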

**RefWorks**

RT Journal Article

SR Electronic

ID 248506

A1 Zou, Z. H.

A1 Gidmark, A.

A1 Charalambous, Themistoklis

A1 Johansson, M.

T1 Optimal Radio Frequency Energy Harvesting With Limited Energy Arrival Knowledge

YR 2016

JF IEEE Journal on Selected Areas in Communications

SN 0733-8716

VO 34

IS 12

SP 3528

OP 3539

AB We develop optimal sleeping and harvesting policies for radio frequency (RF) energy harvesting devices, formalizing the following intuition: when the ambient RF energy is low, devices consume more energy being awake than what can be harvested and should enter sleep mode; when the ambient RF energy is high, on the other hand, it is essential to wake up and harvest. Toward this end, we consider a scenario with intermittent energy arrivals described by a two-state Gilbert-Elliott Markov chain model. The challenge is that the state of the Markov chain can only be observed during the harvesting action, and not while in sleep mode. Two scenarios are studied under this model. In the first scenario, we assume that the transition probabilities of the Markov chain are known and formulate the problem as a partially observable Markov decision process (POMDP). We prove that the optimal policy has a threshold structure and derive the optimal decision parameters. In the practical scenario where the ratio between the reward and the penalty is neither too large nor too small, the POMDP framework and the threshold-based optimal policies are very useful for finding non-trivial optimal sleeping times. In the second scenario, we assume that the Markov chain parameters are unknown and formulate the problem as a Bayesian adaptive POMDP and propose a heuristic posterior sampling algorithm to reduce the computational complexity. The performance of our approaches is demonstrated via numerical examples.

LA eng

DO 10.1109/JSAC.2016.2600364

LK http://dx.doi.org/10.1109/JSAC.2016.2600364

OL 30