EEC256 – Stochastic Optimization in Dynamic Systems
4 units – Spring Quarter; alternate years
Lecture: 4 hours
Prerequisite: EEC 260 or equivalent
Grading: Letter; homework, classwork, and exams as determined by the instructor.
Markov decision processes (MDPs), dynamic programming, multi-armed and restless bandits, partially observable MDPs, optimal stopping, stochastic scheduling, sequential detection and quickest change detection, competitive MDPs and game theory; applications in dynamic systems such as queueing networks, communication networks, and socioeconomic systems.
Expanded Course Description:
- Review of Markov Theory
  - Classification of states: transience vs. recurrence
  - Stationary distribution and ergodicity
  - Applications in dynamic systems: stability analysis of queueing networks
- Fundamentals of Markov Decision Processes (MDP)
  - Finite-horizon MDP and dynamic programming
  - Random-horizon MDP: stochastic shortest path and optimal stopping
  - Infinite-horizon MDP under discounted and average reward criteria
- Special Classes of MDP and Sequential Stochastic Optimization
  - Multi-armed bandit and restless bandit problems
  - Partially observable MDP
  - Sequential detection and quickest change detection
  - Stochastic scheduling
- Introduction to Competitive MDP and Game Theory
  - Static games and finite dynamic games
  - Competitive MDP and stochastic games
Textbooks:
- Markov Decision Processes: Discrete Stochastic Dynamic Programming, by M.L. Puterman, Wiley, 2005.
- Introduction to Stochastic Dynamic Programming, by S.M. Ross, Academic Press, 1995.
- Markov Chains, by J.R. Norris, Cambridge University Press, 1997.
THIS COURSE DOES NOT DUPLICATE ANY EXISTING COURSE.
Last revised: Spring 2012