Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning

Authors: Abhijit Gosavi
Publisher: Springer, Berlin
Year:
Pages: 554
Version: paperback
Language: English
ISBN: 9781441953544
Categories: Cybernetics & systems theory

Table of contents

List of Figures
List of Tables
Acknowledgements
Preface
1. Background
1.1. Why this book was written
1.2. Simulation-based optimization and modern times
1.3. How this book is organized
2. Notation
2.1. Chapter overview
2.2. Some basic conventions
2.3. Vector notation
2.4. Notation for matrices
2.5. Notation for n-tuples
2.6. Notation for sets
2.7. Notation for sequences
2.8. Notation for transformations
2.9. Max, min and arg max
2.10. Acronyms and abbreviations
3. Probability theory: a refresher
3.1. Overview of this chapter
3.2. Laws of probability
3.3. Probability distributions
3.4. Expected value of a random variable
3.5. Standard deviation of a random variable
3.6. Limit theorems
3.7. Review questions
4. Basic concepts underlying simulation
4.1. Chapter overview
4.2. Introduction
4.3. Models
4.4. Simulation modeling of random systems
4.5. Concluding remarks
4.6. Historical remarks
4.7. Review questions
5. Simulation optimization: an overview
5.1. Chapter overview
5.2. Stochastic parametric optimization
5.3. Stochastic control optimization
5.4. Historical remarks
5.5. Review questions
6. Response surfaces and neural nets
6.1. Chapter overview
6.2. RSM: an overview
6.3. RSM: details
6.4. Neuro-response surface methods
6.5. Concluding remarks
6.6. Bibliographic remarks
6.7. Review questions
7. Parametric optimization
7.1. Chapter overview
7.2. Continuous optimization
7.3. Discrete optimization
7.4. Hybrid solution spaces
7.5. Concluding remarks
7.6. Bibliographic remarks
7.7. Review questions
8. Dynamic programming
8.1. Chapter overview
8.2. Stochastic processes
8.3. Markov processes, Markov chains and semi-Markov processes
8.4. Markov decision problems
8.5. How to solve an MDP using exhaustive enumeration
8.6. Dynamic programming for average reward
8.7. Dynamic programming and discounted reward
8.8. The Bellman equation: an intuitive perspective
8.9. Semi-Markov decision problems
8.10. Modified policy iteration
8.11. Miscellaneous topics related to MDPs and SMDPs
8.12. Conclusions
8.13. Bibliographic remarks
8.14. Review questions
9. Reinforcement learning
9.1. Chapter overview
9.2. The need for reinforcement learning
9.3. Generating the TPM through straightforward counting
9.4. Reinforcement learning: fundamentals
9.5. Discounted reward reinforcement learning
9.6. Average reward reinforcement learning
9.7. Semi-Markov decision problems and RL
9.8. RL algorithms and their DP counterparts
9.9. Act
