This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems, making distributed implementations of machine learning algorithms a key technology; recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that speed up large-scale gradient optimization through both algorithmic improvements and careful system implementation, the book introduces three essential techniques for designing a gradient optimization algorithm to train a distributed machine learning model: the parallel strategy, data compression, and the synchronization protocol. Written in a tutorial style, it covers a range of topics, from fundamental background to a number of carefully designed distributed machine learning algorithms and systems. It will appeal to a broad audience in machine learning, artificial intelligence, big data, and database management.
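To make the three techniques concrete, here is a minimal sketch (not taken from the book) of synchronous data-parallel SGD on a toy one-parameter least-squares problem: each worker computes a gradient on its data shard (the parallel strategy), the gradients are averaged in a bulk-synchronous step (the synchronization protocol), and every worker applies the same update. The worker count, learning rate, and step count are illustrative assumptions.

```python
def worker_grad(w, shard):
    # Gradient of the mean squared loss 0.5*(w*x - y)^2 over one worker's shard.
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def train(shards, lr=0.01, steps=100):
    w = 0.0
    for _ in range(steps):
        # In a real system each gradient is computed on a separate worker.
        grads = [worker_grad(w, shard) for shard in shards]
        # Bulk-synchronous step: average all gradients before updating.
        g = sum(grads) / len(grads)
        w -= lr * g  # every worker applies the identical update
    return w

# Data drawn from y = 2x, partitioned round-robin across 4 workers.
data = [(x, 2.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]
print(round(train(shards), 3))  # converges toward 2.0
```

Gradient compression (the book's second technique) would slot in where the gradients are communicated, for example by sending only the largest-magnitude coordinates of each worker's gradient vector.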
Distributed Machine Learning and Gradient Optimization