Distributed Machine Learning and Gradient Optimization

Authors:
Publisher: Springer, Berlin
Year:
Pages: 169
Format: hardback
Language: English
ISBN: 9789811634192
Categories: Machine learning

Book description

This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems, so implementing machine learning algorithms in a distributed environment has become a key technology, and recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that speed up large-scale gradient optimization through both algorithmic improvements and careful system implementation, the book introduces three techniques essential to designing a gradient optimization algorithm for training a distributed machine learning model: parallel strategy, gradient compression, and synchronization protocol.

Written in a tutorial style, it covers a range of topics, from fundamentals to a number of carefully designed algorithms and systems for distributed machine learning. It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data, and database management.
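
To make the three techniques concrete, the following is a minimal, single-process sketch in Python. It is not taken from the book; the toy least-squares task, the function names, and the choice of top-k sparsification as the compression scheme are illustrative assumptions. It shows data parallelism over worker shards (parallel strategy), top-k sparsification of each local gradient (gradient compression), and a bulk-synchronous average-then-update step (synchronization protocol).

# Minimal sketch (assumed, not the book's code): data-parallel SGD with
# top-k gradient compression and a bulk-synchronous update, on a toy
# least-squares problem.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset y = X @ w_true + noise, sharded across workers (data parallelism).
n_samples, n_features, n_workers = 2048, 16, 4
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.01 * rng.normal(size=n_samples)
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

def local_gradient(w, X_s, y_s):
    # Least-squares gradient computed on one worker's shard.
    return X_s.T @ (X_s @ w - y_s) / len(y_s)

def top_k(g, k):
    # Lossy gradient compression: keep only the k largest-magnitude entries.
    sparse = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    sparse[idx] = g[idx]
    return sparse

w, lr, k = np.zeros(n_features), 0.1, 4
for step in range(300):
    # Each worker computes and compresses its local gradient
    # (in a real system these run in parallel on separate machines).
    grads = [top_k(local_gradient(w, X_s, y_s), k) for X_s, y_s in shards]
    # Bulk-synchronous protocol: wait for all workers, average, update the shared model.
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))

In an actual distributed system the workers run on separate machines and exchange gradients over a network (for example via a parameter server or an all-reduce), and the asynchronous and stale synchronous protocols covered in Chapter 5 relax the wait-for-all barrier used in this sketch.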



Table of contents

Chapter 1: Introduction
        1.1. Background
        1.2. Distributed machine learning
        1.3. Gradient optimization
        1.4. Challenges

Chapter 2: The preliminaries
        2.1. Overview
        2.2. Parallel strategy
        2.3. Gradient compression
        2.4. Synchronization protocol

Chapter 3: Parallel strategy
        3.1. Background and problem
        3.2. Data parallelism
        3.3. Model parallelism
        3.4. Hybrid parallelism
        3.5. Benchmark
        3.6. Summary

Chapter 4: Gradient compression
        4.1. Background and problem
        4.2. Lossless gradient compression
        4.3. Lossy gradient compression
        4.4. Sparse gradient compression
        4.5. Benchmark
        4.6. Summary

Chapter 5: Synchronization protocol
        5.1. Background and problem
        5.2. Bulk synchronous protocol
        5.3. Asynchronous protocol
        5.4. Stale synchronous protocol
        5.5. Benchmark
        5.6. Summary

Chapter 6: Conclusion
        6.1. Summary of the book
        6.2. Future work
