Foundations of Deep Learning

Authors
Publisher Springer, Berlin
Publication date
Number of pages 280
Format hardcover
Language English
ISBN 9789811682322
Categories Artificial intelligence

Book description

Deep learning has significantly reshaped a variety of technologies, such as image processing, natural language processing, and audio processing. The excellent generalizability of deep learning hangs like a "cloud" over conventional complexity-based learning theory: the over-parameterization of deep learning makes almost all existing tools vacuous. This gap between theory and practice considerably undermines confidence in deploying deep learning to security-critical areas, including autonomous vehicles and medical diagnosis, where small algorithmic mistakes can lead to fatal disasters. This book seeks to explain this excellent generalizability, covering generalization analysis via size-independent complexity measures, the role of optimization in understanding generalizability, and the relationship between generalizability and ethical/security issues.

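As a rough illustration of why conventional tools become vacuous (a standard textbook bound, not a formula taken from this book), consider the classical Rademacher-complexity generalization bound for a loss bounded in [0, 1]: with probability at least 1 - \delta over an i.i.d. sample of size n,

  R(h) \le \hat{R}_n(h) + 2\,\mathfrak{R}_n(\mathcal{H}) + \sqrt{\frac{\log(1/\delta)}{2n}} \qquad \text{for all } h \in \mathcal{H},

where R(h) is the expected risk, \hat{R}_n(h) the empirical risk, and \mathfrak{R}_n(\mathcal{H}) the Rademacher complexity of the hypothesis class \mathcal{H}. For over-parameterized networks, standard estimates of \mathfrak{R}_n(\mathcal{H}) grow with the number of parameters and can exceed 1, so the bound no longer says anything useful about the generalization gap.
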
Efforts to understand this excellent generalizability follow two major paths: (1) developing size-independent complexity measures, which evaluate the "effective" complexity of the hypotheses that can actually be learned, rather than the complexity of the whole hypothesis space; and (2) modelling the hypotheses learned by stochastic gradient methods, the dominant optimizers in deep learning, via stochastic differential equations and the geometry of the associated loss functions. Related works discover that over-parameterization surprisingly brings many good properties to the loss functions. Rising concerns about deep learning centre on ethical and security issues, including privacy preservation and adversarial robustness. Related works also reveal an interplay between these issues and generalizability: good generalizability usually implies a good privacy-preserving ability, while more robust algorithms might have worse generalizability.

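The second path above is often formalized by treating SGD as a discretization of a continuous-time stochastic process. A commonly used sketch (the exact noise scaling varies across works in this literature; here \eta is the step size, b the mini-batch size, \hat{\nabla}L_b a mini-batch gradient estimate, \Sigma(\theta) the per-sample gradient-noise covariance, and W_t a Wiener process) approximates the discrete update

  \theta_{k+1} = \theta_k - \eta\,\hat{\nabla}L_b(\theta_k)

by the stochastic differential equation

  d\theta_t = -\nabla L(\theta_t)\,dt + \sqrt{\eta/b}\;\Sigma(\theta_t)^{1/2}\,dW_t.

Analysing the behaviour of this process near minima is one way the geometry of the loss surface (e.g., flatness versus sharpness) enters generalization arguments.
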
We expect readers to gain a big picture of the current knowledge in deep learning theory, to understand how this theory can guide the design of new algorithms, and to identify future research directions. Readers need knowledge of calculus, linear algebra, probability, statistics, and statistical learning theory.

Table of contents

1         Introduction

1.1  Definition and terminology

1.2  Advances of deep learning

1.3  Applications of deep learning

1.4  The status quo of deep learning theory

 

Part I                     Background

 

2         Conventional statistical learning theory

2.1   PAC learnability

2.2   VC dimension

2.3   Rademacher complexity

2.4   Other tools

 

3         Difficulty of conventional statistical learning theory

3.1    Sample size for guaranteed generalizability

3.2    Sample size vs. model size in deep learning

 

Part II                   Developing deep learning theory

 

4         Generalization bounds on hypothesis complexity

4.1     Generalization bounds on size-dependent complexity

4.2     Generalization bounds on depth-independent complexity

4.3     Generalization bounds on size-independent complexity

 

5         Interplay of optimization, Bayesian inference, and generalization

5.1     Stochastic gradient descent

5.2     Model SGD as dynamics

5.3     Historic cycle: Solving inference by optimization and model SGD as inference

5.4     Advances in optimization and generalization of SGD

 

6         Geometrical properties of loss surface

6.1     Does nonlinearity in activations matter?

6.2     Geometric structure of loss surface

6.3     Flatness and sharpness of local minima

 

7         The role of over-parametrization

7.1     Universal approximation theorem

7.2     Smooth the loss surface

7.3     Flatten the local minima

7.4     Double descent: Does the bias-variance trade-off really exist?

 

Part III                 Rising concerns in ethics and security

 

8         Privacy preservation

8.1  The current status of privacy preservation in deep learning

8.2  The relationship between privacy preservation and generalization

 

9         Fairness protection

9.1  The current status of fairness protection in deep learning

9.2  The relationship between fairness protection and generalization

 

10      Algorithmic robustness

10.1  The current status of algorithmic robustness in deep learning

10.2  The relationship between algorithmic robustness and generalization

 

Appendix A         Background knowledge

A.1  Calculus and linear algebra

A.2  Probability and statistics

A.3  Stochastic differential equation
