000 | 01766 a2200241 4500 | ||
---|---|---|---|
003 | OSt | ||
005 | 20250728124052.0 | ||
008 | 250724b |||||||| |||| 00| 0 eng d | ||
020 | _a9789811634192 | ||
082 | _a006.31 _bJ56d | ||
100 | _aJiang, Jiawei | ||
245 | _aDistributed machine learning and gradient optimization _cJiawei Jiang, Bin Cui and Ce Zhang | ||
260 | _aSingapore _bSpringer _c2022 | ||
300 | _axi, 169 p. | ||
440 | _aBig data management | ||
490 | _a / edited by Xiaofeng Meng | ||
520 | _aThis book presents the state of the art in distributed machine learning algorithms that are based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems. As such, implementing machine learning algorithms in a distributed environment has become a key technology, and recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that can speed up large-scale gradient optimization through both algorithmic optimizations and careful system implementations, the book introduces three essential techniques in designing a gradient optimization algorithm to train a distributed machine learning model: parallel strategy, data compression and synchronization protocol. Written in a tutorial style, it covers a range of topics, from fundamental knowledge to a number of carefully designed algorithms and systems of distributed machine learning. It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data and database management. | ||
650 | _aMachine learning | ||
650 | _aMachine learning algorithms | ||
700 | _aCui, Bin | ||
700 | _aZhang, Ce | ||
942 | _cBK | ||
999 | _c567560 _d567560 | ||