PARALLELISATION OF DFA ALGORITHM FOR DEEP NEURAL NETWORK TRAINING
DOI:
https://doi.org/10.24867/21BE04Grahovac
Keywords:
DFA, deep neural networks, multiprocessors
Abstract
This paper presents a multiprocessor parallelization of the DFA (Direct Feedback Alignment) algorithm for deep neural network training. The parallelization is implemented for deep neural networks of arbitrary depth and layer sizes.
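DFA replaces the transposed weight matrices that backpropagation uses for the backward pass with fixed random feedback matrices, so each layer's error signal depends only on the network's output error. This makes the per-layer weight updates independent of one another, which is the property a multiprocessor implementation can exploit. The following is a minimal sketch of such a layer-parallel training step, assuming a NumPy implementation with a thread pool (NumPy's matrix products release the GIL; a process pool would be the closer multiprocessor analogue). All names, layer sizes, and activation choices are illustrative and not taken from the paper.

# Minimal sketch of layer-parallel DFA training (illustrative, not the
# authors' code): per-layer gradients are computed concurrently because
# DFA derives every layer's error signal directly from the output error.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)

def tanh_grad(a):
    return 1.0 - np.tanh(a) ** 2

def softmax(x):
    z = np.exp(x - x.max(axis=0, keepdims=True))
    return z / z.sum(axis=0, keepdims=True)

class DFANet:
    """Fully connected network of arbitrary depth trained with DFA."""
    def __init__(self, sizes):
        self.W = [rng.normal(0, np.sqrt(2 / n_in), (n_out, n_in))
                  for n_in, n_out in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros((n_out, 1)) for n_out in sizes[1:]]
        # Fixed random feedback matrices: project the output error
        # straight to every hidden layer (the core idea of DFA).
        n_out = sizes[-1]
        self.B = [rng.normal(0, 1 / np.sqrt(n_out), (n_hid, n_out))
                  for n_hid in sizes[1:-1]]

    def forward(self, x):
        pre, act = [], [x]
        for l, (W, b) in enumerate(zip(self.W, self.b)):
            a = W @ act[-1] + b
            pre.append(a)
            act.append(softmax(a) if l == len(self.W) - 1 else np.tanh(a))
        return pre, act

    def layer_grads(self, l, e, pre, act):
        # Each layer needs only the global error e and its own
        # activations, so all calls below are mutually independent.
        if l == len(self.W) - 1:
            delta = e                                   # output layer
        else:
            delta = (self.B[l] @ e) * tanh_grad(pre[l]) # hidden layer
        return delta @ act[l].T, delta.sum(axis=1, keepdims=True)

    def train_step(self, x, y, lr=0.05, pool=None):
        pre, act = self.forward(x)
        e = act[-1] - y          # softmax / cross-entropy output error
        layers = range(len(self.W))
        if pool is not None:     # parallel per-layer gradient computation
            grads = list(pool.map(
                lambda l: self.layer_grads(l, e, pre, act), layers))
        else:                    # sequential fallback
            grads = [self.layer_grads(l, e, pre, act) for l in layers]
        n = x.shape[1]
        for l, (dW, db) in zip(layers, grads):
            self.W[l] -= lr * dW / n
            self.b[l] -= lr * db / n

# Usage: a toy run on random data with MNIST-like shapes (not MNIST itself).
net = DFANet([784, 256, 256, 10])
x = rng.normal(size=(784, 64))
y = np.eye(10)[rng.integers(0, 10, 64)].T
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(10):
        net.train_step(x, y, pool=pool)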
Published
2023-01-07
Section
Electrotechnical and Computer Engineering