HSBNN: A High-Scalable Bayesian Neural Networks Accelerator Based on Field Programmable Gate Arrays (FPGA)

Yon D, Frith CD. Precision and the Bayesian brain. Curr Biol. 2021;31(17):R1026–32.

Opanasenko V, Fazilov SK, Radjabov S, Kakharov SS. Multilevel face recognition system. Cybern Syst Anal. 2024;60(1):146–51.

Tarnpradab S, Poonpinij P, Na Lumpoon N, Wattanapongsakorn N. Real-time masked face recognition and authentication with convolutional neural networks on the web application. Multimed Tools Appl. 2024:1–25.

Soumya A, Cenkeramaddi LR, Chalavadi V, et al. Multi-class object classification using deep learning models in automotive object detection scenarios. In: Sixteenth international conference on machine vision (ICMV 2023); 2024. vol. 13072, pp. 48–55. SPIE.

Wang B, Wang P, Zhang Y, Wang X, Zhou Z, Wang Y. Condition-guided urban traffic co-prediction with multiple sparse surveillance data. IEEE Trans Veh Technol; 2024.

Chen J, Wang H, He E. A transfer learning-based CNN deep learning model for unfavorable driving state recognition. Cogn Comput. 2024;16(1):121–30.

Ichikawa K, Kaneko K. Bayesian inference is facilitated by modular neural networks with different time scales. PLoS Comput Biol. 2024;20(3): e1011897.

Gou H, Zhang G, Medeiros EP, Jagatheesaperumal SK, de Albuquerque VHC. A cognitive medical decision support system for IoT-based human-computer interface in pervasive computing environment. Cogn Comput. 2024;16(5):2471–86.

Chen Y, Ding Y, Hu Z-Z, Ren Z. Geometrized task scheduling and adaptive resource allocation for large-scale edge computing in smart cities. IEEE Internet Things J; 2025.

Wang Z, Goudarzi M, Gong M, Buyya R. Deep reinforcement learning-based scheduling for optimizing system load and response time in edge and fog computing environments. Future Gen Comput Syst. 2024;152:55–69.

Alsharif MH, Jahid A, Kannadasan R, Singla MK, Gupta J, Nisar KS, Abdel-Aty A-H, Kim M-K. Survey of energy-efficient fog computing: techniques and recent advances. Energy Rep. 2025;13:1739–63.

Morán A, Canals V, Galan-Prado F, Frasser CF, Radhakrishnan D, Safavi S, Rosselló JL. Hardware-optimized reservoir computing system for edge intelligence applications. Cogn Comput; 2023. pp. 1–9.

Tasci M, Istanbullu A, Tumen V, Kosunalp S. FPGA-QNN: quantized neural network hardware acceleration on FPGAs. Appl Sci. 2025;15(2):688.

Wang D, Xu K, Jiang D. PipeCNN: an OpenCL-based open-source FPGA accelerator for convolution neural networks. In: 2017 International conference on field programmable technology (ICFPT); 2017. pp. 279–282. IEEE.

Liu Z, Liu Q, Yan S, Cheung RC. An efficient FPGA-based depthwise separable convolutional neural network accelerator with hardware pruning. ACM Trans Reconfigurable Technol Syst. 2024;17(1):1–20.

Liu F, Li H, Hu W, He Y. Review of neural network model acceleration techniques based on FPGA platforms. Neurocomputing. 2024:128511.

Asiatici M, Maiorano D, Ienne P. How many CPU cores is an FPGA worth? Lessons learned from accelerating string sorting on a CPU-FPGA system. J Signal Process Syst. 2021;93:1405–17.

López-Asunción S, Ituero P. Enabling efficient on-edge spiking neural network acceleration with highly flexible FPGA architectures. Electronics. 2024;13(6):1074.

Wan Y, Chen J, Yang X, Zhang H, Huang C, Xie X. DSA-CNN: an FPGA-integrated deformable systolic array for convolutional neural network acceleration. Appl Intell. 2025;55(1):1–18.

Bai L, Zhao Y, Huang X. A CNN accelerator on FPGA using depthwise separable convolution. IEEE Trans Circuits Syst II Express Briefs. 2018;65(10):1415–9.

Fan H, Ferianc M, Rodrigues M, Zhou H, Niu X, Luk W. High-performance FPGA-based accelerator for Bayesian neural networks. In: 2021 58th ACM/IEEE design automation conference (DAC); 2021. pp. 1063–1068. IEEE.

Ferianc M, Que Z, Fan H, Luk W, Rodrigues M. Optimizing Bayesian recurrent neural networks on an FPGA-based accelerator. In: 2021 International conference on field-programmable technology (ICFPT); 2021. pp. 1–10. IEEE.

Li H, Wan B, Fang Y, Li Q, Liu JK, An L. An FPGA implementation of Bayesian inference with spiking neural networks. Front Neurosci. 2024;17:1291051.

Cai R, Ren A, Liu N, Ding C, Wang L, Qian X, Pedram M, Wang Y. VIBNN: hardware acceleration of Bayesian neural networks. SIGPLAN Not. 2018;53:476–88.

Wu X, Wen C, Wang Z, Liu W, Yang J. A novel ensemble-learning-based convolution neural network for handling imbalanced data. Cogn Comput. 2024;16(1):177–90.

Rezaeezade A, Batina L. Regularizers to the rescue: fighting overfitting in deep learning-based side-channel analysis. J Cryptographic Eng. 2024;14(4):609–29.

Grabinski J, Gavrikov P, Keuper J, Keuper M. Robust models are less over-confident. Adv Neural Inf Process Syst. 2022;35:39059–75.

Le Coz A, Herbin S, Adjed F. Confidence calibration of classifiers with many classes. Adv Neural Inf Process Syst. 2024;37:77686–725.

Shridhar K, Laumann F, Liwicki M. A comprehensive guide to Bayesian convolutional neural network with variational inference; 2019. arXiv:1901.02731.

Pham N, Fomel S. Uncertainty estimation using Bayesian convolutional neural network for automatic channel detection. In: SEG international exposition and annual meeting, D031S068R001, SEG; 2020.

Graves A. Practical variational inference for neural networks. In: Advances in neural information processing systems; 2011. pp. 2348–2356.

Blei DM, Kucukelbir A, McAuliffe JD. Variational inference: a review for statisticians. J Am Stat Assoc. 2017;112(518):859–77.

Kendall A, Gal Y. What uncertainties do we need in Bayesian deep learning for computer vision? In: Advances in neural information processing systems; 2017. pp. 5574–5584.

Kwon Y, Won J-H, Kim BJ, Paik MC. Uncertainty quantification using Bayesian neural networks in classification: application to biomedical image segmentation. Comput Stat & Data Anal. 2020;142.

Stone JE, Gohara D, Shi G. OpenCL: a parallel programming standard for heterogeneous computing systems. Comput Sci & Eng. 2010;12(3):66.

Breyer M, Van Craen A, Pflüger D. A comparison of SYCL, OpenCL, CUDA, and OpenMP for massively parallel support vector machine classification on multi-vendor hardware. In: Proceedings of the 10th international workshop on OpenCL; 2022. pp. 1–12.

Rychlik Z. A central limit theorem for sums of a random number of independent random variables. Colloq Math. 1976;35(1):147–58.

Angus JE. The probability integral transform and related results. SIAM Rev. 1994;36(4):652–4.

Box GEP, Muller ME. A note on the generation of random normal deviates. Ann Math Stat. 1958;29(2):610–1.

Condo C, Gross W. Pseudo-random Gaussian distribution through optimised LFSR permutations. Electron Lett. 2015;51(25):2098–100.

Peebles PZ Jr. Central limit theorem. In: Probability, random variables and random signal principles. 4th ed.; 2001. p. 125.

Shapiro SS, Wilk MB. An analysis of variance test for normality (complete samples). Biometrika. 1965;52:591–611.

D’Agostino R, Pearson ES. Tests for departure from normality. Biometrika. 1973;60:613–22.

LeCun Y, Cortes C, Burges C. MNIST handwritten digit database; 2010.

Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, Corrado GS, Davis A, Dean J, Devin M, Ghemawat S, Goodfellow I, Harp A, Irving G, Isard M, Jia Y, Jozefowicz R, Kaiser L, Kudlur M, Levenberg J, Mané D, Monga R, Moore S, Murray D, Olah C, Schuster M, Shlens J, Steiner B, Sutskever I, Talwar K, Tucker P, Vanhoucke V, Vasudevan V, Viégas F, Vinyals O, Warden P, Wattenberg M, Wicke M, Yu Y, Zheng X. TensorFlow: large-scale machine learning on heterogeneous systems; 2015. Software available from tensorflow.org.

Amdahl GM. Validity of the single processor approach to achieving large scale computing capabilities. In: Proceedings of the April 18–20, 1967, spring joint computer conference; 1967. pp. 483–485.

Gustafson JL. Reevaluating Amdahl's law. Commun ACM. 1988;31(5):532–3.

Li J, Yang SX. A novel feature learning-based bio-inspired neural network for real-time collision-free rescue of multirobot systems. IEEE Trans Ind Electron; 2024.

Wang Z, Li S, Xuan J, Shi T. Biologically inspired compound defect detection using a spiking neural network with continuous time-frequency gradients. Adv Eng Inf. 2025;65: 103132.
