Density results by deep neural network operators with integer weights

    Danilo Costarelli


In the present paper, a new family of multi-layer (deep) neural network (NN) operators is introduced. Density results are established in the space of continuous functions on [−1,1], with respect to the uniform norm. First, the case of two-layer operators is considered in detail; then the definition and the corresponding density results are extended to the general case of multi-layer operators. All the above definitions allow us to prove approximation results by a constructive approach, in the sense that, for any given f, all the weights, the thresholds, and the coefficients of the deep NN operators can be determined explicitly. Finally, examples of activation functions are provided, together with graphical examples. The main motivation of this work is to provide the multi-layer counterpart of the well-known (shallow) NN operators, in accordance with the way deep neural models are constructed in applications.

Keywords: deep neural networks, neural network operators, density results, ReLU activation function, RePU activation functions, sigmoidal functions
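The well-known shallow NN operators that the abstract refers to can be illustrated with a minimal sketch. The construction below follows the standard form from the NN-operator literature: a density function phi is built from a sigmoidal activation (here the logistic function, used purely as an example), the function f is sampled at the nodes k/n, and the operator is a normalized sum over those samples. Note that the inner weights n and thresholds k are integers, as in the title. The specific parameter choices are illustrative, not taken from the paper.

```python
import math

def sigma(x):
    # logistic sigmoidal activation
    return 1.0 / (1.0 + math.exp(-x))

def phi(x):
    # density function generated by sigma
    return 0.5 * (sigma(x + 1.0) - sigma(x - 1.0))

def nn_operator(f, n, x):
    # shallow NN operator F_n(f)(x) on [-1, 1]:
    # normalized sum of samples f(k/n) weighted by phi(n*x - k);
    # the inner weights n and thresholds k are integers
    num = sum(f(k / n) * phi(n * x - k) for k in range(-n, n + 1))
    den = sum(phi(n * x - k) for k in range(-n, n + 1))
    return num / den

# uniform error on a grid of [-1, 1] shrinks as n grows,
# reflecting the density result for continuous functions
f = lambda t: t * t
err = max(abs(nn_operator(f, 200, x / 100) - f(x / 100))
          for x in range(-100, 101))
```

For a Lipschitz target such as f(t) = t², the sampled uniform error above is already small at n = 200; the density results guarantee it tends to 0 as n → ∞ for every continuous f on [−1,1].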

How to Cite
Costarelli, D. (2022). Density results by deep neural network operators with integer weights. Mathematical Modelling and Analysis, 27(4), 547–560.
Published in Issue
Nov 10, 2022

This work is licensed under a Creative Commons Attribution 4.0 International License.

