Liangjie Sun
1-2 of 2
Journal Articles
Publisher: Journals Gateway
Neural Computation (2025) 37 (2): 235–259.
Published: 21 January 2025
On the Compressive Power of Autoencoders With Linear and ReLU Activation Functions
Abstract
In this article, we mainly study the depth and width of autoencoders consisting of rectified linear unit (ReLU) activation functions. An autoencoder is a layered neural network consisting of an encoder, which compresses an input vector to a lower-dimensional vector, and a decoder, which transforms the low-dimensional vector back to the original input vector exactly (or approximately). In a previous study, Melkman et al. (2023) studied the depth and width of autoencoders using linear threshold activation functions with binary input and output vectors. We show that similar theoretical results hold if autoencoders using ReLU activation functions with real input and output vectors are used. Furthermore, we show that it is possible to compress input vectors to one-dimensional vectors using ReLU activation functions, although the size of compressed vectors is trivially Ω(log n) for autoencoders with linear threshold activation functions, where n is the number of input vectors. We also study the cases of linear activation functions. The results suggest that the compressive power of autoencoders using linear activation functions is considerably limited compared with those using ReLU activation functions.
Includes: Supplementary data
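As a rough illustration of the encoder/decoder structure described in the abstract, the minimal Python sketch below builds a fully connected ReLU autoencoder with a one-dimensional code. The layer sizes, random weights, and single-layer encoder/decoder are illustrative assumptions only, not the construction analyzed in the article.

import numpy as np

# Minimal sketch of a ReLU autoencoder with a one-dimensional code
# (illustrative only; weights are arbitrary, so reconstruction is not exact).

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
d_in, d_code = 8, 1  # compress 8-dimensional real inputs to a 1-dimensional code

# Encoder and decoder parameters; the article asks what depth and width
# suffice for exact (or approximate) reconstruction of n given input vectors.
W_enc = rng.standard_normal((d_code, d_in))
b_enc = np.zeros(d_code)
W_dec = rng.standard_normal((d_in, d_code))
b_dec = np.zeros(d_in)

def encode(x):
    return relu(W_enc @ x + b_enc)   # low-dimensional code

def decode(z):
    return relu(W_dec @ z + b_dec)   # reconstruction of the input

x = rng.random(d_in)
print(encode(x).shape, decode(encode(x)).shape)  # (1,) (8,)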
Journal Articles
Publisher: Journals Gateway
Neural Computation (2016) 28 (4): 778–799.
Published: 01 April 2016
Sampled-Data State Feedback Stabilization of Boolean Control Networks
Abstract
In this letter, we investigate the sampled-data state feedback control (SDSFC) problem of Boolean control networks (BCNs). Some necessary and sufficient conditions are obtained for the global stabilization of BCNs by SDSFC. Different from conventional state feedback controls, new phenomena are observed in the study of SDSFC. Based on the controllability matrix, we derive some necessary and sufficient conditions under which the trajectories of BCNs can be stabilized to a fixed point by piecewise constant control (PCC). It is proved that the global stabilization of BCNs under SDSFC is equivalent to that by PCC. Moreover, algorithms are given to construct the sampled-data state feedback controllers. Numerical examples are given to illustrate the effectiveness of the obtained results.
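As a rough illustration of the sampled-data idea described in the abstract, the toy Python sketch below simulates a two-node Boolean control network in which the state feedback is recomputed only at sampling instants and held constant in between, so it acts as a piecewise constant control. The network dynamics, the feedback law, and the sampling period are hypothetical and are not taken from the letter.

# Toy sampled-data state feedback on a hypothetical two-node Boolean control
# network; the goal is to drive every trajectory to the fixed point (0, 0).

def step(x, u):
    # Hypothetical BCN update: x1(t+1) = x1 AND u, x2(t+1) = x1 OR (x2 AND u).
    x1, x2 = x
    return (x1 and u, x1 or (x2 and u))

def feedback(x):
    # Hypothetical state feedback law, evaluated only at sampling instants.
    return int(x == (0, 0))

def simulate(x0, steps=6, tau=2):
    x = x0
    traj = [x]
    for k in range(steps):
        if k % tau == 0:      # sampling instant: measure the state, update the control
            u = feedback(x)
        x = step(x, u)        # control is held constant between sampling instants
        traj.append(x)
    return traj

print(simulate((1, 1)))  # [(1, 1), (0, 1), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0)]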