Subutai Ahmad
Journal Articles
Neural Computation (2016) 28 (11): 2474–2504.
Published: 01 November 2016
Figures: 11
Abstract
The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory has recently been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show that the model can continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including statistical methods (autoregressive integrated moving average), feedforward neural networks (time delay neural network and online sequential extreme learning machine), and recurrent neural networks (long short-term memory and echo state networks), on sequence prediction problems with both artificial and real-world data. The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.
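The abstract describes the mechanism only at a high level. The sketch below is an illustrative toy, not the paper's HTM implementation (the authors' reference implementation lives in Numenta's NuPIC project). It mimics just two properties named above: a Hebbian-like rule that strengthens transitions between sparse codes, and a prediction step that keeps multiple candidate branches alive as a union of predicted bits. All names, parameters, and thresholds here are assumptions for illustration; in particular, the toy is first-order and does not model HTM's per-cell context, so it cannot learn high-order sequences.

```python
import numpy as np

rng = np.random.default_rng(0)
N, ACTIVE = 256, 8   # code width and number of active bits (assumed values)

def sparse_code(symbol, _codes={}):
    """Assign each symbol a fixed random sparse code (a set of active bits)."""
    if symbol not in _codes:
        _codes[symbol] = frozenset(map(int, rng.choice(N, ACTIVE, replace=False)))
    return _codes[symbol]

class ToySequenceMemory:
    """First-order toy: learns bit-to-bit transitions with a Hebbian-like
    rule and predicts the union of plausible next codes. Unlike HTM, it has
    no per-cell context, so it cannot disambiguate high-order sequences."""

    def __init__(self):
        self.perm = {}   # presynaptic bit -> {postsynaptic bit: permanence}

    def learn(self, prev_code, curr_code, inc=0.1):
        # Hebbian-like: strengthen links from previously active bits
        # to currently active bits.
        for i in prev_code:
            row = self.perm.setdefault(i, {})
            for j in curr_code:
                row[j] = min(1.0, row.get(j, 0.0) + inc)

    def predict(self, curr_code, perm_thresh=0.3, vote_thresh=ACTIVE // 2):
        # A bit is predicted if enough currently active bits have strong
        # links to it; a branching sequence yields a union of predictions.
        votes = {}
        for i in curr_code:
            for j, p in self.perm.get(i, {}).items():
                if p >= perm_thresh:
                    votes[j] = votes.get(j, 0) + 1
        return {j for j, v in votes.items() if v >= vote_thresh}

mem = ToySequenceMemory()
stream = list("ABCDXBCY") * 20   # branching stream: "C" is followed by D or Y
prev = None
for sym in stream:
    code = sparse_code(sym)
    if prev is not None:
        mem.learn(prev, code)
    prev = code

predicted = mem.predict(sparse_code("C"))
for sym in "DXY":
    overlap = len(predicted & sparse_code(sym))
    print(f"{sym}: {overlap} of {ACTIVE} bits predicted after 'C'")
```

Run as-is, the predicted union after "C" contains essentially all of D's and Y's bits and almost none of X's: the toy keeps both branches alive until further input disambiguates them, which is the behavior the abstract attributes to HTM's sparse temporal codes.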
Journal Articles
Neural Computation (1989) 1 (3): 382–391.
Published: 01 September 1989
Abstract
We calculate analytically the rate of convergence at long times in the backpropagation learning algorithm for networks with and without hidden units. For networks without hidden units using the standard quadratic error function and a sigmoidal transfer function, we find that the error decreases as 1/t for large t, and the output states approach their target values as 1/√t. It is possible to obtain a different convergence rate for certain error and transfer functions, but the convergence can never be faster than 1/t. These results are unaffected by a momentum term in the learning algorithm, but convergence can be substantially improved by an adaptive learning rate scheme. For networks with hidden units, we generally expect the same rate of convergence as in the single-layer case; however, under certain circumstances one can obtain a polynomial speed-up for non-sigmoidal units, or a logarithmic speed-up for sigmoidal units. Our analytic results are confirmed by empirical measurements of the convergence rate in numerical simulations.
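The single-layer result quoted above is easy to check numerically. Below is a minimal sketch, assuming the simplest setting consistent with the abstract: one sigmoid unit with one weight, quadratic error, a target at the sigmoid's asymptote, and plain gradient descent. It is an illustration under those assumptions, not the paper's simulation protocol. If the error really decays as 1/t and the output gap as 1/√t, then the products t·E and √t·(target − output) should settle to constants as t grows.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = 0.0        # single weight; input clamped to 1.0
target = 1.0   # target at the sigmoid's asymptote, so it is never reached
lr = 0.5       # assumed learning rate

t = 0
for horizon in [10**k for k in range(2, 7)]:
    while t < horizon:
        y = sigmoid(w)
        # Gradient descent on E = (target - y)^2 / 2:
        # dE/dw = -(target - y) * y * (1 - y)
        w += lr * (target - y) * y * (1.0 - y)
        t += 1
    gap = target - sigmoid(w)
    E = 0.5 * gap**2
    # If E ~ c/t and gap ~ c'/sqrt(t), both printed products flatten out.
    print(f"t={horizon:>8}  E={E:.3e}  t*E={horizon * E:.3f}  "
          f"sqrt(t)*gap={np.sqrt(horizon) * gap:.3f}")
```

In this toy setting both products do flatten (for large w the update behaves like dw/dt ≈ lr·e^(−2w), which integrates to a 1/t error decay), matching the asymptotics the abstract derives for the single-layer case.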