K. P. Unnikrishnan
Neural Computation (2014) 26 (7): 1263–1297.
Published: 01 July 2014
Abstract
Repeating patterns of precisely timed activity across a group of neurons (called frequent episodes) are indicative of networks in the underlying neural tissue. This letter develops statistical methods to determine functional connectivity among neurons based on nonoverlapping occurrences of episodes. We study the distribution of episode counts and develop a two-phase strategy for identifying functional connections. For the first phase, we develop statistical procedures that are used to screen all two-node episodes and identify possible functional connections (edges). For the second phase, we develop additional statistical procedures to prune the two-node episodes and remove false edges that can be attributed to chains or fan-out structures. The restriction to nonoverlapping occurrences makes the counting of all two-node episodes in phase 1 computationally efficient. The second (pruning) phase is critical since phase 1 can yield a large number of false connections. The scalability of the two-phase approach is examined through simulation. The method is then used to reconstruct the graph structure of observed neuronal networks, first from simulated data and then from recordings of cultured cortical neurons.
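The abstract does not spell out the counting procedure, but the key primitive in phase 1 is the count of nonoverlapping occurrences of a two-node episode (a spike on neuron A followed by a spike on neuron B within a delay window). Below is a minimal sketch of such a counter, assuming a fixed delay window and greedy left-to-right matching; both are illustrative choices, not necessarily the paper's exact definitions.

from bisect import bisect_left

def count_nonoverlapping_episodes(spikes_a, spikes_b, delay_min, delay_max):
    # Greedily count nonoverlapping occurrences of the two-node episode A -> B,
    # where a B spike falls within [t_a + delay_min, t_a + delay_max] of an A spike.
    # spikes_a and spikes_b are sorted lists of spike times (seconds).
    # Illustrative sketch; the window and greedy matching are assumptions.
    count = 0
    j = 0  # index of the first B spike not yet consumed by an earlier occurrence
    for t_a in spikes_a:
        j = bisect_left(spikes_b, t_a + delay_min, lo=j)  # earliest candidate B spike
        if j < len(spikes_b) and spikes_b[j] <= t_a + delay_max:
            count += 1  # occurrence found
            j += 1      # consume this B spike so counted occurrences never share a spike
    return count

# e.g. count_nonoverlapping_episodes([0.10, 0.50, 0.90], [0.12, 0.52, 0.53], 0.001, 0.05) -> 2

Because each B spike is consumed at most once, counted occurrences never share a spike, which keeps the per-pair counts cheap to compute for all two-node episodes in phase 1.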
Neural Computation (2010) 22 (4): 1025–1059.
Published: 01 April 2010
Abstract
We consider the problem of detecting statistically significant sequential patterns in multineuronal spike trains. These patterns are characterized by ordered sequences of spikes from different neurons with specific delays between spikes. We have previously proposed a data-mining scheme to efficiently discover such patterns that occur sufficiently often in the data. Here we propose a method to determine the statistical significance of such repeating patterns. The novelty of our approach is that we use a compound null hypothesis that includes not only models of independent neurons but also models in which neurons have weak dependencies. The strength of interaction among the neurons is represented in terms of certain pairwise conditional probabilities. We specify our null hypothesis by putting an upper bound on all such conditional probabilities. We construct a probabilistic model that captures the counting process and use this to derive a test of significance for rejecting such a compound null hypothesis. The structure of our null hypothesis also allows us to rank-order different significant patterns. We illustrate the effectiveness of our approach using spike trains generated with a simulator.
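The interaction strength in the compound null is expressed through pairwise conditional probabilities. One plausible estimator, assuming the probability in question is that of a B spike following an A spike within a fixed delay (the paper's exact definition may differ), is the fraction of A spikes that are so followed:

import numpy as np

def pairwise_conditional_prob(spikes_a, spikes_b, delay_max):
    # Fraction of A spikes followed by at least one B spike in (t_a, t_a + delay_max].
    # Illustrative estimator; the paper's exact definition may differ.
    spikes_b = np.asarray(spikes_b)
    followed = 0
    for t_a in spikes_a:
        i = np.searchsorted(spikes_b, t_a, side="right")  # first B spike strictly after t_a
        if i < len(spikes_b) and spikes_b[i] <= t_a + delay_max:
            followed += 1
    return followed / len(spikes_a) if len(spikes_a) else 0.0

# e.g. pairwise_conditional_prob([0.10, 0.50, 0.90], [0.12, 0.95], delay_max=0.05) -> 2/3

Roughly speaking, the compound null bounds every such probability by a small constant, and a pattern count is declared significant only when it would be improbable under every model satisfying that bound.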
Neural Computation (2002) 14 (11): 2729–2750.
Published: 01 November 2002
Abstract
Alopex is a correlation-based, gradient-free optimization technique useful in many learning problems. However, there are no analytical results on the asymptotic behavior of this algorithm. This article presents a new version of Alopex that can be analyzed using two-timescale stochastic approximation techniques. It is shown that the algorithm asymptotically behaves like a gradient-descent method, though it does not need (or estimate) any gradient information. It is also shown, through simulations, that the algorithm is quite effective.
Neural Computation (1994) 6 (3): 469–490.
Published: 01 May 1994
Abstract
We present a learning algorithm for neural networks, called Alopex. Instead of the error gradient, Alopex uses local correlations between changes in individual weights and changes in the global error measure. The algorithm does not make any assumptions about transfer functions of individual neurons and does not explicitly depend on the functional form of the error measure. Hence, it can be used in networks with arbitrary transfer functions and for minimizing a large class of error measures. The learning algorithm is the same for feedforward and recurrent networks. All the weights in a network are updated simultaneously, using only local computations. This allows complete parallelization of the algorithm. The algorithm is stochastic and it uses a “temperature” parameter in a manner similar to that in simulated annealing. A heuristic “annealing schedule” is presented that is effective in finding global minima of error surfaces. In this paper, we report extensive simulation studies illustrating these advantages and show that learning times are comparable to those for standard gradient descent methods. Feedforward networks trained with Alopex are used to solve the MONK's problems and symmetry problems. Recurrent networks trained with the same algorithm are used for solving temporal XOR problems. Scaling properties of the algorithm are demonstrated using encoder problems of different sizes, and advantages of appropriate error measures are illustrated using a variety of problems.
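The abstract describes the two essential ingredients of the update: the correlation between each weight's previous change and the change in the global error, and a temperature that controls stochasticity. Below is a minimal sketch in that spirit; the logistic acceptance probability, step size, and geometric cooling schedule are illustrative assumptions rather than the paper's exact rule.

import numpy as np

rng = np.random.default_rng(0)

def alopex_step(w, dw_prev, dE_prev, delta, temperature):
    # Correlation-based, gradient-free update in the spirit of Alopex.
    # Each weight moves by +/- delta; a weight whose previous change was
    # positively correlated with an increase in error is biased to reverse direction.
    # The logistic form of the probability is an illustrative assumption.
    corr = dw_prev * dE_prev                            # local correlation per weight
    p_neg = 1.0 / (1.0 + np.exp(-corr / temperature))   # probability of a -delta step
    step = np.where(rng.random(w.shape) < p_neg, -delta, delta)
    return w + step, step

# Toy example: minimize E(w) = ||w||^2 without any gradient information.
w = rng.normal(size=5)
dw, dE = np.zeros_like(w), 0.0
E_prev = float(np.sum(w ** 2))
for t in range(3000):
    T = max(0.005, 0.1 * 0.997 ** t)   # simple geometric "annealing schedule" (assumed)
    w, dw = alopex_step(w, dw, dE, delta=0.05, temperature=T)
    E = float(np.sum(w ** 2))
    dE, E_prev = E - E_prev, E
print(E_prev)   # typically ends up well below the starting error

Each weight needs only its own previous step and the scalar change in the error, which is what permits the fully parallel, local updates the abstract refers to.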
Neural Computation (1992) 4 (1): 108–119.
Published: 01 January 1992
Abstract
The capability of a small neural network to perform speaker-independent recognition of spoken digits in connected speech has been investigated. The network uses time delays to organize rapidly changing outputs of symbol detectors over the time scale of a word. The network is data driven and unclocked. To achieve useful accuracy in a speaker-independent setting, many new ideas and procedures were developed. These include improving the feature detectors, self-recognition of word ends, reduction in network size, and dividing speakers into natural classes. Quantitative experiments based on Texas Instruments (TI) digit databases are described.