Marcello Sanguineti
Neural Computation (2017) 29 (8): 2203–2291.
Published: 01 August 2017
Abstract
Optimal control theory and machine learning techniques are combined to formulate, and solve in closed form, an optimal control problem that models online learning from supervised examples with regularization of the updates. The connections with the classical linear quadratic Gaussian (LQG) optimal control problem, of which the proposed learning paradigm is a nontrivial variation as it involves random matrices, are investigated. The obtained optimal solutions are compared with the Kalman filter estimate of the parameter vector to be learned. It is shown that, thanks to the presence of the regularization term, the proposed algorithm is less sensitive to outliers than the Kalman estimate, thus providing estimates that are smoother over time. The basic formulation of the proposed online learning framework refers to a discrete-time setting with a finite learning horizon and a linear model. Various extensions are investigated, including the infinite learning horizon and, via the so-called kernel trick, the case of nonlinear models.
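As a rough illustration of the comparison described in the abstract, the sketch below (plain Python with NumPy; the function names, the damping weight lam, and the scalar-observation setting are assumptions for illustration, not the paper's notation or its actual optimal-control solution) contrasts a Kalman-filter update for a constant parameter vector with a closed-form update whose size is regularized.

import numpy as np

def kalman_step(w, P, x, y, r=1.0):
    # Kalman update for a constant parameter vector w observed through
    # the scalar measurement y = x @ w + noise with variance r.
    Px = P @ x
    k = Px / (x @ Px + r)           # Kalman gain
    w_new = w + k * (y - x @ w)     # innovation-driven correction
    P_new = P - np.outer(k, Px)     # posterior covariance
    return w_new, P_new

def regularized_step(w, x, y, lam=5.0):
    # Closed-form minimizer of (y - x @ v)**2 + lam * ||v - w||**2:
    # penalizing the update damps the correction, so a single outlying
    # example moves the estimate less than the Kalman step does.
    return w + x * (y - x @ w) / (lam + x @ x)

# Example usage (illustrative): one noisy stream with a single outlier.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
w_kf, P = np.zeros(2), np.eye(2)
w_reg = np.zeros(2)
for t in range(50):
    x = rng.normal(size=2)
    y = x @ w_true + 0.1 * rng.normal()
    if t == 25:
        y += 20.0                   # outlier
    w_kf, P = kalman_step(w_kf, P, x, y)
    w_reg = regularized_step(w_reg, x, y)

Running both updates on the same stream shows the regularized estimate drifting more smoothly across the outlier, which is the qualitative behavior the abstract describes; the paper derives its algorithm from the optimal control formulation rather than from this one-step penalty.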
Neural Computation (2015) 27 (2): 388–480.
Published: 01 February 2015
Abstract
The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered on the concept of constraint, representing the interactions with the environment, and on the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, such as supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses.
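To fix ideas about the kind of formulation the abstract refers to, here is a minimal sketch in standard kernel-machine notation (the penalty functions and weights below are illustrative placeholders, not the paper's definitions):

\[
f^{\star} \;=\; \arg\min_{f \in \mathcal{H}_K} \; \|f\|_{K}^{2} \;+\; \sum_{j=1}^{m} \lambda_j\, \check\psi_j(f),
\]

where \(\mathcal{H}_K\) is the RKHS of the chosen kernel \(K\), the first term expresses the parsimony principle, and each \(\check\psi_j(f) \ge 0\) penalizes the violation of the \(j\)-th constraint. Plain supervised learning is recovered with pointwise penalties \(\check\psi_j(f) = (y_j - f(x_j))^2\), for which the solution takes the familiar kernel-expansion form \(f^{\star}(x) = \sum_j \alpha_j K(x, x_j)\); the representer theorems mentioned in the abstract generalize this expansion to richer kinds of constraints.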
Neural Computation (2013) 25 (4): 1029–1106.
Published: 01 April 2013
Abstract
Kernel machines traditionally arise from an elegant formulation based on measuring the smoothness of the admissible solutions by the norm in the reproducing kernel Hilbert space (RKHS) generated by the chosen kernel. It has been pointed out that they can also be formulated in a related functional framework in which the Green's function of a suitable differential operator is regarded as a kernel. In this letter, we give our own picture of this intriguing connection by emphasizing some relevant distinctions between these different ways of measuring the smoothness of admissible solutions. In particular, we show that for some kernels there is no associated differential operator. We place special emphasis on the crucial role of boundary conditions, which are in fact the truly distinguishing feature of the approach based on differential operators. We provide a general solution to the problem of learning from data and boundary conditions and illustrate with examples the significant role played by boundary conditions. It turns out that the degree of freedom arising in the traditional formulation of kernel machines is indeed a limitation, which is partly overcome by incorporating boundary conditions. This likely holds true in many real-world applications in which there is prior knowledge about the expected behavior of classifiers and regressors on the boundary.
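A minimal sketch of the correspondence the abstract alludes to, in standard regularization-network notation (the operator \(P\), its adjoint \(P^{*}\), and the weight \(\lambda\) are generic placeholders): minimizing the regularized empirical risk

\[
\sum_{i=1}^{\ell} \bigl(y_i - f(x_i)\bigr)^{2} \;+\; \lambda\,\langle P f,\, P f \rangle
\]

leads to solutions of the form \(f^{\star}(x) = \sum_{i} \alpha_i\, g(x, x_i)\) (plus a term in the null space of \(P\)), where \(g\) is the Green's function of \(L = P^{*}P\), that is, \(L\, g(\cdot, x') = \delta_{x'}\) under the chosen boundary conditions. The same operator with different boundary conditions yields different Green's functions, hence different "kernels," which is the distinction the letter emphasizes; conversely, for some kernels no such differential operator exists.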
Neural Computation (2010) 22 (3): 793–829.
Published: 01 March 2010
Abstract
Various regularization techniques are investigated in supervised learning from data. Theoretical features of the associated optimization problems are studied, and sparse suboptimal solutions are searched for. Rates of approximate optimization are estimated for sequences of suboptimal solutions formed by linear combinations of n-tuples of computational units, and statistical learning bounds are derived. As hypothesis sets, reproducing kernel Hilbert spaces and their subsets are considered.
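For orientation, the kind of rate behind such estimates is exemplified by the classical Maurey–Jones–Barron bound for approximation by n-term linear combinations of elements of a dictionary \(G\) in a Hilbert space (an illustrative classical result, not the paper's specific theorems): for every \(f\) in the closure of the convex hull of \(G\) with \(\sup_{g \in G}\|g\| \le B\) and every \(n\), there exist \(g_1, \dots, g_n \in G\) and convex coefficients \(a_1, \dots, a_n\) such that

\[
\Bigl\| f - \sum_{i=1}^{n} a_i\, g_i \Bigr\|^{2} \;\le\; \frac{B^{2} - \|f\|^{2}}{n}.
\]

Rates of this \(O(1/\sqrt{n})\) type, combined with statistical learning bounds, are what make linear combinations of \(n\) computational units attractive as hypothesis sets.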