## Abstract

The full-span log-linear (FSLL) model introduced in this letter can be considered an $n$th-order Boltzmann machine, where $n$ is the number of all variables in the target system. Let $X=(X_0,\ldots,X_{n-1})$ be finite discrete random variables that can take $|X|=|X_0|\cdots|X_{n-1}|$ different values. The FSLL model has $|X|-1$ parameters and can represent arbitrary positive distributions of $X$. The FSLL model is a highest-order Boltzmann machine; nevertheless, we can compute the dual parameters of the model distribution, which play important roles in exponential families, in $O(|X|\log|X|)$ time. Furthermore, using properties of the dual parameters of the FSLL model, we can construct an efficient learning algorithm. The FSLL model is limited to small probabilistic models up to $|X|\approx 2^{25}$; however, in this problem domain, it flexibly fits various true distributions underlying the training data without any hyperparameter tuning. The experiments showed that the FSLL model successfully learned six training data sets such that $|X|=2^{20}$ within one minute with a laptop PC.

## 1 Introduction

The main purposes of this letter are as follows:

- To introduce the full-span log-linear (FSLL) model and a fast learning algorithm
- To demonstrate the performance of the FSLL model with experiments

The Boltzmann machine represents its *model distribution* $p_\theta$ by equation 1.1, a log-linear model whose exponent collects the first- and second-order terms of $x$:

$$p_\theta(x)=\frac{1}{Z(\theta)}\exp\Bigl(\sum_{i<j}\theta_{ij}x_ix_j+\sum_{i}\theta_ix_i\Bigr),\qquad Z(\theta)=\sum_{x\in X}\exp\Bigl(\sum_{i<j}\theta_{ij}x_ix_j+\sum_{i}\theta_ix_i\Bigr).\tag{1.1}$$

One disadvantage of the Boltzmann machine is its insufficient ability to represent distributions. The Boltzmann machine has only $n(n+1)/2$ parameters, while the dimension of the function space spanned by the possible distributions of $X$ is $|X|-1$ (the $-1$ comes from the constraint $\sum_{x\in X}p(x)=1$).

Introducing higher-order terms into the exponent would allow the model to represent arbitrary positive distributions^{1} of $X$.

However, introducing higher-order terms leads to an enormous increase in computational cost because the $k$th-order Boltzmann machine has $\sum_{i=1}^{k}\binom{n}{i}$ parameters.
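As a concrete illustration of this growth for $n=20$ binary variables (our arithmetic, not from the letter):

$$\sum_{i=1}^{2}\binom{20}{i}=210,\qquad \sum_{i=1}^{5}\binom{20}{i}=21{,}699,\qquad \sum_{i=1}^{20}\binom{20}{i}=2^{20}-1=1{,}048{,}575.$$

The second-order machine is cheap, but covering all orders multiplies the parameter count by roughly five thousand.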

The FSLL model introduced in this letter can be considered an $n$th order Boltzmann machine,^{2} where $n$ is the number of all variables in the target system. The FSLL model has $|X|-1$ parameters and can represent arbitrary positive distributions. Since the FSLL model is a highest-order Boltzmann machine, the learning of FSLL is expected to be very slow. However, we propose a fast learning algorithm. For example, this algorithm can learn a joint distribution of 20 binary variables within 1 minute with a laptop PC.

Since the FSLL model has full degrees of freedom, a regularization mechanism to avoid overfitting is essential. For this purpose, we used a regularization mechanism based on the minimum description length principle (Rissanen, 2007).

The remainder of this letter is organized as follows. In section 2, we present the FSLL model and its fast learning algorithm. In section 3, we demonstrate the performance of the FSLL model by experiment. In section 4, we discuss the advantages and disadvantages of the FSLL model. In section 5, we present the conclusions and extensions of the letter.

## 2 Full-Span Log-Linear Model

Before introducing the FSLL model, we define the notation used in this letter. A random variable is denoted by a capital letter, such as $X$, and the value that $X$ takes is indicated by a lowercase letter, such as $x$. $X$ also denotes the set of values that the variable $X$ can take; thus, $|X|$ denotes the number of values that $X$ can take. $\langle f\rangle_p$ denotes the expectation of $f(X)$ under the distribution $p(X)$, that is, $\langle f\rangle_p=\sum_{x\in X}p(x)f(x)$. The differential operator $\partial/\partial\theta_y$ is abbreviated as $\partial_y$. $P$ denotes the set of all distributions of $X$, and $P_+$ denotes the set of all positive distributions of $X$.

### 2.1 Model Distribution

The model distribution of the FSLL model is given by equation 2.1. The univariate functions of each $X_i$ appearing there are referred to as the *local basis functions*, and their products $\{\Phi_y\}\ (y\in X)$ are $|X|$ functions of $X=(X_0,\ldots,X_{n-1})$, which we refer to as the *(global) basis functions*. Using the following theorem recursively, we can prove that the global basis functions are linearly independent functions of $X$.

**Theorem 1.** If $\{f_i\}\ (i\in I)$ are linearly independent functions of $X_0$, and $\{g_j\}\ (j\in J)$ are linearly independent functions of $X_1$, then $\{f_i(X_0)g_j(X_1)\}\ (i\in I,\ j\in J)$ are linearly independent functions of $(X_0,X_1)$.

The proof is provided in the appendix.

In the FSLL model, we determine the local basis functions as follows:

**Case $|X_i|=2^k$:**

The local basis functions are given by the rows of the $|X_i|\times|X_i|$ *Walsh-Hadamard matrix* (Pratt, Andrews, & Kane, 1969) defined in equation 2.2; each takes only the values 1 and $-1$.

**Else:**

Since $\Phi_0\equiv1$, any value of $\theta_0$ gives the same model distribution; therefore, we fix $\theta_0\equiv0$.
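For the case $|X_i|=2^k$, the Sylvester construction gives a concrete picture of such a matrix. The sketch below is illustrative only (the exact sign and ordering convention is fixed by equation 2.2, which we do not reproduce here); it builds the $2^k\times2^k$ Walsh-Hadamard matrix, whose entries are all $1$ or $-1$ and whose rows are mutually orthogonal:

```java
public class HadamardSketch {
    // Sylvester construction: H_{2m} = [[H_m, H_m], [H_m, -H_m]], H_1 = [1].
    // The rows can serve as |X_i| local basis functions taking values +1/-1.
    static int[][] hadamard(int k) {
        int size = 1 << k;
        int[][] h = new int[size][size];
        h[0][0] = 1;
        for (int m = 1; m <= size / 2; m <<= 1) {
            for (int r = 0; r < m; r++) {
                for (int c = 0; c < m; c++) {
                    h[r][c + m] = h[r][c];       // top-right block: H_m
                    h[r + m][c] = h[r][c];       // bottom-left block: H_m
                    h[r + m][c + m] = -h[r][c];  // bottom-right block: -H_m
                }
            }
        }
        return h;
    }

    public static void main(String[] args) {
        for (int[] row : hadamard(2)) {
            System.out.println(java.util.Arrays.toString(row));
        }
    }
}
```

The first row is identically 1, which matches the constant global basis function $\Phi_0\equiv1$ after taking products across variables.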

### 2.2 Learning Algorithm

Algorithm 1 presents the outline of the learning algorithm of the FSLL model. This algorithm is a greedy search to find a local minimum point of $cost(\theta )$, which monotonically decreases as the iteration progresses.

In line 7 of the algorithm, a *candidate* denotes a $\theta'$ derived from $\theta$ by applying one of the following modifications to the $y$th component:

- **Candidate derived by appending $\theta_y$:** If $\theta_y=0$, then let $\theta'_y=\operatorname{argmin}_{\theta_y}cost(\theta)$.
- **Candidate derived by adjusting $\theta_y$:** If $\theta_y\neq0$, then let $\theta'_y=\operatorname{argmin}_{\theta_y}cost(\theta)$.
- **Candidate derived by removing $\theta_y$:** If $\theta_y\neq0$, then let $\theta'_y=0$.

#### 2.2.1 Cost Function

The cost function (equation 2.5) is $KL(p_d\|p_\theta)$ plus a regularization term $r(\theta,N)$ derived from the minimum description length principle: the description length^{3} to transmit the sparse vector $\theta$ serves as the penalty.

#### 2.2.2 Fast Algorithm to Compute $\bar\theta$

The vector $\bar\theta$, defined by $\bar\theta_y=\langle\Phi_y\rangle_{p_\theta}$ and referred to as the *dual parameter* of $\theta$, plays important roles in exponential families (Amari, 2016).

As a guiding analogy, consider the two-dimensional discrete Fourier transform (2D-DFT)^{4} for $|X_0|\times|X_1|$ pixels of data. The 2D-DFT transforms $f$ into $F$ by the following equation (equation 2.7):

$$F(y_0,y_1)=\sum_{x_0\in X_0}\sum_{x_1\in X_1}f(x_0,x_1)\,e^{-2\pi i\left(\frac{x_0y_0}{|X_0|}+\frac{x_1y_1}{|X_1|}\right)}.$$

We can use the same algorithm to obtain the vector $\bar d$ from $p_d$. In this case, let $g_0=p_d$.

#### 2.2.3 Acceleration by Walsh-Hadamard Transform

In cases where $|X_i|$ is large, we can accelerate the computation of $\bar\theta$ by using the Walsh-Hadamard transform (WHT) (Fino & Algazi, 1976).

Let us recall the 2D-DFT in section 2.2.2. If $|X_0|$ and $|X_1|$ are powers of two, we can use the fast Fourier transform (FFT) (Smith, 2010) for the DFT by $X_0$ and the DFT by $X_1$ in equation 2.7 and can reduce the computational cost to $O(|X_0||X_1|\log|X_0||X_1|)$. We can apply this principle to computing $\bar\theta$.

We refer to the transform along a single axis $X_i$ in equation 2.10 as the *local transform*. The local transform usually requires $O(|X_i|^2)$ time; however, if the matrix in equation 2.10 is a Walsh-Hadamard matrix (see equation 2.2), we can perform the local transform in $O(|X_i|\log|X_i|)$ time using the WHT.^{5}

Since the number of all combinations of $y_0,\ldots,y_{i-1},x_{i+1},\ldots,x_{n-1}$ (the $i$th component is omitted) is $|X|/|X_i|$, we can obtain the function $g_{i+1}$ from $g_i$ in $O(|X|\log|X_i|)$ time. Moreover, $g_n$ is obtained from $g_0$ in $O(\sum_i|X|\log|X_i|)=O(|X|\log|X|)$ time.
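When every variable is binary, the chain $g_0\to g_1\to\cdots\to g_n$ of local transforms reduces to the standard in-place fast WHT. The sketch below is our illustration (unnormalized transform; sign and index conventions are assumptions): applied to $g_0=p_d$, its output table contains the sums $\sum_x p_d(x)\Phi_y(x)$, that is, the $\bar d_y$ under a $\pm1$ basis convention.

```java
public class FastWHT {
    // In-place fast Walsh-Hadamard transform of a length-2^n array.
    // Each value of h corresponds to one local transform (one variable X_i),
    // so the total cost is O(|X| log |X|), mirroring g_0 -> g_1 -> ... -> g_n.
    static void wht(double[] g) {
        int size = g.length; // must be a power of two
        for (int h = 1; h < size; h <<= 1) {
            for (int base = 0; base < size; base += h << 1) {
                for (int j = base; j < base + h; j++) {
                    double a = g[j], b = g[j + h];
                    g[j] = a + b;       // 2x2 Hadamard butterfly
                    g[j + h] = a - b;
                }
            }
        }
    }

    public static void main(String[] args) {
        double[] p = {0.5, 0.25, 0.125, 0.125}; // toy distribution over 2 bits
        wht(p);
        // p[0] is now the total mass 1.0; the rest are expectations <Phi_y>
        System.out.println(java.util.Arrays.toString(p));
    }
}
```

For a distribution, the zeroth output is always 1 (total probability mass), matching $\bar\theta_0=\langle\Phi_0\rangle=1$.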

#### 2.2.4 $O(1)$ Algorithm to Evaluate Candidate

- **Appending $\theta_y$:** By equation 2.16, $KL(p_d\|p_\theta)$ is minimized when $\bar\theta_y=\bar d_y$. Therefore,
$$\Delta=\frac{1+\bar d_y}{2}\ln\frac{1+\bar\theta_y^0}{1+\bar d_y}+\frac{1-\bar d_y}{2}\ln\frac{1-\bar\theta_y^0}{1-\bar d_y}+r_y.\tag{2.18}$$
- **Adjusting $\theta_y$:** By equation 2.16, $KL(p_d\|p_\theta)$ is minimized when $\bar\theta_y=\bar d_y$. Therefore,
$$\Delta=\frac{1+\bar d_y}{2}\ln\frac{1+\bar\theta_y^0}{1+\bar d_y}+\frac{1-\bar d_y}{2}\ln\frac{1-\bar\theta_y^0}{1-\bar d_y}.$$
- **Removing $\theta_y$:** $\theta_y$ becomes 0. Therefore,
$$\Delta=\frac{1+\bar d_y}{2}\ln\frac{1+\bar\theta_y^0}{1+\bar\theta_y(0)}+\frac{1-\bar d_y}{2}\ln\frac{1-\bar\theta_y^0}{1-\bar\theta_y(0)}-r_y.$$
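Once $\bar d_y$ and $\bar\theta_y^0$ are known, the appending case of equation 2.18 is an $O(1)$ evaluation. A direct transcription (our sketch; $r_y$ is treated as a given penalty value):

```java
public class CandidateGain {
    // Delta for a candidate derived by appending theta_y (equation 2.18).
    // dBar = \bar d_y, thetaBar0 = \bar\theta_y^0 (current dual value),
    // ry = description-length penalty of the new nonzero parameter.
    static double deltaAppend(double dBar, double thetaBar0, double ry) {
        return (1 + dBar) / 2 * Math.log((1 + thetaBar0) / (1 + dBar))
             + (1 - dBar) / 2 * Math.log((1 - thetaBar0) / (1 - dBar))
             + ry;
    }

    public static void main(String[] args) {
        // If the dual already matches the data and there is no penalty,
        // appending changes nothing: Delta = 0.
        System.out.println(deltaAppend(0.3, 0.3, 0.0));
    }
}
```

The adjusting case drops the $+r_y$ term; a candidate is worth keeping only when $\Delta<0$, so a large penalty can veto an otherwise helpful parameter.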

Among the three types of candidates (appending, adjusting, and removing), the group of candidates derived by appending $\theta_y$ contains almost $|X|$ members because $\theta$ is a very sparse vector. Therefore, it is important to reduce the computational cost of evaluating candidates derived by appending. Evaluating $\Delta$ in equation 2.18 is an expensive task for a central processing unit (CPU) because it involves logarithm computation.^{6}

Algorithm 2 presents the details of line 7 of algorithm 1. In line 5, if the lower bound $\underline{\Delta}(\theta'_y)$ is not smaller than $\Delta(\theta^1)$, the $\Delta$ of the best candidate found so far, the candidate $\theta'$ is discarded and the exact evaluation of $\Delta(\theta'_y)$ is skipped. This skipping effectively reduces the computational cost of evaluating candidates.^{7}

#### 2.2.5 Updating $\theta $ and $p\theta $

Algorithm 3 presents the details of lines 10 and 11 in algorithm 1. This algorithm requires $O(|X|)$ time. It should be noted that the expensive exponential computation $\exp\bigl((\theta_{y'}^1-\theta_{y'}^0)\Phi_{y'}(x)\bigr)$ is not performed inside the for-loop: since $\Phi_{y'}(x)\in\{1,-1\}$, only two factor values are possible, and they are precomputed outside the loop.
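A sketch of the trick (our illustration; all names are ours): because $\Phi_{y'}(x)$ is $1$ or $-1$, the two possible multiplicative factors are computed once, and the loop body contains only multiplications:

```java
public class ModelUpdate {
    // Multiply p[x] by exp(dTheta * phi[x]) for all x, where phi[x] is +1/-1,
    // then renormalize. Two exp-equivalent computations in total, none in the loop.
    static void update(double[] p, int[] phi, double dTheta) {
        double plus = Math.exp(dTheta);  // factor when Phi_y'(x) = +1
        double minus = 1.0 / plus;       // factor when Phi_y'(x) = -1
        double z = 0.0;
        for (int x = 0; x < p.length; x++) {
            p[x] *= (phi[x] == 1) ? plus : minus;
            z += p[x];
        }
        for (int x = 0; x < p.length; x++) p[x] /= z; // keep p a distribution
    }

    public static void main(String[] args) {
        double[] p = {0.25, 0.25, 0.25, 0.25};
        update(p, new int[]{1, 1, -1, -1}, 0.5);
        System.out.println(java.util.Arrays.toString(p));
    }
}
```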

#### 2.2.6 Memory Requirements

In the FSLL model, most of the memory consumption is dominated by four large tables for $p_\theta$, $\bar\theta$, $\bar d$, and $r$, each storing $|X|$ floating-point numbers. However, $\theta$ does not require a large amount of memory because it is a sparse vector.

For example, if the FSLL model is allowed to use 4 GB of memory, it can handle up to 26 binary variables, 16 three-valued variables ($|X_i|=3$), or 13 four-valued variables.
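As a rough consistency check (our arithmetic, assuming 8-byte floating-point entries, which the letter does not state):

$$4\times2^{26}\times8\,\mathrm{B}=2\,\mathrm{GiB},\qquad 4\times3^{16}\times8\,\mathrm{B}\approx1.4\,\mathrm{GB},\qquad 4\times4^{13}\times8\,\mathrm{B}=2\,\mathrm{GiB},$$

so in each configuration the four tables fit within the 4 GB budget with room left for the remaining bookkeeping.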

### 2.3 Convergence to Target Distribution

One major point of interest about algorithm 1 is whether the model distribution converges to a target distribution in the limit $N\to\infty$, where $N$ is the number of samples in the training data. We can indeed guarantee this convergence.

We first consider the case where the cost function is a continuous function of $\theta$ with no regularization term. Here, if and only if $f(\theta)=\min_{\theta'\in A(\theta)}f(\theta')$, where $A(\theta)$ denotes the set of points obtained from $\theta$ by modifying at most one component, we say that $\theta$ is an *axis minimum* of $f$. Then the following theorem holds (Beck, 2015):

**Theorem 2.** Let $f:\mathbb{R}^{|X|}\to\mathbb{R}$ be a continuous cost function such that $B=\{\theta\,|\,f(\theta)\le f(\theta^0)\}$ is a bounded closed set. Then any accumulation point of $\{\theta^t\}\ (t\in[0,\infty))$ is an axis minimum of $f$.

The proof is provided in the appendix. Here, if $f(\theta)$ has a unique axis minimum at $\theta=a$, the following corollary is derived from theorem 2:

**Corollary 1.** Let $f$ be a function satisfying the condition in theorem 2. If $f$ has a unique axis minimum at $\theta=\theta_{\min}$, then $\theta_{\min}$ is also the global minimum and $\lim_{t\to\infty}\theta^t=\theta_{\min}$.

The proof is provided in the appendix.
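As a toy illustration of corollary 1 (our example, not from the letter): exact minimization cycled over the coordinate axes of a strictly convex quadratic, whose unique axis minimum coincides with its global minimum, converges to that minimum.

```java
public class CoordinateDescentDemo {
    // f(a, b) = (a-1)^2 + (b+2)^2 + a*b is strictly convex, so its unique
    // axis minimum is the global minimum (8/3, -10/3). Cyclic exact
    // minimization along each axis converges there, as corollary 1 predicts.
    static double[] minimize(int iters) {
        double a = 0, b = 0;
        for (int t = 0; t < iters; t++) {
            a = (2 - b) / 2;   // argmin over a: solve 2(a-1) + b = 0
            b = -(4 + a) / 2;  // argmin over b: solve 2(b+2) + a = 0
        }
        return new double[]{a, b};
    }

    public static void main(String[] args) {
        double[] m = minimize(100);
        System.out.println(m[0] + ", " + m[1]);
    }
}
```

Each cycle contracts the error by a constant factor here; the FSLL learning algorithm performs the analogous axis-wise minimization over the coordinates $\theta_y$.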

Let $q$ be a positive distribution.^{8} By corollary 1, in the case where the cost function is $f(\theta)=KL(q\|p_\theta)$, the equation $\lim_{t\to\infty}KL(q\|p_{\theta^t})=0$ holds.

Then we extend the cost function to $f_N(\theta)=f(\theta)+r(\theta,N)$, where $f:\mathbb{R}^{|X|}\to\mathbb{R}$ is a continuous function having the unique global minimum at $\theta=\theta_{\min}$, and $r(\theta,N)$ is the regularization term in equation 2.5. Theorem 3, whose proof is provided in the appendix, guarantees the convergence in this case as well.

## 3 Experiments

In this section, to demonstrate the performance of the FSLL model, we compare a full-span log-linear model that we refer to as *FL* with two Boltzmann machines that we refer to as *BM-DI* and *BM-PCD*.

### 3.1 Full-Span Log-Linear Model FL

FL is the full-span log-linear model described in section 2. The model distribution of FL is given by equation 2.1, and the cost function by equation 2.5. The learning algorithm is algorithm 1, described in section 2.2. The threshold to finish the cost minimization is set to $\epsilon=10^{-4}$.

### 3.2 Boltzmann Machine BM-DI

BM-DI (Boltzmann machine with direct integration) is a fully connected Boltzmann machine with no hidden variables and no temperature parameter. To examine the ideal performance of the Boltzmann machine, BM-DI evaluates equation 1.2 by direct integration rather than by Monte Carlo approximation. The model distribution of BM-DI is given by equation 1.1. The cost function of BM-DI is $KL(p_d\|p_\theta)$ and has no regularization term.

### 3.3 Boltzmann Machine BM-PCD

BM-PCD (Boltzmann machine with persistent contrastive divergence) is similar to BM-DI; however, BM-PCD uses the persistent contrastive divergence method (Tieleman, 2008), a popular Monte Carlo method in Boltzmann machine learning. BM-PCD has some hyperparameters. We tested various combinations and set them as follows: learning rate $=0.01$, number of Markov chains $=100$, and length of Markov chains $=10{,}000$.

### 3.4 Training Data

We prepared six training data sets. These data sets are artificial; therefore, their true distributions $p*(X)$ are known. Each is an independent and identically distributed (i.i.d.) data set drawn from its true distribution.

#### 3.4.1 Ising5 $\times$ 4S, Ising5 $\times$ 4L

#### 3.4.2 BN20-37S, BN20-37L

#### 3.4.3 BN20-54S, BN20-54L

### 3.5 Experimental Platform

All experiments were conducted on a laptop PC (CPU: Intel Core i7-6700K @ 4 GHz; memory: 64 GB; operating system: Windows 10 Pro). All programs were written in Java 8.

### 3.6 Results

Table 1 presents performance comparisons among FL, BM-DI, and BM-PCD. We evaluated the accuracy of the learned distribution by $KL(p^*\|p_\theta)$. Figure 3 illustrates the comparison of $KL(p^*\|p_\theta)$.

| Data | Model | $KL(p_d\|p_\theta)$ | $KL(p^*\|p_\theta)$ | #Basis | Time |
|---|---|---|---|---|---|
| Ising5 $\times$ 4S | FL | 2.501 nat | 0.012 nat | 31 | 5 sec |
| | BM-DI | 2.424 | 0.087 | 210 | 13 |
| | BM-PCD | 2.504 | 0.094 | 210 | 3 |
| Ising5 $\times$ 4L | FL | 0.476 | 0.004 | 37 | 9 |
| | BM-DI | 0.473 | 0.002 | 210 | 12 |
| | BM-PCD | 0.528 | 0.053 | 210 | 3 |
| BN20-37S | FL | 4.355 | 0.317 | 39 | 5 |
| | BM-DI | 4.746 | 0.863 | 210 | 17 |
| | BM-PCD | 4.803 | 0.903 | 210 | 3 |
| BN20-37L | FL | 0.697 | 0.026 | 105 | 12 |
| | BM-DI | 1.422 | 0.750 | 210 | 19 |
| | BM-PCD | 1.477 | 0.806 | 210 | 3 |
| BN20-54S | FL | 3.288 | 0.697 | 41 | 5 |
| | BM-DI | 3.743 | 1.301 | 210 | 23 |
| | BM-PCD | 3.826 | 1.338 | 210 | 3 |
| BN20-54L | FL | 0.430 | 0.057 | 192 | 23 |
| | BM-DI | 1.545 | 1.166 | 210 | 21 |
| | BM-PCD | 1.620 | 1.242 | 210 | 3 |


Notes: $p_d$: empirical distribution of the training data. $p^*$: true distribution. #Basis: number of used ($\theta_y\neq0$) basis functions. Time: CPU time for learning (median of three trials).

For Ising5 $\times$ 4S/L, the performance difference between FL and the BMs (BM-DI and BM-PCD) was not remarkable because both FL and the BMs could represent the true distribution $p^*$. The fact that $KL(p_d\|p_\theta)\gg KL(p^*\|p_\theta)$ implies that overfitting to $p_d$ was successfully suppressed. FL used fewer basis functions than the BMs did, which implies that some basis functions of the BMs were useless for representing $p^*$. Regarding the accuracy of the model distribution, BM-PCD is less accurate than FL and BM-DI. This disadvantage becomes noticeable when the model distribution is close to the true distribution: even when large training data are given, some error remains in the model distribution of BM-PCD (e.g., Ising5 $\times$ 4L).

For BN20-37S/L and BN20-54S/L, FL outperformed the BMs because only FL could represent $p^*$. To fit $p^*$, FL adaptively selected 39 basis functions for BN20-37S and 105 basis functions for BN20-37L from $|X|-1=2^{20}-1$ candidates. This fact implies that FL constructed a more complex model to fit $p^*$ as the training data increased. Furthermore, a comparison of $KL(p^*\|p_\theta)$ revealed that the accuracy of the model distribution improved remarkably as the size of the training data increased in FL. In contrast, the BMs could not fit $p^*$ even when a large training data set was supplied.

Figure 4 illustrates the CPU time to learn the training data sets. BM-PCD was the fastest, and FL was faster than BM-DI for five out of six training data sets. The learning time of BM-PCD is constant because we used a fixed length (10,000) of Markov chains. FL had $|X|-1=2^{20}-1$ basis functions, while BM-DI had $n(n+1)/2=210$. Nevertheless, FL was faster than BM-DI.

For BN20-54L, FL took a long time to learn because it used 192 basis functions to construct the model distribution. Using these 192 basis functions, FL successfully constructed a model distribution that fit $p^*$, while the BMs failed.

## 4 Discussion

The major disadvantage of the FSLL model is that it is not feasible for large problems owing to memory consumption and learning speed. If we use a typical personal computer, the problem size is limited to $|X|\lesssim2^{25}$. However, as long as we use the FSLL model in this problem domain, it has the following theoretical and practical advantages.

The first advantage is that the FSLL model can represent arbitrary positive distributions of $X$. Furthermore, the model distribution is guaranteed to converge to any target distribution in the limit of infinite training data.

Here, we view learning machines from an information geometry perspective. The dimension of the distribution space $P_+$ (all positive distributions of $X$) is $|X|-1$, and a learning machine having $M$ parameters spans an $M$-dimensional manifold in $P_+$ to represent its model distribution (we refer to this manifold as the *model manifold*).

A learning machine with $M<|X|-1$ parameters cannot represent arbitrary distributions in $P_+$. Moreover, if $M<|X|-1$, there is no guarantee that the true distribution is close to the model manifold, and if the model manifold is remote from the true distribution, the machine's performance will be poor. This poor performance is not improved even if infinite training data are given.

The FSLL model extends the manifold's dimension to $|X|-1$ by introducing higher-order factors. The model manifold becomes $P_+$ itself; thus, no further expansion is possible. Therefore, we refer to the model as the *full-span log-linear model*. The main claim of this letter is that as long as the problem size is $|X|\lesssim2^{25}$, the FSLL model is a feasible and practical model.

For example, suppose that we construct a full-span model by adding hidden nodes to a Boltzmann machine having 20 visible nodes. The number of parameters of the Boltzmann machine is $E+n$, where $E$ is the number of edges and $n$ is the number of nodes. Therefore, it is not practical to construct the full-span model for 20 visible nodes because it requires $\approx2^{20}$ edges.

The second advantage is that the FSLL model has no hyperparameters; therefore, no hyperparameter tuning is needed. For example, if we use a Boltzmann machine with hidden nodes to learn the true distribution with contrastive divergence methods, we need to determine hyperparameters such as the learning rate, the mini-batch size, the number of hidden nodes, and the graphical structure of the nodes. The FSLL model, however, automatically learns the training data without human intervention.

## 5 Conclusion and Extension

Suppose that we let the FSLL model learn training data consisting of 20 binary variables. The dimension of the function space spanned by the possible positive distributions is $2^{20}-1$. The FSLL model has $2^{20}-1$ parameters and can fit arbitrary positive distributions. The basis functions of the FSLL model have the following properties:

- Each basis function is a product of univariate functions.
- The basis functions take values 1 or $-1$.

The proposed learning algorithm exploited these properties and realized fast learning.

Our experiments demonstrated the following:

- The FSLL model could learn training data with 20 binary variables within one minute on a laptop PC.
- The FSLL model successfully learned the true distribution underlying the training data, even when the distribution contained higher-order terms depending on three or more variables.
- The FSLL model constructed a more complex model to fit the true distribution as the training data increased; accordingly, the learning time became longer.

## Appendix: Proofs and Derivations of Equations

### A.1 Proof of Theorem 1

### A.2 Derivation of Equation 2.6

### A.3 Derivation of Equation 2.11

### A.4 Derivation of Equation 2.13

### A.5 Derivation of Equation 2.16

### A.6 Derivation of Equation 2.17

### A.7 Derivation of Equation 2.19

### A.8 Proof of Theorem 2

Since $B$ is a bounded closed set, $\{\theta^t\}$ has one or more accumulation points in $B$.

### A.9 Proof of Corollary 1

By theorem 2, any accumulation point of $\{\theta^t\}$ is an axis minimum of $f$; however, the axis minimum of $f$ is unique; therefore, $\theta=a$ is the unique accumulation point of $\{\theta^t\}$, and $\theta^t\to a$ as $t\to\infty$. $\square$

### A.10 Proof of Theorem 3

## Acknowledgments

This research was supported by KAKENHI grant 17H01793.

## Notes

^{1}

Distributions such that $\forall x,\ p(x)>0$.

^{2}

Furthermore, the FSLL model is not limited to binary variables.

^{3}

We use “nat” as the description length unit; thus, we use “ln” instead of “log.”

^{4}

Not 2D-FFT but 2D-DFT.

^{5}

For $k\le4$, directly multiplying the Hadamard matrix is faster than the WHT in our environment; therefore, we use the WHT only in cases where $k\ge5$.

^{6}

Logarithm computation is 30 times slower than addition or multiplication in our environment.

^{7}

In our environment, this skipping makes the evaluation of candidates more than 10 times faster.

^{8}

Positivity is needed to keep $B$ bounded.

## References

*Cognitive Science*

*Information geometry and its applications*

*Proceedings of the 24th International Conference on Machine Learning*

*SIAM Journal on Optimization*

*SIAM Review*

*IEEE Transactions on Computers*

*Monte Carlo methods in statistical physics*

*Numerical optimization*

*Proceedings of the IEEE*

*Journal of Machine Learning Research*

*Information and complexity in statistical modeling*

*AIP Conference Proceedings*

*Handbook of real-time fast Fourier transforms*

*IEICE Technical Report*

*IEICE Technical Report*

*Proceedings of the 25th International Conference on Machine Learning*