Abstract

In this letter, we perform a complete and in-depth analysis of Lorentzian noises, such as those arising from Na+ and K+ channel kinetics, in order to identify the source of 1/f^θ-type noise in neurological membranes. We prove that the autocovariance of Lorentzian noise depends solely on the eigenvalues (time constants) of the kinetic matrix but that the Lorentzian weighting coefficients depend entirely on the eigenvectors of this matrix. We then show that there are rotations of the kinetic eigenvectors that send any initial weights to any target weights without altering the time constants. In particular, we show there are target weights for which the resulting Lorentzian noise has an approximately 1/f^θ-type spectrum. We justify these kinetic rotations by introducing a quantum mechanical formulation of membrane stochastics, called hidden quantum activated-measurement models, and prove that these quantum models are probabilistically indistinguishable from the classical hidden Markov models typically used for ion channel stochastics. The quantum dividend obtained by replacing classical with quantum membranes is that rotations of the Lorentzian weights become simple readjustments of the quantum state without any change to the laboratory-determined kinetic and conductance parameters. Moreover, the quantum formalism allows us to model the activation energy of a membrane, and we show that maximizing entropy under constrained activation energy yields the previous 1/f^θ-type Lorentzian weights, in which the spectral exponent θ is a Lagrange multiplier for the energy constraint. Thus, we provide a plausible neurophysical mechanism by which channel and membrane kinetics can give rise to 1/f^θ-type noise (something that has been occasionally denied in the literature), as well as a realistic and experimentally testable explanation for the numerical values of the spectral exponents. We also discuss applications of quantum membranes beyond 1/f^θ-type noise, including applications to animal models and possible impact on quantum foundations.

1  Introduction and Overview

1.1  The Nature of Neurological Noise

The theory of neurological noise presented in this letter was developed originally to enhance algorithms for electroencephalogram (EEG)-based, steady-state visual evoked potential (SSVEP) (Luck, 2014) brain-computer interfaces (BCI) (Paris, 2016).1 In particular, many BCI algorithms (e.g., Liavas, Moustakides, Henning, Psarakis, & Husar, 1998) assume flat gaussian noise for EEG periodograms, although it is well attested in the literature (Vysata et al., 2014) that EEG spectra have a roughly 1/f^θ roll-off, as is visually evident in typical SSVEP data (see Figure 1). This discrepancy called into question the validity of standard statistical detection algorithms for the low signal-to-noise-ratio applications we were investigating.

Figure 1:

EEG periodogram from a 15 s SSVEP experiment showing 1/f^θ-type roll-off. (SSVEP data from Bakardjian, Tanaka, & Cichocki, 2010.)


By “neurological noise” we mean any signal in the apparatus of a neurologically based experiment that is regarded, for the present, as carrying no information in the sense of communication theory (Manwani & Koch, 1999b). The cautious phrase “for the present” must be included in the definition since neurological systems are more like cocktail parties than concerts: what is “noise” to us now may transform to signal if we move through the room.2 We include in the term all scales from ion channels (Hille, 2001) and membranes (DeFelice, 1981; Bezrukov & Vodyanoy, 1994; Johnston & Wu, 1995; Mak & Webb, 1997), to neurons and small networks (i.e., “neural” noise) (Manwani & Koch, 1999a; White, Rubinstein, & Kay, 2000; Destexhe & Rudolph-Lilith, 2012, is a good overview), through macroscale EEG and other cortical imaging modalities (Nunez & Srinivasan, 2006; Cohen, 2014). For example, high-frequency oscillation measurements for epileptic seizure localization (Zijlmans et al., 2001) may require the attenuation of noise originating at the level of ion channels and membranes (DeFelice, 1981), a scale well below EEG.

Yet in spite of scaling by at least seven orders of magnitude (Nunez & Srinivasan, 2006), there are a few common themes. First, neurological noise is often both a curse and a blessing. Certainly it is a curse when it interferes negatively with our favorite signal processing methods. However, in the form of what B. Hille (1978) called “fluctuation studies,” its analysis has led to key breakthroughs in our understanding of channels, gating, and synapses. This was spectacularly the case in the work of Katz, Miledi, Anderson, and Stevens (Katz & Miledi, 1972; Anderson & Stevens, 1973) and others on acetylcholine noise at neuromuscular junctions (see Figure 2). Also, there is evidence, at both the neuronal (Koch, 1999; Legenstein & Maass, 2014) and cortical (Palmer & O'Shea, 2014) levels, that noise is required for normal cognitive function. In addition, noise characteristics may be diagnostic of certain neurological disorders (Díez, Casado, Martín-Loeches, & Molina, 2013; Vysata et al., 2014). Thus, neurological noise research is as much about basic neuroscience as about signal engineering.

Figure 2:

Single-term Lorentzian spectrum at a neuromuscular junction (from Anderson & Stevens, 1973). (© 1973 John Wiley & Sons)


Another common theme at all scales is the presence of 1/f^θ-type noise—stationary noise whose power spectral density (PSD) has the form
S(f) ∼ 1/f^θ
1.1
over some range of frequencies and with some constant spectral exponent θ. (Here "∼" denotes "behaves like," in an informal sense.3) The literature on 1/f^θ-type noise, even when restricted to neurological contexts, is so vast that we cannot begin to review it here. (See Destexhe & Rudolph-Lilith, 2012, for a thorough bibliography.) Its neurological origin is considered somewhat mysterious, and many explanations for it have appeared in the literature (Weissman, 1978, is rather interesting).
It is known that 1/f^θ-type noises appear during single-channel measurements (White et al., 2000) and therefore are likely caused by the governing Markov kinetics (Hille, 2001). Yet some very early work (Hill & Chen, 1972) that sought to determine whether K+ channels could give rise to 1/f^θ-type noise concluded, paradoxically, that they could not. So far as we know, this conclusion of Hill and Chen has never been challenged. On the other hand, Diba, Lester, and Koch (2004) demonstrated that a very broad class of spectral shapes can be approximated by multichannel membranes with noise from the Lorentzian family, to be discussed subsequently (see section 2), whose autocovariances are of the form
R(t) = Σ_n c_n e^(−|t|/τ_n)
1.2
and thus possessing PSDs
S(f) = Σ_n c_n 2τ_n / (1 + (2πf τ_n)^2)
1.3
where t is the time lag, f is the frequency, the τ_n's are time constants, and the c_n's are appropriately chosen weights. But they discussed no criteria, beyond numerical fitting, to select the weights systematically.

Understanding the origin and meaning of the spectral exponent θ is of more than intellectual interest since unusual values may indicate neuro-degeneration (Vysata et al., 2014). Because of its "power law" spectrum (Miller & Troyer, 2002), many researchers have speculated on origination in purported fractal or scale-invariant properties (Millhauser, Salpeter, & Oswald, 1988) of the neurological tissue, for which θ is related to the "Hurst parameter" (Mandelbrot, 1983, 1999; Liebovitch & Sullivan, 1987). However, the evidence for this is equivocal (Korn & Horn, 1988; McManus, Weiss, Spivak, Blatz, & Magleby, 1988; Sansom, Ball, Kerry, Ramsey, & Usherwood, 1989). (But see section 5.2 for an intriguing observation.)

We will return to this important parameter in section 3.1, where we show that the spectral exponent is most parsimoniously interpreted as a Lagrange multiplier for constrained entropy maximization.

1.2  Deconstructing Kinetic Models of Ion Channel Noise

It is highly thought provoking that the most reliable equations in the classic Hodgkin and Huxley (1952) model are in some ways the least satisfactory. The experimentally determined, phenomenological rate equations,
α_n(V) = 0.01 (V + 10) / (e^((V+10)/10) − 1),   β_n(V) = 0.125 e^(V/80),
1.4
with the membrane voltage V measured in mV and the rates in kHz, together with the comparable Na+ equations, are extremely reliable (see Figure 3). This is proven by their success in reproducing the original voltage clamp data to which they were fit and, far more significantly, in predicting the precise action potential dynamics. Yet no justification was offered, or perhaps even possible, for their form other than their success.4 Of course, the same reliability extends to the derived time constant τ_n = 1/(α_n + β_n) and steady-state probability n_∞ = α_n/(α_n + β_n).
Figure 3:

Simulated membrane current during voltage clamp: Markov sample paths and Hodgkin-Huxley predicted mean.


Hodgkin and Huxley were well aware that their probability function n(t), defined by
dn/dt = α_n (1 − n) − β_n n,
1.5
with exponential solution and time constant τ_n = 1/(α_n + β_n), represented only the mean value, at each time t, of some stochastic process driving the opening and closing of the ion channels (see Figure 3).5 But they did not speculate on the nature of this process, being satisfied with the substance of their voltage-clamp experiments.
However, soon after (e.g., Fitzhugh, 1965; Hill & Chen, 1972), researchers began to model the underlying stochastic process using hidden Markov models (HMMs) (see appendix C, section C.3.3, definition 62). In chemical kinetic terms (DeFelice, 1981; Clay & DeFelice, 1983; Hille, 2001), this is equivalent to having a finite list p_1(t), ..., p_N(t) of state occupation probabilities whose dynamics are determined by a kinetic matrix Q according to the first-order kinetic equation6
dp/dt = Q p,
1.6
where p = p(t) denotes the column vector with entries p_i(t).
Definition 1. Kinetic Matrix.
A kinetic matrix Q, with N states,7 is an N × N matrix that has at least one column vector p̄ (a stable probability vector), with entries p̄_i ≥ 0 summing to 1, satisfying
Q p̄ = 0.
A kinetic matrix is nondegenerate if the eigenvalue 0 has multiplicity 1; that is, p̄ is the unique (up to scaling) column vector such that Q p̄ = 0. It is degenerate otherwise.

Note that, without loss of generality, we always will assume .8

Definition 2. Time Constants.

When the kinetic matrix Q is nondegenerate, there are precisely N − 1 nonzero eigenvalues. Thus, we have time constants τ_1, ..., τ_(N−1) such that −1/τ_1, ..., −1/τ_(N−1) are the nonzero eigenvalues of Q, repeated according to multiplicities.

Definition 3. Markov Kinetics Property.

A kinetic matrix Q satisfies the Markov property or is Markovian if all nondiagonal entries are nonnegative: Q_ij ≥ 0, for i ≠ j.

Remark 1.

A kinetic matrix Q is Markovian if and only if there is a clock rate (alt. clock frequency) f_c such that I + Q/f_c is a Markov probability matrix (Lamperti, 1977). (But see section C.4, remark 67.)

An example of a nondegenerate, Markovian kinetic matrix is the five-state K+ matrix,

Q_K =
[ −4α      β        0         0        0   ]
[  4α   −(3α+β)    2β         0        0   ]
[   0     3α    −(2α+2β)     3β        0   ]
[   0      0       2α      −(α+3β)    4β   ]
[   0      0        0         α      −4β   ]
1.7

with α = α_n(V) and β = β_n(V) at membrane voltage V, which has been "the" K+ kinetic model since the late 1960s (Hill & Chen, 1972; DeFelice, 1981; Manwani & Koch, 1999a; White et al., 2000; Hille, 2001; Destexhe & Rudolph-Lilith, 2012).9
All steady-state HMMs, with kinetic equation 1.6, have discrete Lorentzian autocovariances10 (see equation 2.1),

R(t) = Σ_(n=1)^(N−1) c_n e^(−|t|/τ_n),
1.8

at lag t, with time constants τ_n and weights c_n calculated from Q. For example, Q_K has time constants τ/n, for n = 1, ..., 4, with τ = 1/(α + β) the Hodgkin-Huxley time constant defined earlier, and

c_n ∝ (4 choose n) n̄^(8−n) (1 − n̄)^n,
1.9

with n̄ = n_∞(V), where ∝ denotes proportionality.

In our research, we considered the effect on the noise statistics if, governed only by the information presented in Hodgkin and Huxley (1952), we replaced an a priori kinetic matrix Q, such as equation 1.7, with some other kinetic matrix Q′. The central observations of our Hodgkin-Huxley noise analysis (see section F.1) are the following:

  • The time constants are always the reciprocals of the nonzero eigenvalues of the kinetic matrix Q. So, in the case of the Na+ and K+ channels, whose time constants were experimentally determined, these eigenvalues are reliable in the sense we discussed earlier. Thus, if we ensure that Q′ shares the same eigenvalues as Q, we will stay within experimental bounds.

  • On the other hand, the weights depend only on the eigenvectors of Q and, because Hodgkin and Huxley made no proposals concerning specific kinetic matrices or even the form of the stochastics, there is absolutely no information related to these eigenvectors contained in their original work. For example, the eigenvectors of Q′ could differ arbitrarily from those of Q_K above without affecting any laboratory findings.

Consequently, a weighting calculation like equation 1.9, which is almost universally used when ion channel noise is modeled (e.g., Manwani & Koch, 1999a), depends entirely on eigenvectors of which we have no experimental knowledge whatsoever. Put another way, formulas such as equation 1.9 rely on an implicit and particular vector space basis whose only apparent virtue is the elegance, as seen in equation 1.7, of the resulting matrices.
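To make the eigenvalue/eigenvector split concrete, here is a minimal NumPy sketch (ours, not from the original analysis; the gate rates and the choice of a fully-open-state conductance observable are hypothetical). It builds the standard five-state K+ kinetic matrix, recovers the time constants from the nonzero eigenvalues, and computes the Lorentzian weights from the eigenvectors; changing the eigenvectors while keeping the eigenvalues would change only the weights.

```python
import numpy as np

def k_channel_Q(alpha, beta):
    """Five-state K+ kinetic matrix (state = number of open subunits), column
    convention dp/dt = Q p: state j -> j+1 at rate (4-j)*alpha, j -> j-1 at rate j*beta."""
    Q = np.zeros((5, 5))
    for j in range(5):
        if j < 4:
            Q[j + 1, j] += (4 - j) * alpha
            Q[j, j]     -= (4 - j) * alpha
        if j > 0:
            Q[j - 1, j] += j * beta
            Q[j, j]     -= j * beta
    return Q

alpha, beta = 0.12, 0.08              # hypothetical gate rates (1/ms)
Q = k_channel_Q(alpha, beta)

lam, R = np.linalg.eig(Q)             # eigenvalues and right eigenvectors
order = np.argsort(lam)               # most negative first, eigenvalue 0 last
lam, R = lam[order], R[:, order]
L = np.linalg.inv(R)                  # rows = dual left eigenvectors

taus = -1.0 / lam[:-1]                # time constants: 1/(n*(alpha+beta)), n = 4,...,1
print("time constants:", np.round(taus.real, 4))

pbar = R[:, -1].real                  # stationary vector (eigenvalue 0)
pbar /= pbar.sum()
g = np.array([0, 0, 0, 0, 1.0])       # hypothetical conductance observable: fully open state

# Lorentzian weights c_n = (g . r_n)(l_n . diag(pbar) g): a function of the eigenvectors
D = np.diag(pbar)
c = np.array([(g @ R[:, n]) * (L[n, :] @ D @ g) for n in range(4)]).real
print("Lorentzian weights:", np.round(c, 6))
```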

1.3  Ion Channel Kinetics, Entropy, and Quantum Mechanics

The reflections of section 1.2 suggest the possibility of a new type of ion channel noise analysis, solidly based on the reliable kinetic eigenvalues yet recognizing that the choice of eigenvectors radically affects the resulting statistical characteristics. In particular, it may be possible to reconcile Hill and Chen's (1972) negative conclusions concerning the possibility of generating 1/f^θ-type noise (see section 1.1) by known kinetic mechanisms with prima facie contradictory results, such as Diba et al.'s (2004), which demonstrate that properly weighted populations of channels can do so.

Our examination of these possibilities produced three key results:

  1. Under commonly occurring hypotheses (see section 3.3, definition 10), applicable to the standard Hodgkin-Huxley kinetic matrices, there exists an oblique rotation of the kinetic eigenvectors that sends the initial autocovariance weights c_n (see equation 1.8) to any predetermined final weights c′_n while simultaneously preserving the time constants τ_n (see section 3.3, theorem 19).

  2. Weights of the general form c_n ∝ e^(−θE_n/kT), where E_n is an activation energy, will, with appropriate definitions, maximize channel entropy, where θ is a Lagrange multiplier for an average activation energy constraint (see section 3.3, theorem 17).

  3. Maximum entropy weights will generate PSDs that are approximately of the form 1/f^θ in the middle frequencies (see equation 2.11).

The obvious question raised by these results is whether there is a biophysically plausible mechanism that could give rise to eigenvector rotations of kinetic matrices. There are very few degrees of freedom for such rotations in the structure of the classical Markov ion channel kinetic models (see section C.3), especially in light of the "Markovian" qualification discussed in section 3.3, remark 21. On the other hand, rotations of eigenvectors are the very essence of quantum mechanics (QM) (see section D.4). In particular, Schrödinger's equation (see equations D.1 and D.7), as well as measurements of quantum systems (see section D.6, definition 100), represent rotations of a quantum state matrix (see section D.2, definition 89).

Definition 4. The Quantum Dividend.
Foreshadowing a great deal of subsequent development, the oblique rotation we use (see section B.1, example 37) for result 1 would send matrices M of parameters, such as moments of channel conductances, to U M U*, where * denotes the matrix adjoint (see equation B.1), an operation that preserves the eigenvalues of M but rotates its eigenvectors. As we mentioned, there is little classical justification for this operation. However, in QM, expected values are determined by the quantum state matrix ρ via the trace operation (see section B.3, definition 44). For example, the expected value of M, in quantum state ρ, is tr(Mρ) (see section D.3, corollary 93). But the properties of trace (see section B.3, remark 46) imply

tr((U M U*) ρ) = tr(M (U* ρ U)),
1.10

which moves the rotation away from the parameters and onto the quantum state. Thus, insofar as kinetic properties such as autocorrelations can be expressed as expected values, we can model eigenvector rotations by letting the neurophysical system adjust to a new quantum state. In other words, the same kinetic matrix and parameter matrix can be made to yield 1/f^θ-type or any other Lorentzian noise by changing the physical context. This is the quantum dividend obtained through the introduction of QM techniques into membrane noise studies. (Section 4.4 gives a numerical example.)
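Equation 1.10 is a pure trace identity and is easy to check numerically. The following sketch uses random Hermitian M, unitary U, and density matrix ρ (nothing membrane-specific is assumed) and confirms that rotating the parameter matrix or counter-rotating the state gives the same expected value.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
M = A + A.conj().T                                   # a "parameter" operator (Hermitian)
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))  # unitary

B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = B @ B.conj().T                                 # a density matrix: PSD, Hermitian,
rho /= np.trace(rho)                                 # trace 1

lhs = np.trace(U @ M @ U.conj().T @ rho)             # rotate the parameter matrix
rhs = np.trace(M @ (U.conj().T @ rho @ U))           # rotate the quantum state instead
print(abs(lhs - rhs))                                # ~1e-15: equation 1.10 holds
```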

In addition, there is solid laboratory evidence (Bernèche & Roux, 2001; Morais-Cabral, Zhou, & MacKinnon, 2001; Zhou, Morais-Cabral, Kaufman, & MacKinnon, 2001) that for certain experiments, an ion channel's state should be regarded as a quantum superposition (see section D.2, definition 90) of its basic protein conformations. For example, rather than each of the four subunits of a K+ channel being in either a permissive or a nonpermissive conformation, each must be regarded as being in both simultaneously, like four Schrödinger cats (see section D.6, definition 103). Remarkably, our theoretical analysis of 1/f^θ-type noise independently led us to precisely the same conclusion (see section 4).

However, in proposing a QM mechanism for ion channel kinetics, we must be sure to preserve classical noise theory as a special case. We met this requirement through the development of what we call quantum simplicial processes (QSPs) (see section C.4, definition 64) and the hidden quantum activation-measurement (HQAM) cycle (see sections 4 and C.3.4, definition 118; see also Figure 7 for a graphical description). Combined with the previous results, this yielded an additional important one:

  4. There is a biophysically plausible, quantum-based mechanism that, in well-attested cases such as Hodgkin-Huxley noise, is indistinguishable from the standard HMM model yet also supports time-constant-preserving rotations to maximum entropy, 1/f^θ-type noise spectra.

1.4  Ion Flux

Another consideration, apart from noise models and the superposition evidence, recommends QM as a latent foundation of membrane kinetics.

The flux of ions through a multichannel membrane (see Figure 7, which is discussed in section 4.2), or even through a patch-clamped single channel (Sakmann & Neher, 1983), is sufficiently high to make it impossible, using ordinary laboratory equipment, to detect single ions traveling between the intra- and extracellular environments. For example, single-channel, valence-1 detection, in the best possible circumstances (Koch, 1999), would require time resolutions well beyond those of typical recording instruments. Therefore, in practice, all we can say about an individual ion that contributed to the observed current fluctuation is that it traversed the multichannel membrane, but not when, or which individual channel allowed it through, or the physical conformation of that channel. (Even channels with only a single official open conformation may allow a certain amount of leakage, especially when quantum tunneling is taken into account; see "sputtering" in section A.2, definition 29.)

The question is whether the statistics of these ions, separated by a multichannel membrane, are best compared to

  • Classical salt grains slipping through the cap of a salt shaker or

  • Quantum mechanical ions diffusing along a potential through a monomolecular layer.

In the first case, we can be certain that every escaped grain will pass through one, and only one, hole in the cap. But with the second model, quantum interference (see section D.6, definition 103) may occur, in which case every ion must be regarded as traversing all possible paths through the layer simultaneously.

Of course, neurologically important ions such as K+, Na+, Ca2+, and Cl− are very heavy, with atomic weights around 39, 23, 40, and 35.5, respectively, so we expect quantum effects to be much less pronounced than with light ions (i.e., protons) (see section 5.4). However, the issue is not whether QM plays some role in active membranes; physical law guarantees that it must do so at some scale. The issue is to characterize those scales and contexts in which QM thinking is required for the estimation of neurological statistics. It is our contention that membrane noise may be such a scale and context. Specifically, we posit the following (see section 4.2, definition 22):

Quantum Membrane Hypothesis. Macroscopic conductance experiments on an active membrane constitute quantum mechanical measurements but not quantum mechanical observations of its embedded ion channels.

The remainder of this letter explores the meaning (see section 4.2) and consequences (see section 5) of this hypothesis.

1.5  Scope

For clarity, it is important to identify several research areas not addressed by this letter.

1.5.1  Absolute Models of Ion Channels

Distinctions must be made between what we term absolute, phenomenological, and structural models of ion channels.

An absolute (Hille, 2001) model is a biophysically accurate mathematical representation of the protein structure, forces, and dynamics of an ion channel capable of predicting the numerical values of parameters such as kinetic rate constants. There has been enormous growth in such detailed biophysical knowledge (Levitt, 1978a, 1978b; Cherstvy, 2006; Babakhani, Gorfe, Gullingsrud, Kim, & McCammon, 2007), continuing to the present (Chipot & Comer, 2016). These models can provide, for instance, the free energy values needed for Eyring rate (Woodbury, 1969; Hille, 2001) predictions of ion transitions through channels.

A phenomenological model simply predicts measured values. The classic example of a pure phenomenological model is Hodgkin and Huxley's α–β rate functions (see equation 1.4), which, so far as we know, have no biophysical foundation other than their ability to fit the experimental data.

In contrast to these, a structural model is a simplified, hypothetical mechanism proposed as a tentative explanation of experimental observations. Continuing the previous example, Hodgkin and Huxley's hypothetical four mobile, electrically active "particles" as the mechanism of measured K+-channel kinetics was an extremely useful structural model, some components of which have stood the test of time (e.g., the number "four"), while others have been discarded (e.g., the "particles").

Our quantum entropy ion channel mechanism is thus a structural model, intended only to explain the phenomena for which it was defined: the ubiquity and statistical characteristics of 1/f^θ-type noise in neurological signals.

1.5.2  Quantum Theories of the Brain

Claims are made regularly concerning a purported role for QM in higher brain functions such as memory, behavior, and even consciousness (Penrose, 1987; Penrose & Hameroff, 2011). Some of these are of scientific interest and may even point to new physical principles (Bohm & Hiley, 1993). Unfortunately, in spite of their obvious appeal, most such theories seem to have little neuroscientific merit (Tegmark, 2000; Koch & Hepp, 2006).

However, our use of QM as a structural model for low-level ion channel statistics has nothing in common with these speculations. Rather, it is simply a framework for understanding decades of 1/f^θ-type noise measurements from channels and channel-containing membranes, the apparent inability to explain this noise using classical HMM kinetics (Hill & Chen, 1972), as well as the highly suggestive experimental results mentioned previously, which seem to indicate the presence of quantum-superposed channel states in nature. (See section 5.5 for further discussion of neurophysical modeling.)

1.5.3  Noise Scales

We note that except for a brief discussion of our practical EEG BCI results using generalized van der Ziel-McWhorter noise (see section 2, definition 8), a member of the Lorentzian family (see definition 6), all of our detailed techniques are designed to account for noise at the membrane scale or smaller. Certainly there is no reason to expect that a single noise model will be appropriate from the ion channel scale up through EEG. However, the Lorentzian family has the highly suggestive property of complete scalability; it is clear that as with gaussian noise, the sum of any number of independent Lorentzian noises is also a Lorentzian noise. Hence, there is the possibility that the cumulative effect of billions of independent Lorentzian membrane noise sources would be Lorentzian noise in the background EEG chatter. Nevertheless, that investigation is left to the future.

1.6  Results and Organization

The remaining sections of this letter present the content of our results. To avoid long, technical digressions, most of the detailed background concepts and proofs are presented in appendixes. Appendixes B and D are primarily summaries of existing knowledge. However, the others represent original research such as detailed Lorentzian covariance calculations, the definition of QSPs, and detailed proofs. They should be regarded as essential components of this letter.

  • Section 2 expands the discussion in section 1.1. Section 2.1 defines the Lorentzian autocovariance and presents several important examples. Section 2.2 discusses a prototypical Lorentzian noise, with a 1/f^θ-type spectrum, that we have used with great success for improving statistical detection in BCI algorithms. Section 2.3 establishes the heuristic linkage between GVZM noise, Lorentzian weights, and entropy maximization as a prototype for subsequent detailed analysis.

  • Section 3 is one of the most significant sections of this letter because it presents the conceptual framework for QM membranes. Section 3.1 formalizes the prototype Arrhenius-based argument of section 2.3 by defining a QM translation of the standard detailed balance principle. Using this translation, the conformation energy operator for single and multiple ion channel membranes is defined. We prove that the Lorentzian weights maximize membrane entropy under an activation energy constraint. Section 3.2 presents a detailed example of spectral exponent calculation using a two-channel membrane (see Figure 6). Section 3.3 uses the covariance formulas from appendix A and the kinetic inner product to rotate any initial Lorentzian covariance weights to any preassigned list of final weights; in particular, to the maximum entropy weights of section 3.1.

  • Section 4 demonstrates that the rotation to the maximum entropy weights of section 3.1, proven to be possible by section 3.3, theorem 19, is biophysically plausible. In combination with the stochastic concepts of appendix C, it presents a mechanism by which a hidden QM layer can give rise to a manifest process in our measuring instruments (such as current meters and so forth) that could appear to have been generated by a classical, discrete-state Markov layer. Section 4.1 presents a brief overview, discussed in more detail in section D.1, of the general philosophy of QM needed to model membrane stochastics. Section 4.2 consists of a detailed explanation of Figure 7. Specifically, we distinguish channel state measurement, which is feasible using macroscopic instruments, from channel state observation, which is not. Section 4.3 works through a detailed, numerical example of the process described in section 4.2 in order to clarify the concepts. Section 4.4 works through a similar example demonstrating that rotations of the quantum state can completely alter a membrane conductance process into any form we want without affecting the kinetic matrix or the conductance parameters.

  • Section 5 is a reflection on these results, with a few thoughts on future directions.

The appendixes contain material indispensable for a thorough grasp of the reasoning:
  • Appendix A presents definitions related to Lorentzian noise, as well as a valid derivation of the Lorentzian autocovariance formulas (see theorem 26) based on the new concepts of hidden simplicial and kinetic models (see definitions 54 and 61 in sections C.2 and C.3, respectively).

  • Appendix B presents all the required linear algebra background, including the key definitions of the trace operators (see definition 44) and the tensor product formalism (see section B.4) which are used throughout the letter. In addition, lemma 123 (whose proof is our own) is the basic technical result justifying kinetic rotations.

  • Appendix C defines hidden kinetic models (HKMs) (see definition 61) as the common ancestor of both HMMs (see definition 62) and our hidden quantum models (HQMs; see definition 69), allowing the covariance formulas of appendix A to apply to both descendants. This proves the indiscernibility of the competing models at the macroscopic level.

  • Appendix D is a brief overview of the QM concepts needed for the QSPs (see definition 64) of appendix C. As pointed out in section 4.1, we require a QM formalism that is significantly more abstract than the more commonly known Schrödinger wave function approach.

  • Appendix E is the keystone of this letter and is also, in many ways, the most complex part. It assembles all the intricate pieces together by formalizing the activated-measurement cycle described previously. It shows how to convert classical Markov kinetics into quantum activated measurements (QAMs) and constructs the required kinetic matrix rotations. The mathematics is just a precise and slightly more general translation of the procedures illustrated by the numerical examples in sections 4.3 and 4.4.

  • Appendix F contains proofs of various results, including the important maximum entropy principle in section 3.1, theorem 17, and the existence of kinetic rotations in section 3.3, theorem 19.

  • Appendix G contains assorted technical conventions, symbols and notation, and abbreviations and acronyms, which are needed especially for appendix C.

2  The Lorentzian Family of Noises

2.1  Definitions and Examples

Definition 5. Lorentzian Noise.
A Lorentzian noise (see Figure 4) is any stationary, mean-zero, stochastic process with autocovariance (Brockwell & Davis, 1991),

R(t) = P_0 δ(t) + ∫_X c(x) e^(−|t|/τ(x)) dμ(x),
2.1

at lag t, where (X, μ) is a positive measure space (Royden, 1968), P_0 and c(x) are nonnegative constants with the dimension of power density, and δ is the Dirac delta function (see appendix G). In the prototypical situation in example 126, the time constants τ(x) are the reciprocals of the nonzero eigenvalues of an associated kinetic matrix (see definition 1 in section 1.2). Equivalently, the Lorentzian family can be defined as any stationary, mean-zero process with PSD of the form

S(f) = P_0 + ∫_X c(x) 2τ(x) / (1 + (2πf τ(x))^2) dμ(x).
2.2
Figure 4:

Periodogram of simulation of GVZM-type Lorentzian noise.


Note that the Lorentzian family is closed under the addition of independent sources, rescaling by a power constant, and adding a flat noise floor of power P_0 (the between- and within-state variances, respectively; see definition 27 in section A.2).

Example 1.

Telegraph Waves. A Poisson telegraph wave consists of a ±1-valued stochastic process with exponentially distributed sign reversals. If the expected waiting time between reversals is τ, then this process has autocovariance e^(−2|t|/τ). Thus, a weighted sum of independent Poisson telegraph waves is a Lorentzian noise.
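A minimal simulation sketch of example 1 (NumPy; the mean waiting time, grid step, and duration are arbitrary choices) compares the sample autocovariance of a ±1 Poisson telegraph wave with the single-term Lorentzian e^(−2|t|/τ).

```python
import numpy as np

rng = np.random.default_rng(1)
tau, dt, T = 2.0, 0.05, 20000.0                # mean waiting time, grid step, duration

waits = rng.exponential(tau, size=int(2 * T / tau))   # waiting times between reversals
switch_times = np.cumsum(waits)

t = np.arange(0.0, T, dt)
flips = np.searchsorted(switch_times, t)       # reversals that occurred before each t
x = np.where(flips % 2 == 0, 1.0, -1.0)        # the +/-1 telegraph sample path

for lag in [0, 10, 20, 40, 80]:                # sample autocovariance vs. exp(-2|t|/tau)
    emp = np.mean(x[: len(x) - lag] * x[lag:]) if lag else np.mean(x * x)
    print(f"lag {lag * dt:5.2f}: sample {emp:+.3f}  theory {np.exp(-2 * lag * dt / tau):+.3f}")
```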

Example 2.

Hidden-Layer Ion Channel Models. As mentioned in equation 1.8, and subsequently examined in detail (in definition 62 and example 63 in section C.3), standard kinetic models of ion channels give rise to HMMs that generate Lorentzian noise processes for which the time constants τ_n are the reciprocals of the nonzero eigenvalues of the kinetic matrix. We will prove this for a general class of noise sources we call hidden kinetic models (in section C.3, definition 61), of which both HMMs and our newly defined hidden quantum activation models (see definition 118 in section E) are examples.

Example 3.
Gaussian Lorentzian Noise. Since stationary, mean-zero, gaussian processes are uniquely defined by their autocovariance (Brockwell & Davis, 1991), a one-term gaussian Lorentzian noise with autocovariance c e^(−|t|/τ) must satisfy the continuous-time equation11 (Brockwell & Davis, 1991)

dX/dt = −(1/τ) X(t) + √(2c/τ) W(t),
2.3

where W is mean-0, white gaussian noise of variance 1. Equation 2.3 has the unique (up to probability) formal solution

X(t) = √(2c/τ) ∫_(−∞)^t e^(−(t−s)/τ) W(s) ds.
2.4

Thus, with a collection W_1, ..., W_N of independent gaussian white noises, the expression

X(t) = Σ_n √(2c_n/τ_n) ∫_(−∞)^t e^(−(t−s)/τ_n) W_n(s) ds
2.5
has formal autocovariance given by equation 2.1 and is the unique Lorentzian noise with this property.
Thus, every gaussian Lorentzian noise can be approximated by a weighted sum of discrete-time (AR(1)) processes (Paris et al., 2017),

X_n[k+1] = e^(−Δ/τ_n) X_n[k] + √(c_n (1 − e^(−2Δ/τ_n))) ε_n[k],

with sampling interval Δ and independent standard gaussian variables ε_n[k], together with a careful double limiting process (see Figure 4). Section 2.2 provides a detailed example of the gaussian Lorentzian family used as a statistical model of EEG noise.
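The discrete-time approximation can be sketched as follows (NumPy; the time constants, weights, and step size are hypothetical, and the exact-step AR(1) discretization is one standard choice rather than necessarily the construction used by Paris et al., 2017). Each AR(1) term has stationary autocovariance c_n e^(−|t|/τ_n), so the independent sum approximates a gaussian Lorentzian noise.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps = 0.01, 200_000
taus = np.array([0.05, 0.2, 1.0, 5.0])          # hypothetical time constants
cs   = np.array([1.0,  0.7, 0.4, 0.2])          # hypothetical Lorentzian weights

phi = np.exp(-dt / taus)                        # AR(1) coefficients (exact OU step)
sig = np.sqrt(cs * (1.0 - phi ** 2))            # innovation std giving variance c_n
x_sub = np.zeros(len(taus))
x = np.empty(n_steps)
for k in range(n_steps):
    x_sub = phi * x_sub + sig * rng.normal(size=len(taus))
    x[k] = x_sub.sum()                          # the gaussian Lorentzian sample

for lag in [0, 5, 20, 100, 500]:                # sample vs. sum_n c_n exp(-|t|/tau_n)
    emp = np.mean(x[: n_steps - lag] * x[lag:]) - np.mean(x) ** 2
    theo = np.sum(cs * np.exp(-lag * dt / taus))
    print(f"lag {lag * dt:5.2f}: sample {emp:6.3f}  theory {theo:6.3f}")
```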

2.2  Generalized van der Ziel–McWhorter Noise

In this section, we introduce a very useful type of Lorentzian noise that formed the conceptual starting point for many of our subsequent theoretical developments.

In the 1950s, McWhorter (1957) proposed a structural model, subsequently abstracted by van der Ziel (1970), to explain 1/f-type noise in semiconductors. McWhorter suggested that this noise was generated when semiconductor electrons absorbed quantized electromagnetic energy E, then held it for an average waiting time

τ(E) = τ_0 e^(E/kT)
2.6

before reemission, where k is Boltzmann's constant and T is absolute temperature. Assuming the electron processes are independent and the occupation probability is uniform for available energies in an interval E_1 ≤ E ≤ E_2, the resulting noise has autocovariance

R(t) ∝ ∫_(E_1)^(E_2) e^(−|t|/τ(E)) dE

and PSD

S(f) ∝ (1/f) [arctan(2πf τ_2) − arctan(2πf τ_1)],
2.7

where τ_i = τ(E_i), i = 1, 2. It is easy to see that S(f) transitions smoothly from a constant, for small f, to 1/f^2, for large f, while S(f) ∼ 1/f, roughly, for 1/τ_2 ≪ 2πf ≪ 1/τ_1.12

We generalized the van der Ziel–McWhorter (VZM) mechanism to the following very useful family of noises (Paris, 2016; Paris et al., 2017):

Definition 6.  Multispecies Generalized van der Ziel–McWhorter Noise.
A multispecies GVZM (mGVZM) process is any stationary, zero-mean, process whose autocovariance function is of the form
formula
2.8
where τ_s(x) are time-constant functions, one for each of S species of channel, θ_s are the constant spectral exponents, w_s are weighting constants, and (X, μ) is a "neurologically relevant" measure space (Royden, 1968).

The simplest example of mGVZM noise is the one-species GVZM form:

Definition 7. Generalized van der Ziel–McWhorter Noise.
formula
2.9
which (for appropriate values of θ) possesses a roughly 1/f^θ-type PSD:
formula
2.10
In these cases, the neurological measure space is a closed interval of time constants with the ordinary Riemann-Lebesgue measure (Royden, 1968), and the time constant function is the identity τ(x) = x. GVZM noise reduces to VZM noise when θ = 1.

Obviously, the GVZM spectrum is a Lorentzian spectrum. (See Paris et al., 2017, for background and applications of GVZM noise to EEG-level algorithms.)

2.3  GVZM and Arrhenius Rates

This section presents an informal argument showing that maximizing entropy can lead to VZM and GVZM noise.

GVZM noise can be connected to ion channel and chemical kinetics through the Arrhenius reaction rate formula,
r = ρ_0 e^(−E_a/kT),
2.11

where r is a chemical reaction rate, E_a is an activation energy, and ρ_0 is an activation rate constant associated with the system. According to Hille (2001), the idea of such a relationship between the chemical reaction rate and activation energy originated with S. Arrhenius in the nineteenth century. In the 1930s, H. Eyring began to develop absolute (see section 1.5) kinetic rate models that, in effect, sought to determine the activation rate constants in particular systems and were later applied directly to ion channel theory (Woodbury, 1969).

Note that when McWhorter's waiting time for 1/f-type semiconductor noise (see section 2.2) is reexpressed as the rate of energy absorption/emission, an Arrhenius-type formula results from equation 2.6. The close association between Arrhenius rates and GVZM noise (see definition 8) can be made plausible by the following critical heuristic argument:

Let E be any function with dimension energy, defined over a time-constant interval τ_1 ≤ τ ≤ τ_2. A standard proof shows (Mackey, 1992) that the probability distribution P on this interval, which maximizes entropy under the expected-energy constraint ⟨E⟩ = ε, for a fixed value ε, satisfies P ∝ e^(−θE/kT) for some dimensionless Lagrange multiplier θ. But if E is related to τ through the Arrhenius formula via the reciprocal rates 1/τ = ρ_0 e^(−E/kT), then P is proportional to a power of τ determined by θ. Assuming the τ's parameterize a collection of independent Poisson telegraph-type waves (see equation 2.1), the resulting process will be stationary with autocovariance

R(t) ∝ ∫_(τ_1)^(τ_2) P(τ) e^(−|t|/τ) dτ,

which is precisely the GVZM 1/f^θ-type autocovariance formula (see equation 2.9).

In the following sections, we show how this informal argument can be made precise in the case of discrete-state Lorentzian systems. Note, however, that, as discussed in section 2.1, the discrete case contains the general case, at least to an arbitrary degree of approximation.
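The heuristic can also be checked numerically. The sketch below (NumPy) uses a discrete bank of Lorentzians whose weights follow a power law in the time constant—one concrete way of realizing an Arrhenius/maximum-entropy weighting; the exponent convention here is ours, and the exact GVZM normalization of equation 2.9 is not reproduced—and estimates the log-log slope of the summed PSD in the middle frequencies, which comes out close to −θ.

```python
import numpy as np

theta = 1.5                                     # target spectral exponent
taus = np.logspace(-3, 3, 200)                  # time constants spanning six decades
w = taus ** (theta - 1.0)                       # power-law weights on a log-spaced grid

def psd(f):
    """S(f) = sum_k w_k * 2*tau_k / (1 + (2*pi*f*tau_k)^2), a Lorentzian mixture."""
    f = np.asarray(f, dtype=float)[:, None]
    return np.sum(w * 2.0 * taus / (1.0 + (2.0 * np.pi * f * taus) ** 2), axis=1)

f_mid = np.logspace(-1, 1, 50)                  # frequencies well inside the band
slope = np.polyfit(np.log10(f_mid), np.log10(psd(f_mid)), 1)[0]
print(f"midband log-log slope: {slope:.3f} (expected about {-theta})")
```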

3  Detailed Balance and Quantum Membranes

3.1  Maximum Entropy Lorentzians

We now formalize the heuristic argument of section 2.3 connecting GVZM noise to maximum-entropy Lorentzians through quantum membranes. The most important new concepts are:

  1. The QM translation (definition 12) of kinetics

  2. The conformation energy operator (definition 14)

  3. The Arrhenius membrane model (definition 15), which connects the conformation energy to the kinetic matrix

  4. The spectral exponent function (theorem 17), which maximizes entropy under constrained conformation energy

Note that we require the concepts of adjoints with respect to an inner product from section B.1 and QM operators from appendix D.2.

Definition 8. Membranes.

By the term membrane, we mean a system of one or more ion channels, the entire system governed by a single kinetic equation (see equation 1.6). Concrete examples 129 and 133 appear in section 3.2.

Definition 9. Strong Detailed Balance.
The kinetic matrix Q (see definition 1 in section 1.2) satisfies the principle of detailed balances13 (Hille, 2001) if

Q diag(p̄) = diag(p̄) Q^T,
3.1

where p̄ is a stable probability vector of Q and diag(·) denotes the vector-to-diagonal matrix constructor (see definition 24 in appendix A). We say Q satisfies strong detailed balances or is strongly balanced if, in addition to equation 3.1, Q is nondegenerate and all entries of p̄ are strictly positive. A strongly balanced membrane is one with a strongly balanced kinetic matrix.
Example 4.

The standard K+ matrix, equation 1.7, as well as the related standard Na+ matrix, equation 3.4 (Hille, 2001), are both strongly balanced, as can be verified by calculation.

Definition 10. Kinetic Inner Product.

The key observation is that the strong balance condition of definition 10 implies that the matrix diag(p̄)^(−1) exists and can be considered the kernel (see section B.1) of a non-Euclidean inner product on R^N. For instance, Figure 5 shows an example of a rotation that is orthogonal with respect to a kinetic kernel but oblique with respect to the Euclidean kernel I.

Figure 5:

Typical kinetic rotation corresponding to a two-state HMM with stable probability vector p̄.


Definition 11. The QM Translation of Detailed Balance.

The strong detailed balance conditions, common in classical chemical kinetics, actually are most revealing when reexpressed in the language of QM (see section D.2). Thus:

  • The vector space R^N, containing the probability values of the kinetic equation 1.6, is a Hilbert space H, which is the configuration space of the quantum membrane.

  • The strongly balanced kinetic matrix Q is a self-adjoint operator on H with respect to the kinetic inner product (see section B.1).

  • p̄ is a unit vector in H, whose kinetic-adjoint "dual linear functional" is p̄*. This means the vector p̄ is a "wave function" corresponding to the "pure state" p̄ p̄*.

In this way, the standard classical kinetic principle of detailed balances (alt. Markov reversibility) induces an extremely natural translation of classical kinetics directly into quantum language. Obviously, we do not regard this as a happy accident but as an observation that demands full investigation (see section 5). We always model strongly balanced membranes as quantum systems in this manner.
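A small numerical check of this translation (NumPy), under the explicit assumption—ours, made for the sketch—that the kinetic kernel is diag(p̄)^(−1): for a strongly balanced kinetic matrix, p̄ has unit length in that inner product, and Q is self-adjoint with respect to it.

```python
import numpy as np

pbar = np.array([0.5, 0.3, 0.2])               # stable probability vector
S = np.array([[0.0, 1.0, 0.4],
              [1.0, 0.0, 2.0],
              [0.4, 2.0, 0.0]])                # symmetric exchange rates
Q = S * pbar[:, None]                          # detailed-balanced rates j -> i
Q -= np.diag(Q.sum(axis=0))                    # columns sum to zero: kinetic matrix

K = np.diag(1.0 / pbar)                        # assumed kinetic kernel diag(pbar)^(-1)

print(np.allclose(Q @ pbar, 0))                # pbar is stable:               True
print(np.isclose(pbar @ K @ pbar, 1.0))        # pbar has unit kinetic length: True
print(np.allclose(Q.T @ K, K @ Q))             # Q is self-adjoint w.r.t. K:   True
```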

Definition 12. Membrane States and Entropy.
A membrane state is a quantum state ρ (see section D.2, definition 89) on the configuration space H; i.e., ρ is a nonnegative definite, self-adjoint operator on H of trace 1 (see section B.1, definition 44). The standard interpretation (see section D.3, definition 92) of ρ is that its eigenvalues form a probability distribution p_1, ..., p_N in some orthonormal basis for H, and thus ρ possesses statistical membrane entropy S(ρ) = −Σ_i p_i log p_i (Mackey, 1992). In operator language (and assuming ρ is positive definite), this becomes the QM expected value (see section D.3, corollary 93),

S(ρ) = −tr(ρ log ρ),

where tr is the trace operator (see section B.1, definition 44; section D.3, corollary 93).
Definition 13. Conformation Energy Operator.
We now seek to associate a conformation energy operator E with the strongly balanced membrane. We require E to be a nonnegative definite, self-adjoint operator with the following interpretation: when the membrane is in quantum state ρ (see section B.2, definition 89), its expected conformational energy is

⟨E⟩ = tr(E ρ).

In particular, since the wave function p̄ represents kinetic equilibrium, we require

E_0 = tr(E p̄ p̄*)

to be the minimum conformational energy; that is, p̄ is both the kinetic and energetic stable probability distribution of channel conformations. We allow the energy operator to depend on absolute temperature T and also on a gating parameter, which may be membrane voltage, concentration, or other control signal.

These requirements are met by the following generalization of the Arrhenius formula:

Definition 14. Arrhenius Membrane Model.
The conformation energy operator E of a membrane with strongly balanced, gated, kinetic matrix Q is defined implicitly by
formula
3.2
where A is the activation energy operator, E_0 is the minimum conformational energy, I is the identity operator, and ρ_0 is a fixed activation rate (see equation 2.11), which we assume is greater than all possible eigenvalues of −Q.14
We can interpret A as the operator such that, when the membrane is in quantum state ρ, the membrane is activated with conformational energy tr(Aρ) above its stable energy E_0. The calculation follows from equation 3.2 as expected; p̄ p̄* is the inactivated quantum state. If −1/τ_n is a nonzero eigenvalue of Q, for n = 1, ..., N − 1, and a_n the corresponding eigenvalue of A, then
formula
showing that equation 3.2 is the correct generalization of the Arrhenius formula, equation 2.11.
Remark 2.

No claim is made about whether the kinetic matrix determines the energy operator or whether the measured kinetic rates are a result of the underlying membrane energetics it summarizes. We regard them as alternative expressions of the same underlying neurophysical reality.

Theorem 1. Maximum Entropy Quantum Membrane States.

Let Q be the kinetic matrix of a strongly balanced membrane with time constants τ_1, ..., τ_(N−1) and gating parameter V. Let ρ_0 be an activation rate such that ρ_0 > 1/τ_n, for n = 1, ..., N − 1. Let E_max be the maximum energy of the Arrhenius energy operator.

  • i. For every value ε, with E_0 ≤ ε ≤ E_max, there is a membrane state with expected conformational energy ε and whose entropy is maximum among all such membrane states.

  • ii. There is a unique maximum entropy quantum state that commutes with the energy operator.

  • iii. There is a real-valued spectral exponent function θ(ε) such that the eigenvalues of the commuting maximum entropy state are proportional to
    e^(−θ(ε) a_n / kT).

The proof is in section F.1.

Remark 3.

Inflection point of θ(ε). Referring to the proof of theorem 17i, the inverse function ε(θ) is sigmoidal, and its inflection point implies that θ(ε) has an inflection point as well (see Figure 6).

3.2  Quantum Membrane Examples

We present a few concrete but simple examples of quantum membranes (see Figure 6). The most important task in modeling a quantum membrane is the design of the configuration space and any constraints on the quantum states. As in physics (Sudbery, 1988), this is as much art as science. Generally we need a vector dimension for every observable we might measure (whether those measurements actually take place or not) and constraints to characterize how those observables may affect one another. We discuss just a few simple examples.

Figure 6:

Maximum entropy spectral exponent θ as a function of normalized mean conformation energy for a two-channel complex at the resting membrane voltage. The solid box is the physiological noise region.


Example 5.

Independent (Conjunctive) Systems. When physical subsystems S_1, ..., S_K (see section D.1), such as channel subunits, with configuration spaces H_1, ..., H_K and dimensions N_1, ..., N_K, respectively, are combined in such a way that all quantum states of the subsystems are simultaneously possible, then the appropriate combined configuration space is the tensor product H_1 ⊗ ⋯ ⊗ H_K (see section B.4), of dimension N_1 N_2 ⋯ N_K. We can regard this as the "and" operation (logical conjunction): the full state consists of the S_1 state and the S_2 state and … and the S_K state, calculated independently.

Example 6.
Independent Kinetic Systems. For instance, suppose we have two independent Markov kinetic systems S_1, S_2, with kinetic matrices Q_1, Q_2 (see section 1.2, definition 1), operating on configuration spaces H_1, H_2, yielding probability solution vectors p_1(t), p_2(t) of the kinetic equation 1.6. Then the combined configuration space is H_1 ⊗ H_2, with combined probability vector p(t) = p_1(t) ⊗ p_2(t). It is easy to see that

dp/dt = (dp_1/dt) ⊗ p_2 + p_1 ⊗ (dp_2/dt) = (Q_1 ⊗ I_2 + I_1 ⊗ Q_2) p,

so the combined kinetic matrix is given by

Q = Q_1 ⊗ I_2 + I_1 ⊗ Q_2,
3.3

where I_1, I_2 are the identity matrices. Moreover, the nonzero eigenvalues (i.e., reciprocal time constants in section 1.2, definition 2) are

−(1/τ_m^(1) + 1/τ_n^(2)),

taken over all index pairs (m, n) that are not both zero (with the convention 1/τ_0^(i) = 0), where τ_m^(1), τ_n^(2) are the subsystem time constants.
For example, the standard Na+ kinetic matrix (Manwani & Koch, 1999a) is

Q_Na = Q_m ⊗ I ⊗ I ⊗ I + I ⊗ Q_m ⊗ I ⊗ I + I ⊗ I ⊗ Q_m ⊗ I + I ⊗ I ⊗ I ⊗ Q_h,

where

Q_m = [ −α_m  β_m ; α_m  −β_m ],   Q_h = [ −α_h  β_h ; α_h  −β_h ]

are the m- and h-subunit kinetic matrices. The nonzero eigenvalues λ_jk, for j = 0, ..., 3 and k = 0, 1 with (j, k) ≠ (0, 0), are given by

λ_jk = −(j/τ_m + k/τ_h),   τ_m = 1/(α_m + β_m),   τ_h = 1/(α_h + β_h).
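A quick numerical check of the tensor-sum construction (NumPy; the subunit rates are hypothetical): the nonzero eigenvalues of Q_1 ⊗ I + I ⊗ Q_2 are exactly the pairwise sums of the subsystem eigenvalues, so the combined time constants combine as stated above.

```python
import numpy as np

def subunit_Q(a, b):
    """Two-state subunit kinetic matrix: opening rate a, closing rate b."""
    return np.array([[-a,  b],
                     [ a, -b]])

Qm = subunit_Q(0.3, 0.1)        # hypothetical m-type subunit rates (1/ms)
Qh = subunit_Q(0.05, 0.02)      # hypothetical h-type subunit rates (1/ms)
I2 = np.eye(2)

Q = np.kron(Qm, I2) + np.kron(I2, Qh)          # combined matrix, as in equation 3.3

lam  = np.sort(np.linalg.eigvals(Q).real)
lam1 = np.sort(np.linalg.eigvals(Qm).real)     # [-(a+b), 0]
lam2 = np.sort(np.linalg.eigvals(Qh).real)
pairwise = np.sort(np.add.outer(lam1, lam2).ravel())
print(np.allclose(lam, pairwise))              # True: eigenvalues add pairwise
```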
Example 7.

Entangled Systems. Interesting situations can arise when restrictions are placed on the quantum states of systems composed of independent subsystems (such as the Na+ channel above). This is because the tensor product (see section B.4) does not consist of just pure products ψ ⊗ φ of wave functions (see appendixes D, including section D.2), but also linear combinations such as ψ_1 ⊗ φ_2 + ψ_2 ⊗ φ_1 (note that we are ignoring normalization). Such wave functions are called entangled because knowing something about the first component instantly provides information about the second.15

For instance, let ψ_1, ..., ψ_4 be the wave functions corresponding to the 4 conformations of the m-subunit complex and φ_1, φ_2 the corresponding wave functions for the h-subunit of the Na+ example above. (The association between quantum states and channel conformations is the subject of section 4.) Suppose ψ_4 and φ_2 are the two open subconformations. Then a wave function such as

ψ_4 ⊗ φ_1 + ψ_1 ⊗ φ_2

is entangled: any observation of the subunits will reveal one subunit open, but the probability is 0 that both are open at the same time. So knowing that the m-subunit is open instantly implies the h-subunit is closed and vice versa. (We are not suggesting real channels behave this way.)

As part of our quantum membrane model, we propose extending the concept of entanglement to include entanglement in time in order to make quantum neurological statistical signal processing possible. (See sections 5.3, C.4 (definition 64), and D.1 (definition 86).)

Example 8.

Exclusive (Disjunctive) Systems. When physical subsystems S_1, ..., S_K, with configuration spaces H_1, ..., H_K and dimensions N_1, ..., N_K, respectively, are combined in such a way that only one subsystem can be operative at any one time, then the appropriate combined configuration space is the direct sum H_1 ⊕ ⋯ ⊕ H_K (see section B.4), of dimension N_1 + N_2 + ⋯ + N_K. We can regard this as the "or" operation (logical disjunction): the full state consists of precisely one of the S_1 states or one of the S_2 states or … or one of the S_K states.

As an artificial but interesting example, suppose the voltage difference is measured between the intracellular environments of two extremely close axons, with microelectrodes near a single K+ channel in one axon and a single Na+ channel in the other. A voltage spike could occur because the K+ channel allowed a charge carrier out of its axon or because the Na+ channel allowed a carrier into the facing axon. Assuming it must be one or the other but not both, we can model this K+/Na+ complex in the direct sum of the two configuration spaces, using the block kinetic matrix constructed with the K+ matrix in the upper left corner and the Na+ matrix in the lower right.16

Imagining a population of these complexes as a source of 1/f^θ-type noise, we can calculate the maximum entropy spectral exponent as a function of the mean conformation energy as defined by section 3.1, theorem 17. This is shown in Figure 6 (the figure's energy scale is dimensionless). It is observed that the function predicts spectral roll-offs in the physiological range for a quite reasonable range of activation energies. Note also the inflection point, as mentioned in section 3.1, remark 18.

Example 9.

"Bosonic" Membranes. Situations can be imagined in which certain membrane configurations are identified. For example, consider a disjunctive membrane, as in example 132, consisting of four two-subunit channels but for which configurations with the same total number of open channels are considered identical. This may appear similar to a classic K+ channel (see section 1.2, example 131), in which the Markov states 0, 1, 2, 3, 4 refer to the number of open subunits, but there is a subtle probabilistic difference. In standard channel statistics, states such as (open, open, closed, closed), (open, closed, open, closed), and so on, though representing the same hidden Markov state, must be counted as distinct for the purposes of calculating Boolean probabilities (e.g., DeFelice, 1981). But quantum mechanics admits the possibility of the complete identification of these formally distinct possibilities so that they represent only a single quantum state, characteristic of the physical particles called bosons (Sudbery, 1988). (See section 4.4 for a numerical example of a simple bosonic membrane.)

3.3  Rotations of Kinetic Lorentzians

In this section, we justify the claim made in section 1.3 about autocovariance and the rotation of kinetic eigenvectors. It applies to any HKM (see section C.3, definition 61), including HMMs and our new quantum channel models.

Theorem 2. Kinetic Rotations.

Let M be an N-state HKM (see section C.3, definition 61) with strongly balanced kinetic matrix Q, time constants τ_1, ..., τ_(N−1), and steady-state probability vector p̄. Assume that M is in equilibrium; i.e., the simplex vector (see section C.2, equation 49) of the hidden process equals p̄. Then:

  • M is second-order stationary.17

  • There are constants P_0 ≥ 0 and weights c_1, ..., c_(N−1) such that the autocovariance can be written in discrete Lorentzian form,
    R(t) = P_0 δ(t) + Σ_(n=1)^(N−1) c_n e^(−|t|/τ_n).
  • Let c′_1, ..., c′_(N−1) be arbitrary nonnegative weights with Σ_n c′_n = Σ_n c_n. Then there is an orthogonal matrix U such that Q′ = U Q U^(−1) is a strongly balanced kinetic matrix with time constants τ_1, ..., τ_(N−1), steady-state probability vector p̄, and such that any equilibrium HKM M′, with kinetic matrix Q′, has autocovariance
    R′(t) = P_0 δ(t) + Σ_(n=1)^(N−1) c′_n e^(−|t|/τ_n).

The proof is in section F.2.
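One way to realize such a rotation numerically is sketched below (NumPy). It assumes, as in our reading of definition 11, that the kinetic kernel is diag(p̄)^(−1): symmetrize Q with diag(p̄)^(±1/2), rotate in the orthogonal complement of the null eigenvector, and undo the symmetrization. The rotated matrix has the same eigenvalues (hence the same time constants) and the same stable vector p̄ as Q, but different eigenvectors and hence different Lorentzian weights; as remark 21 warns, it need not remain Markovian.

```python
import numpy as np

# A strongly balanced three-state kinetic matrix Q with stable vector pbar
pbar = np.array([0.5, 0.3, 0.2])
S = np.array([[0.0, 1.0, 0.4],
              [1.0, 0.0, 2.0],
              [0.4, 2.0, 0.0]])                # symmetric exchange rates
Q = S * pbar[:, None]                          # off-diagonal rate j -> i
Q -= np.diag(Q.sum(axis=0))                    # columns sum to zero

Dh, Dhi = np.diag(np.sqrt(pbar)), np.diag(1.0 / np.sqrt(pbar))
Sym = Dhi @ Q @ Dh                             # symmetric by detailed balance
v0 = np.sqrt(pbar)                             # null eigenvector of Sym

# An ordinary rotation O that fixes v0 (rotates only its orthogonal complement)
u = np.linalg.qr(np.column_stack([v0, np.eye(3)[:, :2]]))[0]
phi = 0.7                                      # arbitrary rotation angle
G = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(phi), -np.sin(phi)],
              [0.0, np.sin(phi),  np.cos(phi)]])
O = u @ G @ u.T

Q_rot = Dh @ (O @ Sym @ O.T) @ Dhi             # the kinetically rotated matrix

print(np.allclose(np.sort(np.linalg.eigvals(Q_rot).real),
                  np.sort(np.linalg.eigvals(Q).real)))   # same time constants
print(np.allclose(Q_rot @ pbar, 0))                      # same stable vector
print(np.round(Q_rot, 3))                                # generally non-Markovian entries
```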

Remark 4.

Lorentzian Weight Uncertainty. The matrix U is a kinetically orthogonal rotation (see section B.1, example 37, and Figure 5), changing the eigenvectors of Q but not its eigenvalues. Thus, as a result of theorem 19, any conclusion based on a particular weight formula such as equation 1.9 can be immediately invalidated simply by rotating the purported kinetic matrix while leaving the reliable, experimentally derived data unchanged. In particular, a conclusion such as Hill and Chen's (1972) rejection of K+ ions as a source of 1/f^θ-type noise (see section 1.1) is meaningless without independent evidence that the particular kinetic matrix they used, equation 1.7, is appropriate in vivo. But so far as we are aware, there is no such evidence.

Remark 5.

Non-Markovian Rotations. In our subsequent use of theorem 19 (see sections 4.4 and appendix E, definition 121), the rotated kinetic matrix will be guaranteed to be Markovian (see definition 3 in section 1.2). In general, however, theorem 19 does not necessarily produce Markovian kinetic matrices, although, as mentioned in remark 67 in section C.4, this may not be problematic for QM applications.

4  Hidden Quantum Activation Models

4.1  The Stochastics of Quantum Membranes

In this section, we outline the scheme leading from quantum membranes to stochastic processes. This will implement the temporal entanglement requirement for autocovariance detailed in definition 64 in section C.4 (see also section 5.3 and definition 86 in section D.1 for discussions).

The ultimate consequence, presented in theorem 119 in appendix E, is the conclusion that these processes are Lorentzian noises indistinguishable from the classical Markov models of channels. This backward compatibility is critical because of the overwhelming experimental evidence, going all the way back to the pioneering work of Hodgkin, Huxley, Katz, and others, demonstrating the applicability of such models to ion channel electrochemistry. Aside from the necessarily abstract quantum language, the channel-level activation-measurement process is actually quite intuitive, as shown in the second column of Figure 7.

Figure 7:

Transformations of quantum membrane states during the activation-measurement-observation cycle. See section 4.2. (Drawing © 2017 Britta Seisums.)


4.2  Activation-Measurement-Observation of Quantum Membranes

As mentioned in section 1.4, to investigate quantum flux models, we studied the consequences of the following.

Definition 15. Quantum Membrane Hypothesis.

Macroscopic conductance experiments on an active membrane constitute quantum mechanical measurements of its Arrhenius conformation energy operator (see definition 15, in section 3.1) but not quantum mechanical observations of its values.

We will explain the meaning of the membrane hypothesis by referring to section 1.4, Figure 7 and the background material of appendix D, especially definition 100 in section D.6, and all of appendix E.

4.2.1  Coherent Membrane Quantum States

The technical definition of the previously mentioned quantum state or density matrix ρ is a nonnegative definite, self-adjoint linear operator of trace 1 (see definition 89 in section D.2). The quantum state ρ is coherent (see definition 95 in section D.5) with the conformation energy operator if and only if it commutes with that operator. In such coherent states, we can regard the membrane as having a definite, but unobserved, conformation energy with certain observational probabilities calculated from ρ (see section D.5). Alternatively, and closer to the real spirit of QM, we can regard this state as the simultaneous superposition or entanglement (see definition 101 in section D.6) of all possible ions, channels, and channel conformations (which are assumed to be in one-to-one correspondence with the energy states), like Schrödinger's famous cat, which is both alive and dead at the same time (see remark 105 in section D.6).

For example, if e_1, ..., e_N is an orthonormal basis of eigenvectors of the energy operator E, so that E e_i = E_i e_i, then the rank-1 projection operator, or eigenstate, e_i e_i* (see definition 88 in section D.2) is a coherent state, and the sum ρ = Σ_i p_i e_i e_i*, where p_i ≥ 0 and Σ_i p_i = 1, is the unique superposed, coherent state in which the membrane energy has the probability p_i of being observed to have value E_i (see definition 92 in section D.3). Here, e_i* is the adjoint (alt. dual) (see equation B.1) of e_i.
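A small concrete illustration (NumPy; the energy levels and occupation probabilities are arbitrary toy values) of the superposed coherent state ρ = Σ_i p_i e_i e_i*: it has trace 1, commutes with the energy operator, and reproduces the observation probabilities p_i.

```python
import numpy as np

rng = np.random.default_rng(3)

levels = np.array([0.0, 0.4, 0.9, 1.5])           # arbitrary conformation energies
U, _ = np.linalg.qr(rng.normal(size=(4, 4)))      # orthonormal eigenvectors e_i
E = U @ np.diag(levels) @ U.T                     # toy conformation energy operator

p = np.array([0.4, 0.3, 0.2, 0.1])                # occupation probabilities
rho = sum(p[i] * np.outer(U[:, i], U[:, i]) for i in range(4))

print(np.isclose(np.trace(rho), 1.0))             # trace 1
print(np.allclose(rho @ E, E @ rho))              # coherent: [rho, E] = 0
print(np.round([U[:, i] @ rho @ U[:, i] for i in range(4)], 3))   # observation probs p_i
print(np.isclose(np.trace(E @ rho), p @ levels))  # expected energy = sum_i p_i E_i
```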

4.2.2  The Activation-Measurement-Observation Cycle

The activation row of Figure 7 shows a process of activation, in which energy (which may be thermal, chemical, or some other form) is absorbed from the environment, raising the expected conformational energy of the membrane. We depict this allegorically as shaking objects in a cup. The two channel shapes are meant to represent different channel species: a K+ channel and an Na+ channel. Each has conformations that are open, closed, or leaky, the last corresponding to sputtering (see definition 29 in section A.2).

We model this activation process as the application of an activator A, which is an operator sending the coherent quantum state ρ to a possibly noncoherent, activated quantum state A ρ A*, where, again, A* is the adjoint operator (see equation B.1). Thus, the expected energy is raised to tr(E A ρ A*). The noncoherence means that the "shaking" membrane can no longer be regarded as having some definite, but unobserved, energy, not even in a superposed sense: all relationship to the energy operator is lost. (Note that A needs to satisfy certain conditions, which are detailed in definition 106 in appendix D.)

The measurement process of the second row in Figure 7 is the most characteristically nonclassical operation in QM (see section D.6) and a continued source of metaphysical debate. As depicted by the now motionless hand, we regard it as the release of the absorbed activation energy by transferring an ion through the membrane and thus contributing to a macroscopic measurement. By a fundamental law of QM (see corollary 104i in section D.6), this measurement will collapse the activated state to a coherent superposition of the eigenstates defined above. Thus, the membrane is restored to a coherent state. However, as discussed in section 1.4 and symbolized by the unlifted cup, we cannot determine any resulting energy level of the individual transferred ion, the particular channel that performed the transfer, or the conformation of the channel when the transfer took place.
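A sketch of this collapse, again in illustrative notation: if $\sigma = A\rho A^{*}$ is the activated state and $\{u_j\}$ the energy eigenbasis, the (nonselective) measurement step produces

\[
\sigma \;\longmapsto\; \sum_j \langle u_j,\, \sigma\, u_j \rangle\, \rho_j ,
\]

that is, the coherent state obtained by keeping only the diagonal of $\sigma$ in the eigenbasis, the inner product being taken with respect to the kinetic kernel.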

The final row of Figure 7, “observation,” would then correspond to determining the final configuration energy, the particular transported ion, the particular channel through which it passed, and the actual channel conformation during the passage (lifting the cup and observing the outcome). The effect would be to further collapse the measurement state to the single eigenstate corresponding to whatever particular conformation was observed. But this is precisely what the membrane hypothesis, definition 22, claims cannot be done. Instead, in our structural (see section 5.1), hidden quantum activation model (see definition 118 in section C.4), we hypothesize that, after waiting in the measurement row for a random time, the membrane will again absorb energy and re-enter the activated state. Thus, an ongoing sequence of QAM cycles is generated.

The indiscernibility theorem 119 in appendix E demonstrates that this quantum process yields noise that is indistinguishable from that produced by the standard kinetic Markov models. In fact, the quantum activator determines a kinetic matrix that, if the hidden process were Markov and governed by that matrix, would generate the same Lorentzian noise. But the difference between these two conceptions is striking. Like the classical salt grains of section 1.4, a Markov channel is always in one, and only one, conformation at each moment, even if it is hidden from observation. But our quantum channel is never in a single conformation. It is either activated, in which case its state is not necessarily even a coherent superposition of conformations, or it has been measured but not observed, in which case it is in all conformations at once.

4.3  An Example of a Quantum Activator

In this section, we formalize the presentation of the QAM cycle by means of a numerical example. The detailed theory is presented in appendix E, but we believe that carefully working through this example will help readers understand the HQAM membrane model better than any abstract discussion.

The defining property of an activator is that it sends coherent quantum states to new quantum states (see lemma 107 in appendix E), though ones that are not required to be coherent. In particular, if we choose an orthonormal basis (to specify what we mean by “coherent”), then we require each activated basis vector to be a unit vector. This will ensure, using the notation of section 4.2, that
formula
which is a defining property of a quantum state (see definition 89 in section D.2). (Note that self-adjointness and nonnegativity follow from the symmetry of conjugation by the activator and its adjoint.)
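In illustrative notation (with $u_j$ the chosen basis vectors and $A$ the activator), the requirement and its consequence can be sketched as

\[
\langle A u_j,\, A u_j \rangle = 1 \quad\Longrightarrow\quad \operatorname{tr}\!\big(A\,\rho_j\,A^{*}\big) = 1 ,
\]

so that each activated eigenstate again has trace 1, as a quantum state must.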
The detailed calculations will depend on the kernel of the inner product (see section B.1), which we want to be representative of the kinetic inner products we have been emphasizing (see definition 11 in section 3.1, Figure 5, and example 37 in section B.1). So we choose
formula
4.1
on the three-state configuration space. This has the same form as a kinetic kernel (see definition 11 in section 3.1) with a stable probability vector corresponding to some strongly balanced kinetic matrix (in fact, we will see subsequently that it can be so interpreted).
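Readers reconstructing the computation may find it helpful to note that, under one common convention (used here purely for illustration; definition 11 fixes the official form), a kernel of this kinetic type is the diagonal matrix of stable probabilities,

\[
W = \operatorname{diag}(\bar p_1, \bar p_2, \bar p_3), \qquad \langle x, y \rangle_W = x^{\mathsf T} W\, y ,
\]

with $\bar p$ an illustrative stable probability vector.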
We use an orthonormal basis parallel to the standard basis
formula
(see definition 32 in section B.1). We have , while , so , , and .

From section B.1, the adjoint of a column vector is its transpose multiplied by the kernel, from which the eigenstates follow directly. Note this implies that the eigenstates are trivial diagonal matrices, each with a single 1 on the diagonal.
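One way to realize the adjoint of section B.1 consistently with the calculations above (illustrative notation, with $W$ the kernel) is

\[
u^{*} = u^{\mathsf T} W, \qquad \rho_j = u_j u_j^{*} = u_j u_j^{\mathsf T} W ,
\]

which, for a diagonal kernel and a basis parallel to the standard basis, reduces to a matrix with a single 1 in the $j$th diagonal entry.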

For our proposed activator, we choose
formula
4.2
whose validity must be checked by examining the norms of the activated basis vectors. To do this, we again use the kernel to calculate
formula
4.3
Using the formula (see section B.1), we thus have
formula
which demonstrates that this choice is an activator for the basis.
Using these calculations and the definition of activation, we can now evaluate the effect of activation on the eigenstates
formula
where the last equation follows from the identities in section B.1. Thus,
formula
The measurement operation projects the activated state back onto the coherent eigenstates, but we have shown that in our particular case each eigenstate has just a single 1 on the diagonal. So the effect of measurement is to reduce a quantum state to its diagonal only:
formula
Note that every measured state has trace 1 and is therefore a quantum state, as required. The absence of off-diagonal entries means it is coherent in the chosen basis, also as required.
We can now calculate the effect of a full QAM cycle on any coherent quantum state. Such a state is a unique superposition of eigenstates, with nonnegative coefficients summing to 1. Substituting the measured images of the eigenstates calculated above, we can find the coefficients of the new decomposition after one cycle. When this is done, we find
formula
so
formula
4.4
That is, during every QAM cycle, the superposition coefficients transform precisely as if they were the discrete-state probabilities of a Markov process determined by the matrix in equation 4.4, which is easily seen to be a valid Markov transition matrix (Lamperti, 1977). In fact, as we foreshadowed, the stable probability vector of this transition matrix is precisely the vector that generated the original kernel. But the QAM process is absolutely not a Markov process. There are no discrete Markov states that are cycling, only simultaneous superpositions of membrane eigenstates.
If we choose some activation rate (i.e., the rate of Poisson-distributed repetitions of the QAM cycle), we can extract a kinetic matrix using the formula that always relates kinetic and Markov matrices. For example, with our chosen rate, we find
formula
4.5
which happens to be the standard three-unit kinetic matrix of equation 1.7.
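To make this bookkeeping concrete, the short sketch below (Python/NumPy) repeats the construction with placeholder inputs: the stable probability vector p_bar, the mixing matrix B used to manufacture an activator, the kernel convention W = diag(p_bar), and the rate nu are illustrative assumptions, not the values used in this section. The sketch builds an activator for the standard-parallel basis, reads off the equivalent Markov matrix from the measured diagonals, and converts it to a kinetic matrix under one common (uniformization) convention.

import numpy as np

# Placeholder values for illustration only; NOT the numbers of section 4.3.
p_bar = np.array([0.25, 0.50, 0.25])   # hypothetical stable probability vector
W = np.diag(p_bar)                     # kinetic kernel, taken here as diag(p_bar)

def w_norm(x):
    """Norm induced by the kinetic inner product <x, y> = x^T W y."""
    return np.sqrt(x @ W @ x)

# W-orthonormal basis parallel to the standard basis: u_j = e_j / sqrt(p_bar_j).
U = np.diag(1.0 / np.sqrt(p_bar))

# Build an activator A by sending each u_j to a W-unit vector.
rng = np.random.default_rng(0)
B = rng.uniform(0.1, 1.0, size=(3, 3))   # arbitrary positive mixing (assumption)
V = np.column_stack([B @ U[:, j] / w_norm(B @ U[:, j]) for j in range(3)])
A = V @ np.linalg.inv(U)                 # so that A u_j = j-th column of V

# One QAM cycle on the eigenstate rho_j = e_j e_j^T:
#   activate: (A u_j)(A u_j)^*, then measure: keep only the diagonal.
# The measured diagonals are the rows of the equivalent Markov matrix M,
# with entries M[j, k] = p_bar[k] * (A u_j)_k ** 2.
M = np.array([[p_bar[k] * (A @ U[:, j])[k] ** 2 for k in range(3)]
              for j in range(3)])
assert np.allclose(M.sum(axis=1), 1.0)   # each row is a probability vector

# Superposition coefficients update as a Markov chain: p_{n+1} = p_n M.
p0 = np.array([1.0, 0.0, 0.0])
p5 = p0 @ np.linalg.matrix_power(M, 5)

# Convert to a kinetic matrix at activation rate nu; K = nu * (M - I) is one
# common uniformization convention (the formula in the text may differ).
nu = 1.0
K = nu * (M - np.eye(3))
print(M, p5, K, sep="\n\n")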

4.4  An Example of a Kinetic Rotation

We remind readers that the strategic purpose of introducing QM into membrane noise studies was to benefit from what we called the quantum dividend (see definition 5 in section 1.3), that is, creating extra degrees of freedom by repositioning rotations from the parameter matrices to the quantum state via trace formulas. An example is presented here.

Suppose we are using the same kinetic matrix as in equation 4.5. Further, suppose that in vitro experiments have determined the discrete-time, mean conductance process to be
formula
where the weights are the step-by-step probabilities produced by the QAM-Markov update, equation 4.4, and the overall scale is the maximum conductance. This could arise, for example, from a three-channel, two-subunit bosonic membrane (see example 133 in section 3.2), each channel contributing its open-channel conductance. (Note that we work in discrete time for simplicity. Section C.5 shows how to extend the QAM process to continuous time by Poisson sampling.)
Recalling the vector-to-diagonal matrix constructor of section A.1, we can write this in QM form by using the in vitro conductance matrix and the quantum state, so that
formula
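In illustrative notation (the diagonal entries $g_1, g_2, g_3$ stand in for whatever per-conformation conductances the example assigns), this reads

\[
\bar g(n) \;=\; \operatorname{tr}\!\big(G_{\text{vitro}}\, \rho_n\big), \qquad G_{\text{vitro}} = \operatorname{diag}(g_1, g_2, g_3),
\]

where $\rho_n$ is the (diagonal) quantum state after $n$ QAM cycles, so the trace is just the probability-weighted sum of the conformation conductances.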
Assume, however, that in vivo noise experiments have revealed that the conductance process would be better modeled by
formula
4.6
with the same probabilities as before. This would be as if the fully closed state 1 and the fully open state 3 had somehow partially exchanged conductance values as a result of the activity of the living tissue. We take it for granted that the laboratory kinetic matrix and conductances are reliable (recall the discussion of Hodgkin-Huxley reliability in section 1.2) and that we are unaware of any mechanism in the living tissue that could alter these parameters. Yet somehow, the conductances (and thus the noise autocovariance derived from them; see theorem 26 in section A.1) appear to have changed from their in vitro to their in vivo values. How can this be explained?
First, note that the matrix
formula
with the in vivo conductances along the main diagonal, is self-adjoint (see section B.1) with respect to the kernel used in section 4.3. Moreover, its eigenvalues are easily calculated to be the laboratory conductances. (The off-diagonal coefficients were chosen to impose these properties. Note also that the off-diagonals have no effect on the trace against a diagonal quantum state.) Therefore, there is an orthogonal rotation with respect to this kernel, corresponding to the eigenvectors of the in vivo matrix (see examples 42 and 43 in section B.2), such that
formula
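A plausible rendering of this diagonalization, in the illustrative notation used earlier (with $G_{\text{vitro}}$ the diagonal laboratory conductance matrix, $G_{\text{vivo}}$ the in vivo matrix, and $R^{*}$ the adjoint with respect to the kinetic kernel), is

\[
G_{\text{vivo}} \;=\; R\, G_{\text{vitro}}\, R^{*}, \qquad R^{*} R = I ,
\]

so that the in vivo conductance operator has exactly the laboratory conductances as its eigenvalues.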
This suggests looking for an activator that is coherent with respect to the eigenvectors of the in vivo conductance matrix and such that equation 4.6 is the HQAM conductance process.
Using the proof (in section F.4) of lemma 115 (in section E.1) and the orthonormal basis of eigenvectors of the in vivo conductance matrix, calculated to be
formula
we choose
formula
(The symbol ≈ means “approximately equal to.”) It is seen that each activated basis vector has unit norm, verifying that this choice is an activator for the basis.
As before, we compute the eigenstates, their activations, and their QAM images
formula
for each of the three basis vectors.
Direct calculation then shows that if the system is started in an initial coherent quantum state, then after repeated QAM cycles, the quantum state will be the corresponding coherent superposition, with probabilities
formula
where the transition matrix is the same Markov matrix as in equation 4.4.
Therefore, the HQAM process is
formula

Thus, both the kinetic matrix and the conductance matrix are unchanged from their laboratory values. Yet the conductance process has been transformed from the in vitro form to the in vivo form by rotating the quantum state. So we can resolve the apparent paradox by postulating that the biophysical membrane is the same in the two situations but that the living tissue is executing a QAM cycle that changes the quantum state.
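The bookkeeping behind this resolution is the elementary trace identity (in the illustrative notation above, with $R$ the kinetic-orthogonal rotation)

\[
\operatorname{tr}\!\big(G_{\text{vitro}}\,(R\,\rho_n\,R^{*})\big) \;=\; \operatorname{tr}\!\big((R^{*} G_{\text{vitro}} R)\,\rho_n\big),
\]

so a rotation applied to the quantum state is statistically indistinguishable from the inverse rotation applied to the conductance operator, while the laboratory parameters themselves are never touched.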

4.5  HQAM Summary

Thus, we have demonstrated a systematic procedure that starts from a strongly balanced kinetic matrix driving an HMM and “quantizes” the stochastics by extracting a quantum activator from it (see lemma 115 in appendix E) and then running the associated HQAM (see definition 118 in appendix E) instead of the HMM (see appendix E for all details).

To cite results from the appendixes without defining the terms or notation here, the joint probability density function (PDF) of a continuous-time HMM has the form
formula
where the various multiway matrices are the hidden Markov PDFs determined by the kinetic matrix (see definition 54 in section C.2). The PDF of an HQAM is given by a tensor product formula
formula
4.7
where the quantum states are determined by the activation process (see definitions 69 in section C.4 and 60 in appendix E). We can then prove theorem 119 in appendix E:
formula
No macroscopic experiments, based solely on statistical properties, can distinguish HMM membranes from HQAM membranes. This applies, in particular, to the standard HMMs for ion channels.

We then went on to demonstrate that for any orthogonal matrix, there is a procedure for creating an HQAM whose quantum states are the correspondingly rotated states, thus exploiting the expected value identities (see definition 5 in section 1.3).

To close the circle, we proved our claim that quantum neurophysics can change the weights of the resulting Lorentzian noise autocovariance (see definition 6 in section 2.1),
formula
into anything we want, including –type weights, without altering the rate constants or the membrane conductance parameters (see theorem 19 in section 3.3).
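For reference, a Lorentzian noise autocovariance of the kind referred to here is a weighted sum of decaying exponentials, written in generic notation (the official weights and symbols are fixed in definition 6) as

\[
C(\tau) \;=\; \sum_{k} c_k\, e^{-|\tau|/\tau_k}, \qquad S(f) \;=\; \sum_k c_k\, \frac{2\tau_k}{1 + (2\pi f \tau_k)^2},
\]

where the time constants $\tau_k$ come from the rate constants, which remain untouched, while the weights $c_k$ are what the quantum-state rotation is free to reshape.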

5  Review and Discussion