Abstract

Different biological processes take different times to be completed, which can also be influenced by many environmental factors. In this work, a realistic definition of nonsynchronized spiking neural P systems (SN P systems, for short) is considered: during the work of an SN P system, the execution times of spiking rules cannot be known exactly (i.e., they are arbitrary). In order to establish robust systems against the environmental factors, a special class of SN P systems, called time-free SN P systems, is introduced, which always produce the same computation result independent of the execution times of the rules. The universality of time-free SN P systems is investigated. It is proved that these P systems with extended rules (several spikes can be produced by a rule) are equivalent to register machines. However, if the number of spikes present in the system is bounded, then the power of time-free SN P systems falls, and in this case, a characterization of semilinear sets of natural numbers is obtained.

1.  Introduction

Membrane computing is one of the recent branches of natural computing. It was initiated by Păun (2000) and developed very rapidly (in 2003, ISI considered membrane computing a “fast emerging research area in computer science”; see http://esi-topics.com). The aim of membrane computing is to abstract computing ideas (e.g., data structures, operations with data, computing models) from the structure and functioning of a single cell or from complexes of cells, such as tissues and organs, including the brain. The obtained models are distributed and parallel computing devices called P systems. This letter deals with a class of neural-like P systems called spiking neural P systems (SN P systems; Ionescu, Păun, & Yokomori, 2006). (For general information in this area see Păun, Rozenberg, & Salomaa, 2010, and for details, see the membrane computing Web site, http://ppage.psystems.eu.)

SN P systems are a class of distributed and parallel computing models inspired by spiking neurons, which are currently much investigated in neural computing (Gerstner & Kistler, 2002; Maass, 2002; Maass & Bishop, 1999). Briefly, an SN P system consists of a set of neurons placed in the nodes of a directed graph, where neurons send signals (spikes, denoted by the symbol a in what follows) along synapses (arcs of the graph). Thus, the architecture of an SN P system is that of a tissue-like P system, with only one kind of object present in the cells. The objects evolve by means of spiking rules, which are of the form E/a^c → a; d, where E is a regular expression over {a} and c, d are natural numbers, c ⩾ 1, d ⩾ 0. In other words, a neuron containing k spikes such that a^k ∈ L(E), k ⩾ c, can consume c spikes and produce one spike after a delay of d steps. This spike is sent to all neurons connected by an outgoing synapse from the neuron where the rule was applied. There are also forgetting rules, of the form a^s → λ, which means that s ⩾ 1 spikes are forgotten if the neuron contains exactly s spikes. The system works in a synchronized manner: in each time unit, the rule to be applied in each neuron is nondeterministically chosen, and a chosen rule must be applied in each neuron with applicable rules. The work of the system is sequential in each neuron: at most one rule is applied in each neuron. One of the neurons is considered to be the output neuron, and its spikes are also sent to the environment. The result of a computation is defined as the total number of spikes sent into the environment by the output neuron.
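As a concrete illustration (not part of the original formalism), the synchronized semantics just described can be sketched in a few lines of Python. The rule encoding and the toy two-neuron system below are our own assumptions: delays are taken as d = 0, and applicability of a rule E/a^c → a^p is checked by matching the unary content a^k against E.

```python
import re
import random

class Rule:
    """A spiking rule E/a^c -> a^p; p = 0 models a forgetting rule."""
    def __init__(self, E, c, p):
        self.E, self.c, self.p = E, c, p
    def applicable(self, k):
        return k >= self.c and re.fullmatch(self.E, "a" * k) is not None

def step(spikes, rules, synapses, out_neuron):
    """One synchronized step: every neuron with an applicable rule
    nondeterministically picks one; emitted spikes move instantly.
    Returns the number of spikes sent to the environment."""
    fired = {}
    for i, k in spikes.items():
        enabled = [r for r in rules[i] if r.applicable(k)]
        if enabled:
            fired[i] = random.choice(enabled)
    for i, r in fired.items():
        spikes[i] -= r.c          # consume c spikes in every firing neuron
    emitted = 0
    for i, r in fired.items():
        for j in synapses[i]:     # replicate the p spikes to all targets
            spikes[j] += r.p
        if i == out_neuron:
            emitted += r.p
    return emitted

# Toy system: neuron 1 spikes once into the output neuron 2,
# which forwards the spike to the environment one step later.
spikes = {1: 1, 2: 0}
rules = {1: [Rule("a", 1, 1)], 2: [Rule("a", 1, 1)]}
synapses = {1: [2], 2: []}
total = sum(step(spikes, rules, synapses, out_neuron=2) for _ in range(3))
# total == 1: exactly one spike reaches the environment
```

The nondeterministic choice among enabled rules is the only source of branching; with singleton rule sets, as here, the run is deterministic.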

In standard SN P systems, the synchronization is in general a powerful feature, useful in controlling the work of a computing device (e.g., synchronization plays a crucial role in the proofs of results such as computational completeness). But from both a mathematical and a neurobiological point of view, it is rather natural to consider nonsynchronized systems, where the application of rules is not obligatory. Even if a neuron has a rule enabled in a given time unit, this rule is not obligatorily applied: the neuron may choose to remain unfired, perhaps receiving spikes from the neighboring neurons. If the unapplied rule remains applicable, it may be used later, with no restriction on how long it has remained unapplied. If the new spikes made the rule nonapplicable, then the computation continues in the new circumstances (maybe other rules are enabled now). Such nonsynchronized SN P systems were investigated in Cavaliere et al. (2009), where it was proved that such systems are still universal as generators of sets of natural numbers.

In the nonsynchronized SN P systems introduced in Cavaliere et al. (2009), any neuron in each time unit is free to apply a rule or not even if it is enabled. It is important to point out that when a neuron spikes, its spikes immediately leave the neuron and reach the target neurons simultaneously. That is, the execution of a rule is completed in exactly one time unit (one step). However, this feature is not justified by the corresponding biological reality. In fact, different chemical reactions or complex biological operations usually take different amounts of time to be completed, which can also be influenced by many environmental factors. So in this work, a timed version of SN P systems is introduced. In timed SN P systems, a natural number representing the time of execution of the rule is associated with each rule. In order to capture the class of SN P systems that are robust against the environmental changes that could affect, in an unpredicted manner, the execution times of the rules of the system, time-free SN P systems are also introduced. A time-free SN P system is one that always produces the same computation result, independent of the execution times of the rules.

The computational completeness of time-free SN P systems is investigated. It is proved that these systems are still universal by simulating universal register machines. In the proof of this universality result, a neuron is used to synchronize the work of the other neurons, employing a signaling mechanism to control the functioning of the entire system (this corresponds to synchronizing biological processes by using appropriate biological signals in biological systems).

In the proof of the universality of time-free SN P systems, the systems are assumed to be able to accumulate arbitrarily many spikes inside. To make the systems more realistic, a bound is imposed on the number of spikes present in any neuron along a computation (if a neuron gets more spikes than the given bound, the computation aborts). This restriction diminishes the power of time-free SN P systems; in this case, a characterization of semilinear sets of natural numbers is obtained.

The letter is structured as follows. In section 2, some necessary prerequisites are introduced. In section 3, the definition of time-free SN P systems is given. Two examples of time-free SN P systems are given in section 4. In section 5, it is proved that time-free SN P systems are universal by simulating register machines, while in section 6, a characterization of semilinear sets of natural numbers is given by means of time-free SN P systems with a bounded number of spikes in the neurons. Conclusions and remarks are given in section 7.

2.  Prerequisites

It is useful for readers to have some familiarity with basic elements of language theory (e.g., from Rozenberg & Salomaa, 1997), as well as basic membrane computing (Păun, 2002); a quick introduction to membrane computing can be found in Păun et al. (2010). For updated information about membrane computing, refer to http://ppage.psystems.eu. Here, some necessary prerequisites are introduced.

The set of natural numbers is denoted by ℕ. For an alphabet V, let V* denote the set of all finite strings over V, with the empty string denoted by λ. The set of all nonempty strings over V is denoted by V+. When V = {a} is a singleton, we write simply a* and a+ instead of {a}*, {a}+.

A regular expression over an alphabet V is defined as follows: (1) λ and each aV are regular expressions; (2) if E1, E2 are regular expressions over V, then (E1)(E2), (E1) ∪ (E2), and (E1)+ are regular expressions over V; and (3) nothing else is a regular expression over V. With each expression E, we associate a language L(E), defined in the following way: (1) L(λ) = {λ} and L(a) = {a}, for all aV, and (2) L((E1) ∪ (E2)) = L(E1) ∪ L(E2), L((E1)(E2)) = L(E1)L(E2), and L((E1)+) = L(E1)+ for all regular expressions E1, E2 over V. Unnecessary parentheses are omitted when writing a regular expression, and (E)+ ∪ {λ} can be written as E*.

By SLIN and NRE we denote the families of semilinear and Turing-computable sets of numbers, respectively. Note that SLIN is the family of length sets of regular languages (those characterized by regular expressions), and NRE is the family of length sets of recursively enumerable languages (those recognized by Turing machines). For example, the language L1 = {a(bb)^n ∣ n ⩾ 0} is regular; it is characterized by the regular expression a(bb)*, and its length set is {2n + 1 ∣ n ⩾ 0}, which belongs to SLIN. The language L2 = {a^{2^n} ∣ n ⩾ 1} is Turing computable but not regular; its length set {2^n ∣ n ⩾ 1} belongs to NRE but not to SLIN. In general, the strict inclusion SLIN ⊂ NRE holds.
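The length-set claim for L1 can be checked mechanically; the following Python snippet (illustrative only) uses the re module to filter words by the expression a(bb)* and collect their lengths.

```python
import re

# L1 = { a(bb)^n | n >= 0 }: its words have odd length 2n + 1.
pattern = re.compile(r"a(bb)*")
words = ["a", "abb", "abbbb", "ab", "bb", "abbb"]
members = [w for w in words if pattern.fullmatch(w)]
lengths = sorted(len(w) for w in members)
# members == ["a", "abb", "abbbb"], lengths == [1, 3, 5]
```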

A register machine is a construct M = (m, H, l1, lL, I), where m is the number of registers, H is the set of instruction labels, l1 is the start label (labeling an ADD instruction), lL is the halt label (assigned to instruction HALT), and I is the set of instructions. Each label from H labels only one instruction from I, thus precisely identifying it. The instructions are of the following forms:

  • li : (ADD(r), lj, lk). Add 1 to register r and then go to one of the instructions with labels lj, lk, nondeterministically chosen.

  • li : (SUB(r), lj, lk). If register r is nonempty, then subtract 1 from it and go to the instruction with label lj; otherwise go to the instruction with label lk.

  • lL : HALT. The halt instruction.

A register machine M generates a set N(M) of numbers in the following way. The machine starts with all registers empty (i.e., storing the number zero); the machine applies the instruction with label l1 and continues to apply instructions as indicated by the labels (and made possible by the contents of the registers). If it reaches the halt instruction, the number n present in the specified register 1 at that time is said to be generated by M. If the computation does not halt, then no number is generated. It is known that register machines generate all sets of numbers that are Turing computable; hence they characterize NRE (see Minsky, 1967).
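The generating mode described above can be sketched as a short Python simulator; the tuple encoding of ADD, SUB, and HALT instructions below is our own illustration, not notation from the text.

```python
import random

def run_register_machine(instructions, start, m, seed=None):
    """Generate one number with a nondeterministic register machine.
    instructions maps a label to ('ADD', r, lj, lk), ('SUB', r, lj, lk),
    or ('HALT',); registers are numbered 1..m and start at zero."""
    rng = random.Random(seed)
    regs = [0] * (m + 1)      # regs[0] is unused
    label = start
    while True:
        op = instructions[label]
        if op[0] == 'HALT':
            return regs[1]    # the result sits in register 1
        _, r, lj, lk = op
        if op[0] == 'ADD':
            regs[r] += 1
            label = rng.choice((lj, lk))
        else:                 # 'SUB'
            if regs[r] > 0:
                regs[r] -= 1
                label = lj
            else:
                label = lk

# A tiny machine generating an arbitrary n >= 1 in register 1:
# l1 adds 1 to register 1 and then jumps back to l1 or on to lh.
M = {'l1': ('ADD', 1, 'l1', 'lh'), 'lh': ('HALT',)}
n = run_register_machine(M, 'l1', m=1, seed=42)
```

Each run of the toy machine halts with some n ⩾ 1 in register 1; over all nondeterministic choices it generates exactly the set {n ∣ n ⩾ 1}.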

Without loss of generality, it can be assumed that l1 labels an ADD instruction and that, in the halting configuration, all registers different from the first one are empty. The output register is never decremented during the computation; we only add to its content.

A register machine can also accept a set of numbers. A number n is accepted by M if, starting with n in register 1 and all other registers empty, the computation eventually halts (without loss of generality, we may assume that in the halting configuration all registers are empty). Deterministic register machines (i.e., with ADD instructions of the form li : (ADD(r), lj)) working in the accepting mode are known to be equivalent to Turing machines.

We use the following convention. When comparing the power of two number generating or accepting devices D1 and D2, the number zero is ignored; that is, we write N(D1) = N(D2) if and only if N(D1) − {0} = N(D2) − {0} (this corresponds to the usual practice of ignoring the empty string in language and automata theory).

3.  Time-Free Spiking Neural P Systems

SN P systems were introduced in Ionescu et al. (2006). In what follows, it is helpful to be familiar with the basic elements of classical SN P systems. Here, the variants of SN P systems investigated in this work are introduced: extended timed SN P systems (without delay) and extended time-free SN P systems (without delay).

An SN P system, of degree m ⩾ 1, is a construct of the form
Π = (O, σ1, …, σm, syn, i0),
where:
  • 1.

    O = {a} is the singleton alphabet (a is called spike).

  • 2.
    σ1, …, σm are neurons, of the form
    σi = (ni, Ri), 1 ⩽ i ⩽ m,
    where:
    • a.

      ni ⩾ 0 is the initial number of spikes contained in σi;

    • b.

      Ri is a finite set of rules of the form E/a^c → a^p, where E is a regular expression over a, c ⩾ 1, and p ⩾ 0.

  • 3.

    syn ⊆ {1, 2, …, m} × {1, 2, …, m} with i ≠ j for each (i, j) ∈ syn, 1 ⩽ i, jm (synapses between neurons).

  • 4.

    i0 ∈ {1, 2, …, m} indicates the output neuron of the system.

A rule E/a^c → a^p with p ⩾ 1 is called an extended firing (we also say spiking) rule; a rule E/a^c → a^p with p = 0 is written in the simplified form E/a^c → λ and is called a forgetting rule. If L(E) = {a^c}, then the rules are written in the simplified forms a^c → a^p and a^c → λ. Rules of the types E/a^c → a and a^c → λ are said to be restricted (or standard).

A timed SN P system Π(e) = (O, σ1, …, σm, syn, i0, e) can be constructed by adding to the SN P system Π a mapping e that specifies the execution times of the rules. A timed SN P system Π(e) works in the following way. It is supposed to have an external clock that marks time units of equal length, starting from time 0. In what follows, when we say “at step t,” we mean the period of time from time t − 1 to time t. In each neuron, a finite number of spikes and a finite number of rules are present. If the neuron σi contains exactly k spikes and a^k ∈ L(E), k ⩾ c, then the rule r : E/a^c → a^p is enabled and can be applied. This means that, consuming (removing) c spikes (thus only k − c spikes remain in neuron σi), the neuron is fired, and it produces p spikes after e(r) time units (when the execution of the spiking rule terminates; specifically, if the rule is started at time t, then p spikes are produced at time t + e(r)). During the execution of a rule, the neuron is closed. If a rule is applied at step t and e(r) ⩾ 1, then at steps t, t + 1, t + 2, …, t + e(r) − 1 the neuron is closed, so that it cannot receive new spikes (if a neuron has a synapse to a closed neuron and tries to send a spike along it, that particular spike is lost). At step t + e(r), the neuron becomes open again, so it can receive spikes (which can be used starting from step t + e(r) + 1, when the neuron can again apply rules). Once emitted from neuron σi, the p spikes immediately reach all neurons σj such that (i, j) ∈ syn and σj is open; that is, the p spikes are replicated, and each open target neuron receives p spikes. As stated above, spikes sent to a closed neuron are “lost” (i.e., they are removed from the system). In the case of the output neuron, the p spikes are also sent to the environment. Of course, if neuron σi has no synapse leaving from it, the produced spikes are lost.
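A rough Python sketch of these timed semantics may help. The data layout, the single rule a^c → a^p per neuron, and the step ordering below are simplifying assumptions of ours (in particular, membership in L(E) is reduced to a spike threshold).

```python
def timed_step(neurons, synapses, exec_time):
    """One clock tick. Each neuron is a dict with keys 'spikes',
    'closed_for' (steps until it reopens), 'pending' (spikes to emit
    when it reopens), and 'rule' = (c, p)."""
    # Count down closed neurons; when one reopens, deliver its p spikes
    # to its open targets (spikes sent to closed neurons are lost).
    deliveries = []
    for i, n in neurons.items():
        if n['closed_for'] > 0:
            n['closed_for'] -= 1
            if n['closed_for'] == 0:
                deliveries.append((i, n['pending']))
                n['pending'] = None
    for i, p in deliveries:
        for j in synapses[i]:
            if neurons[j]['closed_for'] == 0:
                neurons[j]['spikes'] += p
    # Open, idle neurons with enough spikes start their rule and close
    # for exec_time[i] steps.
    for i, n in neurons.items():
        if n['closed_for'] == 0 and n['pending'] is None:
            c, p = n['rule']
            if n['spikes'] >= c:
                n['spikes'] -= c
                n['closed_for'] = exec_time[i]
                n['pending'] = p

# Neuron 1 (rule a -> a, execution time 2) feeds neuron 2 (a -> a, time 1).
neurons = {
    1: {'spikes': 1, 'closed_for': 0, 'pending': None, 'rule': (1, 1)},
    2: {'spikes': 0, 'closed_for': 0, 'pending': None, 'rule': (1, 1)},
}
synapses = {1: [2], 2: []}
exec_time = {1: 2, 2: 1}
for _ in range(4):
    timed_step(neurons, synapses, exec_time)
# After four ticks both neurons are open and empty again.
```

Changing exec_time alters when spikes arrive (and which ones are lost to closed neurons), which is exactly the sensitivity that the time-free condition below rules out.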

If a forgetting rule E/a^c → λ is applied, c ⩾ 1 spikes are removed. As in the case of the application of spiking rules, the neuron is closed during the execution of a forgetting rule.

In each time unit, if a neuron σi can use one of its rules, then a rule from Ri must be used. Clearly, when a rule from Ri is started, other rules from Ri cannot be applied before the execution of this rule completes. Since two firing rules r1 : E1/a^{c1} → a^{p1} and r2 : E2/a^{c2} → a^{p2} can have L(E1) ∩ L(E2) ≠ ∅, it is possible that two or more rules are applicable in a neuron. In this case, only one of them is chosen in a nondeterministic way. However, it is assumed that if a firing rule is applicable, then no forgetting rule is applicable, and vice versa. Thus, the rules are used in a sequential manner in each neuron (at most one in each step), but neurons work in parallel with each other. It is important to note that the applicability of a rule is established depending on the total number of spikes contained in the neuron.

The initial configuration of the system is described by the numbers of spikes present in each neuron, with all neurons being open. During the computation, a configuration is described by both the number of spikes present in each neuron and the state of the neuron (i.e., the number of steps to count down until it becomes open again; this number is zero if the neuron is already open). Thus, 〈r1/t1, …, rm/tm〉 is a configuration, where neuron σi contains ri ⩾ 0 spikes and it will be open after ti ⩾ 0 steps, for i = 1, 2, …, m. With this notation, the initial configuration of the system is C0 = 〈n1/0, …, nm/0〉. Using the rules as described, one can define transitions among configurations. A direct transition between two configurations C1 and C2 is denoted by C1C2. The reflexive and transitive closure of the relation ⇒ is denoted by ⇒*. Any sequence of transitions starting from the initial configuration is called a computation. A computation halts if it reaches a configuration where all neurons are open and no rule can be applied.

The result of a computation is defined as the total number of spikes sent into the environment by the output neuron (not the distance between the first two spikes of the spike train produced by the output neuron, as is usual in the SN P systems area). Specifically, a number x is generated by an SN P system if there is a halting computation of the system and the output neuron emits exactly x spikes (if several spikes are emitted by the output neuron at the same time, all of them are counted). Because of the nondeterminism in using the rules, a given system Π(e) computes in this way a set of numbers N(Π(e)).

A particular class of timed SN P systems is of interest. Called time free, they are robust against environmental changes that could affect, in an unpredicted manner, the execution times of the rules of the system. An SN P system is time free if and only if every timed SN P system in the set,
{Π(e) ∣ e is a mapping from the set of rules of Π to ℕ},
produces the same set of natural numbers. The set of natural numbers generated by a time-free SN P system Π is denoted by N(Π).

By NSNPfreegen, we denote the family of the sets of natural numbers generated by time-free SN P systems.

From the definitions of classical SN P systems and timed SN P systems, it is easy to see that classical SN P systems are a subclass of timed SN P systems, where the execution times of all rules are one time unit. Example 1 in the next section shows a timed SN P system that is not time free; example 2 shows that there exists a time-free SN P system. In general, time-free SN P systems are a subclass of timed SN P systems.

Although the differences between asynchronous SN P systems (Cavaliere et al., 2009) and time-free SN P systems were outlined in section 1, it is worth reviewing the differences:

Application of Rules

  • In an asynchronous SN P system, in each time unit, any neuron is free to apply a rule or not, even if one is enabled.

  • In a time-free SN P system, if a neuron has rules enabled in a given time unit, one of them must be nondeterministically chosen and applied.

Execution Time of Rules

  • In an asynchronous SN P system, the execution of a rule is completed in exactly one time unit.

  • In a time-free SN P system, the execution time of a rule can be arbitrary.

State of Neurons

  • In an asynchronous SN P system, any neuron is always open.

  • In a time-free SN P system, a neuron is closed during the execution of a rule.

4.  Two Examples

In order to clarify the definitions, two examples are discussed. A standard way to represent the initial configuration of an SN P system is also introduced: each neuron is represented by a “membrane” marked with a label, having inside both the current number of spikes (written a^n for n spikes present in the neuron) and the spiking rules. The arrows between the neurons represent the synapses linking the neurons. The output neuron, identified by the label out, has a short arrow that exits from it and points to the environment.

4.1.  Example 1.

The first example is system Π1, given in Figure 1, which has three neurons. Initially, neurons σ1 and σ2 each contain one spike:
Π1 = (O, σ1, σ2, σout, syn, out),
where syn = {(1, out), (out, 1), (2, out), (out, 2)}, σ1 = (1, R1), σ2 = (1, R2), σout = (0, Rout), R1 = {r1 : a → a}, R2 = {r2 : a → a, r3 : a → λ}, Rout = {r4 : a → λ, r5 : a^2 → a}.
Figure 1:

An SN P system that is not time free.


System Π1 is not time free. Indeed, let us consider the time mappings e, defined by e(ri) = 1 for i = 1, 2, 3, 4, 5, and e′, defined by e′(r1) = 1 and e′(ri) = 2 for i = 2, 3, 4, 5.

It is easy to see that N(Π1(e)) = ℕ. In fact, at step 1, in neuron σ2, rule r2 or r3, nondeterministically chosen, is started. In parallel, in neuron σ1, rule r1 is started. If rule r2 is applied at step 1, then the executions of rules r1 and r2 terminate at the end of step 1 (neurons σ1 and σ2 become open again at step 2), and each of neurons σ1 and σ2 sends one spike to neuron σout. At step 2, with two spikes in neuron σout, rule r5 is started, and its execution terminates at the end of step 2, sending one spike to the environment and one spike to each of neurons σ1 and σ2. In this way, the computation reaches a configuration that is the same as the initial configuration; hence, the computation can iterate. If rule r3 is applied at step 1, then the spike in neuron σ2 is removed. Neuron σout receives a spike from neuron σ1 at step 1, and this spike is forgotten after the execution of rule r4 completes (at step 2). In this case, the system has no spike inside and no rule can be applied, so the computation halts. In general, σout sends one spike out every two steps as long as σ2 continues using rule r2 : a → a. When rule r3 : a → λ is applied by neuron σ2, the computation halts. Therefore, the set of numbers generated by Π1(e) is the set ℕ of natural numbers.

If the times of execution of the rules in Π1 are defined by the mapping e′, then N(Π1(e′)) = {0}. Because e′(r1) = 1, the execution of rule r1 terminates at the end of step 1, and neuron σ1 sends one spike to neuron σout at step 1. In parallel, at step 1, in neuron σ2, rule r2 or r3 is started, nondeterministically chosen. Because e′(r2) = e′(r3) = 2, any execution of rule r2 or r3 terminates at the end of step 2. If rule r3 is applied at step 1, then neuron σ2 forgets its spike at step 2. At step 2, rule r4 : a → λ is started, and the execution of rule r4 terminates at the end of step 3. When the execution of rule r4 completes (after step 3), all neurons are open without any spike inside, and no rule can be applied. The computation halts, sending no spike to the environment. If rule r2 is applied at step 1, the spike in neuron σout received from σ1 at step 1 is removed by rule r4, and the spike emitted from neuron σ2 at step 2 is lost (because at step 2, rule r4 is in execution; hence, σout is closed). The computation halts (after step 3), sending no spike to the environment.

4.2.  Example 2.

System Π1 can be modified into a time-free system Π2 (see Figure 2) by deleting rule a → λ in neuron σout. System Π2 works as follows. If neuron σ2 chooses rule a → a to apply, neuron σout will receive two spikes from neurons σ1 and σ2 (one spike from each neuron), independent of the execution times of the rules (neuron σout is open before it accumulates two spikes; that is, neuron σout can receive spikes at any time before it accumulates two spikes). With two spikes inside, neuron σout sends one spike to the environment and one spike back to each of neurons σ1 and σ2, which is also independent of the execution time of rule a^2 → a. This procedure can be repeated as long as neuron σ2 keeps choosing rule a → a. If rule a → λ is applied, neuron σ2 consumes its spike; neuron σout gets only one spike from neuron σ1 and can never use its rule. Thus, N(Π2) = ℕ, and system Π2 is time free, since the computation result is independent of the execution times of the rules.

Figure 2:

A time-free SN P system.


5.  Universality of Time-Free Spiking Neural P Systems

In this section, we prove that time-free SN P systems are universal by simulating register machines. In the SN P system designed for simulating a register machine, a neuron σstate is designed for the synchronization of the neurons. During each simulation of an instruction of a register machine, except for the output neuron, each neuron sends a signal (encoded by spikes) to neuron σstate when the execution of its corresponding rule is completed. Before neuron σstate receives enough spikes (“signals” that signal the execution of rules finished in each neuron), neuron σstate has to wait and does nothing (i.e., no rules in σstate are enabled), which is time independent. When neuron σstate “knows” that the execution of rules is finished (it received enough spikes), a rule in neuron σstate is enabled and applied, which starts the simulation of the next instruction of the register machine. This synchronization technique never appears in the universality proofs for classical SN P systems (Ionescu et al., 2006) and asynchronous SN P systems (Cavaliere et al., 2009).

Theorem 1.

NSNPfreegen = NRE.

Proof.

We have only to prove the inclusion NRE ⊆ NSNPfreegen; the converse inclusion is straightforward (or we can invoke for it the Church-Turing thesis). To this end, we use the characterization of NRE by means of register machines, which were introduced in section 2.

Let M = (m, H, l1, lL, I) be a register machine with m registers (numbered 1, …, m) and L instructions (labeled l1, …, lL), having the properties specified in section 2: the result of a computation is the number stored in register 1, and this register is not subject to SUB instructions; H = {l1, …, lL} is the set of instruction labels, l1 is the start label (labeling an ADD instruction), lL is the halt label (assigned to instruction HALT), and I is the set of instructions. In what follows, we construct a specific time-free SN P system Π to simulate M.

The structure of system Π is given in Figure 3 (spiking rules are omitted and will be specified below). In system Π, three auxiliary neurons are used to load spikes into neuron σstate. Neuron σstate is associated with all instructions of M; neuron σi, together with an auxiliary neuron, is associated with register i (i = 1, …, m); neuron σout is used to output the result of a computation.

In system Π, each neuron is assigned a set of rules (see Table 1), where T = (2m + 3) × L + 1, P(i) = (2m + 3) × i, for i = 1, 2, …, L, add(r) = 2r + 4, and sub(r) = 2r + 3, for r = 1, 2, …, m. Neurons σi (i = 2, …, m) have the same set of rules; only neuron σ1 differs. The difference originates from the fact that neuron σ1 is not subject to SUB instructions and is related to outputting the result of a computation. In neuron σstate, there are L groups of rules R1, R2, …, RL. With each ADD instruction li : (ADD(r), lj, lk), the set of rules Ri = {a^{P(i)+2T}/a^{2m+2+T} → a^{add(r)}, a^{P(i)−2+3T}/a^{P(i)−2+T−P(j)} → λ, a^{P(i)−2+3T}/a^{P(i)−2+T−P(k)} → λ} is associated; with each SUB instruction li : (SUB(r), lj, lk), the set of rules Ri = {a^{P(i)+2T}/a^{2m+2+T} → a^{sub(r)}, a^{P(i)−2+3T}/a^{P(i)−2+T−P(j)} → λ, a^{P(i)−1+3T}/a^{P(i)−1+T−P(k)} → λ} is associated; with instruction lL : HALT, the set RL = {a^{P(L)+2T}/a^{2m+2+2T} → a^{sub(1)}} is associated. If the number of spikes in neuron σstate is P(i) + 2T, then system Π starts to simulate instruction li (in particular, having P(1) + 2T = 2m + 3 + 2T spikes in neuron σstate, system Π starts to simulate the initial instruction l1 of M; with P(L) + 2T = (2m + 3) × L + 2T spikes in neuron σstate, system Π starts to output the result of a computation). That is why we use the label state for this neuron; its functioning is somewhat similar to the set of states of a Turing machine.

Initially, all neurons have no spikes, with the only exceptions that one auxiliary neuron contains 2T − 1 spikes (recall that T = (2m + 3) × L + 1) and neuron σstate contains P(1) + 2T spikes. During the computation of M, if register r holds the number n ⩾ 0, then the associated neuron σr will contain 4n spikes. The increase (resp. decrease) of the number stored in register r by 1 is simulated by adding (resp. removing) four spikes. In what follows, we check the simulation of register machine M by system Π by decomposing system Π into four modules with different functioning (modules ADD, SUB, OUTPUT, and an auxiliary module) and checking the work of each module.

Auxiliary module: Loading 2T spikes into σstate (see Figure 4). The subsystem in Figure 4, consisting of three auxiliary neurons, is used to load 2T spikes into neuron σstate. During each simulation of an instruction li (i = 1, …, L) that acts on register r, neuron σstate emits add(r) or sub(r) spikes (depending on whether we have an ADD, SUB, or halt instruction, as we will see below). With add(r) or sub(r) spikes in the first auxiliary neuron, the rule a^{add(r)} → a or a^{sub(r)} → a is started. At some step, when the execution of the applied rule is completed, the first auxiliary neuron sends one spike to the second one (it does not matter when this spike is received; that is, it does not matter what the execution times of the rules are; the spike will be received). With this spike, the second auxiliary neuron has 2T spikes inside, and rule a^{2T} → a^{2T} is started. It sends 2T spikes to neuron σstate and to the third auxiliary neuron when the execution of rule a^{2T} → a^{2T} terminates. With 2T spikes in the third auxiliary neuron, rule a^{2T} → a^{2T−1} is started. At some step, the third auxiliary neuron sends 2T − 1 spikes back to the second one. In this way, neuron σstate gets 2T spikes, and the second auxiliary neuron goes back to its initial state with 2T − 1 spikes inside. This process can be repeated. Specifically, for each simulation of an instruction, neuron σstate will be loaded with exactly 2T spikes.

Module ADD: Simulating a nondeterministic ADD instruction li : (ADD(r), lj, lk) (see Figure 5). The initial instruction, labeled l1, is an ADD instruction. Assume that the system is at a step when it starts to simulate an instruction li : (ADD(r), lj, lk), with P(i) + 2T spikes in neuron σstate, 2T − 1 spikes in the auxiliary module (as in the initial configuration), and no spikes in any other neurons except for those associated with the registers. Having P(i) + 2T spikes in neuron σstate, rule a^{P(i)+2T}/a^{2m+2+T} → a^{add(r)} is started. When its execution terminates, neuron σstate emits add(r) spikes (which indicates that register r is to be increased by 1, as we will see below). By the rules in the auxiliary neurons associated with the registers, we can see that at some steps, the auxiliary neuron associated with each register f ∈ {1, …, m} − {r} sends two spikes to neuron σf; only the auxiliary neuron associated with register r sends six spikes to neuron σr. After consuming two spikes by rule a^2(a^4)^*/a^2 → a^2, neurons σf (f ≠ r) go back to the previous state (i.e., the state before this simulation). Only the number of spikes in neuron σr increases by four (which means that the number stored in register r is increased by one).

After receiving 2m spikes from neurons σ1, …, σm and 2T spikes from the auxiliary module, neuron σstate has P(i) − 2 + 3T spikes (receiving all of these 2m + 2T spikes signals that the simulation of adding one to register r is complete; the system will pass to nondeterministically choosing instruction lj or lk for the next simulation). Neuron σstate nondeterministically starts one of the rules a^{P(i)−2+3T}/a^{P(i)−2+T−P(j)} → λ and a^{P(i)−2+3T}/a^{P(i)−2+T−P(k)} → λ. If the former rule is applied, it consumes P(i) − 2 + T − P(j) spikes, leaving P(j) + 2T spikes in neuron σstate; hence, the next simulated instruction will be lj. If the latter rule is applied, P(i) − 2 + T − P(k) spikes are consumed, so P(k) + 2T spikes are left in neuron σstate, and system Π starts to simulate the instruction with label lk.
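The spike arithmetic in the ADD simulation can be verified mechanically. The following Python check (with arbitrarily chosen m and L, an assumption of this sketch) confirms that after the firing, the returning signals, and the forgetting rule, neuron σstate is left holding exactly the code P(j) + 2T of the next instruction.

```python
# Constants of the construction: m registers, L instruction labels.
m, L = 3, 5

def T_value():
    return (2 * m + 3) * L + 1

def P(i):
    return (2 * m + 3) * i

T = T_value()
for i in range(1, L + 1):
    # sigma_state starts with P(i) + 2T spikes; the ADD firing rule
    # consumes 2m + 2 + T of them ...
    after_fire = P(i) + 2 * T - (2 * m + 2 + T)
    # ... then 2m spikes come back from the register neurons and
    # 2T from the auxiliary module:
    after_signals = after_fire + 2 * m + 2 * T
    assert after_signals == P(i) - 2 + 3 * T
    # The forgetting rule for target label l_j consumes
    # P(i) - 2 + T - P(j) spikes, leaving the code of l_j.
    for j in range(1, L + 1):
        remaining = after_signals - (P(i) - 2 + T - P(j))
        assert remaining == P(j) + 2 * T
```

Since T = (2m + 3)L + 1 exceeds every P(i), the consumed amounts are positive and the instruction codes P(i) + 2T never collide.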

From this description, we can see that a signaling mechanism is used in system Π. During the simulation of an ADD instruction (as we will see, during each simulation of an instruction), all neurons have to spike except for neuron σout; neuron σstate is designed to “know” whether all other neurons have finished their work. If some neurons have not finished their work, neuron σstate cannot get enough spikes to apply its rules. It has to wait and do nothing until it receives enough spikes (i.e., until all other neurons have finished their work). This signaling mechanism is important for the synchronization of the neurons (in a time-independent manner), and in this way register machines are correctly simulated.

Remark. (1) The auxiliary module (the three neurons of Figure 4) is necessary for the functioning of system Π. It sends 2T spikes to neuron σstate during each simulation of an instruction, which ensures that the number of spikes in neuron σstate never becomes negative (i.e., its rules always have enough spikes to consume).

(2) In the simulation of an ADD instruction, the rules in the transformation neurons may have different execution times, and these neurons function in parallel. However, before receiving all signals from neurons σi (i = 1, …, m) and the auxiliary module, neuron σstate has to wait, since no rule is applicable. This guarantees the robustness of the system in the sense of time independence, and the same idea is used in the simulations of SUB and HALT instructions below.

(3) In the simulation of an ADD instruction, when neuron σstate fires, it sends add(r) spikes to all transformation neurons (i = 1, …, m). Checking the rules of these neurons (listed in Table 1), we find that in the transformation neuron associated with register r, only rule a^{add(r)} → a^6 is enabled and applied, sending six spikes to neuron σr. In the transformation neuron associated with register t, t ≠ r, only rule a^{add(r)} → a^2 is enabled and applied, so neuron σt, t ≠ r, receives two spikes, which will be transferred to neuron σstate by rule a^2(a^4)*/a^2 → a^2. In this way, only register r, the register that the ADD instruction acts on, has its content increased by one.

(4) As we will see below, when a SUB instruction that acts on register r is simulated, neuron σstate sends out sub(r) = 2r + 3 spikes. After the transformation by the transformation neurons (i = 1, …, m), only neuron σr receives three spikes; each of the neurons σf (f ∈ {1, …, m} − {r}) receives two spikes. These two spikes are then removed, so the number of spikes in these neurons is not changed.
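The routing described in remarks 3 and 4 can be summarized in a small sketch (the function name route is ours; it only restates the rules of Table 1): on signal add(r), the register neuron σr receives six spikes and all others receive two; on sub(r), σr receives three spikes and the others two.

```python
def route(signal, r, m):
    """Spikes delivered to each register neuron sigma_t (t = 1..m)
    by the transformation neurons after signal add(r) or sub(r)."""
    if signal == "add":
        return {t: 6 if t == r else 2 for t in range(1, m + 1)}
    if signal == "sub":
        return {t: 3 if t == r else 2 for t in range(1, m + 1)}

delivered = route("add", r=2, m=3)
# sigma_2 consumes 2 of its 6 spikes by a^2(a^4)*/a^2 -> a^2 and keeps 4,
# i.e., the content of register 2 grows by one:
assert delivered[2] - 2 == 4
assert delivered[1] == delivered[3] == 2
```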

Module SUB: Simulating a SUB instruction li : (SUB(r), lj, lk) (see Figure 6). We recall that no SUB instruction acts on register 1. The execution of instruction li is simulated in system Π in the following way. Assume that at time t there are P(i) + 2T spikes in neuron σstate. The rule a^{P(i)+2T}/a^{2m+2+T} → a^{sub(r)} is started and produces sub(r) spikes at some step. Receiving sub(r) spikes, the transformation neuron associated with register r is activated and sends three spikes to σr, while the transformation neurons associated with the registers f ∈ {1, …, m} − {r} send two spikes to the neurons σf. The neurons σf (f ∈ {1, …, m} − {r}) then spike at some steps and send two spikes back to neuron σstate. These spikes are "signals" informing neuron σstate that the neurons σf have finished their work (the execution of their rules). For neuron σr, after receiving the three spikes, there are the following two cases.

  1. The number of spikes in neuron σr at time t is 0. Then rule a^3 → a^3 consumes these three spikes, sending three spikes to neuron σstate. Note that neuron σstate can do nothing before receiving the following 2m + 1 + 2T spikes: three spikes from σr, 2(m − 1) spikes from the neurons σf (f ∈ {1, …, m} − {r}), and 2T spikes from the auxiliary module. After all of these spikes arrive, neuron σstate has P(i) − 1 + 3T spikes inside, and rule a^{P(i)−1+3T}/a^{P(i)−1+T−P(k)} → λ is triggered. It consumes P(i) − 1 + T − P(k) spikes, leaving P(k) + 2T spikes in neuron σstate. Hence, the system goes on to simulate instruction lk.

  2. The number of spikes in neuron σr at time t is 4n with n > 0. Then rule a^3(a^4)+/a^7 → a^2 consumes seven spikes, sending two spikes to neuron σstate. At some step, the two spikes from σr, the 2(m − 1) spikes from the neurons σf (f ∈ {1, …, m} − {r}), and the 2T spikes from the auxiliary module have all arrived in neuron σstate. With P(i) − 2 + 3T spikes inside, rule a^{P(i)−2+3T}/a^{P(i)−2+T−P(j)} → λ can be applied. It consumes P(i) − 2 + T − P(j) spikes, leaving P(j) + 2T spikes in neuron σstate, so system Π starts to simulate instruction lj.
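Both SUB cases can be verified by the same kind of bookkeeping as for ADD. Again the values of P(i), P(j), P(k), T, and m below are illustrative, and the consumed amounts are read so that σstate ends with the encoding of the selected next instruction.

```python
def sub_step(P_i, P_j, P_k, T, m, register_is_zero):
    """Spikes left in sigma_state after one simulated SUB instruction."""
    state = P_i + 2 * T
    state -= 2 * m + 2 + T            # a^{P(i)+2T}/a^{2m+2+T} -> a^{sub(r)}
    state += 2 * (m - 1) + 2 * T      # signals from sigma_f (f != r) and the auxiliary module
    if register_is_zero:
        state += 3                    # sigma_r answered with a^3 -> a^3
        assert state == P_i - 1 + 3 * T
        state -= P_i - 1 + T - P_k    # continue with l_k
    else:
        state += 2                    # sigma_r answered with a^3(a^4)+/a^7 -> a^2
        assert state == P_i - 2 + 3 * T
        state -= P_i - 2 + T - P_j    # continue with l_j
    return state

assert sub_step(40, 24, 30, T=100, m=3, register_is_zero=True) == 30 + 200
assert sub_step(40, 24, 30, T=100, m=3, register_is_zero=False) == 24 + 200
```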

Module OUTPUT: Outputting the result of a computation (see Figure 7). Assume now that the computation in M halts, which means that the halt instruction lL is reached. For system Π, this means that neuron σstate contains P(L) + 2T spikes and neuron σ1 stores 4n spikes, n being the content of register 1 of M. With P(L) + 2T spikes in neuron σstate, rule a^{P(L)+2T}/a^{2m+2T} → a^{sub(1)} is started, and neuron σstate sends sub(1) spikes to the transformation neurons (t = 1, …, m) when the execution of the rule terminates. Receiving these spikes, the transformation neurons associated with registers f ≠ 1 send out two spikes at some steps, and the transformation neuron associated with register 1 emits three spikes. With 4n + 3 spikes inside, neuron σ1 starts its rule a^3(a^4)+/a^7 → a and sends one spike to each of the neurons σstate and σout. This spike changes the parity of the number of spikes in σout, which now holds an odd number of spikes, so rule a(a^2)*/a → a can be applied. After the execution of this rule is completed, one spike goes out to the environment, and one spike arrives in neuron σstate as a "signal." After receiving all "signals" (one spike from neuron σout, one spike from σ1, 2(m − 1) spikes from the neurons σf, f = 2, 3, …, m, and 2T spikes from the auxiliary module), neuron σstate again has P(L) + 2T spikes. So system Π repeats the process, sending one spike to the environment for every four spikes in neuron σ1, until the spikes in neuron σ1 are exhausted. After the 4n spikes are consumed, no rule can be applied in neuron σ1, even though it still receives three spikes from its transformation neuron. At some time (after all the neurons σf, f = 2, 3, …, m, complete the execution of their rules a^2(a^4)*/a^2 → a^2), all neurons in system Π are open and have no rule applicable, so system Π halts.
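The OUTPUT cycle can also be traced numerically. In the sketch below (illustrative values again), we assume the rule of σstate consumes 2m + 2T spikes, so that after collecting the 2m + 2T signal spikes it returns to exactly P(L) + 2T, as stated above; the loop then emits one spike for every four spikes initially stored in σ1.

```python
def output_cycles(P_L, T, m, n):
    """Number of spikes sent to the environment while draining the
    4n spikes of sigma_1; should equal n."""
    emitted, reg1 = 0, 4 * n
    while reg1 >= 4:                     # a^3(a^4)+/a^7 -> a will be applicable
        state = P_L + 2 * T
        state -= 2 * m + 2 * T           # a^{P(L)+2T}/a^{2m+2T} -> a^{sub(1)}
        reg1 += 3                        # three spikes from the transformation neuron
        reg1 -= 7                        # sigma_1 fires a^3(a^4)+/a^7 -> a
        emitted += 1                     # sigma_out forwards one spike outside
        # signals: 1 from sigma_out, 1 from sigma_1, 2(m-1) from sigma_f, 2T auxiliary
        state += 1 + 1 + 2 * (m - 1) + 2 * T
        assert state == P_L + 2 * T      # back to the halt encoding
    return emitted

assert output_cycles(P_L=50, T=100, m=3, n=7) == 7
```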

From this description, it is clear that the register machine M is correctly simulated by system Π. Consequently, N(M) = N(Π). This completes the proof.

Figure 3:

Structure of system Π with the initial numbers of spikes.


Table 1:
Rules Associated with Neurons in System Π.

Neurons: Associated Rules
(auxiliary neuron): a^{2T} → a^{2T}
(auxiliary neuron): a^{2T} → a^{2T−1}
(auxiliary neuron): a^{add(r)} → a (r = 1, …, m), a^{sub(r)} → a (r = 1, …, m)
σ1: a^2(a^4)*/a^2 → a^2, a^3(a^4)+/a^7 → a
σi, i = 2, 3, …, m: a^2(a^4)*/a^2 → a^2, a^3(a^4)+/a^7 → a^2, a^3 → a^3
(transformation neuron), i = 1, …, m: a^{add(i)} → a^6, a^{sub(i)} → a^3, a^{add(j)} → a^2, a^{sub(j)} → a^2, for j ∈ {1, …, m} − {i}
σout: a(a^2)*/a → a
σstate: Rstate = R1 ∪ R2 ∪ ⋯ ∪ RL, where:
  Ri = {a^{P(i)+2T}/a^{2m+2+T} → a^{add(r)},
        a^{P(i)−2+3T}/a^{P(i)−2+T−P(j)} → λ,
        a^{P(i)−2+3T}/a^{P(i)−2+T−P(k)} → λ},
  for each ADD instruction li : (ADD(r), lj, lk);
  Ri = {a^{P(i)+2T}/a^{2m+2+T} → a^{sub(r)},
        a^{P(i)−2+3T}/a^{P(i)−2+T−P(j)} → λ,
        a^{P(i)−1+3T}/a^{P(i)−1+T−P(k)} → λ},
  for each SUB instruction li : (SUB(r), lj, lk);
  RL = {a^{P(L)+2T}/a^{2m+2T} → a^{sub(1)}},
  for the HALT instruction lL : HALT.
Figure 4:

Auxiliary module loading 2T spikes into neuron σstate.


Figure 5:

Module ADD simulating an ADD instruction li : (ADD(r), lj, lk).


Figure 6:

Module SUB simulating a SUB instruction li : (SUB(r), lj, lk).


Figure 7:

Module OUTPUT outputting the result stored in neuron σ1.


6.  A Characterization of Semilinear Sets of Numbers

In the previous section, the neurons are allowed to hold arbitrarily many spikes. In this section, to make the systems more “realistic,” a bound is imposed on the number of spikes in neurons during a computation. This restriction, which looks rather minor at first sight, has a crucial influence on the power of the systems. Specifically, it decreases their computing power: the time-free SN P systems with a bounded number of spikes in each neuron generate exactly semilinear sets of natural numbers (hence, the systems are not computationally complete).

Let us denote by NSNPfreegen(bounds) the family of sets of numbers generated by time-free SN P systems with at most s spikes present at any time in any neuron (if a computation reaches a configuration where a neuron accumulates more than s spikes, then it aborts; such a computation does not provide any result). By bound* we denote that the time-free SN P systems have a bound on the number of spikes present in any neuron, but this bound is not specified.

Theorem 2.

SLIN = NSNPfreegen(bound*).

The proof of theorem 2 follows the style of proofs from Ionescu et al. (2006). In order to prove this theorem, a series of lemmas is given. We start with the inclusion NSNPfreegen(bound*) ⊆ SLIN.

Lemma 1.

NSNPfreegen(bound*) ⊆ SLIN.

Proof.

Take a time-free system Π with a bound s on the number of spikes in each neuron, and let e be an arbitrary time mapping. Because Π is time free, N(Π(e)) = N(Π). The number of neurons is given, their contents are bounded, the number of rules in the neurons is given, and the time mapping e is given. Hence, the number of configurations reachable by Π(e) is finite. Let 𝒞 be the set of configurations of Π(e), and let C0 ∈ 𝒞 be the initial configuration of Π(e).

We construct the right-linear grammar G = (𝒞, {a}, C0, P), where P contains the following rules:

  1. C → C′, for C, C′ ∈ 𝒞 such that there is a transition C ⇒ C′ in Π(e) during which the output neuron does not spike.

  2. C → a^p C′, for C, C′ ∈ 𝒞 such that there is a transition C ⇒ C′ in Π(e) during which the output neuron sends p spikes to the environment.

  3. C → λ, for C ∈ 𝒞 a halting configuration of Π(e).

Controlling the derivation by the nonterminals in 𝒞 ensures that N(Π(e)) is the length set of the regular language L(G); hence, it is semilinear. Because N(Π(e)) = N(Π), we have that N(Π) is also semilinear.
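The grammar construction can be illustrated on a toy transition system (the configuration names C0, C1 and the transitions below are hypothetical, not taken from a concrete SN P system): configurations become nonterminals, a transition emitting p spikes becomes a rule C → a^p C′, and halting configurations erase. The length set of the derived language is then the generated set.

```python
# transitions: configuration -> list of (spikes emitted, next configuration)
transitions = {
    "C0": [(1, "C1")],
    "C1": [(1, "C1"), (0, "HALT")],
}
halting = {"HALT"}

def generated(max_steps):
    """Length set of L(G): total emitted spikes over halting derivations
    of at most max_steps grammar steps."""
    results = set()
    frontier = [("C0", 0)]
    for _ in range(max_steps):
        nxt = []
        for conf, count in frontier:
            if conf in halting:           # rule C -> lambda
                results.add(count)
                continue
            for p, dest in transitions[conf]:
                nxt.append((dest, count + p))
        frontier = nxt
    return results

# the toy system loops, emitting one spike per cycle: a semilinear set
assert generated(12) == set(range(1, 11))
```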

In order to obtain the opposite inclusion, SLIN ⊆ NSNPfreegen(bound*), we use the fact that any semilinear set of numbers is the union of a finite set with a finite number of arithmetical progressions. It suffices to prove the closure under union and the fact that finite sets and arithmetical progressions are in NSNPfreegen(bound*). We do this in the following lemmas.

Lemma 2.

Every finite set of natural numbers U = {n1, n2, …, nk} is in NSNPfreegen(boundn), where n = max{n1, n2, …, nk}.

Proof.
For a finite set of natural numbers U = {n1, n2, …, nk}, we take the system with only one neuron, containing initially n spikes, where n = max{n1, n2, …, nk}, and the k rules

a^n → a^{n_i}, for 1 ⩽ i ⩽ k.

It is not difficult to check that the system is time free: whatever execution time a rule gets, it still emits n_i spikes. The neuron nondeterministically chooses one rule to apply, and it is clear that the set of numbers generated by this system is U.
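A minimal sketch of this one-neuron system, assuming the k rules have the form a^n → a^{n_i} (1 ⩽ i ⩽ k) as reconstructed above: each computation applies one nondeterministically chosen rule and halts, so the generated set is independent of the time mapping.

```python
U = {2, 5, 9}
n = max(U)
rules = [(n, n_i) for n_i in sorted(U)]   # a^n -> a^{n_i}, one rule per element of U

def generated():
    """Numbers generated by the one-neuron system: each computation
    nondeterministically applies one rule and then halts."""
    results = set()
    for consumed, produced in rules:
        spikes = n - consumed             # the rule consumes all n spikes
        assert spikes == 0                # no rule is applicable afterward
        results.add(produced)             # 'produced' spikes reach the environment
    return results

assert generated() == U
```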

Lemma 3.

Any arithmetical progression Pk,0 = {knn ⩾ 1} with k ⩾ 1 is in the family NSNPfreegen(boundk).

Proof.

We can check that the system given in Figure 8 is time free, the number of spikes in each neuron is not more than k during a computation, and it generates the set {knn ⩾ 1}.

Figure 8:

A time-free SN P system generating a pure arithmetical progression.


Lemma 4.

Any arithmetical progression Pk,l = {kn + ln ⩾ 1} with k ⩾ 1, l ⩾ 1 is in the family NSNPfreegen(boundk+l).

Proof.

It is not difficult to check that the system given in Figure 9 is time free, the number of spikes in each neuron is not more than k+l during a computation, and it generates the set {kn + ln ⩾ 1}.

Figure 9:

A time-free SN P system generating an arithmetical progression.


From lemmas 3 and 4, we have that any arithmetical progression Pk,l = {kn + ln ⩾ 1} with k ⩾ 1, l ⩾ 0 is in the family NSNPfreegen(bound*).

For proving the closure under union, we need an auxiliary result. For an SN P system Π, let us denote by active(Π) the number of neurons that can be activated in the initial configuration of Π.

Lemma 5.

For any SN P system Π, there is an equivalent system Π′ such that active(Π′) = 1.

Proof.

Take an arbitrary SN P system Π. We construct a new system Π′ in two steps. First, let spin(Π) denote the maximal number of spikes present in the active(Π) neurons in the initial configuration of Π. We replace each rule of the form E/a^c → a^p in these neurons by a^{spin(Π)}E/a^c → a^p. Second, we add a neuron σin labeled in (it is assumed that this label is not used in Π). Neuron σin initially contains spin(Π) spikes and has only one rule, a^{spin(Π)} → a^{spin(Π)}.

An example for illustrating this construction is given in Figure 10. System Π in the left part has two neurons that can be activated at the beginning, that is, active(Π) = 2. By following the two steps introduced above, we get system Π′ shown at the right of Figure 10. Π′ has only one neuron σin, which can be activated in the initial configuration. Thus, active(Π′) = 1. It is easy to check that after neuron σin fires, system Π′ works in the same way as Π, and, of course, N(Π) = N(Π′).

Figure 10:

An example for illustrating the proof of lemma 5.


Lemma 6.

If Q1, Q2NSNPfreegen(bounds), for s ⩾ 2, then Q1Q2NSNPfreegen(bounds).

Proof.

Take two SN P systems Π1, Π2 in the normal form given by lemma 5. Let in1, in2 be the labels of the neurons of Π1, Π2 that are active in the initial configuration, containing initially spin(Π1) and spin(Π2) spikes, respectively. We construct a system Π as depicted in Figure 11.

In system Π constructed in this way, only neuron σ1 can spike in the initial configuration. The nondeterminism of neuron σ1 enables either subsystem Π1 or subsystem Π2. If σ1 uses rule a^2 → a, then σ3 spikes and subsystem Π2 is enabled; if rule a^2 → a^2 is used, then σ2 fires and subsystem Π1 is enabled. Hence, system Π can generate whatever either of the two systems can generate.

Figure 11:

A time-free SN P system generating the union of two subsystems.


7.  Conclusions and Remarks

The use of time in membrane computing is well investigated, especially in cell-like P systems (see Cavaliere & Deufemia, 2006; Cavaliere & Sburlan, 2005a, 2005b). In this work, this topic is considered in the framework of SN P systems. A rather restrictive variant of SN P systems is introduced: time-free SN P systems, where the execution times of the rules can be arbitrarily chosen and the output produced by the system is always the same. Even in this restrictive framework, time-free SN P systems with extended rules (several spikes can be produced by a rule) have been proved to be equivalent to register machines.

It remains open whether time-free SN P systems with standard rules (rules that can produce only one spike) are Turing complete. A signaling mechanism is used in the proof of theorem 1, and it is essential for the synchronization of the time-free system. The "signals" in the proof of theorem 1 consist of packages of 2T spikes, six spikes, or two spikes, produced by extended spiking rules; that is, more than one kind of "signal" is used for synchronization. This does not seem to work with standard rules, because standard spiking rules can produce only one spike, which means that all "signals" produced by standard spiking rules are identical. This open problem is worth investigating in the following sense: a nonuniversality result would show an interesting difference between standard and extended rules in time-free SN P systems, with the loss in power compensated by the additional "programming capacity" of extended rules.

Although it is proved that time-free SN P systems with extended rules are universal, an interesting question concerns the possibility of checking in an algorithmic manner whether an arbitrary SN P system is time free.

An interesting issue for further investigation of time-free SN P systems is to impose certain conditions on the execution times of the rules, which leads to a sort of partially time-free P system. One question concerning this class of SN P systems is as follows: What are realistic conditions from a mathematical or biological point of view? One possible answer is to impose a bound t on the execution time, such that the execution of a rule can take an arbitrary time smaller than t, where t is assumed to be large enough to ensure the completion of every reaction (rule) in the system.

The structure (topology) of the system given in the proof of theorem 1 is important, as it ensures that the signaling mechanism works. Zeng, Zhang, and Pan (2009) proved that SN P systems with one kind of neuron (that is, the set of rules in each neuron is the same) are universal, which shows that the topology of an SN P system is crucial for the functioning of the system. In general, the role of the topology for the power of an SN P system is worth further investigation.

A similar idea of time-freeness was introduced and investigated in the framework of SN P systems in the form of stochastic SN P systems, where the time of firing for an enabled spiking rule is probabilistically chosen (Cavaliere & Mura, 2008). Note that classical SN P systems cannot be directly seen as a special case of stochastic SN P systems, because these systems do not include a closed state of neurons. For more differences between time-free SN P systems and stochastic SN P systems, and further research topics on stochastic SN P systems, refer to Cavaliere and Mura (2008).

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61033003, 30870826, 60703047, and 60803113), the Fundamental Research Funds for the Central Universities (2010ZD001), the Ph.D. Programs Foundation of Ministry of Education of China (20100142110072), and the Natural Science Foundation of Hubei Province (2008CDB113 and 2008CDB180). We gratefully acknowledge comments from Gheorghe Păun and also thank the anonymous referees of the letter for their useful comments and suggestions.

References

Cavaliere, M., & Deufemia, V. (2006). Further results on time-free P systems. International Journal of Foundations of Computer Science, 17, 69–89.

Cavaliere, M., Egecioglu, O., Ibarra, O. H., Woodworth, S., Ionescu, M., & Păun, Gh. (2009). Asynchronous spiking neural P systems. Theoretical Computer Science, 410, 2352–2364.

Cavaliere, M., & Mura, I. (2008). Experiments on the reliability of stochastic spiking neural P systems. Natural Computing, 7, 453–470.

Cavaliere, M., & Sburlan, D. (2005a). Time and synchronization in membrane systems. Fundamenta Informaticae, 64, 65–77.

Cavaliere, M., & Sburlan, D. (2005b). Time-independent P systems. Lecture Notes in Computer Science, 3365, 239–258.

Gerstner, W., & Kistler, W. (2002). Spiking neuron models: Single neurons, populations, plasticity. Cambridge: Cambridge University Press.

Ionescu, M., Păun, Gh., & Yokomori, T. (2006). Spiking neural P systems. Fundamenta Informaticae, 71, 279–308.

Maass, W. (2002). Computing with spikes. Foundations of Information Processing of TELEMATIK, 8, 32–36.

Maass, W., & Bishop, C. (Eds.). (1999). Pulsed neural networks. Cambridge, MA: MIT Press.

Minsky, M. (1967). Computation: Finite and infinite machines. Englewood Cliffs, NJ: Prentice Hall.

Păun, Gh. (2000). Computing with membranes. Journal of Computer and System Sciences, 43, 108–143.

Păun, Gh. (2002). Membrane computing: An introduction. Berlin: Springer-Verlag.

Păun, Gh., Rozenberg, G., & Salomaa, A. (Eds.). (2010). Handbook of membrane computing. New York: Oxford University Press.

Rozenberg, G., & Salomaa, A. (Eds.). (1997). Handbook of formal languages. Berlin: Springer-Verlag.

Zeng, X., Zhang, X., & Pan, L. (2009). Homogeneous spiking neural P systems. Fundamenta Informaticae, 97, 1–20.