Abstract

Spiking neural P systems (SN P systems) are a class of distributed parallel computing devices inspired by the way neurons communicate by means of spikes; neurons work in parallel in the sense that each neuron that can fire should fire, but the work in each neuron is sequential in the sense that at most one rule can be applied at each computation step. In this work, with biological inspiration, we consider SN P systems with the restriction that at each step, one of the neurons (i.e., sequential mode) or all neurons (i.e., pseudo-sequential mode) with the maximum (or minimum) number of spikes among the neurons that are active (can spike) will fire. If an active neuron has more than one enabled rule, it nondeterministically chooses one of the enabled rules to be applied, and the chosen rule is applied in an exhaustive manner (a kind of local parallelism): the rule is used as many times as possible. This strategy makes the system sequential or pseudo-sequential from the global view of the whole network and locally parallel at the level of neurons. We obtain four types of SN P systems: maximum/minimum spike number induced sequential/pseudo-sequential SN P systems with exhaustive use of rules. We prove that SN P systems of these four types are all Turing universal as number-generating computation devices. These results illustrate that the restriction of sequentiality may have little effect on the computation power of SN P systems.

1.  Introduction

The human brain is an enormous, complex information processing system, with on the order of 10^11 neurons working in a cooperative manner to perform tasks that are not yet matched by the tools we can build with our current technology, for example, thought, self-awareness, and intuition. Biology is a rich source of inspiration for informatics, as natural computing proves; the brain in particular is the gold mine of this intellectual enterprise. We believe that if something really great is to appear in informatics in the near future, then this “something” will be suggested by the brain, as the Turing machine and the finite automaton once were (Păun & Pérez-Jiménez, 2009). Spiking neural P systems (SN P systems) are a class of distributed parallel computation devices introduced in Ionescu, Păun, and Yokomori (2006) as an attempt to learn “something” from the brain. We stress here that SN P systems are not meant to provide the answer to the learning-from-the-brain challenge, but they are a way to call attention to this challenge again.

SN P systems are inspired by the way neurons communicate by means of spikes (electrical impulses of identical shape). Such systems provide a novel viewpoint to investigate spiking neural networks in the framework of the emergent research area of membrane computing. Membrane computing is one of the recent branches of natural computing, which was initiated by G. Păun in 1998 (Păun, 2000) and has developed rapidly (by 2003, the Institute for Scientific Information considered membrane computing to be a “fast emerging research area in computer science”; see http://esi-topics.com). The aim is to abstract computing ideas (e.g., data structures, operations with data, ways to control operations, computing models) from the structure and the functioning of a single cell and from complexes of cells, such as tissues and organs, including the brain. The models obtained are called P systems, and this has proved to be a rich framework for handling many problems related to computing (Ishdorj, Leporati, Pan, Zeng, & Zhang, 2010; Wang, Hoogeboom, Pan, Păun, & Pérez-Jiménez, 2010; Xu & Jeavons, 2013). (See Păun, 2002, and Păun, Rozenberg, & Salomaa, 2010, for general information about membrane computing and http://ppage.psystems.eu for up-to-date information.)

Briefly, an SN P system consists of a set of neurons placed in the nodes of a directed graph. Each neuron contains a number of copies of a single object type called a spike, which is denoted by the symbol a in what follows. The communications between neurons are achieved by sending signals (in the form of spikes) along synapses (arcs of the graph). The spikes evolve by means of extended spiking rules, which are of the form E/a^c → a^p; d, where E is a regular expression over {a} and c, p, d are natural numbers with c ≥ p ≥ 1 and d ≥ 0. Spikes can also be removed from the neurons by extended forgetting rules of the form E/a^c → λ. If a neuron contains k spikes such that a^k ∈ L(E) and k ≥ c, then the rules with the regular expression E are enabled. It is possible that more than one rule is enabled in a neuron at some moment, since two firing rules E1/a^{c1} → a^{p1} and E2/a^{c2} → a^{p2} may have L(E1) ∩ L(E2) ≠ ∅. In this case, the neuron will nondeterministically choose one of the enabled rules to use. If the rule E/a^c → a^p; d is applied in a neuron, then c spikes are consumed from the neuron and p spikes are produced after a delay of d steps; these spikes are sent to all neurons connected by an outgoing synapse from the neuron where the rule was applied. All neurons work in parallel in the sense that at each step, each neuron that can apply a rule should do it, while the rules in each neuron are applied in a sequential manner, with the meaning that at most one rule is applied in each neuron. One of the neurons is designated as the output neuron of the system, and its spikes are also sent to the environment. Various ways can be used to define the result of a computation. In this work, we use as the computation result the total number of spikes sent to the environment by the output neuron.

Many computational properties of SN P systems have been studied. SN P systems were proved to be computationally complete (equivalent to Turing machines or other equivalent computing devices; we also say that SN P systems are universal) as number computing devices (Ionescu et al., 2006; Pan, Zeng, & Zhang, 2011; Pan, Wang, & Hoogeboom, 2012), language generators (Chen, Freund, Ionescu, Păun, & Pérez-Jiménez, 2007; Chen et al., 2008), and function computing devices (Păun & Păun, 2007). SN P systems were also used to (theoretically) solve computationally hard problems in a feasible amount of time (see Ishdorj et al., 2010; Wang et al., 2010).

At the level of neurons, a kind of local parallelism, called exhaustive use of rules, was proposed in Ionescu, Păun, and Yokomori (2007), where in each neuron, an applicable rule is used as many times as possible. The biological motivation for the exhaustive use of rules is that an enabled chemical reaction consumes as much of the related substances as possible. It was proved that SN P systems with exhaustive use of rules are universal if the neurons work in parallel (Ionescu et al., 2007; Zhang, Zeng, & Pan, 2008).

Although biological neurons in the brain work in parallel, they are not synchronized by a universal clock as assumed in SN P systems. Several authors have noticed that the maximally parallel way in which neurons work is rather unrealistic and have considered various strategies for how neurons work (e.g., Cavaliere et al., 2009; Freund, 2005; Ibarra, Woodworth, Yu, & Păun, 2006; Zhang, Luo, Fang, & Pan, 2012). Ibarra, Păun, and Rodríguez-Patón (2009) considered SN P systems functioning in a sequential manner induced by a maximum (or, dually, minimum) spike number: if at any computation step there is more than one active neuron, then only the neuron(s) containing the maximum (or minimum) number of spikes (among the currently active neurons) will be able to fire. If there is a tie for the maximum number of spikes stored in active neurons, then two distinct strategies can be considered: max-pseudo-sequentiality (all the active neurons with the maximum number of spikes will fire) and max-sequentiality (only one of the active neurons with the maximum number of spikes will fire). Similarly, we have min-pseudo-sequentiality and min-sequentiality for when there is a tie for the minimum number of spikes stored in active neurons. It was shown that SN P systems are universal working in the max-sequential, max-pseudo-sequential, or min-sequential strategy; however, it remains open whether SN P systems working in the min-pseudo-sequential manner are universal (Ibarra et al., 2009).
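The four firing disciplines can be made concrete with a small selection routine. The Python sketch below is our own illustration (the function name and the dictionary representation of active neurons are not from the literature): it picks, among the currently active neurons, those allowed to fire under each mode.

```python
import random

def select_firing(active, mode, rng=random):
    """active: dict mapping neuron id -> spike count, restricted to neurons
    whose rules are currently enabled.  Returns the set of neurons that are
    allowed to fire under the given working mode."""
    if not active:
        return set()
    best = max(active.values()) if mode.startswith("max") else min(active.values())
    tied = {i for i, k in active.items() if k == best}
    if mode.endswith("ps"):            # pseudo-sequential: all tied neurons fire
        return tied
    return {rng.choice(sorted(tied))}  # sequential: one tied neuron, nondeterministically
```

For example, `select_firing({1: 5, 2: 3, 3: 5}, "maxps")` yields {1, 3}, while the "maxs" mode yields a nondeterministically chosen singleton from that tie.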

In this letter, we consider SN P systems with exhaustive use of rules working in the max-sequential, max-pseudo-sequential, min-sequential, or min-pseudo-sequential manner. We abbreviate these systems as MaxsEx, MaxpsEx, MinsEx, and MinpsEx SN P systems, respectively. We prove that all of these systems are Turing universal, which illustrates that such restrictions on the working way of neurons do not reduce the computation power of SN P systems.

This letter is organized as follows. In section 2, we recall some necessary preliminaries. In section 3, we introduce the computation models investigated in this work: MaxsEx, MaxpsEx, MinsEx, and MinpsEx SN P systems. In section 4, an example is given to clarify the definitions. The computation power of such systems is investigated in sections 5 and 6. Conclusions and remarks are given in section 7.

2.  Prerequisites

It is useful for readers to have some familiarity with basic elements of formal language theory (see Rozenberg & Salomaa, 1997). We here introduce the necessary prerequisites.

For an alphabet V, V* denotes the set of all finite strings over V, with the empty string denoted by λ. The set of all nonempty strings over V is denoted by V+. When V = {a} is a singleton, we simply write a* and a+ instead of {a}*, {a}+. We denote by RE the family of recursively enumerable languages and by NRE the family of length sets of languages in RE.

A regular expression over an alphabet V is defined as follows: (1) λ and each a ∈ V are regular expressions; (2) if E1, E2 are regular expressions over V, then (E1)(E2), (E1) ∪ (E2), and (E1)+ are regular expressions over V; and (3) nothing else is a regular expression over V. With each expression E, we associate a language L(E), defined in the following way: (1) L(λ) = {λ} and L(a) = {a}, for all a ∈ V; and (2) L((E1) ∪ (E2)) = L(E1) ∪ L(E2), L((E1)(E2)) = L(E1)L(E2), and L((E1)+) = (L(E1))+, for all regular expressions E1, E2 over V. Nonnecessary parentheses are omitted when writing a regular expression, and (E)+ ∪ {λ} can also be written as (E)*.
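Since rule applicability in SN P systems amounts to testing whether a^k belongs to L(E) for expressions over the one-letter alphabet {a}, such tests can be carried out with ordinary regular expression machinery. A small Python illustration (the function name is ours; union is written "|" in Python's pattern syntax):

```python
import re

def covers(pattern, k):
    """Does the string a^k belong to the language of the given pattern?
    Patterns use Python re syntax, which includes the concatenation and
    (...)+  constructions of the definition above."""
    return re.fullmatch(pattern, "a" * k) is not None
```

For instance, the expression a(aa)+ describes the odd numbers of spikes greater than one, and `covers("a(aa)*", k)` tests membership in a(aa)*.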

A register machine is a construct M = (m, H, l0, lh, I), where m is the number of registers, H is the set of instruction labels, l0 is the start label, lh is the halt label (assigned to instruction HALT), and I is the set of instructions. Each label from H labels only one instruction from I, thus precisely identifying it. The instructions are of the following forms:

  • li : (ADD(r), lj, lk) (add 1 to register r and then go to one of the instructions with labels lj, lk, nondeterministically chosen).

  • li : (SUB(r), lj, lk) (if register r is nonempty, then subtract 1 from it and go to the instruction with label lj; otherwise go to the instruction with label lk).

  • lh : HALT (the halt instruction).

A register machine M generates a set N(M) of numbers in the following way. Starting with all registers empty (i.e., storing the number zero), the machine applies the instruction with label l0 and continues to apply instructions as indicated by the labels (and made possible by the contents of registers). If the machine reaches the halt instruction, then the number n present in register 1 at that time is said to be generated by M. The set of all numbers generated by M is denoted by N(M). It is known that register machines with at least three registers can generate any recursively enumerable set of numbers, which means that register machines characterize NRE (see Minsky, 1967).
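A register machine of this kind is easy to simulate directly; the following Python sketch (the representation of programs as a dictionary from labels to tuples is our own choice) makes the semantics of ADD, SUB, and HALT concrete:

```python
import random

def run_register_machine(program, start, rng=random, max_steps=10_000):
    """Simulate a (nondeterministic) register machine.
    program maps a label to ('ADD', r, lj, lk), ('SUB', r, lj, lk), or 'HALT';
    the result is the content of register 1 when the machine halts."""
    regs = {}
    label = start
    for _ in range(max_steps):
        instr = program[label]
        if instr == "HALT":
            return regs.get(1, 0)
        op, r, lj, lk = instr
        if op == "ADD":
            regs[r] = regs.get(r, 0) + 1
            label = rng.choice([lj, lk])   # nondeterministic continuation
        else:                              # SUB: decrement or zero-test
            if regs.get(r, 0) > 0:
                regs[r] -= 1
                label = lj
            else:
                label = lk
    raise RuntimeError("no halt within max_steps")
```

For example, the two-instruction program {"l0": ("ADD", 1, "l1", "l1"), "l1": ("ADD", 1, "lh", "lh"), "lh": "HALT"} always halts with 2 in register 1.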

Without loss of generality, we may assume that in the halting configuration, all registers except the first one are empty and that the output register is never a subject of SUB instructions, but only of ADD instructions.

We use the following convention. When the power of two number-generating or accepting devices D1 and D2 is compared, the number zero is ignored; that is, we write N(D1) = N(D2) if and only if N(D1) \ {0} = N(D2) \ {0} (this corresponds to the usual practice of ignoring the empty string when comparing the power of two grammars or automata).

3.  SN P Systems

In this section, we introduce the computation model investigated in this work: SN P systems with exhaustive use of rules working in the sequential manner induced by maximum or minimum spike number among the active neurons.

An SN P system of degree m ≥ 1 without delay (this feature is not used in this work) is a construct of the form

Π = (O, σ1, σ2, …, σm, syn, out), where:

  • O = {a} is the singleton alphabet (a is called a spike).

  • σ1, σ2, …, σm are neurons of the form σi = (ni, Ri), 1 ≤ i ≤ m, where:

    1. ni ≥ 0 is the initial number of spikes contained in σi;

    2. Ri is a finite set of rules of the following two forms:

      1. Extended spiking rule: E/a^c → a^p, where E is a regular expression over O and c ≥ p ≥ 1; if p = 1, the rule is called a standard spiking rule; if L(E) = {a^c}, the rule can be written in the simplified form a^c → a^p.

      2. Extended forgetting rule: E/a^c → λ, where E is a regular expression over O and c ≥ 1, with the restriction L(E) ∩ L(E′) = ∅ for any spiking rule E′/a^{c′} → a^{p′} from Ri; if E = a^c, then the rule is called a standard forgetting rule, and it can be written as a^c → λ.

  • syn ⊆ {1, 2, …, m} × {1, 2, …, m} with (i, i) ∉ syn for each 1 ≤ i ≤ m (synapses between neurons).

  • out ∈ {1, 2, …, m} indicates the output neuron of the system.

A spiking rule E/a^c → a^p is applied in an exhaustive way as follows. If neuron σi contains k spikes and a^k ∈ L(E), k ≥ c, then the rule can be applied. Using the rule in an exhaustive way means the following. Assume that k = sc + r for some s ≥ 1 and 0 ≤ r < c (r is the remainder of dividing k by c); then sc spikes are consumed, r spikes remain in neuron σi, and sp spikes are produced and sent to each of the neurons σj such that (i, j) ∈ syn (as usual, this means that the sp spikes are replicated and exactly sp spikes are sent to each of these neurons σj). In the case of the output neuron, sp spikes are also sent to the environment. Of course, if neuron σi has no synapse leaving from it, then the produced spikes are lost.

A forgetting rule E/a^c → λ is applied in an exhaustive way as follows. If neuron σi contains k spikes and a^k ∈ L(E), k = sc + r with s ≥ 1 and 0 ≤ r < c, then the rule can be applied, meaning that sc spikes are removed from neuron σi and r spikes remain in neuron σi (in this work, we use a restricted version of forgetting rules of the form E/a → λ; when a forgetting rule of this form is used in a neuron, all spikes are removed from the neuron).

If several rules in a neuron are enabled at the same time, only one of them is nondeterministically chosen to be applied, and the remaining spikes cannot evolve by another rule. For instance, assume that a neuron has the rules a^+/a^2 → a and a^+/a^5 → a, and contains five spikes. If the rule a^+/a^2 → a is chosen to be applied, then this rule is used twice, and one spike remains in the neuron; however, although a ∈ L(a^+), this remaining spike cannot evolve by any rule at this step. If the rule a^+/a^5 → a is chosen instead of a^+/a^2 → a, then all five spikes are consumed. This is the reason for which the term exhaustive is used, and not the term parallel, for describing the way the rules are used.
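The arithmetic of exhaustive application (k = sc + r) fits in a few lines. The helper below (name ours) reproduces the five-spike example, assuming the two rules are a^+/a^2 → a and a^+/a^5 → a as the discussion suggests:

```python
def apply_exhaustively(k, c, p):
    """Apply a spiking rule E/a^c -> a^p (with a^k assumed to be in L(E))
    to a neuron holding k spikes, in the exhaustive manner."""
    s, r = divmod(k, c)          # k = s*c + r: the rule is used s times
    return s * c, s * p, r       # spikes consumed, produced, remaining

# Five spikes, rule used as a^+/a^2 -> a: used twice, one spike remains.
assert apply_exhaustively(5, 2, 1) == (4, 2, 1)
# Five spikes, rule used as a^+/a^5 -> a: all spikes consumed.
assert apply_exhaustively(5, 5, 1) == (5, 1, 0)
```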

An SN P system with exhaustive use of rules can work in the max-sequential, max-pseudo-sequential, min-sequential or min-pseudo-sequential manner as introduced in section 1, and the corresponding SN P systems are called MaxsEx, MaxpsEx, MinsEx, and MinpsEx SN P systems, respectively.

A configuration of the system is described by the number of spikes present in each neuron. Thus, the initial configuration is ⟨n1, n2, …, nm⟩. Using the rules as described above, one can define transitions among configurations. Any sequence of transitions starting from the initial configuration is called a computation. A computation halts if it reaches a configuration where no rule can be used. The result of a computation can be defined in several ways. In this work, we define as the computation result the total number of spikes sent to the environment by the output neuron during the computation; we also say that this number is generated or computed by the SN P system. The set of all numbers computed in this way by an SN P system Π is denoted by N(Π).

We denote by N_α ESNP_m the family of all sets of numbers generated by SN P systems with exhaustive use of rules working in the manner α, with at most m neurons, where α ∈ {maxs, maxps, mins, minps} (maxs, maxps, mins, and minps stand for max-sequentiality, max-pseudo-sequentiality, min-sequentiality, and min-pseudo-sequentiality, respectively); we replace m with * when the number of neurons is not bounded.

4.  An Example

In order to clarify the definitions, we present an example, which is shown in Figure 1. The system consists of two neurons, labeled 1 and 2. Initially, each of neurons 1 and 2 contains two spikes.

Figure 1:

An example SN P system used to clarify the definitions.

We first consider the case in which the system with exhaustive use of rules works in the parallel manner. This means that at each step, each neuron that can apply a rule should do it. At step 1, both neurons 1 and 2 can fire by the rule a^2/a → a. Under the exhaustive use of rules, neuron 1 sends two spikes to neuron 2, while neuron 2 sends two spikes to neuron 1 and to the environment. In this way, each of neurons 1 and 2 again contains two spikes, which means that the rule a^2/a → a can be applied at the next step. This procedure can be repeated until the rule a^2 → a is applied in neuron 1. Assume that the rule a^2 → a in neuron 1 is applied at step t. The application of this rule removes two spikes from neuron 1 and sends one spike to neuron 2. At this step, the rule a^2/a → a in neuron 2 is also applied, sending two spikes to neuron 1. So neuron 1 accumulates two spikes, while neuron 2 accumulates one spike. The two spikes in neuron 1 can trigger the rule a^2/a → a or the rule a^2 → a, nondeterministically chosen.

If rule a^2/a → a is applied, two spikes are sent to neuron 2. At this moment, no spike is contained in neuron 1, and three spikes are accumulated in neuron 2, by which neuron 2 is blocked. So the system halts. If rule a^2 → a is applied, then one spike is sent to neuron 2. At this moment, no spike is contained in neuron 1, and two spikes are accumulated in neuron 2. The two spikes trigger rule a^2/a → a in neuron 2, by which two spikes are sent back to neuron 1 (and two more spikes are sent to the environment). This procedure can be repeated until rule a^2/a → a is applied in neuron 1, and then the system halts. From this explanation, it is not difficult to find that the system always sends an even number of spikes to the environment. Due to the nondeterminism of choosing the rule a^2/a → a or a^2 → a in neuron 1, the system working in the parallel manner generates the set {2n | n ≥ 1}.

Let us now consider the case in which the system works in the max-sequential manner, which means that at step 1, only one of neurons 1 and 2 will fire, nondeterministically chosen. If neuron 2 fires at step 1, then two spikes are sent to neuron 1 and to the environment. At this moment, neuron 1 holds four spikes, and thus it is blocked. Since no spike is contained in neuron 2, the system halts at step 2, having sent two spikes to the environment. If neuron 1 fires at step 1, rule a^2/a → a or a^2 → a can be applied, nondeterministically chosen. Two spikes or one spike will be sent to neuron 2, and thus neuron 2 holds four or three spikes, respectively. So neuron 2 is blocked, and the system halts without having sent any spike to the environment. We can check that the system working in the max-sequential manner sends zero or two spikes to the environment, which means that it generates the set {0, 2}.

Since each of neurons 1 and 2 is enabled only if it contains exactly two spikes, and both neurons initially hold two spikes, the system works in the same way under the parallel manner and the max-pseudo-sequential manner. So the system working in the max-pseudo-sequential manner generates the set {2n | n ≥ 1}.

It is not difficult to check that the system generates the set {0, 2} under the min-sequential manner and the set {2n | n ≥ 1} under the min-pseudo-sequential manner (at every step, all active neurons of this system hold exactly two spikes, so the modes induced by the minimum and the maximum spike number coincide here).
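These four result sets can be verified mechanically by enumerating all computations of the example up to a step bound. The Python sketch below assumes the rules suggested by the walkthrough (neuron 1 with a^2/a → a and a^2 → a, output neuron 2 with a^2/a → a, each enabled only when its neuron holds exactly two spikes); since Figure 1 is not reproduced here, these rules are an assumption:

```python
from itertools import product

# Each rule is given as (c, p): spikes consumed and produced per application.
RULES = {1: [(1, 1), (2, 1)], 2: [(1, 1)]}

def halting_outputs(mode, max_steps=8):
    """Collect the outputs of all computations halting within max_steps."""
    results = set()
    frontier = [((2, 2), 0)]            # ((spikes in 1, spikes in 2), sent out)
    for _ in range(max_steps):
        nxt = []
        for (n1, n2), out in frontier:
            active = [i for i, k in ((1, n1), (2, n2)) if k == 2]
            if not active:
                results.add(out)        # no rule applicable: the system halts
                continue
            # active neurons always tie (two spikes each), so pseudo-sequential
            # modes fire all of them and sequential modes fire one of them
            for firing in ([active] if mode.endswith("ps") else [[i] for i in active]):
                for combo in product(*(RULES[i] for i in firing)):
                    d = {1: 0, 2: 0}
                    dout = 0
                    for i, (c, p) in zip(firing, combo):
                        sent = (2 // c) * p      # exhaustive use on two spikes
                        d[3 - i] += sent         # 1 sends to 2, 2 sends to 1
                        if i == 2:
                            dout += sent         # neuron 2 also feeds the environment
                    new1 = (0 if 1 in firing else n1) + d[1]
                    new2 = (0 if 2 in firing else n2) + d[2]
                    nxt.append(((new1, new2), out + dout))
        frontier = nxt
    return results
```

Enumerating configurations breadth-first keeps the nondeterminism explicit: the sequential modes branch on which neuron fires, while the rule choice in neuron 1 branches in every mode.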

5.  MaxsEx and MaxpsEx SN P Systems

In this section, we prove that MaxsEx and MaxpsEx SN P systems are Turing universal.

Theorem 1. 

N_maxs ESNP_* = NRE.

Proof. 

We only have to prove the inclusion NRE ⊆ N_maxs ESNP_*; the converse inclusion is straightforward but cumbersome (for similar technical details, refer to section 8.1 in Păun, 2002).

To this aim, we prove that register machines can be simulated by MaxsEx SN P systems (as we know from section 2, register machines with at least three registers characterize NRE). Let M = (m, H, l0, lh, I) be a register machine. In what follows, a specific MaxsEx SN P system Π is constructed to simulate the register machine M.

The system Π consists of three types of modules: the ADD module, the SUB module, and the FIN module, shown in Figures 2 to 4, respectively. The ADD and SUB modules are used to simulate the ADD and SUB instructions of M, respectively; the FIN module is used to output the computation result.

Figure 2:

Module ADD for simulating an ADD instruction li : (ADD(r), lj, lk), working in the max-sequential manner.

In general, for each register r of M, a neuron σr in Π is associated whose content corresponds to the content of the register. Specifically, if the register r holds the number n ≥ 0, neuron σr will contain 6^{n+1} spikes. For each label li of an instruction in M, a neuron σli in Π is associated. Initially, all neurons have no spike, with the exception of neuron σl0, which is associated with the initial instruction l0 of M. Neuron σl0 contains two spikes in the initial configuration, which corresponds to the fact that M starts a computation by applying the instruction with label l0. During a computation, once neuron σli receives two spikes, it becomes active and starts to simulate an instruction li : (OP(r), lj, lk) (OP is the operation ADD or SUB) of M: starting with neuron σli activated, the system operates on neuron σr as requested by OP, then introduces two spikes in one of the neurons σlj, σlk, which in this way becomes active. When neuron σlh, associated with the halting label of M, is activated, the computation in M is completely simulated in Π.
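The encoding just described behaves like a base-6 exponent (register value n corresponds to 6^{n+1} spikes); a small sanity check, with function names of our own choosing:

```python
def encode(n):
    """Register value n is represented by 6**(n+1) spikes."""
    return 6 ** (n + 1)

def decode(spikes):
    """Recover the register value from a spike count of the form 6**(n+1)."""
    n = 0
    while spikes > 6:
        spikes //= 6
        n += 1
    return n
```

Note that encode(0) == 6, matching the six spikes that signal an empty register in the SUB module, and that multiplying the spike count by 6 (as the ADD module does) increments the encoded value.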

Note that in both the ADD and the SUB modules, the rules concerning neurons σlq (q = j or k) are written in terms of a function δ defined on H, because we do not know whether lj and lk are labels of ADD, SUB, or halting instructions; the value δ(lq) depends on which of these types of instruction the label lq identifies.

In neuron σr of all modules, the indications ADD and SUB denote the rules that are used when ADD and SUB instructions are simulated, respectively.

Module ADD (shown in Figure 2), simulating an ADD instruction li : (ADD(r), lj, lk): Let us assume that at some step t, the system starts to simulate an ADD instruction li : (ADD(r), lj, lk) of M, and that register r holds the number n. At that moment, two spikes are present in neuron σli, and several spikes may be present in the neurons associated with the registers; in particular, neuron σr contains 6^{n+1} spikes. With two spikes in neuron σli, the rule a^2 → a in neuron σli is enabled, and no rule in any other neuron is enabled at step t. So neuron σli fires at step t, sending one spike to each of three auxiliary neurons of the module; as a result, one of these neurons holds one spike, another holds two spikes, and a third holds three spikes, so the rules in these neurons are enabled. Since the system works in the max-sequential manner, only the auxiliary neuron holding three spikes fires at step t+1; its rule is applied, and three spikes are sent to neuron σr as well as to several further auxiliary neurons of the module.

After the three spikes are received, neuron σr contains 6^{n+1}+3 spikes, and so its spiking rule is enabled. At step t+2, neuron σr fires, consuming 6^{n+1} spikes, leaving three spikes, and sending 6^{n+1} spikes to each of the neurons to which a synapse is connected from neuron σr (because of the exhaustive use of rules). In particular, each of the auxiliary neurons of the module that previously received three spikes now receives 6^{n+1} spikes; hence, each of them contains 6^{n+1}+3 spikes at step t+3. These spikes will not be used in those neurons until another spike is received.

Note that several other neurons also receive the 6^{n+1} spikes from neuron σr. If there exist ur ADD instructions (including the ADD instruction li) and vr SUB instructions that act on register r, then there are in all 6ur + 2vr auxiliary neurons to which a synapse is connected from neuron σr (as shown in Figures 2 and 3, each ADD module contains six such auxiliary neurons, and each SUB module contains two). Each of the 6ur + 2vr auxiliary neurons receives 6^{n+1} spikes; in the neurons belonging to other modules, the forgetting rule is thereby enabled. Let us recall that at step t+3, some auxiliary neurons of the current module can also fire, and neuron σr can fire by its forgetting rule. Since the system works in a sequential manner induced by maximum spike number, the auxiliary neurons holding 6^{n+1} spikes fire first, and one step is needed for the firing of each neuron. So from step t+3 to step t+6ur+2vr+2, the system removes the 6^{n+1} spikes from these auxiliary neurons. At step t+6ur+2vr+3, neuron σr fires by its forgetting rule, and so all spikes remaining in neuron σr are removed.

Figure 3:

Module SUB for simulating a SUB instruction li : (SUB(r), lj, lk), working in the max-sequential manner.

At step t+6ur+2vr+4, an auxiliary neuron fires, sending one spike to each of the six auxiliary neurons of the module that hold 6^{n+1}+3 spikes. In this way, each of them accumulates 6^{n+1}+4 spikes, and so its spiking rule is enabled. In the following six steps, these six neurons fire one after another, the order of firing being nondeterministically chosen, and each of them sends 6^{n+1} spikes to neuron σr. The number of spikes in neuron σr thus becomes 6 · 6^{n+1} = 6^{n+2}, which corresponds to the fact that the number stored in register r is incremented by one. After its spiking rule has been used, each of the six neurons still contains four spikes. In the following six steps, these neurons remove the four spikes by their forgetting rule. So all auxiliary neurons to which a synapse is connected from neuron σr return to the configuration where they contain no spike.

At step t+6ur+2vr+17, an auxiliary neuron fires, sending one spike to each of two further auxiliary neurons of the module, while the remaining neurons of the module return to their initial contents. At this moment, each of the two neurons holds two spikes, and both can fire. Since the system works in the max-sequential manner and the two neurons contain the same number of spikes, one of them is nondeterministically chosen to fire at step t+6ur+2vr+18. If the first of them fires, then two spikes are sent to neuron σlj and to the other auxiliary neuron, enabling a spiking rule in σlj and a forgetting rule in the other neuron. The other neuron fires first, at step t+6ur+2vr+19 (because of the max-sequential manner), removing its spikes; neuron σlj fires at the next step, which means that the system starts to simulate the instruction lj of M. If instead the second of the two neurons fires at step t+6ur+2vr+18, then the spikes of the first one are removed in the same way, after which neuron σlk fires, starting the simulation of the instruction lk of M.

Therefore, the simulation of the ADD instruction is correct: the system starts from neuron σli and ends in one of neurons σlj and σlk, nondeterministically chosen; at the same time, the number encoded by the spikes in neuron σr is incremented by one.
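The spike arithmetic behind this multiplication by 6 can be checked directly. In the sketch below we assume, consistently with the step counts above, that the rule applied in neuron σr has the shape E/a^6 → a^6 (this shape is our reconstruction, since the figure is not reproduced here):

```python
def exhaustive(k, c, p):
    s, r = divmod(k, c)                  # the rule is used s times
    return s * c, s * p, r               # consumed, sent to each target, left

# Neuron sigma_r holds 6^(n+1) spikes and has just received 3 more.
for n in range(6):
    consumed, produced, left = exhaustive(6 ** (n + 1) + 3, 6, 6)
    assert consumed == 6 ** (n + 1) and left == 3
    # each of the six auxiliary neurons receives 6^(n+1) spikes and later
    # returns them, so sigma_r ends up with 6 * 6^(n+1) = 6^(n+2) spikes
    assert 6 * produced == 6 ** (n + 2)
```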

Module SUB (shown in Figure 3), simulating a SUB instruction li : (SUB(r), lj, lk): Let us assume that at step t, the system starts to simulate a SUB instruction li : (SUB(r), lj, lk). This means that at this step, neuron σli contains two spikes, while the neurons associated with registers may contain several spikes. At step t, neuron σli fires, sending two spikes to each of three auxiliary neurons of the module, and so the rules in these neurons are enabled. At step t+1, one of these neurons fires (because of the max-sequential manner), sending four spikes to each of neuron σr and two auxiliary neurons. A rule in neuron σr is enabled, while no rule in the two auxiliary neurons is enabled. For neuron σr, there are two cases.

If neuron σr contains 6^{n+1} spikes, n > 0, at step t (corresponding to the fact that the number stored in register r is n), neuron σr fires at step t+2, holding 6^{n+1}+4 spikes at that moment. By using its rule exhaustively, 6^n spikes are sent to each of the two auxiliary neurons mentioned above, so that each of them accumulates 6^n+4 spikes. Note that neuron σr will also send 6^n spikes to the other auxiliary neurons to which a synapse is connected from σr. If there exist ur ADD instructions and vr SUB instructions that act on register r (including the SUB instruction li), there are 6ur + 2vr such auxiliary neurons; each of them also receives 6^n spikes from neuron σr. In the following 6ur + 2vr steps (from step t+3 to step t+6ur+2vr+2), the 6^n spikes are removed from these auxiliary neurons by the forgetting rule, one neuron per step. At step t+6ur+2vr+3, the forgetting rule in neuron σr is applied, and so all spikes in neuron σr are removed.

At step t+6ur+2vr+4, no neuron can fire except for two neurons of the module. Since one of them contains more spikes than the other, under the max-sequential manner it fires at step t+6ur+2vr+4, sending one spike to each of three neurons, among them the two auxiliary neurons holding 6^n+4 spikes. In this way, each of these two neurons accumulates 6^n+5 spikes, which enables a spiking rule in one of them and a forgetting rule in the other. Because they contain the same number of spikes, the order in which they fire is nondeterministically chosen; in fact, the two neurons fire at steps t+6ur+2vr+5 and t+6ur+2vr+6, respectively. By the spiking rule of the former neuron, neuron σr receives 6^n spikes (corresponding to the fact that the number stored in register r is decremented by one).

At step t+6ur+2vr+7, one auxiliary neuron removes its five spikes, and another fires, sending two spikes to two neurons of the module; one of them returns to the configuration where it contains two spikes, and the other accumulates three spikes, which enables its rule. At step t+6ur+2vr+8, the latter neuron fires, sending three spikes to two further neurons. In the following two steps, these two neurons fire, the order of firing being nondeterministically chosen: in one of them, the three spikes are removed by the forgetting rule; the other sends two spikes to neuron σlj, which means that the system starts to simulate instruction lj of M.

If neuron σr contains six spikes at step t (corresponding to the fact that the number stored in register r is 0), then neuron σr fires by a different rule at step t+2. In this case, we can similarly check that neuron σlk becomes active at step t+6ur+2vr+11, and the system starts to simulate instruction lk of M.

Therefore, the simulation of the SUB instruction is correct: the system starts from σli and ends in σlj with the number encoded by the spikes in σr decreased by one (if the number stored in register r is greater than 0), or in σlk with the number encoded by the spikes in σr remaining unchanged (if the number stored in register r is 0). From this explanation, we can see that there exists no interference between two SUB modules or between a SUB module and an ADD module: all spikes sent to neurons of other modules by the common neuron σr are removed before the system passes to the simulation of the next instruction. Therefore, all SUB instructions can be correctly simulated.
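The decrement case admits the same kind of arithmetic check. Here we assume the rule applied in neuron σr has the shape E/a^6 → a (again our reconstruction), so that exhaustive use turns 6^{n+1} spikes, plus the four auxiliary spikes just received, into 6^n spikes for each receiving neuron, i.e., the encoding of register value n − 1:

```python
def exhaustive(k, c, p):
    s, r = divmod(k, c)
    return s * c, s * p, r               # consumed, sent to each target, left

# sigma_r holds 6^(n+1) spikes (register value n > 0) plus 4 received spikes.
for n in range(1, 6):
    consumed, produced, left = exhaustive(6 ** (n + 1) + 4, 6, 1)
    assert produced == 6 ** n            # receivers get the encoding of n - 1
    assert left == 4                     # leftover later erased by forgetting
```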

Module FIN (shown in Figure 4), outputting the result of the computation: Let us assume that at step t, neuron σlh in Π has accumulated two spikes and fires, which means that the computation in M halts (i.e., the halting instruction is reached). We also assume that when M halts, the number n is present in register 1, so neuron σ1 contains 6^{n+1} spikes at that moment. When neuron σlh fires, it immediately sends two spikes to each of two neurons of the module, enabling a rule in each of them. At step t+1, one of these neurons fires, sending four spikes to neuron σ1 and an auxiliary neuron. Neuron σ1 fires at step t+2 (because of the max-sequential manner), sending 6^{n+1} spikes to each neuron to which a synapse is connected from neuron σ1. If there are u1 ADD instructions that act on register 1 (recall that there is no SUB instruction that acts on register 1), the system has in total 6u1+1 auxiliary neurons to which a synapse is connected from neuron σ1, including the auxiliary neuron of the FIN module in Figure 4. At step t+3, this neuron accumulates 6^{n+1}+4 spikes, while each of the other 6u1 auxiliary neurons accumulates 6^{n+1} spikes. The neuron with 6^{n+1}+4 spikes fires at step t+3 (because of the max-sequential manner), sending 6^{n+1} spikes onward and keeping four spikes. The other 6u1 neurons fire in the next 6u1 steps, removing the 6^{n+1} spikes from each of them by the forgetting rule. In the next two steps, the four spikes remaining in each of two neurons of the module are removed by forgetting rules.

Figure 4:

Module FIN for outputting the result of a computation, working in the max-sequential manner.

At step t+6u1+6, a neuron of the module fires, sending two spikes to the neuron holding the 6^{n+1} spikes passed on at step t+3; this neuron thus accumulates 6^{n+1}+2 spikes. At step t+6u1+7, it fires, sending 6^n spikes onward and leaving two spikes behind. At the next step, these two spikes are consumed, sending one spike further and enabling the rule of the next neuron. At step t+6u1+9, that neuron fires, sending 6^n spikes to two neurons; in one of them, the 6^n spikes are removed by the forgetting rule at the next step. At step t+6u1+11, a neuron fires, sending one spike to two neurons; at this step, one of them accumulates one spike, and the output neuron accumulates 6^n+1 spikes (these spikes will not be used until one more spike is received). A further neuron fires at step t+6u1+12, so the number of spikes in the output neuron becomes 6^n+2 and one spike is sent to the environment. In the next steps, only three neurons of the system keep working: the number of spikes in the output neuron is repeatedly divided by 6 until it becomes eight, and for each division, one spike is sent to the environment. Therefore, n spikes in total are sent to the environment, which is exactly the number stored in register 1 of M at the moment when the computation of M halts.
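The final counting phase can be replayed arithmetically. The sketch below (function name ours) follows the reconstructed walkthrough: one spike is emitted when the output neuron first reaches 6^n + 2 spikes, then one more per division by 6 until the terminal content 6 + 2 = 8 is reached:

```python
def fin_output(n):
    """Spikes sent to the environment by the FIN phase when register 1
    holds n >= 1, per the reconstructed walkthrough."""
    m = 6 ** n + 2     # content of the output neuron at the first emission
    sent = 1
    while m > 8:       # 8 = 6^1 + 2 is the terminal content
        m = (m - 2) // 6 + 2     # one "division by 6" of the encoded part
        sent += 1
    return sent
```

Each pass through the loop strips one factor of 6 from the encoded part, so exactly n spikes reach the environment in total, as the proof requires.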

From this description, it is not difficult to check that the register machine M is correctly simulated by the SN P system with exhaustive use of rules working in the max-sequential manner. This completes the proof of theorem 1.

The system constructed in the proof of theorem 1 works in the max-sequential manner, where one of the active neurons is nondeterministically chosen to fire when the system has several active neurons with the maximum number of spikes, which makes the system nondeterministic. If the system works in the max-pseudo-sequential manner instead, the nondeterminism from choosing one of the active neurons is lost. In what follows, we show that SN P systems without such nondeterminism are still Turing universal. That is, the following corollary holds:

Corollary 1. 

Proof. 

We can check that under the max-pseudo-sequential manner, the SUB module shown in Figure 3 can also correctly simulate a SUB instruction, and the FIN module shown in Figure 4 outputs exactly the number stored in register 1 at the moment when the register machine halts. So in order to prove corollary 1, we only need to construct an ADD module to simulate the ADD instruction. The ADD module, depicted in Figure 5, is obtained by slightly modifying the ADD module shown in Figure 2.

Figure 5:

Module ADD for simulating working in the max-pseudo-sequential manner.

Similar to the ADD module shown in Figure 2, in the module from Figure 5, the number of spikes in neuron is first multiplied by 6 (corresponding to the fact that the number stored in register r is incremented by one); then neuron or is nondeterministically chosen to fire. The firing of neurons and is achieved in the following way. After finishing the process of multiplying the number of spikes in neuron by 6, neuron accumulates exactly two spikes (one spike is received from neuron and the other spike is received from ). So neuron can fire by rule or , nondeterministically chosen. If rule in neuron is applied, then two spikes are sent to neurons and , enabling rule in and rule in . Under the max-pseudo-sequential manner, the two rules are applied at the same step. The use of rule removes two spikes from , while the use of rule in activates neuron , which means that the system starts to simulate the instruction lj of M. If rule in neuron is applied, one spike is sent to neurons and . At the next step, neurons and fire at the same time (because of the max-pseudo-sequential manner), removing the spike from neuron and sending one spike to neurons and from . Each of neurons and sends one spike to neuron , and so neuron is activated and the system starts to simulate the instruction lk of M.

It remains open whether we can construct the system in such a way that the maximum number of spikes appears in only one neuron during every computation (in such a system, the max-sequential and max-pseudo-sequential manners would coincide).

6.  MinsEx and MinpsEx SN P Systems

In the ADD module shown in Figure 2, after neuron fires, neurons , , and become active. Under the max-sequential manner, neuron can fire only after neuron fires, and then neuron fires. When neuron fires, the system starts the process in which the 6n+1 spikes in neuron are removed. When neuron fires, the system starts the process in which 6n+2 spikes are added to neuron . When neuron fires, the system starts the process in which two spikes are nondeterministically sent to neurons or . However, if the system works in the min-sequential manner, then neurons , , and will fire one by one in the order , , . That is, the system will first nondeterministically choose neurons , to fire, and only then start the other two processes: removing 6n+1 spikes from neuron and adding 6n+2 spikes to neuron . In this way, while these two processes are still running, the simulation of instruction li or lk may already start; that is, two simulations of different instructions may be in progress at the same time, which causes undesired simulation steps. Therefore, in the min-sequential manner, the system constructed in the proof of theorem 1 cannot correctly simulate a register machine.
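The four firing disciplines contrasted here differ only in which extreme of the spike counts is selected and in whether one or all selected neurons fire. A toy sketch of the selection step (hypothetical representation; mode names "maxs", "maxps", "mins", "minps" mirror MaxsEx, MaxpsEx, MinsEx, MinpsEx):

```python
# Sketch of the neuron-selection step under the four working manners:
# among the currently active neurons, max/min picks the extreme spike
# count; the sequential variant fires exactly one such neuron (chosen
# nondeterministically, modeled here with random.choice), while the
# pseudo-sequential variant fires all of them at the same step.
import random

def select_firing(active: dict, mode: str) -> set:
    """active maps neuron name -> spike count; return the neurons to fire."""
    if not active:
        return set()
    extreme = max(active.values()) if mode.startswith("max") else min(active.values())
    candidates = [n for n, k in active.items() if k == extreme]
    if mode.endswith("ps"):                 # pseudo-sequential: all extremes fire
        return set(candidates)
    return {random.choice(candidates)}      # sequential: exactly one fires

assert select_firing({"s1": 5, "s2": 5, "s3": 2}, "maxps") == {"s1", "s2"}
assert select_firing({"s1": 5, "s2": 5, "s3": 2}, "minps") == {"s3"}
```

The argument above corresponds to the observation that swapping "max" for "min" in this selection reverses the firing order of the active neurons, which is why the module of Figure 2 fails in the min-sequential manner.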

In what follows, by constructing appropriate SN P systems to simulate register machines, we prove that MinsEx and MinpsEx SN P systems are also Turing universal.

Theorem 2. 

.

Proof. 

The proof is similar to that of theorem 1. We construct a MinsEx SN P system to simulate a register machine M. The system consists of three types of modules: the ADD, SUB, and FIN modules shown in Figures 6, 7, and 8, respectively. In these modules, function is defined as in the proof of theorem 1. The number n () stored in register r is encoded as 5n+1 spikes in neuron , and each neuron associated with an instruction is activated when it contains three spikes.
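The register machines being simulated are of the standard Minsky form, with nondeterministic ADD instructions and zero-testing SUB instructions. A minimal runnable sketch (our own tuple-based encoding, not the authors' notation), useful for checking small programs against the behavior the modules must reproduce:

```python
# Toy register machine of the kind simulated in these proofs:
#   ("ADD", r, lj, lk): increment register r, jump nondeterministically to lj or lk
#   ("SUB", r, lj, lk): if register r > 0, decrement it and jump to lj; else jump to lk
#   ("HALT",):          stop; the result is the content of register 1
import random

def run(program: dict, start, registers: dict) -> int:
    label = start
    while True:
        instr = program[label]
        if instr[0] == "HALT":
            return registers[1]
        op, r, lj, lk = instr
        if op == "ADD":
            registers[r] += 1
            label = random.choice((lj, lk))
        else:  # SUB: zero test plus conditional decrement
            if registers[r] > 0:
                registers[r] -= 1
                label = lj
            else:
                label = lk

# Example: move the content of register 2 into register 1, then halt.
prog = {
    0: ("SUB", 2, 1, 2),
    1: ("ADD", 1, 0, 0),
    2: ("HALT",),
}
assert run(prog, 0, {1: 0, 2: 3}) == 3
```

Each module family (ADD, SUB, FIN) in the construction implements exactly one branch of this interpreter loop in terms of spikes.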

Figure 6:

Module ADD for simulating working in the min-sequential manner.

Figure 7:

Module SUB for simulating working in the min-sequential manner.

Figure 8:

Module FIN for ending the computation working in the min-sequential manner.

In the ADD module shown in Figure 6, when neuron fires, the system starts the following two processes: removing 5n+1 spikes from neuron and adding 5n+2 spikes to neuron ; neurons , , take care of the adding process. When neuron fires, the system starts to nondeterministically send three spikes to neuron or , which is implemented by neurons , , , and , .

In the SUB module shown in Figure 7, when neuron fires, the system starts to test whether neuron contains five spikes (i.e., the number in register r of M is 0). If neuron contains five spikes, then fires, sending one spike to neuron . With the collaboration of neurons , , , neuron receives three spikes, while neuron receives no spike. If neuron contains 5n+1, , spikes, then neuron will not fire; the result of the collaborative work of neurons , , is that neuron receives no spike, while neuron receives three spikes.

In the FIN module shown in Figure 8, the auxiliary neurons , , implement the process: for every five spikes in neuron , neuron will receive four spikes and fire, sending one spike into the environment.

Similar to the proof of theorem 1, we can check the work of modules ADD, SUB, and FIN step by step. We omit the details here.

As in the case of the max-pseudo-sequential manner, we can prove that the universality result holds for MinpsEx SN P systems. That is, the following corollary holds:

Corollary 2. 

Proof. 

We can check that the SUB module from Figure 7 and the FIN module from Figure 8 also work correctly in the min-pseudo-sequential manner. A slightly modified version of the ADD module from Figure 6, shown in Figure 9, can correctly simulate an ADD instruction.

Figure 9:

Module ADD for simulating working in the min-pseudo-sequential manner.

It remains open whether we can construct the system in such a way that the minimum number of spikes appears in only one neuron during every computation (in such a system, the min-sequential and min-pseudo-sequential manners would coincide).

7.  Conclusion and Discussion

In this letter, we investigated the computation power of MaxsEx, MaxpsEx, MinsEx, and MinpsEx SN P systems and proved that all four types are Turing universal. The results show that the power of SN P systems is not reduced when these restrictions are imposed on the working manner of neurons and rules, or when the systems lack the nondeterminism that results from choosing which of the active neurons with the maximum or minimum number of spikes fires.

In the SN P systems constructed in this work, forgetting rules are used, but the feature of delay is not. It is of interest to investigate the contribution of delays and forgetting rules to the computation power. In particular, a question for future investigation is whether MaxsEx, MaxpsEx, MinsEx, and MinpsEx SN P systems retain the same computation power when forgetting rules (and delays) are removed.

In this work, we define the result of a computation as the number of spikes sent to the environment by the output neuron. However, in neural computation based on spikes, it is usual to use time as the data support, so it is also natural to define the result of a computation as a time associated with spikes, for example, the interval of time elapsed between the first two consecutive spikes sent out by the output neuron. The computation power of MaxsEx, MaxpsEx, MinsEx, and MinpsEx SN P systems under this definition deserves to be investigated.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61033003, 91130034, 61272152, and 61320106005), the Ph.D. Programs Foundation of Ministry of Education of China (20100142110072 and 2012014213008), the Natural Science Foundation of Hubei Province (2011CDA027), the Natural Science Foundation of Anhui Higher Education Institutions of China (KJ2012A010 and KJ2012A008), and the Scientific Research Foundation for Doctor of Anhui University under grant 02203104.

A series of suggestions made by the anonymous referees, who carefully read this letter, are gratefully acknowledged.

References

Cavaliere, M., Ibarra, O. H., Păun, G., Egecioglu, O., Ionescu, M., & Woodworth, S. (2009). Asynchronous spiking neural P systems. Theoretical Computer Science, 410, 2352–2364.

Chen, H., Freund, R., Ionescu, M., Păun, G., & Pérez-Jiménez, M. J. (2007). On string languages generated by spiking neural P systems. Fundamenta Informaticae, 75, 141–162.

Chen, H., Ionescu, M., Ishdorj, T.-O., Păun, A., Păun, G., & Pérez-Jiménez, M. J. (2008). Spiking neural P systems with extended rules: Universality and languages. Natural Computing, 7, 147–166.

Freund, R. (2005). Asynchronous P systems and P systems working in the sequential mode. In G. Mauri, G. Păun, M. J. Pérez-Jiménez, G. Rozenberg, & A. Salomaa (Eds.), Membrane computing (pp. 36–62). New York: Springer-Verlag.

Ibarra, O. H., Păun, A., & Rodríguez-Patón, A. (2009). Sequential SN P systems based on min/max spike number. Theoretical Computer Science, 410, 2982–2991.

Ibarra, O. H., Woodworth, S., Yu, F., & Păun, A. (2006). On spiking neural P systems and partially blind counter machines. In Proceedings of the 5th International Conference on Unconventional Computation (pp. 113–129). Berlin: Springer-Verlag.

Ionescu, M., Păun, G., & Yokomori, T. (2006). Spiking neural P systems. Fundamenta Informaticae, 71, 279–308.

Ionescu, M., Păun, G., & Yokomori, T. (2007). Spiking neural P systems with an exhaustive use of rules. International Journal of Unconventional Computing, 3, 135–154.

Ishdorj, T.-O., Leporati, A., Pan, L., Zeng, X., & Zhang, X. (2010). Deterministic solutions to QSAT and Q3SAT by spiking neural P systems with pre-computed resources. Theoretical Computer Science, 411, 2345–2358.

Minsky, M. (1967). Computation: Finite and infinite machines. Upper Saddle River, NJ: Prentice Hall.

Pan, L., Wang, J., & Hoogeboom, H. J. (2012). Spiking neural P systems with astrocytes. Neural Computation, 24, 805–825.

Pan, L., Zeng, X., & Zhang, X. (2011). Time-free spiking neural P systems. Neural Computation, 23, 1320–1342.

Păun, A., & Păun, G. (2007). Small universal spiking neural P systems. BioSystems, 90, 48–60.

Păun, G. (2000). Computing with membranes. Journal of Computer and System Sciences, 61, 108–143.

Păun, G. (2002). Membrane computing: An introduction. Berlin: Springer-Verlag.

Păun, G., & Pérez-Jiménez, M. J. (2009). Spiking neural P systems: Recent results, research topics. Algorithmic Bioprocesses, 5, 273–291.

Păun, G., Rozenberg, G., & Salomaa, A. (Eds.). (2010). The Oxford handbook of membrane computing. New York: Oxford University Press.

Rozenberg, G., & Salomaa, A. (Eds.). (1997). Handbook of formal languages. Berlin: Springer-Verlag.

Wang, J., Hoogeboom, H. J., Pan, L., Păun, G., & Pérez-Jiménez, M. J. (2010). Spiking neural P systems with weights. Neural Computation, 22, 2615–2646.

Xu, L., & Jeavons, P. (2013). Simple neural-like P systems for maximal independent set selection. Neural Computation, 25, 1642–1659.

Zhang, X., Luo, B., Fang, X., & Pan, L. (2012). Sequential spiking neural P systems with exhaustive use of rules. BioSystems, 108, 52–62.

Zhang, X., Zeng, X., & Pan, L. (2008). On string languages generated by spiking neural P systems with exhaustive use of rules. Natural Computing, 7, 535–549.

Author notes

*Corresponding author