Abstract

Inspired by gamma-band oscillations and other neurobiological discoveries, neural network research has shifted its emphasis toward temporal coding, which uses the explicit times at which spikes occur as an essential dimension of neural representations. We present a feature-linking model (FLM) that uses the timing of spikes to encode information. The first spiking time of FLM is applied to image enhancement, and the processing mechanisms are consistent with the human visual system. The enhancement algorithm boosts details while preserving the information of the input image. Experiments demonstrate that the proposed method is effective.

1  Introduction

The description of neural processing has piqued interest in temporal correlation, which involves the precise timing of spikes (Hopfield, 1995; Singer & Gray, 1995; Gray, 1999; Victor, 2000; Reinagel & Reid, 2002; Wang, 2005; VanRullen, Guyonneau, & Thorpe, 2005; Izhikevich, 2006; von der Malsburg, Phillips, & Singer, 2010; Gütig, Gollisch, Sompolinsky, & Meister, 2013; Nikolić, Fries, & Singer, 2013). Neurons communicate by spikes that carry information about their time of arrival, and stimulus information can be encoded in the timing of individual spikes (Victor, 2000). Neurons encode information about the spatial image content in the timing of the first spike (Gütig et al., 2013; Gollisch & Meister, 2008). Neural networks can represent information through the explicit time at which a spike occurs, and Hopfield (1995) pointed out that gamma-band oscillations play a very important role in representing information. Gamma-band oscillations are a fundamental process in cortical computation. They have been discovered in the primary visual cortex (Eckhorn et al., 1988; Gray, König, Engel, & Singer, 1989), and numerous studies have discussed the processes underlying them (Fries, 2009; Buzsáki & Wang, 2012). After the discovery of gamma-band oscillations (Eckhorn et al., 1988; Gray et al., 1989), Eckhorn, Reitboeck, Arndt, and Dicke (1990) proposed the linking field network inspired by gamma-band oscillations, and it was applied to scene segmentation using temporal correlation (Stoecker, Reitboeck, & Eckhorn, 1996). Temporal correlation provides an elegant method of scene analysis and may encode feature binding among neurons (Milner, 1974; von der Malsburg, 1994; Gray, 1999). These findings underpin the binding problem addressed in a special issue of Neuron in September 1999.
A study of the dynamics of coupled neural oscillators that interact with spikes via a fast threshold modulation (Somers & Kopell, 1993, 1995) produces a temporal correlation approach for solving the problem of scene analysis (Wang & Terman, 1997; Wang, 2005). Béroule (2004) surveyed how temporal correlation had been implicated in perception, learning, and memory. Synaptic efficacy among neurons was modified under the influence of spike-timing-dependent plasticity (Izhikevich, 2006; Lubenov & Siapas, 2008), and Izhikevich (2004, 2006, 2007) studied various models of cortical neurons and analyzed their features and computational efficiencies. Haken (2005, 2008) devoted work to the relationship between the synchronization of spikes and pattern recognition.

Johnson et al. studied synchronous spike dynamics and developed pulse-coupled neural networks (PCNN) (Johnson & Ritter, 1993; Johnson, 1994; Johnson & Padgett, 1999), which have been widely applied to image processing (Ranganath, Kuntimad, & Johnson, 1995; Johnson & Padgett, 1999; Ranganath & Kuntimad, 1999; Kuntimad & Ranganath, 1999; Ma, Zhan, & Wang, 2011; Lindblad & Kinser, 2013). The evidence of precise spike-timing dynamics can be seen from a special issue of IEEE Transactions on Neural Networks on PCNN in May 1999 and on temporal coding in July 2004. The time series of PCNN, the summation of spikes in a time course, has been applied to invariant image feature extraction (Johnson, 1994; Zhang, Zhan, & Ma, 2007; Zhan, Zhang, & Ma, 2009). The time matrix of PCNN is defined to record the time when each neuron fires the first spike (Johnson & Padgett, 1999). It was used to mark different regions for image segmentation (Stewart, Fermin, & Opper, 2002). Zhan et al. (2009) found that the time matrix had a high sensitivity for low intensities but a low sensitivity for high intensities.

In this letter, we propose a feature linking model (FLM). We find that the time matrix of FLM has a logarithmic relationship with the stimulus when the threshold decays exponentially and that it simultaneously records the timing of spikes. FLM has two inputs: feeding inputs and linking inputs. It has an input structure similar to that of PCNN, but there are two leaky integrators in FLM rather than the three in PCNN. FLM is more effective for obtaining synchronization within a region and desynchronization among different regions. If the feeding synaptic weight of FLM is set to 0, the membrane potential is the same as in the spiking cortical model (SCM) (Zhan et al., 2009). If the linking synaptic weight of FLM is set to 0, the membrane potential is the same as in the intersecting cortical model (ICM) (Ekblad & Kinser, 2004). Besides the mechanism of desynchronization, we use the time matrix as a fundamental equation of FLM to enhance image contrast.

Image enhancement improves visual appearance in some predefined sense. In most cases, the enhancement effects are evaluated by human visual perception, and image enhancement methods improve the detail for the human visual system (HVS). Most image enhancement methods are based on histograms in the image domain (Stark, 2000; Arici, Dikbas, & Altunbasak, 2009; Gonzalez, Woods, & Eddins, 2009; Xu, Zhai, Wu, & Yang, 2014), and histogram equalization is one of the best-known methods. The histogram is also combined with other techniques for image enhancement (Gonzalez et al., 2009; Tizhoosh, 2000; Cheng & Xu, 2000). Histogram-based methods are useful for images with a poor intensity distribution. Details can be enhanced by filters in the image domain or some transform domains that increase high-frequency coefficients (Starck, Murtagh, Candes, & Donoho, 2003; Tang, Peli, & Acton, 2003; Tang, Kim, & Peli, 2004). With these methods, it is difficult to select the parameters for the high-frequency components. The time matrix has been applied to image enhancement with SCM (Zhan et al., 2009; Ma, Teng, Zhan, & Zhang, 2012), but the results of SCM suffer from overenhancement and render parts of dark regions white.

FLM is applied to image enhancement based on the timing of the first spike, which reveals much of the image information. There is a corresponding neuron for each pixel in the image, and the stimulus corresponds to the grayscale intensity of an image. A natural image is input to the network, and the output is obtained based on the time matrix. The time matrix is recorded in the single-pass working form of FLM. The time matrix has an approximately logarithmic relation with the stimulus matrix, which is consistent with the Weber-Fechner law. We set parameters carefully under the qualitative analysis of FLM and simulate the Mach band effect in the image enhancement algorithm. The enhancement mechanism of FLM is consistent with the HVS.

This paper makes the following contributions:

  1. A neural network FLM is designed. Besides the mechanism of linking modulation and dynamic threshold inspired by the gamma band oscillations, the time matrix of FLM, which has neurophysiological support, is emphasized in this paper.

  2. We use the single-pass working form to obtain the time matrix. This form makes it easy to understand the time structure of FLM. FLM produces spikes synchronously via the modulation of two types of synaptic inputs, and we analyze two types of waves that are related to these synaptic inputs.

  3. An image enhancement method is proposed. The method simulates the Mach band effect well, and the processing mechanism is consistent with the Weber-Fechner law. Thus, the processed results of FLM are consistent with HVS.

The rest of the letter is organized as follows. In section 2, we present FLM. In section 3, we describe the time matrix and the single-pass working form of FLM. In section 4, we introduce image enhancement in detail. Section 5 presents numerical experiments and comparison results. Section 6 concludes with some discussion.

2  Feature Linking Model

The feature linking model (FLM) has three components: the membrane potential, the threshold, and the action potential. A dendrite receives postsynaptic action potentials through synapses from its receptive field, and the produced action potential is transferred to the neighboring neurons by means of localized synapses located on their dendrites. Electrical charges at the synapses produce the membrane potential. If the potential is large enough to exceed a threshold, the neuron generates an action potential, or spike.

In FLM, the membrane potential and the threshold are represented by leaky integrators.

2.1  Leaky Integrator

Leaky integrators are the fundamental components of neural networks (Koch & Segev, 2000). The dynamic potential of a neural oscillator is described via a leaky integrator,
\[
\frac{dU(t)}{dt} = -\lambda\,U(t) + s, \tag{2.1}
\]
where t is time, s is the input, and λ is the leak rate.
The potential, equation 2.1, can be discretized as
\[
U[n] = (1 - \lambda)\,U[n-1] + s, \tag{2.2}
\]
where U[n] is the discretized potential and n is the discrete time.
We can rewrite equation 2.2 as
\[
U[n] = f\,U[n-1] + s, \tag{2.3}
\]
where f = 1 − λ is the attenuation time constant of the leaky integrator.
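The behavior of the discretized leaky integrator can be sketched in a few lines of Python; the parameter values below are illustrative, not taken from the letter:

```python
# Hedged sketch: the discretized leaky integrator U[n] = f*U[n-1] + s
# (equation 2.3). With 0 < f < 1 and a constant input s, U[n] converges
# to the stable value s / (1 - f).
def leaky_integrator(s, f, steps):
    u = 0.0
    trace = []
    for _ in range(steps):
        u = f * u + s        # one discrete-time update
        trace.append(u)
    return trace

trace = leaky_integrator(s=0.5, f=0.8, steps=100)
# After many steps the potential settles near s / (1 - f) = 2.5.
print(round(trace[-1], 4))  # prints 2.5
```

The stable value s/(1 − f) reappears in section 4, where it underlies the logarithmic relationship between the time matrix and the stimulus.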

2.2  Membrane Potential

A cortical neuron is mostly bidirectionally connected: feeding synapses are feedforward, and linking synapses are feedback (Eckhorn et al., 1990; Gove, Grossberg, & Mingolla, 1995; Barnes & Mingolla, 2013; Brosch & Neumann, 2014). As shown in Figure 1, a neuron has feeding synapses and lateral linking synapses. Feeding synapses are connected to a spatially corresponding stimulus, and lateral linking synapses are connected to the outputs of neighboring neurons within a predetermined radius (Eckhorn et al., 1990; Johnson & Padgett, 1999; Zhan et al., 2009). Locally excitatory linking inputs include a negative global inhibitory term that supports desynchronization (Stewart et al., 2002; Reitboeck, Stoecker, & Hahn, 1993; Kuntimad & Ranganath, 1999). Therefore, the dendritic signals to the neuron are the feeding inputs and the linking inputs,
\[
F_{ij}[n] = S_{ij} + \sum_{kl} M_{ijkl}\,Y_{kl}[n-1], \tag{2.4}
\]
\[
L_{ij}[n] = \sum_{pq} W_{ijpq}\,Y_{pq}[n-1] - d, \tag{2.5}
\]
where each neuron is denoted with indices (i, j), one of its neighboring neurons is denoted with indices (k, l) or (p, q), F_{ij}[n] denotes a feeding input, Y is the postsynaptic action potential, S_{ij} is the stimulus for the neuron, M_{ijkl} is a synaptic weight applied to feeding inputs, L_{ij}[n] denotes a linking input, W_{ijpq} is a synaptic weight applied to linking inputs, and d is a positive constant for global inhibition.
Figure 1:

Schematic of the feature linking model.


Stimulus-driven feedforward streams are combined with stimulus-induced feedback streams to enable synchronization (Eckhorn et al., 1990; Brosch & Neumann, 2014). The membrane potential of the neuron is instantiated as a leaky integrator,
\[
U_{ij}[n] = f\,U_{ij}[n-1] + F_{ij}[n]\,\bigl(1 + \beta\,L_{ij}[n]\bigr), \tag{2.6}
\]
where f is the attenuation time constant of the membrane potential and β is the linking strength.
We substitute equations 2.4 and 2.5 into equation 2.6 and obtain the neural membrane potential as
\[
U_{ij}[n] = f\,U_{ij}[n-1] + S_{ij} + \sum_{kl} M_{ijkl}\,Y_{kl}[n-1] + \beta\,S_{ij}\,L_{ij}[n] + \beta\,L_{ij}[n]\sum_{kl} M_{ijkl}\,Y_{kl}[n-1]. \tag{2.7}
\]

2.3  Threshold

The threshold of a neuron is represented by the leaky integrator (French & Stein, 1970; Eckhorn et al., 1990). The input of the threshold is the postsynaptic action potential,
\[
E_{ij}[n] = g\,E_{ij}[n-1] + h\,Y_{ij}[n-1], \tag{2.8}
\]
where g is the attenuation time constant of the threshold, h is a magnitude adjustment, and Y_{ij}[n−1] is the postsynaptic action potential.

The postsynaptic action potential increases the threshold by an amount h so that a secondary action potential cannot be generated during a certain period, and the increased threshold decays with the time constant g.

Before the neuron produces its first action potential, the threshold decreases exponentially from the initial threshold E_{ij}[0] as n increases:
\[
E_{ij}[n] = g^{\,n}\,E_{ij}[0]. \tag{2.9}
\]

Besides exponential decay, linear decay can be used in these models (Johnson & Padgett, 1999); the decay function, equation 2.9, can therefore be replaced with other decay functions.

2.4  Action Potential

Precise spike timing is a significant element in neural encoding. During iterations, when the membrane potential of a neuron exceeds its threshold, the neuron produces an action potential:
\[
Y_{ij}[n] =
\begin{cases}
1, & \text{if } U_{ij}[n] > E_{ij}[n], \\
0, & \text{otherwise}.
\end{cases} \tag{2.10}
\]

2.5  Feature Linking Model

The FLM is described by equations 2.7, 2.8, and 2.10.

The factoring of equation 2.7 shows that the membrane potential is composed of a leaky integrator term, a stimulus, a feeding synapse term, a linking synapse term, and a multiplicative term. In the multiplicative term, β, M_{ijkl}, and W_{ijpq} are in the range [0, 1], and their product is much smaller than the other terms of the membrane potential. Because the multiplicative term is a very small modulation term, we can omit its effect. Therefore, the membrane potential can be given by
\[
U_{ij}[n] = f\,U_{ij}[n-1] + S_{ij} + \alpha\sum_{kl} M_{ijkl}\,Y_{kl}[n-1] + \beta\,S_{ij}\,L_{ij}[n], \tag{2.11}
\]
in which we add the feeding strength α for the feeding synapse in order to analyze the model easily.

Different from the integrate-and-fire model, FLM has a secondary synapse and a dynamic threshold. The secondary synapse is the linking synapse inspired by gamma-band synchronization (Eckhorn et al., 1990), and the dynamic threshold is designed to simulate the refractory period of a neuron (French & Stein, 1970; Eckhorn et al., 1990), so FLM has properties close to a biological neural structure. FLM has fewer parameters and variables than PCNN: FLM is described by two leaky integrators, whereas PCNN has three. FLM simplifies the membrane potential to a single equation rather than the three used in PCNN, because the membrane potential of most biological neural networks is represented by a leaky integrator (Koch & Segev, 2000). In PCNN, the pulse period is emphasized (Johnson & Ritter, 1993; Johnson, 1994; Johnson & Padgett, 1999), but we use the time matrix of FLM, which has neurophysiological support, to process a signal. The time matrix records the firing order of the neurons and reflects the synchronization. With the global inhibition term in FLM, it is effective to obtain synchronization of neurons within each single region and desynchronization among different regions (Stewart et al., 2002).

Both PCNN and FLM have two types of synapses; SCM has only linking synapses, and ICM has only feeding synapses. If the feeding strength is set to 0, FLM is the same as SCM (Zhan et al., 2009). If the linking strength is set to 0, FLM is the same as ICM (Ekblad & Kinser, 2004). Therefore, FLM is flexible with regard to the setting of its parameters.

Because the synaptic input is a weakly coupled term relative to the stimulus, the feeding and linking strengths are usually set to small values. If both are set to 0, we obtain the central terms of the membrane potential:
\[
U_{ij}[n] = f\,U_{ij}[n-1] + S_{ij}. \tag{2.12}
\]
With equation 2.12 and U_{ij}[0] = 0, the relationship between the membrane potential and the iteration time n is expressed by
\[
U_{ij}[n] = \frac{1 - f^{\,n}}{1 - f}\,S_{ij}. \tag{2.13}
\]

2.6  Feature Linking via Synchronization

There are two types of synaptic inputs in FLM. When postsynaptic action potentials pass through the synapses, the feeding and linking synaptic inputs modulate the membrane potentials of neurons in the neighborhood so that they become coupled, and the coupled neurons produce spikes synchronously.

We explore the concept in the context of an example in which we analyze the feeding and linking synaptic modulation independently. The feeding synaptic modulation is given by
\[
U_{ij}[n] = f\,U_{ij}[n-1] + S_{ij} + \alpha\sum_{kl} M_{ijkl}\,Y_{kl}[n-1], \tag{2.14}
\]
\[
Y_{ij}[n] =
\begin{cases}
1, & \text{if } U_{ij}[n] > E_{ij}[n], \\
0, & \text{otherwise}.
\end{cases} \tag{2.15}
\]
The linking synaptic modulation is given by
\[
U_{ij}[n] = f\,U_{ij}[n-1] + S_{ij}\,\bigl(1 + \beta\,L_{ij}[n]\bigr), \tag{2.16}
\]
\[
Y_{ij}[n] =
\begin{cases}
1, & \text{if } U_{ij}[n] > E_{ij}[n], \\
0, & \text{otherwise}.
\end{cases} \tag{2.17}
\]
In this example, S is a matrix whose values are set to 0 except those in the center window, which are shown in Figure 2. The initial values of the spike matrix are assigned 0 except the center one, which is 1. The two coupling strengths are assigned 0.7 and 0.0172. The weight matrices M and W are set to
formula
2.18
Figure 2:

Values in the center window of the stimulus matrix for the example.


As shown in Figure 3, the changes in the matrix Y look like traveling waves. The example shows that once a neuron fires, its efficacy persists through the iterations and spreads as waves. There are two distinct types of synchronization in FLM: stimulus forced and stimulus induced (Eckhorn et al., 1990). Stimulus-forced synchronization is related to the feeding waves, and stimulus-induced synchronization is related to the linking waves. The two kinds of waves spread from a central neuron with radii that increase step by step. All the neurons in the neighborhood are coupled to each other, and a fired neuron captures some of its neighbors to fire synchronously; once several neighboring neurons have fired, their own neighbors can be captured in turn. In the feeding waves, the efficacy of a fired neuron propagates to all neurons; the linking waves recruit only those neurons whose stimuli are similar to that of the central neuron.

Figure 3:

Top row: Feeding waves. Bottom row: Linking waves.


Because overall structural features are the primary data of human perception (Arnheim, 1954), FLM processes images in the light of the HVS. The synchronization reflects the fact that two pixels with similar intensity in a neighborhood usually cannot be distinguished by a human observer. FLM has the property that neurons with similar stimuli in a region are modulated by the waves so that the corresponding neurons fire synchronously.

3  Time Matrix

3.1  Time Matrix

Most of the action of neurons occurs when they produce their first action potentials. A time matrix T is defined to record the first firing time of the neurons (Johnson & Padgett, 1999; Stewart et al., 2002; Zhan et al., 2009).

In implementation, the time matrix T is obtained on the premise that the threshold amplification factor h is set to a value large enough to ensure that neurons fire only once:
\[
T_{ij}[n] = T_{ij}[n-1] + n\,Y_{ij}[n]. \tag{3.1}
\]

3.2  Single Pass

The single pass, a working form of FLM, is complete when all of the neurons have generated action potentials (Johnson & Padgett, 1999). The single-pass form is used to obtain the time matrix.

The single-pass working form of FLM specifies the network's stopping condition: the network stops automatically when all neurons have fired.

The single pass can be realized when the threshold amplification factor h is set to a large enough value that neurons generate spikes only once (Kuntimad & Ranganath, 1999).

In implementation, the single-pass working form is given by algorithm 1, in which the network stops once the number of fired neurons equals the total number of neurons.

formula
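As a hedged illustration of algorithm 1, the following Python sketch runs the single-pass form under the simplifying assumptions used in section 4 (no synaptic coupling, an exponentially decaying threshold, and h large enough that each neuron fires only once); the parameter values and array layout are assumptions, not the letter's exact setup:

```python
import numpy as np

# Hedged sketch of the single-pass working form: each neuron follows
# U[n] = f*U[n-1] + S (eq. 2.12 with both strengths 0), the threshold
# decays as in eq. 2.8/2.9, and a huge h prevents a second firing.
def single_pass_time_matrix(S, f=0.9, g=0.98, E0=1.0, h=2e10, max_iter=10000):
    U = np.zeros_like(S, dtype=float)
    E = np.full_like(S, E0, dtype=float)
    T = np.zeros_like(S, dtype=float)
    fired = np.zeros_like(S, dtype=bool)
    n = 0
    while not fired.all() and n < max_iter:
        n += 1
        U = f * U + S                 # membrane potential
        Y = (U > E).astype(float)     # spikes (eq. 2.10)
        first = (Y == 1) & ~fired
        T[first] = n                  # record first firing time (eq. 3.1)
        fired |= first
        E = g * E + h * Y             # threshold update (eq. 2.8)
    return T

S = np.array([[0.9, 0.5], [0.2, 0.05]])
T = single_pass_time_matrix(S)
print(T)  # higher stimuli fire earlier (smaller T)
```

Running the sketch shows the property used throughout section 4: the time matrix is a monotonically decreasing function of the stimulus.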

4  Image Enhancement

The intensity of the input image corresponds to the stimulus of the network, and neuron (i, j) is located at pixel (i, j). A two-dimensional image matrix of size M × N is represented by M × N neurons.

A digital image with intensity in the range [0, 255] has 8 bits for a grayscale image or for each color channel. For color images in the HSV color model, we process the value channel and keep the hue and saturation channels. We normalize the image intensity I by
\[
S_{ij} = \frac{I_{ij} - \min(I) + \varepsilon}{\max(I) - \min(I) + \varepsilon}, \tag{4.1}
\]
where min(I) returns the minimum value of I, max(I) returns the maximum value of I, and ε is a small, positive value that prevents any pixel in S from being equal to 0.

Elements in the matrix S must be assigned values larger than 0; otherwise, the corresponding neurons could not be captured and would never fire while their thresholds are positive. Accordingly, ε is set so that even the smallest grayscale maps to a positive stimulus.
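A minimal sketch of this normalization, assuming one common placement of ε (the letter only requires that every stimulus be strictly positive):

```python
import numpy as np

# Hedged sketch of the stimulus normalization of section 4: intensities are
# rescaled to (0, 1], with a small epsilon keeping every stimulus strictly
# positive so that every neuron can eventually fire. The exact placement
# and value of epsilon (1/255 here) are assumptions.
def normalize_stimulus(I, eps=1.0 / 255.0):
    I = I.astype(float)
    return (I - I.min() + eps) / (I.max() - I.min() + eps)

I = np.array([[0, 64], [128, 255]])
S = normalize_stimulus(I)
assert S.min() > 0 and S.max() <= 1.0
```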

As shown in algorithm 1, the stimulus S, the second term of equation 2.11, is the input of FLM, and the time matrix T is the output. U, E, and Y are updated by equations 2.11, 2.8, and 2.10, respectively, until all neurons generate spikes.

4.1  Relationship between Tij and Sij

Based on equations 2.9 and 2.13, we sketch three curves in Figure 4. In the figure, the neuron with the higher stimulus produces its first action potential at iteration time 3, so its value in the time matrix is 3. The curve of U_{ij} is determined by the parameter f and the stimulus S_{ij}; the curve of E_{ij} is determined by g and the initial threshold E_{ij}[0].

Figure 4:

The spiking time of two different neurons.


The neuron with the higher stimulus produces an action potential naturally. This action potential modulates the membrane potential of a neighboring neuron with a lower stimulus, so the lower-stimulus neuron can be captured and fired at the same time or earlier. If there is no feature-linking modulation, the lower-stimulus neuron fires later, at its own natural firing time.

The threshold is subject to exponential decay from the predetermined initial threshold E_{ij}[0], as shown in equation 2.9. The neuron fires at the time when the membrane potential exceeds its threshold, at which point the two values are almost equal. This is described as
\[
U_{ij}[T_{ij}] = g^{\,T_{ij}}\,E_{ij}[0]. \tag{4.2}
\]
Then the first firing time Tij is
\[
T_{ij} = \log_{g}\frac{U_{ij}[T_{ij}]}{E_{ij}[0]}. \tag{4.3}
\]
Based on equations 2.13 and 4.2, we obtain an approximate implicit function relationship between Tij and Sij:
\[
\frac{1 - f^{\,T_{ij}}}{1 - f}\,S_{ij} = g^{\,T_{ij}}\,E_{ij}[0]. \tag{4.4}
\]
In equation 2.13, the term f^n tends to 0 as n becomes large, so U_{ij}[n] is approximately equal to the stable value S_{ij}/(1 − f). We substitute this stable value into equation 4.3 and obtain
\[
T_{ij} = \log_{g}\frac{S_{ij}}{(1 - f)\,E_{ij}[0]}. \tag{4.5}
\]

Equation 4.5 indicates that the relationship between T and S is consistent with the Weber-Fechner law when f and E_{ij}[0] are set to constants (Zhan et al., 2009). The Weber-Fechner law states that objective intensity and human subjective response are related logarithmically.

Because equation 4.4 is an implicit function obtained under the assumption that there is no synaptic modulation, and equation 4.5 is obtained under the further assumption that the discrete time is large, the time matrix cannot be computed from equations 4.4 and 4.5; this section provides only a qualitative analysis. In implementation, we obtain the time matrix by equation 3.1 and algorithm 1.
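The qualitative log relationship of equation 4.5 can be probed numerically for low stimuli, where the membrane potential is near its stable value S_{ij}/(1 − f) before firing; the parameter values here are illustrative:

```python
import math

# Hedged numerical check of equation 4.5: for an uncoupled neuron with
# U[n] = f*U[n-1] + S and threshold E[n] = E0*g**n, the first firing time
# should be close to T = log_g(S / ((1 - f)*E0)). The approximation only
# holds for small S, where the neuron fires after U has stabilized.
def first_fire(S, f=0.9, g=0.98, E0=1.0):
    u, n = 0.0, 0
    while True:
        n += 1
        u = f * u + S
        if u > E0 * g**n:
            return n

for S in (0.01, 0.02, 0.05):
    predicted = math.log(S / ((1 - 0.9) * 1.0), 0.98)
    assert abs(first_fire(S) - predicted) <= 2  # discrete vs. continuous time
```

For larger stimuli the neuron fires before the potential stabilizes, and equation 4.5 underestimates the firing time; this is exactly why the analysis is qualitative.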

4.2  Parameter  f

Because of the logarithmic relationship and the limited dynamic range of images, the time matrix mainly enhances the contrast of low-intensity pixels, and the high-intensity pixels are not processed well. We use the parameter f to solve this problem. In this letter, f is given by
formula
4.6
where c_0 and c_1 are two positive constants and σ is the standard deviation.

We substitute equations 4.6 and 4.7 into equation 4.4 to obtain an implicit function between T_{ij} and S_{ij} and draw its curve. It can be seen from equations 2.12 and 4.3 that pixels with high intensity usually fire earlier than low-intensity pixels. Equation 4.6 delays the firing times of the high-intensity pixels to make their values T_{ij} higher, because a high-intensity pixel S_{ij} with a lower f_{ij} has a lower stable value of the membrane potential, as shown in Figure 4. Therefore, c_0, c_1, and σ are adjusted to 0.75, 0.05, and 0.4, respectively, by using the implicit function, equation 4.4.

The parameter matrix f is smoothed by a gaussian filter with a standard deviation of 1 so that it tends to be homogeneous within each region of similar intensity.

4.3  Initial Threshold

It can be seen from equation 4.5 that the values T_{ij} are related to the initial threshold E_{ij}[0]. We give E_{ij}[0] an edge-enhanced form so that the output image shows the Mach band effect. We initialize E_{ij}[0] as
formula
4.7
where ⊗ denotes the convolution operation and the kernel is the Laplacian derivative operator (Gonzalez et al., 2009):
\[
\begin{pmatrix}
0 & 1 & 0 \\
1 & -4 & 1 \\
0 & 1 & 0
\end{pmatrix}. \tag{4.8}
\]
Figure 5 reveals how the Mach band effect is simulated. We assume that the synaptic modulation terms are absent and that f is set to a scalar constant. Then equation 4.5 can be approximately represented by
formula
4.9
where C is a constant.
Figure 5:

The top line is an edge of the stimulus signal, the middle line shows that the top line is filtered by the Laplacian operator, and the bottom line is obtained by subtraction of the top and middle lines to show the Mach band effect.


Figure 5 is drawn by using equations 4.7 and 4.9. After filtering by the Laplacian operator, pixels on the low-intensity side of the edge obtain a positive value, while pixels on the high-intensity side obtain a negative value. When the Laplacian filtering result is subtracted from the stimulus signal, the low-intensity side of the edge becomes lower and the high-intensity side becomes higher, so the edge becomes sharper and edge enhancement is achieved.
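A one-dimensional sketch of this mechanism, using the 1-D Laplacian [1, −2, 1] on a step edge as an illustrative stand-in for the two-dimensional operator:

```python
# Hedged 1-D sketch of the edge-enhancement mechanism in Figure 5: a step
# edge is filtered with the 1-D Laplacian [1, -2, 1], and the filter
# response is subtracted from the signal. The low side of the edge dips
# and the high side overshoots, mimicking the Mach band effect.
signal = [0.2] * 5 + [0.8] * 5           # a step edge
lap = [0.0] * len(signal)
for i in range(1, len(signal) - 1):
    lap[i] = signal[i - 1] - 2 * signal[i] + signal[i + 1]
sharpened = [s - l for s, l in zip(signal, lap)]
# Undershoot just below the edge, overshoot just above it:
print(sharpened[4], sharpened[5])
```

Just as described above, the Laplacian response is positive on the low side (so subtraction pushes it lower) and negative on the high side (so subtraction pushes it higher).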

4.4  Output-Enhanced Image

Figure 4 shows that neurons with higher stimuli fire earlier than those with lower stimuli. Because a pixel with high intensity produces a small value in the time matrix, the values in the time matrix need to be reversed. In this letter, the reversing transformation is given by
\[
T'_{ij} = \max(T) - T_{ij}. \tag{4.10}
\]
The result is normalized and rounded to the nearest integer:
\[
J_{ij} = \operatorname{round}\Bigl(255 \cdot \frac{T'_{ij} - \min(T')}{\max(T') - \min(T')}\Bigr). \tag{4.11}
\]

4.5  Optimize the Output

As some neurons may fire far earlier or later than most others, we map the intensity values of the grayscale image to new values in J such that 2% of the data is saturated at low and high intensities. This increases the contrast of the output image J.

Let m̄ denote the saturated high intensity and m the saturated low intensity. The intensities are adjusted by
\[
J_{ij} \leftarrow 255 \cdot \frac{J_{ij} - m}{\bar{m} - m}, \tag{4.12}
\]
with the results clipped to the range [0, 255].
Finding the best pair of saturation bounds, the low intensity m and the high intensity m̄, is an optimization problem of the form
\[
\min_{m,\,\bar{m}}\ (\bar{m} - m) \quad \text{subject to} \quad \sum_{x=m}^{\bar{m}} p(x) \ge 0.98, \tag{4.13}
\]
where p(x) is the probability of occurrence of the intensity level x.

The simplest way to solve equation 4.13 is to loop m and m̄ over the whole intensity range and compute the difference of every candidate pair (m, m̄) that satisfies the constraint. In fact, it is not necessary to loop over the whole intensity range, because the variable m varies only from 0 up to the intensity at which the cumulative distribution probability (CDP) reaches the allowed saturation. In this way, we narrow the searching range of m; the searching range of m̄ is narrowed in the same way. After ascertaining the searching ranges of m and m̄, the next step is to compute the difference of each candidate pair (m, m̄) and find the pair with the minimum difference.
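A hedged sketch of this search in Python, using a histogram and cumulative distribution; the scan below mirrors the narrowed loops described above rather than a closed-form percentile shortcut:

```python
import numpy as np

# Hedged sketch of the contrast stretch of section 4.5: find the smallest
# intensity span [m, m_bar] that still covers at least 98% of the pixels
# (at most 2% saturated in total), then linearly stretch that span to
# [0, 255]. Variable names and loop structure are illustrative.
def stretch(J, keep=0.98):
    hist = np.bincount(J.ravel(), minlength=256).astype(float)
    cdp = np.cumsum(hist) / hist.sum()        # cumulative distribution
    best = (0, 255)
    for m in range(256):
        if m > 0 and cdp[m - 1] > 1 - keep:   # m already cuts too much
            break
        for m_bar in range(m, 256):
            covered = cdp[m_bar] - (cdp[m - 1] if m > 0 else 0.0)
            if covered >= keep:               # constraint of eq. 4.13
                if m_bar - m < best[1] - best[0]:
                    best = (m, m_bar)
                break
    m, m_bar = best
    out = np.clip((J.astype(float) - m) / max(m_bar - m, 1) * 255, 0, 255)
    return np.round(out).astype(np.uint8), best

J = np.arange(256)                            # one pixel per intensity level
out, (m, m_bar) = stretch(J)
print(m, m_bar)  # prints: 0 250
```

For this uniform test input, the minimal span covering 98% of the 256 pixels is 251 intensity levels, so the optimizer clips the top five levels.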

5  Experimental Results

We compare our approach with state-of-the-art methods qualitatively and quantitatively, and we conduct a large number of experiments to demonstrate the effectiveness of the proposed method.

5.1  Experiment Setup

Before the algorithm is run, we initialize all elements of the matrices U, Y, and T to 0. The initial threshold E[0] is initialized by equation 4.7.

If an image of size M × N is input, d is given by
formula
5.1

The algorithm is implemented as shown in algorithm 1, and we obtain the time matrix T. The output-enhanced image is obtained after the time matrix T is processed by equations 4.10 to 4.12.

After a number of tests, the scalar parameters of the proposed algorithm are given in Table 1. As discussed in section 3, h is set by using equation 7 of Kuntimad and Ranganath (1999), which guarantees that each neuron produces a spike only once during a cycle. d is set following Stewart et al.'s (2002) analysis. Considering equation 2.9, g is set to a value close to 1, which makes the iteration number large. A relatively large iteration number implies that the values in the time matrix have a large range and that the output image has a high grayscale resolution.

Table 1:
Parameters of the Proposed Algorithm.

h: 2e10; g: 0.9811; feeding strength: 0.01; linking strength: 0.03
The synaptic weight is approximately the reciprocal of the distance between neurons (Johnson & Padgett, 1999), so the synaptic weight matrices are given by
\[
M = W =
\begin{pmatrix}
0.7071 & 1 & 0.7071 \\
1 & 0 & 1 \\
0.7071 & 1 & 0.7071
\end{pmatrix}. \tag{5.2}
\]

The proposed FLM method is compared with six algorithms: histogram equalization (HEQ) (Gonzalez et al., 2009), the fuzzy set method (FSM) (Cheng & Xu, 2000; Gonzalez et al., 2009), the discrete cosine transform domain method (DDM) (Tang et al., 2003), the SCM method (Zhan et al., 2009; Ma et al., 2012), the generalized equalization model (GEM) (Xu et al., 2014), and gradient distribution specification (GDS) (Gong & Sbalzarini, 2014).

The parameter settings of the other methods are as follows. HEQ has no parameter. The parameter of DDM is set to 1.2. The default parameters given by respective authors are adopted for the methods based on FSM (Cheng & Xu, 2000; Gonzalez et al., 2009), SCM (Zhan et al., 2009; Ma et al., 2012), GEM (Xu et al., 2014), and GDS (Gong & Sbalzarini, 2014).

5.2  Quantitative Evaluation

To quantitatively evaluate the image enhancement algorithms in terms of contrast enhancing, we use the average of a local contrast metric (De Vries, 1990). We obtain the contrast of enhanced images by computing the average value of the local contrast metric, defined in a sliding window,
formula
5.3
where the local mean of intensity μ_{ij} is computed for the pixels within the sliding window centered on pixel (i, j), and σ²_{ij} is the local variance of intensity.
As the spatial frequency measures the overall activity level in an image (Eskicioglu & Fisher, 1995; Chandler, 2013), we use the spatial frequency to evaluate the quality of enhanced images. The spatial frequency is defined as follows:
\[
SF = \sqrt{(Row\_F)^2 + (Column\_F)^2}, \tag{5.4}
\]
where Row_F and Column_F are the row and column frequencies, given by:
\[
Row\_F = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=2}^{N}\bigl(I_{i,j} - I_{i,j-1}\bigr)^2}, \tag{5.5}
\]
\[
Column\_F = \sqrt{\frac{1}{MN}\sum_{i=2}^{M}\sum_{j=1}^{N}\bigl(I_{i,j} - I_{i-1,j}\bigr)^2}. \tag{5.6}
\]
Important visual information is associated with edges, a property supported by studies of the HVS (Hendee & Wells, 1997). We use the average gradient to evaluate the edge information of enhanced images (Bai & Zhang, 2014). The average gradient is given by
\[
AG = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} G_{ij}, \tag{5.7}
\]
where G_{ij} represents the edge strength, the magnitude of the gradient, at pixel location (i, j) in an image.

5.3  Visual and Quantitative Comparison

The experiments are conducted to demonstrate the effectiveness of the proposed image-enhancement algorithm. We select two grayscale images and three color images. The two grayscale images (cameraman.tif and tire.tif) are provided in the Matlab image toolbox. One color image (sunset) is downloaded from Koren (2004). The two other color images (flower and lunaria) are from Farbman, Fattal, Lischinski, and Szeliski (2008). The input images and the enhanced images obtained by different methods are shown in Figures 6 to 10.

HEQ, DDM, and FSM enhance contrast globally, so they are not suitable for every image. HEQ obtains good results for the images tire, flower, and lunaria, as shown in Figures 7b, 9b, and 10b, respectively. However, its tire result is still less clear than that of FLM (see Figure 7h), and the green plants in the background are not enhanced well in Figure 9b. For the dark region and the leaves of the lunaria, the FLM algorithm obtains a clearer result than HEQ, as shown in Figure 10.

In the DDM algorithm, the contrast is defined in the DCT domain, and the details of the image are globally enhanced with a unified parameter. The results of the DDM algorithm improve visibility in only a limited way, as shown in Figures 6c, 7c, 8c, 9c, and 10c.

Figure 6:

Input image and enhanced images obtained by different methods: (a) cameraman, (b) HEQ, (c) DDM, (d) FSM, (e) SCM, (f) GEM, (g) GDS, (h) FLM.


Figure 7:

Input image and enhanced images obtained by different methods: (a) tire, (b) HEQ, (c) DDM, (d) FSM, (e) SCM, (f) GEM, (g) GDS, (h) FLM.


Figure 8:

Input image and enhanced images obtained by different methods: (a) sunset, (b) HEQ, (c) DDM, (d) FSM, (e) SCM, (f) GEM, (g) GDS, (h) FLM.


Figure 9:

Input image and enhanced images obtained by different methods: (a) flower, (b) HEQ, (c) DDM, (d) FSM, (e) SCM, (f) GEM, (g) GDS, (h) FLM.


Figure 10:

Input image and enhanced images obtained by different methods: (a) lunaria, (b) HEQ, (c) DDM, (d) FSM, (e) SCM, (f) GEM, (g) GDS, (h) FLM.


The FSM algorithm does not enhance the local contrast, and its results are sometimes no clearer than the input images, as shown in Figures 6d, 7d, and 10d. Globally, the FSM algorithm makes the dark pixels darker and the bright ones brighter. The algorithm does render the lawn in Figure 6d with good visual quality.

Some dark pixels in the SCM results change to bright ones, as shown in Figures 7e, 9e, and 10e. These pixels never fire, so their values in the time matrix remain at the initial value 0, and the inverse operation turns them white (Zhan et al., 2009). Some SCM results also suffer from contrast degradation in bright regions, as shown in Figures 6e, 8e, 9e, and 10e, because many bright pixels fire too early and their values in the time matrix are nearly identical. The FLM algorithm solves both problems. In FLM, the single-pass working form and the positive stimulus matrix S guarantee that even the neurons with the lowest stimuli fire. The attenuation coefficients f are set as a matrix derived from the input image rather than a single small scalar, which lets the algorithm locate the dark zones correctly. The algorithm enhances the dark zones while preserving the contrast within the bright scene as much as possible.

The GEM algorithm easily degenerates to HEQ. The GEM result for the tire is slightly worse than that of HEQ, and the other four are similar to HEQ. GEM enhances image contrast in much the same way as HEQ and is mainly used to correct the tone of images (Xu et al., 2014).

The GDS-based method improves an image's quality by remapping the image so that the distribution of its gradients matches a specified distribution (Gong & Sbalzarini, 2014). In practice, it yields an inconspicuous enhancement effect, as seen in Figures 6g, 7g, 8g, 9g, and 10g.

The FLM image-enhancement algorithm enhances the contrast globally because of the Mach band effect. Moreover, the Mach band effect sharpens the edges: the coat in Figure 6h, the small cannon in Figure 8h, and the red flower in Figure 9h. FLM makes pixels of nearly invisible low intensity easy to see, such as the coat of the cameraman (see Figure 6a) and the dark regions of the flower (see Figure 9a) and the lunaria (see Figure 10a). Because of the limited dynamic range of grayscale, the global intensity increases as the invisible regions become visible. Even so, the details remain clear in the FLM results, although the images are slightly overexposed. FLM enhances the contrast at low intensities because of the logarithmic relationship between T and S, while pixels with high intensity keep their contrast because of the parameter f. These visual properties are seen in all FLM results. Because information preservation is as important as contrast enhancement, the processed results of FLM are consistent with the HVS.
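Why a first-spike-time code stretches contrast in the dark range can be illustrated with a generic leaky-threshold sketch. This is a simplified illustration, not the exact FLM equations (which are given earlier in the article): a neuron's threshold starts at theta0 and decays geometrically, and the neuron fires at the first step where its stimulus exceeds the threshold.

```python
import numpy as np

def first_spike_time(S, f=0.9, theta0=255.0):
    """First firing time of a neuron with a geometrically decaying
    threshold theta(t) = theta0 * f**t, for 0 < f < 1.

    Solving theta0 * f**t < S for the smallest integer t >= 0 gives
    t = ceil(log(S / theta0) / log(f)), so the firing time grows
    logarithmically as the stimulus S shrinks: brighter pixels fire
    earlier, and dim pixels differing by a fixed ratio are separated
    by a roughly constant number of time steps.
    """
    S = np.asarray(S, dtype=np.float64)
    t = np.ceil(np.log(S / theta0) / np.log(f))
    return np.maximum(t, 0.0)
```

Inverting such a time map (late-firing dark pixels mapped to low output values, early-firing bright pixels to high ones) yields the logarithmic stimulus-to-output relationship described above.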

The quantitative evaluation metrics are summarized in Table 2. The grayscale intensity, or the value channel of the HSV model for color images, is used to evaluate performance. The objective quantitative evaluation is consistent with the subjective visual quality of the enhanced images.
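The exact metric definitions used in the tables are given earlier in the article; as a sketch of two of them, spatial frequency follows Eskicioglu and Fisher (1995), and the gradient column can be read as a mean gradient magnitude. The implementations below are common textbook forms, assumed here for illustration:

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency (Eskicioglu & Fisher, 1995):
    sqrt(row_frequency**2 + column_frequency**2), where each term is
    the RMS of first differences along one image axis."""
    img = np.asarray(img, dtype=np.float64)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def average_gradient(img):
    """Mean gradient magnitude over the image."""
    img = np.asarray(img, dtype=np.float64)
    gy, gx = np.gradient(img)
    return np.mean(np.sqrt(gx ** 2 + gy ** 2))
```

Higher values of both metrics indicate more detail and sharper edges, which is why they favor the enhanced images over the inputs.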

Table 2:
Quantitative Metrics of Different Algorithms.
Image        HEQ       DDM       FSM       SCM       GEM       GDS       FLM
Contrast
cameraman    4.6068    7.0473    7.2381*   3.8507    4.7407    2.7261    7.2199
tire         3.4615    5.3799    3.7983    5.8633    3.1042    3.1348    6.3370*
sunset       0.4944    1.2227    1.1290    0.5390    0.5623    1.9152    2.0587*
flower       0.5713    0.4495    0.7321    0.1219    0.4597    1.5989*   1.2676
lunaria      2.0622    3.7747    3.2817    1.3124    2.1494    4.4117*   3.0132
Spatial frequency
cameraman   36.9242   44.0719   47.3154   43.0689   35.4488   25.6411   50.8685*
tire        27.7017   28.4286   28.5350   48.2947*  22.6101   19.9757   43.4343
sunset      10.2265   15.9630   15.1565   17.1565    9.8779   13.5430   27.9789*
flower      10.0084   10.3355   12.6778    8.0510    9.1475   14.9398   18.9665*
lunaria     20.7059   27.7046   28.9641   23.2269   20.2242   25.7231   31.4470*
Gradient
cameraman   13.9524   15.1293   15.3225   13.9147   13.1619    8.9328   16.8542*
tire        14.7928   13.9627    8.9980   18.9433   12.7750   10.3877   20.5421*
sunset       4.8611    4.2247    2.8678    4.3599    4.2600    2.4712    7.3517*
flower       4.8252    4.8267    4.0334    2.7674    4.5245    5.0411    6.7963*
lunaria     10.7742   11.7581    7.9267    9.3887   10.0776   10.7558   13.6666*

Note: The best result in each row is marked with an asterisk.

5.4  Further Experiments

Further experiments are conducted to demonstrate the effectiveness of FLM on different databases: the Berkeley segmentation database (BSD) (Arbelaez, Maire, Fowlkes, & Malik, 2011), the digitally retouched image quality database (DRIQ) (Vu, Phan, Banga, & Chandler, 2012), the LIVE image quality database (Wang, Bovik, Sheikh, & Simoncelli, 2004; Sheikh, Sabir, & Bovik, 2006), and the realistic blurred image database (RBID) (Laboratório de Processamento de Sinais). We use the 500 images in BSD, the 26 reference images in DRIQ, the 29 reference images in LIVE, and the 590 images in RBID, a total of 1145 images, to evaluate the FLM method against the other methods.

We calculate the average performance of different methods in each database and the best performance rate of FLM for each database. The results are in Tables 3 and 4. As shown in Table 3, FLM outperforms the other methods in terms of contrast, spatial frequency, and gradient. Table 4 indicates that FLM enhances most images in the four databases.
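The two summaries can be computed from a per-database score matrix. The sketch below assumes one metric's scores are stacked into a (num_images, num_methods) array with FLM in the last column; this column ordering is an assumption for illustration:

```python
import numpy as np

def summarize(scores):
    """Return each method's average score over a database and the
    fraction of images on which the last column (FLM, by assumption)
    achieves the best (highest) score."""
    averages = scores.mean(axis=0)
    best_rate = np.mean(scores.argmax(axis=1) == scores.shape[1] - 1)
    return averages, best_rate
```

The `averages` correspond to one block of Table 3 and `best_rate` to one cell of Table 4.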

Table 3:
Average Performance in Each Database.
Database     HEQ       DDM       FSM       SCM       GEM       GDS       FLM
Contrast
BSD          5.0744    4.0259    5.9565    2.1982    3.9367    1.2822   10.6300*
DRIQ         3.7958    3.4754    3.7281    1.7522    2.8180    1.6012    7.9198*
LIVE         5.1354    4.6715    6.3934    1.4202    4.0946    1.2463   11.4124*
RBID         1.5860    0.7982    1.3743    1.5397    0.9683    1.2579    3.2589*
Spatial frequency
BSD         35.6996   35.5235   39.4841   35.2413   31.7083   17.8678   67.8565*
DRIQ        30.5085   27.8447   30.0883   28.3726   24.8831   16.5290   53.8515*
LIVE        35.2392   37.1892   40.6825   28.9438   31.4691   18.0467   69.1108*
RBID        15.9230   12.2622   15.2089   16.2203   12.4811   13.4059   27.4024*
Gradient
BSD         17.0280   15.7457   14.8268   13.9555   15.3901    7.9587   27.8353*
DRIQ        14.5817   13.2218   11.6078   10.4151   12.4959    7.8183   22.1819*
LIVE        16.5350   16.6340   15.5983   11.4367   14.9971    7.9646   28.5334*
RBID         8.3691    5.9218    5.6215    6.3222    6.8273    5.7682   11.7275*

Note: The best result in each row is marked with an asterisk.

Table 4:
The Best Performance Rate of FLM.
Database   Number   Contrast   Spatial Frequency   Gradient
BSD        500      0.9200     0.9760              0.9460
DRIQ        26      0.9231     0.9231              0.8846
LIVE        29      0.9655
RBID       590      0.6237     0.7627              0.7847

6  Conclusion

We propose FLM, inspired by gamma-band oscillations. We use the single-pass working form of FLM to obtain the time matrix, and the single pass makes the time structure of the network easy to understand. FLM has two types of waves: feeding and linking. We study the FLM time matrix and propose an effective method for image enhancement. Comparisons with HEQ, DDM, FSM, SCM, GEM, and GDS illustrate the validity of the proposed algorithm. The FLM-based image-enhancement method is a general and powerful technique that can be applied to low-contrast images and obtains satisfactory results. The processing mechanisms of the algorithm are consistent with the HVS.

Acknowledgments

This work was supported by the National Science Foundation of China under grant no. 61201422 and the Specialized Research Fund for the Doctoral Program of Higher Education under grant 20120211120013. We are extremely grateful to Yide Ma, Hongjuan Zhang, and Fei Teng for giving us many useful suggestions. We thank Dani Lischinski and Norman Koren for letting us use their images.

References

Arbelaez, P., Maire, M., Fowlkes, C., & Malik, J. (2011). Contour detection and hierarchical image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33, 898–916.

Arici, T., Dikbas, S., & Altunbasak, Y. (2009). A histogram modification framework and its application for image contrast enhancement. IEEE Transactions on Image Processing, 18, 1921–1935.

Arnheim, R. (1954). Art and visual perception: A psychology of the creative eye. Berkeley: University of California Press.

Bai, X., & Zhang, Y. (2014). Enhancement of microscopy mineral images through constructing alternating operators using opening and closing based toggle operator. Journal of Optics, 16, 125407.

Barnes, T., & Mingolla, E. (2013). A neural model of visual figure-ground segregation from kinetic occlusion. Neural Networks, 37, 141–164.

Béroule, D. (2004). An instance of coincidence detection architecture relying on temporal coding. IEEE Transactions on Neural Networks, 15, 963–979.

Brosch, T., & Neumann, H. (2014). Interaction of feedforward and feedback streams in visual cortex in a firing-rate model of columnar computations. Neural Networks, 54, 11–16.

Buzsáki, G., & Wang, X. J. (2012). Mechanisms of gamma oscillations. Annual Review of Neuroscience, 35, 203–225.

Chandler, D. M. (2013). Seven challenges in image quality assessment: Past, present, and future research. ISRN Signal Processing, 2013, 905685.

Cheng, H. D., & Xu, H. (2000). A novel fuzzy logic approach to contrast enhancement. Pattern Recognition, 33, 809–819.

De Vries, F. (1990). Automatic, adaptive, brightness independent contrast enhancement. Signal Processing, 21, 169–182.

Eckhorn, R., Bauer, R., Jordan, W., Brosch, M., Kruse, W., Munk, M., & Reitboeck, H. (1988). Coherent oscillations: A mechanism of feature linking in the visual cortex? Biological Cybernetics, 60, 121–130.

Eckhorn, R., Reitboeck, H., Arndt, M., & Dicke, P. (1990). Feature linking via synchronization among distributed assemblies: Simulations of results from cat visual cortex. Neural Computation, 2, 293–307.

Ekblad, U., & Kinser, J. M. (2004). Theoretical foundation of the intersecting cortical model and its use for change detection of aircraft, cars, and nuclear explosion tests. Signal Processing, 84, 1131–1146.

Eskicioglu, A. M., & Fisher, P. S. (1995). Image quality measures and their performance. IEEE Transactions on Communications, 43, 2959–2965.

Farbman, Z., Fattal, R., Lischinski, D., & Szeliski, R. (2008). Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Transactions on Graphics, 27, 67.

French, A. S., & Stein, R. B. (1970). A flexible neural analog using integrated circuits. IEEE Transactions on Biomedical Engineering, 17, 248–253.

Fries, P. (2009). Neuronal gamma-band synchronization as a fundamental process in cortical computation. Annual Review of Neuroscience, 32, 209–224.

Gollisch, T., & Meister, M. (2008). Rapid neural coding in the retina with relative spike latencies. Science, 319, 1108–1111.

Gong, Y., & Sbalzarini, I. F. (2014). Image enhancement by gradient distribution specification. In Computer Vision–ACCV 2014 Workshops (pp. 47–62). New York: Springer.

Gonzalez, R. C., Woods, R. E., & Eddins, S. L. (2009). Digital image processing using MATLAB (2nd ed.). Gatesmark Publishing.

Gove, A., Grossberg, S., & Mingolla, E. (1995). Brightness perception, illusory contours, and corticogeniculate feedback. Visual Neuroscience, 12, 1027–1052.

Gray, C. M. (1999). The temporal correlation hypothesis of visual feature integration: Still alive and well. Neuron, 24, 31–47.

Gray, C. M., König, P., Engel, A. K., & Singer, W. (1989). Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature, 338, 334–337.

Gütig, R., Gollisch, T., Sompolinsky, H., & Meister, M. (2013). Computing complex visual features with retinal spike times. PLoS One, 8, e53063.

Haken, H. (2005). Synchronization and pattern recognition in a pulse-coupled neural net. Physica D: Nonlinear Phenomena, 205, 1–6.

Haken, H. (2008). Brain dynamics: An introduction to models and simulations. New York: Springer.

Hendee, W. R., & Wells, P. N. (1997). The perception of visual information. New York: Springer.

Hopfield, J. J. (1995). Pattern recognition computation using action potential timing for stimulus representation. Nature, 376, 33–36.

Izhikevich, E. M. (2004). Which model to use for cortical spiking neurons? IEEE Transactions on Neural Networks, 15, 1063–1070.

Izhikevich, E. M. (2006). Polychronization: Computation with spikes. Neural Computation, 18, 245–282.

Izhikevich, E. M. (2007). Dynamical systems in neuroscience. Cambridge, MA: MIT Press.

Johnson, J. L. (1994). Pulse-coupled neural nets: Translation, rotation, scale, distortion, and intensity signal invariance for images. Applied Optics, 33, 6239–6253.

Johnson, J. L., & Padgett, M. L. (1999). PCNN models and applications. IEEE Transactions on Neural Networks, 10, 480–498.

Johnson, J. L., & Ritter, D. (1993). Observation of periodic waves in a pulse-coupled neural network. Optics Letters, 18, 1253–1255.

Koch, C., & Segev, I. (2000). The role of single neurons in information processing. Nature Neuroscience, 3, 1171–1177.

Koren, N. (2004). India, part 4: Mehrangarh fort, Jodhpur. http://www.normankoren.com/India_04_4.html

Kuntimad, G., & Ranganath, H. S. (1999). Perfect image segmentation using pulse coupled neural networks. IEEE Transactions on Neural Networks, 10, 591–598.

Laboratório de Processamento de Sinais. RBID: Realistic blurred image database. http://www.lps.ufrj.br/profs/eduardo/ImageDatabase.htm

Lindblad, T., & Kinser, J. (2013). Image processing using pulse-coupled neural networks: Applications in Python. New York: Springer.

Lubenov, E. V., & Siapas, A. G. (2008). Decoupling through synchrony in neuronal circuits with propagation delays. Neuron, 58, 118–131.

Ma, Y., Teng, F., Zhan, K., & Zhang, H. (2012). A new method of color image enhancement using spiking cortical model. Journal of Beijing University of Posts and Telecommunications, 35, 70–73.

Ma, Y., Zhan, K., & Wang, Z. (2011). Applications of pulse-coupled neural networks. New York: Springer.

Milner, P. M. (1974). A model for visual shape recognition. Psychological Review, 81, 521–535.

Nikolić, D., Fries, P., & Singer, W. (2013). Gamma oscillations: Precise temporal coordination without a metronome. Trends in Cognitive Sciences, 17, 54–55.

Ranganath, H. S., & Kuntimad, G. (1999). Object detection using pulse coupled neural networks. IEEE Transactions on Neural Networks, 10, 615–620.

Ranganath, H., Kuntimad, G., & Johnson, J. (1995). Pulse coupled neural networks for image processing. In Proceedings of Southeastcon '95: Visualize the Future (pp. 37–43). Piscataway, NJ: IEEE.

Reinagel, P., & Reid, R. C. (2002). Precise firing events are conserved across neurons. Journal of Neuroscience, 22, 6837–6841.

Reitboeck, H. J., Stoecker, M., & Hahn, C. (1993). Object separation in dynamic neural networks. In Proceedings of the IEEE International Conference on Neural Networks (pp. 638–641). Piscataway, NJ: IEEE.

Sheikh, H. R., Sabir, M. F., & Bovik, A. C. (2006). A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on Image Processing, 15, 3440–3451.

Singer, W., & Gray, C. M. (1995). Visual feature integration and the temporal correlation hypothesis. Annual Review of Neuroscience, 18, 555–586.

Somers, D., & Kopell, N. (1993). Rapid synchronization through fast threshold modulation. Biological Cybernetics, 68, 393–407.

Somers, D., & Kopell, N. (1995). Waves and synchrony in networks of oscillators of relaxation and non-relaxation type. Physica D: Nonlinear Phenomena, 89, 169–183.

Starck, J. L., Murtagh, F., Candes, E. J., & Donoho, D. L. (2003). Gray and color image contrast enhancement by the curvelet transform. IEEE Transactions on Image Processing, 12, 706–717.

Stark, J. A. (2000). Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Transactions on Image Processing, 9, 889–896.

Stewart, R. D., Fermin, I., & Opper, M. (2002). Region growing with pulse-coupled neural networks: An alternative to seeded region growing. IEEE Transactions on Neural Networks, 13, 1557–1562.

Stoecker, M., Reitboeck, H. J., & Eckhorn, R. (1996). A neural network for scene segmentation by temporal coding. Neurocomputing, 11, 123–134.

Tang, J., Kim, J., & Peli, E. (2004). Image enhancement in the JPEG domain for people with vision impairment. IEEE Transactions on Biomedical Engineering, 51, 2013–2023.

Tang, J., Peli, E., & Acton, S. (2003). Image enhancement using a contrast measure in the compressed domain. IEEE Signal Processing Letters, 10, 289–292.

Tizhoosh, H. R. (2000). Fuzzy image enhancement: An overview. In M. Nachtegael & E. E. Kerre (Eds.), Fuzzy techniques in image processing (pp. 137–171). New York: Springer.

VanRullen, R., Guyonneau, R., & Thorpe, S. J. (2005). Spike times make sense. Trends in Neurosciences, 28, 1–4.

Victor, J. D. (2000). How the brain uses time to represent and process visual information. Brain Research, 886, 33–46.

von der Malsburg, C. (1994). The correlation theory of brain function. New York: Springer.

von der Malsburg, C., Phillips, W. A., & Singer, W. (2010). Dynamic coordination in the brain: From neurons to mind. Cambridge, MA: MIT Press.

Vu, C. T., Phan, T. D., Banga, P. S., & Chandler, D. M. (2012). On the quality assessment of enhanced images: A database, analysis, and strategies for augmenting existing methods. In Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation (pp. 181–184). Piscataway, NJ: IEEE.

Wang, D. (2005). The time dimension for scene analysis. IEEE Transactions on Neural Networks, 16, 1401–1426.

Wang, D., & Terman, D. (1997). Image segmentation based on oscillatory correlation. Neural Computation, 9, 805–836.

Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13, 600–612.

Xu, H., Zhai, G., Wu, X., & Yang, X. (2014). Generalized equalization model for image enhancement. IEEE Transactions on Multimedia, 16, 68–82.

Zhan, K., Zhang, H., & Ma, Y. (2009). New spiking cortical model for invariant texture retrieval and image processing. IEEE Transactions on Neural Networks, 20, 1980–1986.

Zhang, J., Zhan, K., & Ma, Y. (2007). Rotation and scale invariant antinoise PCNN features for content-based image retrieval. Neural Network World, 17, 121–132.