The theory of statistical learning has been influential in providing a framework for how humans learn to segment patterns of regularities from continuous sensory inputs, such as speech and music, on the basis of statistical cues like transition probabilities. However, the connection between statistical learning and brain measurements is not well understood. Here we focus on event-related potentials (ERPs) in the context of tone sequences that contain statistically cohesive melodic patterns. We hypothesized that implicit learning of statistical regularities would influence what is held in auditory working memory. We predicted that a wrong note occurring within a cohesive pattern (within-pattern deviant) would elicit a significantly larger brain response than a wrong note occurring between cohesive patterns (between-pattern deviant), even though both deviant types were equally likely to occur with respect to the global tone sequence. We discuss this prediction within a simple Markov model framework that learns the transition-probability regularities within the tone sequence. Results show that the neural response was stronger when cohesive patterns were violated, demonstrating that the transitional probabilities of the sequence influence the memory basis for melodic patterns. Our results thus characterize how informational units are stored in the auditory memory trace for deviance detection and provide new evidence about how the brain organizes sequential sound input in a form useful for perception.
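The abstract refers to a simple Markov model that learns transition-probability regularities from the tone sequence. As a minimal illustrative sketch only (not the authors' implementation, and with purely hypothetical tone labels), a first-order transition-probability learner can be written as follows; it shows why a within-pattern deviant is more surprising than a between-pattern deviant, since within-pattern transitions are learned with higher probability:

```python
from collections import defaultdict

def learn_transitions(sequence):
    """Estimate first-order transition probabilities P(next | current)
    by counting adjacent tone pairs and normalizing per current tone."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[cur][nxt] += 1
    probs = {}
    for cur, nexts in counts.items():
        total = sum(nexts.values())
        probs[cur] = {nxt: n / total for nxt, n in nexts.items()}
    return probs

# Hypothetical stream built from two cohesive three-tone patterns,
# "ABC" and "DEF", concatenated in varying order. Within-pattern
# transitions (A->B, B->C, D->E, E->F) always occur, so they are
# learned with probability 1.0; between-pattern transitions (e.g.
# C->D vs. C->A) vary, so their probabilities are lower.
stream = list("ABCDEFABCABCDEFDEFABC")
P = learn_transitions(stream)

# A deviant replacing B after A violates a P = 1.0 (within-pattern)
# transition; a deviant after C violates a lower-probability
# (between-pattern) transition, so it is less surprising.
```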