In this article, a combination of two novel approaches to the harmonization of chorales in the style of J. S. Bach is proposed, implemented, and profiled. The first is the use of the bass line, rather than the melody, as the primary input to a chorale-harmonization algorithm. The second is a compromise between methods guided by musical knowledge and those driven by machine learning, designed to mimic the way a music student learns. Specifically, our approach learns harmonic structure through a hidden Markov model and determines the individual voice lines by optimizing a Boltzmann pseudolikelihood function that incorporates musical constraints as a weighted linear combination of constraint indicators. Whereas previous generative models have focused either on codifying musical rules or on machine learning without any rule specification, combining musicologically sound constraints with weights estimated from chorales composed by Bach enabled us to produce musical output in a style that closely resembles Bach's chorale harmonizations. A group of test subjects was able to identify the computer-generated chorales only 51.3% of the time, a rate not significantly different from chance.
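To make the two stages concrete, the following minimal sketch illustrates the general shape of such a pipeline: a toy hidden Markov model infers a harmonic-function sequence from an observed bass line via the Viterbi algorithm, and a Boltzmann-style score, a weighted sum of constraint indicators inside an exponential, ranks candidate voicings. All states, probabilities, constraints, and weights here are illustrative placeholders, not values from the article (the article estimates the weights from Bach's chorales).

```python
import math

# Hypothetical HMM over harmonic functions; observations are bass scale degrees.
STATES = ["I", "IV", "V"]
INIT = {"I": 0.6, "IV": 0.2, "V": 0.2}
TRANS = {"I":  {"I": 0.4, "IV": 0.3, "V": 0.3},
         "IV": {"I": 0.3, "IV": 0.2, "V": 0.5},
         "V":  {"I": 0.7, "IV": 0.1, "V": 0.2}}
# Emission: probability that a harmony places a given scale degree in the bass.
EMIT = {"I": {1: 0.6, 3: 0.3, 5: 0.1},
        "IV": {4: 0.7, 6: 0.3},
        "V": {5: 0.6, 7: 0.4}}

def viterbi(bass):
    """Most likely harmonic-function sequence for an observed bass line."""
    trellis = [{s: (math.log(INIT[s]) + math.log(EMIT[s].get(bass[0], 1e-9)), [s])
                for s in STATES}]
    for note in bass[1:]:
        layer = {}
        for s in STATES:
            prev = max(STATES, key=lambda p: trellis[-1][p][0] + math.log(TRANS[p][s]))
            score = (trellis[-1][prev][0] + math.log(TRANS[prev][s])
                     + math.log(EMIT[s].get(note, 1e-9)))
            layer[s] = (score, trellis[-1][prev][1] + [s])
        trellis.append(layer)
    return max(trellis[-1].values())[1]

def score_voicing(voicing, weights, constraints):
    """Unnormalized Boltzmann score: exp(sum_i w_i * f_i(voicing))."""
    return math.exp(sum(w * f(voicing) for w, f in zip(weights, constraints)))

# Two toy constraint indicators on a (bass, tenor, alto, soprano) MIDI tuple.
constraints = [lambda v: all(48 <= n <= 81 for n in v),  # voices within range
               lambda v: len(set(n % 12 for n in v)) >= 3]  # a full triad sounds
weights = [2.0, 1.0]  # placeholders; the article learns these from the corpus
candidates = [(60, 64, 67, 72), (60, 60, 60, 60)]
best = max(candidates, key=lambda v: score_voicing(v, weights, constraints))
```

Here the HMM supplies the chord-level skeleton and the constraint-weighted score chooses among voicings realizing each chord; the real system operates on a far richer state space and constraint set.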