Algorithmic composition methods must prove themselves in real-world musical contexts if they are to gain firmer adoption in musical practice. The present project is an automatic composition program trained on a corpus of musical-theater songs to create novel material, directly generating a scored lead sheet of vocal melody and chords. The program can also produce output based on phonetic analysis of user-provided lyrics. The opportunity to undertake the research arose from a television documentary, funded by Sky Arts, that considered whether current-generation computationally creative methods could devise a new work of musical theater (the research described here forms but one strand of that project). Allied with the documentary, the resultant musical had a two-week West End run in London and was itself broadcast in full. Evaluation of the project included design feedback from a team of musical theater composers as well as critical feedback from audiences and media coverage. The research challenges of the real-world context are discussed with respect to the compromises necessary to bring such a project to the stage.