Environments for Computer Music Applications
Computer Music Journal (2015) 39 (1): 27–40.
Published: 01 March 2015
Abstract
Native Web technologies provide great potential for musical expression. We introduce two JavaScript libraries towards this end: Gibberish.js, providing heavily optimized audio DSP, and Interface.js, a GUI toolkit that works with mouse, touch, and motion events. Together they provide a complete system for defining musical instruments that can be used in both desktop and mobile Web browsers. Interface.js also enables control of remote synthesis applications via a server application that translates the socket protocol used by Web interfaces into both MIDI and OSC messages. We have incorporated these libraries into the creative coding environment Gibber, where we provide mapping abstractions that enable users to create digital musical instruments in as little as a single line of code. They can then be published to a central database, enabling new instruments to be created, distributed, and run entirely in the browser.
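The abstract contains no code, but the kind of boilerplate that Gibberish.js and Interface.js are said to automate can be suggested with a sketch in the plain Web Audio API. The following is not Gibber or Gibberish.js syntax; it uses only standard Web Audio and Pointer Events calls, and the frequency and amplitude ranges are arbitrary choices for illustration.

// Minimal browser "instrument" sketch (standard Web Audio API, not Gibberish.js):
// pointer x is mapped to pitch, pointer y to loudness. All names are illustrative.
const ctx  = new AudioContext();
const osc  = ctx.createOscillator();
const gain = ctx.createGain();

osc.type = 'sawtooth';
gain.gain.value = 0;               // silent until the user interacts
osc.connect(gain).connect(ctx.destination);
osc.start();

// Map pointer x -> frequency (110-880 Hz), pointer y -> amplitude (0-0.5).
window.addEventListener('pointermove', e => {
  const x = e.clientX / window.innerWidth;
  const y = 1 - e.clientY / window.innerHeight;
  osc.frequency.setTargetAtTime(110 + x * 770, ctx.currentTime, 0.01);
  gain.gain.setTargetAtTime(y * 0.5, ctx.currentTime, 0.01);
});

// Browsers require a user gesture before audio playback can begin.
window.addEventListener('pointerdown', () => ctx.resume());

In Gibber, the mapping abstractions described in the abstract are presented as collapsing this kind of setup (audio graph construction, event handling, and parameter mapping) into as little as a single statement.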
Computer Music Journal (2013) 37 (4): 10–23.
Published: 01 December 2013
Abstract
The computer music community has historically pushed the boundaries of technologies for music-making, using and developing cutting-edge computing, communication, and interfaces in a wide variety of creative practices to meet exacting standards of quality. Several separate systems and protocols have been developed to serve this community, such as Max/MSP and Pd for synthesis and teaching, JackTrip for networked audio, MIDI/OSC for communication, as well as Max/MSP and TouchOSC for interface design, to name a few. With the still-nascent Web Audio API standard and related technologies, we are now, more than ever, seeing an increase in these capabilities and their integration in a single ubiquitous platform: the Web browser. In this article, we examine the suitability of the Web browser as a computer music platform in critical aspects of audio synthesis, timing, I/O, and communication. We focus on the new Web Audio API and situate it in the context of associated technologies to understand how well they together can be expected to meet the musical, computational, and development needs of the computer music community. We identify timing and extensibility as two key areas that still need work in order to meet those needs.
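The timing limitation identified in the abstract is commonly discussed in terms of the gap between JavaScript's coarse timer callbacks and the sample-accurate audio clock. One widely used workaround, sketched below rather than taken from the article, is lookahead scheduling against AudioContext.currentTime; the tempo, lookahead window, and polling interval shown are arbitrary.

// Lookahead scheduling sketch: a coarse setTimeout loop books events slightly
// ahead of time on the sample-accurate audio clock (AudioContext.currentTime).
const ctx = new AudioContext();
const tempo = 120;                       // beats per minute (arbitrary)
const lookahead = 0.1;                   // schedule 100 ms into the future
let nextNoteTime = ctx.currentTime;

function playClick(when) {
  // A short sine blip scheduled at an exact audio-clock time.
  const osc = ctx.createOscillator();
  const env = ctx.createGain();
  osc.frequency.value = 880;
  env.gain.setValueAtTime(0.3, when);
  env.gain.exponentialRampToValueAtTime(0.001, when + 0.05);
  osc.connect(env).connect(ctx.destination);
  osc.start(when);
  osc.stop(when + 0.05);
}

function scheduler() {
  // Book every note that falls within the lookahead window.
  while (nextNoteTime < ctx.currentTime + lookahead) {
    playClick(nextNoteTime);
    nextNoteTime += 60 / tempo;
  }
  setTimeout(scheduler, 25);             // the timer only needs to be roughly on time
}

scheduler();

The pattern works because the Web Audio API accepts future timestamps for start(), stop(), and parameter automation, so the setTimeout loop only needs to be approximately punctual while the audio clock guarantees sample-accurate onsets.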