I want to build my presentation around the question "Will there ever be a computational model of music perception?". For the visual system, people are developing models both for the early processing stages and, at a more advanced level, for scene and object recognition. I am not sure how far the corresponding development has progressed in the auditory domain.
One would have to discuss models of early auditory processing (things like loudness, pitch, harmony), then go on to aspects like rhythm, melody, and music itself. This raises the question of what to expect from a possible model of music perception: what could it practically account for?
I see at least one connection between the notion of "music" and possible neural models: music is a highly structured input and might therefore help to shape the neural circuitry for auditory processing, similar to how natural images (as structured visual input) can shape V1-like receptive fields under neurobiologically plausible plasticity rules. Felipe Gerhard 10:08, 10 July 2008 (CDT)
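To make the analogy concrete, here is a minimal toy sketch of the idea that a simple, biologically motivated plasticity rule lets structured input shape a neuron's tuning. It uses Oja's rule (a normalized Hebbian update) on synthetic correlated input standing in for natural images or sounds; the input statistics and all parameter values are illustrative assumptions, not part of the original comment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "structured input": correlated Gaussian samples with one
# dominant direction of variance, a stand-in for natural image patches
# (or sound snippets). The structure lives in the covariance.
dim = 16
direction = rng.normal(size=dim)
direction /= np.linalg.norm(direction)
cov = np.eye(dim) + 9.0 * np.outer(direction, direction)
samples = rng.multivariate_normal(np.zeros(dim), cov, size=5000)

# Oja's rule: Hebbian learning with implicit weight normalization.
# A single linear neuron's weights converge toward the input's first
# principal component -- structured input shaping the "receptive field".
w = rng.normal(size=dim)
w /= np.linalg.norm(w)
eta = 0.01  # learning rate (assumed small for stability)
for x in samples:
    y = w @ x                    # neuron's response
    w += eta * y * (x - y * w)   # Hebbian term minus decay

# After learning, the weight vector aligns with the dominant direction
# of the input statistics (cosine similarity close to 1).
alignment = abs(w @ direction) / np.linalg.norm(w)
print(alignment)
```

The same logic is what underlies the V1 analogy: with richer rules (e.g. sparse coding) and real natural images, the learned weights resemble oriented Gabor-like receptive fields; an analogous experiment could be run on spectrogram patches of music.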
In the end, it turned out to be a bit more general than I thought.
What is still needed for a complete article: build in the references and cross-links to other topics, and add the picture of the modular framework.
Felipe Gerhard 18:34, 6 September 2008 (CDT)