Natural sound representation
Four different representational hypotheses will be considered, each based on a different set of features: (i) low-level acoustical features (e.g., the time-varying output of cochlear filters); (ii) mid-level acoustical features (e.g., time-varying loudness); (iii) natural-sound categories (e.g., human-action sounds); (iv) within-category features (e.g., the state of matter of the vibrating object: solid vs. liquid vs. gas).
Each of the levels of this framework has been the object of much research on the cortical processing of natural sounds. Notably, however, the different levels have rarely been compared in their ability to account for neuroimaging data, precluding an accurate understanding of the mechanics of cortical processing (e.g., does the auditory cortex truly represent sound categories, or is it merely sensitive to systematic between-category differences in acoustical structure? Giordano et al., 2013).
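One standard way to compare such representational hypotheses against neuroimaging data is representational similarity analysis (RSA): each hypothesis is expressed as a stimulus-by-stimulus dissimilarity matrix computed from its feature set, and these model matrices are rank-correlated with the dissimilarity matrix of the measured neural activity patterns. The sketch below is purely illustrative, with randomly generated placeholder features and neural data; the feature spaces, dimensionalities, and distance metrics are assumptions, not the authors' actual pipeline.

```python
# Illustrative RSA-style model comparison: all data are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_sounds = 30

# Hypothetical feature spaces for the four representational hypotheses.
models = {
    "low-level acoustics": rng.normal(size=(n_sounds, 40)),   # e.g., cochlear filter outputs
    "mid-level acoustics": rng.normal(size=(n_sounds, 10)),   # e.g., loudness features
    "category":            rng.integers(0, 4, size=(n_sounds, 1)).astype(float),
    "within-category":     rng.normal(size=(n_sounds, 3)),    # e.g., material features
}

# Simulated neural activity patterns (sounds x measurement channels).
neural = rng.normal(size=(n_sounds, 100))
# Neural dissimilarity matrix (condensed form): 1 - Pearson correlation.
neural_rdm = pdist(neural, metric="correlation")

# Rank-correlate each model's dissimilarity matrix with the neural one;
# the best-correlated model is the preferred representational hypothesis.
for name, feats in models.items():
    model_rdm = pdist(feats, metric="euclidean")
    rho, _ = spearmanr(model_rdm, neural_rdm)
    print(f"{name}: rho = {rho:.3f}")
```

With real data, the key point of the framework above is that all four models are evaluated on the same neural dissimilarities, so a category model is preferred only if it explains variance beyond what the acoustical models already capture.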
Giordano, B.L., McAdams, S., Zatorre, R.J., Kriegeskorte, N., and Belin, P. (2013). Abstract encoding of auditory objects in cortical activity patterns. Cerebral Cortex 23, 2025-2037.
Giordano, B.L., Pernet, C., Charest, I., Belizaire, G., Zatorre, R.J., and Belin, P. (2014). Automatic domain-general processing of sound source identity in the left posterior middle frontal gyrus. Cortex 58, 170-185.