My PhD extends lessons learned from a radical embodied approach to human motor learning to the perception and production of new phonemes. Students of Speech and Language Therapy are typically encouraged to learn such novel sounds through various methods, including studying waveforms and spectrograms of their productions, or even MRI images of their own vocal tracts. I propose that the use of such abstractions, and the focusing of attention on the vocal apparatus, is misguided, since these are not ecologically relevant sources of information. Augmented feedback tends to be most useful when it facilitates the detection of naturally available information, such that this information can still be used in the control of action after the feedback is removed.
I will train subjects to produce new vowel sounds, providing real-time, continuous visual feedback of the formant frequencies that specify where their productions lie in vowel space. Using this information, coupled to their productions, they will aim to shift their vowel into a target region of that space. The coupled display will not only guide movement during training, but will also provide criterial feedback about what a successful production sounds and feels like, so that subjects can exploit this information when the display is no longer present.
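To make the feedback method concrete: a display of this kind needs a running estimate of the first two formants (F1, F2) from the microphone signal. The thesis text does not specify the analysis method, but a standard approach is linear predictive coding (LPC), where the resonances of the vocal tract appear as the roots of the LPC polynomial. The sketch below is a minimal, hypothetical implementation of that idea using only NumPy; the function name, model order rule of thumb, and bandwidth thresholds are illustrative choices, not the project's actual pipeline.

```python
import numpy as np

def estimate_formants(signal, fs, order=None):
    """Estimate formant frequencies (Hz) from one frame of speech via LPC.

    Uses autocorrelation LPC (Levinson-Durbin) and takes resonance
    frequencies from the angles of the LPC polynomial's complex roots.
    """
    if order is None:
        order = 2 + fs // 1000  # common rule of thumb for LPC order

    # Pre-emphasis to flatten the spectral tilt of voiced speech
    x = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    x = x * np.hamming(len(x))

    # Autocorrelation at lags 0..order
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]

    # Levinson-Durbin recursion for the LPC coefficients a[0..order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for i in range(1, order + 1):
        k = -np.dot(a[:i], r[i:0:-1]) / e
        a[: i + 1] = a[: i + 1] + k * a[: i + 1][::-1]
        e *= 1.0 - k * k

    # Complex roots in the upper half-plane correspond to resonances
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]
    freqs = np.angle(roots) * fs / (2 * np.pi)
    bandwidths = -0.5 * fs / np.pi * np.log(np.abs(roots))

    # Keep plausible formants: above ~90 Hz, reasonably narrow bandwidth
    return sorted(f for f, b in zip(freqs, bandwidths) if f > 90 and b < 400)
```

In a real-time display, this function would run on short overlapping frames (e.g. 20-30 ms), with the first two returned values plotted as a moving point in the F1-F2 plane against the target vowel region.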