
Wednesday 9 a.m.–12:20 p.m.

Digital signal processing through speech, hearing, and Python

Mel Chua


Why do pianos sound different from guitars? How can we visualize how deafness affects a child's speech? These are signal processing questions, traditionally tackled only by upper-level engineering students armed with MATLAB and differential equations; we're going to tackle them with algebra and basic Python skills. Based on a signal processing class for audiology graduate students, taught by a deaf musician.


One thing Python is great for is bringing "advanced" technical topics within the grasp of relative beginners. To illustrate, we'll be taking an upper-level electrical engineering course (Signals and Systems / Digital Signal Processing) that typically has 4-6 semesters of engineering/math/science coursework as a prerequisite... and teaching exactly the same concepts to an audience with only algebra and basic Python knowledge.

This workshop is based on a graduate course in signal processing for audiology doctoral students, and is being taught by a deaf engineering education researcher who is a musician, dancer, and polyglot. As such, the exercises and examples will be from the realms of speech, hearing, and music.

  • First, we'll introduce the time and frequency domains and the Fourier transform by making equalizers and modeling different sorts of hearing loss. What does your favorite song sound like to someone with this hearing profile? What does speech sound like?
  • We'll get into spectrograms and visualizations by introducing envelopes, impulse responses, and phonology. After discovering why an "A" sounds different from an "O" and what makes trumpets sound "brassy," we'll use visualizations to solve practical problems in speech and noise. For instance, a high-frequency loss makes plosives (sounds like "p" and "b") hard to hear, while vowels come through fine. Why? Can you predict which auditory situations will be more understandable?
  • How to break sounds: we'll play with clipping, undersampling, aliasing, and other Bad Techniques most audio folks try desperately to avoid, in order to find out why they make signals Sound Wrong.
  • Fun With Filtering: if you need to fit an 8kHz signal into a 2kHz bandwidth, what can you do to bring information-rich parts of the signal into perceptual range? We'll experiment with various techniques for implementing auditory superpowers, such as giving humans bat-like ultrasonic hearing while still retaining the ability to understand speech.
  • Other topics and labs depending on time and audience interest, including discussion on pedagogy and how this approach could be used for other "advanced" topics in engineering education.
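To give a taste of the first topic, here is a minimal sketch (not the workshop's actual materials) of how a few lines of NumPy can model a high-frequency hearing loss: take the Fourier transform, zero out everything above a cutoff, and transform back. The function name and the brick-wall cutoff are illustrative assumptions.

```python
import numpy as np

def simulate_high_frequency_loss(signal, sample_rate, cutoff_hz):
    """Crudely simulate a high-frequency hearing loss by zeroing
    every FFT bin above cutoff_hz (a brick-wall low-pass filter)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# A 1-second test tone: 440 Hz ("A") plus a 3 kHz overtone, sampled at 8 kHz.
rate = 8000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

# With a 1 kHz cutoff, the 3 kHz overtone vanishes; the 440 Hz tone survives.
filtered = simulate_high_frequency_loss(tone, rate, 1000)
```

Run the filtered signal through a music player instead of a test tone and you have "what does your favorite song sound like with this hearing profile?"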
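The aliasing mentioned in the sound-breaking topic is easy to state precisely: a tone above half the sample rate "folds" back down into the audible range. A small sketch of the folding rule (the helper name is our own):

```python
import numpy as np

def alias_frequency(f_signal, f_sample):
    """Apparent frequency of a pure tone after sampling at f_sample:
    fold f_signal into the Nyquist interval [0, f_sample / 2]."""
    folded = f_signal % f_sample
    return min(folded, f_sample - folded)

# A 3 kHz tone sampled at only 4 kHz lands at 1 kHz -- an alias.
n = np.arange(8)                                  # 8 samples at 4 kHz
undersampled = np.sin(2 * np.pi * 3000 * n / 4000)
alias = -np.sin(2 * np.pi * 1000 * n / 4000)      # same samples, opposite phase
```

The two arrays are identical sample for sample, which is exactly why the undersampled tone Sounds Wrong: the ear hears 1 kHz, not 3 kHz.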
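And for the filtering topic, one naive way to squeeze an out-of-range signal into perceptual range is to move every frequency component proportionally closer to 0 Hz. This is a toy sketch of that idea only (real frequency-lowering hearing aids use far more sophisticated schemes); the function name and the bin-remapping approach are our own illustration.

```python
import numpy as np

def compress_frequencies(signal, ratio):
    """Squeeze a signal's spectrum toward 0 Hz by moving every FFT bin
    to floor(bin * ratio) -- a crude form of frequency lowering."""
    spectrum = np.fft.rfft(signal)
    lowered = np.zeros_like(spectrum)
    for i, coeff in enumerate(spectrum):
        j = int(i * ratio)
        if j < len(lowered):
            lowered[j] += coeff
    return np.fft.irfft(lowered, n=len(signal))

rate = 8000
t = np.arange(rate) / rate
high_tone = np.sin(2 * np.pi * 3000 * t)          # inaudible with a 1 kHz loss
lowered = compress_frequencies(high_tone, 0.25)   # energy now sits at 750 Hz
```

Applied to ultrasound instead of a 3 kHz tone, the same trick is the seed of the "bat-like hearing" experiment.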

We will sing, dance, and make music. Bring headphones. If you play a portable instrument, bring it for an in-workshop jam session.