Sunday 10 a.m.–1 p.m. in Expo Hall
Don't Hear Music! See It!
The songs we hear today are complex and rich: multiple instruments, sound effects, and voices. When we listen to a song we may enjoy the overall sound, but we are rarely aware of every audio component (words, tones, etc.) that composes it. What if we could identify the single instrument or artificial effect that made us love the song? What if we could find the particular effect or frequency that bothers our ears? Or even build a mathematical model of musical taste?

In this proposal, we aim to analyze songs using Python (machine learning, signal processing, and audio libraries). The idea is to use machine learning techniques and signal processing tools to gain a better view of the songs we hear. The poster will show how to analyze music in both the time and frequency domains. We will visualize music, as we do with any other type of data, and decompose it into its building components. Then we will apply machine learning to find similarities between songs, model music tastes, and study how music characteristics evolve over time (for example, the popular frequencies of each decade). There will also be a live demo during the session.
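As a minimal sketch of the kind of analysis described above: the example below builds a frequency-domain "fingerprint" of a signal with SciPy and compares signals by cosine similarity. All signals and parameters here are invented for illustration (real songs would be loaded with an audio library); this is one possible approach, not necessarily the one shown on the poster.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 22050                     # assumed sample rate in Hz
t = np.arange(0, 2.0, 1 / fs)  # 2 seconds of audio in the time domain

# Hypothetical stand-ins for songs: mixtures of pure tones
song_a = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
song_b = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
song_c = np.sin(2 * np.pi * 3000 * t)

def spectral_profile(x, fs=fs):
    """Average magnitude per frequency bin -- a crude frequency 'fingerprint'."""
    freqs, times, Sxx = spectrogram(x, fs=fs, nperseg=1024)
    return Sxx.mean(axis=1)

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors (1.0 = identical shape)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

a, b, c = (spectral_profile(s) for s in (song_a, song_b, song_c))

# Songs sharing a dominant frequency (440 Hz) look more alike
# than a song built from an unrelated frequency
print(cosine_similarity(a, b) > cosine_similarity(a, c))  # True
```

The same spectral profiles could feed a clustering or nearest-neighbor model to group songs by taste, or be aggregated per decade to study how popular frequencies change over time.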