Automatic drum transcription using the student-teacher learning paradigm with unlabeled music data

by Chih-Wei Wu

Building a computer system that “listens” to and “understands” music is the goal of many researchers working in the field of Music Information Retrieval (MIR). To achieve this objective, identifying effective ways of translating human domain knowledge into computer language is key. Machine learning (ML) promises to provide methods to fulfill this goal. In short, ML algorithms are capable of making decisions (or predictions) in a way that is similar to human experts; they achieve this by observing patterns in so-called “training data.” When large amounts of data are available (e.g., images and text), modern ML systems can perform comparably to or even outperform human experts in tasks such as object recognition in images.

Similarly, (openly available) data plays an essential role in training a successful ML model for MIR tasks. Useful training data usually includes both the raw data (e.g., audio or video files) and annotations that describe the answer for a certain task (such as the music genre or the tempo). With a reasonable amount of data and correct ground truth labels, an ML model can learn a function that maps the raw data to the corresponding answers.

One of the first questions new researchers ask is: “How much data is needed to build a good model?” The short answer is: the more, the better. This answer may be a little unsatisfying, but it is often true for ML algorithms (especially for the increasingly popular deep neural networks!). Human annotation of data, however, is labor-intensive and does not scale well. The situation gets worse when the target task requires highly skilled annotators and crowdsourcing is not an option. Automatic Drum Transcription (ADT), the process of extracting drum events from audio signals, is a good example of such a skill-demanding task. To date, most existing ADT datasets are either too small or too simple (synthetic).

To find a potential solution to this problem, we explore the possibility of having ML systems learn from data without labels (as shown in Fig. 1).

Figure 1: The concept of learning from unlabeled data

Unlabeled data has the following advantages: 1) it is easier to obtain than labeled data, 2) it is diverse, and 3) it is realistic.

We explore a fascinating way of using unlabeled data referred to as the “student-teacher” learning paradigm. In a way, it uses “machines to teach machines.” Since researchers have built drum transcription systems before, these existing systems can be utilized as teachers. Multiple teachers “transfer” their knowledge to the student, with the unlabeled data serving as the medium that carries this knowledge. The teachers make their predictions on the unlabeled data, and the student tries to mimic the teachers’ predictions, becoming better and better at the task. Of course, the teachers might be wrong, but the assumption is that multiple teachers and a large amount of data will compensate for this.

Figure 2: System flowchart

Figure 2 shows the presented system, which consists of a training phase and a testing phase. During the training phase, all teacher models generate predictions on the unlabeled data. These predictions become “soft targets,” or pseudo ground truth. Next, the student model is trained on the same unlabeled data with these soft targets. In the testing phase, the trained student model is evaluated against an existing labeled dataset.
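
The training recipe is simple enough to sketch in a few lines. Below is a minimal sketch in Python, assuming each teacher exposes a predict() method that returns frame-level drum activations for a matrix of audio features; averaging the teachers' outputs and using a scikit-learn network as the student are our simplifications for illustration, not necessarily the exact setup of the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_soft_targets(teachers, features):
    """Average the teachers' frame-level predictions into pseudo ground truth."""
    predictions = [teacher.predict(features) for teacher in teachers]
    return np.mean(predictions, axis=0)  # shape: (n_frames, n_drum_classes)

def train_student(teachers, unlabeled_features):
    """Fit a student network that mimics the teachers on unlabeled data."""
    soft_targets = make_soft_targets(teachers, unlabeled_features)
    student = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=200)
    student.fit(unlabeled_features, soft_targets)  # regress onto the soft targets
    return student
```

Note that nothing in this loop ever touches a human label; labeled data is only needed in the testing phase.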

The exciting (preliminary) result of this research is that the student model is actually able to outperform the teachers! Our evaluation shows that it is possible to obtain a student model that outperforms the teacher models on certain drum instruments in the ADT task. This finding is encouraging and shows the potential benefits of working with unlabeled data.

For more information, please refer to our full paper. The unlabeled dataset can be found on github.

Objective descriptors for the assessment of student music performances

by Amruta Vidwans

Learning a musical instrument is difficult. It requires regular practice, expert advice, and supervision. Even today, musical training is largely driven by the interaction between a student and a human teacher, plus individual practice sessions at home.

Can technology improve this process and the learning experience? Can an algorithm assess a student music performance? If yes, we are one step closer to a truly musically intelligent music tutoring system that supports students in learning their instrument of choice by providing feedback on aspects such as rhythmic correctness and note accuracy. An automatic assessment is not only useful to students in their practice sessions but could also help band directors in the auditioning and (pre-)selection process. While there are a few commercial products for practicing instruments, the assessment in these products is usually either trivial or opaque to the user.

The realization of a musically intelligent system for music performance assessment requires knowledge from multiple disciplines such as digital signal processing, machine learning, audio content analysis, musicology, and music psychology. With recent advances in Music Information Retrieval (MIR), noticeable progress has been made in related research topics.

Despite these efforts, identifying a reliable and effective method for assessing music performances remains an unsolved problem. In our study, we explore the effectiveness of various objective descriptors by comparing three sets of features extracted from the audio recording of a music performance: (i) a baseline set of common low-level features (often used but hardly meaningful for this task), (ii) a score-independent set of designed performance features (custom-designed descriptors such as pitch deviation, computed without knowledge of the musical score), and (iii) a score-based set of designed performance features (taking advantage of the known musical score). The goal is to identify a set of meaningful objective descriptors for the general assessment of student music performances. Our data covers alto saxophone recordings from three years of student auditions (Florida state auditions), rated by experts in the assessment categories musicality, note accuracy, rhythmic accuracy, and tone quality.
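
To make the idea of a “designed performance feature” concrete, here is a minimal sketch of one plausible score-independent descriptor: the average deviation of the performed pitch from the nearest equal-tempered semitone, in cents. The function and the summary statistic are illustrative; the feature set used in the study is more extensive:

```python
import numpy as np

def pitch_deviation(f0_hz):
    """Mean absolute deviation (in cents) from the nearest equal-tempered semitone."""
    f0_hz = f0_hz[f0_hz > 0]                 # keep voiced frames only
    midi = 69 + 12 * np.log2(f0_hz / 440.0)  # Hz -> fractional MIDI pitch
    cents = 100 * (midi - np.round(midi))    # offset from the nearest semitone
    return np.mean(np.abs(cents))
```

In practice, the f0 contour would come from a monophonic pitch tracker run on the audition recording.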

Table 1: Correlation (r) between model predictions and expert assessments (label: Musicality)

Feature set        E1     E2     E3     E4
Correlation (r)    0.19   0.49   0.56   0.58

Our observations (see Table 1) are that, as expected, the baseline features (E1) do not capture any qualitative aspects of the music performance, so the regression model mostly fails to predict the expert assessments. Another expected result is that the score-based features (E3) represent the data better than the score-independent features (E2) in all categories. The combination of score-independent and score-based features (E4) shows a trend toward improved results, but the gain remains small, hinting at redundancies between the feature sets. With correlations between 0.5 and 0.65 between the predictions and the human assessments, there is still a long way to go before computers can reliably assess student music performances, but the results show that an automatic assessment is possible to a certain degree.
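
For reference, a correlation value like those in Table 1 can be produced with a few lines of scikit-learn and scipy; the choice of support vector regression and the simple train/test split below are assumptions for this sketch, not necessarily the exact experimental protocol of the paper:

```python
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

def evaluate_feature_set(X, expert_ratings):
    """Fit a regressor on one feature set; return correlation r on held-out data."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, expert_ratings, test_size=0.3, random_state=0)
    model = SVR().fit(X_train, y_train)
    r, _ = pearsonr(model.predict(X_test), y_test)
    return r  # compare across the feature sets E1..E4 as in Table 1
```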

To learn more, please see the published paper for details.

Header image used with kind permission of Rachel Maness from http://wrongguytoask.blogspot.com/2012/08/woodwinds.html

Older Projects

Assessment of Music Performances
Design and evaluation of features for the characterization of (student) music performances, and creation of models to automatically assess these performances, detect errors, and give instantaneous feedback to the performer.

Resources
Source repository: github
Publications:
– Wu, C.-W.; Gururani, S.; Laguna, C.; Pati, A.; Vidwans, A.; Lerch, A., Towards the Objective Assessment of Music Performances, Proceedings of the International Conference on Music Perception and Cognition (ICMPC), San Francisco, 2016
Contributors (current)
Siddharth Kumar Gururani, Chris Laguna, Ashis Pati, Amruta Jayant Vidwans, Chih-Wei Wu
Contributors (past)
Cian O’Brien, Yujia Yan, Ying Zhan

Automatic Drum Transcription (PhD Project)
Automatic drum transcription in polyphonic mixtures of music using a signal-adaptive NMF-based method.
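
As a rough illustration of the underlying idea (not the signal-adaptive algorithm from the publications below), the following Python sketch estimates drum activations with standard multiplicative NMF updates while keeping the drum templates fixed; the “partially fixed” variant additionally adapts extra templates to absorb the non-drum content:

```python
import numpy as np

def drum_activations(V, W_drums, n_iter=100, eps=1e-12):
    """Estimate activations H such that V is approximated by W_drums @ H.

    V: magnitude spectrogram (n_bins, n_frames)
    W_drums: fixed spectral templates, one column per drum instrument
    """
    H = np.random.rand(W_drums.shape[1], V.shape[1])
    for _ in range(n_iter):
        # multiplicative update for H under the Euclidean cost; W stays fixed
        H *= (W_drums.T @ V) / (W_drums.T @ W_drums @ H + eps)
    return H  # drum onsets appear as peaks in the rows of H
```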

Resources
Source repository: github
Publications:
– Wu, C.-W.; Lerch, A., On Drum Playing Technique Detection in Polyphonic Mixtures, Proceedings of the International Conference on Music Information Retrieval (ISMIR), New York, 2016
– Wu, C.-W.; Lerch, A., Drum Transcription using Partially Fixed Non-Negative Matrix Factorization With Template Adaptation, in Proceedings of the International Conference on Music Information Retrieval (ISMIR), Malaga, 2015.
– Wu, C.-W.; Lerch, A., Drum Transcription using Partially Fixed Non-Negative Matrix Factorization, Proceedings of the European Signal Processing Conference (EUSIPCO), Nice, 2015.
Contributors
Chih-Wei Wu

Audio Quality Enhancement (MS Project)
Web application to improve the audio quality of low-quality recordings (especially low-quality mobile phone recordings). Processing steps include clipping detection and correction (declipping), noise removal, loudness normalization, and equalization. The REPAIR Web App allows users to upload low-quality audio and download the improved audio.
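
As an example of the first processing step, a naive clipping detector can simply look for runs of consecutive samples pinned near full scale; the threshold and minimum run length below are placeholders, and the published declipping algorithm is considerably more refined:

```python
import numpy as np

def find_clipped_regions(x, threshold=0.99, min_run=3):
    """Return (start, end) sample index pairs of likely clipped runs in x."""
    clipped = np.concatenate(([False], np.abs(x) >= threshold, [False]))
    edges = np.flatnonzero(np.diff(clipped.astype(int)))
    starts, ends = edges[::2], edges[1::2]   # rising/falling edges pair up
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_run]
```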

Resources
Web Application: REPAIR Web App
Source repository: github
Publications:
– Laguna, C., A Web Application for Audio Quality Enhancement, MS Project Report, Georgia Institute of Technology, 2016
– Laguna, C.; Lerch, A., An Efficient Algorithm for Clipping Detection and Declipping Audio, Proceedings of the 141st AES Convention, Los Angeles, 2016
– Laguna, C.; Lerch, A., Client-Side Audio Declipping, Proceedings of the 2nd Web Audio Conference (WAC), Atlanta, 2016
Contributors
Chris Laguna

Outlier detection in music datasets (Cooperation with Virginia Tech)
Unsupervised detection of anomalies in music datasets.

Resources
Publications:
– Lu, Y.-C.; Wu, C.-W.; Lu, C.T.; Lerch, A., Automatic Outlier Detection in Music Genre Datasets, Proceedings of the International Conference on Music Information Retrieval (ISMIR), New York, 2016
– Lu, Y.-C.; Wu, C.-W.; Lu, C.-T.; Lerch, A., An Unsupervised Approach to Anomaly Detection in Music Datasets, Proceedings of the ACM SIGIR Conference (SIGIR), Pisa, 2016
Contributors
Chih-Wei Wu

Automatic Practice Logging (Semester Project)
Automatic identification of continuous recordings of musicians practicing their repertoire. The goal is a detailed description of what and where they practiced, which can be used by students and instructors to communicate about the countless hours spent practicing.

Resources
Publications:
– Winters, R. M.; Gururani, S.; Lerch, A., Automatic Practice Logging: Introduction, Dataset & Preliminary Study, Proceedings of the International Conference on Music Information Retrieval (ISMIR), New York, 2016
Source repository: github
Contributors
R. Michael Winters, Siddharth Kumar Gururani

Machine Listening Module (MS Project)
Machine listening provides a set of data with which music can be synthesized, modified, or sonified. Real time audio feature extraction opens up new worlds for interactive music, improvisation, and generative composition. Promoting the use of machine listening as a compositional tool, this project brings the technique into DIY embedded systems such as the Raspberry Pi, integrating machine listening with analog synthesizers in the eurorack format.

Resources
Source repository: github
Project Report:
– Latina, C., Machine Listening Eurorack Module, MS Project Report, Georgia Institute of Technology, 2016.
Contributors
Chris Latina

Sample detection in Polyphonic Music
Sampling, the usage of snippets or loops from existing songs or libraries in new music productions or mashups, is a common technique in many music genres. The goal of this project is to design an NMF-based algorithm that is able to detect the presence of an audio sample in a set of tracks. The sample may be pitch-shifted or time-stretched, so the algorithm should ideally be robust against such manipulations.

Resources
Contributors
Siddarth Kumar

Web Resources for Audio Content Analysis
Online resources for tasks related to music information retrieval and machine learning, including matlab files, a list of datasets, and exercises.

Resources
WWW: AudioContentAnalysis.org
Contributors
Alexander Lerch

Other Projects

Application of MIR Techniques to Medical Signals
Based on the physionet.org challenge dataset for reducing false alarms in ECG and blood pressure signals, MIR approaches are investigated for the detection of alarm situations in the intensive care unit. Five alarm types are detected: asystole, extreme bradycardia, extreme tachycardia, ventricular tachycardia, and ventricular flutter.

Resources
Contributors
Amruta Vidwans

Real-time speaker annotation in conference settings
Generating a transcript of a conference meeting requires not only the transcription of text but also assigning the text to specific speakers. This system is designed to detect an unknown number of speakers and assign text to these speakers in a real-time scenario.

Resources
Source repository: github
Contributors
Avrosh Kumar

Application for Vocal Training and Assessment using Real-Time Pitch Tracking
A cross-platform application for vocal training and evaluation using monophonic pitch tracking. The system takes real-time voice input from the standard microphones available in most mobile devices. The assessment is carried out with respect to reference vocal lessons, based on pitch and timing accuracy. Real-time feedback is provided to the user in the form of a pitch contour plotted against the reference pitch to be sung.
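
A minimal offline sketch of the core comparison in Python (the app performs this in real time on the device; the file names and the reference-contour format below are made up for illustration):

```python
import numpy as np
import librosa

# track the sung pitch with the pYIN monophonic pitch tracker
y, sr = librosa.load('student_take.wav', sr=None, mono=True)
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz('C2'),
                             fmax=librosa.note_to_hz('C6'), sr=sr)

# compare against a stored reference contour (same hop size, hypothetical file)
reference = np.load('reference_f0.npy')
n = min(len(f0), len(reference))
valid = voiced[:n] & (reference[:n] > 0)       # frames both sung and notated
error_cents = 1200 * np.log2(f0[:n][valid] / reference[:n][valid])
print('mean absolute pitch error: %.1f cents' % np.mean(np.abs(error_cents)))
```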

Resources
Source repository: github
Project Report:
– Pati, A., An Application for Vocal Training and Evaluation using Real-time Monophonic Pitch Tracking, Technical Report, Georgia Tech, 2015.
Contributors
Ashis Pati

Vocopter Singing Game
Vocopter is a mobile game adapted from the classic Copter game. It allows a playful approach to assessing the accuracy of intonation.

Resources
Source repository: github
Contributors
Rithesh Kumar

Project Riyaaz
Riyaaz is an Urdu word meaning devoted practice. The project aims at implementing an app that aids the practice of Indian classical vocal music. It requires the student to pass through a curriculum of exercises designed to strengthen their grasp of Swara (tonality, pitch) and Tala (rhythm). The interface provides real-time graphical feedback to help students improve their skills.

Resources
Contributors
Milap Rane

Audio-Adaptive Visual Animations of Paintings

Original oil on canvas painting: Dusan Malobabic

A painting is an expression frozen in time. It is the imagination of the viewer that paints the untold past and the future of the captured moment. This project is an attempt to induce movements in a painting evoked by sounds or music. The idea is to extract various descriptors from music, for example onsets and tonal content, and map them to a function that processes an image, bringing it to life as well as enhancing the music listening experience.

Resources
Contributors
Avrosh Kumar

Automatic Audio-Lyrics Alignment
Automatic alignment of song lyrics to audio recordings at the line level. The alignment makes use of voice activity detection, pitch detection, and the detection of repeating structures.

Resources
Contributors
Amruta Vidwans

Genre-specific Key Profiles
Investigation of differences and commonalities of audio pitch class profiles of different musical genres.

Resources
Publication: O’Brien, C.; Lerch, A., Genre-Specific Key Profiles, Proceedings of the International Computer Music Conference (ICMC), Denton, 2015.
Contributors
Cian O’Brien

Supervised Feature Learning via Sparse Coding for Music Information Retrieval
Sparse coding allows learning features from a dataset in an unsupervised way. We investigate how added supervised training functionality can improve the descriptiveness of the learned features.

Resources
Thesis: smartech
Contributors
Cian O’Brien

Real-time Onset Detection
Design of an onset detection algorithm suitable for real-time processing and low-latency live input scenarios.

Contributors
Rithesh Kumar

Predominant Instrument Recognition in Polyphonic Audio
Identification of a single predominant instrument per audio file using pitch features, timbre features and features extracted from short-time harmonics.

Contributors
Chris Laguna

Time-Domain Multi-Pitch Detection with Sparse Additive Modeling
Frame-level multi-pitch detection in the time domain with locally periodic kernel functions and sparsity constraints.

Contributors
Yujia Yan

Identification of live music performance via ambient audio content features
Automatic identification of recordings of live performance as opposed to studio recordings.

Resources
dataset: github
Contributors
Raja Raman

Wiki tutorial for running SuperCollider on Raspberry Pi
Various tutorials on the installation and configuration of SuperCollider on a Raspberry Pi.

Resources
WWW: Embedded Music Page
Contributors
Chris Latina

Metric Learning for Music Discovery with Source and Target Playlists
Playlist generation for music exploration by defining sets of source songs and target songs and deriving a playlist through metric learning and boundary constraints.

Resources
slides: presentation
Contributors
Ying-Shu Kuo

Audio Chord Detection Using Deep Learning
Improve audio chord detection by using a Deep Network to extract the tonal features from the audio.

Resources
Publication: Zhou, X.; Lerch, A., Chord Detection Using Deep Learning, in Proceedings of the International Conference on Music Information Retrieval (ISMIR), Malaga, 2015.
Contributors
Xinquan Zhou
