Introducing an audio labeling toolkit

Earlier this year, Pop Up worked with Tanya Clement and Steve McLaughlin of the UT-Austin School of Information on a massive effort to use machine learning to identify notable speakers’ voices (for example, Martin Luther King, Jr.) from within the American Archive of Public Broadcasting’s 70,000 digitized audio and video recordings. Now, Tanya and Steve are sharing DIY techniques for using free machine learning algorithms to help label speakers in “unheard” audio.

This is a huge and hugely important effort: a model that can identify a single speaker’s voice has vast potential implications for the ability to see inside audio, making the content accessible to researchers, organizations, and the general public. That the toolkit they’re sharing is DIY means it’s appropriate for use by programming novices who may be working on their first audio project.

Read the full post here. (This is one in a series, so stay tuned for more. Also — anyone currently working on archival sound projects can access another related tool called the Audio Tagging Toolkit.)
