Code Tips for Improving Mac & iOS Pitch Detection, from Ariel Ramos


Ariel Ramos is the talented developer and musician behind mDecks Music Apps.

You can read more here about Ariel’s awesome apps that use AudioKit.

Ariel was kind enough to take time out of his busy schedule to share some Mac & iOS audio development tips that he learned along the way. Without any further ado, here’s Ariel:


By Ariel J. Ramos:

Composer, Pianist and Music app developer

Our apps need to interact with the user in many ways: from playing simple chords using sampled sounds to complex accompaniments. They need to receive and send MIDI data, and also listen to audio input from the user and convert it into useful musical information.

In See Music, a sight-reading app that listens to the player and gives instant note-by-note feedback on their performance, we were able not only to identify pitch, but also to transcribe the entire performance into standard music notation and include a pitch-accuracy report on every note in the score.

When we were designing the app, the hardest decision was what to use for pitch recognition.

Implementing code that analyzes audio and turns it into pitch and length information involves lots of advanced math and low-level access to memory and functions. As we delved deeper into the process, we soon realized it was much tougher than expected.

After finding AudioKit, we realized that 90% of the work had already been done. The library is simple enough to incorporate into a project, well documented, and works really well.

We were able to solve the entire process by just using AKFrequencyTracker, which returns frequency and amplitude.
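
Getting a live frequency and amplitude reading from AKFrequencyTracker takes only a few lines. Here's a minimal sketch in the style of the AudioKit 4 API (exact initializer signatures vary slightly between 4.x releases, so treat this as an illustration rather than our exact setup):

```swift
import AudioKit

// Minimal AudioKit 4-style analysis chain: microphone -> tracker -> silenced output.
func startTracking() throws -> AKFrequencyTracker {
    let mic = AKMicrophone()
    let tracker = AKFrequencyTracker(mic)
    // A gain of 0 lets us analyze the input without playing it back.
    AudioKit.output = AKBooster(tracker, gain: 0)
    try AudioKit.start()
    return tracker
}

// tracker.frequency and tracker.amplitude now update continuously.
```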

Since we wanted to analyze an entire musical phrase, we needed something a bit more elaborate than a simple tuner.

In our solution, we used a timer to store all the data received from the tracker.

The readAndSaveNotes function simply stores the data at regular intervals (timeBetweenReads) with three different listening modes (readStyle), as sketched below.
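
The original listing isn't reproduced here, but the idea looks roughly like the following sketch. Only readAndSaveNotes, timeBetweenReads, and readStyle come from the description above; the concrete mode names, interval, and noise threshold are our assumptions for illustration:

```swift
import AudioKit
import Foundation

// Listening modes (these three concrete modes are illustrative assumptions).
enum ReadStyle {
    case continuous   // store every read
    case aboveNoise   // skip reads below an amplitude floor
    case onsetOnly    // store only reads that look like a new note attack
}

struct PitchRead {
    let time: TimeInterval
    let frequency: Double
    let amplitude: Double
}

final class PhraseRecorder {
    var reads: [PitchRead] = []
    let timeBetweenReads: TimeInterval = 0.02   // 50 reads per second (assumed value)
    let readStyle: ReadStyle = .aboveNoise
    private let noiseFloor = 0.01               // assumed amplitude threshold
    private var timer: Timer?
    private let startDate = Date()

    func start(tracker: AKFrequencyTracker) {
        timer = Timer.scheduledTimer(withTimeInterval: timeBetweenReads,
                                     repeats: true) { [weak self] _ in
            self?.readAndSaveNotes(tracker: tracker)
        }
    }

    // Store the tracker's current values at regular intervals.
    // (.onsetOnly handling is omitted in this sketch.)
    func readAndSaveNotes(tracker: AKFrequencyTracker) {
        let read = PitchRead(time: Date().timeIntervalSince(startDate),
                             frequency: tracker.frequency,
                             amplitude: tracker.amplitude)
        if readStyle == .aboveNoise && read.amplitude < noiseFloor {
            return // treat as background noise
        }
        reads.append(read)
    }
}
```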

We found the biggest challenges were: how to ignore background noise, how to re-interpret frequency based on the instrument’s timbre, and how to accurately determine the starting and ending time of each note (since the player is playing a melodic line).

Since See Music is an app for all instruments, it must correctly interpret the notes played by instruments with different timbres.

The weight of the overtones is different on every instrument, so the frequencies collected with AKFrequencyTracker on a single note are usually a set of related frequencies based on the instrument’s timbre.
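
For example, one simple way to deal with overtones is to check whether a collected frequency sits near an integer multiple of a candidate fundamental. The helper below is a sketch of that idea, not our production code; the tolerance value is an assumption:

```swift
import Foundation

// Does `frequency` sit near an integer multiple of `fundamental`?
// Tolerance is expressed in cents (hundredths of a semitone).
func isOvertone(_ frequency: Double, of fundamental: Double,
                toleranceCents: Double = 30) -> Bool {
    guard fundamental > 0, frequency > fundamental else { return false }
    let ratio = frequency / fundamental
    let nearestHarmonic = ratio.rounded()
    guard nearestHarmonic >= 2 else { return false }
    // Deviation of the ratio from the exact harmonic, in cents.
    let cents = 1200 * log2(ratio / nearestHarmonic)
    return abs(cents) <= toleranceCents
}

// isOvertone(880, of: 440)    -> true (2nd harmonic)
// isOvertone(1318.5, of: 440) -> true (close to the 3rd harmonic, 1320 Hz)
```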

We found the best way to achieve this was to parametrize the way we collect the data from AKFrequencyTracker based on each instrument.

Here’s an example of the parameter settings for a default instrument:
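
(The original listing was not preserved with this post, so the struct below is a hypothetical reconstruction: every name and value is an assumption, meant only to show the kind of per-instrument knobs involved.)

```swift
import Foundation

// Hypothetical per-instrument collection parameters (illustrative only).
struct InstrumentReadParameters {
    var timeBetweenReads: TimeInterval = 0.02  // how often to poll the tracker
    var noiseFloor: Double = 0.01              // reads below this amplitude are ignored
    var minNoteDuration: TimeInterval = 0.05   // shorter events are treated as glitches
    var lowestFrequency: Double = 27.5         // A0: reads below are discarded
    var highestFrequency: Double = 4186.0      // C8: reads above are likely overtones
}

let defaultInstrument = InstrumentReadParameters()
```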

Also, to identify the notes, don’t forget to reduce the frequencies to the pitch and octave that make sense on the instrument.

Here’s a simple class we used to reduce the frequencies and identify notes:
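
(The class itself isn’t reproduced here. The core of the reduction is the standard equal-temperament mapping midi = 69 + 12 · log2(f / 440); below is a minimal sketch of a note identifier built on that formula, with names of our choosing rather than the original class.)

```swift
import Foundation

// Reduce a raw frequency to the nearest equal-tempered pitch.
final class NoteIdentifier {
    static let noteNames = ["C", "C#", "D", "D#", "E", "F",
                            "F#", "G", "G#", "A", "A#", "B"]

    /// Returns note name, octave, and deviation in cents for a frequency in Hz.
    func identify(frequency: Double) -> (name: String, octave: Int, cents: Double)? {
        guard frequency > 0 else { return nil }
        let midi = 69.0 + 12.0 * log2(frequency / 440.0)   // MIDI note number
        let nearest = Int(midi.rounded())
        let cents = (midi - Double(nearest)) * 100.0        // within +/- 50 cents
        let name = NoteIdentifier.noteNames[((nearest % 12) + 12) % 12]
        let octave = nearest / 12 - 1                       // MIDI 60 = C4
        return (name, octave, cents)
    }
}

// Example: NoteIdentifier().identify(frequency: 446)
// -> ("A", 4, ~23.5): an A4 played about 23 cents sharp.
```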

Great job, Ariel!

Want to learn more? You can read more about Ariel’s awesome apps that use AudioKit.

Explore all of the mDecks Music Apps at mDecks.com.

You can follow Ariel on Twitter at: @mDecksMusic
