Algorithm to get the Key and Scale from musical notes? [closed]

Given a series of MIDI notes stored in an array (as MIDI note numbers), is there an algorithm to get the most likely key or scale implied by those notes?

Minutiae asked 6/2, 2013 at 17:01 Comment(6)
There are a few methods for doing this. Is your series of notes just one note at a time, or do you have chords?Changeling
I doubt it's possible. Just for example, every major scale has a "relative minor" scale, meaning exactly the same sequence of notes can be viewed as either of two entirely different scales (e.g., C major is also A minor).Jeminah
@Brad: It's a series of notes, one note at a time. I don't have any chords.Minutiae
@JerryCoffin: If we get the key of the song first, then we can tell whether it's C major or A minor.Minutiae
@JerryCoffin, there are several algorithms for doing this with decent confidence. They often work the same way humans do: contextual clues.Changeling
For a single diatonic scale (the same 7 notes) there are actually 7 different modes; major and minor are only two of them.Taillight

If you're using Python, you can use the music21 toolkit to do this:

import music21

# Parse the MIDI file, then run music21's key analysis on the score.
score = music21.converter.parse('filename.mid')
key = score.analyze('key')
print(key.tonic.name, key.mode)

If you care about specific algorithms for key finding, you can use them instead of the generic "key":

key1 = score.analyze('Krumhansl')
key2 = score.analyze('AardenEssen')

etc. Any of these methods will work for chords also.

(Disclaimer: music21 is my project, so of course I have a vested interest in promoting it; but you can look at the music21.analysis.discrete module to take ideas from there for other projects/languages. If you have a MIDI parser, the Krumhansl algorithm is not hard to implement).

Mood answered 7/2, 2013 at 15:56 Comment(2)
Do you have support for raw audio, in terms of either parsing to MIDI or directly applying these algorithms to audio files?Cosmetician
Very little, but not zero, support. See the music21.audioSearch module. However, you're much better off using a dedicated audio-to-MIDI or audio-to-MusicXML program and then loading those results into music21.Mood

The algorithm by Carol Krumhansl is the best known. The basic idea is very straightforward. A reference sample of pitches is drawn from music in a known key and transposed to the other 11 keys; major and minor keys must be handled separately. Then a sample of pitches is drawn from the music in an unknown key. This yields a 12-component pitch vector for each of the 24 reference samples and one for the unknown sample, something like:

[ I,    I#,   II,   II#,  III,  IV,   IV#,  V,    V#,   VI,   VI#,  VII  ]
[ 0.30, 0.02, 0.10, 0.05, 0.25, 0.20, 0.03, 0.30, 0.05, 0.13, 0.10, 0.15 ]

Compute the correlation coefficient between the unknown pitch vector and each reference pitch vector and choose the best match.
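
For a concrete picture, here is a minimal Python sketch of that correlation step (my own illustration, not Krumhansl's code). The profile values are the published Krumhansl-Kessler key profiles; the function name and the pitch_histogram input (12 note counts or durations, pitch class C = 0) are assumptions of the sketch:

import numpy as np

# Published Krumhansl-Kessler key profiles, indexed from the tonic
# upward in semitones.
MAJOR_PROFILE = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR_PROFILE = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]

def find_key(pitch_histogram):
    # pitch_histogram: 12 note counts or durations, pitch class C = 0.
    best = (-2.0, None, None)  # correlations lie in [-1, 1]
    for mode, profile in (('major', MAJOR_PROFILE), ('minor', MINOR_PROFILE)):
        for tonic in range(12):
            # Rotate the profile so that index `tonic` is the tonic.
            rotated = [profile[(pc - tonic) % 12] for pc in range(12)]
            r = np.corrcoef(pitch_histogram, rotated)[0, 1]
            if r > best[0]:
                best = (r, tonic, mode)
    return best[1], best[2]  # (tonic pitch class, 'major' or 'minor')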

Craig Sapp has written (copyrighted) code, available at http://sig.sapp.org/doc/examples/humextra/keycor/

David Temperley and Daniel Sleator developed a different, more difficult algorithm as part of their (copyrighted) Melisma package, available at http://www.link.cs.cmu.edu/music-analysis/ftp-contents.html

A (free) Matlab version of the Krumhansl algorithm is available from T. Eerola and P. Toiviainen in their Midi Toolbox: https://www.jyu.fi/hum/laitokset/musiikki/en/research/coe/materials/miditoolbox

Penates answered 23/2, 2013 at 22:19 Comment(1)
A nice description of the Krumhansl-Schmuckler key-finding algorithm can be found here: rnhart.net/articles/key-findingUnamerican

There are a number of key-finding algorithms around, in particular those of Carol Krumhansl (most papers I've seen cite Krumhansl's methods).

Ploughman answered 6/2, 2013 at 17:10 Comment(0)

Assuming no key changes, a simple algorithm could be based on a pitch class histogram: an array with 12 entries, one for each pitch class (each note in an octave). When you get a note, add one to the corresponding entry. At the end, the two most frequent notes will very likely be 7 semitones (or entries) apart, representing the tonic and the dominant; the tonic is the note you're looking for, and the dominant is 7 semitones above it (or, equivalently, 5 semitones below).

The good thing about this approach is that it's scale-independent, it relies on the tonic and the dominant being the two most important notes and occurring more often. The algorithm could probably be made more robust by giving extra weight to the first and last notes of large subdivisions of a piece.
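
A minimal sketch of this idea in Python, assuming the input is a plain list of MIDI note numbers (the function name and the fallback when no fifth relation is found are my own choices):

def guess_tonic(midi_notes):
    # Build the pitch class histogram: one bin per note in the octave.
    histogram = [0] * 12
    for note in midi_notes:
        histogram[note % 12] += 1

    # The two most frequent pitch classes, most frequent first.
    first, second = sorted(range(12), key=histogram.__getitem__,
                           reverse=True)[:2]

    # If the runner-up lies 7 semitones above the winner, the winner is
    # the tonic and the runner-up its dominant; if it is the other way
    # round, the runner-up is the tonic.
    if (second - first) % 12 == 7:
        return histogram, first
    if (first - second) % 12 == 7:
        return histogram, second
    return histogram, first  # no fifth relation: take the most frequent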

As for detecting the scale: once you have the key, you can list the notes that occur above a certain threshold in your histogram as offsets from that root note. Say you detect a key of A (from A and E occurring most often) and the notes you have are A, C, D, E and G; you would obtain the offsets 0 3 5 7 10, which, looked up in a database like this one, would give you "Minor Pentatonic" as a scale name.
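
Sketched in Python, continuing from guess_tonic above (the threshold and the contents of the lookup table are illustrative, not a real scale database):

# A tiny illustrative lookup table; a real one would hold the scales
# from a database like the one linked above.
SCALES = {
    (0, 3, 5, 7, 10): 'Minor Pentatonic',
    (0, 2, 4, 5, 7, 9, 11): 'Major',
}

def guess_scale(histogram, tonic, threshold=1):
    # Offsets from the tonic of every pitch class above the threshold.
    offsets = tuple(sorted((pc - tonic) % 12
                           for pc in range(12) if histogram[pc] >= threshold))
    return SCALES.get(offsets, 'unknown')

# With the example above, notes A C D E G and tonic A (pitch class 9)
# give offsets (0, 3, 5, 7, 10), i.e. 'Minor Pentatonic'.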

Applegate answered 11/2, 2017 at 18:58 Comment(0)
