First, you need to know that there are many ways to make "rhythm":
- Using the audio and some math to determine the "rhythm" (the information is coded in the engine, the audio is untouched)
- "Pseudo rhythm" (there's no rhythm calculation, just information bundled in the audio that triggers stuff)
- Fake "rhythm" (is there rhythm at all? I heard sound and I'm playing, but I don't think these spawning circles are even following the beat of the song)
We're going with the first way, because it's the chad way, I like it, and it's the one I've already implemented.
Back in Godot 3, you need to synchronize with audio time, not with "game" time.
For this purpose, I follow the same path that fizzd (creator of many rhythm games) did: we create a "conductor", a class/object/thing that will be responsible for keeping the beat. No more, no less.
A general rule is to sync everything with the audio. If you use a timer, a frame-based function or something like that, you'll start losing precision and, eventually, everything will be out of sync.
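To make the drift concrete, here's a minimal sketch (entirely my own, for illustration) of the frame-based clock you want to avoid:

```gdscript
# What NOT to do: a clock built from frame deltas slowly drifts away
# from what the speakers are actually playing.
extends Node

var naive_song_time:float = 0.0

func _process(delta:float) -> void:
	# Frame hitches, pauses and audio buffering all pile up here,
	# while the audio stream keeps its own pace.
	naive_song_time += delta
```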
You got the idea, right? But how do we implement this with code?
Easy. First, we see what we need (read the docs, for duck's sake):

```gdscript
song_position = <AudioStreamPlayer>.get_playback_position() + AudioServer.get_time_since_last_mix()
```
That's the core of our game: our real (or as real as possible) audio time, and it's the one you should be using.
According to the docs, if you want more precision, subtract the output latency:

```gdscript
song_position -= AudioServer.get_output_latency()
```
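Side note, and treat this as my assumption based on the engine docs: querying the output latency every frame can be costly on some platforms, so it's reasonable to cache it once. A minimal sketch, assuming it lives in the conductor script shown later:

```gdscript
# Query the latency once at startup instead of every frame.
onready var cached_latency:float = AudioServer.get_output_latency()

func _get_song_position() -> float:
	return get_playback_position() + AudioServer.get_time_since_last_mix() - cached_latency
```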
Now we need beats:

```gdscript
song_position_in_beats = song_position / seconds_per_beat
_report_beat() # We'll define this one below.
```

I often decide to arbitrarily ignore offbeats, so I floor that value and convert it to an integer with int(floor()). We create a song_position_in_beats variable, determined using song_position and seconds_per_beat, which in turn comes from the BPM:
```gdscript
# Distribute 60s across the defined BPM of your song.
# This is technically a constant, but BPM varies between songs.
seconds_per_beat = 60.0 / song_bpm
```
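For example, with the default 100 BPM: seconds_per_beat = 60.0 / 100.0 = 0.6, so a new beat lands every 0.6 seconds of audio time.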
Sometimes, audio that you didn't make comes with extra "trash" (it's not really trash; it's audio information or a little silence to make an effect for the song), so I add a beats_before_start offset variable.
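A minimal sketch of the declaration (the default is made up; in the full script below this offset simply gets added to the floored beat count, shifting every reported beat):

```gdscript
## Offset for intros, count-ins or leading silence.
export var beats_before_start:int = 0
```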
Let's stick it all together:
```gdscript
extends AudioStreamPlayer
# I prefer to use the main stream player directly, avoiding spawning
# others, since there can be just one conductor.

signal beat(position)
signal measure(position)

## Our song BPM.
## Try not to determine this in engine; look at your audio files for this value.
export var song_bpm:float = 100.0
## Offset for intros, count-ins or leading silence.
export var beats_before_start:int = 0
## Beats per measure (the default is arbitrary).
export var measures:int = 4

# Tracking the beat and song position.
var song_position:float = 0.0
var song_position_in_beats:int = 0
var seconds_per_beat:float = 60.0 / song_bpm
var last_reported_beat:int = 0
var measure:int = 1

func _ready() -> void:
	seconds_per_beat = 60.0 / song_bpm

# We're stuck with game frames no matter how we try to bind to audio time,
# so let's use this loop to determine the audio time and work with it.
func _process(_delta) -> void:
	if playing:
		song_position = get_playback_position() + AudioServer.get_time_since_last_mix()
		song_position -= AudioServer.get_output_latency()
		song_position_in_beats = int(floor(song_position / seconds_per_beat)) + beats_before_start
		_report_beat() # Defined below.
```
Finally, you _report_beat(). Each frame, you check whether the current song position has landed on a new beat (which is probably the moment you want to make sure everyone reacts to). My implementation splits the song into "measures", sections of the song (the compás):
```gdscript
func _report_beat() -> void:
	if last_reported_beat < song_position_in_beats:
		if measure > measures:
			measure = 1
		emit_signal("beat", song_position_in_beats)
		emit_signal("measure", measure)
		last_reported_beat = song_position_in_beats
		measure += 1
```
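To show how the rest of the game consumes this, here's a minimal sketch of a listener node. The autoload name Conductor and the method names are my own inventions, not part of the conductor itself:

```gdscript
# Hypothetical listener: any node can hook into the conductor's signals.
extends Node2D

func _ready() -> void:
	# Assuming the conductor script is autoloaded as "Conductor".
	Conductor.connect("beat", self, "_on_beat")
	Conductor.connect("measure", self, "_on_measure")

func _on_beat(position:int) -> void:
	print("beat ", position) # Spawn notes, pulse sprites, etc.

func _on_measure(position:int) -> void:
	print("measure ", position) # Heavier accents, camera kicks, etc.
```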
But for a DDR-like game it would be as simple as:
```gdscript
func _report_beat() -> void:
	if last_reported_beat < song_position_in_beats:
		emit_signal("beat", song_position_in_beats)
		last_reported_beat = song_position_in_beats
```
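And if you want the DDR part, judging player input against the beat, here's a rough sketch that could live in the same conductor script. The thresholds are made-up numbers, not tuned values:

```gdscript
# Hypothetical hit judgment: distance (in seconds) from the nearest beat.
func _unhandled_input(event:InputEvent) -> void:
	if event.is_action_pressed("ui_accept"):
		var beat_float:float = song_position / seconds_per_beat
		var offset:float = abs(beat_float - round(beat_float)) * seconds_per_beat
		if offset < 0.05:
			print("Perfect!")
		elif offset < 0.12:
			print("Good")
		else:
			print("Miss")
```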