The meaning of period in ALSA

I'm using ALSA for an audio application on Linux. I found great docs explaining how to use it: 1 and this one. However, I have trouble understanding this part of the setup:

/* Set number of periods. Periods used to be called fragments. */
if (snd_pcm_hw_params_set_periods(pcm_handle, hwparams, periods, 0) < 0) {
  fprintf(stderr, "Error setting periods.\n");
  return(-1);
}

What does it mean to set a number of periods when I'm using playback mode? And:

/* Set buffer size (in frames). The resulting latency is given by */
/* latency = periodsize * periods / (rate * bytes_per_frame)     */
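/* The shift by 2 converts bytes to frames: periodsize*periods is in   */
/* bytes, and this example uses 16-bit stereo, i.e. 4 bytes per frame. */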
if (snd_pcm_hw_params_set_buffer_size(pcm_handle, hwparams, (periodsize * periods)>>2) < 0) {
  fprintf(stderr, "Error setting buffersize.\n");
  return(-1);
}

And the same question about the latency: how should I understand it?

Funiculus answered 4/6, 2014 at 14:48

I assume you've read and understood this section of linux-journal. You may also find that this blog clarifies things with respect to period size selection (called fragments in the blog) in the context of ALSA. To quote:

You shouldn't misuse the fragments logic of sound devices. It's like this:

The latency is defined by the buffer size.
The wakeup interval is defined by the fragment size.

The buffer fill level will oscillate between 'full buffer' and 'full buffer minus 1x fragment size minus OS scheduling latency'. Setting smaller fragment sizes will increase the CPU load and decrease battery time since you force the CPU to wake up more often. OTOH it increases drop out safety, since you fill up playback buffer earlier. Choosing the fragment size is hence something which you should do balancing out your needs between power consumption and drop-out safety. With modern processors and a good OS scheduler like the Linux one setting the fragment size to anything other than half the buffer size does not make much sense.

... (Oh, ALSA uses the term 'period' for what I call 'fragment' above. It's synonymous)
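In ALSA terms, a minimal sketch of that advice (one period equal to half the buffer) could use the *_near setters; this assumes pcm_handle and hwparams have already been opened and initialized as in the howto, and the driver is free to adjust both values:

snd_pcm_uframes_t buffer_frames = 4096;              /* desired total buffer  */
snd_pcm_uframes_t period_frames = buffer_frames / 2; /* wake up twice per buffer */
int dir = 0;

/* The *_near variants pick the closest configuration the hardware supports. */
snd_pcm_hw_params_set_buffer_size_near(pcm_handle, hwparams, &buffer_frames);
snd_pcm_hw_params_set_period_size_near(pcm_handle, hwparams, &period_frames, &dir);

/* Report what was actually granted, since the driver may have adjusted it. */
fprintf(stderr, "buffer = %lu frames, period = %lu frames\n",
        (unsigned long)buffer_frames, (unsigned long)period_frames);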

So essentially, you would typically set periods to 2 (as was done in the howto you referenced). Then periodsize * periods is your total buffer size in bytes. Finally, the latency is the delay induced by buffering that many samples, and can be computed by dividing the buffer size by the rate at which samples are played back (i.e., according to the formula latency = periodsize * periods / (rate * bytes_per_frame) in the code comments).

For example, the parameters from the howto:

  • periods = 2
  • periodsize = 8192 bytes
  • rate = 44100Hz
  • 16-bit stereo data (4 bytes per frame)

correspond to a total buffer size of periods * periodsize = 2 * 8192 = 16384 bytes, and a latency of 16384 / (44100 * 4) ≈ 0.093 seconds.
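If you want to sanity-check that arithmetic, here is a tiny self-contained sketch of the same computation (the values are just the howto's example parameters):

#include <stdio.h>

int main(void) {
    unsigned int periods = 2;
    unsigned int periodsize = 8192;    /* bytes */
    unsigned int rate = 44100;         /* Hz */
    unsigned int bytes_per_frame = 4;  /* 16-bit stereo */

    double latency = (double)(periodsize * periods) / (rate * bytes_per_frame);
    printf("buffer = %u bytes, latency = %.3f s\n", periodsize * periods, latency);
    return 0;  /* prints: buffer = 16384 bytes, latency = 0.093 s */
}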

Note also that your hardware may have size limitations on the supported period size (see this troubleshooting guide).
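To see what your hardware actually allows, you could query the supported range, e.g. (a sketch, assuming hwparams has been filled in with snd_pcm_hw_params_any() for your device):

snd_pcm_uframes_t min_frames, max_frames;
int dir = 0;

/* Query the period-size range supported by the hardware (in frames). */
snd_pcm_hw_params_get_period_size_min(hwparams, &min_frames, &dir);
snd_pcm_hw_params_get_period_size_max(hwparams, &max_frames, &dir);
fprintf(stderr, "supported period size: %lu .. %lu frames\n",
        (unsigned long)min_frames, (unsigned long)max_frames);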

Elidiaelie answered 5/6, 2014 at 0:23

When the application tries to write samples into the buffer and the buffer is already full, the process goes to sleep. It gets woken up by the hardware through an interrupt; this interrupt is raised at the end of each period.

There should be at least two periods per buffer; otherwise, the buffer is already empty when a wakeup happens, which results in an underrun.

Increasing the number of periods (i.e., reducing the period size) increases the safety margin against underruns caused by scheduling or processing delays.

The latency is just proportional to the buffer size: when you completely fill the buffer, the last sample written is played by the hardware only after all the other samples have been played.
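To make that concrete, here is a sketch of a blocking playback loop; pcm_handle is assumed to be configured as above, and fill_next_period() is a hypothetical function producing the next period of samples:

#include <alsa/asoundlib.h>

#define PERIOD_FRAMES 2048               /* e.g. 8192 bytes / 4 bytes per frame */

extern int fill_next_period(short *buf); /* hypothetical data source */

void playback_loop(snd_pcm_t *pcm_handle) {
    short buf[PERIOD_FRAMES * 2];        /* 16-bit stereo, interleaved */
    while (fill_next_period(buf)) {
        /* Blocks (sleeps) while the buffer is full; a period interrupt wakes it. */
        snd_pcm_sframes_t n = snd_pcm_writei(pcm_handle, buf, PERIOD_FRAMES);
        if (n == -EPIPE) {               /* underrun: the buffer ran dry */
            snd_pcm_prepare(pcm_handle); /* recover and keep writing */
        } else if (n < 0) {
            fprintf(stderr, "write error: %s\n", snd_strerror((int)n));
            break;
        }
    }
}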

Incorporating answered 4/6, 2014 at 20:42
