I assume you've read and understood this section of the Linux Journal article. You may also find that this blog post clarifies things with respect to period size selection ('fragment' in the blog's terminology) in the context of ALSA. To quote:
> You shouldn't misuse the fragments logic of sound devices. It's like
> this:
>
> The latency is defined by the buffer size.
> The wakeup interval is defined by the fragment size.
>
> The buffer fill level will oscillate between 'full buffer' and 'full
> buffer minus 1x fragment size minus OS scheduling latency'. Setting
> smaller fragment sizes will increase the CPU load and decrease battery
> time since you force the CPU to wake up more often. OTOH it increases
> drop out safety, since you fill up playback buffer earlier. Choosing
> the fragment size is hence something which you should do balancing out
> your needs between power consumption and drop-out safety. With modern
> processors and a good OS scheduler like the Linux one setting the
> fragment size to anything other than half the buffer size does not
> make much sense.
>
> ...
>
> (Oh, ALSA uses the term 'period' for what I call 'fragment'
> above. It's synonymous)
So essentially, you would typically set periods to 2 (as was done in the howto you referenced). Then periodsize * periods is your total buffer size in bytes. Finally, the latency is the delay induced by buffering that many samples, and can be computed by dividing the buffer size by the rate at which samples are played back, i.e. according to the formula latency = periodsize * periods / (rate * bytes_per_frame) in the code comments.
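To make that concrete, here is a minimal sketch of the corresponding hw_params setup. This is my own illustrative code, not taken from the howto, and it assumes the `"default"` playback device. Note one pitfall: alsa-lib expresses buffer and period sizes in *frames*, so the howto's 8192-byte period is 2048 frames at 4 bytes per frame.

```c
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 44100;
    /* frames, not bytes: 8192 bytes / 4 bytes-per-frame = 2048 frames */
    snd_pcm_uframes_t period_size = 2048;
    snd_pcm_uframes_t buffer_size = 4096;  /* two periods per buffer */
    int dir = 0;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE); /* 16 bits */
    snd_pcm_hw_params_set_channels(pcm, hw, 2);                   /* stereo */
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, &dir);

    /* request a period of half the buffer, as the quote recommends */
    snd_pcm_hw_params_set_buffer_size_near(pcm, hw, &buffer_size);
    snd_pcm_hw_params_set_period_size_near(pcm, hw, &period_size, &dir);

    if (snd_pcm_hw_params(pcm, hw) < 0) {
        snd_pcm_close(pcm);
        return 1;
    }

    /* ... write audio with snd_pcm_writei() ... */
    snd_pcm_close(pcm);
    return 0;
}
```

The `_near` variants ask the driver for the closest supported value, so after this call the actual sizes may differ slightly from what was requested.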
For example, the parameters from the howto:
- periods = 2
- periodsize = 8192 bytes
- rate = 44100Hz
- 16 bits stereo data (4 bytes per frame)
correspond to a total buffer size of periods * periodsize = 2 * 8192 = 16384
bytes, and a latency of 16384 / (44100 * 4) ≈ 0.093 seconds.
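As a sanity check, here is the same arithmetic as a small standalone C program (the variable names are mine, not from the howto):

```c
#include <stdio.h>

int main(void)
{
    /* parameters from the howto */
    const unsigned int periods         = 2;
    const unsigned int periodsize      = 8192;   /* bytes */
    const unsigned int rate            = 44100;  /* Hz */
    const unsigned int bytes_per_frame = 4;      /* 16-bit stereo */

    const unsigned int buffer_size = periods * periodsize;  /* 16384 bytes */
    const double latency = (double)buffer_size / (rate * bytes_per_frame);

    /* prints: buffer = 16384 bytes, latency = 0.093 s */
    printf("buffer = %u bytes, latency = %.3f s\n", buffer_size, latency);
    return 0;
}
```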
Note also that your hardware may impose limits on the supported period sizes (see this troubleshooting guide).
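If you'd rather discover those limits at runtime than from documentation, alsa-lib can report the supported range from the hw_params configuration space. A minimal sketch, again assuming the `"default"` playback device:

```c
#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    snd_pcm_uframes_t min_ps, max_ps;
    int dir = 0;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);  /* fill with the full configuration space */

    /* the driver's supported period-size range, in frames */
    snd_pcm_hw_params_get_period_size_min(hw, &min_ps, &dir);
    snd_pcm_hw_params_get_period_size_max(hw, &max_ps, &dir);
    printf("period size: %lu .. %lu frames\n",
           (unsigned long)min_ps, (unsigned long)max_ps);

    snd_pcm_close(pcm);
    return 0;
}
```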