Only an "in general" type of answer is possible as far as the optimization bit goes since an implementation is not required to work in some particular way as long as it works within the conditions laid down by the OpenAL specification. It is nevertheless likely that all implementations more or less work similarly.
In general, `alSourcei`/`alSourcef` involves at least calling a function like `GetContextSuspended`, which means an access to thread-local storage and entering/leaving a critical section, followed by a `switch` statement (it also means a jump through a function pointer, i.e. to a possibly uncached address in a possibly out-of-core page, and likely wasting one TLB entry). `alSourcei` further needs to do a thread-safe increment of a reference count and allocate/append a new node to the source's buffer list, which is on the order of magnitude of calling `malloc` at least once.
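To make that concrete, below is a heavily simplified sketch of what such an entry path might look like. Apart from the `GetContextSuspended` idea mentioned above, everything in it (the type layouts, helper names, fixed-size tables) is an assumption made up for illustration; no real implementation looks exactly like this.

```c
/* Hypothetical sketch only -- not any real implementation's code. */
#include <AL/al.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

#define MAX_SOURCES 256
#define MAX_BUFFERS 256

typedef struct Buffer {
    atomic_int refcount;                 /* bumped thread-safely on attach */
    /* ... sample data ... */
} Buffer;

typedef struct BufferNode {              /* one node per buffer in the queue */
    Buffer            *buffer;
    struct BufferNode *next;
} BufferNode;

typedef struct Source {
    BufferNode *queue_head, *queue_tail;
    float       gain, pitch;
    int         needs_update;            /* read by the mixer next time slice */
} Source;

typedef struct Context {
    pthread_mutex_t lock;                /* the critical section */
    ALenum          last_error;
    Source          sources[MAX_SOURCES];   /* toy, fixed-size tables */
    Buffer          buffers[MAX_BUFFERS];
} Context;

/* Stand-in for GetContextSuspended(): fetch the current context from
 * thread-local storage, then enter its critical section. */
static _Thread_local Context *tls_context;

static Context *get_context_suspended(void)
{
    Context *ctx = tls_context;          /* TLS read */
    if (ctx)
        pthread_mutex_lock(&ctx->lock);  /* enter critical section */
    return ctx;
}

static void release_context(Context *ctx)
{
    pthread_mutex_unlock(&ctx->lock);    /* leave critical section */
}

/* Roughly what alSourcei(source, AL_BUFFER, value) boils down to. */
static void sketch_alSourcei(ALuint source, ALenum param, ALint value)
{
    Context *ctx = get_context_suspended();
    if (!ctx)
        return;

    Source *src = &ctx->sources[source % MAX_SOURCES];   /* toy lookup */

    switch (param) {                     /* the switch mentioned above */
    case AL_BUFFER: {
        Buffer     *buf  = &ctx->buffers[(ALuint)value % MAX_BUFFERS];
        BufferNode *node = malloc(sizeof *node);          /* malloc-class cost */
        if (!node) { ctx->last_error = AL_OUT_OF_MEMORY; break; }

        atomic_fetch_add(&buf->refcount, 1);     /* thread-safe increment */
        node->buffer = buf;
        node->next   = NULL;
        if (src->queue_tail) src->queue_tail->next = node;   /* append node */
        else                 src->queue_head       = node;
        src->queue_tail   = node;
        src->needs_update = 1;
        break;
    }
    default:
        ctx->last_error = AL_INVALID_ENUM;
        break;
    }

    release_context(ctx);
}
```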
Setting `AL_GAIN` and `AL_PITCH` per se is an almost free operation. It sets a value and marks the source as updated, so the context's mixer thread knows something has changed when it mixes the next time slice. In the worst case, if the parameters are illegal, `alSourcef` needs to set the last error code.
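Continuing the same hypothetical sketch, the `alSourcef` path for `AL_GAIN`/`AL_PITCH` would amount to little more than a range check, a store, and setting the update flag. This reuses the made-up `Context`/`Source` types and `get_context_suspended()`/`release_context()` helpers from the block above:

```c
/* Companion to the sketch above (same hypothetical types and helpers). */
static void sketch_alSourcef(ALuint source, ALenum param, ALfloat value)
{
    Context *ctx = get_context_suspended();
    if (!ctx)
        return;

    Source *src = &ctx->sources[source % MAX_SOURCES];

    switch (param) {
    case AL_GAIN:
        if (value < 0.0f) { ctx->last_error = AL_INVALID_VALUE; break; }
        src->gain         = value;   /* store the new value ...            */
        src->needs_update = 1;       /* ... and flag the source as changed */
        break;
    case AL_PITCH:
        if (value <= 0.0f) { ctx->last_error = AL_INVALID_VALUE; break; }
        src->pitch        = value;
        src->needs_update = 1;
        break;
    default:
        ctx->last_error = AL_INVALID_ENUM;   /* worst case: set last error */
        break;
    }

    release_context(ctx);
}
```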
In that sense, removing the calls to `alSourcef` will of course avoid some unnecessary work, and since you say there is no chance the values could ever be anything but 1.0, there is actually no reason to touch them at all: 1.0 is the default value per the specification.

But... if you expect a noticeable speed-up from removing these calls, you will likely be disappointed (unless there are several hundred thousand of them per second).
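That said, if you ever want to guard against the values changing in the future without paying for redundant calls, a simple caller-side pattern is to cache the last value you set and only call into OpenAL when it actually differs. The wrapper below is hypothetical (the struct and function names are mine); initialize the cached fields to 1.0, the spec defaults:

```c
#include <AL/al.h>

/* Hypothetical caller-side cache: only touch AL state when the value really
 * changes, so the common "always 1.0" case costs a compare and nothing else. */
typedef struct {
    ALuint  source;       /* AL source name from alGenSources()      */
    ALfloat last_gain;    /* initialize to 1.0f, the spec default    */
    ALfloat last_pitch;   /* initialize to 1.0f, the spec default    */
} CachedSource;

static void cached_set_gain(CachedSource *cs, ALfloat gain)
{
    if (gain != cs->last_gain) {
        alSourcef(cs->source, AL_GAIN, gain);
        cs->last_gain = gain;
    }
}

static void cached_set_pitch(CachedSource *cs, ALfloat pitch)
{
    if (pitch != cs->last_pitch) {
        alSourcef(cs->source, AL_PITCH, pitch);
        cs->last_pitch = pitch;
    }
}
```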