VBA-M Forums

Full Version: Help understanding sound system and timing.
I'm trying to understand the code of VBA, but I can't figure out some things related to sound and timing. The code I debugged was from the original VBA, but given the similarity I hope I can find help over here.
I guess that all the 'real' timing is going on in the callback function soundCallback() and the systemWriteDataToSoundBuffer() function. I figured this out because disabling sound makes the emulator go 'full speed' (too fast). There are several semaphores synchronizing access to buffers between the two functions. Moreover, the thread executing the emulator (not the audio thread from SDL) sometimes waits up to a second when executing the following code in systemWriteDataToSoundBuffer():

/* wait for buffer to be dumped by soundCallback() */

So is the sound code really timing the emulator so it can run at 'real' speed? Why does it run too fast when sound is disabled? Isn't the video code generally responsible for this timing through frame skipping?
I mean, it's easy to see how to get the program to run at the correct speed through frame skipping and the 60 fps target, but how do you achieve this through the sound?

I hope someone can make sense of this mess of a message and say the words that will clear up my doubts.
It might not necessarily be that the sound code is timing the emulator.
It might be that the operating system gives the audio thread a higher priority than the graphics thread. I know this was done on other platforms, like the N64.

In that case the graphics rendering still produces a rate in video frames per second, but the operating system itself is not timed by that measure. The number of vertical interrupts per second when drawing the screen in an emulator was untimed by default, so N64 emulators used the audio interface (sound plugins) to synchronize the audio to the video and keep the speed under control.

So I can't really tell you anything about SDL or what might be done internally in the sound threading, but maybe it's just a choice made by the operating system here, too? After all, sound plays constantly in the background, while there aren't always video updates to draw.
Well, from what I've studied from the code here is my analysis:
There are two threads executing:
Thread N1: This is the 'normal' thread, the one that indeed emulates the machine.
Thread N2: This one is created by Thread N1 when SDL_OpenAudio() is called. It writes the buffer to the hardware and calls the callback function (the function is soundCallback()).
I debugged the code through this function and, apparently, no thread has a higher priority; I'm not sure whether this would be an option passed by the user with pthreads, or whether one of the threads simply gets more ticks to execute. In any case, when debugging, both threads alternate execution and there appears to be no 'privileged' thread.
Three semaphores are created by Thread N1 (this is before creating Thread N2):
sdlBufferLock = SDL_CreateSemaphore(1);
sdlBufferFull = SDL_CreateSemaphore(0);
sdlBufferEmpty = SDL_CreateSemaphore(1);

Thread N2 is created and calls soundCallback(), which starts with SDL_SemWait(sdlBufferFull). Because this semaphore was initialized to 0, Thread N2 blocks until the buffer is filled (until Thread N1 calls SDL_SemPost(sdlBufferFull)).
Now, with Thread N2 sleeping, Thread N1 uses a counter to determine when the buffer must be written. When the sample rate is 22 kHz, the counter is 48. The counter is clocked by the main CPU clock; that is, it is decremented by the number of ticks of each executed instruction. When the counter reaches 0 or below, only one sample is written to the buffer, so the buffer is filled only after many of these loops.
When the buffer is filled, Thread N1 executes SDL_SemWait(sdlBufferEmpty) (Thread N2 is still sleeping):
Because sdlBufferEmpty was initialized to 1, the thread doesn't sleep. Next it fills the last portion of the buffer, and then:
SDL_SemPost(sdlBufferFull); // Thread N2 is woken up.
Now, Thread N1 executes again SDL_SemWait(sdlBufferEmpty), so it goes to sleep.
Now Thread N2 is the only one executing. It does the following:
Copies the buffer to the stream set up by SDL that is going to be written to the hardware. It then executes SDL_SemPost(sdlBufferEmpty), so Thread N1 is woken up.
Next, Thread N2 returns from the callback function, writes the buffer to the hardware, and calls the callback function again.

One of my conjectures was that the timing we are discussing is achieved when Thread N1 has to wait until Thread N2 dumps the buffer to the hardware (after all, Thread N1 has to wait for Thread N2, and Thread N2 does a write() to the sound hardware, which I guess takes some time). I think this because I noticed that when the basic loop just described (the semaphore synchronization between the threads) is not executed, the emulator runs way too fast, as if there were no timing at all. So I suppose the timing is done here, and not through frame skipping.
But if this is so, I can't figure out how the correct timing is achieved this way. I mean, the sound counter set to 48, and the waiting on a write() system call: how are these related to the real 60-frames-per-second Game Boy timing? I must also say that when I compiled and ran VisualBoy with this apparent timing it was a bit too fast, so I can't figure out how the task of 'going the right speed' is accomplished.

If anyone has ideas, I'd really appreciate them.
You're looking at the output rate rather than the internal DAC rate, which is where the synchronisation actually happens.
So how does this sync work?