I'm writing a graphic equalizer in python and I want to use pygame for the video and audio output.
The video display of the moving bars works great, now that I have a correct way to read samples and decode them.
The problem now is: they're way out of sync with the audio playing in the background!
At the moment, the code that draws the bars basically looks like this:
    # loop as long as we can get a slice of the audio file for the FFT
    for n_frame in range(0, len(framebuffer), fftframesz):
        clock.tick_busy_loop(t_loop)
        # t_loop was defined as fftframesz over total_frame_count
        # which means in words: t_loop is the amount of FFTs per second
        # - or the cycles of THIS loop

        # grab the frames and perform the fft here
        ...numpy.fft.fft...

        # set the new bar heights and draw them
        for bar in bars:
            bar.update()

        # if playback has not started yet, make it so
        if not pygame.mixer.get_busy():
            musicfile.play()
The way I'm thinking, this should be enough to keep sound and video in sync: since we know how many cycles this loop should run per second, we can pass that rate to clock.tick_busy_loop to cap it. This way we make sure that the video is not faster than the sound.
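To spell out the rate reasoning with concrete numbers (sample_rate and fftframesz here are made-up illustrative values, not taken from my actual setup):

```python
# Illustrative sketch of the rate reasoning above -- sample_rate and
# fftframesz are example values, not from my actual code.
sample_rate = 44100   # audio frames per second
fftframesz = 1024     # audio frames consumed by one FFT window

# cycles of THIS loop per second
#   = audio frames per second / frames consumed per cycle
t_loop = sample_rate / fftframesz

# pygame.time.Clock.tick_busy_loop(framerate) caps the loop at `framerate`
# iterations per second, so it expects exactly this kind of rate:
#     clock.tick_busy_loop(t_loop)
print(round(t_loop, 2))  # about 43 FFT windows per second
```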
But to me it seems it is the other way round, i.e. the video is slower than the sound. I already did some timing, but to no avail.
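For reference, roughly the kind of timing check I mean (a stripped-down sketch: sample_rate and fftframesz are assumed example values, and the FFT/drawing work is stubbed out):

```python
import time

# Sketch of measuring drift between the loop and the audio timeline.
# sample_rate and fftframesz are illustrative assumptions.
sample_rate = 44100
fftframesz = 1024

start = time.perf_counter()
drift = 0.0
for n_frame in range(0, 10 * fftframesz, fftframesz):
    # ... the real loop would do the FFT and redraw the bars here ...
    audio_time = n_frame / sample_rate        # where the sound *should* be
    video_time = time.perf_counter() - start  # where this loop actually is
    drift = video_time - audio_time           # positive => video lags audio
```

Printing drift each iteration in the real loop is the check I mean.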
Is there perhaps an easier, more elegant way to make sure that image and audio stay in sync?
Any help is greatly appreciated!
Last edited by n0stradamus (2012-10-01 18:36:53)