Same old topic, new person. The GIL sucks. It makes programming easy, but it makes true parallel processing impossible. Audio sequencing plugin hosts use a different thread for each track, allowing their DSP to scale with the number of available processors. While Python has performed beautifully for control-rate MIDI processing in our commercial plugin (http://www.soundsonline.com/) running in RT threads, the presence of the GIL causes brief CPU spikes when one or more threads take too much time to run their scripts. The lack of performance in a single thread is not a problem, since the audio event will still safely occur later; it's the fact that that one thread drags down all the other threads that are running perfectly fine that sucks.
The Problem
Python has otherwise proven to be a fantastic platform for our scripting engine, and it would be even more fantastic if the GIL didn't get in the way. It would be a shame for such a great scripting engine to be blocked by such a small hurdle (although still a major design decision). A per-interpreter lock would solve the problem for us, since each track is always independent from the others and tracks never share objects.
One solution is to try to patch the Python source so that it can actually run completely separate interpreters. This means that every object would belong to a single interpreter, and the accompanying functions would need to be modified to take an interpreter as an argument. Loading extension modules would have to be disabled, since they generally use static data. As of right now I don't know how hard this would be, but if nothing else I'd get a good code read out of it.
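For context, CPython already exposes a sub-interpreter API to embedders, so a sketch of what per-track interpreters look like today might be something like the following. The catch is that all sub-interpreters still share the one GIL, which is exactly the part a per-interpreter-lock patch would have to change.

```c
/* Minimal sketch of CPython's existing sub-interpreter API for embedders.
 * Each track could get its own interpreter state, but note that all
 * sub-interpreters still share the single GIL today. */
#include <Python.h>

int main(void)
{
    Py_Initialize();                          /* main interpreter, holds the GIL */

    PyThreadState *main_ts = PyThreadState_Get();

    /* Create a second, mostly isolated interpreter (own modules, own builtins). */
    PyThreadState *track_ts = Py_NewInterpreter();

    /* Runs in the sub-interpreter, which is now the current thread state. */
    PyRun_SimpleString("print('running in the track interpreter')");

    /* Tear down the sub-interpreter and switch back to the main one. */
    Py_EndInterpreter(track_ts);
    PyThreadState_Swap(main_ts);

    Py_Finalize();
    return 0;
}
```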
Another idea I was playing with was to exploit the fact that each separately loaded copy of a DLL gets its own static data, and use that to maintain separate images of the Python library and therefore separate GILs. If there were a way to copy the DLL and load each copy as its own instance, this would be possible. I wouldn't mind the extra footprint on the heap if I could get rid of those CPU spikes.
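Roughly, on Windows that could look like the sketch below. The per-track DLL names and the tiny function table are placeholders; a real version would have to resolve many more entry points and sort out the standard library and extension modules.

```c
/* Rough sketch of the "copy the DLL" idea on Windows: each copy of the
 * Python DLL loaded under a different file name gets its own static data,
 * and therefore its own GIL. The DLL paths and this small function table
 * are placeholders only. */
#include <windows.h>

typedef void (*Py_Initialize_t)(void);
typedef int  (*PyRun_SimpleString_t)(const char *);
typedef void (*Py_Finalize_t)(void);

typedef struct {
    HMODULE              dll;
    Py_Initialize_t      initialize;
    PyRun_SimpleString_t run_simple_string;
    Py_Finalize_t        finalize;
} TrackInterpreter;

/* Load one private copy of the Python runtime from a pre-copied DLL,
 * e.g. "python_track1.dll", "python_track2.dll", ... */
static int track_interpreter_open(TrackInterpreter *ti, const char *dll_path)
{
    ti->dll = LoadLibraryA(dll_path);
    if (!ti->dll)
        return -1;

    ti->initialize        = (Py_Initialize_t)GetProcAddress(ti->dll, "Py_Initialize");
    ti->run_simple_string = (PyRun_SimpleString_t)GetProcAddress(ti->dll, "PyRun_SimpleString");
    ti->finalize          = (Py_Finalize_t)GetProcAddress(ti->dll, "Py_Finalize");
    if (!ti->initialize || !ti->run_simple_string || !ti->finalize)
        return -1;

    ti->initialize();   /* this copy has its own interpreter state and its own GIL */
    return 0;
}
```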
The properly Pythonic answer to this problem is to push each engine into its own process. This would imply creating a daemon process for every instance of our engine. I don't know if this is a very nice thing to do to a plugin host, but it would fix the problem if we could get around all the shared-memory and IPC overhead. But ummm, I think I'll read the Python code for now.
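As a rough illustration of what that could look like on POSIX (the pipe protocol and error handling here are placeholders, and the shared audio buffers are left out entirely), each engine instance would fork a worker that embeds its own interpreter and receives scripts over a pipe:

```c
/* Bare-bones sketch of the process-per-engine idea on POSIX: the worker
 * process embeds its own Python interpreter (and so its own GIL), and the
 * host sends it a script over a pipe. */
#include <Python.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0)
        return 1;

    pid_t pid = fork();
    if (pid == 0) {
        /* Worker: its own process, its own interpreter, its own GIL. */
        close(fds[1]);
        Py_Initialize();

        char script[4096];
        ssize_t n = read(fds[0], script, sizeof(script) - 1);
        if (n > 0) {
            script[n] = '\0';
            PyRun_SimpleString(script);
        }

        Py_Finalize();
        _exit(0);
    }

    /* Host: push a control-rate script to the worker and wait for it. */
    close(fds[0]);
    const char *script = "print('hello from the engine process')\n";
    write(fds[1], script, strlen(script));
    close(fds[1]);
    waitpid(pid, NULL, 0);
    return 0;
}
```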
The Future
At present, making Python available to the audio world is what interests me most. It would be great if we could use the language for control-rate audio scripting, graphical interfaces, and so on. The biggest problem I've had with the current implementation is using it across multiple plugin instances, and I can see problems popping up whenever complex intra-process use is involved.
That's two major Python no-nos: single-process scalability, and use in high-priority threads. I believe, though, that processors have already become fast enough to put the old "Python is bad for real-time use" performance myth to rest, and as we see more and more cores, true multi-threading becomes a very, very good thing to have.