
Dramatic Slow Down Using Multiprocess And Numpy In Python

I wrote a Python implementation of the Q-learning algorithm, and since this algorithm has random output I have to run it multiple times, so I use the multiprocessing module. The structure of the c

Solution 1:

When you use a multiprocessing pool, all the arguments and results get sent through pickle. This can be very processor-intensive and time-consuming. That could be the source of your problem, especially if your arguments and/or results are large. In those cases, Python may spend more time pickling and unpickling the data than it does running computations.
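To make the overhead concrete, here is a minimal sketch (the function name `work` and the array sizes are made up for illustration). The worker function itself is cheap, so most of the wall-clock time in the pool version goes to pickling the large array out to each worker process and unpickling the result back:

```python
import time
import numpy as np
from multiprocessing import Pool

def work(arr):
    # A cheap computation: the dominant cost in the pool version
    # is shipping `arr` to the worker and back, both via pickle.
    return arr.sum()

if __name__ == "__main__":
    big = np.random.rand(2000, 2000)  # roughly 30 MB per argument

    start = time.perf_counter()
    with Pool(4) as pool:
        pool_results = pool.map(work, [big] * 8)
    print("pool.map:", time.perf_counter() - start)

    start = time.perf_counter()
    serial_results = [work(big) for _ in range(8)]
    print("serial:  ", time.perf_counter() - start)
```

On data this large with a function this cheap, the serial loop typically wins; the pool only pays off when each call does enough real computation to amortize the serialization cost.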

However, numpy releases the global interpreter lock (GIL) during many of its computations, so if your work is numpy-intensive, you may be able to speed it up by using threading instead of multiprocessing. Threads share memory, so that avoids the pickling step entirely. See here for more details: https://stackoverflow.com/a/38775513/3830997
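Here is a minimal sketch of the threading approach, assuming the hot path is a GIL-releasing NumPy operation such as matrix multiplication (the array sizes and worker count are illustrative; the actual speedup also depends on your BLAS build, which may already use multiple threads internally):

```python
import time
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def work(arr):
    # Matrix multiply runs in compiled BLAS code and releases the
    # GIL, so several of these calls can execute on separate cores.
    return arr @ arr

if __name__ == "__main__":
    arrays = [np.random.rand(1500, 1500) for _ in range(4)]

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=4) as ex:
        results = list(ex.map(work, arrays))
    print("threaded:", time.perf_counter() - start)
```

Because the arrays never leave the process, there is no serialization at all; the trade-off is that any pure-Python portions of the work are still serialized by the GIL.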
