Run function in parallel

Problem description

I have a function that takes a "point" as an input and through an algorithm "minimizes" it (basically an optimizer of sorts).

Now I have a list of points to be minimized, but doing it serially in a loop wastes a lot of time, since the program just sits waiting for an external program to finish, so I need a way to send more than one point at a time.

The way I've been doing it so far has been to move the optimization function into a separate file and call it from the main program via a system call, i.e. something like os.system('python3 filename.py '), and then check the output folder every few seconds to see whether a run has finished, so that it can be removed from the queue and the next point can start.
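Concretely, the loop I have looks roughly like this sketch (an illustration only, not my real code: the output-file naming, the way the point is passed to filename.py, and the trailing '&' that keeps os.system from blocking are all placeholders):

import os
import time

MAX_QUEUE_LENGTH = 4
running = []  # points whose external run has not produced a result file yet

for name in point_names:  # placeholder for however the points are identified
    # wait until one of the running jobs drops its result file into the output folder
    while len(running) >= MAX_QUEUE_LENGTH:
        time.sleep(5)
        running = [n for n in running if not os.path.exists(f'output/{n}.done')]
    # fire off the external optimizer; the trailing '&' (Linux/macOS) makes os.system return immediately
    os.system(f'python3 filename.py {name} &')
    running.append(name)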

This technically works but it's not very elegant to say the least, so I was wondering if there's a better way to do this.

To summarize, I want something like this:

for point in point_list:
    while len(queue) >= MAX_QUEUE_LENGTH:
        wait until a space opens up
    add point to queue

This way, if MAX_QUEUE_LENGTH = 4, there should always be 4 points running in parallel, and when one finishes the next point from the list would start.

Tags: python

Solution


See https://docs.python.org/3.7/library/multiprocessing.html

Something like this (Point should be defined in your code):

from multiprocessing import Pool

def minimize_point(point):
    # run the minimization for this point here and return the result
    ...

if __name__ == '__main__':
    # a pool of five worker processes; map() farms the points out in parallel
    pool = Pool(5)
    print(pool.map(minimize_point, [Point(1, 3), Point(2, 5), Point(3, 5)]))
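Note that Pool(5) starts five worker processes, so five points run at a time, and pool.map blocks until every point is done, returning the results in input order; to match MAX_QUEUE_LENGTH = 4 from the question, pass Pool(4) instead. If you would rather handle each result as soon as its worker finishes, one possible variant (assuming minimize_point and the Point class as above) is imap_unordered:

from multiprocessing import Pool

if __name__ == '__main__':
    points = [Point(1, 3), Point(2, 5), Point(3, 5)]  # Point comes from your own code, as above
    # four workers, matching MAX_QUEUE_LENGTH = 4 from the question;
    # imap_unordered yields each result as soon as its worker finishes
    with Pool(4) as pool:
        for result in pool.imap_unordered(minimize_point, points):
            print(result)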
