How to start two functions at the same time and only wait for the faster one?

Problem description

I have working code, but I'm not really sure this is the right way to do it. I have two functions, both making an API request that can take anywhere between 1 and 5 seconds, but both are designed to return the same output. I want to run the two simultaneously, and once the quicker one finishes its job, terminate the other and drop whatever it would have returned.

from multiprocessing import Process
import time

# note the trailing commas: args must be a tuple, so (name,) rather than (name)
p1 = Process(target=search1, args=(name,))
p2 = Process(target=search2, args=(name,))

if __name__ == '__main__':
    p1.start()
    p2.start()

    # poll until one of the two finishes, then terminate the other
    while p1.is_alive() and p2.is_alive():
        time.sleep(0.2)

        if not p1.is_alive():
            p2.terminate()

        if not p2.is_alive():
            p1.terminate()

If I do not wait a little (0.2 seconds in this case), sometimes both return when they take roughly the same time. I have tested this many times and it works, but is this the right way to do it? Are there any issues that can surface with this approach?
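For comparison, here is a sketch of the same first-result-wins idea without the polling loop: each process puts its result on a multiprocessing.Queue and the parent blocks on the first get() before terminating both workers. The sleep stands in for somefunction() from the question, and "some name" is a placeholder argument:

from multiprocessing import Process, Queue
import random
import time

def search(q, name):
    # stand-in for somefunction() from the question: a 1-5 second API request
    time.sleep(random.uniform(1, 5))
    q.put("result for %s" % name)

if __name__ == '__main__':
    q = Queue()
    p1 = Process(target=search, args=(q, "some name"))
    p2 = Process(target=search, args=(q, "some name"))
    p1.start()
    p2.start()

    result = q.get()   # blocks until the faster process delivers
    p1.terminate()     # terminating an already-finished process is a no-op
    p2.terminate()
    print(result)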

--- Edit, following ti7's suggestion

After trying ti7's suggestion, it now works with threads rather than with processes.

def search1(Q_result, name):
    result = somefunction()
    Q_result.put(result)

def search2(Q_result, name):
    time.sleep(10)  # simulate the slower request
    result = somefunction()
    Q_result.put(result)


import Queue as queue
import threading

Q_result = queue.Queue()  # create a Queue to hold the result(s)

if __name__ == '__main__':

    t1 = threading.Thread(
        target=search1,
        args=(Q_result, name),
    )
    t1.daemon = True
    t1.start()

    t2 = threading.Thread(
        target=search2,
        args=(Q_result, name),  # args must be a tuple and include both arguments
    )
    t2.daemon = True
    t2.start()

    # blocks until the faster thread puts its result
    print(Q_result.get())

Tags: python, python-2.7, multiprocessing

Solution


If you are making the same request multiple times, you'll likely be better off just making it once and contacting the owner of the service to improve its performance (for example, it could be distributing connections across nodes and one of the nodes is very slow).

As @Arty notes, threads are lighter-weight to create than processes, and so more performant. You can make the threads daemons so that they don't need to be .join()ed before exit (a non-daemon thread blocks program exit until it completes).
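A minimal sketch of that behavior (the 60-second sleep is just a stand-in for a slow request):

import threading
import time

def slow_request():
    time.sleep(60)  # stand-in for a request that never finishes in time

t = threading.Thread(target=slow_request)
t.daemon = True  # daemon threads are discarded when the main program exits
t.start()

print("main thread exits immediately; the daemon thread is discarded")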

Async logic may be a little faster still, but it can be frustrating to reason about, especially in Python 2. Additionally, if you're using a 3rd-party library such as Twisted's Deferred, you may find that loading the needed libraries is very slow and reduces performance overall.
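For comparison, on Python 3 the same first-result-wins pattern can be written with asyncio (not available in Python 2); a sketch, with asyncio.sleep standing in for the API calls:

import asyncio

async def make_request(delay, result):
    await asyncio.sleep(delay)  # stand-in for an API call of arbitrary duration
    return result

async def main():
    # start both requests and return as soon as the first one completes
    tasks = [
        asyncio.create_task(make_request(1, "fast result")),
        asyncio.create_task(make_request(5, "slow result")),
    ]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # drop whatever the slower one would have returned
    print(done.pop().result())

asyncio.run(main())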

With threads, you may find it convenient to put and get your results via a queue.Queue, which is both thread-safe and can block until content is available.

Rough thread example

from __future__ import print_function  # print(x) over print x
try:
    import queue           # Python 3
except ImportError:
    import Queue as queue  # Python 2
import threading

# if these are the same logic, use an argument to differentiate
# otherwise you could have any number of unique functions,
# each of which makes some different request
def make_request(Q_result, request_args):
    result = "whatever logic is needed to make request A"
    Q_result.put(result)  # put the result into the Queue

list_of_different_request_args = []  # fill with whatever you need

Q_result = queue.Queue()  # create a Queue to hold the result(s)

# iterate over input args (could be list of target functions instead)
for request_args in list_of_different_request_args:
    t = threading.Thread(
        target=make_request,
        args=(Q_result, request_args),
    )
    t.daemon = True  # set via arg in Python 3
    t.start()

# get the first result, blocking until one is available
print(Q_result.get())

# program exits and discards threads
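Alternatively, if Python 3 (or the futures backport on PyPI) is available, concurrent.futures packages this first-completed pattern up directly. A sketch, with the two search functions and their "some name" argument as placeholders:

from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def search1(name):
    return "result from API 1"  # stand-in for the first API request

def search2(name):
    return "result from API 2"  # stand-in for the second API request

pool = ThreadPoolExecutor(max_workers=2)
futures = [pool.submit(fn, "some name") for fn in (search1, search2)]

# block until the first future finishes, then use its result;
# the slower call keeps running in the background until it completes
done, _ = wait(futures, return_when=FIRST_COMPLETED)
print(done.pop().result())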
