Python - Replacing Subprocess Source Files

Problem Description

I am trying to write a program consisting of two files, one called launcher.py and the other called sysupdate.py. launcher spawns subprocesses to run concurrently (sysupdate among them), and sysupdate listens on the network for zipped software-update files. When sysupdate receives an update file, it needs to be able to kill/pause the other processes (created by launcher), replace their source files, and then restart them. I am struggling to find a clean way to achieve this and wondered whether anyone has any suggestions?

I should mention that these subprocesses are designed as infinite loops, so I cannot simply wait for them to exit on their own; I need to be able to kill them manually, replace their source files, and then restart them.

While the subprocesses are running, I need the launcher to "keep them alive", so that if they die for any reason they are restarted. Obviously, that behaviour needs to be paused while they are being killed for a software update. This code is for an always-on sensor system, so consistent looping and restarting is essential.

For example:

launcher.py:

from multiprocessing import Process

processes = []

def launch_threads():
    # Reading thread
    try:
        readthread = Process(target=read_loop, args=(sendqueue, mqttqueue))
        processes.append(readthread)
    except Exception as ex:
        log("Read process creation failed: " + str(ex), 3)

    # ..... Other threads/processes here

    # System Update Thread
    try:
        global updatethread
        updatethread = Process(target=update_loop, args=(updatequeue,))
        processes.append(updatethread)
    except Exception as ex:
        log("Software updater process creation failed: " + str(ex), 3)

    return processes


if __name__ == '__main__':
    processes = launch_threads()
    for p in processes:
        p.start()
    for p in processes:              # Here I have it trying to keep processes alive permanently, ..
        p.join()                     # .. I need a way to 'pause' this
        if not p.is_alive():
            p.start()

sysupdate.py:

def update_loop():
    wait_for_zip_on_network()
    extract_zip()

    kill_processes()           # Need sysupdate to be able to tell 'launcher' to kill/pause the processes

    replace_source_files()

    resume_processes()         # Tell 'launcher' to resume/restart the processes
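
To make the desired coordination concrete: one common pattern for this kind of pause/resume signalling is a shared multiprocessing.Event that the updater sets while it swaps files and clears afterwards. A minimal sketch under that assumption (pause_event, worker and keep_alive are illustrative names, not part of the code above):

from multiprocessing import Process, Event
import time

pause_event = Event()  # set by the updater while source files are being replaced

def worker():
    while True:            # stand-in for a real never-ending sensor loop
        time.sleep(5)

def keep_alive():
    # Restart the worker whenever it dies, unless an update is in progress.
    p = Process(target=worker)
    p.start()
    while True:
        p.join(timeout=1)                 # poll rather than block forever
        if not p.is_alive():
            if pause_event.is_set():      # update in progress: hold off
                time.sleep(1)
                continue
            p = Process(target=worker)    # a finished Process cannot be
            p.start()                     # start()ed again, so make a new one

if __name__ == '__main__':
    keep_alive()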

Tags: python, multiprocessing, updates

Solution


launch_threads is probably a misnomer, since you are launching processes, not threads. I assume you are starting some number of these processes, which we can assign to a variable N_TASKS, plus one additional process running update_loop, so the total number of processes is N_TASKS + 1. I also assume, for now, that these N_TASKS processes eventually complete when there is no source update. My suggestion is to use a multiprocessing pool, which conveniently provides facilities that make our job a little simpler. I will also use a modified version of update_loop that just listens for a change, performs the source update, and terminates, but can be relaunched:

sysupdate.py

def modified_update():
    # Just wait for an update archive to arrive and hand it back to the
    # launcher via this task's callback:
    zip_file = wait_for_zip_on_network()
    return zip_file

We then use the Pool class from the multiprocessing module together with callbacks, so that we know when each submitted task completes. We want to wait until either the modified_update task completes or all of the "regular" tasks complete. In either case we terminate any outstanding tasks, but in the first case we restart everything, whereas in the second case we are done:

from multiprocessing import Pool
from threading import Event

# the number of processes that need to run besides the modified_update process:
N_TASKS = 4

completed_event = None
completed_count = 0

def regular_task_completed_callback(result):
    global completed_count, completed_event
    completed_count += 1
    if completed_count == N_TASKS:
        completed_event.set() # we are through with all the tasks

def new_source_files_callback(zip_file):
    global completed_event
    extract_zip(zip_file)
    replace_source_files()
    completed_event.set()

def launch_threads():
    global completed_event, completed_count
    POOLSIZE = N_TASKS + 1
    while True:
        completed_event = Event()
        completed_count = 0
        pool = Pool(POOLSIZE)
        # start the "regular" processes:
        pool.apply_async(read_loop, args=(sendqueue, mqttqueue), callback=regular_task_completed_callback)
        # etc.
        # start modified update_loop:
        pool.apply_async(modified_update, callback=new_source_files_callback)
        # wait for either the source files to have changed or the "regular" tasks to have completed:
        completed_event.wait()
        # terminate all outstanding tasks
        pool.terminate()
        if completed_count == N_TASKS: # all the "regular" tasks have completed
            return # we are done
        # else we start all over again


if __name__ == '__main__':
    launch_threads()
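
To experiment with this skeleton on its own, stand-ins for the names it borrows from the question (read_loop, sendqueue, mqttqueue, wait_for_zip_on_network, extract_zip, replace_source_files) could look like the following; the bodies are invented purely for illustration:

import time

sendqueue, mqttqueue = None, None    # placeholders for the real queues

def read_loop(sendq, mqttq):
    time.sleep(3)                    # pretend to read sensors, then finish

def wait_for_zip_on_network():
    time.sleep(10)                   # pretend an update arrives after 10 s
    return "update.zip"

def extract_zip(zip_file):
    print("extracting", zip_file)

def replace_source_files():
    print("replacing source files")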

Update

If the "regular" tasks never terminate, the logic is greatly simplified. modified_update becomes:

sysupdate.py

def modified_update():
    zip_file = wait_for_zip_on_network()
    extract_zip(zip_file)
    replace_source_files()

And then:

launcher.py

from multiprocessing import Pool


def launch_threads():
    # the number of processes that need to run besides the modified_update process:
    N_TASKS = 4
    POOLSIZE = N_TASKS + 1
    while True:
        pool = Pool(POOLSIZE)
        # start the "regular" processes:
        pool.apply_async(read_loop, args=(sendqueue, mqttqueue))
        # etc.
        # start modified_update:
        result = pool.apply_async(modified_update)
        result.get() # wait for modified_update to complete
        # terminate all outstanding (i.e. "regular") tasks
        pool.terminate()
        # and start all over


if __name__ == '__main__':
    launch_threads()
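
One point worth keeping in mind when trying something along these lines: whether the restarted workers actually run the replaced source depends on the multiprocessing start method. With 'fork' (the default on Linux), new workers inherit the launcher's already-imported modules, so the freshly written files would not take effect; 'spawn' starts each worker in a fresh interpreter that re-imports its modules from disk. A sketch of selecting the start method explicitly:

import multiprocessing as mp

if __name__ == '__main__':
    # 'spawn' makes each pool worker a fresh interpreter that re-imports
    # read_loop's module, so it picks up the replaced source files:
    mp.set_start_method('spawn')
    launch_threads()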

Notes

Since I am now using fewer of the Pool facilities, you could go back to launching individual Process instances (a sketch of that variant follows the notes below). The gist of what is being done is:

  1. modified_update no longer loops; it terminates after a source update has been performed.
  2. launch_threads consists of a loop that launches the "regular" and modified_update processes and waits for modified_update to complete, signalling that a source update has occurred. All the "regular" processes must then be terminated and everything starts over. Using a pool just simplifies keeping track of all the processes and terminating them with a single call.
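
For completeness, a sketch of that Process-based variant, assuming the same read_loop, modified_update and queue names as above:

from multiprocessing import Process

def launch_threads():
    while True:
        # start the "regular" never-ending processes:
        workers = [
            Process(target=read_loop, args=(sendqueue, mqttqueue)),
            # etc.
        ]
        for w in workers:
            w.start()
        # run modified_update and wait for it to finish, which signals
        # that a source update has been performed:
        updater = Process(target=modified_update)
        updater.start()
        updater.join()
        # kill the outstanding "regular" processes ...
        for w in workers:
            w.terminate()
            w.join()
        # ... and start all over with the new source files

if __name__ == '__main__':
    launch_threads()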
