gh-80961: Add daemon parameter to ThreadPoolExecutor #13250
hniksic wants to merge 1 commit into python:main
Conversation
Lib/concurrent/futures/thread.py
(review comment on an outdated revision of the diff)

```diff
                 work_item.future.set_exception(BrokenThreadPool(self._broken))

-    def shutdown(self, wait=True):
+    def shutdown(self, wait=True, wait_at_exit=True):
```
shutdown is defined in Executor because Executor is the abstract superclass for both ThreadPoolExecutor and ProcessPoolExecutor. Unless there is a very strong reason not to, this method should work the same in both executors.
@brianquinlan This was intentional. I tested shutdown(wait=False) with ProcessPoolExecutor and found that it raised exceptions and hung the process at exit. (Not just hung in the sense of waiting for the pending futures, but hung indefinitely, even after the futures completed.) So the new functionality is only available in, and documented for, ThreadPoolExecutor.
For example, when I run this script on Python 3.7:
import time, concurrent.futures
pool = concurrent.futures.ProcessPoolExecutor()
pool.submit(time.sleep, 5)
print(1)
pool.shutdown(wait=False)
print(2)

The expected behavior is for the program to print 1 and 2 and then wait for 5 seconds before exiting. Instead, it prints 1 and 2, but hangs at exit with the following output:
$ python3.7 ~/Desktop/x
1
2
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib/python3.7/concurrent/futures/process.py", line 101, in _python_exit
    thread_wakeup.wakeup()
  File "/usr/lib/python3.7/concurrent/futures/process.py", line 89, in wakeup
    self._writer.send_bytes(b"")
  File "/usr/lib/python3.7/multiprocessing/connection.py", line 183, in send_bytes
    self._check_closed()
  File "/usr/lib/python3.7/multiprocessing/connection.py", line 136, in _check_closed
    raise OSError("handle is closed")
OSError: handle is closed
Exception in thread QueueManagerThread:
Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.7/threading.py", line 865, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.7/concurrent/futures/process.py", line 368, in _queue_management_worker
    thread_wakeup.clear()
  File "/usr/lib/python3.7/concurrent/futures/process.py", line 92, in clear
    while self._reader.poll():
  File "/usr/lib/python3.7/multiprocessing/connection.py", line 255, in poll
    self._check_closed()
  File "/usr/lib/python3.7/multiprocessing/connection.py", line 136, in _check_closed
    raise OSError("handle is closed")
OSError: handle is closed
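For contrast, ThreadPoolExecutor itself handles shutdown(wait=False) without trouble; the only remaining wait is the one at interpreter exit that this PR targets. A minimal sketch, not taken from the PR, illustrating that the call returns promptly:

```python
import time
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=1)
pool.submit(time.sleep, 1)

start = time.monotonic()
pool.shutdown(wait=False)    # returns without joining the worker thread
elapsed = time.monotonic() - start
print(elapsed < 0.5)         # True: the call itself does not block
# The interpreter will still wait for the worker at exit; making that
# wait optional is what the proposed daemon parameter is about.
```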
I really wanted ThreadPoolExecutor and ProcessPoolExecutor to have the same API when I designed them.
Do you have any bandwidth to debug this? If not, I could take a look.
Please do take a look if you can. I am not acquainted with the implementation of ProcessPoolExecutor, so it would take quite some time for me to trace what's going on.
It would of course be ideal if both classes supported the new flag.
I was playing with ProcessPoolExecutor and it seems like there are a bunch of problems that are triggered when pool.shutdown(wait=False) is used. I filed a bug for one issue: https://bugs.python.org/issue39205
Do you think that your PR could hold off until I have a chance to sort some of this out?
> Do you think that your PR could hold off until I have a chance to sort some of this out?
Sure, thanks for asking. We have a workaround, so it's no problem to wait for the proper solution. It's just that the workaround is extremely ugly, involving monkey-patching a private method, so we'd definitely prefer the proper fix to land eventually.
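The workaround itself is not shown in the thread. A sketch of the general shape such a hack takes (the helper name `detach_from_exit` is hypothetical, and it pokes at private internals of `concurrent.futures.thread`, so it may break across versions):

```python
import time
import concurrent.futures.thread as thread_mod
from concurrent.futures import ThreadPoolExecutor

def detach_from_exit(pool):
    # Hypothetical helper relying on private internals: drop this pool's
    # workers from the module-global registry that the exit hook drains,
    # so exit-time code no longer signals or joins them via that path.
    for t in list(pool._threads):
        thread_mod._threads_queues.pop(t, None)

pool = ThreadPoolExecutor(max_workers=1)
pool.submit(time.sleep, 0.2)
detach_from_exit(pool)
print(any(t in thread_mod._threads_queues for t in pool._threads))  # False

# On Python <= 3.8, whose workers were daemon threads, this let the process
# exit immediately; since 3.9 the workers are non-daemon, so the interpreter
# still joins them at exit -- which is why the PR also creates real daemon
# threads rather than only skipping the registration.
pool.shutdown(wait=True)  # clean up so this example itself exits promptly
```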
This PR is stale because it has been open for 30 days with no activity.
Add a `daemon=False` keyword-only parameter to `ThreadPoolExecutor`. When true, worker threads are created as daemon threads and are not registered in the global `_threads_queues`, allowing the interpreter to exit without waiting for them to finish.
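For reference, the default behavior being changed can be observed directly on Python 3.9+ (the attribute names below are private internals, inspected only for illustration):

```python
import concurrent.futures.thread as thread_mod
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=1)
pool.submit(lambda: None).result()   # force one worker thread to be created
worker = next(iter(pool._threads))

print(worker.daemon)                          # False on 3.9+: a regular thread
print(worker in thread_mod._threads_queues)   # True: joined at interpreter exit
pool.shutdown()
```

With the proposed `daemon=True`, both of these would flip: the worker would be a daemon thread and would not appear in `_threads_queues`.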
I've now updated this PR to address the feedback from the issue discussion. Since Python 3.9 made executor threads non-daemon, the implementation now also creates actual daemon threads, not just removes them from `_threads_queues`. Tests are updated accordingly.
See https://bugs.python.org/issue36780 (GH-80961).