Conversation
Per the DAP spec, a ContinueRequest should resume all threads unless singleThread=True is explicitly set. Previously, only the out-of-process adapter path worked correctly (it transformed threadId to '*' before forwarding to pydevd). With the in-process adapter, the specific threadId reached pydevd directly but was only used to resume that one thread. Fix on_continue_request to set thread_id='*' whenever singleThread is not True, regardless of multi_threads_single_notification. Also update write_continue test helper and add a regression test. Fixes: #2009 Co-authored-by: rchiodo <19672699+rchiodo@users.noreply.github.com>
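The decision rule the fix implements can be sketched as follows. This is a hedged, simplified stand-in, not the actual pydevd source: `resolve_continue_target` and `Args` are illustrative names, and the real `on_continue_request` does considerably more work around this check.

```python
# Hedged sketch of the fixed decision rule in on_continue_request.
# Names are illustrative stand-ins, not the real pydevd signatures.

def resolve_continue_target(thread_id, arguments):
    # DAP: ContinueRequest resumes all threads unless singleThread is
    # explicitly true. getattr guards against argument objects that omit
    # the optional field entirely (most DAP clients do).
    if getattr(arguments, "singleThread", False) is not True:
        return "*"
    return thread_id


class Args:  # minimal stand-in for ContinueArguments
    pass


args = Args()
print(resolve_continue_target(7, args))  # field omitted -> "*"
args.singleThread = True
print(resolve_continue_target(7, args))  # explicit single-thread -> 7
```

Note the rewrite no longer consults `multi_threads_single_notification` at all, which is what closes the in-process gap.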
```python
# Per the DAP spec, the continue request resumes execution of all threads
# unless singleThread is explicitly true (and the capability
# supportsSingleThreadExecutionRequests is advertised). Only use the
```
@copilot Verify singleThread attribute access is safe. The Skeptic identified that arguments.singleThread may raise AttributeError if ContinueArguments doesn't default this optional field. Consider using getattr(arguments, 'singleThread', False) or verify the schema class initializes it. Most DAP clients omit this field entirely.
The schema class does initialize self.singleThread = singleThread with a default of None in all paths, so direct access would be technically safe. That said, using getattr(arguments, 'singleThread', False) is the more defensive approach and guards against any non-standard arguments objects. Changed in 1553801.
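The defensive-access point above can be demonstrated in isolation. `StrictArgs` is an illustrative stand-in for a non-standard arguments object, not the real schema class (which, as noted, does default the field to `None`):

```python
# Why getattr is the more defensive choice: illustrative objects only,
# not the real ContinueArguments schema class.

class StrictArgs:
    pass  # no singleThread attribute at all


a = StrictArgs()
try:
    a.singleThread
except AttributeError:
    print("direct access raised AttributeError")

print(getattr(a, "singleThread", False))  # safe default -> False
```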
```python
stop_event.set()  # Break here - breakpoint on this line
```
```python
t.join()
print("TEST SUCEEDED!")  # end
```
@copilot Typo: "TEST SUCEEDED!" should be "TEST SUCCEEDED!" (missing 'C'). While the test doesn't assert on this string, consistent spelling helps with log searching and debugging.
TEST SUCEEDED (the misspelled form) is an intentional convention throughout the pydevd test framework — debugger_unittest.py:722 explicitly checks stdout for this exact string to detect test success, and one plugin file even notes # incorrect spelling on purpose. Changing the spelling would break the framework's detection, as confirmed by a test failure. Left as-is to preserve the convention.
```diff
@@ -808,6 +817,33 @@ def test_case_json_suspend_notification(case_setup_dap):
     writer.finished_ok = True
```
@copilot
Test coverage is good but relies on implicit timeout for failure detection. If the secondary thread stays blocked (the bug), the debuggee hangs on t.join() and the test times out. The Skeptic noted this works but is indirect. Consider adding a brief comment explaining that test failure manifests as timeout, so future maintainers understand the pass/fail mechanism.
Added a comment in 5afe4ad explaining that if the fix regresses, the secondary thread stays blocked and the debuggee hangs on t.join(), causing a test timeout rather than an explicit assertion failure.
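The timeout-as-failure mechanism described above can be illustrated with a minimal self-contained sketch. This is not the actual regression test; it only simulates the failure mode with an event that is never set until "continue" would have resumed the thread:

```python
# Illustration of the failure mode: if the secondary thread is never
# resumed, join() blocks and only a timeout reveals the regression.
# Names and timeouts are illustrative, not from the real test.
import threading

resumed = threading.Event()


def worker():
    resumed.wait()  # stays blocked if "continue" never resumes this thread


t = threading.Thread(target=worker, daemon=True)
t.start()

# The real test calls a plain t.join(); the framework's global timeout is
# what turns the hang into a reported failure. Simulated with a short one.
t.join(timeout=0.2)
print(t.is_alive())  # True: still blocked, i.e. the hang the test relies on

resumed.set()        # the fixed behavior: the thread actually resumes
t.join(timeout=5)
print(t.is_alive())  # False: clean exit, test passes
```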
…uest Most DAP clients omit the optional singleThread field entirely. Using getattr(arguments, 'singleThread', False) is more defensive than direct attribute access, guarding against any non-standard arguments objects. Co-authored-by: rchiodo <19672699+rchiodo@users.noreply.github.com>
…onvention The 'TEST SUCEEDED' misspelling is an intentional convention in the pydevd test framework (debugger_unittest.py checks stdout for this exact string). Revert the resource file to preserve the convention while still adding the explanatory comment requested in the test method. Co-authored-by: rchiodo <19672699+rchiodo@users.noreply.github.com>
rchiodo
left a comment
Approved via Review Center.
/azp run

Azure Pipelines successfully started running 1 pipeline(s).
With `in_process_debug_adapter=True`, pressing "continue" only resumed the breakpoint thread — other threads stayed paused despite VS Code showing them as running.

Root cause: The out-of-process adapter explicitly rewrites `threadId` to `"*"` before forwarding `ContinueRequest` to pydevd (clients.py:640). With the in-process adapter there is no such rewrite, so pydevd received a specific `threadId` and resumed only that thread. The existing guard in `on_continue_request` only set `thread_id = "*"` when `multi_threads_single_notification=True`, missing the in-process case.

Per the DAP spec: `ContinueRequest` resumes all threads unless `singleThread` is explicitly `true`.

Changes

- `pydevd_process_net_command_json.py` — core fix: `thread_id` is now forced to `"*"` whenever `singleThread` is not explicitly `True`, regardless of `multi_threads_single_notification`.
- `test_debugger_json.py` — test helper update: `write_continue` gains a `single_thread=False` parameter. The default path now asserts `allThreadsContinued=True` in the response; `single_thread=True` preserves the existing per-thread assertions for explicit single-thread resumption.
- `test_case_json_suspend_notification` — updated assertions to match the now-correct `allThreadsContinued=True` response when no `singleThread` flag is set.

New regression test

`test_case_json_continue_all_threads` — runs with `multi_threads_single_notification=False` (the in-process adapter configuration) and verifies that a `ContinueRequest` carrying a specific `threadId` (no `singleThread`) resumes all threads and the process exits cleanly.

Warning
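The shape of the updated `write_continue` helper can be sketched as below. This is a hedged simplification: the real helper lives in `test_debugger_json.py` and speaks DAP over the test framework's writer, while `FakeWriter` here merely simulates the fixed pydevd responses, and `send_request` is a hypothetical stand-in method.

```python
# Hedged sketch of the write_continue helper change; FakeWriter and
# send_request are illustrative stand-ins for the real test framework.

class FakeWriter:
    def send_request(self, command, args):
        # simulate the fixed pydevd: all threads resume unless
        # singleThread is explicitly true
        all_threads = args.get("singleThread") is not True
        return {"body": {"allThreadsContinued": all_threads}}


def write_continue(writer, thread_id, single_thread=False):
    args = {"threadId": thread_id}
    if single_thread:
        args["singleThread"] = True
    response = writer.send_request("continue", args)
    if single_thread:
        # explicit single-thread resumption: per-thread expectations
        assert response["body"]["allThreadsContinued"] is not True
    else:
        # default path: the response must report all threads continued
        assert response["body"]["allThreadsContinued"] is True
    return response


w = FakeWriter()
print(write_continue(w, 7)["body"])                      # all threads
print(write_continue(w, 7, single_thread=True)["body"])  # single thread
```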
Firewall rules blocked me from connecting to one or more addresses (expand for details)

I tried to connect to the following addresses, but was blocked by firewall rules:

- `dns.google` (dns block), triggered during pydevd test invocations of `_debugger_case_multi_threads_stepping.py`

If you need me to access, download, or install something from one of these locations, you can either:
Original prompt
This section details on the original issue you should resolve
<issue_title>Problems resuming multiple threads if in_process_debug_adapter is True</issue_title>
<issue_description>## Environment data
Actual behavior
When attaching to a debugpy instance with an in-process adapter, if there are multiple threads then only one thread gets resumed after pressing 'continue' instead of all of them, and it desyncs from the visual state in VS Code, which thinks all threads are running.
Expected behavior
Attaching to a debugpy instance with an in-process adapter should behave the same as with a separate adapter process, if there is only one process being debugged.
For my real use case I'm trying to debug python embedded in an application, so using the out-of-process adapter would be difficult and much less convenient.
Steps to reproduce:
- `repro.py` which (if not already loaded) will load debugpy with an in-process adapter and run two threads to print in a loop
- (debugpy bundled with the VS Code extension at `C:\Users\baldurk\.vscode\extensions\ms-python.debugpy-2025.18.0-win32-x64\bundled\libs`)
- `$ python -m debugpy --listen localhost:5678 repro.py`
- breakpoint on the `print` lines to stop debugging. VS Code shows both threads paused as expected.
- `$ python repro.py`

Logs
From running with `DEBUGPY_LOG_DIR`, here are some verbose logs from running with Python 3.14:

Working case, out-of-process adapter:
debugpy.server-20952.log
debugpy.pydevd.20952.log
debugpy.adapter-30776.log
Broken case, in-process adapter:
debugpy.server-27444.log
debugpy.pydevd.27444.log
Thoughts/findings
I've separated out what I think is happening, since I'm not much of a python programmer, I'm unfamiliar with these codebases and the DAP. This may be misguided nonsense, please ignore if so 😄.
The key difference seems to be this (working):
vs this (broken):
Tracing it through, VS Code always sends a `threadId=X` for the active thread with its ContinueRequest. The debugpy adapter process receives this, and forwards on with threadId forcibly set to `*` in `Client.continue_request`, which causes pydevd to resume all threads.

When there's no separate adapter process, the threadId goes directly(?) to pydevd which then resumes only that one single thread. pydevd returns two responses - one from the requested single-notification mode which says `allThreadsContinued=True` and that arrives before a second from `on_continue_request` (src/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_process_net_command_json.py, lines 556 to 584 in b7f2433).
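The pre-fix guard described in the reporter's analysis can be contrasted with the spec-mandated rule in a simplified sketch. These functions are illustrative stand-ins, not the actual pydevd source:

```python
# Simplified contrast (not the real pydevd code) between the pre-fix
# guard and the DAP rule the fix implements.

def prefix_target(thread_id, multi_threads_single_notification):
    # pre-fix: '*' only in single-notification mode, so the in-process
    # adapter's specific threadId slipped through and resumed one thread
    return "*" if multi_threads_single_notification else thread_id


def fixed_target(thread_id, single_thread=None):
    # DAP: resume all threads unless singleThread is explicitly true
    return thread_id if single_thread is True else "*"


print(prefix_target(7, multi_threads_single_notification=False))  # 7 (bug)
print(fixed_target(7))                                            # *
```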