
Fix: ContinueRequest with specific threadId resumes all threads (in-process adapter fix)#2012

Merged
rchiodo merged 4 commits into main from
copilot/fix-thread-resume-issue
Mar 13, 2026

Conversation

Contributor

Copilot AI commented Mar 11, 2026

With in_process_debug_adapter=True, pressing "continue" only resumed the breakpoint thread — other threads stayed paused despite VS Code showing them as running.

Root cause: The out-of-process adapter explicitly rewrites threadId to "*" before forwarding ContinueRequest to pydevd (clients.py:640). With the in-process adapter there is no such rewrite, so pydevd received a specific threadId and resumed only that thread. The existing guard in on_continue_request only set thread_id = "*" when multi_threads_single_notification=True, missing the in-process case.

Per the DAP spec: ContinueRequest resumes all threads unless singleThread is explicitly true.
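The spec rule above can be sketched as a small scope-resolution helper. This is an illustrative sketch only: `resolve_continue_scope` and its dict-shaped arguments are stand-ins, not debugpy internals.

```python
# Hedged sketch of the DAP 'continue' scope rule; resolve_continue_scope
# and the dict-shaped arguments are illustrative, not debugpy's API.

def resolve_continue_scope(arguments):
    """Return '*' to resume all threads, or the specific threadId."""
    # Most DAP clients omit singleThread entirely, so a missing field
    # must be treated the same as False.
    if arguments.get("singleThread") is True:
        return arguments["threadId"]
    return "*"

print(resolve_continue_scope({"threadId": 1}))                        # *
print(resolve_continue_scope({"threadId": 1, "singleThread": True}))  # 1
```

The key point the bug violated: a bare `threadId` without `singleThread: true` still means "resume everything".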

Changes

  • pydevd_process_net_command_json.py — core fix:
    thread_id is now forced to "*" whenever singleThread is not explicitly True, regardless of multi_threads_single_notification:

    single_thread = arguments.singleThread
    if not single_thread or py_db.multi_threads_single_notification:
        thread_id = "*"
  • test_debugger_json.py — test helper update:
    write_continue gains a single_thread=False parameter. Default path now asserts allThreadsContinued=True in the response. single_thread=True preserves the existing per-thread assertions for explicit single-thread resumption.

  • test_case_json_suspend_notification — updated assertions to match the now-correct allThreadsContinued=True response when no singleThread flag is set.

  • New regression test test_case_json_continue_all_threads — runs with multi_threads_single_notification=False (the in-process adapter configuration) and verifies that a ContinueRequest carrying a specific threadId (no singleThread) resumes all threads and the process exits cleanly.
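The helper change in the second bullet can be sketched roughly as follows. `check_continue_response` is a simplified stand-in for the real `write_continue` helper; the actual plumbing lives in test_debugger_json.py.

```python
# Simplified stand-in for the updated write_continue assertion logic;
# the function name and response shape are illustrative, not the real helper.

def check_continue_response(response, single_thread=False):
    body = response.get("body", {})
    if single_thread:
        # Explicit single-thread resumption keeps per-thread semantics.
        assert body.get("allThreadsContinued") is not True
    else:
        # Default path: the adapter must report that all threads resumed.
        assert body.get("allThreadsContinued") is True

check_continue_response({"body": {"allThreadsContinued": True}})
check_continue_response({"body": {"allThreadsContinued": False}}, single_thread=True)
```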

Warning

Firewall rules blocked me from connecting to one or more addresses (expand for details)

I tried to connect to the following addresses, but was blocked by firewall rules:

  • dns.google
    • Triggering command: /usr/bin/python3 /usr/bin/python3 -u /home/REDACTED/work/debugpy/debugpy/src/debugpy/_vendored/pydevd/pydevd.py --qt-support --client 127.0.0.1 --port 33219 --debug-mode debugpy-dap --json-dap-http --file /home/REDACTED/work/debugpy/debugpy/src/debugpy/_vendored/pydevd/tests_python/resources/_debugger_case_multi_threads_stepping.py MD_THREAD_RUN|CMD_THREAD_RESUME user.email urces/_debugger_case_change_breaks.py (dns block)
    • Triggering command: /usr/bin/python3 /usr/bin/python3 -u /home/REDACTED/work/debugpy/debugpy/src/debugpy/_vendored/pydevd/pydevd.py --qt-support --client 127.0.0.1 --port 41563 --debug-mode debugpy-dap --json-dap-http --file /home/REDACTED/work/debugpy/debugpy/src/debugpy/_vendored/pydevd/tests_python/resources/_debugger_case_multi_threads_stepping.py --local user.email urces/_debugger_case_change_breaks.py (dns block)
    • Triggering command: /usr/bin/python3 /usr/bin/python3 -u /home/REDACTED/work/debugpy/debugpy/src/debugpy/_vendored/pydevd/pydevd.py --qt-support --client 127.0.0.1 --port 45755 --debug-mode debugpy-dap --json-dap-http --file /home/REDACTED/work/debugpy/debugpy/src/debugpy/_vendored/pydevd/tests_python/resources/_debugger_case_multi_threads_stepping.py --local on_change_breaks urces/_debugger_case_change_breaks.py (dns block)

If you need me to access, download, or install something from one of these locations, you can either:

Original prompt

This section details the original issue you should resolve

<issue_title>Problems resuming multiple threads if in_process_debug_adapter is True</issue_title>
<issue_description>## Environment data

  • debugpy version: 1.8.19 (from ms-python.debugpy-2025.18.0-win32-x64 extension)
  • OS and version: Windows 10 19045.6456
  • Python version (& distribution if applicable, e.g. Anaconda): Tested on Python 3.8.10, 3.12.10 and 3.14.0
  • Using VS Code or Visual Studio: VS Code 1.111

Actual behavior

When attaching to a debugpy instance with an in-process adapter, if there are multiple threads then only the thread that hit the breakpoint gets resumed after pressing 'continue' instead of all of them. This desyncs from the visual state in VS Code, which shows all threads as running.

Expected behavior

Attaching to a debugpy instance with an in-process adapter should behave the same as with a separate adapter process, if there is only one process being debugged.

For my real use case I'm trying to debug python embedded in an application, so using the out-of-process adapter would be difficult and much less convenient.

Steps to reproduce:

  1. Create repro.py which (if not already loaded) will load debugpy with an in-process adapter and run two threads to print in a loop:
import threading, time, sys

if 'debugpy' not in sys.modules.keys():
    import debugpy
    debugpy.configure(subProcess=False)
    debugpy.listen(5678, in_process_debug_adapter=True)

def f():
    for i in range(100):
        print(f"Thread {i}")
        time.sleep(0.3)

threading.Thread(target=f).start()

for i in range(100):
    print(f"Main {i}")
    time.sleep(0.3)
  2. Create a launch.json with 'Python Debugger: Remote Attach' to the default host+port (omitted for brevity, I remove the default pathMappings)
  3. Set PYTHONPATH so that debugpy can be found (for me e.g. C:\Users\baldurk\.vscode\extensions\ms-python.debugpy-2025.18.0-win32-x64\bundled\libs)
  4. Run first via the debugpy module (this creates an out-of-process adapter):
    $ python -m debugpy --listen localhost:5678 repro.py
  5. Attach from VS Code, breakpoint on one of the print lines to stop debugging. VS Code shows both threads paused as expected.
  6. Remove the breakpoint and continue. Both threads are shown as running now in VS Code and both continue printing.
  7. Now run directly to let the script open an in-process adapter:
    $ python repro.py
  8. Attach from VS Code and repeat the steps to breakpoint and continue. Only the thread that hit the breakpoint continues printing; the other remains paused even though both threads are shown as running in VS Code.

Logs

From running with DEBUGPY_LOG_DIR, here are some verbose logs with Python 3.14:

Working case, out of process adapter:

debugpy.server-20952.log
debugpy.pydevd.20952.log
debugpy.adapter-30776.log

Broken case, in-process adapter:

debugpy.server-27444.log
debugpy.pydevd.27444.log

Thoughts/findings

I've separated out what I think is happening, since I'm not much of a python programmer, I'm unfamiliar with these codebases and the DAP. This may be misguided nonsense, please ignore if so 😄.

The key difference seems to be this (working):

0.83s - Process ContinueRequest: {
    "arguments": {
        "threadId": "*"
    },
    "command": "continue",
    "seq": 18,
    "type": "request"
}

vs this (broken):

2.17s - Process ContinueRequest: {
    "arguments": {
        "threadId": 1
    },
    "command": "continue",
    "seq": 15,
    "type": "request"
}

Tracing it through, VS Code always sends a threadId=X for the active thread with its ContinueRequest. The debugpy adapter process receives this and forwards it on with threadId forcibly set to * in Client.continue_request, which causes pydevd to resume all threads.

When there's no separate adapter process, the threadId goes directly(?) to pydevd, which then resumes only that one thread. pydevd returns two responses: one from the requested single-notification mode, which says allThreadsContinued=True and arrives before a second from on_continue_request:

def on_continue_request(self, py_db, request):
    """
    :param ContinueRequest request:
    """
    arguments = request.arguments  # : :type arguments: ContinueArguments
    thread_id = arguments.threadId
    if py_db.multi_threads_single_notification:
        thread_id = "*"

    def on_resumed():
        body = {"allThreadsContinued": thread_id == "*"}
        response = pydevd_base_schema.build_response(request, kwargs={"body": body})
        cmd = NetCommand(CMD_RETURN, 0, response, is_json=True)
        py_db.writer.add_command(cmd)

    if py_db.multi_threads_single_notification:
        # Only send resumed notification when it has actually resumed!
        # (otherwise the user could send a continue, receive the notification and then
        # request a new pause which would be paused without sending any notification as
        # it didn't really run in the first place).
        py_db.threads_suspended_single_notification.add_on_resumed_callback(on_resumed)
        self.api.request_resume_thread(thread_id)
    else:
        # Only send resumed notification when it has actually resumed!
        # (otherwise the user could send a continue, receive the notification and then
        # request a new pause which would be paused without sending any notification as
        # it didn't really run in the first place).
        self.api.request_resume_thread(thread_id)
        on_resumed()
...



Per the DAP spec, a ContinueRequest should resume all threads unless
singleThread=True is explicitly set. Previously, only the out-of-process
adapter path worked correctly (it transformed threadId to '*' before
forwarding to pydevd). With the in-process adapter, the specific threadId
reached pydevd directly but was only used to resume that one thread.

Fix on_continue_request to set thread_id='*' whenever singleThread is not
True, regardless of multi_threads_single_notification. Also update
write_continue test helper and add a regression test.

Fixes: #2009

Co-authored-by: rchiodo <19672699+rchiodo@users.noreply.github.com>
Copilot AI changed the title [WIP] Fix issues resuming multiple threads in debugpy Fix: ContinueRequest with specific threadId resumes all threads (in-process adapter fix) Mar 11, 2026

# Per the DAP spec, the continue request resumes execution of all threads
# unless singleThread is explicitly true (and the capability
# supportsSingleThreadExecutionRequests is advertised). Only use the
Contributor


@copilot Verify singleThread attribute access is safe. The Skeptic identified that arguments.singleThread may raise AttributeError if ContinueArguments doesn't default this optional field. Consider using getattr(arguments, 'singleThread', False) or verify the schema class initializes it. Most DAP clients omit this field entirely.

Contributor Author


The schema class does initialize self.singleThread = singleThread with a default of None in all paths, so direct access would be technically safe. That said, using getattr(arguments, 'singleThread', False) is the more defensive approach and guards against any non-standard arguments objects. Changed in 1553801.
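The defensive pattern described here can be illustrated in isolation. `SimpleNamespace` stands in for a ContinueArguments instance that may or may not define the field; `is_single_thread` is an illustrative name, not the actual code.

```python
# Illustration of the getattr guard adopted in commit 1553801;
# SimpleNamespace stands in for a ContinueArguments object that may
# omit singleThread entirely, as most DAP clients do.
from types import SimpleNamespace

def is_single_thread(arguments):
    # getattr with a default never raises AttributeError, unlike direct
    # attribute access on a non-standard arguments object.
    return getattr(arguments, "singleThread", False) is True

print(is_single_thread(SimpleNamespace(threadId=1)))                     # False
print(is_single_thread(SimpleNamespace(threadId=1, singleThread=True)))  # True
print(is_single_thread(SimpleNamespace(threadId=1, singleThread=None)))  # False
```

The `is True` comparison also normalizes the schema's `None` default to "not single-threaded".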

stop_event.set() # Break here - breakpoint on this line

t.join()
print("TEST SUCEEDED!") # end
Contributor


@copilot Typo: "TEST SUCEEDED!" should be "TEST SUCCEEDED!" (missing 'C'). While the test doesn't assert on this string, consistent spelling helps with log searching and debugging.

Contributor Author


TEST SUCEEDED (the misspelled form) is an intentional convention throughout the pydevd test framework — debugger_unittest.py:722 explicitly checks stdout for this exact string to detect test success, and one plugin file even notes # incorrect spelling on purpose. Changing the spelling would break the framework's detection, as confirmed by a test failure. Left as-is to preserve the convention.

@@ -808,6 +817,33 @@ def test_case_json_suspend_notification(case_setup_dap):
writer.finished_ok = True
Contributor


@copilot
Test coverage is good but relies on implicit timeout for failure detection. If the secondary thread stays blocked (the bug), the debuggee hangs on t.join() and the test times out. The Skeptic noted this works but is indirect. Consider adding a brief comment explaining that test failure manifests as timeout, so future maintainers understand the pass/fail mechanism.

Contributor Author


Added a comment in 5afe4ad explaining that if the fix regresses, the secondary thread stays blocked and the debuggee hangs on t.join(), causing a test timeout rather than an explicit assertion failure.
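The hang-as-failure mechanism can be illustrated standalone. This is not the regression test itself; the Event below stands in for the secondary thread that the bug leaves paused, and the bounded `join` exists only so the sketch terminates.

```python
# Standalone illustration of why a regression surfaces as a timeout: a
# never-resumed thread never finishes, so join() blocks. The Event stands
# in for the paused secondary thread; this is not the actual test code.
import threading

resumed = threading.Event()
t = threading.Thread(target=resumed.wait)
t.start()

# Broken case: the thread was never "resumed"; a bounded join shows it
# still alive. The real test joins without a timeout, so the hang surfaces
# as the test runner's overall timeout rather than an assertion failure.
t.join(timeout=0.2)
print("blocked:", t.is_alive())  # blocked: True

# Fixed case: resuming the thread lets join() complete promptly.
resumed.set()
t.join()
print("blocked:", t.is_alive())  # blocked: False
```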

…uest

Most DAP clients omit the optional singleThread field entirely. Using
getattr(arguments, 'singleThread', False) is more defensive than direct
attribute access, guarding against any non-standard arguments objects.

Co-authored-by: rchiodo <19672699+rchiodo@users.noreply.github.com>
…onvention

The 'TEST SUCEEDED' misspelling is an intentional convention in the pydevd
test framework (debugger_unittest.py checks stdout for this exact string).
Revert the resource file to preserve the convention while still adding the
explanatory comment requested in the test method.

Co-authored-by: rchiodo <19672699+rchiodo@users.noreply.github.com>
Contributor

@rchiodo rchiodo left a comment


Approved via Review Center.

@rchiodo
Contributor

rchiodo commented Mar 11, 2026

/azp run

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@rchiodo rchiodo marked this pull request as ready for review March 12, 2026 17:53
@rchiodo rchiodo requested a review from a team as a code owner March 12, 2026 17:53
@rchiodo rchiodo merged commit fb467d8 into main Mar 13, 2026
25 of 28 checks passed