Fix debugger stepping actions in forked process

Fix the debugger stepping state when debugging a process that has been
forked from the main process. The new sys.monitoring mechanism didn't
fully clear the thread-local storage after a fork, leaving the forked
child process tracking the wrong thread information so that it never
received the latest continue action.
Jordan Borean 2025-07-08 10:08:59 +10:00
parent b387710b7f
commit 00cf186452
No known key found for this signature in database
GPG key ID: 2AAC89085FBBDAB5
6 changed files with 4804 additions and 4624 deletions


@@ -108,21 +108,21 @@ Pydevd (at src/debugpy/_vendored/pydevd) is a subrepo of https://github.com/fabi
In order to update the source, you would:
- git checkout -b "branch name"
- python subrepo.py pull
- git push
- Fix any debugpy tests that are failing as a result of the pull
- Create a PR from your branch
You might need to regenerate the Cython modules after any changes. This can be done by:
- Install the latest Python (3.12 as of this writing)
- pip install cython, django>=1.9, setuptools>=0.9, wheel>0.21, twine
- pip install cython 'django>=1.9' 'setuptools>=0.9' 'wheel>0.21' twine
- On a Windows machine:
- set FORCE_PYDEVD_VC_VARS=C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Auxiliary\Build\vcvars64.bat
- in the pydevd folder: python .\build_tools\build.py
## Pushing pydevd back to PyDev.Debugger
If you've made changes to pydevd (at src/debugpy/_vendored/pydevd), you'll want to push them back to pydevd so that, as Fabio makes changes to pydevd, we can continue to share updates.
To do this, you would:
@@ -148,13 +148,13 @@ You run all of the tests with (from the root folder):
- python -m pytest -n auto -rfE
That will run all of the tests in parallel and output any failures.
If you want to just see failures you can do this:
- python -m pytest -n auto -q
That should generate output that just lists the tests which failed.
```
=============================================== short test summary info ===============================================
@@ -167,7 +167,7 @@ With that you can then run individual tests like so:
- python -m pytest -n auto tests_python/test_debugger.py::test_path_translation[False]
That will generate a log from the test run.
Logging the test output can be tricky, so here's some information on how to debug the tests.
@@ -194,7 +194,7 @@ Make sure if you add this in a module that gets `cythonized`, that you turn off
#### How to use logs to debug failures
Investigating log failures can be done in multiple ways.
If you have an existing test failing, you can investigate it by running the test with the main branch and comparing the results. To do so you would:
@@ -238,7 +238,7 @@ Breakpoint command
0.00s - Received command: CMD_SET_BREAK 111 3 1 python-line C:\Users\rchiodo\source\repos\PyDev.Debugger\tests_python\resources\_debugger_case_remove_breakpoint.py 7 None None None
```
In order to investigate a failure you'd look for the CMDs you expect and then see where the CMDs deviate. At that point you'd add logging around what might have happened next.
## Using modified debugpy in Visual Studio Code
To test integration between debugpy and Visual Studio Code, the latter can be directed to use a custom version of debugpy in lieu of the one bundled with the Python extension. This is done by specifying `"debugAdapterPath"` in `launch.json` - it must point at the root directory of the *package*, which is `src/debugpy` inside the repository:
@@ -257,7 +257,7 @@ https://github.com/microsoft/debugpy/wiki/Enable-debugger-logs
## Debugging native code (Windows)
To debug the native components of `debugpy`, such as `attach.cpp`, you can use Visual Studio's native debugging feature.
Follow these steps to set up native debugging in Visual Studio:

File diff suppressed because it is too large


@@ -257,7 +257,7 @@ class ThreadInfo:
self.additional_info = additional_info
self.trace = trace
self._use_is_stopped = hasattr(thread, '_is_stopped')
# fmt: off
# IFDEF CYTHON
# cdef bint is_thread_alive(self):
@@ -755,6 +755,16 @@ def enable_code_tracing(thread_ident: Optional[int], code, frame) -> bool:
return _enable_code_tracing(py_db, additional_info, func_code_info, code, frame, False)
# fmt: off
# IFDEF CYTHON
# cpdef reset_thread_local_info():
# ELSE
def reset_thread_local_info():
# ENDIF
# fmt: on
"""Resets the thread local info TLS store for use after a fork()."""
global _thread_local_info
_thread_local_info = threading.local()
# fmt: off
# IFDEF CYTHON
@@ -942,7 +952,7 @@ def _raise_event(code, instruction, exc):
thread_info = _get_thread_info(True, 1)
if thread_info is None:
return
py_db: object = GlobalDebuggerHolder.global_dbg
if py_db is None or py_db.pydb_disposed:
return
@@ -1085,12 +1095,12 @@ def _return_event(code, instruction, retval):
if func_code_info.plugin_return_stepping:
_plugin_stepping(py_db, step_cmd, "return", frame, thread_info)
return
if info.pydev_state == STATE_SUSPEND:
# We're already suspended, don't handle any more events on this thread.
_do_wait_suspend(py_db, thread_info, frame, "return", None)
return
# Python line stepping
stop_frame = info.pydev_step_stop
if step_cmd in (CMD_STEP_INTO, CMD_STEP_INTO_MY_CODE, CMD_STEP_INTO_COROUTINE):
@@ -1453,7 +1463,7 @@ def _line_event(code, line):
# For thread-related stuff we can't disable the code tracing because other
# threads may still want it...
return
func_code_info: FuncCodeInfo = _get_func_code_info(code, 1)
if func_code_info.always_skip_code or func_code_info.always_filtered_out:
return monitor.DISABLE
@@ -1887,7 +1897,7 @@ def update_monitor_events(suspend_requested: Optional[bool]=None) -> None:
monitor.register_callback(DEBUGGER_ID, monitor.events.LINE, _line_event)
if not IS_PY313_OR_GREATER:
# In Python 3.13+ jump_events aren't necessary as we have a line_event for every
# jump location.
monitor.register_callback(DEBUGGER_ID, monitor.events.JUMP, _jump_event)
monitor.register_callback(DEBUGGER_ID, monitor.events.PY_RETURN, _return_event)
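The registration calls above map onto the real `sys.monitoring` API (new in Python 3.12). A minimal sketch, assuming a clean interpreter where the reserved `DEBUGGER_ID` slot is free; the callback body is a stand-in, not pydevd's `_line_event`:

```python
import sys

if hasattr(sys, "monitoring"):  # sys.monitoring requires Python 3.12+
    mon = sys.monitoring
    TOOL_ID = mon.DEBUGGER_ID  # pre-assigned tool slot for debuggers

    def demo_line_event(code, line):
        # Returning DISABLE turns off further LINE events for this
        # specific code location, which is how pydevd keeps overhead low.
        return mon.DISABLE

    mon.use_tool_id(TOOL_ID, "demo-debugger")
    mon.register_callback(TOOL_ID, mon.events.LINE, demo_line_event)
    mon.set_events(TOOL_ID, mon.events.LINE)

    # Tear down: stop events, drop the callback, release the slot.
    mon.set_events(TOOL_ID, mon.events.NO_EVENTS)
    mon.register_callback(TOOL_ID, mon.events.LINE, None)
    mon.free_tool_id(TOOL_ID)
```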


@@ -263,7 +263,7 @@ cdef class ThreadInfo:
self.additional_info = additional_info
self.trace = trace
self._use_is_stopped = hasattr(thread, '_is_stopped')
# fmt: off
# IFDEF CYTHON -- DONT EDIT THIS FILE (it is automatically generated)
cdef bint is_thread_alive(self):
@@ -761,6 +761,16 @@ cpdef enable_code_tracing(unsigned long thread_ident, code, frame):
return _enable_code_tracing(py_db, additional_info, func_code_info, code, frame, False)
# fmt: off
# IFDEF CYTHON -- DONT EDIT THIS FILE (it is automatically generated)
cpdef reset_thread_local_info():
# ELSE
# def reset_thread_local_info():
# ENDIF
# fmt: on
"""Resets the thread local info TLS store for use after a fork()."""
global _thread_local_info
_thread_local_info = threading.local()
# fmt: off
# IFDEF CYTHON -- DONT EDIT THIS FILE (it is automatically generated)
@@ -948,7 +958,7 @@ cdef _raise_event(code, instruction, exc):
thread_info = _get_thread_info(True, 1)
if thread_info is None:
return
py_db: object = GlobalDebuggerHolder.global_dbg
if py_db is None or py_db.pydb_disposed:
return
@@ -1091,12 +1101,12 @@ cdef _return_event(code, instruction, retval):
if func_code_info.plugin_return_stepping:
_plugin_stepping(py_db, step_cmd, "return", frame, thread_info)
return
if info.pydev_state == STATE_SUSPEND:
# We're already suspended, don't handle any more events on this thread.
_do_wait_suspend(py_db, thread_info, frame, "return", None)
return
# Python line stepping
stop_frame = info.pydev_step_stop
if step_cmd in (CMD_STEP_INTO, CMD_STEP_INTO_MY_CODE, CMD_STEP_INTO_COROUTINE):
@@ -1459,7 +1469,7 @@ cdef _line_event(code, int line):
# For thread-related stuff we can't disable the code tracing because other
# threads may still want it...
return
func_code_info: FuncCodeInfo = _get_func_code_info(code, 1)
if func_code_info.always_skip_code or func_code_info.always_filtered_out:
return monitor.DISABLE
@@ -1893,7 +1903,7 @@ def update_monitor_events(suspend_requested: Optional[bool]=None) -> None:
monitor.register_callback(DEBUGGER_ID, monitor.events.LINE, _line_event)
if not IS_PY313_OR_GREATER:
# In Python 3.13+ jump_events aren't necessary as we have a line_event for every
# jump location.
monitor.register_callback(DEBUGGER_ID, monitor.events.JUMP, _jump_event)
monitor.register_callback(DEBUGGER_ID, monitor.events.PY_RETURN, _return_event)


@@ -3354,6 +3354,9 @@ def settrace_forked(setup_tracing=True):
if clear_thread_local_info is not None:
clear_thread_local_info()
if PYDEVD_USE_SYS_MONITORING:
pydevd_sys_monitoring.reset_thread_local_info()
settrace(
host,
port=port,