Compare commits


No commits in common. "main" and "v1.8.12" have entirely different histories.

59 changed files with 3696 additions and 4552 deletions

.github/CODEOWNERS

@ -4,4 +4,4 @@
# in the repository, i.e. bar/baz will match /bar/baz and /foo/bar/baz.
# The default owners for everything that is not overridden by specific patterns below.
* @microsoft/pyrx-admins
* @microsoft/debugpy-CodeReviewers


@ -24,7 +24,7 @@ jobs:
owner: context.repo.owner,
repo: context.repo.repo
});
return (issue.data.assignees && issue.data.assignees[0].login) || '';
return issue.data.assignees[0].login || '';
- name: Dump last assigned
env:
LAST_ASSIGNED: ${{ steps.assigned.outputs.result }}
@ -32,7 +32,7 @@ jobs:
- uses: lee-dohm/team-rotation@v1
with:
last: ${{ steps.assigned.outputs.result }}
include: bschnurr heejaechang StellaHuang95 rchiodo gramster
include: AdamYoblick bschnurr debonte heejaechang StellaHuang95 rchiodo KacieKK judej
id: rotation
- name: Dump next in rotation
env:


@ -24,7 +24,7 @@ The following tools are required to work on debugpy:
- [Black](https://black.readthedocs.io/en/stable/)
- [tox](https://tox.readthedocs.io/en/latest/)
We recommend using [Visual Studio Code](https://code.visualstudio.com/) with the [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) to work on debugpy, but it's not a requirement. A workspace file, [debugpy.code-workspace], is provided for the convenience of VSCode users, and sets it up to use the other tools listed above properly.
We recommend using [Visual Studio Code](https://code.visualstudio.com/) with the (Python extension)[https://marketplace.visualstudio.com/items?itemName=ms-python.python] to work on debugpy, but it's not a requirement. A workspace file, [debugpy.code-workspace], is provided for the convenience of VSCode users, and sets it up to use the other tools listed above properly.
Tools that are Python packages should be installed via pip corresponding to the Python 3 installation. On Windows:
```
@ -65,7 +65,7 @@ On Linux or macOS:
```
.../debugpy$ python3 -m tox
```
This will perform a full run with the default settings. A full run will run tests on Python 2.7 and 3.5-3.8, and requires all of those to be installed. If some versions are missing, or it is desired to skip them for a particular run, tox can be directed to only run tests on specific versions with `-e`. In addition, the `--develop` option can be used to skip the packaging step, running tests directly against the source code in `src/debugpy`. This should only be used when iterating on the code, and a proper run should be performed before submitting a PR. On Windows:
This will perform a full run with the default settings. A full run will run tests on Python 2.7 and 3.5-3.8, and requires all of those to be installed. If some versions are missing, or it is desired to skip them for a particular run, tox can be directed to only run tests on specific versions with `-e`. In addition, the `--developer` option can be used to skip the packaging step, running tests directly against the source code in `src/debugpy`. This should only be used when iterating on the code, and a proper run should be performed before submitting a PR. On Windows:
```
...\debugpy> py -m tox -e py27,py37 --develop
```
@ -93,8 +93,6 @@ The tests are run concurrently, and the default number of workers is 8. You can
While tox is the recommended way to run the test suite, pytest can also be invoked directly from the root of the repository. This requires packages in tests/requirements.txt to be installed first.
Using a venv created by tox in the '.tox' folder can make it easier to get the pytest configuration correct. Debugpy needs to be installed into the venv for the tests to run, so using the tox generated .venv makes that easier.
#### Keeping logs on test success
There's an internal setting `debugpy_log_passed` that if set to true will not erase the logs after a successful test run. Just search for this in the code and remove the code that deletes the logs on success.
@ -110,21 +108,21 @@ Pydevd (at src/debugpy/_vendored/pydevd) is a subrepo of https://github.com/fabi
In order to update the source, you would:
- git checkout -b "branch name"
- python subrepo.py pull
- git push
- git push
- Fix any debugpy tests that are failing as a result of the pull
- Create a PR from your branch
You might need to regenerate the Cython modules after any changes. This can be done by:
- Install Python latest (3.14 as of this writing)
- pip install cython 'django>=1.9' 'setuptools>=0.9' 'wheel>0.21' twine
- Install Python latest (3.12 as of this writing)
- pip install cython, django>=1.9, setuptools>=0.9, wheel>0.21, twine
- On a windows machine:
- set FORCE_PYDEVD_VC_VARS=C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools\VC\Auxiliary\Build\vcvars64.bat
- in the pydevd folder: python .\build_tools\build.py
## Pushing pydevd back to PyDev.Debugger
If you've made changes to pydevd (at src/debugpy/_vendored/pydevd), you'll want to push back changes to pydevd so as Fabio makes changes to pydevd we can continue to share updates.
If you've made changes to pydevd (at src/debugpy/_vendored/pydevd), you'll want to push back changes to pydevd so as Fabio makes changes to pydevd we can continue to share updates.
To do this, you would:
@ -150,13 +148,13 @@ You run all of the tests with (from the root folder):
- python -m pytest -n auto -rfE
That will run all of the tests in parallel and output any failures.
That will run all of the tests in parallel and output any failures.
If you want to just see failures you can do this:
- python -m pytest -n auto -q
That should generate output that just lists the tests which failed.
That should generate output that just lists the tests which failed.
```
=============================================== short test summary info ===============================================
@ -169,7 +167,7 @@ With that you can then run individual tests like so:
- python -m pytest -n auto tests_python/test_debugger.py::test_path_translation[False]
That will generate a log from the test run.
That will generate a log from the test run.
Logging the test output can be tricky so here's some information on how to debug the tests.
@ -196,7 +194,7 @@ Make sure if you add this in a module that gets `cythonized`, that you turn off
#### How to use logs to debug failures
Investigating log failures can be done in multiple ways.
Investigating log failures can be done in multiple ways.
If you have an existing test failing, you can investigate it by running the test with the main branch and comparing the results. To do so you would:
@ -240,7 +238,7 @@ Breakpoint command
0.00s - Received command: CMD_SET_BREAK 111 3 1 python-line C:\Users\rchiodo\source\repos\PyDev.Debugger\tests_python\resources\_debugger_case_remove_breakpoint.py 7 None None None
```
In order to investigate a failure you'd look for the CMDs you expect and then see where the CMDs deviate. At that point you'd add logging around what might have happened next.
In order to investigate a failure you'd look for the CMDs you expect and then see where the CMDs deviate. At that point you'd add logging around what might have happened next.
## Using modified debugpy in Visual Studio Code
To test integration between debugpy and Visual Studio Code, the latter can be directed to use a custom version of debugpy in lieu of the one bundled with the Python extension. This is done by specifying `"debugAdapterPath"` in `launch.json` - it must point at the root directory of the *package*, which is `src/debugpy` inside the repository:
@ -259,7 +257,7 @@ https://github.com/microsoft/debugpy/wiki/Enable-debugger-logs
## Debugging native code (Windows)
To debug the native components of `debugpy`, such as `attach.cpp`, you can use Visual Studio's native debugging feature.
To debug the native components of `debugpy`, such as `attach.cpp`, you can use Visual Studio's native debugging feature.
Follow these steps to set up native debugging in Visual Studio:


@ -135,8 +135,6 @@ stages:
python.version: 3.12
py313:
python.version: 3.13
py314:
python.version: 3.14.0-rc.2
steps:
@ -180,8 +178,6 @@ stages:
python.version: 3.12
py313:
python.version: 3.13
py314:
python.version: 3.14.0-rc.2
steps:
@ -228,8 +224,6 @@ stages:
python.version: 3.12
py313:
python.version: 3.13
py314:
python.version: 3.14.0-rc.2
steps:


@ -3,19 +3,11 @@ steps:
displayName: Setup Python packages
- pwsh: |
$raw = '$(python.version)'
if ($raw.StartsWith('pypy')) {
# For PyPy keep original pattern stripping dots after first two numeric components if needed later.
$toxEnv = 'py' + ($raw -replace '^pypy(\d+)\.(\d+).*$','$1$2')
$toxEnv = '$(python.version)'
if (-not $toxEnv.startsWith('pypy')) {
$toxEnv = 'py' + $toxEnv.Replace('.', '')
}
else {
# Extract major.minor even from prerelease like 3.14.0-rc.2 -> 3.14
$mm = [regex]::Match($raw,'^(\d+)\.(\d+)')
if (-not $mm.Success) { throw "Unable to parse python.version '$raw'" }
$toxEnv = 'py' + $mm.Groups[1].Value + $mm.Groups[2].Value
}
Write-Host "python.version raw: $raw"
Write-Host "Derived tox environment: $toxEnv"
echo 'tox environment: $toxEnv'
python -m tox -e $toxEnv -- --junitxml=$(Build.ArtifactStagingDirectory)/tests.xml --debugpy-log-dir=$(Build.ArtifactStagingDirectory)/logs tests
displayName: Run tests using tox
env:
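
As an aside for readers scanning the PowerShell above: a hedged Python re-statement of the version-string to tox-environment mapping that the longer variant of this step performs (the function name `derive_tox_env` is hypothetical, not part of the pipeline):

```
import re

def derive_tox_env(raw: str) -> str:
    """Map an Azure Pipelines python.version string to a tox env name (sketch)."""
    if raw.startswith("pypy"):
        # e.g. "pypy3.10" -> "py310"
        return "py" + re.sub(r"^pypy(\d+)\.(\d+).*$", r"\1\2", raw)
    # Extract major.minor even from prereleases, e.g. "3.14.0-rc.2" -> "py314"
    m = re.match(r"^(\d+)\.(\d+)", raw)
    if not m:
        raise ValueError(f"Unable to parse python.version {raw!r}")
    return "py" + m.group(1) + m.group(2)

assert derive_tox_env("3.14.0-rc.2") == "py314"
assert derive_tox_env("pypy3.10") == "py310"
```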


@ -3,5 +3,4 @@ steps:
inputs:
versionSpec: $(python.version)
architecture: $(architecture)
allowUnstable: true
displayName: Use Python $(python.version) $(architecture)


@ -1,7 +1,6 @@
# This script is used for building the pydevd binaries
import argparse
import os
import platform
def build_pydevd_binaries(force: bool):
os.environ["PYDEVD_USE_CYTHON"] = "yes"
@ -21,19 +20,16 @@ def build_pydevd_binaries(force: bool):
# Run the appropriate batch script to build the binaries if necessary.
pydevd_attach_to_process_root = os.path.join(pydevd_root, "pydevd_attach_to_process")
if platform.system() == "Windows":
if os.name == "nt":
if not os.path.exists(os.path.join(pydevd_attach_to_process_root, "attach_amd64.dll")) or force:
os.system(os.path.join(pydevd_attach_to_process_root, "windows", "compile_windows.bat"))
elif platform.system() == "Linux":
elif os.name == "posix":
if not os.path.exists(os.path.join(pydevd_attach_to_process_root, "attach_linux_amd64.so")) or force:
os.system(os.path.join(pydevd_attach_to_process_root, "linux_and_mac", "compile_linux.sh"))
elif platform.system() == "Darwin":
if not os.path.exists(os.path.join(pydevd_attach_to_process_root, "attach.dylib")) or force:
os.system(os.path.join(pydevd_attach_to_process_root, "linux_and_mac", "compile_mac.sh"))
else:
print("unsupported platform.system(): {}".format(platform.system()))
exit(1)
if not os.path.exists(os.path.join(pydevd_attach_to_process_root, "attach_x86_64.dylib")) or force:
os.system(os.path.join(pydevd_attach_to_process_root, "linux_and_mac", "compile_mac.sh"))
if __name__ == "__main__":
arg_parser = argparse.ArgumentParser(description="Build the pydevd binaries.")


@ -1,5 +1,5 @@
[pytest]
testpaths=tests
timeout=120
timeout=60
timeout_method=thread
addopts=-n8


@ -78,12 +78,14 @@ def override_build_py(cmds):
"Compiling pydevd Cython extension modules (set SKIP_CYTHON_BUILD=1 to omit)."
)
try:
subprocess.check_call([
sys.executable,
os.path.join(PYDEVD_ROOT, "setup_pydevd_cython.py"),
"build_ext",
"--inplace",
])
subprocess.check_call(
[
sys.executable,
os.path.join(PYDEVD_ROOT, "setup_pydevd_cython.py"),
"build_ext",
"--inplace",
]
)
except subprocess.SubprocessError:
# pydevd Cython extensions are optional performance enhancements, and debugpy is
# usable without them. Thus, we want to ignore build errors here by default, so
@ -168,8 +170,6 @@ if __name__ == "__main__":
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Topic :: Software Development :: Debuggers",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacOS",
@ -186,7 +186,7 @@ if __name__ == "__main__":
"debugpy._vendored",
],
package_data={
"debugpy": ["ThirdPartyNotices.txt", "py.typed"],
"debugpy": ["ThirdPartyNotices.txt"],
"debugpy._vendored": [
# pydevd extensions must be built before this list can be computed properly,
# so it is populated in the overridden build_py.finalize_options().
@ -196,11 +196,6 @@ if __name__ == "__main__":
has_ext_modules=lambda: True,
cmdclass=cmds,
# allow the user to call "debugpy" instead of "python -m debugpy"
entry_points={
"console_scripts": [
"debugpy = debugpy.server.cli:main",
"debugpy-adapter = debugpy.adapter.__main__:main",
],
},
**extras,
entry_points={"console_scripts": ["debugpy = debugpy.server.cli:main"]},
**extras
)


@ -15,7 +15,6 @@ __all__ = [ # noqa
"configure",
"connect",
"debug_this_thread",
"get_cli_options",
"is_client_connected",
"listen",
"log_to",


@ -5,10 +5,6 @@ from _pydev_bundle._pydev_saved_modules import threading
# It is required to debug threads started by start_new_thread in Python 3.4
_temp = threading.Thread()
if hasattr(_temp, "_os_thread_handle"): # Python 3.14 and later has this
def is_thread_alive(t):
return not t._os_thread_handle.is_done()
if hasattr(_temp, "_handle") and hasattr(_temp, "_started"): # Python 3.13 and later has this
def is_thread_alive(t):
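
Outside the diff itself, here is a minimal sketch of the hasattr-based feature detection this module uses to pick an implementation, assuming the private Thread attributes shown in the hunk above (the final public-API fallback is a simplification of mine, not taken from either side):

```
import threading

_probe = threading.Thread()

if hasattr(_probe, "_os_thread_handle"):
    # CPython 3.14+ exposes the OS thread handle as _os_thread_handle.
    def is_thread_alive(t):
        return not t._os_thread_handle.is_done()
elif hasattr(_probe, "_handle") and hasattr(_probe, "_started"):
    # CPython 3.13 exposes a private _handle with is_done().
    def is_thread_alive(t):
        return not t._handle.is_done()
else:
    # Simplified fallback: defer to the public API.
    def is_thread_alive(t):
        return t.is_alive()
```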


@ -1022,9 +1022,6 @@ def patch_new_process_functions():
monkey_patch_os("spawnvpe", create_spawnve)
monkey_patch_os("posix_spawn", create_posix_spawn)
if not IS_WINDOWS:
monkey_patch_os("posix_spawnp", create_posix_spawn)
if not IS_JYTHON:
if not IS_WINDOWS:
monkey_patch_os("fork", create_fork)


@ -934,7 +934,7 @@ def internal_change_variable_json(py_db, request):
)
return
child_var = variable.change_variable(arguments.name, arguments.value, py_db, fmt=fmt, scope=scope)
child_var = variable.change_variable(arguments.name, arguments.value, py_db, fmt=fmt)
if child_var is None:
_write_variable_response(py_db, request, value="", success=False, message="Unable to change: %s." % (arguments.name,))


@ -155,8 +155,7 @@ class FilesFiltering(object):
# Make sure we always get at least the standard library location (based on the `os` and
# `threading` modules -- it's a bit weird that it may be different on the ci, but it happens).
if hasattr(os, "__file__"):
roots.append(os.path.dirname(os.__file__))
roots.append(os.path.dirname(os.__file__))
roots.append(os.path.dirname(threading.__file__))
if IS_PYPY:
# On PyPy 3.6 (7.3.1) it wrongly says that sysconfig.get_path('stdlib') is


@ -77,10 +77,7 @@ class NetCommand(_BaseNetCommand):
as_dict["pydevd_cmd_id"] = cmd_id
as_dict["seq"] = seq
self.as_dict = as_dict
try:
text = json.dumps(as_dict)
except TypeError:
text = json.dumps(as_dict, default=str)
text = json.dumps(as_dict)
assert isinstance(text, str)
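
One side of this hunk keeps a serialization fallback; a hedged sketch of that pattern (retry with default=str when a payload value is not JSON-serializable), with an illustrative payload:

```
import json
import pathlib

def dump_payload(as_dict):
    # Mirrors the try/except variant shown above; sketch only.
    try:
        return json.dumps(as_dict)
    except TypeError:
        # Stringify values json cannot handle instead of failing the whole command.
        return json.dumps(as_dict, default=str)

print(dump_payload({"seq": 1, "path": pathlib.Path("/tmp")}))
```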


@ -199,9 +199,9 @@ class PluginManager(object):
return None
def change_variable(self, frame, attr, expression, scope=None):
def change_variable(self, frame, attr, expression):
for plugin in self.active_plugins:
ret = plugin.change_variable(frame, attr, expression, self.EMPTY_SENTINEL, scope)
ret = plugin.change_variable(frame, attr, expression, self.EMPTY_SENTINEL)
if ret is not self.EMPTY_SENTINEL:
return ret


@ -200,7 +200,7 @@ class _ObjectVariable(_AbstractVariable):
return children_variables
def change_variable(self, name, value, py_db, fmt=None, scope: Optional[ScopeRequest]=None):
def change_variable(self, name, value, py_db, fmt=None):
children_variable = self.get_child_variable_named(name)
if children_variable is None:
return None
@ -255,10 +255,12 @@ class _FrameVariable(_AbstractVariable):
self._register_variable = register_variable
self._register_variable(self)
def change_variable(self, name, value, py_db, fmt=None, scope: Optional[ScopeRequest]=None):
def change_variable(self, name, value, py_db, fmt=None):
frame = self.frame
pydevd_vars.change_attr_expression(frame, name, value, py_db, scope=scope)
return self.get_child_variable_named(name, fmt=fmt, scope=scope)
pydevd_vars.change_attr_expression(frame, name, value, py_db)
return self.get_child_variable_named(name, fmt=fmt)
@silence_warnings_decorator
@overrides(_AbstractVariable.get_children_variables)


@ -15,12 +15,11 @@ import sys # @Reimport
from _pydev_bundle._pydev_saved_modules import threading
from _pydevd_bundle import pydevd_save_locals, pydevd_timeout, pydevd_constants
from _pydev_bundle.pydev_imports import Exec, execfile
from _pydevd_bundle.pydevd_utils import to_string, ScopeRequest
from _pydevd_bundle.pydevd_utils import to_string
import inspect
from _pydevd_bundle.pydevd_daemon_thread import PyDBDaemonThread
from _pydevd_bundle.pydevd_save_locals import update_globals_and_locals
from functools import lru_cache
from typing import Optional
SENTINEL_VALUE = []
@ -596,15 +595,11 @@ def evaluate_expression(py_db, frame, expression, is_exec):
del frame
def change_attr_expression(frame, attr, expression, dbg, value=SENTINEL_VALUE, /, scope: Optional[ScopeRequest]=None):
def change_attr_expression(frame, attr, expression, dbg, value=SENTINEL_VALUE):
"""Changes some attribute in a given frame."""
if frame is None:
return
if scope is not None:
assert isinstance(scope, ScopeRequest)
scope = scope.scope
try:
expression = expression.replace("@LINE@", "\n")
@ -613,15 +608,13 @@ def change_attr_expression(frame, attr, expression, dbg, value=SENTINEL_VALUE, /
if result is not dbg.plugin.EMPTY_SENTINEL:
return result
if attr[:7] == "Globals" or scope == "globals":
attr = attr[8:] if attr.startswith("Globals") else attr
if attr[:7] == "Globals":
attr = attr[8:]
if attr in frame.f_globals:
if value is SENTINEL_VALUE:
value = eval(expression, frame.f_globals, frame.f_locals)
frame.f_globals[attr] = value
return frame.f_globals[attr]
else:
raise VariableError("Attribute %s not found in globals" % attr)
else:
if "." not in attr: # i.e.: if we have a '.', we're changing some attribute of a local var.
if pydevd_save_locals.is_save_locals_available():
@ -638,9 +631,8 @@ def change_attr_expression(frame, attr, expression, dbg, value=SENTINEL_VALUE, /
Exec("%s=%s" % (attr, expression), frame.f_globals, frame.f_locals)
return result
except Exception as e:
pydev_log.exception(e)
except Exception:
pydev_log.exception()
MAXIMUM_ARRAY_SIZE = 100


@ -13,7 +13,6 @@ from typing import Dict, Optional, Tuple, Any
from os.path import basename, splitext
from _pydev_bundle import pydev_log
from _pydev_bundle.pydev_is_thread_alive import is_thread_alive as pydevd_is_thread_alive
from _pydevd_bundle import pydevd_dont_trace
from _pydevd_bundle.pydevd_constants import (
IS_PY313_OR_GREATER,
@ -268,7 +267,7 @@ class ThreadInfo:
if self._use_is_stopped:
return not self.thread._is_stopped
else:
return pydevd_is_thread_alive(self.thread)
return not self.thread._handle.is_done()
class _DeleteDummyThreadOnDel:
@ -755,16 +754,6 @@ def enable_code_tracing(thread_ident: Optional[int], code, frame) -> bool:
return _enable_code_tracing(py_db, additional_info, func_code_info, code, frame, False)
# fmt: off
# IFDEF CYTHON
# cpdef reset_thread_local_info():
# ELSE
def reset_thread_local_info():
# ENDIF
# fmt: on
"""Resets the thread local info TLS store for use after a fork()."""
global _thread_local_info
_thread_local_info = threading.local()
# fmt: off
# IFDEF CYTHON


@ -19,7 +19,6 @@ from typing import Dict, Optional, Tuple, Any
from os.path import basename, splitext
from _pydev_bundle import pydev_log
from _pydev_bundle.pydev_is_thread_alive import is_thread_alive as pydevd_is_thread_alive
from _pydevd_bundle import pydevd_dont_trace
from _pydevd_bundle.pydevd_constants import (
IS_PY313_OR_GREATER,
@ -274,7 +273,7 @@ cdef class ThreadInfo:
if self._use_is_stopped:
return not self.thread._is_stopped
else:
return pydevd_is_thread_alive(self.thread)
return not self.thread._handle.is_done()
class _DeleteDummyThreadOnDel:
@ -761,16 +760,6 @@ cpdef enable_code_tracing(unsigned long thread_ident, code, frame):
return _enable_code_tracing(py_db, additional_info, func_code_info, code, frame, False)
# fmt: off
# IFDEF CYTHON -- DONT EDIT THIS FILE (it is automatically generated)
cpdef reset_thread_local_info():
# ELSE
# def reset_thread_local_info():
# ENDIF
# fmt: on
"""Resets the thread local info TLS store for use after a fork()."""
global _thread_local_info
_thread_local_info = threading.local()
# fmt: off
# IFDEF CYTHON -- DONT EDIT THIS FILE (it is automatically generated)


@ -726,8 +726,6 @@ class PyDB(object):
self._local_thread_trace_func = threading.local()
self._client_socket = None
self._server_socket_ready_event = ThreadingEvent()
self._server_socket_name = None
@ -1506,7 +1504,6 @@ class PyDB(object):
def connect(self, host, port):
if host:
s = start_client(host, port)
self._client_socket = s
else:
s = start_server(port)
@ -2554,10 +2551,6 @@ class PyDB(object):
except:
pass
finally:
if self._client_socket:
self._client_socket.close()
self._client_socket = None
pydev_log.debug("PyDB.dispose_and_kill_all_pydevd_threads: finished")
def prepare_to_run(self):
@ -2947,7 +2940,6 @@ def settrace(
client_access_token=None,
notify_stdin=True,
protocol=None,
ppid=0,
**kwargs,
):
"""Sets the tracing function with the pydev debug function and initializes needed facilities.
@ -3007,11 +2999,6 @@ def settrace(
When using in Eclipse the protocol should not be passed, but when used in VSCode
or some other IDE/editor that accepts the Debug Adapter Protocol then 'dap' should
be passed.
:param ppid:
Override the parent process id (PPID) for the current debugging session. This PPID is
reported to the debug client (IDE) and can be used to act like a child process of an
existing debugged process without being a child process.
"""
if protocol and protocol.lower() == "dap":
pydevd_defaults.PydevdCustomization.DEFAULT_PROTOCOL = pydevd_constants.HTTP_JSON_PROTOCOL
@ -3040,7 +3027,6 @@ def settrace(
client_access_token,
__setup_holder__=__setup_holder__,
notify_stdin=notify_stdin,
ppid=ppid,
)
@ -3064,7 +3050,6 @@ def _locked_settrace(
client_access_token,
__setup_holder__,
notify_stdin,
ppid,
):
if patch_multiprocessing:
try:
@ -3096,7 +3081,6 @@ def _locked_settrace(
"port": int(port),
"multiprocess": patch_multiprocessing,
"skip-notify-stdin": not notify_stdin,
pydevd_constants.ARGUMENT_PPID: ppid,
}
SetupHolder.setup = setup
@ -3354,9 +3338,6 @@ def settrace_forked(setup_tracing=True):
if clear_thread_local_info is not None:
clear_thread_local_info()
if PYDEVD_USE_SYS_MONITORING:
pydevd_sys_monitoring.reset_thread_local_info()
settrace(
host,
port=port,
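
For context on the `ppid` argument threaded through this file, a hedged usage sketch of the settrace call on the side of the diff that accepts it (host, port, and PID values are illustrative):

```
import os
import pydevd

# Connect back to a listening IDE/adapter and report a chosen parent process id,
# so this process is grouped under an existing debugged session.
pydevd.settrace(
    host="127.0.0.1",
    port=5678,
    protocol="dap",      # Debug Adapter Protocol framing, per the docstring above
    ppid=os.getppid(),   # only on the side of the diff that defines this parameter
)
```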


@ -183,7 +183,7 @@ def get_target_filename(is_target_process_64=None, prefix=None, extension=None):
print("Unable to attach to process in platform: %s", sys.platform)
return None
if arch.lower() not in ("arm64", "amd64", "x86", "x86_64", "i386", "x86"):
if arch.lower() not in ("amd64", "x86", "x86_64", "i386", "x86"):
# We don't support this processor by default. Still, let's support the case where the
# user manually compiled it himself with some heuristics.
#
@ -237,11 +237,8 @@ def get_target_filename(is_target_process_64=None, prefix=None, extension=None):
if not prefix:
# Default is looking for the attach_ / attach_linux
if IS_WINDOWS: # just the extension changes
if IS_WINDOWS or IS_MAC: # just the extension changes
prefix = "attach_"
elif IS_MAC:
prefix = "attach"
suffix = ""
elif IS_LINUX:
prefix = "attach_linux_" # historically it has a different name
else:
@ -528,7 +525,7 @@ def run_python_code_mac(pid, python_code, connect_debugger_tracing=False, show_d
cmd.extend(
[
"-o 'process detach'",
"-o 'script import os; os._exit(0)'",
"-o 'script import os; os._exit(1)'",
]
)


@ -24,7 +24,6 @@ enum PythonVersion {
PythonVersion_311 = 0x030B,
PythonVersion_312 = 0x030C,
PythonVersion_313 = 0x030D,
PythonVersion_314 = 0x030E,
};
@ -79,9 +78,6 @@ static PythonVersion GetPythonVersion(void *module) {
if(version[3] == '3'){
return PythonVersion_313;
}
if(version[3] == '4'){
return PythonVersion_314;
}
}
return PythonVersion_Unknown; // we don't care about 3.1 anymore...


@ -8,4 +8,4 @@ case $ARCH in
esac
SRC="$(dirname "$0")/.."
g++ -std=c++11 -shared -fPIC -O2 -D_FORTIFY_SOURCE=2 -nostartfiles -fstack-protector-strong $SRC/linux_and_mac/attach.cpp -o $SRC/attach_linux_$SUFFIX.so
g++ -std=c++11 -shared -fPIC -nostartfiles $SRC/linux_and_mac/attach.cpp -o $SRC/attach_linux_$SUFFIX.so


@ -1,14 +1,4 @@
#!/bin/bash
set -e
SRC="$(dirname "$0")/.."
g++ -fPIC -D_REENTRANT -std=c++11 -D_FORTIFY_SOURCE=2 -arch arm64 -c -o "$SRC/attach_arm64.o" "$SRC/linux_and_mac/attach.cpp"
g++ -dynamiclib -nostartfiles -arch arm64 -o "$SRC/attach_arm64.dylib" "$SRC/attach_arm64.o" -lc
rm "$SRC/attach_arm64.o"
g++ -fPIC -D_REENTRANT -std=c++11 -D_FORTIFY_SOURCE=2 -arch x86_64 -c -o "$SRC/attach_x86_64.o" "$SRC/linux_and_mac/attach.cpp"
g++ -dynamiclib -nostartfiles -arch x86_64 -o "$SRC/attach_x86_64.dylib" "$SRC/attach_x86_64.o" -lc
rm "$SRC/attach_x86_64.o"
lipo -create "$SRC/attach_arm64.dylib" "$SRC/attach_x86_64.dylib" -output "$SRC/attach.dylib"
rm "$SRC/attach_arm64.dylib" "$SRC/attach_x86_64.dylib"
g++ -fPIC -D_REENTRANT -std=c++11 -arch x86_64 -c $SRC/linux_and_mac/attach.cpp -o $SRC/attach_x86_64.o
g++ -dynamiclib -nostartfiles -arch x86_64 -lc $SRC/attach_x86_64.o -o $SRC/attach_x86_64.dylib


@ -5,6 +5,6 @@
:: [wsl2]
:: kernelCommandLine = vsyscall=emulate
docker run --rm -v %~dp0/..:/src quay.io/pypa/manylinux1_x86_64 g++ -std=c++11 -D_FORTIFY_SOURCE=2 -shared -o /src/attach_linux_amd64.so -fPIC -nostartfiles /src/linux_and_mac/attach.cpp
docker run --rm -v %~dp0/..:/src quay.io/pypa/manylinux1_x86_64 g++ -std=c++11 -shared -o /src/attach_linux_amd64.so -fPIC -nostartfiles /src/linux_and_mac/attach.cpp
docker run --rm -v %~dp0/..:/src quay.io/pypa/manylinux1_i686 g++ -std=c++11 -D_FORTIFY_SOURCE=2 -shared -o /src/attach_linux_x86.so -fPIC -nostartfiles /src/linux_and_mac/attach.cpp
docker run --rm -v %~dp0/..:/src quay.io/pypa/manylinux1_i686 g++ -std=c++11 -shared -o /src/attach_linux_x86.so -fPIC -nostartfiles /src/linux_and_mac/attach.cpp


@ -88,14 +88,7 @@ def _get_library_dir():
break
if library_dir is None or not os_path_exists(library_dir):
if hasattr(os, "__file__"):
# "os" is a frozen import an thus "os.__file__" is not always set.
# See https://github.com/python/cpython/pull/28656
library_dir = os.path.dirname(os.__file__)
else:
# "threading" is not a frozen import an thus "threading.__file__" is always set.
import threading
library_dir = os.path.dirname(threading.__file__)
library_dir = os.path.dirname(os.__file__)
return library_dir


@ -427,10 +427,10 @@ class DjangoTemplateSyntaxErrorFrame(object):
self.f_trace = None
def change_variable(frame, attr, expression, default, scope=None):
def change_variable(frame, attr, expression, default):
if isinstance(frame, DjangoTemplateFrame):
result = eval(expression, frame.f_globals, frame.f_locals)
frame._change_variable(attr, result, scope=scope)
frame._change_variable(attr, result)
return result
return default


@ -249,10 +249,10 @@ class Jinja2TemplateSyntaxErrorFrame(object):
self.f_trace = None
def change_variable(frame, attr, expression, default, scope=None):
def change_variable(frame, attr, expression, default):
if isinstance(frame, Jinja2TemplateFrame):
result = eval(expression, frame.f_globals, frame.f_locals)
frame._change_variable(frame.f_back, attr, result, scope=scope)
frame._change_variable(frame.f_back, attr, result)
return result
return default


@ -201,7 +201,7 @@ def get_python_helper_lib_filename():
pydev_log.info("Unable to set trace to all threads in platform: %s", sys.platform)
return None
if arch.lower() not in ("arm64", "amd64", "x86", "x86_64", "i386", "x86"):
if arch.lower() not in ("amd64", "x86", "x86_64", "i386", "x86"):
# We don't support this processor by default. Still, let's support the case where the
# user manually compiled it himself with some heuristics.
#
@ -250,11 +250,8 @@ def get_python_helper_lib_filename():
else:
suffix = suffix_32
if IS_WINDOWS: # just the extension changes
if IS_WINDOWS or IS_MAC: # just the extension changes
prefix = "attach_"
elif IS_MAC:
prefix = "attach"
suffix = ""
elif IS_LINUX: #
prefix = "attach_linux_" # historically it has a different name
else:


@ -170,8 +170,6 @@ try:
# uncomment to generate pdbs for visual studio.
# extra_compile_args=["-Zi", "/Od"]
# extra_link_args=["-debug"]
extra_compile_args = ["/guard:cf"]
extra_link_args = ["/guard:cf", "/DYNAMICBASE"]
kwargs = {}
if extra_link_args:


@ -207,8 +207,6 @@ def build_extension(dir_name, extension_name, target_pydevd_name, force_cython,
# uncomment to generate pdbs for visual studio.
# extra_compile_args=["-Zi", "/Od"]
# extra_link_args=["-debug"]
extra_compile_args = ["/guard:cf"]
extra_link_args = ["/guard:cf", "/DYNAMICBASE"]
if IS_PY311_ONWARDS:
# On py311 we need to add the CPython include folder to the include path.
extra_compile_args.append("-I%s\\include\\CPython" % sys.exec_prefix)


@ -9,5 +9,4 @@ class SomeClass(object):
if __name__ == '__main__':
SomeClass().method()
print('second breakpoint')
print('TEST SUCEEDED')


@ -5931,28 +5931,13 @@ def test_send_json_message(case_setup_dap):
def test_global_scope(case_setup_dap):
with case_setup_dap.test_file("_debugger_case_globals.py") as writer:
json_facade = JsonFacade(writer)
break1 = writer.get_line_index_with_content("breakpoint here")
break2 = writer.get_line_index_with_content("second breakpoint")
json_facade.write_set_breakpoints([break1, break2])
json_facade.write_set_breakpoints(writer.get_line_index_with_content("breakpoint here"))
json_facade.write_make_initial_run()
json_hit = json_facade.wait_for_thread_stopped()
local_var = json_facade.get_global_var(json_hit.frame_id, "in_global_scope")
assert local_var.value == "'in_global_scope_value'"
scopes_request = json_facade.write_request(pydevd_schema.ScopesRequest(pydevd_schema.ScopesArguments(json_hit.frame_id)))
scopes_response = json_facade.wait_for_response(scopes_request)
assert len(scopes_response.body.scopes) == 2
assert scopes_response.body.scopes[0]["name"] == "Locals"
assert scopes_response.body.scopes[1]["name"] == "Globals"
globals_varreference = scopes_response.body.scopes[1]["variablesReference"]
json_facade.write_set_variable(globals_varreference, "in_global_scope", "'new_value'")
json_facade.write_continue()
json_hit2 = json_facade.wait_for_thread_stopped()
global_var = json_facade.get_global_var(json_hit2.frame_id, "in_global_scope")
assert global_var.value == "'new_value'"
json_facade.write_continue()
writer.finished_ok = True
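
One side of this hunk extends the test to walk the DAP scopes/setVariable round trip; a hedged sketch of the raw request payloads involved (field names follow the Debug Adapter Protocol, the numeric references are illustrative):

```
# 1. Ask for the scopes of the stopped frame; the response lists "Locals" and
#    "Globals", each with its own variablesReference.
scopes_request = {
    "command": "scopes",
    "arguments": {"frameId": 3},
}

# 2. Change a variable inside the Globals scope via its variablesReference.
set_variable_request = {
    "command": "setVariable",
    "arguments": {
        "variablesReference": 7,   # Globals reference returned in step 1
        "name": "in_global_scope",
        "value": "'new_value'",
    },
}
```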


@ -13,9 +13,7 @@ import sys
# and should be imported locally inside main() instead.
def main():
args = _parse_argv(sys.argv)
def main(args):
# If we're talking DAP over stdio, stderr is not guaranteed to be read from,
# so disable it to avoid the pipe filling and locking up. This must be done
# as early as possible, before the logging module starts writing to it.
@ -65,10 +63,9 @@ def main():
else:
endpoints["client"] = {"host": client_host, "port": client_port}
localhost = sockets.get_default_localhost()
if args.for_server is not None:
try:
server_host, server_port = servers.serve(localhost)
server_host, server_port = servers.serve()
except Exception as exc:
endpoints = {"error": "Can't listen for server connections: " + str(exc)}
else:
@ -81,11 +78,10 @@ def main():
)
try:
ipv6 = localhost.count(":") > 1
sock = sockets.create_client(ipv6)
sock = sockets.create_client()
try:
sock.settimeout(None)
sock.connect((localhost, args.for_server))
sock.connect(("127.0.0.1", args.for_server))
sock_io = sock.makefile("wb", 0)
try:
sock_io.write(json.dumps(endpoints).encode("utf-8"))
@ -139,10 +135,6 @@ def main():
def _parse_argv(argv):
from debugpy.common import sockets
host = sockets.get_default_localhost()
parser = argparse.ArgumentParser()
parser.add_argument(
@ -160,7 +152,7 @@ def _parse_argv(argv):
parser.add_argument(
"--host",
type=str,
default=host,
default="127.0.0.1",
metavar="HOST",
help="start the adapter in debugServer mode on the specified host",
)
@ -238,4 +230,4 @@ if __name__ == "__main__":
# the default "C" locale if so.
pass
main()
main(_parse_argv(sys.argv))


@ -191,7 +191,6 @@ class Client(components.Component):
"supportsSetExpression": True,
"supportsSetVariable": True,
"supportsValueFormattingOptions": True,
"supportsTerminateDebuggee": True,
"supportsTerminateRequest": True,
"supportsGotoTargetsRequest": True,
"supportsClipboardContext": True,
@ -404,8 +403,7 @@ class Client(components.Component):
self._forward_terminate_request = on_terminate == "KeyboardInterrupt"
launcher_path = request("debugLauncherPath", os.path.dirname(launcher.__file__))
localhost = sockets.get_default_localhost()
adapter_host = request("debugAdapterHost", localhost)
adapter_host = request("debugAdapterHost", "127.0.0.1")
try:
servers.serve(adapter_host)
@ -473,21 +471,20 @@ class Client(components.Component):
'"processId" and "subProcessId" are mutually exclusive'
)
localhost = sockets.get_default_localhost()
if listen != ():
if servers.is_serving():
raise request.isnt_valid(
'Multiple concurrent "listen" sessions are not supported'
)
host = listen("host", localhost)
host = listen("host", "127.0.0.1")
port = listen("port", int)
adapter.access_token = None
self.restart_requested = request("restart", False)
host, port = servers.serve(host, port)
else:
if not servers.is_serving():
servers.serve(localhost)
host, port = sockets.get_address(servers.listener)
servers.serve()
host, port = servers.listener.getsockname()
# There are four distinct possibilities here.
#
@ -702,17 +699,11 @@ class Client(components.Component):
except Exception:
log.swallow_exception()
# Close the client channel since we disconnected from the client.
try:
self.channel.close()
except Exception:
log.swallow_exception(level="warning")
def disconnect(self):
super().disconnect()
def report_sockets(self):
socks = [
sockets = [
{
"host": host,
"port": port,
@ -720,12 +711,12 @@ class Client(components.Component):
}
for listener in [clients.listener, launchers.listener, servers.listener]
if listener is not None
for (host, port) in [sockets.get_address(listener)]
for (host, port) in [listener.getsockname()]
]
self.channel.send_event(
"debugpySockets",
{
"sockets": socks
"sockets": sockets
},
)
@ -761,11 +752,10 @@ class Client(components.Component):
if "connect" not in body:
body["connect"] = {}
if "host" not in body["connect"]:
localhost = sockets.get_default_localhost()
body["connect"]["host"] = host or localhost
body["connect"]["host"] = host if host is not None else "127.0.0.1"
if "port" not in body["connect"]:
if port is None:
_, port = sockets.get_address(listener)
_, port = listener.getsockname()
body["connect"]["port"] = port
if self.capabilities["supportsStartDebuggingRequest"]:
@ -782,7 +772,7 @@ def serve(host, port):
global listener
listener = sockets.serve("Client", Client, host, port)
sessions.report_sockets()
return sockets.get_address(listener)
return listener.getsockname()
def stop_serving():


@ -89,7 +89,7 @@ def spawn_debuggee(
arguments = dict(start_request.arguments)
if not session.no_debug:
_, arguments["port"] = sockets.get_address(servers.listener)
_, arguments["port"] = servers.listener.getsockname()
arguments["adapterAccessToken"] = adapter.access_token
def on_launcher_connected(sock):
@ -108,11 +108,10 @@ def spawn_debuggee(
sessions.report_sockets()
try:
launcher_host, launcher_port = sockets.get_address(listener)
localhost = sockets.get_default_localhost()
launcher_host, launcher_port = listener.getsockname()
launcher_addr = (
launcher_port
if launcher_host == localhost
if launcher_host == "127.0.0.1"
else f"{launcher_host}:{launcher_port}"
)
cmdline += [str(launcher_addr), "--"]
@ -153,24 +152,6 @@ def spawn_debuggee(
request_args["cwd"] = cwd
if shell_expand_args:
request_args["argsCanBeInterpretedByShell"] = True
# VS Code debugger extension may pass us an argument indicating the
# quoting character to use in the terminal. Otherwise default based on platform.
default_quote = '"' if os.name != "nt" else "'"
quote_char = arguments["terminalQuoteCharacter"] if "terminalQuoteCharacter" in arguments else default_quote
# VS code doesn't quote arguments if `argsCanBeInterpretedByShell` is true,
# so we need to do it ourselves for the arguments up to the first argument passed to
# debugpy (this should be the python file to run).
args = request_args["args"]
for i in range(len(args)):
s = args[i]
if " " in s and not ((s.startswith('"') and s.endswith('"')) or (s.startswith("'") and s.endswith("'"))):
s = f"{quote_char}{s}{quote_char}"
args[i] = s
if i > 0 and args[i-1] == "--":
break
try:
# It is unspecified whether this request receives a response immediately, or only
# after the spawned command has completed running, so do not block waiting for it.


@ -395,7 +395,7 @@ def serve(host="127.0.0.1", port=0):
global listener
listener = sockets.serve("Server", Connection, host, port)
sessions.report_sockets()
return sockets.get_address(listener)
return listener.getsockname()
def is_serving():
@ -475,7 +475,7 @@ def dont_wait_for_first_connection():
def inject(pid, debugpy_args, on_output):
host, port = sockets.get_address(listener)
host, port = listener.getsockname()
cmdline = [
sys.executable,


@ -9,68 +9,18 @@ import threading
from debugpy.common import log
from debugpy.common.util import hide_thread_from_debugger
def can_bind_ipv4_localhost():
"""Check if we can bind to IPv4 localhost."""
try:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Try to bind to IPv4 localhost on port 0 (any available port)
sock.bind(("127.0.0.1", 0))
sock.close()
return True
except (socket.error, OSError, AttributeError):
return False
def can_bind_ipv6_localhost():
"""Check if we can bind to IPv6 localhost."""
try:
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Try to bind to IPv6 localhost on port 0 (any available port)
sock.bind(("::1", 0))
sock.close()
return True
except (socket.error, OSError, AttributeError):
return False
def get_default_localhost():
"""Get the default localhost address.
Defaults to IPv4 '127.0.0.1', but falls back to IPv6 '::1' if IPv4 is unavailable.
"""
# First try IPv4 (preferred default)
if can_bind_ipv4_localhost():
return "127.0.0.1"
# Fall back to IPv6 if IPv4 is not available
if can_bind_ipv6_localhost():
return "::1"
# If neither works, still return IPv4 as a last resort
# (this is a very unusual situation)
return "127.0.0.1"
def get_address(sock):
"""Gets the socket address host and port."""
try:
host, port = sock.getsockname()[:2]
except Exception as exc:
log.swallow_exception("Failed to get socket address:")
raise RuntimeError(f"Failed to get socket address: {exc}") from exc
return host, port
def create_server(host, port=0, backlog=socket.SOMAXCONN, timeout=None):
"""Return a local server socket listening on the given port."""
assert backlog > 0
if host is None:
host = get_default_localhost()
host = "127.0.0.1"
if port is None:
port = 0
ipv6 = host.count(":") > 1
try:
server = _new_sock(ipv6)
server = _new_sock()
if port != 0:
# If binding to a specific port, make sure that the user doesn't have
# to wait until the OS times out the socket to be able to use that port
@ -92,14 +42,13 @@ def create_server(host, port=0, backlog=socket.SOMAXCONN, timeout=None):
return server
def create_client(ipv6=False):
def create_client():
"""Return a client socket that may be connected to a remote address."""
return _new_sock(ipv6)
return _new_sock()
def _new_sock(ipv6=False):
address_family = socket.AF_INET6 if ipv6 else socket.AF_INET
sock = socket.socket(address_family, socket.SOCK_STREAM, socket.IPPROTO_TCP)
def _new_sock():
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP)
# Set TCP keepalive on an open socket.
# It activates after 1 second (TCP_KEEPIDLE,) of idleness,
@ -153,14 +102,13 @@ def serve(name, handler, host, port=0, backlog=socket.SOMAXCONN, timeout=None):
log.reraise_exception(
"Error listening for incoming {0} connections on {1}:{2}:", name, host, port
)
host, port = get_address(listener)
host, port = listener.getsockname()
log.info("Listening for incoming {0} connections on {1}:{2}...", name, host, port)
def accept_worker():
while True:
try:
sock, address = listener.accept()
other_host, other_port = address[:2]
sock, (other_host, other_port) = listener.accept()
except (OSError, socket.error):
# Listener socket has been closed.
break
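
A hedged usage sketch of the localhost helpers on the side of the diff that defines them (module path and function names as shown in the hunk; the port is chosen by the OS):

```
from debugpy.common import sockets

# Prefer IPv4 localhost, falling back to "::1" when IPv4 cannot be bound.
host = sockets.get_default_localhost()

# Bind an ephemeral server on that address and read back the bound endpoint.
listener = sockets.create_server(host, 0, timeout=30)
host, port = sockets.get_address(listener)
print(f"listening on {host}:{port}")

# Client sockets must match the address family of the endpoint they dial.
client = sockets.create_client(ipv6=host.count(":") > 1)
client.connect((host, port))
```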


@ -23,8 +23,7 @@ def connect(host, port):
log.info("Connecting to adapter at {0}:{1}", host, port)
ipv6 = host.count(":") > 1
sock = sockets.create_client(ipv6)
sock = sockets.create_client()
sock.connect((host, port))
adapter_host = host


@ -14,7 +14,7 @@ import sys
def main():
from debugpy import launcher
from debugpy.common import log, sockets
from debugpy.common import log
from debugpy.launcher import debuggee
log.to_file(prefix="debugpy.launcher")
@ -38,10 +38,9 @@ def main():
# The first argument specifies the host/port on which the adapter is waiting
# for launcher to connect. It's either host:port, or just port.
adapter = launcher_argv[0]
host, sep, port = adapter.rpartition(":")
host.strip("[]")
host, sep, port = adapter.partition(":")
if not sep:
host = sockets.get_default_localhost()
host = "127.0.0.1"
port = adapter
port = int(port)


@ -4,7 +4,6 @@
from __future__ import annotations
import dataclasses
import functools
import typing
@ -22,21 +21,6 @@ from debugpy import _version
Endpoint = typing.Tuple[str, int]
@dataclasses.dataclass(frozen=True)
class CliOptions:
"""Options that were passed to the debugpy CLI entry point."""
mode: typing.Literal["connect", "listen"]
target_kind: typing.Literal["file", "module", "code", "pid"]
address: Endpoint
log_to: str | None = None
log_to_stderr: bool = False
target: str | None = None
wait_for_client: bool = False
adapter_access_token: str | None = None
config: dict[str, object] = dataclasses.field(default_factory=dict)
parent_session_pid: int | None = None
def _api(cancelable=False):
def apply(f):
@functools.wraps(f)
@ -136,7 +120,7 @@ def listen(
...
@_api()
def connect(__endpoint: Endpoint | int, *, access_token: str | None = None, parent_session_pid: int | None = None) -> Endpoint:
def connect(__endpoint: Endpoint | int, *, access_token: str | None = None) -> Endpoint:
"""Tells an existing debug adapter instance that is listening on the
specified address to debug this process.
@ -147,10 +131,6 @@ def connect(__endpoint: Endpoint | int, *, access_token: str | None = None, pare
`access_token` must be the same value that was passed to the adapter
via the `--server-access-token` command-line switch.
`parent_session_pid` is the PID of the parent session to associate
with. This is useful if running in a process that is not an immediate
child of the parent process being debugged.
This function does't wait for a client to connect to the debug
adapter that it connects to. Use `wait_for_client` to block
execution until the client connects.
@ -212,37 +192,4 @@ def trace_this_thread(__should_trace: bool):
"""
def get_cli_options() -> CliOptions | None:
"""Returns the CLI options that were processed by debugpy.
These options are all the options after the CLI args and
environment variables that were processed on startup.
If the debugpy CLI entry point was not called in this process, the
returned value is None.
"""
from debugpy.server import cli
options = cli.options
if options.mode is None or options.target_kind is None or options.address is None:
# The CLI entrypoint was not called so there are no options present.
return None
# We don't return the actual options object because we don't want callers
# to be able to mutate it. Instead we use a frozen dataclass as a snapshot
# with richer type annotations.
return CliOptions(
mode=options.mode,
target_kind=options.target_kind,
address=options.address,
log_to=options.log_to,
log_to_stderr=options.log_to_stderr,
target=options.target,
wait_for_client=options.wait_for_client,
adapter_access_token=options.adapter_access_token,
config=options.config,
parent_session_pid=options.parent_session_pid,
)
__version__: str = _version.get_versions()["version"]
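
Taken together, the richer side of this diff can be exercised roughly as below (hedged sketch; the endpoint and PID are illustrative, and both `parent_session_pid` and `get_cli_options` exist only on that side):

```
import os
import debugpy

# Attach to an adapter that is already listening, associating this process
# with an existing debug session via its PID.
debugpy.connect(("127.0.0.1", 5678), parent_session_pid=os.getppid())

# If this process was started through the debugpy CLI, a read-only snapshot of
# the parsed options is available; otherwise this returns None.
options = debugpy.get_cli_options()
if options is not None:
    print(options.mode, options.address, options.target_kind)
```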


@ -100,8 +100,7 @@ def _starts_debugging(func):
_, port = address
except Exception:
port = address
localhost = sockets.get_default_localhost()
address = (localhost, port)
address = ("127.0.0.1", port)
try:
port.__index__() # ensure it's int-like
except Exception:
@ -144,8 +143,8 @@ def listen(address, settrace_kwargs, in_process_debug_adapter=False):
# Multiple calls to listen() cause the debuggee to hang
raise RuntimeError("debugpy.listen() has already been called on this process")
host, port = address
if in_process_debug_adapter:
host, port = address
log.info("Listening: pydevd without debugpy adapter: {0}:{1}", host, port)
settrace_kwargs["patch_multiprocessing"] = False
_settrace(
@ -162,14 +161,13 @@ def listen(address, settrace_kwargs, in_process_debug_adapter=False):
server_access_token = codecs.encode(os.urandom(32), "hex").decode("ascii")
try:
localhost = sockets.get_default_localhost()
endpoints_listener = sockets.create_server(localhost, 0, timeout=30)
endpoints_listener = sockets.create_server("127.0.0.1", 0, timeout=30)
except Exception as exc:
log.swallow_exception("Can't listen for adapter endpoints:")
raise RuntimeError("can't listen for adapter endpoints: " + str(exc))
try:
endpoints_host, endpoints_port = sockets.get_address(endpoints_listener)
endpoints_host, endpoints_port = endpoints_listener.getsockname()
log.info(
"Waiting for adapter endpoints on {0}:{1}...",
endpoints_host,
@ -293,9 +291,9 @@ listen.called = False
@_starts_debugging
def connect(address, settrace_kwargs, access_token=None, parent_session_pid=None):
def connect(address, settrace_kwargs, access_token=None):
host, port = address
_settrace(host=host, port=port, client_access_token=access_token, ppid=parent_session_pid or 0, **settrace_kwargs)
_settrace(host=host, port=port, client_access_token=access_token, **settrace_kwargs)
class wait_for_client:


@ -7,7 +7,7 @@ import os
import re
import sys
from importlib.util import find_spec
from typing import Any, Union, Tuple, Dict, Literal
from typing import Any, Union, Tuple, Dict
# debugpy.__main__ should have preloaded pydevd properly before importing this module.
# Otherwise, some stdlib modules above might have had imported threading before pydevd
@ -20,10 +20,9 @@ from _pydevd_bundle import pydevd_runpy as runpy
import debugpy
import debugpy.server
from debugpy.common import log, sockets
from debugpy.common import log
from debugpy.server import api
TargetKind = Literal["file", "module", "code", "pid"]
TARGET = "<filename> | -m <module> | -c <code> | --pid <pid>"
@ -35,8 +34,6 @@ Usage: debugpy --listen | --connect
[--wait-for-client]
[--configure-<name> <value>]...
[--log-to <path>] [--log-to-stderr]
[--parent-session-pid <pid>]]
[--adapter-access-token <token>]
{1}
[<arg>]...
""".format(
@ -44,18 +41,16 @@ Usage: debugpy --listen | --connect
)
# Changes here should be aligned with the public API CliOptions.
class Options(object):
mode: Union[Literal["connect", "listen"], None] = None
mode = None
address: Union[Tuple[str, int], None] = None
log_to = None
log_to_stderr = False
target: Union[str, None] = None
target_kind: Union[TargetKind, None] = None
target_kind: Union[str, None] = None
wait_for_client = False
adapter_access_token = None
config: Dict[str, Any] = {}
parent_session_pid: Union[int, None] = None
options = Options()
@ -109,10 +104,9 @@ def set_address(mode):
# It's either host:port, or just port.
value = next(it)
host, sep, port = value.rpartition(":")
host = host.strip("[]")
host, sep, port = value.partition(":")
if not sep:
host = sockets.get_default_localhost()
host = "127.0.0.1"
port = value
try:
port = int(port)
@ -148,7 +142,7 @@ def set_config(arg, it):
options.config[name] = value
def set_target(kind: TargetKind, parser=(lambda x: x), positional=False):
def set_target(kind: str, parser=(lambda x: x), positional=False):
def do(arg, it):
options.target_kind = kind
target = parser(arg if positional else next(it))
@ -184,7 +178,8 @@ switches = [
("--connect", "<address>", set_address("connect")),
("--wait-for-client", None, set_const("wait_for_client", True)),
("--configure-.+", "<value>", set_config),
("--parent-session-pid", "<pid>", set_arg("parent_session_pid", lambda x: int(x) if x else None)),
# Switches that are used internally by the client or debugpy itself.
("--adapter-access-token", "<token>", set_arg("adapter_access_token")),
# Targets. The "" entry corresponds to positional command line arguments,
@ -234,8 +229,6 @@ def parse_args():
raise ValueError("either --listen or --connect is required")
if options.adapter_access_token is not None and options.mode != "connect":
raise ValueError("--adapter-access-token requires --connect")
if options.parent_session_pid is not None and options.mode != "connect":
raise ValueError("--parent-session-pid requires --connect")
if options.target_kind == "pid" and options.wait_for_client:
raise ValueError("--pid does not support --wait-for-client")
@ -327,7 +320,7 @@ def start_debugging(argv_0):
if options.mode == "listen" and options.address is not None:
debugpy.listen(options.address)
elif options.mode == "connect" and options.address is not None:
debugpy.connect(options.address, access_token=options.adapter_access_token, parent_session_pid=options.parent_session_pid)
debugpy.connect(options.address, access_token=options.adapter_access_token)
else:
raise AssertionError(repr(options.mode))
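
One behavioral difference in this file is the address parsing; a hedged Python sketch of the rpartition-based variant, which also accepts bracketed IPv6 hosts (the helper name `parse_endpoint` is hypothetical, and the real code defaults the host via `sockets.get_default_localhost()` on one side and "127.0.0.1" on the other):

```
def parse_endpoint(value: str, default_host: str = "127.0.0.1"):
    """Split "<host>:<port>" or "<port>" into (host, port); sketch only."""
    host, sep, port = value.rpartition(":")
    host = host.strip("[]")  # allow "[::1]:8888"
    if not sep:
        host, port = default_host, value
    return host, int(port)

assert parse_endpoint("8888") == ("127.0.0.1", 8888)
assert parse_endpoint("localhost:8888") == ("localhost", 8888)
assert parse_endpoint("[::1]:8888") == ("::1", 8888)
```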


@ -25,9 +25,8 @@ class BackChannel(object):
def listen(self):
self._server_socket = sockets.create_server("127.0.0.1", 0, self.TIMEOUT)
_, self.port = sockets.get_address(self._server_socket)
_, self.port = self._server_socket.getsockname()
self._server_socket.listen(0)
log.info("{0} created server socket on port {1}", self, self.port)
def accept_worker():
log.info(
@ -68,14 +67,8 @@ class BackChannel(object):
self._established.set()
def receive(self):
log.info("{0} waiting for connection to be established...", self)
if not self._established.wait(timeout=self.TIMEOUT):
log.error("{0} timed out waiting for connection after {1} seconds", self, self.TIMEOUT)
raise TimeoutError(f"{self} timed out waiting for debuggee to connect")
log.info("{0} connection established, reading JSON...", self)
result = self._stream.read_json()
log.info("{0} received: {1}", self, result)
return result
self._established.wait()
return self._stream.read_json()
def send(self, value):
self.session.timeline.unfreeze()


@ -281,11 +281,7 @@ class Session(object):
if self.adapter_endpoints is not None and self.expected_exit_code is not None:
log.info("Waiting for {0} to close listener ports ...", self.adapter_id)
timeout_start = time.time()
while self.adapter_endpoints.check():
if time.time() - timeout_start > 10:
log.warning("{0} listener ports did not close within 10 seconds", self.adapter_id)
break
time.sleep(0.1)
if self.adapter is not None:
@ -294,20 +290,8 @@ class Session(object):
self.adapter_id,
self.adapter.pid,
)
try:
self.adapter.wait(timeout=10)
except Exception:
log.warning("{0} did not exit gracefully within 10 seconds, force-killing", self.adapter_id)
try:
self.adapter.kill()
self.adapter.wait(timeout=5)
except Exception as e:
log.error("Failed to force-kill {0}: {1}", self.adapter_id, e)
try:
watchdog.unregister_spawn(self.adapter.pid, self.adapter_id)
except Exception as e:
log.warning("Failed to unregister adapter spawn: {0}", e)
self.adapter.wait()
watchdog.unregister_spawn(self.adapter.pid, self.adapter_id)
self.adapter = None
if self.backchannel is not None:
@ -382,23 +366,9 @@ class Session(object):
return env
def _make_python_cmdline(self, exe, *args):
def normalize(s, strip_quotes=False):
# Convert py.path.local to string
if isinstance(s, py.path.local):
s = s.strpath
else:
s = str(s)
# Strip surrounding quotes if requested
if strip_quotes and len(s) >= 2 and " " in s and (s[0] == s[-1] == '"' or s[0] == s[-1] == "'"):
s = s[1:-1]
return s
# Strip quotes from exe
result = [normalize(exe, strip_quotes=True)]
for arg in args:
# Don't strip quotes on anything except the exe
result.append(normalize(arg, strip_quotes=False))
return result
return [
str(s.strpath if isinstance(s, py.path.local) else s) for s in [exe, *args]
]
def spawn_debuggee(self, args, cwd=None, exe=sys.executable, setup=None):
assert self.debuggee is None
@ -494,8 +464,7 @@ class Session(object):
self.expected_adapter_sockets["client"]["port"] = port
ipv6 = host.count(":") > 1
sock = sockets.create_client(ipv6)
sock = sockets.create_client()
sock.connect(address)
stream = messaging.JsonIOStream.from_socket(sock, name=self.adapter_id)
@ -594,78 +563,25 @@ class Session(object):
def run_in_terminal(self, args, cwd, env):
exe = args.pop(0)
if getattr(self, "_run_in_terminal_args_can_be_interpreted_by_shell", False):
exe = self._shell_unquote(exe)
args = [self._shell_unquote(a) for a in args]
self.spawn_debuggee.env.update(env)
self.spawn_debuggee(args, cwd, exe=exe)
return {}
@staticmethod
def _shell_unquote(s):
s = str(s)
if len(s) >= 2 and s[0] == s[-1] and s[0] in ("\"", "'"):
return s[1:-1]
return s
@classmethod
def _split_shell_arg_string(cls, s):
"""Split a shell argument string into args, honoring simple single/double quotes.
This is intentionally minimal: it matches how terminals remove surrounding quotes
before passing args to the spawned process, which our tests need to emulate.
"""
s = str(s)
args = []
current = []
quote = None
def flush():
if current:
args.append("".join(current))
current.clear()
for ch in s:
if quote is None:
if ch.isspace():
flush()
continue
if ch in ("\"", "'"):
quote = ch
continue
current.append(ch)
else:
if ch == quote:
quote = None
continue
current.append(ch)
flush()
return [cls._shell_unquote(a) for a in args]
def _process_request(self, request):
self.timeline.record_request(request, block=False)
if request.command == "runInTerminal":
args = request("args", json.array(str, vectorize=True))
args_can_be_interpreted_by_shell = request("argsCanBeInterpretedByShell", False)
if len(args) > 0 and args_can_be_interpreted_by_shell:
if len(args) > 0 and request("argsCanBeInterpretedByShell", False):
# The final arg is a string that contains multiple actual arguments.
# Split it like a shell would, but keep the rest of the args (including
# any quoting) intact so tests can inspect the raw runInTerminal argv.
last_arg = args.pop()
args += self._split_shell_arg_string(last_arg)
args += last_arg.split()
cwd = request("cwd", ".")
env = request("env", json.object(str))
try:
self._run_in_terminal_args_can_be_interpreted_by_shell = (
args_can_be_interpreted_by_shell
)
return self.run_in_terminal(args, cwd, env)
except Exception as exc:
log.swallow_exception('"runInTerminal" failed:')
raise request.cant_handle(str(exc))
finally:
self._run_in_terminal_args_can_be_interpreted_by_shell = False
elif request.command == "startDebugging":
pid = request("configuration", dict)("subProcessId", int)


@ -15,6 +15,7 @@ from unittest import mock
from debugpy.common import log
from tests.patterns import some
@pytest.fixture
def cli(pyfile):
@pyfile
@ -45,15 +46,11 @@ def cli(pyfile):
"target",
"target_kind",
"wait_for_client",
"parent_session_pid",
]
}
# Serialize the command line args and the options to stdout
serialized_data = pickle.dumps([sys.argv[1:], options])
os.write(1, serialized_data)
# Ensure all data is written before process exits
sys.stdout.flush()
os.write(1, pickle.dumps([sys.argv[1:], options]))
def parse(args):
log.debug("Parsing argv: {0!r}", args)
@ -61,19 +58,12 @@ def cli(pyfile):
try:
# Run the CLI parser in a subprocess, and capture its output.
output = subprocess.check_output(
[sys.executable, "-u", cli_parser.strpath] + args,
stderr=subprocess.PIPE # Capture stderr to help with debugging
[sys.executable, "-u", cli_parser.strpath] + args
)
# Deserialize the output and return the parsed argv and options.
try:
argv, options = pickle.loads(output)
except Exception as e:
log.debug("Failed to deserialize output: {0}, Output was: {1!r}", e, output)
raise
argv, options = pickle.loads(output)
except subprocess.CalledProcessError as exc:
log.debug("Process exited with code {0}. Output: {1!r}, Error: {2!r}",
exc.returncode, exc.output, exc.stderr)
raise pickle.loads(exc.output)
except EOFError:
# We may have just been shutting down. If so, return an empty argv and options.
@ -89,7 +79,7 @@ def cli(pyfile):
# Test a combination of command line switches
@pytest.mark.parametrize("target_kind", ["file", "module", "code"])
@pytest.mark.parametrize("mode", ["listen", "connect"])
@pytest.mark.parametrize("address", ["8888", "localhost:8888", "[::1]:8888"])
@pytest.mark.parametrize("address", ["8888", "localhost:8888"])
@pytest.mark.parametrize("wait_for_client", ["", "wait_for_client"])
@pytest.mark.parametrize("script_args", ["", "script_args"])
def test_targets(cli, target_kind, mode, address, wait_for_client, script_args):
@ -101,8 +91,7 @@ def test_targets(cli, target_kind, mode, address, wait_for_client, script_args):
args = ["--" + mode, address]
host, sep, port = address.rpartition(":")
host = host.strip("[]")
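# e.g. (illustrative): "[::1]:8888".rpartition(":") yields ("[::1]", ":", "8888"),
# and strip("[]") reduces the host to "::1"; rpartition is used so that the colons
# inside the IPv6 brackets are not mistaken for the host/port separator.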
host, sep, port = address.partition(":")
if sep:
expected_options["address"] = (host, int(port))
else:
@ -164,20 +153,20 @@ def test_configure_subProcess_from_environment(cli, value):
def test_unsupported_switch(cli):
with pytest.raises(ValueError) as ex:
cli(["--listen", "8888", "--xyz", "123", "spam.py"])
assert "unrecognized switch --xyz" in str(ex.value)
def test_unsupported_switch_from_environment(cli):
with pytest.raises(ValueError) as ex:
with mock.patch.dict(os.environ, {"DEBUGPY_EXTRA_ARGV": "--xyz 123"}):
cli(["--listen", "8888", "spam.py"])
assert "unrecognized switch --xyz" in str(ex.value)
def test_unsupported_configure(cli):
with pytest.raises(ValueError) as ex:
cli(["--connect", "127.0.0.1:8888", "--configure-xyz", "123", "spam.py"])
assert "unknown property 'xyz'" in str(ex.value)
def test_unsupported_configure_from_environment(cli):
@ -190,26 +179,26 @@ def test_unsupported_configure_from_environment(cli):
def test_address_required(cli):
with pytest.raises(ValueError) as ex:
cli(["-m", "spam"])
assert "either --listen or --connect is required" in str(ex.value)
def test_missing_target(cli):
with pytest.raises(ValueError) as ex:
cli(["--listen", "8888"])
assert "missing target" in str(ex.value)
def test_duplicate_switch(cli):
with pytest.raises(ValueError) as ex:
cli(["--listen", "8888", "--listen", "9999", "spam.py"])
assert "duplicate switch on command line: --listen" in str(ex.value)
def test_duplicate_switch_from_environment(cli):
with pytest.raises(ValueError) as ex:
with mock.patch.dict(os.environ, {"DEBUGPY_EXTRA_ARGV": "--listen 8888 --listen 9999"}):
cli(["spam.py"])
assert "duplicate switch from environment: --listen" in str(ex.value)
# Test that switches can be read from the environment
@ -240,11 +229,4 @@ def test_script_args(cli):
argv, options = cli(args)
assert argv == ["arg1", "arg2"]
assert options["target"] == "spam.py"
# Tests that --parent-session-pid fails with --listen
def test_script_parent_pid_with_listen_failure(cli):
with pytest.raises(ValueError) as ex:
cli(["--listen", "8888", "--parent-session-pid", "1234", "spam.py"])
assert "--parent-session-pid requires --connect" in str(ex.value)
assert options["target"] == "spam.py"

View file

@ -2,8 +2,6 @@
# Licensed under the MIT License. See LICENSE in the project root
# for license information.
import os
import sys
import pytest
from debugpy.common import log
@ -37,15 +35,9 @@ def test_args(pyfile, target, run):
@pytest.mark.parametrize("target", targets.all)
@pytest.mark.parametrize("run", runners.all_launch)
@pytest.mark.parametrize("expansion", ["preserve", "expand"])
@pytest.mark.parametrize("python_with_space", [False, True])
def test_shell_expansion(pyfile, tmpdir, target, run, expansion, python_with_space):
def test_shell_expansion(pyfile, target, run, expansion):
if expansion == "expand" and run.console == "internalConsole":
pytest.skip('Shell expansion is not supported for "internalConsole"')
# Skip tests with python_with_space=True and target="code" on Windows
# because .cmd wrappers cannot properly handle multiline string arguments
if (python_with_space and target == targets.Code and sys.platform == "win32"):
pytest.skip('Windows .cmd wrapper cannot handle multiline code arguments')
@pyfile
def code_to_debug():
@ -65,34 +57,14 @@ def test_shell_expansion(pyfile, tmpdir, target, run, expansion, python_with_spa
args[i] = arg[1:]
log.info("After expansion: {0}", args)
captured_run_in_terminal_args = []
class Session(debug.Session):
def run_in_terminal(self, args, cwd, env):
captured_run_in_terminal_args.append(args[:]) # Capture a copy of the args
expand(args)
return super().run_in_terminal(args, cwd, env)
argslist = ["0", "$1", "2"]
args = argslist if expansion == "preserve" else " ".join(argslist)
with Session() as session:
# Create a Python wrapper with a space in the path if requested
if python_with_space:
# Create a directory with a space in the name
python_dir = tmpdir / "python with space"
python_dir.mkdir()
if sys.platform == "win32":
wrapper = python_dir / "python.cmd"
wrapper.write(f'@echo off\n"{sys.executable}" %*')
else:
wrapper = python_dir / "python.sh"
wrapper.write(f'#!/bin/sh\nexec "{sys.executable}" "$@"')
os.chmod(wrapper.strpath, 0o777)
session.config["python"] = wrapper.strpath
backchannel = session.open_backchannel()
with run(session, target(code_to_debug, args=args)):
pass
@ -101,103 +73,3 @@ def test_shell_expansion(pyfile, tmpdir, target, run, expansion, python_with_spa
expand(argslist)
assert argv == [some.str] + argslist
# Verify that the python executable path is correctly quoted if it contains spaces
if python_with_space and captured_run_in_terminal_args:
terminal_args = captured_run_in_terminal_args[0]
log.info("Captured runInTerminal args: {0}", terminal_args)
# Check if the python executable (first arg) contains a space
python_arg = terminal_args[0]
assert "python with space" in python_arg, \
f"Expected 'python with space' in python path: {python_arg}"
if expansion == "expand":
assert (python_arg.startswith('"') or python_arg.startswith("'")), f"Python_arg is not quoted: {python_arg}"
@pytest.mark.parametrize("run", runners.all_launch_terminal)
@pytest.mark.parametrize("expansion", ["preserve", "expand"])
def test_debuggee_filename_with_space(tmpdir, run, expansion):
"""Test that a debuggee filename with a space gets properly quoted in runInTerminal."""
# Create a Python script with a space in both the directory name and the filename
script_dir = tmpdir / "test dir"
script_dir.mkdir()
script_file = script_dir / "script with space.py"
script_content = """import sys
import debuggee
from debuggee import backchannel
debuggee.setup()
backchannel.send(sys.argv)
import time
time.sleep(2)
"""
script_file.write(script_content)
captured_run_in_terminal_request = []
captured_run_in_terminal_args = []
class Session(debug.Session):
def _process_request(self, request):
if request.command == "runInTerminal":
# Capture the raw runInTerminal request before any processing
args_from_request = list(request.arguments.get("args", []))
captured_run_in_terminal_request.append({
"args": args_from_request,
"argsCanBeInterpretedByShell": request.arguments.get("argsCanBeInterpretedByShell", False)
})
return super()._process_request(request)
def run_in_terminal(self, args, cwd, env):
# Capture the processed args after the framework has handled them
captured_run_in_terminal_args.append(args[:])
return super().run_in_terminal(args, cwd, env)
argslist = ["arg1", "arg2"]
args = argslist if expansion == "preserve" else " ".join(argslist)
with Session() as session:
backchannel = session.open_backchannel()
target = targets.Program(script_file, args=args)
with run(session, target):
pass
argv = backchannel.receive()
assert argv == [some.str] + argslist
# Verify that runInTerminal was called
assert captured_run_in_terminal_request, "Expected runInTerminal request to be sent"
request_data = captured_run_in_terminal_request[0]
terminal_request_args = request_data["args"]
args_can_be_interpreted_by_shell = request_data["argsCanBeInterpretedByShell"]
log.info("Captured runInTerminal request args: {0}", terminal_request_args)
log.info("argsCanBeInterpretedByShell: {0}", args_can_be_interpreted_by_shell)
# With expansion="expand", argsCanBeInterpretedByShell should be True
if expansion == "expand":
assert args_can_be_interpreted_by_shell, \
"Expected argsCanBeInterpretedByShell=True for expansion='expand'"
# Find the script path in the arguments (it should be after the debugpy launcher args)
script_path_found = False
for arg in terminal_request_args:
if "script with space.py" in arg:
script_path_found = True
log.info("Found script path argument: {0}", arg)
# NOTE: With shell expansion enabled, we currently have a limitation:
# The test framework splits the last arg by spaces when argsCanBeInterpretedByShell=True,
# which makes it incompatible with quoting individual args. This causes issues with
# paths containing spaces. This is a known limitation that needs investigation.
# For now, just verify the script path is found.
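# As an illustrative example of that limitation, plain whitespace splitting
# such as "script with space.py".split() yields ["script", "with", "space.py"],
# so a path containing spaces does not survive as a single argument.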
break
assert script_path_found, \
f"Expected to find 'script with space.py' in runInTerminal args: {terminal_request_args}"

View file

@ -14,9 +14,8 @@ from tests.patterns import some
@pytest.mark.parametrize("stop_method", ["breakpoint", "pause"])
@pytest.mark.skipif(IS_PY312_OR_GREATER, reason="Flakey test on 312 and higher")
@pytest.mark.parametrize("is_client_connected", ["is_client_connected", ""])
@pytest.mark.parametrize("host", ["127.0.0.1", "::1"])
@pytest.mark.parametrize("wait_for_client", ["wait_for_client", pytest.param("", marks=pytest.mark.skipif(sys.platform.startswith("darwin"), reason="Flakey test on Mac"))])
def test_attach_api(pyfile, host, wait_for_client, is_client_connected, stop_method):
def test_attach_api(pyfile, wait_for_client, is_client_connected, stop_method):
@pyfile
def code_to_debug():
import debuggee
@ -59,8 +58,7 @@ def test_attach_api(pyfile, host, wait_for_client, is_client_connected, stop_met
time.sleep(0.1)
with debug.Session() as session:
host = runners.attach_connect.host if host == "127.0.0.1" else host
port = runners.attach_connect.port
host, port = runners.attach_connect.host, runners.attach_connect.port
session.config.update({"connect": {"host": host, "port": port}})
backchannel = session.open_backchannel()
@ -104,8 +102,7 @@ def test_attach_api(pyfile, host, wait_for_client, is_client_connected, stop_met
session.request_continue()
@pytest.mark.parametrize("host", ["127.0.0.1", "::1"])
def test_multiple_listen_raises_exception(pyfile, host):
def test_multiple_listen_raises_exception(pyfile):
@pyfile
def code_to_debug():
import debuggee
@ -127,8 +124,7 @@ def test_multiple_listen_raises_exception(pyfile, host):
debugpy.breakpoint()
print("break") # @breakpoint
host = runners.attach_connect.host if host == "127.0.0.1" else host
port = runners.attach_connect.port
host, port = runners.attach_connect.host, runners.attach_connect.port
with debug.Session() as session:
backchannel = session.open_backchannel()
session.spawn_debuggee(
@ -151,6 +147,7 @@ def test_multiple_listen_raises_exception(pyfile, host):
assert backchannel.receive() == "listen_exception"
session.request_continue()
@pytest.mark.parametrize("run", runners.all_attach_connect)
def test_reattach(pyfile, target, run):
@pyfile
@ -268,8 +265,7 @@ def test_attach_pid_client(pyfile, target, pid_type):
session2.request_continue()
@pytest.mark.parametrize("host", ["127.0.0.1", "::1"])
def test_cancel_wait(pyfile, host):
def test_cancel_wait(pyfile):
@pyfile
def code_to_debug():
import debugpy
@ -291,8 +287,7 @@ def test_cancel_wait(pyfile, host):
backchannel.send("exit")
with debug.Session() as session:
host = runners.attach_connect.host if host == "127.0.0.1" else host
port = runners.attach_connect.port
host, port = runners.attach_connect.host, runners.attach_connect.port
session.config.update({"connect": {"host": host, "port": port}})
session.expected_exit_code = None

View file

@ -1,35 +0,0 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE in the project root
# for license information.
from tests import debug
def test_cli_options_with_no_debugger():
import debugpy
cli_options = debugpy.get_cli_options()
assert cli_options is None
def test_cli_options_under_file_connect(pyfile, target, run):
@pyfile
def code_to_debug():
import dataclasses
import debugpy
import debuggee
from debuggee import backchannel
debuggee.setup()
backchannel.send(dataclasses.asdict(debugpy.get_cli_options()))
with debug.Session() as session:
backchannel = session.open_backchannel()
with run(session, target(code_to_debug)):
pass
cli_options = backchannel.receive()
assert cli_options['mode'] == 'connect'
assert cli_options['target_kind'] == 'file'

View file

@ -5,7 +5,7 @@
import pytest
from tests import code, debug, log, net, test_data
from tests.debug import targets
from tests.debug import runners, targets
from tests.patterns import some
pytestmark = pytest.mark.timeout(60)
@ -25,6 +25,7 @@ class lines:
@pytest.fixture
@pytest.mark.parametrize("run", [runners.launch, runners.attach_connect["cli"]])
def start_django(run):
def start(session, multiprocess=False):
# No clean way to kill Django server, expect non-zero exit code

View file

@ -6,7 +6,7 @@ import pytest
import sys
from tests import code, debug, log, net, test_data
from tests.debug import targets
from tests.debug import runners, targets
from tests.patterns import some
pytestmark = pytest.mark.timeout(60)
@ -27,6 +27,7 @@ class lines:
@pytest.fixture
@pytest.mark.parametrize("run", [runners.launch, runners.attach_connect["cli"]])
def start_flask(run):
def start(session, multiprocess=False):
# No clean way to kill Flask server, expect non-zero exit code

View file

@ -87,7 +87,7 @@ def test_multiprocessing(pyfile, target, run, start_method):
p.join()
def child(q, a):
print("entering child") # @bp
print("entering child")
assert q.get() == "foo?"
a.put(Foo())
@ -136,20 +136,7 @@ def test_multiprocessing(pyfile, target, run, start_method):
with debug.Session(child_config) as child_session:
with child_session.start():
child_session.set_breakpoints(code_to_debug, all)
expected_frame = some.dap.frame(code_to_debug, line="bp")
stop = child_session.wait_for_stop(
"breakpoint",
expected_frames=[expected_frame],
)
child_session.request('stepIn', {"threadId": stop.thread_id})
stop = child_session.wait_for_stop(
"step",
expected_frames=[some.dap.frame(code_to_debug, line=expected_frame.items['line'] + 1)],
)
child_session.request_continue()
pass
expected_grandchild_config = expected_subprocess_config(child_session)
grandchild_config = child_session.wait_for_next_event("debugpyAttach")
@ -216,7 +203,7 @@ def test_subprocess(pyfile, target, run, subProcess, method):
return
expected_child_config = expected_subprocess_config(parent_session)
if method == "startDebugging":
subprocess_request = parent_session.timeline.wait_for_next(timeline.Request("startDebugging"))
child_config = subprocess_request.arguments("configuration", dict)
@ -609,73 +596,3 @@ def test_subprocess_replace(pyfile, target, run):
child_pid = backchannel.receive()
assert child_pid == child_config["subProcessId"]
assert str(child_pid) in child_config["name"]
@pytest.mark.parametrize("run", runners.all_launch)
def test_subprocess_with_parent_pid(pyfile, target, run):
@pyfile
def child():
import sys
assert "debugpy" in sys.modules
import debugpy
assert debugpy # @bp
@pyfile
def parent():
import debuggee
import os
import subprocess
import sys
import debugpy
debuggee.setup()
# Running it through a shell is necessary to ensure that the
# --parent-session-pid option is exercised and that the spawned
# Python subprocess can associate itself with this process's debug session.
if sys.platform == "win32":
argv = ["cmd.exe", "/c"]
else:
argv = ["/bin/sh", "-c"]
cli_opts = debugpy.get_cli_options()
assert cli_opts, "No CLI options found"
host, port = cli_opts.address
access_token = cli_opts.adapter_access_token
shell_args = [
sys.executable,
"-m",
"debugpy",
"--connect", f"{host}:{port}",
"--parent-session-pid", str(os.getpid()),
"--adapter-access-token", access_token,
sys.argv[1],
]
argv.append(" ".join(shell_args))
subprocess.check_call(argv, env=os.environ | {"DEBUGPY_RUNNING": "false"})
with debug.Session() as parent_session:
with run(parent_session, target(parent, args=[child])):
parent_session.set_breakpoints(child, all)
with parent_session.wait_for_next_subprocess() as child_session:
expected_child_config = expected_subprocess_config(parent_session)
child_config = child_session.config
child_config.pop("isOutputRedirected", None)
assert child_config == expected_child_config
with child_session.start():
child_session.set_breakpoints(child, all)
child_session.wait_for_stop(
"breakpoint",
expected_frames=[some.dap.frame(child, line="bp")],
)
child_session.request_continue()

View file

@ -102,7 +102,7 @@ def test_step_multi_threads(pyfile, target, run, resume):
stop = session.wait_for_stop()
threads = session.request("threads")
assert len(threads["threads"]) >= 3
assert len(threads["threads"]) == 3
thread_name_to_id = {t["name"]: t["id"] for t in threads["threads"]}
assert stop.thread_id == thread_name_to_id["thread1"]

View file

@ -17,7 +17,7 @@ from tests.patterns import some
used_ports = set()
def get_test_server_port(max_retries=10):
def get_test_server_port():
"""Returns a server port number that can be safely used for listening without
clashing with another test worker process, when running with pytest-xdist.
@ -27,9 +27,6 @@ def get_test_server_port(max_retries=10):
Note that if multiple test workers invoke this function with different ranges
that overlap, conflicts are possible!
Args:
max_retries: Number of times to retry finding an available port
"""
try:
@ -42,32 +39,11 @@ def get_test_server_port(max_retries=10):
), "Unrecognized PYTEST_XDIST_WORKER format"
n = int(worker_id[2:])
# Try multiple times to find an available port before falling back
for attempt in range(max_retries):
port = 5678 + (n * 300) + attempt
while port in used_ports:
port += 1
# Verify the port is actually available by trying to bind to it
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
try:
sock.bind(("127.0.0.1", port))
sock.close()
used_ports.add(port)
log.info("Allocated port {0} for worker {1}", port, n)
return port
except OSError as e:
log.warning("Port {0} unavailable (attempt {1}/{2}): {3}", port, attempt + 1, max_retries, e)
sock.close()
time.sleep(0.1 * (attempt + 1)) # Linear backoff between attempts
# Fall back to original behavior if all retries fail
port = 5678 + (n * 300)
while port in used_ports:
port += 1
used_ports.add(port)
log.warning("Using fallback port {0} after {1} retries", port, max_retries)
return port
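# Illustrative sketch (hypothetical worker id): for PYTEST_XDIST_WORKER="gw2",
# n == 2, so the base port is 5678 + 2 * 300 == 6278; subsequent calls from the
# same worker keep incrementing past any ports already recorded in used_ports.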

View file

@ -46,27 +46,19 @@ def test_wrapper(request, long_tmpdir):
session.Session.reset_counter()
# Add worker-specific isolation for tmpdir and log directory
try:
worker_id = os.environ.get("PYTEST_XDIST_WORKER", "gw0")
worker_suffix = f"_{worker_id}"
except Exception:
worker_suffix = ""
session.Session.tmpdir = long_tmpdir / f"session{worker_suffix}"
session.Session.tmpdir.ensure(dir=True)
session.Session.tmpdir = long_tmpdir
original_log_dir = log.log_dir
failed = True
try:
if log.log_dir is None:
log.log_dir = (long_tmpdir / f"debugpy_logs{worker_suffix}").strpath
log.log_dir = (long_tmpdir / "debugpy_logs").strpath
else:
log_subdir = request.node.nodeid
log_subdir = log_subdir.replace("::", "/")
for ch in r":?*|<>":
log_subdir = log_subdir.replace(ch, f"&#{ord(ch)};")
log.log_dir += "/" + log_subdir + worker_suffix
log.log_dir += "/" + log_subdir
try:
py.path.local(log.log_dir).remove()

View file

@ -1,5 +1,5 @@
[tox]
envlist = py{38,39,310,311,312,313,314}{,-cov}
envlist = py{38,39,310,311,312,313}{,-cov}
[testenv]
deps = -rtests/requirements.txt
@ -10,5 +10,5 @@ commands_pre = python build_attach_binaries.py
commands =
py{38,39}-!cov: python -m pytest {posargs}
py{38,39}-cov: python -m pytest --cov --cov-append --cov-config=.coveragerc {posargs}
py{310,311,312,313,314}-!cov: python -Xfrozen_modules=off -m pytest {posargs}
py{310,311,312,313,314}-cov: python -Xfrozen_modules=off -m pytest --cov --cov-append --cov-config=.coveragerc {posargs}
py{310,311,312,313}-!cov: python -Xfrozen_modules=off -m pytest {posargs}
py{310,311,312,313}-cov: python -Xfrozen_modules=off -m pytest --cov --cov-append --cov-config=.coveragerc {posargs}