Mirror of https://github.com/microsoft/debugpy.git (synced 2025-12-23 08:48:12 +00:00)

parent a6ebc3e3a6
commit 9aaf937478

246 changed files with 189 additions and 26592 deletions
@@ -34,8 +34,8 @@ VSC Python settings for formatting:
],
```

### Running `pytest` based tests
We are currently migrating the tests to use `pytest`. Please run both sets of tests. Newer tests must go into the [pytests](pytests) directory. Use [test_requirements.txt](test_requirements.txt) to install packages needed to run the tests.
### Running tests
We are currently migrating the tests to use `pytest`. Please run both sets of tests. Newer tests must go into the [tests](tests) directory. Use [test_requirements.txt](test_requirements.txt) to install packages needed to run the tests.
#### Windows
```
C:\> git clone https://github.com/Microsoft/ptvsd

@@ -50,20 +50,7 @@ C:\ptvsd> py -3.7 -m pytest -v
~/ptvsd: python3 -m pip install -r ./test_requirements.txt
~/ptvsd: python3 -m pytest -v
```
### Running `unittest` based tests
`git clone` ptvsd and change directory to `ptvsd`. Run the `tests` module from there. Newer tests must be written using `pytest` and must go into the [pytests](pytests) directory. Please do not add tests to this directory.
#### Windows
```
C:\> git clone https://github.com/Microsoft/ptvsd
C:\> cd ptvsd
C:\ptvsd> py -3.7 -m tests -v
```
#### Linux/Mac
```
~: git clone https://github.com/Microsoft/ptvsd
~: cd ptvsd
~/ptvsd: python3 -m tests -v
```

### Debug in VSC using development version
In `launch.json`, set `PYTHONPATH` to point to your cloned copy of ptvsd; this lets you debug any Python project in order to test the debugger you are working on:
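The `launch.json` snippet itself falls outside this hunk, so it is not shown here. As an illustration only (the path and configuration values are placeholders, not taken from the original document), such an entry might look like:

```
{
    "name": "My project (dev ptvsd)",
    "type": "python",
    "request": "launch",
    "program": "${file}",
    "env": {
        "PYTHONPATH": "/path/to/your/clone/of/ptvsd"
    }
}
```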
@@ -5,6 +5,8 @@
[License](https://raw.githubusercontent.com/Microsoft/ptvsd/master/LICENSE)
[PyPI](https://pypi.org/project/ptvsd/)

This debugger is based on the Debug Adapter Protocol for VS Code: [debugProtocol.json](https://github.com/Microsoft/vscode-debugadapter-node/blob/master/debugProtocol.json)

## `ptvsd` CLI Usage
### Debugging a script file
To run a script file with debugging enabled, but without waiting for the debugger to attach (i.e. code starts executing immediately):
@@ -1,114 +0,0 @@
# VSC Debugger Protocol

[Visual Studio Code](https://code.visualstudio.com/) defines several protocols that extensions may leverage to fully integrate with VSC features. For ptvsd the most notable of those is the debugger protocol. When VSC handles debugger-related input via the UI, it delegates the underlying behavior to an extension's debug adapter (e.g. ptvsd) via the protocol. The [debugger_protocol](https://github.com/ericsnowcurrently/ptvsd/blob/master/debugger_protocol) package (the one you are looking at) provides resources for understanding and using the protocol.

For more high-level info see:

* [the VSC debug protocol page](https://code.visualstudio.com/docs/extensionAPI/api-debugging)
* [the example extension page](https://code.visualstudio.com/docs/extensions/example-debuggers)


## Protocol Definition

The VSC debugger protocol has [a schema](https://github.com/Microsoft/vscode-debugadapter-node/blob/master/debugProtocol.json) which defines its messages. The wire format is HTTP-style messages with JSON bodies. Note that the schema does not define the semantics of the protocol, though a large portion is elaborated in the "description" fields in [the schema document](https://github.com/Microsoft/vscode-debugadapter-node/blob/master/debugProtocol.json).

[//]: # (TODO: Add a link to where the wire format is defined.)
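The schema does not spell out the framing either; on the wire each message is an HTTP-style header section (a `Content-Length` header followed by a blank line) and then the JSON body. A minimal sketch of that framing, assuming standard Debug Adapter Protocol behavior and independent of the wireformat helpers in this package:

```python
import json


def encode_message(msg):
    """Frame a message dict as a Content-Length header plus JSON body."""
    body = json.dumps(msg).encode('utf-8')
    header = 'Content-Length: {}\r\n\r\n'.format(len(body)).encode('ascii')
    return header + body


def decode_message(stream):
    """Read one framed message from a binary file-like object."""
    length = None
    while True:
        line = stream.readline().strip()
        if not line:
            break  # a blank line ends the header section
        name, _, value = line.partition(b':')
        if name.strip().lower() == b'content-length':
            length = int(value)
    return json.loads(stream.read(length).decode('utf-8'))
```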
## Components

### Participants

The VSC debugger protocol involves 2 participants: the `client` and the `debug adapter`, AKA `server`. VSC is an example of a `client`; ptvsd is an example of a `debug adapter`. VSC extensions are responsible for providing the `debug adapter`, declaring it to VSC, and connecting the adapter to VSC when desired.

### Communication

Messages are sent back and forth over a socket. The messages are JSON-encoded and sent as the body of an HTTP-style message.

Flow:

<TBD>

### Message Types

All messages specify their `type` and a globally-unique, monotonically-increasing ID (`seq`).

The protocol consists of 3 types of messages:

* event
* request
* response

An `event` is a message by which the `debug adapter` reports to the `client` that something happened. Only the `debug adapter` sends `event`s. An `event` may be sent at any time, so the `client` may get one after sending a `request` but before receiving the corresponding `response`.

A `request` is a message by which the `client` requests something from the `debug adapter` over the connection. That "something" may be data corresponding to the state of the debugger, or it may be an action that should be performed. Note that the protocol dictates that the `debug adapter` may also send `request`s to the `client`, but currently there aren't any such `request` types.

Each `request` type has a corresponding `response` type; for each `request` sent by the `client`, the `debug adapter` sends back the corresponding `response`. `response` messages include a `request_seq` field that matches the `seq` field of the corresponding `request`.
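To make the three shapes concrete, here is an illustrative exchange (field names follow the DAP schema; the specific command, event, and bodies are examples only, not an excerpt from the schema):

```
{"seq": 1, "type": "request", "command": "threads"}
{"seq": 2, "type": "response", "request_seq": 1, "command": "threads", "success": true,
 "body": {"threads": [{"id": 1, "name": "MainThread"}]}}
{"seq": 3, "type": "event", "event": "stopped",
 "body": {"reason": "breakpoint", "threadId": 1}}
```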
## Protocol-related Tools

Tools related to the schema, as well as [a vendored copy](https://github.com/ericsnowcurrently/ptvsd/blob/master/debugger_protocol/schema/debugProtocol.json) of the schema file itself, are found in [debugger_protocol/schema](https://github.com/Microsoft/ptvsd/tree/master/debugger_protocol/schema). Python bindings for the messages are found in [debugger_protocol/messages](https://github.com/ericsnowcurrently/ptvsd/blob/master/debugger_protocol/messages). Tools for handling the wire format are found in [debugger_protocol/messages/wireformat.py](https://github.com/ericsnowcurrently/ptvsd/blob/master/debugger_protocol/messages/wireformat.py).

### Using the Python-implemented Message Types

The Python implementations of the schema-defined messages all share a [ProtocolMessage](https://github.com/ericsnowcurrently/ptvsd/blob/master/debugger_protocol/messages/message.py#L27) base class. The 3 message types each have their own base class. Every message class has the following methods to aid with serialization:

* a `from_data(**raw)` factory method
* an `as_data()` method

These methods are used by [the wireformat helpers](https://github.com/ericsnowcurrently/ptvsd/blob/master/debugger_protocol/messages/wireformat.py).

## Other Resources

* https://github.com/Microsoft/vscode-mock-debug
* https://github.com/Microsoft/vscode-debugadapter-node/tree/master/testSupport
* https://github.com/Microsoft/vscode-debugadapter-node/blob/master/protocol/src/debugProtocol.ts
* https://github.com/Microsoft/vscode-mono-debug

* http://json-schema.org/latest/json-schema-core.html
* https://python-jsonschema.readthedocs.io/
* http://python-jsonschema-objects.readthedocs.io/
@@ -1,28 +0,0 @@

class Readonly(object):
    """For read-only instances."""

    def __setattr__(self, name, value):
        raise AttributeError(
            '{} objects are read-only'.format(type(self).__name__))

    def __delattr__(self, name):
        raise AttributeError(
            '{} objects are read-only'.format(type(self).__name__))

    def _bind_attrs(self, **attrs):
        for name, value in attrs.items():
            object.__setattr__(self, name, value)


class WithRepr(object):

    def _init_args(self):
        # XXX Extract from __init__()...
        return ()

    def __repr__(self):
        args = ', '.join('{}={!r}'.format(arg, value)
                         for arg, value in self._init_args())
        return '{}({})'.format(type(self).__name__, args)
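A minimal usage sketch of these two mixins (the `Point` class is invented for illustration; it only exercises what is defined above):

```python
class Point(Readonly, WithRepr):

    def __init__(self, x, y):
        # __setattr__ is blocked, so bind attributes via _bind_attrs().
        self._bind_attrs(x=x, y=y)

    def _init_args(self):
        return [('x', self.x), ('y', self.y)]


p = Point(1, 2)
print(p)    # Point(x=1, y=2)
p.x = 3     # raises AttributeError: Point objects are read-only
```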
@@ -1,8 +0,0 @@
from ._common import NOT_SET, ANY # noqa
from ._datatype import FieldsNamespace # noqa
from ._decl import Enum, Union, Array, Mapping, Field # noqa
from ._errors import ( # noqa
    ArgumentError,
    ArgMissingError, IncompleteArgError, ArgTypeMismatchError,
)
from ._params import param_from_datatype # noqa
@@ -1,17 +0,0 @@

def sentinel(name):
    """Return a named value to use as a sentinel."""
    class Sentinel(object):
        def __repr__(self):
            return name

    return Sentinel()


# NOT_SET indicates that an arg was not provided.
NOT_SET = sentinel('NOT_SET')

# ANY is a datatype surrogate indicating that any value is okay.
ANY = sentinel('ANY')

SIMPLE_TYPES = {None, bool, int, str}
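The point of `NOT_SET` is to distinguish "argument not provided" from an explicit `None`, since `None` is itself a valid datatype here. A small illustration (the function is hypothetical):

```python
def describe(value=NOT_SET):
    # Only NOT_SET means "nothing was passed"; None is meaningful data.
    if value is NOT_SET:
        return 'no value provided'
    return 'value: {!r}'.format(value)


print(describe())      # no value provided
print(describe(None))  # value: None
```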
@ -1,307 +0,0 @@
|
|||
from debugger_protocol._base import Readonly, WithRepr
|
||||
from ._common import NOT_SET, ANY, SIMPLE_TYPES
|
||||
from ._decl import (
|
||||
_transform_datatype, _replace_ref,
|
||||
Enum, Union, Array, Field, Fields)
|
||||
from ._errors import ArgTypeMismatchError, ArgMissingError, IncompleteArgError
|
||||
|
||||
|
||||
def _coerce(datatype, value, call=True):
|
||||
if datatype is ANY:
|
||||
return value
|
||||
elif type(value) is datatype:
|
||||
return value
|
||||
elif value is datatype:
|
||||
return value
|
||||
elif datatype is None:
|
||||
pass # fail below
|
||||
elif datatype in SIMPLE_TYPES:
|
||||
# We already checked for exact type match above.
|
||||
pass # fail below
|
||||
|
||||
# decl types
|
||||
elif isinstance(datatype, Enum):
|
||||
value = _coerce(datatype.datatype, value, call=False)
|
||||
if value in datatype.choice:
|
||||
return value
|
||||
elif isinstance(datatype, Union):
|
||||
for dt in datatype:
|
||||
try:
|
||||
return _coerce(dt, value, call=False)
|
||||
except ArgTypeMismatchError:
|
||||
continue
|
||||
else:
|
||||
raise ArgTypeMismatchError(value)
|
||||
elif isinstance(datatype, Array):
|
||||
try:
|
||||
values = iter(value)
|
||||
except TypeError:
|
||||
raise ArgTypeMismatchError(value)
|
||||
return [_coerce(datatype.itemtype, v, call=False)
|
||||
for v in values]
|
||||
elif isinstance(datatype, Field):
|
||||
return _coerce(datatype.datatype, value)
|
||||
elif isinstance(datatype, Fields):
|
||||
class ArgNamespace(FieldsNamespace):
|
||||
FIELDS = datatype
|
||||
|
||||
return _coerce(ArgNamespace, value)
|
||||
elif issubclass(datatype, FieldsNamespace):
|
||||
arg = datatype.bind(value)
|
||||
try:
|
||||
arg_coerce = arg.coerce
|
||||
except AttributeError:
|
||||
return arg
|
||||
else:
|
||||
return arg_coerce()
|
||||
|
||||
# fallbacks
|
||||
elif callable(datatype) and call:
|
||||
try:
|
||||
return datatype(value)
|
||||
except ArgTypeMismatchError:
|
||||
raise
|
||||
except (TypeError, ValueError):
|
||||
raise ArgTypeMismatchError(value)
|
||||
elif value == datatype:
|
||||
return value
|
||||
|
||||
raise ArgTypeMismatchError(value)
|
||||
|
||||
|
||||
########################
|
||||
# fields
|
||||
|
||||
class FieldsNamespace(Readonly, WithRepr):
|
||||
"""A namespace of field values exposed via attributes."""
|
||||
|
||||
FIELDS = None
|
||||
PARAM_TYPE = None
|
||||
PARAM = None
|
||||
|
||||
_TRAVERSING = False
|
||||
|
||||
@classmethod
|
||||
def traverse(cls, op, **kwargs):
|
||||
"""Apply op to each field in cls.FIELDS."""
|
||||
if cls._TRAVERSING: # recursion check
|
||||
return cls
|
||||
cls._TRAVERSING = True
|
||||
|
||||
fields = cls._normalize(cls.FIELDS)
|
||||
try:
|
||||
fields_traverse = fields.traverse
|
||||
except AttributeError:
|
||||
# must be normalizing right now...
|
||||
return cls
|
||||
fields = fields_traverse(op)
|
||||
cls.FIELDS = cls._normalize(fields, force=True)
|
||||
|
||||
cls._TRAVERSING = False
|
||||
return cls
|
||||
|
||||
@classmethod
|
||||
def normalize(cls, *transforms):
|
||||
"""Normalize FIELDS and apply the given ops."""
|
||||
fields = cls._normalize(cls.FIELDS)
|
||||
for transform in transforms:
|
||||
fields = _transform_datatype(fields, transform)
|
||||
fields = cls._normalize(fields)
|
||||
cls.FIELDS = fields
|
||||
return cls
|
||||
|
||||
@classmethod
|
||||
def _normalize(cls, fields, force=False):
|
||||
if fields is None:
|
||||
raise TypeError('missing FIELDS')
|
||||
|
||||
try:
|
||||
fixref = cls._fixref
|
||||
except AttributeError:
|
||||
fixref = cls._fixref = True
|
||||
if not isinstance(fields, Fields):
|
||||
fields = Fields(*fields)
|
||||
if fixref or force:
|
||||
cls._fixref = False
|
||||
fields = _transform_datatype(fields,
|
||||
lambda dt: _replace_ref(dt, cls))
|
||||
return fields
|
||||
|
||||
@classmethod
|
||||
def param(cls):
|
||||
param = cls.PARAM
|
||||
if param is None:
|
||||
if cls.PARAM_TYPE is None:
|
||||
return None
|
||||
param = cls.PARAM_TYPE(cls.FIELDS, cls)
|
||||
return param
|
||||
|
||||
@classmethod
|
||||
def bind(cls, ns, **kwargs):
|
||||
if isinstance(ns, cls):
|
||||
return ns
|
||||
param = cls.param()
|
||||
if param is None:
|
||||
return cls(**ns)
|
||||
return param.bind(ns, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def _bind(cls, kwargs):
|
||||
cls.FIELDS = cls._normalize(cls.FIELDS)
|
||||
bound, missing = _fields_bind(cls.FIELDS, kwargs)
|
||||
if missing:
|
||||
raise IncompleteArgError(cls.FIELDS, missing)
|
||||
|
||||
values = {}
|
||||
validators = []
|
||||
serializers = {}
|
||||
for field, arg in bound.items():
|
||||
if arg is NOT_SET:
|
||||
continue
|
||||
|
||||
try:
|
||||
coerce = arg.coerce
|
||||
except AttributeError:
|
||||
value = arg
|
||||
else:
|
||||
value = coerce(arg)
|
||||
values[field.name] = value
|
||||
|
||||
try:
|
||||
validate = arg.validate
|
||||
validate = value.validate
|
||||
except AttributeError:
|
||||
pass
|
||||
else:
|
||||
validators.append(validate)
|
||||
|
||||
try:
|
||||
as_data = arg.as_data
|
||||
as_data = value.as_data
|
||||
except AttributeError:
|
||||
pass
|
||||
else:
|
||||
serializers[field.name] = as_data
|
||||
values['_validators'] = validators
|
||||
values['_serializers'] = serializers
|
||||
return values
|
||||
|
||||
def __init__(self, **kwargs):
|
||||
super(FieldsNamespace, self).__init__()
|
||||
validate = kwargs.pop('_validate', True)
|
||||
|
||||
kwargs = self._bind(kwargs)
|
||||
self._bind_attrs(**kwargs)
|
||||
if validate:
|
||||
self.validate()
|
||||
|
||||
def _init_args(self):
|
||||
if self.FIELDS is not None:
|
||||
for field in self.FIELDS:
|
||||
try:
|
||||
value = getattr(self, field.name)
|
||||
except AttributeError:
|
||||
continue
|
||||
yield (field.name, value)
|
||||
else:
|
||||
for item in sorted(vars(self).items()):
|
||||
yield item
|
||||
|
||||
def __eq__(self, other):
|
||||
try:
|
||||
other_as_data = other.as_data
|
||||
except AttributeError:
|
||||
other_data = other
|
||||
else:
|
||||
other_data = other_as_data()
|
||||
|
||||
return self.as_data() == other_data
|
||||
|
||||
def __ne__(self, other):
|
||||
return not (self == other)
|
||||
|
||||
def validate(self):
|
||||
"""Ensure that the field values are valid."""
|
||||
for validate in self._validators:
|
||||
validate()
|
||||
|
||||
def as_data(self):
|
||||
"""Return serializable data for the instance."""
|
||||
data = {name: as_data()
|
||||
for name, as_data in self._serializers.items()}
|
||||
for field in self.FIELDS:
|
||||
if field.name in data:
|
||||
continue
|
||||
try:
|
||||
data[field.name] = getattr(self, field.name)
|
||||
except AttributeError:
|
||||
pass
|
||||
return data
|
||||
|
||||
|
||||
def _field_missing(field, value):
|
||||
if value is NOT_SET:
|
||||
return True
|
||||
|
||||
try:
|
||||
missing = field.datatype.missing
|
||||
except AttributeError:
|
||||
return None
|
||||
else:
|
||||
return missing(value)
|
||||
|
||||
|
||||
def _field_bind(field, value, applydefaults=True):
|
||||
missing = _field_missing(field, value)
|
||||
if missing:
|
||||
if field.optional:
|
||||
if applydefaults:
|
||||
return field.default
|
||||
return NOT_SET
|
||||
raise ArgMissingError(field)
|
||||
|
||||
try:
|
||||
bind = field.datatype.bind
|
||||
except AttributeError:
|
||||
bind = (lambda v: _coerce(field.datatype, v))
|
||||
return bind(value)
|
||||
|
||||
|
||||
def _fields_iter_values(fields, remainder):
|
||||
for field in fields or ():
|
||||
value = remainder.pop(field.name, NOT_SET)
|
||||
yield field, value
|
||||
|
||||
|
||||
def _fields_iter_bound(fields, remainder, applydefaults=True):
|
||||
for field, value in _fields_iter_values(fields, remainder):
|
||||
try:
|
||||
arg = _field_bind(field, value, applydefaults=applydefaults)
|
||||
except ArgMissingError as exc:
|
||||
yield field, value, exc, False
|
||||
# except ArgTypeMismatchError as exc:
|
||||
# yield field, value, None, exc
|
||||
else:
|
||||
yield field, arg, False, False
|
||||
|
||||
|
||||
def _fields_bind(fields, kwargs, applydefaults=True):
|
||||
bound = {}
|
||||
missing = {}
|
||||
mismatched = {}
|
||||
remainder = dict(kwargs)
|
||||
bound_iter = _fields_iter_bound(fields, remainder,
|
||||
applydefaults=applydefaults)
|
||||
for field, arg, missed, mismatch in bound_iter:
|
||||
if missed:
|
||||
missing[field.name] = missed
|
||||
elif mismatch:
|
||||
mismatched[field.name] = arg
|
||||
else:
|
||||
bound[field] = arg
|
||||
if remainder:
|
||||
remainder = ', '.join(sorted(remainder))
|
||||
raise TypeError('got extra fields: {}'.format(remainder))
|
||||
if mismatched:
|
||||
raise ArgTypeMismatchError(mismatched)
|
||||
return bound, missing
|
||||
|
|
@ -1,407 +0,0 @@
|
|||
from collections import namedtuple
|
||||
try:
|
||||
from collections.abc import Sequence
|
||||
except ImportError:
|
||||
from collections import Sequence
|
||||
|
||||
from debugger_protocol._base import Readonly
|
||||
from ._common import sentinel, NOT_SET, ANY, SIMPLE_TYPES
|
||||
|
||||
|
||||
REF = '<ref>'
|
||||
TYPE_REFERENCE = sentinel('TYPE_REFERENCE')
|
||||
|
||||
|
||||
def _is_simple(datatype):
|
||||
if datatype is ANY:
|
||||
return True
|
||||
elif datatype in list(SIMPLE_TYPES):
|
||||
return True
|
||||
elif isinstance(datatype, Enum):
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
|
||||
|
||||
def _normalize_datatype(datatype):
|
||||
cls = type(datatype)
|
||||
# normalized when instantiated:
|
||||
if isinstance(datatype, Union):
|
||||
return datatype
|
||||
elif isinstance(datatype, Array):
return datatype
elif isinstance(datatype, Mapping):
|
||||
return datatype
|
||||
elif isinstance(datatype, Fields):
|
||||
return datatype
|
||||
# do not need normalization:
|
||||
elif datatype is TYPE_REFERENCE:
|
||||
return TYPE_REFERENCE
|
||||
elif datatype is ANY:
|
||||
return ANY
|
||||
elif datatype in list(SIMPLE_TYPES):
|
||||
return datatype
|
||||
elif isinstance(datatype, Enum):
|
||||
return datatype
|
||||
# convert to canonical types:
|
||||
elif type(datatype) == type(REF) and datatype == REF:
|
||||
return TYPE_REFERENCE
|
||||
elif cls is set or cls is frozenset:
|
||||
return Union.unordered(*datatype)
|
||||
elif cls is list or cls is tuple:
|
||||
datatype, = datatype
|
||||
return Array(datatype)
|
||||
elif cls is dict:
|
||||
if len(datatype) != 1:
|
||||
raise NotImplementedError
|
||||
[keytype, valuetype], = datatype.items()
|
||||
return Mapping(valuetype, keytype)
|
||||
# fallback:
|
||||
else:
|
||||
try:
|
||||
normalize = datatype.normalize
|
||||
except AttributeError:
|
||||
return datatype
|
||||
else:
|
||||
return normalize()
|
||||
|
||||
|
||||
def _transform_datatype(datatype, op):
|
||||
datatype = op(datatype)
|
||||
try:
|
||||
dt_traverse = datatype.traverse
|
||||
except AttributeError:
|
||||
pass
|
||||
else:
|
||||
datatype = dt_traverse(lambda dt: _transform_datatype(dt, op))
|
||||
return datatype
|
||||
|
||||
|
||||
def _replace_ref(datatype, target):
|
||||
if datatype is TYPE_REFERENCE:
|
||||
return target
|
||||
else:
|
||||
return datatype
|
||||
|
||||
|
||||
class Enum(namedtuple('Enum', 'datatype choice')):
|
||||
"""A simple type with a limited set of allowed values."""
|
||||
|
||||
@classmethod
|
||||
def _check_choice(cls, datatype, choice, strict=True):
|
||||
if callable(choice):
|
||||
return choice
|
||||
|
||||
if isinstance(choice, str):
|
||||
msg = 'bad choice (expected {!r} values, got {!r})'
|
||||
raise ValueError(msg.format(datatype, choice))
|
||||
|
||||
choice = frozenset(choice)
|
||||
if not choice:
|
||||
raise TypeError('missing choice')
|
||||
if not strict:
|
||||
return choice
|
||||
|
||||
for value in choice:
|
||||
if type(value) is not datatype:
|
||||
msg = 'bad choice (expected {!r} values, got {!r})'
|
||||
raise ValueError(msg.format(datatype, choice))
|
||||
return choice
|
||||
|
||||
def __new__(cls, datatype, choice, **kwargs):
|
||||
strict = kwargs.pop('strict', True)
|
||||
normalize = kwargs.pop('_normalize', True)
|
||||
(lambda: None)(**kwargs) # Make sure there aren't any other kwargs.
|
||||
|
||||
if not isinstance(datatype, type):
|
||||
raise ValueError('expected a class, got {!r}'.format(datatype))
|
||||
if datatype not in list(SIMPLE_TYPES):
|
||||
msg = 'only simple datatypes are supported, got {!r}'
|
||||
raise ValueError(msg.format(datatype))
|
||||
if normalize:
|
||||
# There's no need to normalize datatype (it's a simple type).
|
||||
pass
|
||||
choice = cls._check_choice(datatype, choice, strict=strict)
|
||||
|
||||
self = super(Enum, cls).__new__(cls, datatype, choice)
|
||||
return self
|
||||
|
||||
|
||||
class Union(tuple):
|
||||
"""Declare a union of different types.
|
||||
|
||||
The declared order is preserved and respected.
|
||||
|
||||
Sets and frozensets are treated as Unions in declarations.
|
||||
"""
|
||||
|
||||
@classmethod
|
||||
def unordered(cls, *datatypes, **kwargs):
|
||||
"""Return an unordered union of the given datatypes."""
|
||||
self = cls(*datatypes, **kwargs)
|
||||
self._ordered = False
|
||||
return self
|
||||
|
||||
@classmethod
|
||||
def _traverse(cls, datatypes, op):
|
||||
changed = False
|
||||
result = []
|
||||
for datatype in datatypes:
|
||||
transformed = op(datatype)
|
||||
if transformed is not datatype:
|
||||
changed = True
|
||||
result.append(transformed)
|
||||
return result, changed
|
||||
|
||||
def __new__(cls, *datatypes, **kwargs):
|
||||
normalize = kwargs.pop('_normalize', True)
|
||||
(lambda: None)(**kwargs) # Make sure there aren't any other kwargs.
|
||||
|
||||
datatypes = list(datatypes)
|
||||
if normalize:
|
||||
datatypes, _ = cls._traverse(
|
||||
datatypes,
|
||||
lambda dt: _transform_datatype(dt, _normalize_datatype),
|
||||
)
|
||||
self = super(Union, cls).__new__(cls, datatypes)
|
||||
self._simple = all(_is_simple(dt) for dt in datatypes)
|
||||
self._ordered = True
|
||||
return self
|
||||
|
||||
def __repr__(self):
|
||||
return '{}{}'.format(type(self).__name__, tuple(self))
|
||||
|
||||
def __hash__(self):
|
||||
return super(Union, self).__hash__()
|
||||
|
||||
def __eq__(self, other): # honors order
|
||||
if not isinstance(other, Union):
|
||||
return NotImplemented
|
||||
elif super(Union, self).__eq__(other):
|
||||
return True
|
||||
elif set(self) != set(other):
|
||||
return False
|
||||
elif self._simple and other._simple:
|
||||
return True
|
||||
elif not self._ordered and not other._ordered:
|
||||
return True
|
||||
else:
|
||||
return NotImplemented
|
||||
|
||||
def __ne__(self, other):
|
||||
return not (self == other)
|
||||
|
||||
@property
|
||||
def datatypes(self):
|
||||
return set(self)
|
||||
|
||||
def traverse(self, op, **kwargs):
|
||||
"""Return a copy with op applied to each contained datatype."""
|
||||
datatypes, changed = self._traverse(self, op)
|
||||
if not changed and not kwargs:
|
||||
return self
|
||||
updated = self.__class__(*datatypes, **kwargs)
|
||||
if not self._ordered:
|
||||
updated._ordered = False
|
||||
return updated
|
||||
|
||||
|
||||
class Array(Readonly):
|
||||
"""Declare an array (of a single type).
|
||||
|
||||
Lists and tuples (single-item) are treated equivalently
|
||||
in declarations.
|
||||
"""
|
||||
|
||||
def __init__(self, itemtype, _normalize=True):
|
||||
if _normalize:
|
||||
itemtype = _transform_datatype(itemtype, _normalize_datatype)
|
||||
self._bind_attrs(
|
||||
itemtype=itemtype,
|
||||
)
|
||||
|
||||
def __repr__(self):
|
||||
return '{}(itemtype={!r})'.format(type(self).__name__, self.itemtype)
|
||||
|
||||
def __hash__(self):
|
||||
return hash(self.itemtype)
|
||||
|
||||
def __eq__(self, other):
|
||||
try:
|
||||
other_itemtype = other.itemtype
|
||||
except AttributeError:
|
||||
return False
|
||||
return self.itemtype == other_itemtype
|
||||
|
||||
def __ne__(self, other):
|
||||
return not (self == other)
|
||||
|
||||
def traverse(self, op, **kwargs):
|
||||
"""Return a copy with op applied to the item datatype."""
|
||||
datatype = op(self.itemtype)
|
||||
if datatype is self.itemtype and not kwargs:
|
||||
return self
|
||||
return self.__class__(datatype, **kwargs)
|
||||
|
||||
|
||||
class Mapping(Readonly):
|
||||
"""Declare a mapping (to a single type)."""
|
||||
|
||||
def __init__(self, valuetype, keytype=str, _normalize=True):
|
||||
if _normalize:
|
||||
keytype = _transform_datatype(keytype, _normalize_datatype)
|
||||
valuetype = _transform_datatype(valuetype, _normalize_datatype)
|
||||
self._bind_attrs(
|
||||
keytype=keytype,
|
||||
valuetype=valuetype,
|
||||
)
|
||||
|
||||
def __repr__(self):
|
||||
if self.keytype is str:
|
||||
return '{}(valuetype={!r})'.format(
|
||||
type(self).__name__, self.valuetype)
|
||||
else:
|
||||
return '{}(keytype={!r}, valuetype={!r})'.format(
|
||||
type(self).__name__, self.keytype, self.valuetype)
|
||||
|
||||
def __hash__(self):
|
||||
return hash((self.keytype, self.valuetype))
|
||||
|
||||
def __eq__(self, other):
|
||||
try:
|
||||
other_keytype = other.keytype
|
||||
other_valuetype = other.valuetype
|
||||
except AttributeError:
|
||||
return False
|
||||
if self.keytype != other_keytype:
|
||||
return False
|
||||
if self.valuetype != other_valuetype:
|
||||
return False
|
||||
return True
|
||||
|
||||
def __ne__(self, other):
|
||||
return not (self == other)
|
||||
|
||||
def traverse(self, op, **kwargs):
|
||||
"""Return a copy with op applied to the item datatype."""
|
||||
keytype = op(self.keytype)
|
||||
valuetype = op(self.valuetype)
|
||||
if (keytype is self.keytype and
|
||||
valuetype is self.valuetype and
|
||||
not kwargs
|
||||
):
|
||||
return self
|
||||
return self.__class__(valuetype, keytype, **kwargs)
|
||||
|
||||
|
||||
class Field(namedtuple('Field', 'name datatype default optional')):
|
||||
"""Declare a field in a data map param."""
|
||||
|
||||
START_OPTIONAL = sentinel('START_OPTIONAL')
|
||||
|
||||
def __new__(cls, name, datatype=str, enum=None, default=NOT_SET,
|
||||
optional=False, _normalize=True, **kwargs):
|
||||
if enum is not None and not isinstance(enum, Enum):
|
||||
datatype = Enum(datatype, enum)
|
||||
enum = None
|
||||
|
||||
if _normalize:
|
||||
datatype = _transform_datatype(datatype, _normalize_datatype)
|
||||
self = super(Field, cls).__new__(
|
||||
cls,
|
||||
name=str(name) if name else None,
|
||||
datatype=datatype,
|
||||
default=default,
|
||||
optional=bool(optional),
|
||||
)
|
||||
self._kwargs = kwargs.items()
|
||||
return self
|
||||
|
||||
@property
|
||||
def kwargs(self):
|
||||
return dict(self._kwargs)
|
||||
|
||||
def traverse(self, op, **kwargs):
|
||||
"""Return a copy with op applied to the datatype."""
|
||||
datatype = op(self.datatype)
|
||||
if datatype is self.datatype and not kwargs:
|
||||
return self
|
||||
kwargs.setdefault('default', self.default)
|
||||
kwargs.setdefault('optional', self.optional)
|
||||
return self.__class__(self.name, datatype, **kwargs)
|
||||
|
||||
|
||||
class Fields(Readonly, Sequence):
|
||||
"""Declare a set of fields."""
|
||||
|
||||
@classmethod
|
||||
def _iter_fixed(cls, fields, _normalize=True):
|
||||
optional = None
|
||||
for field in fields or ():
|
||||
if field is Field.START_OPTIONAL:
|
||||
if optional is not None:
|
||||
raise RuntimeError('START_OPTIONAL used more than once')
|
||||
optional = True
|
||||
continue
|
||||
|
||||
if not isinstance(field, Field):
|
||||
raise TypeError('got non-field {!r}'.format(field))
|
||||
if _normalize:
|
||||
field = _transform_datatype(field, _normalize_datatype)
|
||||
if optional is not None and field.optional is not optional:
|
||||
field = field._replace(optional=optional)
|
||||
yield field
|
||||
|
||||
def __init__(self, *fields, **kwargs):
|
||||
fields = list(self._iter_fixed(fields, **kwargs))
|
||||
self._bind_attrs(
|
||||
_fields=fields,
|
||||
)
|
||||
|
||||
def __repr__(self):
|
||||
return '{}(*{})'.format(type(self).__name__, self._fields)
|
||||
|
||||
def __hash__(self):
|
||||
return hash(tuple(self))
|
||||
|
||||
def __eq__(self, other):
|
||||
try:
|
||||
other_len = len(other)
|
||||
other_iter = iter(other)
|
||||
except TypeError:
|
||||
return False
|
||||
if len(self) != other_len:
|
||||
return False
|
||||
for i, item in enumerate(other_iter):
|
||||
if self[i] != item:
|
||||
return False
|
||||
return True
|
||||
|
||||
def __ne__(self, other):
|
||||
return not (self == other)
|
||||
|
||||
def __len__(self):
|
||||
return len(self._fields)
|
||||
|
||||
def __getitem__(self, index):
|
||||
return self._fields[index]
|
||||
|
||||
def as_dict(self):
|
||||
return {field.name: field for field in self._fields}
|
||||
|
||||
def traverse(self, op, **kwargs):
|
||||
"""Return a copy with op applied to each field."""
|
||||
changed = False
|
||||
updated = []
|
||||
for field in self._fields:
|
||||
transformed = op(field)
|
||||
if transformed is not field:
|
||||
changed = True
|
||||
updated.append(transformed)
|
||||
|
||||
if not changed and not kwargs:
|
||||
return self
|
||||
kwargs['_normalize'] = False
|
||||
return self.__class__(*updated, **kwargs)
|
||||
|
|
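Taken together, `Field`, `Fields`, and `Field.START_OPTIONAL` support declarations like the following sketch (the field names are invented; the pattern mirrors how the message classes later in this diff declare their `FIELDS`):

```python
from debugger_protocol.arg import Field
from debugger_protocol.arg._decl import Fields

BREAKPOINT_FIELDS = Fields(
    Field('id', int),
    Field('verified', bool),
    Field.START_OPTIONAL,            # everything after this is optional
    Field('message'),                # datatype defaults to str
    Field('lines', [int]),           # a list declaration normalizes to Array(int)
    Field('kind', enum={'source', 'function'}),
)

assert BREAKPOINT_FIELDS.as_dict()['message'].optional
```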
@@ -1,32 +0,0 @@

class ArgumentError(TypeError):
    """The base class for argument-related exceptions."""


class ArgMissingError(ArgumentError):
    """Indicates that the argument for the field is missing."""

    def __init__(self, field):
        super(ArgMissingError, self).__init__(
            'missing arg {!r}'.format(field.name))
        self.field = field


class IncompleteArgError(ArgumentError):
    """Indicates that the "complex" arg has missing fields."""

    def __init__(self, fields, missing):
        msg = 'incomplete arg (missing or incomplete fields: {})'
        super(IncompleteArgError, self).__init__(
            msg.format(', '.join(sorted(missing))))
        self.fields = fields
        self.missing = missing


class ArgTypeMismatchError(ArgumentError):
    """Indicates that the arg did not have the expected type."""

    def __init__(self, value):
        super(ArgTypeMismatchError, self).__init__(
            'bad value {!r} (unsupported type)'.format(value))
        self.value = value
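A sketch of how these exceptions surface when binding field values (the `Position` namespace is invented for illustration; only the imported names come from this package):

```python
from debugger_protocol.arg import ArgumentError, Field, FieldsNamespace


class Position(FieldsNamespace):
    FIELDS = [
        Field('line', int),
        Field('column', int),
    ]


try:
    Position(line='ten', column=2)   # 'line' has the wrong type
except ArgumentError as exc:         # ArgTypeMismatchError is a subclass
    print('bad arg:', exc)
```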
@ -1,221 +0,0 @@
|
|||
from debugger_protocol._base import Readonly, WithRepr
|
||||
|
||||
|
||||
class _ParameterBase(WithRepr):
|
||||
|
||||
def __init__(self, datatype):
|
||||
self._datatype = datatype
|
||||
|
||||
def _init_args(self):
|
||||
yield ('datatype', self._datatype)
|
||||
|
||||
def __hash__(self):
|
||||
try:
|
||||
return hash(self._datatype)
|
||||
except TypeError:
|
||||
return hash(id(self))
|
||||
|
||||
def __eq__(self, other):
|
||||
if type(self) is not type(other):
|
||||
return NotImplemented
|
||||
return self._datatype == other._datatype
|
||||
|
||||
def __ne__(self, other):
|
||||
return not (self == other)
|
||||
|
||||
@property
|
||||
def datatype(self):
|
||||
return self._datatype
|
||||
|
||||
|
||||
class Parameter(_ParameterBase):
|
||||
"""Base class for different parameter types."""
|
||||
|
||||
def __init__(self, datatype, handler=None):
|
||||
super(Parameter, self).__init__(datatype)
|
||||
self._handler = handler
|
||||
|
||||
def _init_args(self):
|
||||
for item in super(Parameter, self)._init_args():
|
||||
yield item
|
||||
if self._handler is not None:
|
||||
yield ('handler', self._handler)
|
||||
|
||||
def bind(self, raw):
|
||||
"""Return an Arg for the given raw value.
|
||||
|
||||
As with match_type(), if the value is not supported by this
|
||||
parameter, return None.
|
||||
"""
|
||||
handler = self.match_type(raw)
|
||||
if handler is None:
|
||||
return None
|
||||
return Arg(self, raw, handler)
|
||||
|
||||
def match_type(self, raw):
|
||||
"""Return the datatype handler to use for the given raw value.
|
||||
|
||||
If the value does not match then return None.
|
||||
"""
|
||||
return self._handler
|
||||
|
||||
|
||||
class DatatypeHandler(_ParameterBase):
|
||||
"""Base class for datatype handlers."""
|
||||
|
||||
def coerce(self, raw):
|
||||
"""Return the deserialized equivalent of the given raw value."""
|
||||
# By default this is a noop.
|
||||
return raw
|
||||
|
||||
def validate(self, coerced):
|
||||
"""Ensure that the already-deserialized value is correct."""
|
||||
# By default this is a noop.
|
||||
return
|
||||
|
||||
def as_data(self, coerced):
|
||||
"""Return a serialized equivalent of the given value.
|
||||
|
||||
This method round-trips with the "coerce()" method.
|
||||
"""
|
||||
# By default this is a noop.
|
||||
return coerced
|
||||
|
||||
|
||||
class Arg(Readonly, WithRepr):
|
||||
"""The bridge between a raw value and a deserialized one.
|
||||
|
||||
This is primarily the product of Parameter.bind().
|
||||
"""
|
||||
# The value of this type lies in encapsulating intermediate state
|
||||
# and in caching data.
|
||||
|
||||
def __init__(self, param, value, handler=None, israw=True):
|
||||
if not isinstance(param, Parameter):
|
||||
raise TypeError(
|
||||
'bad param (expected Parameter, got {!r})'.format(param))
|
||||
if handler is None:
|
||||
if israw:
|
||||
handler = param.match_type(value)
|
||||
else:
|
||||
raise TypeError('missing handler')
|
||||
if not isinstance(handler, DatatypeHandler):
|
||||
msg = 'bad handler (expected DatatypeHandler, got {!r})'
|
||||
raise TypeError(msg.format(handler))
|
||||
|
||||
key = '_raw' if israw else '_value'
|
||||
kwargs = {key: value}
|
||||
self._bind_attrs(
|
||||
param=param,
|
||||
_handler=handler,
|
||||
_validated=False,
|
||||
**kwargs
|
||||
)
|
||||
|
||||
def _init_args(self):
|
||||
yield ('param', self.param)
|
||||
israw = True
|
||||
try:
|
||||
yield ('value', self._raw)
|
||||
except AttributeError:
|
||||
yield ('value', self._value)
|
||||
israw = False
|
||||
if self.datatype != self.param.datatype:
|
||||
yield ('handler', self._handler)
|
||||
if not israw:
|
||||
yield ('israw', False)
|
||||
|
||||
def __hash__(self):
|
||||
try:
|
||||
return hash(self.datatype)
|
||||
except TypeError:
|
||||
return hash(id(self))
|
||||
|
||||
def __eq__(self, other):
|
||||
if type(self) is not type(other):
|
||||
return False
|
||||
if self.param != other.param:
|
||||
return False
|
||||
return self._as_data() == other._as_data()
|
||||
|
||||
def __ne__(self, other):
|
||||
return not (self == other)
|
||||
|
||||
@property
|
||||
def datatype(self):
|
||||
return self._handler.datatype
|
||||
|
||||
@property
|
||||
def raw(self):
|
||||
"""The serialized value."""
|
||||
return self.as_data()
|
||||
|
||||
@property
|
||||
def value(self):
|
||||
"""The de-serialized value."""
|
||||
value = self.coerce()
|
||||
if not self._validated:
|
||||
self._validate()
|
||||
return value
|
||||
|
||||
def coerce(self, cached=True):
|
||||
"""Return the deserialized equivalent of the raw value."""
|
||||
if not cached:
|
||||
try:
|
||||
raw = self._raw
|
||||
except AttributeError:
|
||||
# Use the cached value anyway.
|
||||
return self._value
|
||||
else:
|
||||
return self._handler.coerce(raw)
|
||||
|
||||
try:
|
||||
return self._value
|
||||
except AttributeError:
|
||||
value = self._handler.coerce(self._raw)
|
||||
self._bind_attrs(
|
||||
_value=value,
|
||||
)
|
||||
return value
|
||||
|
||||
def validate(self, force=False):
|
||||
"""Ensure that the (deserialized) value is correct.
|
||||
|
||||
If the value has a "validate()" method then it gets called.
|
||||
Otherwise it's up to the handler.
|
||||
"""
|
||||
if not self._validated or force:
|
||||
self.coerce()
|
||||
self._validate()
|
||||
|
||||
def _validate(self):
|
||||
try:
|
||||
validate = self._value.validate
|
||||
except AttributeError:
|
||||
self._handler.validate(self._value)
|
||||
else:
|
||||
validate()
|
||||
self._bind_attrs(
|
||||
_validated=True,
|
||||
)
|
||||
|
||||
def as_data(self, cached=True):
|
||||
"""Return a serialized equivalent of the value."""
|
||||
self.validate()
|
||||
if not cached:
|
||||
return self._handler.as_data(self._value)
|
||||
return self._as_data()
|
||||
|
||||
def _as_data(self):
|
||||
try:
|
||||
return self._raw
|
||||
except AttributeError:
|
||||
try:
|
||||
as_data = self._value.as_data
|
||||
except AttributeError:
|
||||
as_data = self._handler.as_data
|
||||
raw = as_data(self._value)
|
||||
self._bind_attrs(
|
||||
_raw=raw,
|
||||
)
|
||||
return raw
|
||||
|
|
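A sketch of the Parameter/Arg flow defined above, using the base `DatatypeHandler` (whose coerce/validate/as_data are no-ops); the value is illustrative:

```python
from debugger_protocol.arg._param import DatatypeHandler, Parameter

param = Parameter(str, DatatypeHandler(str))

arg = param.bind('stopOnEntry')  # returns an Arg wrapping the raw value
print(arg.value)                 # 'stopOnEntry' (coerced, then validated)
print(arg.as_data())             # 'stopOnEntry' (serialized back)
```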
@ -1,476 +0,0 @@
|
|||
from ._common import NOT_SET, ANY, SIMPLE_TYPES
|
||||
from ._datatype import FieldsNamespace
|
||||
from ._decl import Enum, Union, Array, Mapping, Field, Fields
|
||||
from ._errors import ArgTypeMismatchError
|
||||
from ._param import Parameter, DatatypeHandler
|
||||
|
||||
|
||||
#def as_parameter(cls):
|
||||
# """Return a parameter that wraps the given FieldsNamespace subclass."""
|
||||
# # XXX inject_params
|
||||
# cls.normalize(_inject_params)
|
||||
# param = param_from_datatype(cls)
|
||||
## cls.PARAM = param
|
||||
# return param
|
||||
#
|
||||
#
|
||||
#def _inject_params(datatype):
|
||||
# return param_from_datatype(datatype)
|
||||
|
||||
|
||||
def param_from_datatype(datatype, **kwargs):
|
||||
"""Return a parameter for the given datatype."""
|
||||
if isinstance(datatype, Parameter):
|
||||
return datatype
|
||||
|
||||
if isinstance(datatype, DatatypeHandler):
|
||||
return Parameter(datatype.datatype, datatype, **kwargs)
|
||||
elif isinstance(datatype, Fields):
|
||||
return ComplexParameter(datatype, **kwargs)
|
||||
elif isinstance(datatype, Field):
|
||||
return param_from_datatype(datatype.datatype, **kwargs)
|
||||
elif datatype is ANY:
|
||||
return NoopParameter()
|
||||
elif datatype is None:
|
||||
return SingletonParameter(None)
|
||||
elif datatype in list(SIMPLE_TYPES):
|
||||
return SimpleParameter(datatype, **kwargs)
|
||||
elif isinstance(datatype, Enum):
|
||||
return EnumParameter(datatype.datatype, datatype.choice, **kwargs)
|
||||
elif isinstance(datatype, Union):
|
||||
return UnionParameter(datatype, **kwargs)
|
||||
elif isinstance(datatype, (set, frozenset)):
|
||||
return UnionParameter(Union(*datatype), **kwargs)
|
||||
elif isinstance(datatype, Array):
|
||||
return ArrayParameter(datatype, **kwargs)
|
||||
elif isinstance(datatype, (list, tuple)):
|
||||
datatype, = datatype
|
||||
return ArrayParameter(Array(datatype), **kwargs)
|
||||
elif not isinstance(datatype, type):
|
||||
raise NotImplementedError
|
||||
elif issubclass(datatype, FieldsNamespace):
|
||||
param = datatype.param()
|
||||
return param or ComplexParameter(datatype, **kwargs)
|
||||
else:
|
||||
raise NotImplementedError
|
||||
|
||||
|
||||
########################
|
||||
# param types
|
||||
|
||||
class NoopParameter(Parameter):
|
||||
"""A parameter that treats any value as-is."""
|
||||
def __init__(self):
|
||||
handler = DatatypeHandler(ANY)
|
||||
super(NoopParameter, self).__init__(ANY, handler)
|
||||
|
||||
|
||||
NOOP = NoopParameter()
|
||||
|
||||
|
||||
class SingletonParameter(Parameter):
|
||||
"""A parameter that works only for the given value."""
|
||||
|
||||
class HANDLER(DatatypeHandler):
|
||||
def validate(self, coerced):
|
||||
if coerced is not self.datatype:
|
||||
raise ValueError(
|
||||
'expected {!r}, got {!r}'.format(self.datatype, coerced))
|
||||
|
||||
def __init__(self, obj):
|
||||
handler = self.HANDLER(obj)
|
||||
super(SingletonParameter, self).__init__(obj, handler)
|
||||
|
||||
def match_type(self, raw):
|
||||
# Note we do not check equality for singletons.
|
||||
if raw is not self.datatype:
|
||||
return None
|
||||
return super(SingletonParameter, self).match_type(raw)
|
||||
|
||||
|
||||
class SimpleHandler(DatatypeHandler):
|
||||
"""A datatype handler for basic value types."""
|
||||
|
||||
def __init__(self, cls):
|
||||
if not isinstance(cls, type):
|
||||
raise ValueError('expected a class, got {!r}'.format(cls))
|
||||
super(SimpleHandler, self).__init__(cls)
|
||||
|
||||
def coerce(self, raw):
|
||||
if type(raw) is self.datatype:
|
||||
return raw
|
||||
return self.datatype(raw)
|
||||
|
||||
def validate(self, coerced):
|
||||
if type(coerced) is not self.datatype:
|
||||
raise ValueError(
|
||||
'expected {!r}, got {!r}'.format(self.datatype, coerced))
|
||||
|
||||
|
||||
class SimpleParameter(Parameter):
|
||||
"""A parameter for basic value types."""
|
||||
|
||||
HANDLER = SimpleHandler
|
||||
|
||||
def __init__(self, cls, strict=True):
|
||||
handler = self.HANDLER(cls)
|
||||
super(SimpleParameter, self).__init__(cls, handler)
|
||||
self._strict = strict
|
||||
|
||||
def match_type(self, raw):
|
||||
if self._strict:
|
||||
if type(raw) is not self.datatype:
|
||||
return None
|
||||
elif not isinstance(raw, self.datatype):
|
||||
return None
|
||||
return super(SimpleParameter, self).match_type(raw)
|
||||
|
||||
|
||||
class EnumParameter(Parameter):
|
||||
"""A parameter for enums of basic value types."""
|
||||
|
||||
class HANDLER(SimpleHandler):
|
||||
|
||||
def __init__(self, cls, enum):
|
||||
if not enum:
|
||||
raise TypeError('missing enum')
|
||||
super(EnumParameter.HANDLER, self).__init__(cls)
|
||||
if not callable(enum):
|
||||
enum = set(enum)
|
||||
self.enum = enum
|
||||
|
||||
def validate(self, coerced):
|
||||
super(EnumParameter.HANDLER, self).validate(coerced)
|
||||
|
||||
if not self._match_enum(coerced):
|
||||
msg = 'expected one of {!r}, got {!r}'
|
||||
raise ValueError(msg.format(self.enum, coerced))
|
||||
|
||||
def _match_enum(self, coerced):
|
||||
if callable(self.enum):
|
||||
if not self.enum(coerced):
|
||||
return False
|
||||
elif coerced not in self.enum:
|
||||
return False
|
||||
return True
|
||||
|
||||
def __init__(self, cls, enum):
|
||||
handler = self.HANDLER(cls, enum)
|
||||
super(EnumParameter, self).__init__(cls, handler)
|
||||
self._match_enum = handler._match_enum
|
||||
|
||||
def match_type(self, raw):
|
||||
if type(raw) is not self.datatype:
|
||||
return None
|
||||
if not self._match_enum(raw):
|
||||
return None
|
||||
return super(EnumParameter, self).match_type(raw)
|
||||
|
||||
|
||||
class UnionParameter(Parameter):
|
||||
"""A parameter that supports multiple different types."""
|
||||
|
||||
HANDLER = None # no handler
|
||||
|
||||
@classmethod
|
||||
def from_datatypes(cls, *datatypes, **kwargs):
|
||||
datatype = Union(*datatypes)
|
||||
return cls(datatype, **kwargs)
|
||||
|
||||
def __init__(self, datatype, **kwargs):
|
||||
if not isinstance(datatype, Union):
|
||||
raise ValueError('expected Union, got {!r}'.format(datatype))
|
||||
super(UnionParameter, self).__init__(datatype)
|
||||
|
||||
choice = []
|
||||
for dt in datatype:
|
||||
param = param_from_datatype(dt)
|
||||
choice.append(param)
|
||||
self.choice = choice
|
||||
|
||||
def __eq__(self, other):
|
||||
if type(self) is not type(other):
|
||||
return False
|
||||
return set(self.datatype) == set(other.datatype)
|
||||
|
||||
def match_type(self, raw):
|
||||
for param in self.choice:
|
||||
handler = param.match_type(raw)
|
||||
if handler is not None:
|
||||
return handler
|
||||
return None
|
||||
|
||||
|
||||
class ArrayParameter(Parameter):
|
||||
"""A parameter that is a list of some fixed type."""
|
||||
|
||||
class HANDLER(DatatypeHandler):
|
||||
|
||||
def __init__(self, datatype, handlers=None, itemparam=None):
|
||||
if not isinstance(datatype, Array):
|
||||
raise ValueError(
|
||||
'expected an Array, got {!r}'.format(datatype))
|
||||
super(ArrayParameter.HANDLER, self).__init__(datatype)
|
||||
self.handlers = handlers
|
||||
self.itemparam = itemparam
|
||||
|
||||
def coerce(self, raw):
|
||||
if self.handlers is None:
|
||||
if self.itemparam is None:
|
||||
itemtype = self.datatype.itemtype
|
||||
self.itemparam = param_from_datatype(itemtype)
|
||||
handlers = []
|
||||
for item in raw:
|
||||
handler = self.itemparam.match_type(item)
|
||||
if handler is None:
|
||||
raise ArgTypeMismatchError(item)
|
||||
handlers.append(handler)
|
||||
self.handlers = handlers
|
||||
|
||||
result = []
|
||||
for i, item in enumerate(raw):
|
||||
handler = self.handlers[i]
|
||||
item = handler.coerce(item)
|
||||
result.append(item)
|
||||
return result
|
||||
|
||||
def validate(self, coerced):
|
||||
if self.handlers is None:
|
||||
raise TypeError('coerce first')
|
||||
for i, item in enumerate(coerced):
|
||||
handler = self.handlers[i]
|
||||
handler.validate(item)
|
||||
|
||||
def as_data(self, coerced):
|
||||
if self.handlers is None:
|
||||
raise TypeError('coerce first')
|
||||
data = []
|
||||
for i, item in enumerate(coerced):
|
||||
handler = self.handlers[i]
|
||||
datum = handler.as_data(item)
|
||||
data.append(datum)
|
||||
return data
|
||||
|
||||
@classmethod
|
||||
def from_itemtype(cls, itemtype, **kwargs):
|
||||
datatype = Array(itemtype)
|
||||
return cls(datatype, **kwargs)
|
||||
|
||||
def __init__(self, datatype):
|
||||
if not isinstance(datatype, Array):
|
||||
raise ValueError('expected Array, got {!r}'.format(datatype))
|
||||
itemparam = param_from_datatype(datatype.itemtype)
|
||||
handler = self.HANDLER(datatype, None, itemparam)
|
||||
super(ArrayParameter, self).__init__(datatype, handler)
|
||||
|
||||
self.itemparam = itemparam
|
||||
|
||||
def match_type(self, raw):
|
||||
if not isinstance(raw, list):
|
||||
return None
|
||||
handlers = []
|
||||
for item in raw:
|
||||
handler = self.itemparam.match_type(item)
|
||||
if handler is None:
|
||||
return None
|
||||
handlers.append(handler)
|
||||
return self.HANDLER(self.datatype, handlers)
|
||||
|
||||
|
||||
class MappingParameter(Parameter):
|
||||
"""A parameter that is a mapping of some fixed type."""
|
||||
|
||||
class HANDLER(DatatypeHandler):
|
||||
|
||||
def __init__(self, datatype, handlers=None,
|
||||
keyparam=None, valueparam=None):
|
||||
if not isinstance(datatype, Mapping):
|
||||
raise ValueError(
|
||||
'expected a Mapping, got {!r}'.format(datatype))
|
||||
super(MappingParameter.HANDLER, self).__init__(datatype)
|
||||
self.handlers = handlers
|
||||
self.keyparam = keyparam
|
||||
self.valueparam = valueparam
|
||||
|
||||
def coerce(self, raw):
|
||||
if self.handlers is None:
|
||||
if self.keyparam is None:
|
||||
keytype = self.datatype.keytype
|
||||
self.keyparam = param_from_datatype(keytype)
|
||||
if self.valueparam is None:
|
||||
valuetype = self.datatype.valuetype
|
||||
self.valueparam = param_from_datatype(valuetype)
|
||||
handlers = {}
|
||||
for key, value in raw.items():
|
||||
keyhandler = self.keyparam.match_type(key)
|
||||
if keyhandler is None:
|
||||
raise ArgTypeMismatchError(key)
|
||||
valuehandler = self.valueparam.match_type(value)
|
||||
if valuehandler is None:
|
||||
raise ArgTypeMismatchError(value)
|
||||
handlers[key] = (keyhandler, valuehandler)
|
||||
self.handlers = handlers
|
||||
|
||||
result = {}
|
||||
for key, value in raw.items():
|
||||
keyhandler, valuehandler = self.handlers[key]
|
||||
key = keyhandler.coerce(key)
|
||||
value = valuehandler.coerce(value)
|
||||
result[key] = value
|
||||
return result
|
||||
|
||||
def validate(self, coerced):
|
||||
if self.handlers is None:
|
||||
raise TypeError('coerce first')
|
||||
for key, value in coerced.items():
|
||||
keyhandler, valuehandler = self.handlers[key]
|
||||
keyhandler.validate(key)
|
||||
valuehandler.validate(value)
|
||||
|
||||
def as_data(self, coerced):
|
||||
if self.handlers is None:
|
||||
raise TypeError('coerce first')
|
||||
data = {}
|
||||
for key, value in coerced.items():
|
||||
keyhandler, valuehandler = self.handlers[key]
|
||||
key = keyhandler.as_data(key)
|
||||
value = valuehandler.as_data(value)
|
||||
data[key] = value
|
||||
return data
|
||||
|
||||
@classmethod
|
||||
def from_valuetype(cls, valuetype, keytype=str, **kwargs):
|
||||
datatype = Mapping(valuetype, keytype)
|
||||
return cls(datatype, **kwargs)
|
||||
|
||||
def __init__(self, datatype):
|
||||
if not isinstance(datatype, Mapping):
|
||||
raise ValueError('expected Mapping, got {!r}'.format(datatype))
|
||||
keyparam = param_from_datatype(datatype.keytype)
|
||||
valueparam = param_from_datatype(datatype.valuetype)
|
||||
handler = self.HANDLER(datatype, None, keyparam, valueparam)
|
||||
super(MappingParameter, self).__init__(datatype, handler)
|
||||
|
||||
self.keyparam = keyparam
|
||||
self.valueparam = valueparam
|
||||
|
||||
def match_type(self, raw):
|
||||
if not isinstance(raw, dict):
|
||||
return None
|
||||
handlers = {}
|
||||
for key, value in raw.items():
|
||||
keyhandler = self.keyparam.match_type(key)
|
||||
if keyhandler is None:
|
||||
return None
|
||||
valuehandler = self.valueparam.match_type(value)
|
||||
if valuehandler is None:
|
||||
return None
|
||||
handlers[key] = (keyhandler, valuehandler)
|
||||
return self.HANDLER(self.datatype, handlers)
|
||||
|
||||
|
||||
class ComplexParameter(Parameter):
|
||||
|
||||
class HANDLER(DatatypeHandler):
|
||||
|
||||
def __init__(self, datatype, handlers=None):
|
||||
if (type(datatype) is not type or
|
||||
not issubclass(datatype, FieldsNamespace)
|
||||
):
|
||||
msg = 'expected FieldsNamespace, got {!r}'
|
||||
raise ValueError(msg.format(datatype))
|
||||
super(ComplexParameter.HANDLER, self).__init__(datatype)
|
||||
self.handlers = handlers
|
||||
|
||||
def coerce(self, raw):
|
||||
if self.handlers is None:
|
||||
fields = self.datatype.FIELDS.as_dict()
|
||||
handlers = {}
|
||||
for name, value in raw.items():
|
||||
param = param_from_datatype(fields[name])
|
||||
handler = param.match_type(value)
|
||||
if handler is None:
|
||||
raise ArgTypeMismatchError((name, value))
|
||||
handlers[name] = handler
|
||||
self.handlers = handlers
|
||||
|
||||
result = {}
|
||||
for name, value in raw.items():
|
||||
handler = self.handlers[name]
|
||||
value = handler.coerce(value)
|
||||
result[name] = value
|
||||
return self.datatype(**result)
|
||||
|
||||
def validate(self, coerced):
|
||||
if self.handlers is None:
|
||||
raise TypeError('coerce first')
|
||||
for field in self.datatype.FIELDS:
|
||||
try:
|
||||
value = getattr(coerced, field.name)
|
||||
except AttributeError:
|
||||
continue
|
||||
handler = self.handlers[field.name]
|
||||
handler.validate(value)
|
||||
|
||||
def as_data(self, coerced):
|
||||
if self.handlers is None:
|
||||
raise TypeError('coerce first')
|
||||
data = {}
|
||||
for field in self.datatype.FIELDS:
|
||||
try:
|
||||
value = getattr(coerced, field.name)
|
||||
except AttributeError:
|
||||
continue
|
||||
handler = self.handlers[field.name]
|
||||
datum = handler.as_data(value)
|
||||
data[field.name] = datum
|
||||
return data
|
||||
|
||||
def __init__(self, datatype):
|
||||
if isinstance(datatype, Fields):
|
||||
class ArgNamespace(FieldsNamespace):
|
||||
FIELDS = datatype
|
||||
|
||||
datatype = ArgNamespace
|
||||
elif (type(datatype) is not type or
|
||||
not issubclass(datatype, FieldsNamespace)):
|
||||
msg = 'expected Fields or FieldsNamespace, got {!r}'
|
||||
raise ValueError(msg.format(datatype))
|
||||
datatype.normalize()
|
||||
datatype.PARAM = self
|
||||
# We set handler later in match_type().
|
||||
super(ComplexParameter, self).__init__(datatype)
|
||||
|
||||
self.params = {field.name: param_from_datatype(field)
|
||||
for field in datatype.FIELDS}
|
||||
|
||||
def __eq__(self, other):
|
||||
if super(ComplexParameter, self).__eq__(other):
|
||||
return True
|
||||
try:
|
||||
fields = self._datatype.FIELDS
|
||||
other_fields = other._datatype.FIELDS
|
||||
except AttributeError:
|
||||
return NotImplemented
|
||||
else:
|
||||
return fields == other_fields
|
||||
|
||||
def match_type(self, raw):
|
||||
if not isinstance(raw, dict):
|
||||
return None
|
||||
handlers = {}
|
||||
for field in self.datatype.FIELDS:
|
||||
try:
|
||||
value = raw[field.name]
|
||||
except KeyError:
|
||||
if not field.optional:
|
||||
return None
|
||||
value = field.default
|
||||
if value is NOT_SET:
|
||||
continue
|
||||
param = self.params[field.name]
|
||||
handler = param.match_type(value)
|
||||
if handler is None:
|
||||
return None
|
||||
handlers[field.name] = handler
|
||||
return self.HANDLER(self.datatype, handlers)
|
||||
|
|
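To tie the declaration layer to the parameter layer, a sketch of `param_from_datatype` in action (values chosen purely for illustration):

```python
from debugger_protocol.arg import param_from_datatype

param = param_from_datatype([int])  # a list declaration yields an ArrayParameter

arg = param.bind([1, 2, 3])         # matches: every item is an int
print(arg.value)                    # [1, 2, 3]

print(param.bind(['a', 'b']))       # None: the items do not match int
```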
@@ -1,76 +0,0 @@

MESSAGE_TYPES = {}
MESSAGE_TYPE_KEYS = {}


def register(cls, msgtype=None, typekey=None, key=None):
    """Add the message class to the registry.

    The class is also fixed up if necessary.
    """
    if not isinstance(cls, type):
        raise RuntimeError('may not be used as a decorator factory.')

    if msgtype is None:
        if cls.TYPE is None:
            raise RuntimeError('class missing TYPE')
        msgtype = cls.TYPE
    if typekey is None:
        if cls.TYPE_KEY is None:
            raise RuntimeError('class missing TYPE_KEY')
        typekey = cls.TYPE_KEY
    if key is None:
        key = getattr(cls, typekey,
                      getattr(cls, typekey.upper(), None))
        if not key:
            raise RuntimeError('missing type key attribute')

    try:
        registered = MESSAGE_TYPES[msgtype]
    except KeyError:
        registered = MESSAGE_TYPES[msgtype] = {}
        MESSAGE_TYPE_KEYS[msgtype] = typekey
    else:
        if typekey != MESSAGE_TYPE_KEYS[msgtype]:
            msg = 'mismatch on TYPE_KEY ({!r} != {!r})'
            raise RuntimeError(
                msg.format(typekey, MESSAGE_TYPE_KEYS[msgtype]))

    if key in registered:
        raise RuntimeError('{}:{} already registered'.format(msgtype, key))
    registered[key] = cls

    # XXX init args

    return cls


def look_up(data):
    """Return the message class for the given raw message data."""
    msgtype = data['type']
    typekey = MESSAGE_TYPE_KEYS[msgtype]
    key = data[typekey]
    return MESSAGE_TYPES[msgtype][key]


class Message(object):
    """The API for register-able message types."""

    TYPE = None
    TYPE_KEY = None

    @classmethod
    def from_data(cls, **kwargs):
        """Return an instance based on the given raw data."""
        raise NotImplementedError

    def as_data(self):
        """Return serializable data for the instance."""
        raise NotImplementedError


# Force registration.
from .message import ProtocolMessage, Request, Response, Event  # noqa
from .requests import *  # noqa
from .events import *  # noqa
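A sketch of how the registry is meant to be used (the `ExampleEvent` class and its fields are invented; real message classes live in `.message`, `.requests`, and `.events`, and this assumes the module above is the `debugger_protocol.messages` package):

```python
from debugger_protocol.messages import Message, register, look_up


class ExampleEvent(Message):      # hypothetical event type
    TYPE = 'event'
    TYPE_KEY = 'event'
    EVENT = 'example'

    @classmethod
    def from_data(cls, **kwargs):
        return cls()

    def as_data(self):
        return {'type': self.TYPE, 'event': self.EVENT, 'seq': 0}


register(ExampleEvent)

cls = look_up({'type': 'event', 'event': 'example', 'seq': 0})
assert cls is ExampleEvent
```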
@@ -1,336 +0,0 @@
from debugger_protocol.arg import FieldsNamespace, Field, Enum
from .shared import Checksum, Source


class Message(FieldsNamespace):
    """A structured message object.

    Used to return errors from requests.
    """
    FIELDS = [
        Field('id', int),
        Field('format'),
        Field.START_OPTIONAL,
        Field('variables', {str: str}),
        Field('sendTelemetry', bool),
        Field('showUser', bool),
        Field('url'),
        Field('urlLabel'),
    ]
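For context, a sketch of what instantiating one of these field namespaces looks like (the values are illustrative):

```python
err = Message(
    id=1,
    format='Unable to set breakpoint at {path}',
    showUser=True,
)
print(err.format)     # attribute access to the bound fields
print(err.as_data())  # plain dict, ready to be serialized
```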
||||
|
||||
class ExceptionBreakpointsFilter(FieldsNamespace):
|
||||
"""
|
||||
An ExceptionBreakpointsFilter is shown in the UI as an option for
|
||||
configuring how exceptions are dealt with.
|
||||
"""
|
||||
|
||||
FIELDS = [
|
||||
Field('filter'),
|
||||
Field('label'),
|
||||
Field.START_OPTIONAL,
|
||||
Field('default', bool),
|
||||
]
|
||||
|
||||
|
||||
class ColumnDescriptor(FieldsNamespace):
|
||||
"""
|
||||
A ColumnDescriptor specifies what module attribute to show in a
|
||||
column of the ModulesView, how to format it, and what the column's
|
||||
label should be. It is only used if the underlying UI actually
|
||||
supports this level of customization.
|
||||
"""
|
||||
|
||||
TYPES = {"string", "number", "boolean", "unixTimestampUTC"}
|
||||
FIELDS = [
|
||||
Field('attributeName'),
|
||||
Field('label'),
|
||||
Field.START_OPTIONAL,
|
||||
Field('format'),
|
||||
Field('type'),
|
||||
Field('width', int),
|
||||
]
|
||||
|
||||
|
||||
class Capabilities(FieldsNamespace):
|
||||
"""Information about the capabilities of a debug adapter."""
|
||||
|
||||
FIELDS = [
|
||||
Field.START_OPTIONAL,
|
||||
Field('supportsConfigurationDoneRequest', bool),
|
||||
Field('supportsFunctionBreakpoints', bool),
|
||||
Field('supportsConditionalBreakpoints', bool),
|
||||
Field('supportsHitConditionalBreakpoints', bool),
|
||||
Field('supportsEvaluateForHovers', bool),
|
||||
Field('exceptionBreakpointFilters', [ExceptionBreakpointsFilter]),
|
||||
Field('supportsStepBack', bool),
|
||||
Field('supportsSetVariable', bool),
|
||||
Field('supportsRestartFrame', bool),
|
||||
Field('supportsGotoTargetsRequest', bool),
|
||||
Field('supportsStepInTargetsRequest', bool),
|
||||
Field('supportsCompletionsRequest', bool),
|
||||
Field('supportsModulesRequest', bool),
|
||||
Field('additionalModuleColumns', [ColumnDescriptor]),
|
||||
Field('supportedChecksumAlgorithms', [Enum(str, Checksum.ALGORITHMS)]),
|
||||
Field('supportsRestartRequest', bool),
|
||||
Field('supportsExceptionOptions', bool),
|
||||
Field('supportsValueFormattingOptions', bool),
|
||||
Field('supportsExceptionInfoRequest', bool),
|
||||
Field('supportsLogPoints', bool),
|
||||
Field('supportTerminateDebuggee', bool),
|
||||
Field('supportsDelayedStackTraceLoading', bool),
|
||||
Field('supportsLoadedSourcesRequest', bool),
|
||||
Field('supportsSetExpression', bool),
|
||||
Field('supportsModulesRequest', bool),
|
||||
Field('supportsDebuggerProperties', bool),
|
||||
Field('supportsCompletionsRequest', bool),
|
||||
]
|
||||
|
||||
|
||||
class ModulesViewDescriptor(FieldsNamespace):
|
||||
"""
|
||||
The ModulesViewDescriptor is the container for all declarative
|
||||
configuration options of a ModuleView. For now it only specifies
|
||||
the columns to be shown in the modules view.
|
||||
"""
|
||||
|
||||
FIELDS = [
|
||||
Field('columns', [ColumnDescriptor]),
|
||||
]
|
||||
|
||||
|
||||
class Thread(FieldsNamespace):
|
||||
"""A thread."""
|
||||
|
||||
FIELDS = [
|
||||
Field('id', int),
|
||||
Field('name'),
|
||||
]
|
||||
|
||||
|
||||
class StackFrame(FieldsNamespace):
|
||||
"""A Stackframe contains the source location."""
|
||||
|
||||
PRESENTATION_HINTS = {"normal", "label", "subtle"}
|
||||
FIELDS = [
|
||||
Field('id', int),
|
||||
Field('name'),
|
||||
Field('source', Source, optional=True),
|
||||
Field('line', int),
|
||||
Field('column', int),
|
||||
Field.START_OPTIONAL,
|
||||
Field('endLine', int),
|
||||
Field('endColumn', int),
|
||||
Field("moduleId", {int, str}),
|
||||
Field('presentationHint'),
|
||||
]
|
||||
|
||||
|
||||
class Scope(FieldsNamespace):
|
||||
"""
|
||||
A Scope is a named container for variables. Optionally a scope
|
||||
can map to a source or a range within a source.
|
||||
"""
|
||||
|
||||
FIELDS = [
|
||||
Field('name'),
|
||||
Field('variablesReference', int),
|
||||
Field('namedVariables', int, optional=True),
|
||||
Field('indexedVariables', int, optional=True),
|
||||
Field('expensive', bool),
|
||||
Field.START_OPTIONAL,
|
||||
Field('source', Source),
|
||||
Field('line', int),
|
||||
Field('column', int),
|
||||
Field('endLine', int),
|
||||
Field('endColumn', int),
|
||||
]
|
||||
|
||||
|
||||
class VariablePresentationHint(FieldsNamespace):
|
||||
"""
|
||||
Optional properties of a variable that can be used to determine
|
||||
how to render the variable in the UI.
|
||||
"""
|
||||
|
||||
KINDS = {"property", "method", "class", "data", "event", "baseClass",
|
||||
"innerClass", "interface", "mostDerivedClass", "virtual"}
|
||||
ATTRIBUTES = {"static", "constant", "readOnly", "rawString",
|
||||
"hasObjectId", "canHaveObjectId", "hasSideEffects"}
|
||||
VISIBILITIES = {"public", "private", "protected", "internal", "final"}
|
||||
FIELDS = [
|
||||
Field.START_OPTIONAL,
|
||||
Field('kind', enum=KINDS),
|
||||
Field('attributes', [Enum(str, ATTRIBUTES)]),
|
||||
Field('visibility', enum=VISIBILITIES),
|
||||
]
|
||||
|
||||
|
||||
class Variable(FieldsNamespace):
|
||||
"""A Variable is a name/value pair.
|
||||
|
||||
Optionally a variable can have a 'type' that is shown if space
|
||||
permits or when hovering over the variable's name. An optional
|
||||
'kind' is used to render additional properties of the variable,
|
||||
e.g. different icons can be used to indicate that a variable is
|
||||
public or private. If the value is structured (has children), a
|
||||
handle is provided to retrieve the children with the
|
||||
VariablesRequest. If the number of named or indexed children is
|
||||
large, the numbers should be returned via the optional
|
||||
'namedVariables' and 'indexedVariables' attributes. The client can
|
||||
use this optional information to present the children in a paged UI
|
||||
and fetch them in chunks.
|
||||
"""
|
||||
|
||||
FIELDS = [
|
||||
Field('name'),
|
||||
Field('value'),
|
||||
Field.START_OPTIONAL,
|
||||
Field('type'),
|
||||
Field('presentationHint', VariablePresentationHint),
|
||||
Field('evaluateName'),
|
||||
Field('variablesReference', int, optional=False),
|
||||
Field('namedVariables', int),
|
||||
Field('indexedVariables', int),
|
||||
]
|
||||
|
||||
|
||||
class SourceBreakpoint(FieldsNamespace):
|
||||
"""Properties of a breakpoint passed to the setBreakpoints request."""
|
||||
|
||||
FIELDS = [
|
||||
Field('line', int),
|
||||
Field.START_OPTIONAL,
|
||||
Field('column', int),
|
||||
Field('condition'),
|
||||
Field('hitCondition'),
|
||||
]
|
||||
|
||||
|
||||
class FunctionBreakpoint(FieldsNamespace):
|
||||
"""
|
||||
Properties of a breakpoint passed to the setFunctionBreakpoints request.
|
||||
"""
|
||||
|
||||
FIELDS = [
|
||||
Field('name'),
|
||||
Field.START_OPTIONAL,
|
||||
Field('condition'),
|
||||
Field('hitCondition'),
|
||||
]
|
||||
|
||||
|
||||
class StepInTarget(FieldsNamespace):
|
||||
"""
|
||||
A StepInTarget can be used in the 'stepIn' request and determines
|
||||
into which single target the stepIn request should step.
|
||||
"""
|
||||
|
||||
FIELDS = [
|
||||
Field('id', int),
|
||||
Field('label'),
|
||||
]
|
||||
|
||||
|
||||
class GotoTarget(FieldsNamespace):
|
||||
"""
|
||||
A GotoTarget describes a code location that can be used as a target
|
||||
in the 'goto' request. The possible goto targets can be determined
|
||||
via the 'gotoTargets' request.
|
||||
"""
|
||||
|
||||
FIELDS = [
|
||||
Field('id', int),
|
||||
Field('label'),
|
||||
Field('line', int),
|
||||
Field.START_OPTIONAL,
|
||||
Field('column', int),
|
||||
Field('endLine', int),
|
||||
Field('endColumn', int),
|
||||
]
|
||||
|
||||
|
||||
class CompletionItem(FieldsNamespace):
|
||||
"""
|
||||
CompletionItems are the suggestions returned from the CompletionsRequest.
|
||||
"""
|
||||
|
||||
TYPES = {"method", "function", "constructor", "field", "variable",
|
||||
"class", "interface", "module", "property", "unit", "value",
|
||||
"enum", "keyword", "snippet", "text", "color", "file",
|
||||
"reference", "customcolor"}
|
||||
FIELDS = [
|
||||
Field('label'),
|
||||
Field.START_OPTIONAL,
|
||||
Field('text'),
|
||||
Field('type'),
|
||||
Field('start', int),
|
||||
Field('length', int),
|
||||
]
|
||||
|
||||
|
||||
class ValueFormat(FieldsNamespace):
|
||||
"""Provides formatting information for a value."""
|
||||
|
||||
FIELDS = [
|
||||
Field.START_OPTIONAL,
|
||||
Field('hex', bool),
|
||||
]
|
||||
|
||||
|
||||
class StackFrameFormat(ValueFormat):
|
||||
"""Provides formatting information for a stack frame."""
|
||||
|
||||
FIELDS = ValueFormat.FIELDS + [
|
||||
Field('parameters', bool),
|
||||
Field('parameterTypes', bool),
|
||||
Field('parameterNames', bool),
|
||||
Field('parameterValues', bool),
|
||||
Field('line', bool),
|
||||
Field('module', bool),
|
||||
Field('includeAll', bool),
|
||||
]
|
||||
|
||||
|
||||
class ExceptionPathSegment(FieldsNamespace):
|
||||
"""
|
||||
An ExceptionPathSegment represents a segment in a path that is used
|
||||
to match leafs or nodes in a tree of exceptions. If a segment
|
||||
consists of more than one name, it matches the names provided if
|
||||
'negate' is false or missing or it matches anything except the names
|
||||
provided if 'negate' is true.
|
||||
"""
|
||||
|
||||
FIELDS = [
|
||||
Field('negate', bool, optional=True),
|
||||
Field('names', [str]),
|
||||
]
|
||||
|
||||
|
||||
ExceptionBreakMode = Enum(str,
|
||||
{"never", "always", "unhandled", "userUnhandled"})
|
||||
|
||||
|
||||
class ExceptionOptions(FieldsNamespace):
|
||||
"""
|
||||
An ExceptionOptions assigns configuration options to a set of exceptions.
|
||||
"""
|
||||
|
||||
FIELDS = [
|
||||
Field('path', [ExceptionPathSegment], optional=True),
|
||||
Field('breakMode', ExceptionBreakMode),
|
||||
]
|
||||
|
||||
|
||||
class ExceptionDetails(FieldsNamespace):
|
||||
"""Detailed information about an exception that has occurred."""
|
||||
|
||||
FIELDS = [
|
||||
Field.START_OPTIONAL,
|
||||
Field('message'),
|
||||
Field('typeName'),
|
||||
Field('fullTypeName'),
|
||||
Field('evaluateName'),
|
||||
Field('stackTrace'),
|
||||
Field('innerException', ['<ref>']),
|
||||
]
|
||||
|
|
@@ -1,235 +0,0 @@
|
|||
from debugger_protocol.arg import ANY, FieldsNamespace, Field
|
||||
from . import register
|
||||
from .shared import Breakpoint, Module, Source
|
||||
from .message import Event
|
||||
|
||||
|
||||
@register
|
||||
class InitializedEvent(Event):
|
||||
""""Event message for 'initialized' event type.
|
||||
|
||||
This event indicates that the debug adapter is ready to accept
|
||||
configuration requests (e.g. SetBreakpointsRequest,
|
||||
SetExceptionBreakpointsRequest). A debug adapter is expected to
|
||||
send this event when it is ready to accept configuration requests
|
||||
(but not before the InitializeRequest has finished).
|
||||
|
||||
The sequence of events/requests is as follows:
|
||||
- adapters sends InitializedEvent (after the InitializeRequest
|
||||
has returned)
|
||||
- frontend sends zero or more SetBreakpointsRequest
|
||||
- frontend sends one SetFunctionBreakpointsRequest
|
||||
- frontend sends a SetExceptionBreakpointsRequest if one or more
|
||||
exceptionBreakpointFilters have been defined (or if
|
||||
supportsConfigurationDoneRequest is not defined or false)
|
||||
- frontend sends other future configuration requests
|
||||
- frontend sends one ConfigurationDoneRequest to indicate the end
|
||||
of the configuration
|
||||
"""
|
||||
|
||||
EVENT = 'initialized'
|
||||
|
||||
|
||||
@register
|
||||
class StoppedEvent(Event):
|
||||
"""Event message for 'stopped' event type.
|
||||
|
||||
The event indicates that the execution of the debuggee has stopped
|
||||
due to some condition. This can be caused by a break point
|
||||
previously set, a stepping action has completed, by executing a
|
||||
debugger statement etc.
|
||||
"""
|
||||
|
||||
EVENT = 'stopped'
|
||||
|
||||
class BODY(FieldsNamespace):
|
||||
REASONS = {'step', 'breakpoint', 'exception', 'pause', 'entry'}
|
||||
FIELDS = [
|
||||
Field('reason', enum=REASONS),
|
||||
Field.START_OPTIONAL,
|
||||
Field('description'),
|
||||
Field('threadId', int),
|
||||
Field('text'),
|
||||
Field('allThreadsStopped', bool),
|
||||
]
|
||||
|
||||
|
||||
@register
|
||||
class ContinuedEvent(Event):
|
||||
"""Event message for 'continued' event type.
|
||||
|
||||
The event indicates that the execution of the debuggee has
|
||||
continued.
|
||||
|
||||
Please note: a debug adapter is not expected to send this event
|
||||
in response to a request that implies that execution continues,
|
||||
e.g. 'launch' or 'continue'. It is only necessary to send a
|
||||
ContinuedEvent if there was no previous request that implied this.
|
||||
"""
|
||||
|
||||
EVENT = 'continued'
|
||||
|
||||
class BODY(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('threadId', int),
|
||||
Field.START_OPTIONAL,
|
||||
Field('allThreadsContinued', bool),
|
||||
]
|
||||
|
||||
|
||||
@register
|
||||
class ExitedEvent(Event):
|
||||
"""Event message for 'exited' event type.
|
||||
|
||||
The event indicates that the debuggee has exited.
|
||||
"""
|
||||
|
||||
EVENT = 'exited'
|
||||
|
||||
class BODY(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('exitCode', int),
|
||||
]
|
||||
|
||||
|
||||
@register
|
||||
class TerminatedEvent(Event):
|
||||
"""Event message for 'terminated' event types.
|
||||
|
||||
The event indicates that debugging of the debuggee has terminated.
|
||||
"""
|
||||
|
||||
EVENT = 'terminated'
|
||||
|
||||
class BODY(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field.START_OPTIONAL,
|
||||
Field('restart', ANY),
|
||||
]
|
||||
|
||||
|
||||
@register
|
||||
class ThreadEvent(Event):
|
||||
"""Event message for 'thread' event type.
|
||||
|
||||
The event indicates that a thread has started or exited.
|
||||
"""
|
||||
|
||||
EVENT = 'thread'
|
||||
|
||||
class BODY(FieldsNamespace):
|
||||
REASONS = {'started', 'exited'}
|
||||
FIELDS = [
|
||||
Field('threadId', int),
|
||||
Field('reason', enum=REASONS),
|
||||
]
|
||||
|
||||
|
||||
@register
|
||||
class OutputEvent(Event):
|
||||
"""Event message for 'output' event type.
|
||||
|
||||
The event indicates that the target has produced some output.
|
||||
"""
|
||||
|
||||
EVENT = 'output'
|
||||
|
||||
class BODY(FieldsNamespace):
|
||||
CATEGORIES = {'console', 'stdout', 'stderr', 'telemetry'}
|
||||
FIELDS = [
|
||||
Field('output'),
|
||||
Field.START_OPTIONAL,
|
||||
Field('category', enum=CATEGORIES),
|
||||
Field('variablesReference', int), # "number"
|
||||
Field('source'),
|
||||
Field('line', int),
|
||||
Field('column', int),
|
||||
Field('data', ANY),
|
||||
]
|
||||
|
||||
|
||||
@register
|
||||
class BreakpointEvent(Event):
|
||||
"""Event message for 'breakpoint' event type.
|
||||
|
||||
The event indicates that some information about a breakpoint
|
||||
has changed.
|
||||
"""
|
||||
|
||||
EVENT = 'breakpoint'
|
||||
|
||||
class BODY(FieldsNamespace):
|
||||
REASONS = {'changed', 'new', 'removed'}
|
||||
FIELDS = [
|
||||
Field('breakpoint', Breakpoint),
|
||||
Field('reason', enum=REASONS),
|
||||
]
|
||||
|
||||
|
||||
@register
|
||||
class ModuleEvent(Event):
|
||||
"""Event message for 'module' event type.
|
||||
|
||||
The event indicates that some information about a module
|
||||
has changed.
|
||||
"""
|
||||
|
||||
EVENT = 'module'
|
||||
|
||||
class BODY(FieldsNamespace):
|
||||
REASONS = {'new', 'changed', 'removed'}
|
||||
FIELDS = [
|
||||
Field('module', Module),
|
||||
Field('reason', enum=REASONS),
|
||||
]
|
||||
|
||||
|
||||
@register
|
||||
class LoadedSourceEvent(Event):
|
||||
"""Event message for 'loadedSource' event type.
|
||||
|
||||
The event indicates that some source has been added, changed, or
|
||||
removed from the set of all loaded sources.
|
||||
"""
|
||||
|
||||
EVENT = 'loadedSource'
|
||||
|
||||
class BODY(FieldsNamespace):
|
||||
REASONS = {'new', 'changed', 'removed'}
|
||||
FIELDS = [
|
||||
Field('source', Source),
|
||||
Field('reason', enum=REASONS),
|
||||
]
|
||||
|
||||
|
||||
@register
|
||||
class ProcessEvent(Event):
|
||||
"""Event message for 'process' event type.
|
||||
|
||||
The event indicates that the debugger has begun debugging a new
|
||||
process. Either one that it has launched, or one that it has
|
||||
attached to.
|
||||
"""
|
||||
|
||||
EVENT = 'process'
|
||||
|
||||
class BODY(FieldsNamespace):
|
||||
START_METHODS = {'launch', 'attach', 'attachForSuspendedLaunch'}
|
||||
FIELDS = [
|
||||
Field('name'),
|
||||
Field.START_OPTIONAL,
|
||||
Field('systemProcessId', int),
|
||||
Field('isLocalProcess', bool),
|
||||
Field('startMethod', enum=START_METHODS),
|
||||
]
|
||||
|
||||
|
||||
# Clean up the implicit __all__.
|
||||
del register
|
||||
del Event
|
||||
del FieldsNamespace
|
||||
del Field
|
||||
del ANY
|
||||
del Breakpoint
|
||||
del Module
|
||||
del Source
|
||||
|
|
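As a rough illustration of how these removed event classes were used (a sketch only; the `debugger_protocol.messages.events` import path is an assumption): each subclass pins `EVENT`, validates its `body` against the nested `BODY` namespace, and serializes with `as_data()`.

```
from debugger_protocol.messages.events import OutputEvent

evt = OutputEvent(body={'output': 'hello from the debuggee\n',
                        'category': 'stdout'})
data = evt.as_data()
# data is a plain dict: {'type': 'event', 'event': 'output', 'seq': ...,
#                        'body': {...}} -- ready for JSON serialization.
```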
@@ -1,366 +0,0 @@
|
|||
from debugger_protocol._base import Readonly, WithRepr
|
||||
from debugger_protocol.arg import param_from_datatype
|
||||
from . import MESSAGE_TYPES, Message
|
||||
|
||||
"""
|
||||
From the schema:
|
||||
|
||||
MESSAGE = [
|
||||
name
|
||||
base
|
||||
description
|
||||
props: [PROPERTY + (properties: [PROPERTY])]
|
||||
]
|
||||
|
||||
PROPERTY = [
|
||||
name
|
||||
type: choices (one or a list)
|
||||
(enum/_enum)
|
||||
description
|
||||
required: True/False (default: False)
|
||||
]
|
||||
|
||||
inheritance: override properties of base
|
||||
"""
|
||||
|
||||
|
||||
class ProtocolMessage(Readonly, WithRepr, Message):
|
||||
"""Base class of requests, responses, and events."""
|
||||
|
||||
_reqid = 0
|
||||
TYPE = None
|
||||
|
||||
@classmethod
|
||||
def from_data(cls, type, seq, **kwargs):
|
||||
"""Return an instance based on the given raw data."""
|
||||
return cls(type=type, seq=seq, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def _next_reqid(cls):
|
||||
reqid = ProtocolMessage._reqid
|
||||
ProtocolMessage._reqid += 1
|
||||
return reqid
|
||||
|
||||
_NOT_SET = object()
|
||||
|
||||
def __init__(self, seq=_NOT_SET, **kwargs):
|
||||
type = kwargs.pop('type', self.TYPE)
|
||||
if seq is self._NOT_SET:
|
||||
seq = self._next_reqid()
|
||||
self._bind_attrs(
|
||||
type=type or None,
|
||||
seq=int(seq) if seq or seq == 0 else None,
|
||||
)
|
||||
self._validate()
|
||||
|
||||
def _validate(self):
|
||||
if self.type is None:
|
||||
raise TypeError('missing type')
|
||||
elif self.TYPE is not None and self.type != self.TYPE:
|
||||
raise ValueError('type must be {!r}'.format(self.TYPE))
|
||||
elif self.type not in MESSAGE_TYPES:
|
||||
raise ValueError('unsupported type {!r}'.format(self.type))
|
||||
|
||||
if self.seq is None:
|
||||
raise TypeError('missing seq')
|
||||
elif self.seq < 0:
|
||||
msg = '"seq" must be a non-negative int, got {!r}'
|
||||
raise ValueError(msg.format(self.seq))
|
||||
|
||||
def _init_args(self):
|
||||
if self.TYPE is None:
|
||||
yield ('type', self.type)
|
||||
yield ('seq', self.seq)
|
||||
|
||||
def as_data(self):
|
||||
"""Return serializable data for the instance."""
|
||||
data = {
|
||||
'type': self.type,
|
||||
'seq': self.seq,
|
||||
}
|
||||
return data
|
||||
|
||||
|
||||
##################################
|
||||
|
||||
class Request(ProtocolMessage):
|
||||
"""A client or server-initiated request."""
|
||||
|
||||
TYPE = 'request'
|
||||
TYPE_KEY = 'command'
|
||||
|
||||
COMMAND = None
|
||||
ARGUMENTS = None
|
||||
ARGUMENTS_REQUIRED = None
|
||||
|
||||
@classmethod
|
||||
def from_data(cls, type, seq, command, arguments=None):
|
||||
"""Return an instance based on the given raw data."""
|
||||
return super(Request, cls).from_data(
|
||||
type, seq,
|
||||
command=command,
|
||||
arguments=arguments,
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def _arguments_required(cls):
|
||||
if cls.ARGUMENTS_REQUIRED is None:
|
||||
return cls.ARGUMENTS is not None
|
||||
return cls.ARGUMENTS_REQUIRED
|
||||
|
||||
def __init__(self, arguments=None, **kwargs):
|
||||
command = kwargs.pop('command', self.COMMAND)
|
||||
args = None
|
||||
if arguments is not None:
|
||||
try:
|
||||
arguments = dict(arguments)
|
||||
except TypeError:
|
||||
pass
|
||||
if self.ARGUMENTS is not None:
|
||||
param = param_from_datatype(self.ARGUMENTS)
|
||||
args = param.bind(arguments)
|
||||
if args is None:
|
||||
raise TypeError('bad arguments {!r}'.format(arguments))
|
||||
arguments = args.coerce()
|
||||
self._bind_attrs(
|
||||
command=command or None,
|
||||
arguments=arguments or None,
|
||||
_args=args,
|
||||
)
|
||||
super(Request, self).__init__(**kwargs)
|
||||
|
||||
def _validate(self):
|
||||
super(Request, self)._validate()
|
||||
|
||||
if self.command is None:
|
||||
raise TypeError('missing command')
|
||||
elif self.COMMAND is not None and self.command != self.COMMAND:
|
||||
raise ValueError('command must be {!r}'.format(self.COMMAND))
|
||||
|
||||
if self.arguments is None:
|
||||
if self._arguments_required():
|
||||
raise TypeError('missing arguments')
|
||||
else:
|
||||
if self.ARGUMENTS is None:
|
||||
raise TypeError('got unexpected arguments')
|
||||
self._args.validate()
|
||||
|
||||
def _init_args(self):
|
||||
if self.COMMAND is None:
|
||||
yield ('command', self.command)
|
||||
if self.arguments is not None:
|
||||
yield ('arguments', self.arguments)
|
||||
yield ('seq', self.seq)
|
||||
|
||||
def as_data(self):
|
||||
"""Return serializable data for the instance."""
|
||||
data = super(Request, self).as_data()
|
||||
data.update({
|
||||
'command': self.command,
|
||||
})
|
||||
if self.arguments is not None:
|
||||
data.update({
|
||||
'arguments': self.arguments.as_data(),
|
||||
})
|
||||
return data
|
||||
|
||||
|
||||
class Response(ProtocolMessage):
|
||||
"""Response to a request."""
|
||||
|
||||
TYPE = 'response'
|
||||
TYPE_KEY = 'command'
|
||||
|
||||
COMMAND = None
|
||||
BODY = None
|
||||
ERROR_BODY = None
|
||||
BODY_REQUIRED = None
|
||||
ERROR_BODY_REQUIRED = None
|
||||
|
||||
@classmethod
|
||||
def from_data(cls, type, seq, request_seq, command, success,
|
||||
body=None, message=None):
|
||||
"""Return an instance based on the given raw data."""
|
||||
return super(Response, cls).from_data(
|
||||
type, seq,
|
||||
request_seq=request_seq,
|
||||
command=command,
|
||||
success=success,
|
||||
body=body,
|
||||
message=message,
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def _body_required(cls, success=True):
|
||||
required = cls.BODY_REQUIRED if success else cls.ERROR_BODY_REQUIRED
|
||||
if required is not None:
|
||||
return required
|
||||
bodyclass = cls.BODY if success else cls.ERROR_BODY
|
||||
return bodyclass is not None
|
||||
|
||||
def __init__(self, request_seq, body=None, message=None, success=True,
|
||||
**kwargs):
|
||||
command = kwargs.pop('command', self.COMMAND)
|
||||
reqseq = request_seq
|
||||
bodyarg = None
|
||||
if body is not None:
|
||||
try:
|
||||
body = dict(body)
|
||||
except TypeError:
|
||||
pass
|
||||
bodyclass = self.BODY if success else self.ERROR_BODY
|
||||
if bodyclass is not None:
|
||||
param = param_from_datatype(bodyclass)
|
||||
bodyarg = param.bind(body)
|
||||
if bodyarg is None:
|
||||
raise TypeError('bad body type {!r}'.format(body))
|
||||
body = bodyarg.coerce()
|
||||
self._bind_attrs(
|
||||
command=command or None,
|
||||
request_seq=int(reqseq) if reqseq or reqseq == 0 else None,
|
||||
body=body or None,
|
||||
_bodyarg=bodyarg,
|
||||
message=message or None,
|
||||
success=bool(success),
|
||||
)
|
||||
super(Response, self).__init__(**kwargs)
|
||||
|
||||
def _validate(self):
|
||||
super(Response, self)._validate()
|
||||
|
||||
if self.request_seq is None:
|
||||
raise TypeError('missing request_seq')
|
||||
elif self.request_seq < 0:
|
||||
msg = 'request_seq must be a non-negative int, got {!r}'
|
||||
raise ValueError(msg.format(self.request_seq))
|
||||
|
||||
if not self.command:
|
||||
raise TypeError('missing command')
|
||||
elif self.COMMAND is not None and self.command != self.COMMAND:
|
||||
raise ValueError('command must be {!r}'.format(self.COMMAND))
|
||||
|
||||
if self.body is None:
|
||||
if self._body_required(self.success):
|
||||
raise TypeError('missing body')
|
||||
elif self._bodyarg is None:
|
||||
raise ValueError('got unexpected body')
|
||||
else:
|
||||
self._bodyarg.validate()
|
||||
|
||||
if not self.success and not self.message:
|
||||
raise TypeError('missing message')
|
||||
|
||||
def _init_args(self):
|
||||
if self.COMMAND is None:
|
||||
yield ('command', self.command)
|
||||
yield ('request_seq', self.request_seq)
|
||||
yield ('success', self.success)
|
||||
if not self.success:
|
||||
yield ('message', self.message)
|
||||
if self.body is not None:
|
||||
yield ('body', self.body)
|
||||
yield ('seq', self.seq)
|
||||
|
||||
def as_data(self):
|
||||
"""Return serializable data for the instance."""
|
||||
data = super(Response, self).as_data()
|
||||
data.update({
|
||||
'request_seq': self.request_seq,
|
||||
'command': self.command,
|
||||
'success': self.success,
|
||||
})
|
||||
if self.body is not None:
|
||||
data.update({
|
||||
'body': self.body.as_data(),
|
||||
})
|
||||
if self.message is not None:
|
||||
data.update({
|
||||
'message': self.message,
|
||||
})
|
||||
return data
|
||||
|
||||
|
||||
##################################
|
||||
|
||||
class Event(ProtocolMessage):
|
||||
"""Server-initiated event."""
|
||||
|
||||
TYPE = 'event'
|
||||
TYPE_KEY = 'event'
|
||||
|
||||
EVENT = None
|
||||
BODY = None
|
||||
BODY_REQUIRED = None
|
||||
|
||||
@classmethod
|
||||
def from_data(cls, type, seq, event, body=None):
|
||||
"""Return an instance based on the given raw data."""
|
||||
return super(Event, cls).from_data(type, seq, event=event, body=body)
|
||||
|
||||
@classmethod
|
||||
def _body_required(cls):
|
||||
if cls.BODY_REQUIRED is None:
|
||||
return cls.BODY is not None
|
||||
return cls.BODY_REQUIRED
|
||||
|
||||
def __init__(self, body=None, **kwargs):
|
||||
event = kwargs.pop('event', self.EVENT)
|
||||
bodyarg = None
|
||||
if body is not None:
|
||||
try:
|
||||
body = dict(body)
|
||||
except TypeError:
|
||||
pass
|
||||
if self.BODY is not None:
|
||||
param = param_from_datatype(self.BODY)
|
||||
bodyarg = param.bind(body)
|
||||
if bodyarg is None:
|
||||
raise TypeError('bad body type {!r}'.format(body))
|
||||
body = bodyarg.coerce()
|
||||
|
||||
self._bind_attrs(
|
||||
event=event or None,
|
||||
body=body or None,
|
||||
_bodyarg=bodyarg,
|
||||
)
|
||||
super(Event, self).__init__(**kwargs)
|
||||
|
||||
def _validate(self):
|
||||
super(Event, self)._validate()
|
||||
|
||||
if self.event is None:
|
||||
raise TypeError('missing event')
|
||||
if self.EVENT is not None and self.event != self.EVENT:
|
||||
msg = 'event must be {!r}, got {!r}'
|
||||
raise ValueError(msg.format(self.EVENT, self.event))
|
||||
|
||||
if self.body is None:
|
||||
if self._body_required():
|
||||
raise TypeError('missing body')
|
||||
elif self._bodyarg is None:
|
||||
raise ValueError('got unexpected body')
|
||||
else:
|
||||
self._bodyarg.validate()
|
||||
|
||||
def _init_args(self):
|
||||
if self.EVENT is None:
|
||||
yield ('event', self.event)
|
||||
if self.body is not None:
|
||||
yield ('body', self.body)
|
||||
yield ('seq', self.seq)
|
||||
|
||||
@property
|
||||
def name(self):
|
||||
return self.event
|
||||
|
||||
def as_data(self):
|
||||
"""Return serializable data for the instance."""
|
||||
data = super(Event, self).as_data()
|
||||
data.update({
|
||||
'event': self.event,
|
||||
})
|
||||
if self.body is not None:
|
||||
data.update({
|
||||
'body': self.body.as_data(),
|
||||
})
|
||||
return data
|
||||
(File diff suppressed because it is too large.)
|
|
@@ -1,85 +0,0 @@
|
|||
from debugger_protocol.arg import ANY, FieldsNamespace, Field
|
||||
|
||||
|
||||
class Checksum(FieldsNamespace):
|
||||
"""The checksum of an item calculated by the specified algorithm."""
|
||||
|
||||
ALGORITHMS = {'MD5', 'SHA1', 'SHA256', 'timestamp'}
|
||||
|
||||
FIELDS = [
|
||||
Field('algorithm', enum=ALGORITHMS),
|
||||
Field('checksum'),
|
||||
]
|
||||
|
||||
|
||||
class Source(FieldsNamespace):
|
||||
"""A Source is a descriptor for source code.
|
||||
|
||||
It is returned from the debug adapter as part of a StackFrame
|
||||
and it is used by clients when specifying breakpoints.
|
||||
"""
|
||||
|
||||
HINTS = {'normal', 'emphasize', 'deemphasize'}
|
||||
|
||||
FIELDS = [
|
||||
Field.START_OPTIONAL,
|
||||
Field('name'),
|
||||
Field('path'),
|
||||
Field('sourceReference', int), # number
|
||||
Field('presentationHint', enum=HINTS),
|
||||
Field('origin'),
|
||||
Field('sources', ['<ref>']),
|
||||
Field('adapterData', ANY),
|
||||
Field('checksums', [Checksum]),
|
||||
]
|
||||
|
||||
|
||||
class Breakpoint(FieldsNamespace):
|
||||
"""Information about a Breakpoint.
|
||||
|
||||
The breakpoint comes from setBreakpoints or setFunctionBreakpoints.
|
||||
"""
|
||||
|
||||
FIELDS = [
|
||||
Field('id', int, optional=True),
|
||||
Field('verified', bool),
|
||||
Field.START_OPTIONAL,
|
||||
Field('message'),
|
||||
Field('source', Source),
|
||||
Field('line', int),
|
||||
Field('column', int),
|
||||
Field('endLine', int),
|
||||
Field('endColumn', int),
|
||||
]
|
||||
|
||||
|
||||
class Module(FieldsNamespace):
|
||||
"""A Module object represents a row in the modules view.
|
||||
|
||||
Two attributes are mandatory: an id identifies a module in the
|
||||
modules view and is used in a ModuleEvent for identifying a module
|
||||
for adding, updating or deleting. The name is used to minimally
|
||||
render the module in the UI.
|
||||
|
||||
Additional attributes can be added to the module. They will show up
|
||||
in the module View if they have a corresponding ColumnDescriptor.
|
||||
|
||||
To avoid an unnecessary proliferation of additional attributes with
|
||||
similar semantics but different names we recommend to re-use
|
||||
attributes from the 'recommended' list below first, and only
|
||||
introduce new attributes if nothing appropriate could be found.
|
||||
"""
|
||||
|
||||
FIELDS = [
|
||||
Field('id', {int, str}),
|
||||
Field('name'),
|
||||
Field.START_OPTIONAL,
|
||||
Field('path'),
|
||||
Field('isOptimized', bool),
|
||||
Field('isUserCode', bool),
|
||||
Field('version'),
|
||||
Field('symbolStatus'),
|
||||
Field('symbolFilePath'),
|
||||
Field('dateTimeStamp'),
|
||||
Field('addressRange'),
|
||||
]
|
||||
|
|
@@ -1,57 +0,0 @@
import json

from . import look_up


def read(stream, look_up=look_up):
    """Return an instance based on the given bytes."""
    headers = {}
    for line in stream:
        if line == b'\r\n':
            break
        assert(line.endswith(b'\r\n'))
        line = line[:-2].decode('ascii')
        try:
            name, value = line.split(': ', 1)
        except ValueError:
            raise RuntimeError('invalid header line: {}'.format(line))
        headers[name] = value
    else:
        # EOF
        return None

    if not headers:
        raise RuntimeError('got message without headers')

    size = int(headers['Content-Length'])
    body = stream.read(size)

    data = json.loads(body.decode('utf-8'))

    cls = look_up(data)
    return cls.from_data(**data)


def write(stream, msg):
    """Serialize the message and write it to the stream."""
    raw = as_bytes(msg)
    stream.write(raw)


def as_bytes(msg):
    """Return the raw bytes for the message."""
    headers, body = _as_http_data(msg)
    headers = '\r\n'.join('{}: {}'.format(name, value)
                          for name, value in headers.items())
    return headers.encode('ascii') + b'\r\n\r\n' + body.encode('utf-8')


def _as_http_data(msg):
    payload = msg.as_data()
    body = json.dumps(payload)

    headers = {
        'Content-Length': len(body),
        'Content-Type': 'application/json',
    }
    return headers, body

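The removed module above implements the protocol's wire framing: HTTP-style headers, a blank line, then a JSON body. A self-contained sketch of the same framing using only the standard library (illustrative, not tied to the removed APIs):

```
import io
import json

# Frame a payload the way as_bytes() did.
payload = {'type': 'request', 'seq': 1, 'command': 'threads'}
body = json.dumps(payload)
header = 'Content-Length: {}\r\nContent-Type: application/json\r\n\r\n'.format(len(body))
raw = header.encode('ascii') + body.encode('utf-8')

# Parse it back the way read() did.
stream = io.BytesIO(raw)
headers = {}
for line in stream:
    if line == b'\r\n':
        break
    name, value = line[:-2].decode('ascii').split(': ', 1)
    headers[name] = value
data = json.loads(stream.read(int(headers['Content-Length'])).decode('utf-8'))
assert data == payload
```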
@@ -1,4 +0,0 @@
upstream: https://github.com/Microsoft/vscode-debugadapter-node/raw/master/debugProtocol.json
revision: 1b9a3e83656ebb88f4560bb6d700f9ac64b697ba
checksum: 55a768cf61fe0c05c8af6c680b56154a
downloaded: 2018-09-06 14:00:00 (UTC)

@@ -1,4 +0,0 @@
import os.path


DATA_DIR = os.path.dirname(__file__)

@@ -1,103 +0,0 @@
|
|||
import argparse
|
||||
import sys
|
||||
|
||||
from ._util import open_url
|
||||
from .metadata import open_metadata
|
||||
from .upstream import URL as UPSTREAM, download
|
||||
from .vendored import FILENAME as VENDORED, check_local, check_upstream
|
||||
|
||||
|
||||
COMMANDS = {}
|
||||
|
||||
|
||||
def as_command(name):
|
||||
def decorator(f):
|
||||
COMMANDS[name] = f
|
||||
return f
|
||||
return decorator
|
||||
|
||||
|
||||
@as_command('download')
|
||||
def handle_download(source=UPSTREAM, target=VENDORED, *,
|
||||
_open=open, _open_url=open_url):
|
||||
# Download the schema file.
|
||||
print('downloading the schema file from {}...'.format(source))
|
||||
with _open_url(source) as infile:
|
||||
with _open(target, 'wb') as outfile:
|
||||
meta = download(source, infile, outfile,
|
||||
_open_url=_open_url)
|
||||
print('...schema file written to {}.'.format(target))
|
||||
|
||||
# Save the metadata.
|
||||
print('saving the schema metadata...')
|
||||
formatted = meta.format()
|
||||
metafile, filename = open_metadata(target, 'w',
|
||||
_open=_open)
|
||||
with metafile:
|
||||
metafile.write(formatted)
|
||||
print('...metadata written to {}.'.format(filename))
|
||||
|
||||
|
||||
@as_command('check')
|
||||
def handle_check(schemafile=VENDORED, upstream=None, *,
|
||||
_open=open, _open_url=open_url):
|
||||
print('checking local schema file...')
|
||||
try:
|
||||
check_local(schemafile,
|
||||
_open=_open)
|
||||
except Exception as exc:
|
||||
sys.exit('ERROR: {}'.format(exc))
|
||||
print('comparing with upstream schema file...')
|
||||
try:
|
||||
check_upstream(schemafile, url=upstream,
|
||||
_open=_open, _open_url=_open_url)
|
||||
except Exception as exc:
|
||||
sys.exit('ERROR: {}'.format(exc))
|
||||
print('schema file okay')
|
||||
|
||||
|
||||
#############################
|
||||
# the script
|
||||
|
||||
def parse_args(argv=sys.argv[1:], prog=None):
|
||||
if prog is None:
|
||||
if __name__ == '__main__':
|
||||
module = __spec__.name
|
||||
pkg, _, mod = module.rpartition('.')
|
||||
if not pkg:
|
||||
module = mod
|
||||
elif mod == '__main__':
|
||||
module = pkg
|
||||
prog = 'python3 -m {}'.format(module)
|
||||
else:
|
||||
prog = sys.argv[0]
|
||||
|
||||
parser = argparse.ArgumentParser(
|
||||
prog=prog,
|
||||
description='Manage the vendored VSC debugger protocol schema.',
|
||||
)
|
||||
subs = parser.add_subparsers(dest='command')
|
||||
|
||||
download = subs.add_parser('download')
|
||||
download.add_argument('--source', default=UPSTREAM)
|
||||
download.add_argument('--target', default=VENDORED)
|
||||
|
||||
check = subs.add_parser('check')
|
||||
check.add_argument('--schemafile', default=VENDORED)
|
||||
check.add_argument('--upstream', default=None)
|
||||
|
||||
args = parser.parse_args(argv)
|
||||
if args.command is None:
|
||||
parser.print_help()
|
||||
parser.exit()
|
||||
return args
|
||||
|
||||
|
||||
def main(command, **kwargs):
|
||||
handle_command = COMMANDS[command]
|
||||
return handle_command(**kwargs)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
args = parse_args()
|
||||
main(**(vars(args)))
|
||||
|
|
@@ -1,60 +0,0 @@
|
|||
import hashlib
|
||||
import json
|
||||
import re
|
||||
import urllib.request
|
||||
|
||||
|
||||
def open_url(url):
|
||||
"""Return a file-like object for (binary) reading the given URL."""
|
||||
return urllib.request.urlopen(url)
|
||||
|
||||
|
||||
def get_revision(url, *, _open_url=open_url):
|
||||
"""Return the revision corresponding to the given URL."""
|
||||
if url.startswith('https://github.com/'):
|
||||
return github_get_revision(url, _open_url=_open_url)
|
||||
else:
|
||||
return '<unknown>'
|
||||
|
||||
|
||||
def get_checksum(data):
|
||||
"""Return the MD5 hash for the given data."""
|
||||
m = hashlib.md5()
|
||||
m.update(data)
|
||||
return m.hexdigest()
|
||||
|
||||
|
||||
##################################
|
||||
# github
|
||||
|
||||
GH_RESOURCE_RE = re.compile(r'^https://github.com'
|
||||
r'/(?P<org>[^/]*)'
|
||||
r'/(?P<repo>[^/]*)'
|
||||
r'/(?P<kind>[^/]*)'
|
||||
r'/(?P<rev>[^/]*)'
|
||||
r'/(?P<path>.*)$')
|
||||
|
||||
|
||||
def github_get_revision(url, *, _open_url=open_url):
|
||||
"""Return the full commit hash corresponding to the given URL."""
|
||||
m = GH_RESOURCE_RE.match(url)
|
||||
if not m:
|
||||
raise ValueError('invalid GitHub resource URL: {!r}'.format(url))
|
||||
org, repo, _, ref, path = m.groups()
|
||||
|
||||
revurl = ('https://api.github.com/repos/{}/{}/commits?sha={}&path={}'
|
||||
).format(org, repo, ref, path)
|
||||
with _open_url(revurl) as revinfo:
|
||||
raw = revinfo.read()
|
||||
data = json.loads(raw.decode())
|
||||
return data[0]['sha']
|
||||
|
||||
|
||||
def github_url_replace_ref(url, newref):
|
||||
"""Return a new URL with the ref replaced."""
|
||||
m = GH_RESOURCE_RE.match(url)
|
||||
if not m:
|
||||
raise ValueError('invalid GitHub resource URL: {!r}'.format(url))
|
||||
org, repo, kind, _, path = m.groups()
|
||||
parts = (org, repo, kind, newref, path)
|
||||
return 'https://github.com/{}/{}/{}/{}/{}'.format(*parts)
|
||||
(File diff suppressed because it is too large.)
|
|
@@ -1,15 +0,0 @@
class SchemaFileError(Exception):
    """A schema-file-related operation failed."""


def read_schema(filename, *, _open=open):
    """Return the data (bytes) in the given schema file."""
    try:
        schemafile = _open(filename, 'rb')
    except FileNotFoundError:
        raise SchemaFileError(
            'schema file {!r} not found'.format(filename))
    with schemafile:
        return schemafile.read()

@@ -1,118 +0,0 @@
|
|||
from collections import namedtuple
|
||||
from datetime import datetime
|
||||
import os.path
|
||||
from textwrap import dedent
|
||||
|
||||
from ._util import github_url_replace_ref
|
||||
|
||||
|
||||
class MetadataError(Exception):
|
||||
"""A metadata-related operation failed."""
|
||||
|
||||
|
||||
def open_metadata(schemafile, mode='r', *, _open=open):
|
||||
"""Return a file object for the metadata of the given schema file.
|
||||
|
||||
Also return the metadata file's filename.
|
||||
"""
|
||||
from .vendored import METADATA # Here due to a circular import.
|
||||
filename = os.path.join(os.path.dirname(schemafile),
|
||||
os.path.basename(METADATA))
|
||||
try:
|
||||
return _open(filename, mode), filename
|
||||
except FileNotFoundError:
|
||||
raise MetadataError(
|
||||
'metadata file for {!r} not found'.format(schemafile))
|
||||
|
||||
|
||||
def read_metadata(schemafile, *, _open=open):
|
||||
"""Return the metadata corresponding to the schema file.
|
||||
|
||||
Also return the path to the metadata file.
|
||||
"""
|
||||
metafile, filename = open_metadata(schemafile, _open=_open)
|
||||
with metafile:
|
||||
data = metafile.read()
|
||||
|
||||
try:
|
||||
meta = Metadata.parse(data)
|
||||
except Exception as exc:
|
||||
raise MetadataError(
|
||||
'metadata file {!r} not valid: {}'.format(filename, exc))
|
||||
|
||||
return meta, filename
|
||||
|
||||
|
||||
class Metadata(
|
||||
namedtuple('Metadata', 'upstream revision checksum downloaded')):
|
||||
"""Info about the local copy of the upstream schema file."""
|
||||
|
||||
TIMESTAMP = '%Y-%m-%d %H:%M:%S (UTC)'
|
||||
|
||||
FORMAT = dedent("""\
|
||||
upstream: {}
|
||||
revision: {}
|
||||
checksum: {}
|
||||
downloaded: {:%s}
|
||||
""") % TIMESTAMP
|
||||
|
||||
@classmethod
|
||||
def parse(cls, data):
|
||||
"""Return an instance based on the given metadata string."""
|
||||
lines = data.splitlines()
|
||||
|
||||
kwargs = {}
|
||||
for line in lines:
|
||||
line = line.strip()
|
||||
if line.startswith('#'):
|
||||
continue
|
||||
if not line:
|
||||
continue
|
||||
field, _, value = line.partition(':')
|
||||
kwargs[field] = value.strip()
|
||||
self = cls(**kwargs)
|
||||
return self
|
||||
|
||||
def __new__(cls, upstream, revision, checksum, downloaded):
|
||||
# coercion
|
||||
upstream = str(upstream) if upstream else None
|
||||
revision = str(revision) if revision else None
|
||||
checksum = str(checksum) if checksum else None
|
||||
if not downloaded:
|
||||
downloaded = None
|
||||
elif isinstance(downloaded, str):
|
||||
downloaded = datetime.strptime(downloaded, cls.TIMESTAMP)
|
||||
elif downloaded.tzinfo is not None:
|
||||
downloaded -= downloaded.utcoffset()
|
||||
|
||||
self = super().__new__(cls, upstream, revision, checksum, downloaded)
|
||||
return self
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
# validation
|
||||
|
||||
if not self.upstream:
|
||||
raise ValueError('missing upstream URL')
|
||||
# TODO ensure upstream is URL?
|
||||
|
||||
if not self.revision:
|
||||
raise ValueError('missing upstream revision')
|
||||
# TODO ensure revision is a hash?
|
||||
|
||||
if not self.checksum:
|
||||
raise ValueError('missing checksum')
|
||||
# TODO ensure checksum is a MD5 hash?
|
||||
|
||||
if not self.downloaded:
|
||||
raise ValueError('missing downloaded')
|
||||
|
||||
@property
|
||||
def url(self):
|
||||
if self.upstream.startswith('https://github.com/'):
|
||||
return github_url_replace_ref(self.upstream, self.revision)
|
||||
else:
|
||||
raise NotImplementedError
|
||||
|
||||
def format(self):
|
||||
"""Return a string containing the formatted metadata."""
|
||||
return self.FORMAT.format(*self)
|
||||
|
|
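The `Metadata` helper above round-trips the UPSTREAM file shown earlier: `format()` emits the four `field: value` lines and `parse()` reads them back. A sketch of that round trip, assuming the removed module lived at `debugger_protocol.schema.metadata`:

```
from datetime import datetime
from debugger_protocol.schema.metadata import Metadata

meta = Metadata(
    upstream='https://github.com/Microsoft/vscode-debugadapter-node/raw/master/debugProtocol.json',
    revision='1b9a3e83656ebb88f4560bb6d700f9ac64b697ba',
    checksum='55a768cf61fe0c05c8af6c680b56154a',
    downloaded=datetime(2018, 9, 6, 14, 0, 0),
)
text = meta.format()                 # same layout as the UPSTREAM file
assert Metadata.parse(text) == meta  # parse() restores the namedtuple
```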
@@ -1,36 +0,0 @@
from datetime import datetime
import io
import urllib.error

from ._util import open_url, get_revision, get_checksum
from .file import SchemaFileError
from .metadata import Metadata


URL = 'https://github.com/Microsoft/vscode-debugadapter-node/raw/master/debugProtocol.json'  # noqa


def download(source, infile, outfile, *,
             _now=datetime.utcnow, _open_url=open_url):
    """Return the corresponding metadata after downloading the schema file."""
    timestamp = _now()
    revision = get_revision(source, _open_url=_open_url)

    data = infile.read()
    checksum = get_checksum(data)
    outfile.write(data)

    return Metadata(source, revision, checksum, timestamp)


def read(url, *, _open_url=open_url):
    """Return (data, metadata) for the given upstream URL."""
    outfile = io.BytesIO()
    try:
        infile = _open_url(url)
    except (FileNotFoundError, urllib.error.HTTPError):
        # TODO: Ensure it's a 404 error?
        raise SchemaFileError('schema file at {!r} not found'.format(url))
    with infile:
        upstream = download(url, infile, outfile, _open_url=_open_url)
    return outfile.getvalue(), upstream

@@ -1,69 +0,0 @@
|
|||
import os.path
|
||||
|
||||
from . import DATA_DIR, upstream
|
||||
from ._util import open_url, get_checksum
|
||||
from .file import SchemaFileError, read_schema
|
||||
from .metadata import MetadataError, read_metadata
|
||||
|
||||
|
||||
FILENAME = os.path.join(DATA_DIR, 'debugProtocol.json')
|
||||
METADATA = os.path.join(DATA_DIR, 'UPSTREAM')
|
||||
|
||||
|
||||
class SchemaFileMismatchError(SchemaFileError, MetadataError):
|
||||
"""The schema file does not match expectations."""
|
||||
|
||||
@classmethod
|
||||
def _build_message(cls, filename, actual, expected, upstream):
|
||||
if upstream:
|
||||
msg = ('local schema file {!r} does not match upstream {!r}'
|
||||
).format(filename, expected.upstream)
|
||||
else:
|
||||
msg = ('schema file {!r} does not match metadata file'
|
||||
).format(filename)
|
||||
|
||||
for field in actual._fields:
|
||||
value = getattr(actual, field)
|
||||
other = getattr(expected, field)
|
||||
if value != other:
|
||||
msg += (' ({} mismatch: {!r} != {!r})'
|
||||
).format(field, value, other)
|
||||
break
|
||||
|
||||
return msg
|
||||
|
||||
def __init__(self, filename, actual, expected, *, upstream=False):
|
||||
super().__init__(
|
||||
self._build_message(filename, actual, expected, upstream))
|
||||
self.filename = filename
|
||||
self.actual = actual
|
||||
self.expected = expected
|
||||
self.upstream = upstream
|
||||
|
||||
|
||||
def check_local(filename, *, _open=open):
|
||||
"""Ensure that the local schema file matches the local metadata file."""
|
||||
# Get the vendored metadata and data.
|
||||
meta, _ = read_metadata(filename, _open=_open)
|
||||
data = read_schema(filename, _open=_open)
|
||||
|
||||
# Only worry about the checksum matching.
|
||||
actual = meta._replace(
|
||||
checksum=get_checksum(data))
|
||||
if actual != meta:
|
||||
raise SchemaFileMismatchError(filename, actual, meta)
|
||||
|
||||
|
||||
def check_upstream(filename, url=None, *, _open=open, _open_url=open_url):
|
||||
"""Ensure that the local metadata file matches the upstream schema file."""
|
||||
# Get the vendored and upstream metadata.
|
||||
meta, _ = read_metadata(filename, _open=_open)
|
||||
if url is None:
|
||||
url = meta.upstream
|
||||
_, upmeta = upstream.read(url, _open_url=_open_url)
|
||||
|
||||
# Make sure the revision and checksum match.
|
||||
if meta.revision != upmeta.revision:
|
||||
raise SchemaFileMismatchError(filename, meta, upmeta, upstream=True)
|
||||
if meta.checksum != upmeta.checksum:
|
||||
raise SchemaFileMismatchError(filename, meta, upmeta, upstream=True)
|
||||
|
|
@@ -1,4 +1,4 @@
 [pytest]
-testpaths=pytests
+testpaths=tests
 timeout=15
 timeout_method=thread
|
||||
|
|
|
|||
|
|
@@ -1,17 +0,0 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE in the project root
# for license information.

__doc__ = """pytest-based ptvsd tests."""

import colorama
import pytest

# This is only imported to ensure that the module is actually installed and the
# timeout setting in pytest.ini is active, since otherwise most timeline-based
# tests will hang indefinitely.
import pytest_timeout  # noqa


colorama.init()
pytest.register_assert_rewrite('pytests.helpers')

@@ -1,59 +0,0 @@
|
|||
# Copyright (c) Microsoft Corporation. All rights reserved.
|
||||
# Licensed under the MIT License. See LICENSE in the project root
|
||||
# for license information.
|
||||
|
||||
from __future__ import print_function, with_statement, absolute_import
|
||||
|
||||
import os
|
||||
import sys
|
||||
import threading
|
||||
import time
|
||||
import traceback
|
||||
|
||||
|
||||
if sys.version_info >= (3, 5):
|
||||
clock = time.monotonic
|
||||
else:
|
||||
clock = time.clock
|
||||
|
||||
|
||||
timestamp_zero = clock()
|
||||
|
||||
def timestamp():
|
||||
return clock() - timestamp_zero
|
||||
|
||||
|
||||
def dump_stacks():
|
||||
"""Dump the stacks of all threads except the current thread"""
|
||||
current_ident = threading.current_thread().ident
|
||||
for thread_ident, frame in sys._current_frames().items():
|
||||
if thread_ident == current_ident:
|
||||
continue
|
||||
for t in threading.enumerate():
|
||||
if t.ident == thread_ident:
|
||||
thread_name = t.name
|
||||
thread_daemon = t.daemon
|
||||
break
|
||||
else:
|
||||
thread_name = '<unknown>'
|
||||
print('Stack of %s (%s) in pid %s; daemon=%s' % (thread_name, thread_ident, os.getpid(), thread_daemon))
|
||||
print(''.join(traceback.format_stack(frame)))
|
||||
|
||||
|
||||
def dump_stacks_in(secs):
|
||||
"""Invokes dump_stacks() on a background thread after waiting.
|
||||
|
||||
Can be called from debugged code before the point after which it hangs,
|
||||
to determine the cause of the hang while debugging a test.
|
||||
"""
|
||||
|
||||
def dumper():
|
||||
time.sleep(secs)
|
||||
dump_stacks()
|
||||
|
||||
thread = threading.Thread(target=dumper)
|
||||
thread.daemon = True
|
||||
thread.start()
|
||||
|
||||
|
||||
from .printer import print
|
||||
|
|
@@ -1,66 +0,0 @@
|
|||
# Copyright (c) Microsoft Corporation. All rights reserved.
|
||||
# Licensed under the MIT License. See LICENSE in the project root
|
||||
# for license information.
|
||||
|
||||
import threading
|
||||
import requests
|
||||
import re
|
||||
import socket
|
||||
import time
|
||||
|
||||
def get_web_string(path, obj):
|
||||
r = requests.get(path)
|
||||
content = r.text
|
||||
if obj is not None:
|
||||
obj.content = content
|
||||
return content
|
||||
|
||||
|
||||
def get_web_string_no_error(path, obj):
|
||||
try:
|
||||
return get_web_string(path, obj)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
|
||||
re_link = r"(http(s|)\:\/\/[\w\.]*\:[0-9]{4,6}(\/|))"
|
||||
def get_url_from_str(s):
|
||||
matches = re.findall(re_link, s)
|
||||
if matches and matches[0] and matches[0][0].strip():
|
||||
return matches[0][0]
|
||||
return None
|
||||
|
||||
|
||||
def get_web_content(link, web_result=None, timeout=1):
|
||||
class WebResponse(object):
|
||||
def __init__(self):
|
||||
self.content = None
|
||||
|
||||
def wait_for_response(self, timeout=1):
|
||||
self._web_client_thread.join(timeout)
|
||||
return self.content
|
||||
|
||||
response = WebResponse()
|
||||
response._web_client_thread = threading.Thread(
|
||||
target=get_web_string_no_error,
|
||||
args=(link, response),
|
||||
name='test.webClient'
|
||||
)
|
||||
response._web_client_thread.start()
|
||||
return response
|
||||
|
||||
|
||||
def wait_for_connection(port, interval=1, attempts=10):
|
||||
count = 0
|
||||
while count < attempts:
|
||||
count += 1
|
||||
try:
|
||||
print('Waiting to connect to port: %s' % port)
|
||||
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
sock.connect(('localhost', port))
|
||||
return
|
||||
except socket.error:
|
||||
pass
|
||||
finally:
|
||||
sock.close()
|
||||
time.sleep(interval)
|
||||
|
|
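Typical use of the removed web helpers in a test looked roughly like this (a sketch; the `pytests.helpers.webhelper` module path and the port are assumptions, and a web server must already be listening for the calls to succeed):

```
from pytests.helpers.webhelper import get_web_content, wait_for_connection

# Block until the debuggee's web server accepts connections, then fetch a
# page on a background thread and wait briefly for the response.
wait_for_connection(8765, interval=0.5, attempts=20)
response = get_web_content('http://localhost:8765/')
print(response.wait_for_response(timeout=5))
```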
@@ -1,46 +1,17 @@
|
|||
from __future__ import absolute_import
|
||||
# Copyright (c) Microsoft Corporation. All rights reserved.
|
||||
# Licensed under the MIT License. See LICENSE in the project root
|
||||
# for license information.
|
||||
|
||||
import os
|
||||
import os.path
|
||||
import sys
|
||||
import unittest
|
||||
__doc__ = """pytest-based ptvsd tests."""
|
||||
|
||||
# Importing "ptvsd" here triggers the vendoring code before any vendored
|
||||
# code ever gets imported.
|
||||
import ptvsd # noqa
|
||||
from ptvsd._vendored import list_all as vendored
|
||||
import colorama
|
||||
import pytest
|
||||
|
||||
# This is only imported to ensure that the module is actually installed and the
|
||||
# timeout setting in pytest.ini is active, since otherwise most timeline-based
|
||||
# tests will hang indefinitely.
|
||||
import pytest_timeout # noqa
|
||||
|
||||
|
||||
TEST_ROOT = os.path.abspath(os.path.dirname(__file__)) # noqa
|
||||
RESOURCES_ROOT = os.path.join(TEST_ROOT, 'resources') # noqa
|
||||
PROJECT_ROOT = os.path.abspath(os.path.dirname(os.path.dirname(ptvsd.__file__)))
|
||||
VENDORED_ROOTS = vendored(resolve=True) # noqa
|
||||
|
||||
|
||||
def skip_py2(decorated=None):
|
||||
if sys.version_info[0] > 2:
|
||||
return decorated
|
||||
msg = 'not tested under Python 2'
|
||||
if decorated is None:
|
||||
raise unittest.SkipTest(msg)
|
||||
else:
|
||||
decorator = unittest.skip(msg)
|
||||
return decorator(decorated)
|
||||
|
||||
|
||||
if sys.version_info[0] == 2:
|
||||
# Hack alert!!!
|
||||
class SkippingTestSuite(unittest.TestSuite):
|
||||
def __init__(self, tests=()):
|
||||
if tests and type(tests[0]).__name__ == 'ModuleImportFailure':
|
||||
_, exc, _ = sys.exc_info()
|
||||
if isinstance(exc, unittest.SkipTest):
|
||||
from unittest.loader import _make_failed_load_tests
|
||||
suite = _make_failed_load_tests(
|
||||
tests[0]._testMethodName,
|
||||
exc,
|
||||
type(self),
|
||||
)
|
||||
tests = tuple(suite)
|
||||
unittest.TestSuite.__init__(self, tests)
|
||||
unittest.TestLoader.suiteClass = SkippingTestSuite
|
||||
colorama.init()
|
||||
pytest.register_assert_rewrite('tests.helpers')
|
||||
|
|
@@ -1,231 +0,0 @@
|
|||
from __future__ import absolute_import
|
||||
|
||||
import argparse
|
||||
import os
|
||||
import os.path
|
||||
import subprocess
|
||||
import sys
|
||||
import unittest
|
||||
|
||||
from . import TEST_ROOT, PROJECT_ROOT, VENDORED_ROOTS
|
||||
|
||||
|
||||
def parse_cmdline(argv=None):
|
||||
"""Obtain command line arguments and setup the test run accordingly."""
|
||||
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Run tests associated to the PTVSD project.",
|
||||
prog="tests",
|
||||
usage="python -m %(prog)s OPTS",
|
||||
add_help=False
|
||||
)
|
||||
|
||||
# allow_abbrev was added in 3.5
|
||||
if sys.version_info >= (3, 5):
|
||||
parser.allow_abbrev = False
|
||||
|
||||
parser.add_argument(
|
||||
"-c",
|
||||
"--coverage",
|
||||
help="Generate code coverage report.",
|
||||
action="store_true"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--full",
|
||||
help="Do full suite of tests (disables prior --quick options).",
|
||||
action="store_false",
|
||||
dest="quick"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-j",
|
||||
"--junit-xml",
|
||||
help="Output report is generated to JUnit-style XML file specified.",
|
||||
type=str
|
||||
)
|
||||
parser.add_argument(
|
||||
"-l",
|
||||
"--lint",
|
||||
help="Run and report on Linter compliance.",
|
||||
action="store_true"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-L",
|
||||
"--lint-only",
|
||||
help="Run and report on Linter compliance only, do not perform tests.",
|
||||
action="store_true"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-n",
|
||||
"--network",
|
||||
help="Perform tests taht require network connectivity.",
|
||||
action="store_true",
|
||||
dest="network"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--no-network",
|
||||
help="Do not perform tests that require network connectivity.",
|
||||
action="store_false",
|
||||
dest="network"
|
||||
)
|
||||
parser.add_argument(
|
||||
"-q",
|
||||
"--quick",
|
||||
help="Only do the tests under test/ptvsd.",
|
||||
action="store_true",
|
||||
dest="quick"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--quick-py2",
|
||||
help=("Only do the tests under test/ptvsd, that are compatible "
|
||||
"with Python 2.x."),
|
||||
action="store_true"
|
||||
)
|
||||
# these destinations have 2 switches, be explicit about the default
|
||||
parser.set_defaults(quick=False)
|
||||
parser.set_defaults(network=True)
|
||||
config, passthrough_args = parser.parse_known_args(argv)
|
||||
|
||||
return config, passthrough_args
|
||||
|
||||
|
||||
def convert_argv(argv=None):
|
||||
"""Convert commandling args into unittest/linter/coverage input."""
|
||||
|
||||
config, passthru = parse_cmdline(argv)
|
||||
|
||||
modules = set()
|
||||
args = []
|
||||
help = False
|
||||
|
||||
for arg in passthru:
|
||||
# Unittest's main has only flags and positional args.
|
||||
# So we don't worry about options with values.
|
||||
if not arg.startswith('-'):
|
||||
# It must be the name of a test, case, module, or file.
|
||||
# We convert filenames to module names. For filenames
|
||||
# we support specifying a test name by appending it to
|
||||
# the filename with a ":" in between.
|
||||
mod, _, test = arg.partition(':')
|
||||
if mod.endswith(os.sep):
|
||||
mod = mod.rsplit(os.sep, 1)[0]
|
||||
mod = mod.rsplit('.py', 1)[0]
|
||||
mod = mod.replace(os.sep, '.')
|
||||
arg = mod if not test else mod + '.' + test
|
||||
modules.add(mod)
|
||||
elif arg in ('-h', '--help'):
|
||||
help = True
|
||||
args.append(arg)
|
||||
|
||||
env = {}
|
||||
if config.network:
|
||||
env['HAS_NETWORK'] = '1'
|
||||
# We make the "executable" a single arg because unittest.main()
|
||||
# doesn't work if we split it into 3 parts.
|
||||
cmd = [sys.executable + ' -m unittest']
|
||||
if not modules and not help:
|
||||
# Do discovery.
|
||||
quickroot = os.path.join(TEST_ROOT, 'ptvsd')
|
||||
if config.quick:
|
||||
start = quickroot
|
||||
elif config.quick_py2 and sys.version_info[0] == 2:
|
||||
start = quickroot
|
||||
else:
|
||||
start = PROJECT_ROOT
|
||||
|
||||
cmd += [
|
||||
'discover',
|
||||
'--top-level-directory', PROJECT_ROOT,
|
||||
'--start-directory', start,
|
||||
]
|
||||
args = cmd + args
|
||||
|
||||
return config, args, env
|
||||
|
||||
|
||||
def is_cwd(path):
    p1 = os.path.normcase(os.path.abspath(path))
    p2 = os.path.normcase(os.getcwd())
    return p1 == p2


def fix_sys_path():
    pos = 1 if (not sys.path[0] or sys.path[0] == '.' or
                is_cwd(sys.path[0])) else 0
    for projectroot in VENDORED_ROOTS:
        sys.path.insert(pos, projectroot)


def check_lint():
    print('linting...')
    args = [
        sys.executable,
        '-m', 'flake8',
        '--config', '.flake8',
        PROJECT_ROOT,
    ]
    rc = subprocess.call(args)
    if rc != 0:
        print('...linting failed!')
        sys.exit(rc)
    print('...done')


def run_tests(argv, env, coverage, junit_xml):
    print('running tests...')
    if coverage:
        omissions = [os.path.join(root, '*') for root in VENDORED_ROOTS]
        # TODO: Drop the explicit pydevd omit once we move the subtree.
        omissions.append(os.path.join('ptvsd', 'pydevd', '*'))
        # Omit the reraise helper for the Python major version we are
        # not running under.
        ver = 3 if sys.version_info < (3,) else 2
        omissions.append(os.path.join('ptvsd', 'reraise{}.py'.format(ver)))
        args = [
            sys.executable,
            '-m', 'coverage',
            'run',
            # We use --source instead of "--include ptvsd/*".
            '--source', 'ptvsd',
            '--omit', ','.join(omissions),
            '-m', 'unittest',
        ] + argv[1:]
        assert 'PYTHONPATH' not in env
        env['PYTHONPATH'] = os.pathsep.join(VENDORED_ROOTS)
        rc = subprocess.call(args, env=env)
        if rc != 0:
            print('...coverage failed!')
            sys.exit(rc)
        print('...done')
    elif junit_xml:
        from xmlrunner import XMLTestRunner  # noqa
        os.environ.update(env)
        verbosity = 1
        if '-v' in argv or '--verbose' in argv:
            verbosity = 2
        with open(junit_xml, 'wb') as output:
            unittest.main(
                testRunner=XMLTestRunner(output=output, verbosity=verbosity),
                module=None,
                argv=argv,
            )
    else:
        os.environ.update(env)
        unittest.main(module=None, argv=argv)


if __name__ == '__main__':
    config, argv, env = convert_argv()
    fix_sys_path()

    if config.lint or config.lint_only:
        check_lint()

    if not config.lint_only:
        if '--start-directory' in argv:
            start = argv[argv.index('--start-directory') + 1]
            print('(will look for tests under {})'.format(start))

        run_tests(
            argv,
            env,
            config.coverage,
            config.junit_xml
        )
|
|
@ -1,7 +0,0 @@
from .. import skip_py2


# The code under the debugger_protocol package isn't used
# by the debugger (it's used by schema-related tools). So we don't need
# to support Python 2.
skip_py2()
|
|
@ -1,50 +0,0 @@
from debugger_protocol.arg import ANY, FieldsNamespace, Field


FIELDS_BASIC = [
    Field('name'),
    Field.START_OPTIONAL,
    Field('value'),
]

BASIC_FULL = {
    'name': 'spam',
    'value': 'eggs',
}

BASIC_MIN = {
    'name': 'spam',
}


class Basic(FieldsNamespace):
    FIELDS = FIELDS_BASIC


FIELDS_EXTENDED = [
    Field('name', datatype=str, optional=False),
    Field('valid', datatype=bool, optional=True),
    Field('id', datatype=int, optional=False),
    Field('value', datatype=ANY, optional=True),
    Field('x', datatype=Basic, optional=True),
    Field('y', datatype={int, str}, optional=True),
    Field('z', datatype=[Basic], optional=True),
]

EXTENDED_FULL = {
    'name': 'spam',
    'valid': True,
    'id': 10,
    'value': None,
    'x': BASIC_FULL,
    'y': 11,
    'z': [
        BASIC_FULL,
        BASIC_MIN,
    ],
}

EXTENDED_MIN = {
    'name': 'spam',
    'id': 10,
}
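These fixtures back the `debugger_protocol.arg` tests that follow in this diff. As those tests assert, a `FieldsNamespace` subclass exposes its fields as attributes, serializes back to a plain dict via `as_data()`, and allows fields after `Field.START_OPTIONAL` to be omitted; a brief sketch (the class and variable names here are illustrative, not part of the original module):

```
from debugger_protocol.arg import FieldsNamespace, Field


class Example(FieldsNamespace):
    # 'value' is optional because it follows Field.START_OPTIONAL.
    FIELDS = [
        Field('name'),
        Field.START_OPTIONAL,
        Field('value'),
    ]


ns = Example(name='spam', value='eggs')
print(ns.name, ns.value)   # fields are exposed as attributes
print(ns.as_data())        # {'name': 'spam', 'value': 'eggs'}
Example(name='spam')       # the optional 'value' field may be omitted
```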
|
|
@ -1,365 +0,0 @@
|
|||
import itertools
|
||||
import unittest
|
||||
|
||||
from debugger_protocol.arg._common import ANY
|
||||
from debugger_protocol.arg._datatype import FieldsNamespace
|
||||
from debugger_protocol.arg._decl import Union, Array, Field, Fields
|
||||
from debugger_protocol.arg._param import Parameter, DatatypeHandler, Arg
|
||||
|
||||
from ._common import (
|
||||
BASIC_FULL, BASIC_MIN, Basic,
|
||||
FIELDS_EXTENDED, EXTENDED_FULL, EXTENDED_MIN)
|
||||
|
||||
|
||||
class FieldsNamespaceTests(unittest.TestCase):
|
||||
|
||||
def test_traverse_noop(self):
|
||||
fields = [
|
||||
Field('spam'),
|
||||
Field('ham'),
|
||||
Field('eggs'),
|
||||
]
|
||||
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = Fields(*fields)
|
||||
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or dt)
|
||||
transformed = Spam.traverse(op)
|
||||
|
||||
self.assertIs(transformed, Spam)
|
||||
self.assertIs(transformed.FIELDS, Spam.FIELDS)
|
||||
for i, field in enumerate(Spam.FIELDS):
|
||||
self.assertIs(field, fields[i])
|
||||
self.assertCountEqual(calls, [
|
||||
# Note that it did not recurse into the fields.
|
||||
Field('spam'),
|
||||
Field('ham'),
|
||||
Field('eggs'),
|
||||
])
|
||||
|
||||
def test_traverse_unnormalized(self):
|
||||
fields = [
|
||||
Field('spam'),
|
||||
Field('ham'),
|
||||
Field('eggs'),
|
||||
]
|
||||
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = fields
|
||||
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or dt)
|
||||
transformed = Spam.traverse(op)
|
||||
|
||||
self.assertIs(transformed, Spam)
|
||||
self.assertIsInstance(transformed.FIELDS, Fields)
|
||||
for i, field in enumerate(Spam.FIELDS):
|
||||
self.assertIs(field, fields[i])
|
||||
self.assertCountEqual(calls, [
|
||||
Field('spam'),
|
||||
Field('ham'),
|
||||
Field('eggs'),
|
||||
])
|
||||
|
||||
def test_traverse_changed(self):
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = Fields(
|
||||
Field('spam', ANY),
|
||||
Field('eggs', None),
|
||||
)
|
||||
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or Field(dt.name, str))
|
||||
transformed = Spam.traverse(op)
|
||||
|
||||
self.assertIs(transformed, Spam)
|
||||
self.assertEqual(transformed.FIELDS, Fields(
|
||||
Field('spam', str),
|
||||
Field('eggs', str),
|
||||
))
|
||||
self.assertEqual(calls, [
|
||||
Field('spam', ANY),
|
||||
Field('eggs', None),
|
||||
])
|
||||
|
||||
def test_normalize_without_ops(self):
|
||||
fieldlist = [
|
||||
Field('spam'),
|
||||
Field('ham'),
|
||||
Field('eggs'),
|
||||
]
|
||||
fields = Fields(*fieldlist)
|
||||
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = fields
|
||||
|
||||
Spam.normalize()
|
||||
|
||||
self.assertIs(Spam.FIELDS, fields)
|
||||
for i, field in enumerate(Spam.FIELDS):
|
||||
self.assertIs(field, fieldlist[i])
|
||||
|
||||
def test_normalize_unnormalized(self):
|
||||
fieldlist = [
|
||||
Field('spam'),
|
||||
Field('ham'),
|
||||
Field('eggs'),
|
||||
]
|
||||
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = fieldlist
|
||||
|
||||
Spam.normalize()
|
||||
|
||||
self.assertIsInstance(Spam.FIELDS, Fields)
|
||||
for i, field in enumerate(Spam.FIELDS):
|
||||
self.assertIs(field, fieldlist[i])
|
||||
|
||||
def test_normalize_with_ops_noop(self):
|
||||
fieldlist = [
|
||||
Field('spam'),
|
||||
Field('ham', int),
|
||||
Field('eggs', Array(ANY)),
|
||||
]
|
||||
fields = Fields(*fieldlist)
|
||||
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = fields
|
||||
|
||||
calls = []
|
||||
op1 = (lambda dt: calls.append((op1, dt)) or dt)
|
||||
op2 = (lambda dt: calls.append((op2, dt)) or dt)
|
||||
Spam.normalize(op1, op2)
|
||||
|
||||
self.assertIs(Spam.FIELDS, fields)
|
||||
for i, field in enumerate(Spam.FIELDS):
|
||||
self.assertIs(field, fieldlist[i])
|
||||
self.maxDiff = None
|
||||
self.assertEqual(calls, [
|
||||
(op1, fields),
|
||||
(op1, Field('spam')),
|
||||
(op1, str),
|
||||
(op1, Field('ham', int)),
|
||||
(op1, int),
|
||||
(op1, Field('eggs', Array(ANY))),
|
||||
(op1, Array(ANY)),
|
||||
(op1, ANY),
|
||||
|
||||
(op2, fields),
|
||||
(op2, Field('spam')),
|
||||
(op2, str),
|
||||
(op2, Field('ham', int)),
|
||||
(op2, int),
|
||||
(op2, Field('eggs', Array(ANY))),
|
||||
(op2, Array(ANY)),
|
||||
(op2, ANY),
|
||||
])
|
||||
|
||||
def test_normalize_with_op_changed(self):
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = Fields(
|
||||
Field('spam', Array(ANY)),
|
||||
)
|
||||
|
||||
op = (lambda dt: int if dt is ANY else dt)
|
||||
Spam.normalize(op)
|
||||
|
||||
self.assertEqual(Spam.FIELDS, Fields(
|
||||
Field('spam', Array(int)),
|
||||
))
|
||||
|
||||
def test_normalize_declarative(self):
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
Field('b', bool),
|
||||
Field.START_OPTIONAL,
|
||||
Field('c', {int, str}),
|
||||
Field('d', [int]),
|
||||
Field('e', ANY),
|
||||
Field('f', '<ref>'),
|
||||
]
|
||||
|
||||
class Ham(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('w', Spam),
|
||||
Field('x', int),
|
||||
Field('y', frozenset({int, str})),
|
||||
Field('z', (int,)),
|
||||
]
|
||||
|
||||
class Eggs(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('b', [Ham]),
|
||||
Field('x', [{str, ('<ref>',)}], optional=True),
|
||||
Field('d', {Spam, '<ref>'}, optional=True),
|
||||
]
|
||||
|
||||
Eggs.normalize()
|
||||
|
||||
self.assertEqual(Spam.FIELDS, Fields(
|
||||
Field('a'),
|
||||
Field('b', bool),
|
||||
Field('c', Union(int, str), optional=True),
|
||||
Field('d', Array(int), optional=True),
|
||||
Field('e', ANY, optional=True),
|
||||
Field('f', Spam, optional=True),
|
||||
))
|
||||
self.assertEqual(Ham.FIELDS, Fields(
|
||||
Field('w', Spam),
|
||||
Field('x', int),
|
||||
Field('y', Union(int, str)),
|
||||
Field('z', Array(int)),
|
||||
))
|
||||
self.assertEqual(Eggs.FIELDS, Fields(
|
||||
Field('b', Array(Ham)),
|
||||
Field('x',
|
||||
Array(Union.unordered(str, Array(Eggs))),
|
||||
optional=True),
|
||||
Field('d', Union.unordered(Spam, Eggs), optional=True),
|
||||
))
|
||||
|
||||
def test_normalize_missing(self):
|
||||
with self.assertRaises(TypeError):
|
||||
FieldsNamespace.normalize()
|
||||
|
||||
#######
|
||||
|
||||
def test_bind_no_param(self):
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
arg = Spam.bind({'a': 'x'})
|
||||
|
||||
self.assertIsInstance(arg, Spam)
|
||||
self.assertEqual(arg, Spam(a='x'))
|
||||
|
||||
def test_bind_with_param_obj(self):
|
||||
class Param(Parameter):
|
||||
HANDLER = DatatypeHandler(ANY)
|
||||
match_type = (lambda self, raw: self.HANDLER)
|
||||
|
||||
class Spam(FieldsNamespace):
|
||||
PARAM = Param(ANY)
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
arg = Spam.bind({'a': 'x'})
|
||||
|
||||
self.assertIsInstance(arg, Arg)
|
||||
self.assertEqual(arg, Arg(Param(ANY), {'a': 'x'}))
|
||||
|
||||
def test_bind_with_param_type(self):
|
||||
class Param(Parameter):
|
||||
HANDLER = DatatypeHandler(ANY)
|
||||
match_type = (lambda self, raw: self.HANDLER)
|
||||
|
||||
class Spam(FieldsNamespace):
|
||||
PARAM_TYPE = Param
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
arg = Spam.bind({'a': 'x'})
|
||||
|
||||
self.assertIsInstance(arg, Arg)
|
||||
self.assertEqual(arg, Arg(Param(Spam.FIELDS), {'a': 'x'}))
|
||||
|
||||
def test_bind_already_bound(self):
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
spam = Spam(a='x')
|
||||
arg = Spam.bind(spam)
|
||||
|
||||
self.assertIs(arg, spam)
|
||||
|
||||
#######
|
||||
|
||||
def test_fields_full(self):
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = FIELDS_EXTENDED
|
||||
|
||||
spam = Spam(**EXTENDED_FULL)
|
||||
ns = vars(spam)
|
||||
del ns['_validators']
|
||||
del ns['_serializers']
|
||||
|
||||
self.assertEqual(ns, {
|
||||
'name': 'spam',
|
||||
'valid': True,
|
||||
'id': 10,
|
||||
'value': None,
|
||||
'x': Basic(**BASIC_FULL),
|
||||
'y': 11,
|
||||
'z': [
|
||||
Basic(**BASIC_FULL),
|
||||
Basic(**BASIC_MIN),
|
||||
],
|
||||
})
|
||||
|
||||
def test_fields_min(self):
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = FIELDS_EXTENDED
|
||||
|
||||
spam = Spam(**EXTENDED_MIN)
|
||||
ns = vars(spam)
|
||||
del ns['_validators']
|
||||
del ns['_serializers']
|
||||
|
||||
self.assertEqual(ns, {
|
||||
'name': 'spam',
|
||||
'id': 10,
|
||||
})
|
||||
|
||||
def test_no_fields(self):
|
||||
with self.assertRaises(TypeError):
|
||||
FieldsNamespace(
|
||||
x='spam',
|
||||
y=42,
|
||||
z=None,
|
||||
)
|
||||
|
||||
def test_attrs(self):
|
||||
ns = Basic(name='<name>', value='<value>')
|
||||
|
||||
self.assertEqual(ns.name, '<name>')
|
||||
self.assertEqual(ns.value, '<value>')
|
||||
|
||||
def test_equality(self):
|
||||
ns1 = Basic(name='<name>', value='<value>')
|
||||
ns2 = Basic(name='<name>', value='<value>')
|
||||
|
||||
self.assertTrue(ns1 == ns1)
|
||||
self.assertTrue(ns1 == ns2)
|
||||
|
||||
def test_inequality(self):
|
||||
p = [Basic(name=n, value=v)
|
||||
for n in ['<>', '<name>']
|
||||
for v in ['<>', '<value>']]
|
||||
for basic1, basic2 in itertools.combinations(p, 2):
|
||||
with self.subTest((basic1, basic2)):
|
||||
self.assertTrue(basic1 != basic2)
|
||||
|
||||
@unittest.skip('not ready')
|
||||
def test_validate(self):
|
||||
# TODO: finish
|
||||
raise NotImplementedError
|
||||
|
||||
def test_as_data(self):
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = FIELDS_EXTENDED
|
||||
|
||||
spam = Spam(**EXTENDED_FULL)
|
||||
sdata = spam.as_data()
|
||||
basic = Basic(**BASIC_FULL)
|
||||
bdata = basic.as_data()
|
||||
|
||||
self.assertEqual(sdata, EXTENDED_FULL)
|
||||
self.assertEqual(bdata, BASIC_FULL)
|
||||
|
|
@ -1,565 +0,0 @@
|
|||
import unittest
|
||||
|
||||
from debugger_protocol.arg import NOT_SET, ANY
|
||||
from debugger_protocol.arg._datatype import FieldsNamespace
|
||||
from debugger_protocol.arg._decl import (
|
||||
REF, TYPE_REFERENCE, _normalize_datatype, _transform_datatype,
|
||||
Enum, Union, Array, Mapping, Field, Fields)
|
||||
from debugger_protocol.arg._param import Parameter, DatatypeHandler, Arg
|
||||
from debugger_protocol.arg._params import (
|
||||
SimpleParameter, UnionParameter, ArrayParameter, ComplexParameter)
|
||||
|
||||
|
||||
class ModuleTests(unittest.TestCase):
|
||||
|
||||
def test_normalize_datatype(self):
|
||||
class Spam:
|
||||
@classmethod
|
||||
def normalize(cls):
|
||||
return OKAY
|
||||
|
||||
OKAY = object()
|
||||
NOOP = object()
|
||||
param = SimpleParameter(str)
|
||||
tests = [
|
||||
# explicitly handled
|
||||
(REF, TYPE_REFERENCE),
|
||||
(TYPE_REFERENCE, NOOP),
|
||||
(ANY, NOOP),
|
||||
(None, NOOP),
|
||||
(int, NOOP),
|
||||
(str, NOOP),
|
||||
(bool, NOOP),
|
||||
(Enum(str, ('spam',)), NOOP),
|
||||
(Union(str, int), NOOP),
|
||||
({str, int}, Union(str, int)),
|
||||
(frozenset([str, int]), Union(str, int)),
|
||||
(Array(str), NOOP),
|
||||
([str], Array(str)),
|
||||
((str,), Array(str)),
|
||||
(Mapping(str), NOOP),
|
||||
({str: str}, Mapping(str)),
|
||||
# others
|
||||
(Field('spam'), NOOP),
|
||||
(Fields(Field('spam')), NOOP),
|
||||
(param, NOOP),
|
||||
(DatatypeHandler(str), NOOP),
|
||||
(Arg(param, 'spam'), NOOP),
|
||||
(SimpleParameter(str), NOOP),
|
||||
(UnionParameter(Union(str)), NOOP),
|
||||
(ArrayParameter(Array(str)), NOOP),
|
||||
(ComplexParameter(Fields()), NOOP),
|
||||
(NOT_SET, NOOP),
|
||||
(object(), NOOP),
|
||||
(object, NOOP),
|
||||
(type, NOOP),
|
||||
(Spam, OKAY),
|
||||
]
|
||||
for datatype, expected in tests:
|
||||
if expected is NOOP:
|
||||
expected = datatype
|
||||
with self.subTest(datatype):
|
||||
datatype = _normalize_datatype(datatype)
|
||||
|
||||
self.assertEqual(datatype, expected)
|
||||
|
||||
def test_transform_datatype_simple(self):
|
||||
datatypes = [
|
||||
REF,
|
||||
TYPE_REFERENCE,
|
||||
ANY,
|
||||
None,
|
||||
int,
|
||||
str,
|
||||
bool,
|
||||
{str, int},
|
||||
frozenset([str, int]),
|
||||
[str],
|
||||
(str,),
|
||||
Parameter(object()),
|
||||
DatatypeHandler(str),
|
||||
Arg(SimpleParameter(str), 'spam'),
|
||||
SimpleParameter(str),
|
||||
UnionParameter(Union(str, int)),
|
||||
ArrayParameter(Array(str)),
|
||||
ComplexParameter(Fields()),
|
||||
NOT_SET,
|
||||
object(),
|
||||
object,
|
||||
type,
|
||||
]
|
||||
for expected in datatypes:
|
||||
transformed = []
|
||||
op = (lambda dt: transformed.append(dt) or dt)
|
||||
with self.subTest(expected):
|
||||
datatype = _transform_datatype(expected, op)
|
||||
|
||||
self.assertIs(datatype, expected)
|
||||
self.assertEqual(transformed, [expected])
|
||||
|
||||
def test_transform_datatype_container(self):
|
||||
class Spam(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
Field('b', {str: str})
|
||||
]
|
||||
|
||||
fields = Fields(Field('...'))
|
||||
field_spam = Field('spam', ANY)
|
||||
field_ham = Field('ham', Union(
|
||||
Array(Spam),
|
||||
))
|
||||
field_eggs = Field('eggs', Array(TYPE_REFERENCE))
|
||||
nested = Fields(
|
||||
Field('???', fields),
|
||||
field_spam,
|
||||
field_ham,
|
||||
field_eggs,
|
||||
)
|
||||
tests = {
|
||||
Array(str): [
|
||||
Array(str),
|
||||
str,
|
||||
],
|
||||
Field('...'): [
|
||||
Field('...'),
|
||||
str,
|
||||
],
|
||||
fields: [
|
||||
fields,
|
||||
Field('...'),
|
||||
str,
|
||||
],
|
||||
nested: [
|
||||
nested,
|
||||
# ...
|
||||
Field('???', fields),
|
||||
fields,
|
||||
Field('...'),
|
||||
str,
|
||||
# ...
|
||||
Field('spam', ANY),
|
||||
ANY,
|
||||
# ...
|
||||
field_ham,
|
||||
Union(Array(Spam)),
|
||||
Array(Spam),
|
||||
Spam,
|
||||
Field('a'),
|
||||
str,
|
||||
Field('b', Mapping(str)),
|
||||
Mapping(str),
|
||||
str,
|
||||
str,
|
||||
# ...
|
||||
field_eggs,
|
||||
Array(TYPE_REFERENCE),
|
||||
TYPE_REFERENCE,
|
||||
],
|
||||
}
|
||||
self.maxDiff = None
|
||||
for datatype, expected in tests.items():
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or dt)
|
||||
with self.subTest(datatype):
|
||||
transformed = _transform_datatype(datatype, op)
|
||||
|
||||
self.assertIs(transformed, datatype)
|
||||
self.assertEqual(calls, expected)
|
||||
|
||||
# Check Union separately due to set iteration order.
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or dt)
|
||||
datatype = Union(str, int)
|
||||
transformed = _transform_datatype(datatype, op)
|
||||
|
||||
self.assertIs(transformed, datatype)
|
||||
self.assertEqual(calls[0], Union(str, int))
|
||||
self.assertEqual(set(calls[1:]), {str, int})
|
||||
|
||||
|
||||
class EnumTests(unittest.TestCase):
|
||||
|
||||
def test_attrs(self):
|
||||
enum = Enum(str, ('spam', 'eggs'))
|
||||
datatype, choices = enum
|
||||
|
||||
self.assertIs(datatype, str)
|
||||
self.assertEqual(choices, frozenset(['spam', 'eggs']))
|
||||
|
||||
def test_bad_datatype(self):
|
||||
with self.assertRaises(ValueError):
|
||||
Enum('spam', ('spam', 'eggs'))
|
||||
with self.assertRaises(ValueError):
|
||||
Enum(dict, ('spam', 'eggs'))
|
||||
|
||||
def test_bad_choices(self):
|
||||
class String(str):
|
||||
pass
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
Enum(str, 'spam')
|
||||
with self.assertRaises(TypeError):
|
||||
Enum(str, ())
|
||||
with self.assertRaises(ValueError):
|
||||
Enum(str, ('spam', 10))
|
||||
with self.assertRaises(ValueError):
|
||||
Enum(str, ('spam', String))
|
||||
|
||||
|
||||
class UnionTests(unittest.TestCase):
|
||||
|
||||
def test_normalized(self):
|
||||
tests = [
|
||||
(REF, TYPE_REFERENCE),
|
||||
({str, int}, Union(*{str, int})),
|
||||
(frozenset([str, int]), Union(*frozenset([str, int]))),
|
||||
([str], Array(str)),
|
||||
((str,), Array(str)),
|
||||
({str: str}, Mapping(str)),
|
||||
(None, None),
|
||||
]
|
||||
for datatype, expected in tests:
|
||||
with self.subTest(datatype):
|
||||
union = Union(int, datatype, str)
|
||||
|
||||
self.assertEqual(union, Union(int, expected, str))
|
||||
|
||||
def test_traverse_noop(self):
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or dt)
|
||||
union = Union(str, Array(int), int)
|
||||
transformed = union.traverse(op)
|
||||
|
||||
self.assertIs(transformed, union)
|
||||
self.assertCountEqual(calls, [
|
||||
str,
|
||||
# Note that it did not recurse into Array(int).
|
||||
Array(int),
|
||||
int,
|
||||
])
|
||||
|
||||
def test_traverse_changed(self):
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or str)
|
||||
union = Union(ANY)
|
||||
transformed = union.traverse(op)
|
||||
|
||||
self.assertIsNot(transformed, union)
|
||||
self.assertEqual(transformed, Union(str))
|
||||
self.assertEqual(calls, [
|
||||
ANY,
|
||||
])
|
||||
|
||||
|
||||
class ArrayTests(unittest.TestCase):
|
||||
|
||||
def test_normalized(self):
|
||||
tests = [
|
||||
(REF, TYPE_REFERENCE),
|
||||
({str, int}, Union(str, int)),
|
||||
(frozenset([str, int]), Union(str, int)),
|
||||
([str], Array(str)),
|
||||
((str,), Array(str)),
|
||||
({str: str}, Mapping(str)),
|
||||
(None, None),
|
||||
]
|
||||
for datatype, expected in tests:
|
||||
with self.subTest(datatype):
|
||||
array = Array(datatype)
|
||||
|
||||
self.assertEqual(array, Array(expected))
|
||||
|
||||
def test_normalized_transformed(self):
|
||||
calls = 0
|
||||
|
||||
class Spam:
|
||||
@classmethod
|
||||
def traverse(cls, op):
|
||||
nonlocal calls
|
||||
calls += 1
|
||||
return cls
|
||||
|
||||
array = Array(Spam)
|
||||
|
||||
self.assertIs(array.itemtype, Spam)
|
||||
self.assertEqual(calls, 1)
|
||||
|
||||
def test_traverse_noop(self):
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or dt)
|
||||
array = Array(Union(str, int))
|
||||
transformed = array.traverse(op)
|
||||
|
||||
self.assertIs(transformed, array)
|
||||
self.assertCountEqual(calls, [
|
||||
# Note that it did not recurse into Union(str, int).
|
||||
Union(str, int),
|
||||
])
|
||||
|
||||
def test_traverse_changed(self):
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or str)
|
||||
array = Array(ANY)
|
||||
transformed = array.traverse(op)
|
||||
|
||||
self.assertIsNot(transformed, array)
|
||||
self.assertEqual(transformed, Array(str))
|
||||
self.assertEqual(calls, [
|
||||
ANY,
|
||||
])
|
||||
|
||||
|
||||
class MappingTests(unittest.TestCase):
|
||||
|
||||
def test_normalized(self):
|
||||
tests = [
|
||||
(REF, TYPE_REFERENCE),
|
||||
({str, int}, Union(str, int)),
|
||||
(frozenset([str, int]), Union(str, int)),
|
||||
([str], Array(str)),
|
||||
((str,), Array(str)),
|
||||
({str: str}, Mapping(str)),
|
||||
(None, None),
|
||||
]
|
||||
for datatype, expected in tests:
|
||||
with self.subTest(datatype):
|
||||
mapping = Mapping(datatype)
|
||||
|
||||
self.assertEqual(mapping, Mapping(expected))
|
||||
|
||||
def test_normalized_transformed(self):
|
||||
calls = 0
|
||||
|
||||
class Spam:
|
||||
@classmethod
|
||||
def traverse(cls, op):
|
||||
nonlocal calls
|
||||
calls += 1
|
||||
return cls
|
||||
|
||||
mapping = Mapping(Spam)
|
||||
|
||||
self.assertIs(mapping.keytype, str)
|
||||
self.assertIs(mapping.valuetype, Spam)
|
||||
self.assertEqual(calls, 1)
|
||||
|
||||
def test_traverse_noop(self):
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or dt)
|
||||
mapping = Mapping(Union(str, int))
|
||||
transformed = mapping.traverse(op)
|
||||
|
||||
self.assertIs(transformed, mapping)
|
||||
self.assertCountEqual(calls, [
|
||||
str,
|
||||
# Note that it did not recurse into Union(str, int).
|
||||
Union(str, int),
|
||||
])
|
||||
|
||||
def test_traverse_changed(self):
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or str)
|
||||
mapping = Mapping(ANY)
|
||||
transformed = mapping.traverse(op)
|
||||
|
||||
self.assertIsNot(transformed, mapping)
|
||||
self.assertEqual(transformed, Mapping(str))
|
||||
self.assertEqual(calls, [
|
||||
str,
|
||||
ANY,
|
||||
])
|
||||
|
||||
|
||||
class FieldTests(unittest.TestCase):
|
||||
|
||||
def test_defaults(self):
|
||||
field = Field('spam')
|
||||
|
||||
self.assertEqual(field.name, 'spam')
|
||||
self.assertIs(field.datatype, str)
|
||||
self.assertIs(field.default, NOT_SET)
|
||||
self.assertFalse(field.optional)
|
||||
|
||||
def test_enum(self):
|
||||
field = Field('spam', str, enum=('a', 'b', 'c'))
|
||||
|
||||
self.assertEqual(field.datatype, Enum(str, ('a', 'b', 'c')))
|
||||
|
||||
def test_normalized(self):
|
||||
tests = [
|
||||
(REF, TYPE_REFERENCE),
|
||||
({str, int}, Union(str, int)),
|
||||
(frozenset([str, int]), Union(str, int)),
|
||||
([str], Array(str)),
|
||||
((str,), Array(str)),
|
||||
({str: str}, Mapping(str)),
|
||||
(None, None),
|
||||
]
|
||||
for datatype, expected in tests:
|
||||
with self.subTest(datatype):
|
||||
field = Field('spam', datatype)
|
||||
|
||||
self.assertEqual(field, Field('spam', expected))
|
||||
|
||||
def test_normalized_transformed(self):
|
||||
calls = 0
|
||||
|
||||
class Spam:
|
||||
@classmethod
|
||||
def traverse(cls, op):
|
||||
nonlocal calls
|
||||
calls += 1
|
||||
return cls
|
||||
|
||||
field = Field('spam', Spam)
|
||||
|
||||
self.assertIs(field.datatype, Spam)
|
||||
self.assertEqual(calls, 1)
|
||||
|
||||
def test_traverse_noop(self):
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or dt)
|
||||
field = Field('spam', Union(str, int))
|
||||
transformed = field.traverse(op)
|
||||
|
||||
self.assertIs(transformed, field)
|
||||
self.assertCountEqual(calls, [
|
||||
# Note that it did not recurse into Union(str, int).
|
||||
Union(str, int),
|
||||
])
|
||||
|
||||
def test_traverse_changed(self):
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or str)
|
||||
field = Field('spam', ANY)
|
||||
transformed = field.traverse(op)
|
||||
|
||||
self.assertIsNot(transformed, field)
|
||||
self.assertEqual(transformed, Field('spam', str))
|
||||
self.assertEqual(calls, [
|
||||
ANY,
|
||||
])
|
||||
|
||||
|
||||
class FieldsTests(unittest.TestCase):
|
||||
|
||||
def test_single(self):
|
||||
fields = Fields(
|
||||
Field('spam'),
|
||||
)
|
||||
|
||||
self.assertEqual(fields, [
|
||||
Field('spam'),
|
||||
])
|
||||
|
||||
def test_multiple(self):
|
||||
fields = Fields(
|
||||
Field('spam'),
|
||||
Field('ham'),
|
||||
Field('eggs'),
|
||||
)
|
||||
|
||||
self.assertEqual(fields, [
|
||||
Field('spam'),
|
||||
Field('ham'),
|
||||
Field('eggs'),
|
||||
])
|
||||
|
||||
def test_empty(self):
|
||||
fields = Fields()
|
||||
|
||||
self.assertCountEqual(fields, [])
|
||||
|
||||
def test_normalized(self):
|
||||
tests = [
|
||||
(REF, TYPE_REFERENCE),
|
||||
({str, int}, Union(str, int)),
|
||||
(frozenset([str, int]), Union(str, int)),
|
||||
([str], Array(str)),
|
||||
((str,), Array(str)),
|
||||
({str: str}, Mapping(str)),
|
||||
(None, None),
|
||||
]
|
||||
for datatype, expected in tests:
|
||||
with self.subTest(datatype):
|
||||
fields = Fields(
|
||||
Field('spam', datatype),
|
||||
)
|
||||
|
||||
self.assertEqual(fields, [
|
||||
Field('spam', expected),
|
||||
])
|
||||
|
||||
def test_with_START_OPTIONAL(self):
|
||||
fields = Fields(
|
||||
Field('spam'),
|
||||
Field('ham', optional=True),
|
||||
Field('eggs'),
|
||||
Field.START_OPTIONAL,
|
||||
Field('a'),
|
||||
Field('b', optional=False),
|
||||
)
|
||||
|
||||
self.assertEqual(fields, [
|
||||
Field('spam'),
|
||||
Field('ham', optional=True),
|
||||
Field('eggs'),
|
||||
Field('a', optional=True),
|
||||
Field('b', optional=True),
|
||||
])
|
||||
|
||||
def test_non_field(self):
|
||||
with self.assertRaises(TypeError):
|
||||
Fields(str)
|
||||
|
||||
def test_as_dict(self):
|
||||
fields = Fields(
|
||||
Field('spam', int),
|
||||
Field('ham'),
|
||||
Field('eggs', Array(str)),
|
||||
)
|
||||
result = fields.as_dict()
|
||||
|
||||
self.assertEqual(result, {
|
||||
'spam': fields[0],
|
||||
'ham': fields[1],
|
||||
'eggs': fields[2],
|
||||
})
|
||||
|
||||
def test_traverse_noop(self):
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or dt)
|
||||
fields = Fields(
|
||||
Field('spam'),
|
||||
Field('ham'),
|
||||
Field('eggs'),
|
||||
)
|
||||
transformed = fields.traverse(op)
|
||||
|
||||
self.assertIs(transformed, fields)
|
||||
self.assertCountEqual(calls, [
|
||||
# Note that it did not recurse into the fields.
|
||||
Field('spam'),
|
||||
Field('ham'),
|
||||
Field('eggs'),
|
||||
])
|
||||
|
||||
def test_traverse_changed(self):
|
||||
calls = []
|
||||
op = (lambda dt: calls.append(dt) or Field(dt.name, str))
|
||||
fields = Fields(
|
||||
Field('spam', ANY),
|
||||
Field('eggs', None),
|
||||
)
|
||||
transformed = fields.traverse(op)
|
||||
|
||||
self.assertIsNot(transformed, fields)
|
||||
self.assertEqual(transformed, Fields(
|
||||
Field('spam', str),
|
||||
Field('eggs', str),
|
||||
))
|
||||
self.assertEqual(calls, [
|
||||
Field('spam', ANY),
|
||||
Field('eggs', None),
|
||||
])
|
||||
|
|
@ -1,237 +0,0 @@
|
|||
from types import SimpleNamespace
|
||||
import unittest
|
||||
|
||||
from debugger_protocol.arg._param import Parameter, DatatypeHandler, Arg
|
||||
|
||||
from tests.helpers.stub import Stub
|
||||
|
||||
|
||||
class FakeHandler(DatatypeHandler):
|
||||
|
||||
def __init__(self, datatype=str, stub=None):
|
||||
super().__init__(datatype)
|
||||
self.stub = stub or Stub()
|
||||
self.returns = SimpleNamespace(
|
||||
coerce=None,
|
||||
as_data=None,
|
||||
)
|
||||
|
||||
def coerce(self, raw):
|
||||
self.stub.add_call('coerce', raw)
|
||||
self.stub.maybe_raise()
|
||||
return self.returns.coerce
|
||||
|
||||
def validate(self, coerced):
|
||||
self.stub.add_call('validate', coerced)
|
||||
self.stub.maybe_raise()
|
||||
|
||||
def as_data(self, coerced):
|
||||
self.stub.add_call('as_data', coerced)
|
||||
self.stub.maybe_raise()
|
||||
return self.returns.as_data
|
||||
|
||||
|
||||
class ParameterTests(unittest.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
super().setUp()
|
||||
self.stub = Stub()
|
||||
self.handler = FakeHandler(self.stub)
|
||||
|
||||
def test_bind_matched(self):
|
||||
param = Parameter(str, self.handler)
|
||||
arg = param.bind('spam')
|
||||
|
||||
self.assertEqual(arg, Arg(param, 'spam', self.handler))
|
||||
self.assertEqual(self.stub.calls, [])
|
||||
|
||||
def test_bind_no_match(self):
|
||||
param = Parameter(str)
|
||||
|
||||
arg = param.bind('spam')
|
||||
self.assertIs(arg, None)
|
||||
self.assertEqual(self.stub.calls, [])
|
||||
|
||||
def test_match_type_no_match(self):
|
||||
param = Parameter(str)
|
||||
matched = param.match_type('spam')
|
||||
|
||||
self.assertIs(matched, None)
|
||||
self.assertEqual(self.stub.calls, [])
|
||||
|
||||
def test_match_type_matched(self):
|
||||
param = Parameter(str, self.handler)
|
||||
matched = param.match_type('spam')
|
||||
|
||||
self.assertIs(matched, self.handler)
|
||||
self.assertEqual(self.stub.calls, [])
|
||||
|
||||
|
||||
class DatatypeHandlerTests(unittest.TestCase):
|
||||
|
||||
def test_coerce(self):
|
||||
handler = DatatypeHandler(str)
|
||||
coerced = handler.coerce('spam')
|
||||
|
||||
self.assertEqual(coerced, 'spam')
|
||||
|
||||
def test_validate(self):
|
||||
handler = DatatypeHandler(str)
|
||||
handler.validate('spam')
|
||||
|
||||
def test_as_data(self):
|
||||
handler = DatatypeHandler(str)
|
||||
data = handler.as_data('spam')
|
||||
|
||||
self.assertEqual(data, 'spam')
|
||||
|
||||
|
||||
class ArgTests(unittest.TestCase):
|
||||
|
||||
def setUp(self):
|
||||
super().setUp()
|
||||
self.stub = Stub()
|
||||
self.handler = FakeHandler(str, self.stub)
|
||||
self.param = Parameter(str, self.handler)
|
||||
|
||||
def test_raw_valid(self):
|
||||
self.handler.returns.coerce = 'eggs'
|
||||
arg = Arg(self.param, 'spam', self.handler)
|
||||
raw = arg.raw
|
||||
|
||||
self.assertEqual(raw, 'spam')
|
||||
self.assertEqual(self.stub.calls, [
|
||||
('coerce', ('spam',), {}),
|
||||
('validate', ('eggs',), {}),
|
||||
])
|
||||
|
||||
def test_raw_invalid(self):
|
||||
self.handler.returns.coerce = 'eggs'
|
||||
self.stub.set_exceptions(
|
||||
None,
|
||||
ValueError('oops'),
|
||||
)
|
||||
arg = Arg(self.param, 'spam', self.handler)
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
arg.raw
|
||||
self.assertEqual(self.stub.calls, [
|
||||
('coerce', ('spam',), {}),
|
||||
('validate', ('eggs',), {}),
|
||||
])
|
||||
|
||||
def test_raw_generated(self):
|
||||
self.handler.returns.as_data = 'spam'
|
||||
arg = Arg(self.param, 'eggs', self.handler, israw=False)
|
||||
raw = arg.raw
|
||||
|
||||
self.assertEqual(raw, 'spam')
|
||||
self.assertEqual(self.stub.calls, [
|
||||
('validate', ('eggs',), {}),
|
||||
('as_data', ('eggs',), {}),
|
||||
])
|
||||
|
||||
def test_value_valid(self):
|
||||
arg = Arg(self.param, 'eggs', self.handler, israw=False)
|
||||
value = arg.value
|
||||
|
||||
self.assertEqual(value, 'eggs')
|
||||
self.assertEqual(self.stub.calls, [
|
||||
('validate', ('eggs',), {}),
|
||||
])
|
||||
|
||||
def test_value_invalid(self):
|
||||
self.stub.set_exceptions(
|
||||
ValueError('oops'),
|
||||
)
|
||||
arg = Arg(self.param, 'eggs', self.handler, israw=False)
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
arg.value
|
||||
self.assertEqual(self.stub.calls, [
|
||||
('validate', ('eggs',), {}),
|
||||
])
|
||||
|
||||
def test_value_generated(self):
|
||||
self.handler.returns.coerce = 'eggs'
|
||||
arg = Arg(self.param, 'spam', self.handler)
|
||||
value = arg.value
|
||||
|
||||
self.assertEqual(value, 'eggs')
|
||||
self.assertEqual(self.stub.calls, [
|
||||
('coerce', ('spam',), {}),
|
||||
('validate', ('eggs',), {}),
|
||||
])
|
||||
|
||||
def test_coerce(self):
|
||||
self.handler.returns.coerce = 'eggs'
|
||||
arg = Arg(self.param, 'spam', self.handler)
|
||||
value = arg.coerce()
|
||||
|
||||
self.assertEqual(value, 'eggs')
|
||||
self.assertEqual(self.stub.calls, [
|
||||
('coerce', ('spam',), {}),
|
||||
])
|
||||
|
||||
def test_validate_okay(self):
|
||||
self.handler.returns.coerce = 'eggs'
|
||||
arg = Arg(self.param, 'spam', self.handler)
|
||||
arg.validate()
|
||||
|
||||
self.assertEqual(self.stub.calls, [
|
||||
('coerce', ('spam',), {}),
|
||||
('validate', ('eggs',), {}),
|
||||
])
|
||||
|
||||
def test_validate_invalid(self):
|
||||
self.stub.set_exceptions(
|
||||
None,
|
||||
ValueError('oops'),
|
||||
)
|
||||
self.handler.returns.coerce = 'eggs'
|
||||
arg = Arg(self.param, 'spam', self.handler)
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
arg.validate()
|
||||
self.assertEqual(self.stub.calls, [
|
||||
('coerce', ('spam',), {}),
|
||||
('validate', ('eggs',), {}),
|
||||
])
|
||||
|
||||
def test_validate_use_coerced(self):
|
||||
handler = FakeHandler()
|
||||
other = Arg(Parameter(str, handler), 'spam', handler, israw=False)
|
||||
arg = Arg(Parameter(str, self.handler), other, self.handler,
|
||||
israw=False)
|
||||
arg.validate()
|
||||
|
||||
self.assertEqual(self.stub.calls, [])
|
||||
self.assertEqual(handler.stub.calls, [
|
||||
('validate', ('spam',), {}),
|
||||
])
|
||||
|
||||
def test_as_data_use_handler(self):
|
||||
self.handler.returns.as_data = 'spam'
|
||||
arg = Arg(self.param, 'eggs', self.handler, israw=False)
|
||||
data = arg.as_data()
|
||||
|
||||
self.assertEqual(data, 'spam')
|
||||
self.assertEqual(self.stub.calls, [
|
||||
('validate', ('eggs',), {}),
|
||||
('as_data', ('eggs',), {}),
|
||||
])
|
||||
|
||||
def test_as_data_use_coerced(self):
|
||||
handler = FakeHandler()
|
||||
other = Arg(Parameter(str, handler), 'spam', handler, israw=False)
|
||||
handler.returns.as_data = 'spam'
|
||||
arg = Arg(Parameter(str, self.handler), other, self.handler,
|
||||
israw=False)
|
||||
data = arg.as_data(other)
|
||||
|
||||
self.assertEqual(data, 'spam')
|
||||
self.assertEqual(self.stub.calls, [])
|
||||
self.assertEqual(handler.stub.calls, [
|
||||
('validate', ('spam',), {}),
|
||||
('as_data', ('spam',), {}),
|
||||
])
|
||||
|
|
@ -1,800 +0,0 @@
|
|||
import unittest
|
||||
|
||||
from debugger_protocol.arg._common import NOT_SET, ANY
|
||||
from debugger_protocol.arg._decl import (
|
||||
Enum, Union, Array, Mapping, Field, Fields)
|
||||
from debugger_protocol.arg._param import Parameter, DatatypeHandler
|
||||
from debugger_protocol.arg._params import (
|
||||
param_from_datatype,
|
||||
NoopParameter, SingletonParameter,
|
||||
SimpleParameter, EnumParameter,
|
||||
UnionParameter, ArrayParameter, MappingParameter, ComplexParameter)
|
||||
|
||||
from ._common import FIELDS_BASIC, BASIC_FULL, Basic
|
||||
|
||||
|
||||
class String(str):
|
||||
pass
|
||||
|
||||
|
||||
class Integer(int):
|
||||
pass
|
||||
|
||||
|
||||
class ParamFromDatatypeTest(unittest.TestCase):
|
||||
|
||||
def test_supported(self):
|
||||
handler = DatatypeHandler(str)
|
||||
tests = [
|
||||
(Parameter(str), Parameter(str)),
|
||||
(handler, Parameter(str, handler)),
|
||||
(Fields(Field('spam')), ComplexParameter(Fields(Field('spam')))),
|
||||
(Field('spam'), SimpleParameter(str)),
|
||||
(Field('spam', str, enum={'a'}), EnumParameter(str, {'a'})),
|
||||
(ANY, NoopParameter()),
|
||||
(None, SingletonParameter(None)),
|
||||
(str, SimpleParameter(str)),
|
||||
(int, SimpleParameter(int)),
|
||||
(bool, SimpleParameter(bool)),
|
||||
(Enum(str, {'a'}), EnumParameter(str, {'a'})),
|
||||
(Union(str, int), UnionParameter(Union(str, int))),
|
||||
({str, int}, UnionParameter(Union(str, int))),
|
||||
(frozenset([str, int]), UnionParameter(Union(str, int))),
|
||||
(Array(str), ArrayParameter(Array(str))),
|
||||
([str], ArrayParameter(Array(str))),
|
||||
((str,), ArrayParameter(Array(str))),
|
||||
(Basic, ComplexParameter(Basic)),
|
||||
]
|
||||
for datatype, expected in tests:
|
||||
with self.subTest(datatype):
|
||||
param = param_from_datatype(datatype)
|
||||
|
||||
self.assertEqual(param, expected)
|
||||
|
||||
def test_not_supported(self):
|
||||
datatypes = [
|
||||
String('spam'),
|
||||
...,
|
||||
]
|
||||
for datatype in datatypes:
|
||||
with self.subTest(datatype):
|
||||
with self.assertRaises(NotImplementedError):
|
||||
param_from_datatype(datatype)
|
||||
|
||||
|
||||
class NoopParameterTests(unittest.TestCase):
|
||||
|
||||
VALUES = [
|
||||
object(),
|
||||
'spam',
|
||||
10,
|
||||
['spam'],
|
||||
{'spam': 42},
|
||||
True,
|
||||
None,
|
||||
NOT_SET,
|
||||
]
|
||||
|
||||
def test_match_type(self):
|
||||
values = [
|
||||
object(),
|
||||
'',
|
||||
'spam',
|
||||
b'spam',
|
||||
0,
|
||||
10,
|
||||
10.0,
|
||||
10+0j,
|
||||
('spam',),
|
||||
(),
|
||||
['spam'],
|
||||
[],
|
||||
{'spam': 42},
|
||||
{},
|
||||
{'spam'},
|
||||
set(),
|
||||
object,
|
||||
type,
|
||||
NoopParameterTests,
|
||||
True,
|
||||
None,
|
||||
...,
|
||||
NotImplemented,
|
||||
NOT_SET,
|
||||
ANY,
|
||||
Union(str, int),
|
||||
Union(),
|
||||
Array(str),
|
||||
Field('spam'),
|
||||
Fields(Field('spam')),
|
||||
Fields(),
|
||||
Basic,
|
||||
]
|
||||
for value in values:
|
||||
with self.subTest(value):
|
||||
param = NoopParameter()
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertIs(type(handler), DatatypeHandler)
|
||||
self.assertIs(handler.datatype, ANY)
|
||||
|
||||
def test_coerce(self):
|
||||
for value in self.VALUES:
|
||||
with self.subTest(value):
|
||||
param = NoopParameter()
|
||||
handler = param.match_type(value)
|
||||
coerced = handler.coerce(value)
|
||||
|
||||
self.assertIs(coerced, value)
|
||||
|
||||
def test_validate(self):
|
||||
for value in self.VALUES:
|
||||
with self.subTest(value):
|
||||
param = NoopParameter()
|
||||
handler = param.match_type(value)
|
||||
handler.validate(value)
|
||||
|
||||
def test_as_data(self):
|
||||
for value in self.VALUES:
|
||||
with self.subTest(value):
|
||||
param = NoopParameter()
|
||||
handler = param.match_type(value)
|
||||
data = handler.as_data(value)
|
||||
|
||||
self.assertIs(data, value)
|
||||
|
||||
|
||||
class SingletonParameterTests(unittest.TestCase):
|
||||
|
||||
def test_match_type_matched(self):
|
||||
param = SingletonParameter(None)
|
||||
handler = param.match_type(None)
|
||||
|
||||
self.assertIs(handler.datatype, None)
|
||||
|
||||
def test_match_type_no_match(self):
|
||||
tests = [
|
||||
# same type, different value
|
||||
('spam', 'eggs'),
|
||||
(10, 11),
|
||||
(True, False),
|
||||
# different type but equivalent
|
||||
('spam', b'spam'),
|
||||
(10, 10.0),
|
||||
(10, 10+0j),
|
||||
(10, '10'),
|
||||
(10, b'\10'),
|
||||
]
|
||||
for singleton, value in tests:
|
||||
with self.subTest((singleton, value)):
|
||||
param = SingletonParameter(singleton)
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertIs(handler, None)
|
||||
|
||||
def test_coerce(self):
|
||||
param = SingletonParameter(None)
|
||||
handler = param.match_type(None)
|
||||
value = handler.coerce(None)
|
||||
|
||||
self.assertIs(value, None)
|
||||
|
||||
def test_validate_valid(self):
|
||||
param = SingletonParameter(None)
|
||||
handler = param.match_type(None)
|
||||
handler.validate(None)
|
||||
|
||||
def test_validate_wrong_type(self):
|
||||
tests = [
|
||||
(None, True),
|
||||
(True, None),
|
||||
('spam', 10),
|
||||
(10, 'spam'),
|
||||
]
|
||||
for singleton, value in tests:
|
||||
with self.subTest(singleton):
|
||||
param = SingletonParameter(singleton)
|
||||
handler = param.match_type(singleton)
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
handler.validate(value)
|
||||
|
||||
def test_validate_same_type_wrong_value(self):
|
||||
tests = [
|
||||
('spam', 'eggs'),
|
||||
(True, False),
|
||||
(10, 11),
|
||||
]
|
||||
for singleton, value in tests:
|
||||
with self.subTest(singleton):
|
||||
param = SingletonParameter(singleton)
|
||||
handler = param.match_type(singleton)
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
handler.validate(value)
|
||||
|
||||
def test_as_data(self):
|
||||
param = SingletonParameter(None)
|
||||
handler = param.match_type(None)
|
||||
data = handler.as_data(None)
|
||||
|
||||
self.assertIs(data, None)
|
||||
|
||||
|
||||
class SimpleParameterTests(unittest.TestCase):
|
||||
|
||||
def test_match_type_match(self):
|
||||
tests = [
|
||||
(str, 'spam'),
|
||||
(str, String('spam')),
|
||||
(int, 10),
|
||||
(bool, True),
|
||||
]
|
||||
for datatype, value in tests:
|
||||
with self.subTest((datatype, value)):
|
||||
param = SimpleParameter(datatype, strict=False)
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertIs(handler.datatype, datatype)
|
||||
|
||||
def test_match_type_no_match(self):
|
||||
tests = [
|
||||
(int, 'spam'),
|
||||
# coercible
|
||||
(str, 10),
|
||||
(int, 10.0),
|
||||
(int, '10'),
|
||||
(bool, 1),
|
||||
# semi-coercible
|
||||
(str, b'spam'),
|
||||
(int, 10+0j),
|
||||
(int, b'\10'),
|
||||
]
|
||||
for datatype, value in tests:
|
||||
with self.subTest((datatype, value)):
|
||||
param = SimpleParameter(datatype, strict=False)
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertIs(handler, None)
|
||||
|
||||
def test_match_type_strict_match(self):
|
||||
tests = {
|
||||
str: 'spam',
|
||||
int: 10,
|
||||
bool: True,
|
||||
}
|
||||
for datatype, value in tests.items():
|
||||
with self.subTest(datatype):
|
||||
param = SimpleParameter(datatype, strict=True)
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertIs(handler.datatype, datatype)
|
||||
|
||||
def test_match_type_strict_no_match(self):
|
||||
tests = {
|
||||
str: String('spam'),
|
||||
int: Integer(10),
|
||||
}
|
||||
for datatype, value in tests.items():
|
||||
with self.subTest(datatype):
|
||||
param = SimpleParameter(datatype, strict=True)
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertIs(handler, None)
|
||||
|
||||
def test_coerce(self):
|
||||
tests = [
|
||||
(str, 'spam', 'spam'),
|
||||
(str, String('spam'), 'spam'),
|
||||
(int, 10, 10),
|
||||
(bool, True, True),
|
||||
# did not match, but still coercible
|
||||
(str, 10, '10'),
|
||||
(str, str, "<class 'str'>"),
|
||||
(int, 10.0, 10),
|
||||
(int, '10', 10),
|
||||
(bool, 1, True),
|
||||
]
|
||||
for datatype, value, expected in tests:
|
||||
with self.subTest((datatype, value)):
|
||||
handler = SimpleParameter.HANDLER(datatype)
|
||||
coerced = handler.coerce(value)
|
||||
|
||||
self.assertEqual(coerced, expected)
|
||||
|
||||
def test_validate_valid(self):
|
||||
tests = {
|
||||
str: 'spam',
|
||||
int: 10,
|
||||
bool: True,
|
||||
}
|
||||
for datatype, value in tests.items():
|
||||
with self.subTest(datatype):
|
||||
handler = SimpleParameter.HANDLER(datatype)
|
||||
handler.validate(value)
|
||||
|
||||
def test_validate_invalid(self):
|
||||
tests = [
|
||||
(int, 'spam'),
|
||||
# coercible
|
||||
(str, String('spam')),
|
||||
(str, 10),
|
||||
(int, 10.0),
|
||||
(int, '10'),
|
||||
(bool, 1),
|
||||
# semi-coercible
|
||||
(str, b'spam'),
|
||||
(int, 10+0j),
|
||||
(int, b'\10'),
|
||||
]
|
||||
for datatype, value in tests:
|
||||
with self.subTest((datatype, value)):
|
||||
handler = SimpleParameter.HANDLER(datatype)
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
handler.validate(value)
|
||||
|
||||
def test_as_data(self):
|
||||
tests = [
|
||||
(str, 'spam'),
|
||||
(int, 10),
|
||||
(bool, True),
|
||||
# did not match, but still coercible
|
||||
(str, String('spam')),
|
||||
(str, 10),
|
||||
(str, str),
|
||||
(int, 10.0),
|
||||
(int, '10'),
|
||||
(bool, 1),
|
||||
# semi-coercible
|
||||
(str, b'spam'),
|
||||
(int, 10+0j),
|
||||
(int, b'\10'),
|
||||
]
|
||||
for datatype, value in tests:
|
||||
with self.subTest((datatype, value)):
|
||||
handler = SimpleParameter.HANDLER(datatype)
|
||||
data = handler.as_data(value)
|
||||
|
||||
self.assertIs(data, value)
|
||||
|
||||
|
||||
class EnumParameterTests(unittest.TestCase):
|
||||
|
||||
def test_match_type_match(self):
|
||||
tests = [
|
||||
(str, ('spam', 'eggs'), 'spam'),
|
||||
(str, ('spam',), 'spam'),
|
||||
(int, (1, 2, 3), 2),
|
||||
(bool, (True,), True),
|
||||
]
|
||||
for datatype, enum, value in tests:
|
||||
with self.subTest((datatype, enum)):
|
||||
param = EnumParameter(datatype, enum)
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertIs(handler.datatype, datatype)
|
||||
|
||||
def test_match_type_no_match(self):
|
||||
tests = [
|
||||
# enum mismatch
|
||||
(str, ('spam', 'eggs'), 'ham'),
|
||||
(int, (1, 2, 3), 10),
|
||||
# type mismatch
|
||||
(int, (1, 2, 3), 'spam'),
|
||||
# coercible
|
||||
(str, ('spam', 'eggs'), String('spam')),
|
||||
(str, ('1', '2', '3'), 2),
|
||||
(int, (1, 2, 3), 2.0),
|
||||
(int, (1, 2, 3), '2'),
|
||||
(bool, (True,), 1),
|
||||
# semi-coercible
|
||||
(str, ('spam', 'eggs'), b'spam'),
|
||||
(int, (1, 2, 3), 2+0j),
|
||||
(int, (1, 2, 3), b'\02'),
|
||||
]
|
||||
for datatype, enum, value in tests:
|
||||
with self.subTest((datatype, enum, value)):
|
||||
param = EnumParameter(datatype, enum)
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertIs(handler, None)
|
||||
|
||||
def test_coerce(self):
|
||||
tests = [
|
||||
(str, 'spam', 'spam'),
|
||||
(int, 10, 10),
|
||||
(bool, True, True),
|
||||
# did not match, but still coercible
|
||||
(str, String('spam'), 'spam'),
|
||||
(str, 10, '10'),
|
||||
(str, str, "<class 'str'>"),
|
||||
(int, 10.0, 10),
|
||||
(int, '10', 10),
|
||||
(bool, 1, True),
|
||||
]
|
||||
for datatype, value, expected in tests:
|
||||
with self.subTest((datatype, value)):
|
||||
enum = (expected,)
|
||||
handler = EnumParameter.HANDLER(datatype, enum)
|
||||
coerced = handler.coerce(value)
|
||||
|
||||
self.assertEqual(coerced, expected)
|
||||
|
||||
def test_coerce_enum_mismatch(self):
|
||||
enum = ('spam', 'eggs')
|
||||
handler = EnumParameter.HANDLER(str, enum)
|
||||
coerced = handler.coerce('ham')
|
||||
|
||||
# It still works.
|
||||
self.assertEqual(coerced, 'ham')
|
||||
|
||||
def test_validate_valid(self):
|
||||
tests = [
|
||||
(str, ('spam', 'eggs'), 'spam'),
|
||||
(str, ('spam',), 'spam'),
|
||||
(int, (1, 2, 3), 2),
|
||||
(bool, (True, False), True),
|
||||
]
|
||||
for datatype, enum, value in tests:
|
||||
with self.subTest((datatype, enum)):
|
||||
handler = EnumParameter.HANDLER(datatype, enum)
|
||||
handler.validate(value)
|
||||
|
||||
def test_validate_invalid(self):
|
||||
tests = [
|
||||
# enum mismatch
|
||||
(str, ('spam', 'eggs'), 'ham'),
|
||||
(int, (1, 2, 3), 10),
|
||||
# type mismatch
|
||||
(int, (1, 2, 3), 'spam'),
|
||||
# coercible
|
||||
(str, ('spam', 'eggs'), String('spam')),
|
||||
(str, ('1', '2', '3'), 2),
|
||||
(int, (1, 2, 3), 2.0),
|
||||
(int, (1, 2, 3), '2'),
|
||||
(bool, (True,), 1),
|
||||
# semi-coercible
|
||||
(str, ('spam', 'eggs'), b'spam'),
|
||||
(int, (1, 2, 3), 2+0j),
|
||||
(int, (1, 2, 3), b'\02'),
|
||||
]
|
||||
for datatype, enum, value in tests:
|
||||
with self.subTest((datatype, enum, value)):
|
||||
handler = EnumParameter.HANDLER(datatype, enum)
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
handler.validate(value)
|
||||
|
||||
def test_as_data(self):
|
||||
tests = [
|
||||
(str, ('spam', 'eggs'), 'spam'),
|
||||
(str, ('spam',), 'spam'),
|
||||
(int, (1, 2, 3), 2),
|
||||
(bool, (True,), True),
|
||||
# enum mismatch
|
||||
(str, ('spam', 'eggs'), 'ham'),
|
||||
(int, (1, 2, 3), 10),
|
||||
# type mismatch
|
||||
(int, (1, 2, 3), 'spam'),
|
||||
# coercible
|
||||
(str, ('spam', 'eggs'), String('spam')),
|
||||
(str, ('1', '2', '3'), 2),
|
||||
(int, (1, 2, 3), 2.0),
|
||||
(int, (1, 2, 3), '2'),
|
||||
(bool, (True,), 1),
|
||||
# semi-coercible
|
||||
(str, ('spam', 'eggs'), b'spam'),
|
||||
(int, (1, 2, 3), 2+0j),
|
||||
(int, (1, 2, 3), b'\02'),
|
||||
]
|
||||
for datatype, enum, value in tests:
|
||||
with self.subTest((datatype, enum, value)):
|
||||
handler = EnumParameter.HANDLER(datatype, enum)
|
||||
data = handler.as_data(value)
|
||||
|
||||
self.assertIs(data, value)
|
||||
|
||||
|
||||
class UnionParameterTests(unittest.TestCase):
|
||||
|
||||
def test_match_type_all_simple(self):
|
||||
tests = [
|
||||
'spam',
|
||||
10,
|
||||
True,
|
||||
]
|
||||
datatype = Union(str, int, bool)
|
||||
param = UnionParameter(datatype)
|
||||
for value in tests:
|
||||
with self.subTest(value):
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertIs(type(handler), SimpleParameter.HANDLER)
|
||||
self.assertIs(handler.datatype, type(value))
|
||||
|
||||
def test_match_type_mixed(self):
|
||||
datatype = Union(
|
||||
str,
|
||||
# XXX add dedicated enums
|
||||
Enum(int, (1, 2, 3)),
|
||||
Basic,
|
||||
Array(str),
|
||||
Array(int),
|
||||
Union(int, bool),
|
||||
)
|
||||
param = UnionParameter(datatype)
|
||||
|
||||
tests = [
|
||||
('spam', SimpleParameter.HANDLER(str)),
|
||||
(2, EnumParameter.HANDLER(int, (1, 2, 3))),
|
||||
(BASIC_FULL, ComplexParameter(Basic).match_type(BASIC_FULL)),
|
||||
(['spam'], ArrayParameter.HANDLER(Array(str))),
|
||||
([], ArrayParameter.HANDLER(Array(str))),
|
||||
([10], ArrayParameter.HANDLER(Array(int))),
|
||||
(10, SimpleParameter.HANDLER(int)),
|
||||
(True, SimpleParameter.HANDLER(bool)),
|
||||
# no match
|
||||
(Integer(2), None),
|
||||
([True], None),
|
||||
({}, None),
|
||||
]
|
||||
for value, expected in tests:
|
||||
with self.subTest(value):
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertEqual(handler, expected)
|
||||
|
||||
def test_match_type_catchall(self):
|
||||
NOOP = DatatypeHandler(ANY)
|
||||
param = UnionParameter(Union(int, str, ANY))
|
||||
tests = [
|
||||
('spam', SimpleParameter.HANDLER(str)),
|
||||
(10, SimpleParameter.HANDLER(int)),
|
||||
# catchall
|
||||
(BASIC_FULL, NOOP),
|
||||
(['spam'], NOOP),
|
||||
(True, NOOP),
|
||||
(Integer(2), NOOP),
|
||||
([10], NOOP),
|
||||
({}, NOOP),
|
||||
]
|
||||
for value, expected in tests:
|
||||
with self.subTest(value):
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertEqual(handler, expected)
|
||||
|
||||
def test_match_type_no_match(self):
|
||||
param = UnionParameter(Union(int, str))
|
||||
values = [
|
||||
BASIC_FULL,
|
||||
['spam'],
|
||||
True,
|
||||
Integer(2),
|
||||
[10],
|
||||
{},
|
||||
]
|
||||
for value in values:
|
||||
with self.subTest(value):
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertIs(handler, None)
|
||||
|
||||
|
||||
class ArrayParameterTests(unittest.TestCase):
|
||||
|
||||
def test_match_type_match(self):
|
||||
param = ArrayParameter(Array(str))
|
||||
expected = ArrayParameter.HANDLER(Array(str))
|
||||
values = [
|
||||
['a', 'b', 'c'],
|
||||
[],
|
||||
]
|
||||
for value in values:
|
||||
with self.subTest(value):
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertEqual(handler, expected)
|
||||
|
||||
def test_match_type_no_match(self):
|
||||
param = ArrayParameter(Array(str))
|
||||
values = [
|
||||
['a', 1, 'c'],
|
||||
('a', 'b', 'c'),
|
||||
'spam',
|
||||
]
|
||||
for value in values:
|
||||
with self.subTest(value):
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertIs(handler, None)
|
||||
|
||||
def test_coerce_simple(self):
|
||||
param = ArrayParameter(Array(str))
|
||||
values = [
|
||||
['a', 'b', 'c'],
|
||||
[],
|
||||
]
|
||||
for value in values:
|
||||
with self.subTest(value):
|
||||
handler = param.match_type(value)
|
||||
coerced = handler.coerce(value)
|
||||
|
||||
self.assertEqual(coerced, value)
|
||||
|
||||
def test_coerce_complicated(self):
|
||||
param = ArrayParameter(Array(Union(str, Basic)))
|
||||
value = [
|
||||
'a',
|
||||
BASIC_FULL,
|
||||
'c',
|
||||
]
|
||||
handler = param.match_type(value)
|
||||
coerced = handler.coerce(value)
|
||||
|
||||
self.assertEqual(coerced, [
|
||||
'a',
|
||||
Basic(name='spam', value='eggs'),
|
||||
'c',
|
||||
])
|
||||
|
||||
def test_validate(self):
|
||||
param = ArrayParameter(Array(str))
|
||||
handler = param.match_type(['a', 'b', 'c'])
|
||||
handler.validate(['a', 'b', 'c'])
|
||||
|
||||
def test_as_data_simple(self):
|
||||
param = ArrayParameter(Array(str))
|
||||
handler = param.match_type(['a', 'b', 'c'])
|
||||
data = handler.as_data(['a', 'b', 'c'])
|
||||
|
||||
self.assertEqual(data, ['a', 'b', 'c'])
|
||||
|
||||
def test_as_data_complicated(self):
|
||||
param = ArrayParameter(Array(Union(str, Basic)))
|
||||
value = [
|
||||
'a',
|
||||
BASIC_FULL,
|
||||
'c',
|
||||
]
|
||||
handler = param.match_type(value)
|
||||
data = handler.as_data([
|
||||
'a',
|
||||
Basic(name='spam', value='eggs'),
|
||||
'c',
|
||||
])
|
||||
|
||||
self.assertEqual(data, value)
|
||||
|
||||
|
||||
class MappingParameterTests(unittest.TestCase):
|
||||
|
||||
def test_match_type_match(self):
|
||||
param = MappingParameter(Mapping(int))
|
||||
expected = MappingParameter.HANDLER(Mapping(int))
|
||||
values = [
|
||||
{'a': 1, 'b': 2, 'c': 3},
|
||||
{},
|
||||
]
|
||||
for value in values:
|
||||
with self.subTest(value):
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertEqual(handler, expected)
|
||||
|
||||
def test_match_type_no_match(self):
|
||||
param = MappingParameter(Mapping(int))
|
||||
values = [
|
||||
{'a': 1, 'b': '2', 'c': 3},
|
||||
[('a', 1), ('b', 2), ('c', 3)],
|
||||
'spam',
|
||||
]
|
||||
for value in values:
|
||||
with self.subTest(value):
|
||||
handler = param.match_type(value)
|
||||
|
||||
self.assertIs(handler, None)
|
||||
|
||||
def test_coerce_simple(self):
|
||||
param = MappingParameter(Mapping(int))
|
||||
values = [
|
||||
{'a': 1, 'b': 2, 'c': 3},
|
||||
{},
|
||||
]
|
||||
for value in values:
|
||||
with self.subTest(value):
|
||||
handler = param.match_type(value)
|
||||
coerced = handler.coerce(value)
|
||||
|
||||
self.assertEqual(coerced, value)
|
||||
|
||||
def test_coerce_complicated(self):
|
||||
param = MappingParameter(Mapping(Union(int, Basic)))
|
||||
value = {
|
||||
'a': 1,
|
||||
'b': BASIC_FULL,
|
||||
'c': 3,
|
||||
}
|
||||
handler = param.match_type(value)
|
||||
coerced = handler.coerce(value)
|
||||
|
||||
self.assertEqual(coerced, {
|
||||
'a': 1,
|
||||
'b': Basic(name='spam', value='eggs'),
|
||||
'c': 3,
|
||||
})
|
||||
|
||||
def test_validate(self):
|
||||
raw = {'a': 1, 'b': 2, 'c': 3}
|
||||
param = MappingParameter(Mapping(int))
|
||||
handler = param.match_type(raw)
|
||||
handler.validate(raw)
|
||||
|
||||
def test_as_data_simple(self):
|
||||
raw = {'a': 1, 'b': 2, 'c': 3}
|
||||
param = MappingParameter(Mapping(int))
|
||||
handler = param.match_type(raw)
|
||||
data = handler.as_data(raw)
|
||||
|
||||
self.assertEqual(data, raw)
|
||||
|
||||
def test_as_data_complicated(self):
|
||||
param = MappingParameter(Mapping(Union(int, Basic)))
|
||||
value = {
|
||||
'a': 1,
|
||||
'b': BASIC_FULL,
|
||||
'c': 3,
|
||||
}
|
||||
handler = param.match_type(value)
|
||||
data = handler.as_data({
|
||||
'a': 1,
|
||||
'b': Basic(name='spam', value='eggs'),
|
||||
'c': 3,
|
||||
})
|
||||
|
||||
self.assertEqual(data, value)
|
||||
|
||||
|
||||
class ComplexParameterTests(unittest.TestCase):
|
||||
|
||||
def test_match_type_none_missing(self):
|
||||
fields = Fields(*FIELDS_BASIC)
|
||||
param = ComplexParameter(fields)
|
||||
handler = param.match_type(BASIC_FULL)
|
||||
|
||||
self.assertIs(type(handler), ComplexParameter.HANDLER)
|
||||
self.assertEqual(handler.datatype.FIELDS, fields)
|
||||
|
||||
def test_match_type_missing_optional(self):
|
||||
fields = Fields(
|
||||
Field('name'),
|
||||
Field.START_OPTIONAL,
|
||||
Field('value'),
|
||||
)
|
||||
param = ComplexParameter(fields)
|
||||
handler = param.match_type({'name': 'spam'})
|
||||
|
||||
self.assertIs(type(handler), ComplexParameter.HANDLER)
|
||||
self.assertEqual(handler.datatype.FIELDS, fields)
|
||||
self.assertNotIn('value', handler.handlers)
|
||||
|
||||
def test_coerce(self):
|
||||
handler = ComplexParameter.HANDLER(Basic)
|
||||
coerced = handler.coerce(BASIC_FULL)
|
||||
|
||||
self.assertEqual(coerced, Basic(**BASIC_FULL))
|
||||
|
||||
def test_validate(self):
|
||||
handler = ComplexParameter.HANDLER(Basic)
|
||||
handler.coerce(BASIC_FULL)
|
||||
coerced = Basic(**BASIC_FULL)
|
||||
handler.validate(coerced)
|
||||
|
||||
def test_as_data(self):
|
||||
handler = ComplexParameter.HANDLER(Basic)
|
||||
handler.coerce(BASIC_FULL)
|
||||
coerced = Basic(**BASIC_FULL)
|
||||
data = handler.as_data(coerced)
|
||||
|
||||
self.assertEqual(data, BASIC_FULL)
|
||||
@@ -1,356 +0,0 @@
import unittest
|
||||
|
||||
from debugger_protocol.messages import events
|
||||
|
||||
|
||||
class StringLike:
|
||||
|
||||
def __init__(self, value):
|
||||
self.value = value
|
||||
|
||||
def __str__(self):
|
||||
return self.value
|
||||
|
||||
|
||||
class EventsTests(unittest.TestCase):
|
||||
|
||||
def test_implicit___all__(self):
|
||||
names = set(name
|
||||
for name in vars(events)
|
||||
if not name.startswith('__'))
|
||||
|
||||
self.assertEqual(names, {
|
||||
'InitializedEvent',
|
||||
'StoppedEvent',
|
||||
'ContinuedEvent',
|
||||
'ExitedEvent',
|
||||
'TerminatedEvent',
|
||||
'ThreadEvent',
|
||||
'OutputEvent',
|
||||
'BreakpointEvent',
|
||||
'ModuleEvent',
|
||||
'LoadedSourceEvent',
|
||||
'ProcessEvent',
|
||||
})
|
||||
|
||||
|
||||
class TestBase:
|
||||
|
||||
NAME = None
|
||||
EVENT = None
|
||||
BODY = None
|
||||
BODY_MIN = None
|
||||
|
||||
def test_event_full(self):
|
||||
event = self.EVENT(self.BODY, seq=9)
|
||||
|
||||
self.assertEqual(event.event, self.NAME)
|
||||
self.assertEqual(event.body, self.BODY)
|
||||
|
||||
def test_event_minimal(self):
|
||||
event = self.EVENT(self.BODY_MIN, seq=9)
|
||||
|
||||
self.assertEqual(event.body, self.BODY_MIN)
|
||||
|
||||
def test_event_empty_body(self):
|
||||
if self.BODY_MIN:
|
||||
with self.assertRaises(TypeError):
|
||||
self.EVENT({}, seq=9)
|
||||
|
||||
def test_from_data(self):
|
||||
event = self.EVENT.from_data(
|
||||
type='event',
|
||||
seq=9,
|
||||
event=self.NAME,
|
||||
body=self.BODY,
|
||||
)
|
||||
|
||||
self.assertEqual(event.body, self.BODY)
|
||||
|
||||
def test_as_data(self):
|
||||
event = self.EVENT(self.BODY, seq=9)
|
||||
data = event.as_data()
|
||||
|
||||
self.assertEqual(data, {
|
||||
'type': 'event',
|
||||
'seq': 9,
|
||||
'event': self.NAME,
|
||||
'body': self.BODY,
|
||||
})
|
||||
|
||||
|
||||
class InitializedEventTests(unittest.TestCase):
|
||||
|
||||
def test_event(self):
|
||||
event = events.InitializedEvent(seq=9)
|
||||
|
||||
self.assertEqual(event.event, 'initialized')
|
||||
|
||||
|
||||
class StoppedEventTests(TestBase, unittest.TestCase):
|
||||
|
||||
NAME = 'stopped'
|
||||
EVENT = events.StoppedEvent
|
||||
BODY = {
|
||||
'reason': 'step',
|
||||
'description': 'descr',
|
||||
'threadId': 10,
|
||||
'text': '...',
|
||||
'allThreadsStopped': False,
|
||||
}
|
||||
BODY_MIN = {
|
||||
'reason': 'step',
|
||||
}
|
||||
|
||||
def test_reasons(self):
|
||||
for reason in events.StoppedEvent.BODY.REASONS:
|
||||
with self.subTest(reason):
|
||||
body = {
|
||||
'reason': reason,
|
||||
}
|
||||
event = events.StoppedEvent(body, seq=9)
|
||||
|
||||
self.assertEqual(event.body.reason, reason)
|
||||
|
||||
|
||||
class ContinuedEventTests(TestBase, unittest.TestCase):
|
||||
|
||||
NAME = 'continued'
|
||||
EVENT = events.ContinuedEvent
|
||||
BODY = {
|
||||
'threadId': 10,
|
||||
'allThreadsContinued': True,
|
||||
}
|
||||
BODY_MIN = {
|
||||
'threadId': 10,
|
||||
}
|
||||
|
||||
|
||||
class ExitedEventTests(TestBase, unittest.TestCase):
|
||||
|
||||
NAME = 'exited'
|
||||
EVENT = events.ExitedEvent
|
||||
BODY = {
|
||||
'exitCode': 0,
|
||||
}
|
||||
BODY_MIN = BODY
|
||||
|
||||
|
||||
class TerminatedEventTests(TestBase, unittest.TestCase):
|
||||
|
||||
NAME = 'terminated'
|
||||
EVENT = events.TerminatedEvent
|
||||
BODY = {
|
||||
'restart': True,
|
||||
}
|
||||
BODY_MIN = {}
|
||||
|
||||
|
||||
class ThreadEventTests(TestBase, unittest.TestCase):
|
||||
|
||||
NAME = 'thread'
|
||||
EVENT = events.ThreadEvent
|
||||
BODY = {
|
||||
'threadId': 10,
|
||||
'reason': 'exited',
|
||||
}
|
||||
BODY_MIN = BODY
|
||||
|
||||
def test_reasons(self):
|
||||
for reason in self.EVENT.BODY.REASONS:
|
||||
with self.subTest(reason):
|
||||
body = {
|
||||
'threadId': 10,
|
||||
'reason': reason,
|
||||
}
|
||||
event = self.EVENT(body, seq=9)
|
||||
|
||||
self.assertEqual(event.body.reason, reason)
|
||||
|
||||
|
||||
class OutputEventTests(TestBase, unittest.TestCase):
|
||||
|
||||
NAME = 'output'
|
||||
EVENT = events.OutputEvent
|
||||
BODY = {
|
||||
'output': '...',
|
||||
'category': 'stdout',
|
||||
'variablesReference': 10,
|
||||
'source': '...',
|
||||
'line': 11,
|
||||
'column': 12,
|
||||
'data': None,
|
||||
}
|
||||
BODY_MIN = {
|
||||
'output': '...',
|
||||
}
|
||||
|
||||
def test_categories(self):
|
||||
for category in self.EVENT.BODY.CATEGORIES:
|
||||
with self.subTest(category):
|
||||
body = dict(self.BODY, **{
|
||||
'category': category,
|
||||
})
|
||||
event = self.EVENT(body, seq=9)
|
||||
|
||||
self.assertEqual(event.body.category, category)
|
||||
|
||||
|
||||
class BreakpointEventTests(TestBase, unittest.TestCase):
|
||||
|
||||
NAME = 'breakpoint'
|
||||
EVENT = events.BreakpointEvent
|
||||
BODY = {
|
||||
'breakpoint': {
|
||||
'id': 10,
|
||||
'verified': True,
|
||||
'message': '...',
|
||||
'source': {
|
||||
'name': '...',
|
||||
'path': '...',
|
||||
'sourceReference': 15,
|
||||
'presentationHint': 'normal',
|
||||
'origin': '...',
|
||||
'sources': [
|
||||
{'name': '...'},
|
||||
],
|
||||
'adapterData': None,
|
||||
'checksums': [
|
||||
{'algorithm': 'MD5', 'checksum': '...'},
|
||||
],
|
||||
},
|
||||
'line': 11,
|
||||
'column': 12,
|
||||
'endLine': 11,
|
||||
'endColumn': 12,
|
||||
},
|
||||
'reason': 'new',
|
||||
}
|
||||
BODY_MIN = {
|
||||
'breakpoint': {
|
||||
'id': 10,
|
||||
'verified': True,
|
||||
},
|
||||
'reason': 'new',
|
||||
}
|
||||
|
||||
def test_reasons(self):
|
||||
for reason in self.EVENT.BODY.REASONS:
|
||||
with self.subTest(reason):
|
||||
body = dict(self.BODY, **{
|
||||
'reason': reason,
|
||||
})
|
||||
event = self.EVENT(body, seq=9)
|
||||
|
||||
self.assertEqual(event.body.reason, reason)
|
||||
|
||||
|
||||
class ModuleEventTests(TestBase, unittest.TestCase):
|
||||
|
||||
NAME = 'module'
|
||||
EVENT = events.ModuleEvent
|
||||
BODY = {
|
||||
'module': {
|
||||
'id': 10,
|
||||
'name': '...',
|
||||
'path': '...',
|
||||
'isOptimized': False,
|
||||
'isUserCode': True,
|
||||
'version': '...',
|
||||
'symbolStatus': '...',
|
||||
'symbolFilePath': '...',
|
||||
'dateTimeStamp': '...',
|
||||
'addressRange': '...',
|
||||
},
|
||||
'reason': 'new',
|
||||
}
|
||||
BODY_MIN = {
|
||||
'module': {
|
||||
'id': 10,
|
||||
'name': '...',
|
||||
},
|
||||
'reason': 'new',
|
||||
}
|
||||
|
||||
def test_reasons(self):
|
||||
for reason in self.EVENT.BODY.REASONS:
|
||||
with self.subTest(reason):
|
||||
body = dict(self.BODY, **{
|
||||
'reason': reason,
|
||||
})
|
||||
event = self.EVENT(body, seq=9)
|
||||
|
||||
self.assertEqual(event.body.reason, reason)
|
||||
|
||||
|
||||
class LoadedSourceEventTests(TestBase, unittest.TestCase):
|
||||
|
||||
NAME = 'loadedSource'
|
||||
EVENT = events.LoadedSourceEvent
|
||||
BODY = {
|
||||
'source': {
|
||||
'name': '...',
|
||||
'path': '...',
|
||||
'sourceReference': 15,
|
||||
'presentationHint': 'normal',
|
||||
'origin': '...',
|
||||
'sources': [
|
||||
{'name': '...'},
|
||||
],
|
||||
'adapterData': None,
|
||||
'checksums': [
|
||||
{'algorithm': 'MD5', 'checksum': '...'},
|
||||
],
|
||||
},
|
||||
'reason': 'new',
|
||||
}
|
||||
BODY_MIN = {
|
||||
'source': {},
|
||||
'reason': 'new',
|
||||
}
|
||||
|
||||
def test_reasons(self):
|
||||
for reason in self.EVENT.BODY.REASONS:
|
||||
with self.subTest(reason):
|
||||
body = dict(self.BODY, **{
|
||||
'reason': reason,
|
||||
})
|
||||
event = self.EVENT(body, seq=9)
|
||||
|
||||
self.assertEqual(event.body.reason, reason)
|
||||
|
||||
def test_hints(self):
|
||||
for hint in self.EVENT.BODY.FIELDS[0].datatype.HINTS:
|
||||
with self.subTest(hint):
|
||||
body = dict(self.BODY)
|
||||
body['source'].update(**{
|
||||
'presentationHint': hint,
|
||||
})
|
||||
event = self.EVENT(body, seq=9)
|
||||
|
||||
self.assertEqual(event.body.source.presentationHint, hint)
|
||||
|
||||
|
||||
class ProcessEventTests(TestBase, unittest.TestCase):
|
||||
|
||||
NAME = 'process'
|
||||
EVENT = events.ProcessEvent
|
||||
BODY = {
|
||||
'name': '...',
|
||||
'systemProcessId': 10,
|
||||
'isLocalProcess': True,
|
||||
'startMethod': 'launch',
|
||||
}
|
||||
BODY_MIN = {
|
||||
'name': '...',
|
||||
}
|
||||
|
||||
def test_start_methods(self):
|
||||
for method in self.EVENT.BODY.START_METHODS:
|
||||
with self.subTest(method):
|
||||
body = dict(self.BODY, **{
|
||||
'startMethod': method,
|
||||
})
|
||||
event = self.EVENT(body, seq=9)
|
||||
|
||||
self.assertEqual(event.body.startMethod, method)
|
||||
@@ -1,936 +0,0 @@
import unittest
|
||||
|
||||
from debugger_protocol.arg import FieldsNamespace, Field
|
||||
from debugger_protocol.messages import register
|
||||
from debugger_protocol.messages.message import (
|
||||
ProtocolMessage, Request, Response, Event)
|
||||
|
||||
|
||||
@register
|
||||
class DummyRequest(object):
|
||||
TYPE = 'request'
|
||||
TYPE_KEY = 'command'
|
||||
COMMAND = '...'
|
||||
|
||||
|
||||
@register
|
||||
class DummyResponse(object):
|
||||
TYPE = 'response'
|
||||
TYPE_KEY = 'command'
|
||||
COMMAND = '...'
|
||||
|
||||
|
||||
@register
|
||||
class DummyEvent(object):
|
||||
TYPE = 'event'
|
||||
TYPE_KEY = 'event'
|
||||
EVENT = '...'
|
||||
|
||||
|
||||
class FakeMsg(ProtocolMessage):
|
||||
|
||||
SEQ = 0
|
||||
|
||||
@classmethod
|
||||
def _next_reqid(cls):
|
||||
return cls.SEQ
|
||||
|
||||
|
||||
class ProtocolMessageTests(unittest.TestCase):
|
||||
|
||||
def test_from_data(self):
|
||||
data = {
|
||||
'type': 'event',
|
||||
'seq': 10,
|
||||
}
|
||||
msg = ProtocolMessage.from_data(**data)
|
||||
|
||||
self.assertEqual(msg.type, 'event')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
|
||||
def test_defaults(self): # no args
|
||||
class Spam(FakeMsg):
|
||||
SEQ = 10
|
||||
TYPE = 'event'
|
||||
|
||||
msg = Spam()
|
||||
|
||||
self.assertEqual(msg.type, 'event')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
|
||||
def test_all_args(self):
|
||||
msg = ProtocolMessage(10, type='event')
|
||||
|
||||
self.assertEqual(msg.type, 'event')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
|
||||
def test_coercion_seq(self):
|
||||
msg = ProtocolMessage('10', type='event')
|
||||
|
||||
self.assertEqual(msg.seq, 10)
|
||||
|
||||
def test_validation(self):
|
||||
# type
|
||||
|
||||
with self.assertRaises(TypeError):
|
||||
ProtocolMessage(type=None)
|
||||
with self.assertRaises(ValueError):
|
||||
ProtocolMessage(type='spam')
|
||||
|
||||
class Other(ProtocolMessage):
|
||||
TYPE = 'spam'
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
Other(type='event')
|
||||
|
||||
# seq
|
||||
|
||||
with self.assertRaises(TypeError):
|
||||
ProtocolMessage(None, type='event')
|
||||
with self.assertRaises(ValueError):
|
||||
ProtocolMessage(-1, type='event')
|
||||
|
||||
def test_readonly(self):
|
||||
msg = ProtocolMessage(10, type='event')
|
||||
|
||||
with self.assertRaises(AttributeError):
|
||||
msg.seq = 11
|
||||
with self.assertRaises(AttributeError):
|
||||
msg.type = 'event'
|
||||
with self.assertRaises(AttributeError):
|
||||
msg.spam = object()
|
||||
with self.assertRaises(AttributeError):
|
||||
del msg.seq
|
||||
|
||||
def test_repr(self):
|
||||
msg = ProtocolMessage(10, type='event')
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result, "ProtocolMessage(type='event', seq=10)")
|
||||
|
||||
def test_repr_subclass(self):
|
||||
class Eventish(ProtocolMessage):
|
||||
TYPE = 'event'
|
||||
|
||||
msg = Eventish(10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result, 'Eventish(seq=10)')
|
||||
|
||||
def test_as_data(self):
|
||||
msg = ProtocolMessage(10, type='event')
|
||||
data = msg.as_data()
|
||||
|
||||
self.assertEqual(data, {
|
||||
'type': 'event',
|
||||
'seq': 10,
|
||||
})
|
||||
|
||||
|
||||
class RequestTests(unittest.TestCase):
|
||||
|
||||
def test_from_data_without_arguments(self):
|
||||
data = {
|
||||
'type': 'request',
|
||||
'seq': 10,
|
||||
'command': 'spam',
|
||||
}
|
||||
msg = Request.from_data(**data)
|
||||
|
||||
self.assertEqual(msg.type, 'request')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.command, 'spam')
|
||||
self.assertIsNone(msg.arguments)
|
||||
|
||||
def test_from_data_with_arguments(self):
|
||||
class Spam(Request):
|
||||
class ARGUMENTS(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
data = {
|
||||
'type': 'request',
|
||||
'seq': 10,
|
||||
'command': 'spam',
|
||||
'arguments': {'a': 'b'},
|
||||
}
|
||||
#msg = Request.from_data(**data)
|
||||
msg = Spam.from_data(**data)
|
||||
|
||||
self.assertEqual(msg.type, 'request')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.command, 'spam')
|
||||
self.assertEqual(msg.arguments, {'a': 'b'})
|
||||
|
||||
def test_defaults(self):
|
||||
class Spam(Request, FakeMsg):
|
||||
SEQ = 10
|
||||
COMMAND = 'spam'
|
||||
|
||||
msg = Spam()
|
||||
|
||||
self.assertEqual(msg.type, 'request')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.command, 'spam')
|
||||
self.assertIsNone(msg.arguments)
|
||||
|
||||
def test_all_args(self):
|
||||
class Spam(Request):
|
||||
class ARGUMENTS(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
args = {'a': 'b'}
|
||||
msg = Spam(arguments=args, command='spam', seq=10)
|
||||
|
||||
self.assertEqual(msg.type, 'request')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.command, 'spam')
|
||||
self.assertEqual(msg.arguments, args)
|
||||
|
||||
def test_no_arguments_not_required(self):
|
||||
class Spam(Request):
|
||||
COMMAND = 'spam'
|
||||
ARGUMENTS = True
|
||||
ARGUMENTS_REQUIRED = False
|
||||
|
||||
msg = Spam()
|
||||
|
||||
self.assertIsNone(msg.arguments)
|
||||
|
||||
def test_no_args(self):
|
||||
with self.assertRaises(TypeError):
|
||||
Request()
|
||||
|
||||
def test_coercion_arguments(self):
|
||||
class Spam(Request):
|
||||
COMMAND = 'spam'
|
||||
class ARGUMENTS(FieldsNamespace): # noqa
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
args = [('a', 'b')]
|
||||
msg = Spam(args)
|
||||
|
||||
self.assertEqual(msg.arguments, {'a': 'b'})
|
||||
|
||||
with self.assertRaises(TypeError):
|
||||
Spam(command='spam', arguments=11)
|
||||
|
||||
def test_validation(self):
|
||||
with self.assertRaises(TypeError):
|
||||
Request()
|
||||
|
||||
# command
|
||||
|
||||
class Other1(Request):
|
||||
COMMAND = 'eggs'
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
# command doesn't match
|
||||
Other1(arguments=10, command='spam')
|
||||
|
||||
# arguments
|
||||
|
||||
with self.assertRaises(TypeError):
|
||||
# unexpected arguments
|
||||
Request(arguments=10, command='spam')
|
||||
|
||||
class Other2(Request):
|
||||
COMMAND = 'spam'
|
||||
ARGUMENTS = int
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
# missing arguments (implicitly required)
|
||||
Other2(command='eggs')
|
||||
|
||||
class Other3(Request):
|
||||
COMMAND = 'eggs'
|
||||
ARGUMENTS = int
|
||||
ARGUMENTS_REQUIRED = True
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
# missing arguments (explicitly required)
|
||||
Other2(command='eggs')
|
||||
|
||||
def test_repr_minimal(self):
|
||||
msg = Request(command='spam', seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result, "Request(command='spam', seq=10)")
|
||||
|
||||
def test_repr_full(self):
|
||||
msg = Request(command='spam', seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result, "Request(command='spam', seq=10)")
|
||||
|
||||
def test_repr_subclass_minimal(self):
|
||||
class SpamRequest(Request):
|
||||
COMMAND = 'spam'
|
||||
|
||||
msg = SpamRequest(seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result, "SpamRequest(seq=10)")
|
||||
|
||||
def test_repr_subclass_full(self):
|
||||
class SpamRequest(Request):
|
||||
COMMAND = 'spam'
|
||||
class ARGUMENTS(FieldsNamespace): # noqa
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
msg = SpamRequest(arguments={'a': 'b'}, seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result,
|
||||
"SpamRequest(arguments=ARGUMENTS(a='b'), seq=10)")
|
||||
|
||||
def test_as_data_minimal(self):
|
||||
msg = Request(command='spam', seq=10)
|
||||
data = msg.as_data()
|
||||
|
||||
self.assertEqual(data, {
|
||||
'type': 'request',
|
||||
'seq': 10,
|
||||
'command': 'spam',
|
||||
})
|
||||
|
||||
def test_as_data_full(self):
|
||||
class Spam(Request):
|
||||
COMMAND = 'spam'
|
||||
class ARGUMENTS(FieldsNamespace): # noqa
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
msg = Spam(arguments={'a': 'b'}, seq=10)
|
||||
data = msg.as_data()
|
||||
|
||||
self.assertEqual(data, {
|
||||
'type': 'request',
|
||||
'seq': 10,
|
||||
'command': 'spam',
|
||||
'arguments': {'a': 'b'},
|
||||
})
|
||||
|
||||
|
||||
class ResponseTests(unittest.TestCase):
|
||||
|
||||
def test_from_data_without_body(self):
|
||||
data = {
|
||||
'type': 'response',
|
||||
'seq': 10,
|
||||
'command': 'spam',
|
||||
'request_seq': 9,
|
||||
'success': True,
|
||||
}
|
||||
msg = Response.from_data(**data)
|
||||
|
||||
self.assertEqual(msg.type, 'response')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.command, 'spam')
|
||||
self.assertEqual(msg.request_seq, 9)
|
||||
self.assertTrue(msg.success)
|
||||
self.assertIsNone(msg.body)
|
||||
self.assertIsNone(msg.message)
|
||||
|
||||
def test_from_data_with_body(self):
|
||||
class Spam(Response):
|
||||
class BODY(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
data = {
|
||||
'type': 'response',
|
||||
'seq': 10,
|
||||
'command': 'spam',
|
||||
'request_seq': 9,
|
||||
'success': True,
|
||||
'body': {'a': 'b'},
|
||||
}
|
||||
msg = Spam.from_data(**data)
|
||||
|
||||
self.assertEqual(msg.type, 'response')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.command, 'spam')
|
||||
self.assertEqual(msg.request_seq, 9)
|
||||
self.assertTrue(msg.success)
|
||||
self.assertEqual(msg.body, {'a': 'b'})
|
||||
self.assertIsNone(msg.message)
|
||||
|
||||
def test_from_data_error_without_body(self):
|
||||
data = {
|
||||
'type': 'response',
|
||||
'seq': 10,
|
||||
'command': 'spam',
|
||||
'request_seq': 9,
|
||||
'success': False,
|
||||
'message': 'oops!',
|
||||
}
|
||||
msg = Response.from_data(**data)
|
||||
|
||||
self.assertEqual(msg.type, 'response')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.command, 'spam')
|
||||
self.assertEqual(msg.request_seq, 9)
|
||||
self.assertFalse(msg.success)
|
||||
self.assertIsNone(msg.body)
|
||||
self.assertEqual(msg.message, 'oops!')
|
||||
|
||||
def test_from_data_error_with_body(self):
|
||||
class Spam(Response):
|
||||
class ERROR_BODY(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
data = {
|
||||
'type': 'response',
|
||||
'seq': 10,
|
||||
'command': 'spam',
|
||||
'request_seq': 9,
|
||||
'success': False,
|
||||
'message': 'oops!',
|
||||
'body': {'a': 'b'},
|
||||
}
|
||||
msg = Spam.from_data(**data)
|
||||
|
||||
self.assertEqual(msg.type, 'response')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.command, 'spam')
|
||||
self.assertEqual(msg.request_seq, 9)
|
||||
self.assertFalse(msg.success)
|
||||
self.assertEqual(msg.body, {'a': 'b'})
|
||||
self.assertEqual(msg.message, 'oops!')
|
||||
|
||||
def test_defaults(self):
|
||||
class Spam(Response, FakeMsg):
|
||||
SEQ = 10
|
||||
COMMAND = 'spam'
|
||||
|
||||
msg = Spam('9')
|
||||
|
||||
self.assertEqual(msg.type, 'response')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.request_seq, 9)
|
||||
self.assertEqual(msg.command, 'spam')
|
||||
self.assertTrue(msg.success)
|
||||
self.assertIsNone(msg.body)
|
||||
self.assertIsNone(msg.message)
|
||||
|
||||
def test_all_args_not_error(self):
|
||||
class Spam(Response):
|
||||
class BODY(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
msg = Spam('9', command='spam', success=True, body={'a': 'b'},
|
||||
seq=10, type='response')
|
||||
|
||||
self.assertEqual(msg.type, 'response')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.request_seq, 9)
|
||||
self.assertEqual(msg.command, 'spam')
|
||||
self.assertTrue(msg.success)
|
||||
self.assertEqual(msg.body, {'a': 'b'})
|
||||
self.assertIsNone(msg.message)
|
||||
|
||||
def test_all_args_error(self):
|
||||
class Spam(Response):
|
||||
COMMAND = 'spam'
|
||||
class ERROR_BODY(FieldsNamespace): # noqa
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
msg = Spam('9', success=False, message='oops!', body={'a': 'b'},
|
||||
seq=10, type='response')
|
||||
|
||||
self.assertEqual(msg.type, 'response')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.command, 'spam')
|
||||
self.assertEqual(msg.request_seq, 9)
|
||||
self.assertFalse(msg.success)
|
||||
self.assertEqual(msg.body, Spam.ERROR_BODY(a='b'))
|
||||
self.assertEqual(msg.message, 'oops!')
|
||||
|
||||
def test_no_body_not_required(self):
|
||||
class Spam(Response):
|
||||
COMMAND = 'spam'
|
||||
BODY = True
|
||||
BODY_REQUIRED = False
|
||||
|
||||
msg = Spam('9')
|
||||
|
||||
self.assertIsNone(msg.body)
|
||||
|
||||
def test_no_error_body_not_required(self):
|
||||
class Spam(Response):
|
||||
COMMAND = 'spam'
|
||||
ERROR_BODY = True
|
||||
ERROR_BODY_REQUIRED = False
|
||||
|
||||
msg = Spam('9', success=False, message='oops!')
|
||||
|
||||
self.assertIsNone(msg.body)
|
||||
|
||||
def test_no_args(self):
|
||||
with self.assertRaises(TypeError):
|
||||
Response()
|
||||
|
||||
def test_coercion_request_seq(self):
|
||||
msg = Response('9', command='spam')
|
||||
|
||||
self.assertEqual(msg.request_seq, 9)
|
||||
|
||||
def test_coercion_success(self):
|
||||
msg1 = Response(9, success=1, command='spam')
|
||||
msg2 = Response(9, success=None, command='spam', message='oops!')
|
||||
|
||||
self.assertIs(msg1.success, True)
|
||||
self.assertIs(msg2.success, False)
|
||||
|
||||
def test_coercion_body(self):
|
||||
class Spam(Response):
|
||||
COMMAND = 'spam'
|
||||
class BODY(FieldsNamespace): # noqa
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
body = [('a', 'b')]
|
||||
msg = Spam(9, body=body)
|
||||
|
||||
self.assertEqual(msg.body, {'a': 'b'})
|
||||
|
||||
with self.assertRaises(TypeError):
|
||||
Spam(9, command='spam', body=11)
|
||||
|
||||
def test_coercion_error_body(self):
|
||||
class Spam(Response):
|
||||
COMMAND = 'spam'
|
||||
class ERROR_BODY(FieldsNamespace): # noqa
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
body = [('a', 'b')]
|
||||
msg = Spam(9, body=body, success=False, message='oops!')
|
||||
|
||||
self.assertEqual(msg.body, {'a': 'b'})
|
||||
|
||||
with self.assertRaises(TypeError):
|
||||
Spam(9, command='spam', success=False, message='oops!', body=11)
|
||||
|
||||
def test_validation(self):
|
||||
# request_seq
|
||||
|
||||
with self.assertRaises(TypeError):
|
||||
# missing
|
||||
Response(None, command='spam')
|
||||
with self.assertRaises(TypeError):
|
||||
# missing
|
||||
Response('', command='spam')
|
||||
with self.assertRaises(TypeError):
|
||||
# couldn't convert to int
|
||||
Response(object(), command='spam')
|
||||
with self.assertRaises(ValueError):
|
||||
# not non-negative
|
||||
Response(-1, command='spam')
|
||||
|
||||
# command
|
||||
|
||||
with self.assertRaises(TypeError):
|
||||
# missing
|
||||
Response(9, command=None)
|
||||
with self.assertRaises(TypeError):
|
||||
# missing
|
||||
Response(9, command='')
|
||||
|
||||
class Other1(Response):
|
||||
COMMAND = 'eggs'
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
# does not match
|
||||
Other1(9, command='spam')
|
||||
|
||||
# body
|
||||
|
||||
class Other2(Response):
|
||||
class BODY(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
ERROR_BODY = BODY
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
# unexpected
|
||||
Response(9, command='spam', body=11)
|
||||
with self.assertRaises(TypeError):
|
||||
# missing (implicitly required)
|
||||
Other2(9, command='spam')
|
||||
with self.assertRaises(TypeError):
|
||||
# missing (explicitly required)
|
||||
Other2.BODY_REQUIRED = True
|
||||
Other2(9, command='spam')
|
||||
with self.assertRaises(ValueError):
|
||||
# unexpected (error)
|
||||
Response(9, command='spam', body=11, success=False, message=':(')
|
||||
with self.assertRaises(TypeError):
|
||||
# missing (error) (implicitly required)
|
||||
Other2(9, command='spam', success=False, message=':(')
|
||||
with self.assertRaises(TypeError):
|
||||
# missing (error) (explicitly required)
|
||||
Other2.ERROR_BODY_REQUIRED = True
|
||||
Other2(9, command='spam', success=False, message=':(')
|
||||
|
||||
# message
|
||||
|
||||
with self.assertRaises(TypeError):
|
||||
# missing
|
||||
Response(9, command='spam', success=False)
|
||||
|
||||
def test_repr_minimal(self):
|
||||
msg = Response(9, command='spam', seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result,
|
||||
"Response(command='spam', request_seq=9, success=True, seq=10)") # noqa
|
||||
|
||||
def test_repr_full(self):
|
||||
msg = Response(9, command='spam', seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result,
|
||||
"Response(command='spam', request_seq=9, success=True, seq=10)") # noqa
|
||||
|
||||
def test_repr_error_minimal(self):
|
||||
msg = Response(9, command='spam', success=False, message='oops!',
|
||||
seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result,
|
||||
"Response(command='spam', request_seq=9, success=False, message='oops!', seq=10)") # noqa
|
||||
|
||||
def test_repr_error_full(self):
|
||||
msg = Response(9, command='spam', success=False, message='oops!',
|
||||
seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result,
|
||||
"Response(command='spam', request_seq=9, success=False, message='oops!', seq=10)") # noqa
|
||||
|
||||
def test_repr_subclass_minimal(self):
|
||||
class SpamResponse(Response):
|
||||
COMMAND = 'spam'
|
||||
|
||||
msg = SpamResponse(9, seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result,
|
||||
"SpamResponse(request_seq=9, success=True, seq=10)")
|
||||
|
||||
def test_repr_subclass_full(self):
|
||||
class SpamResponse(Response):
|
||||
COMMAND = 'spam'
|
||||
class BODY(FieldsNamespace): # noqa
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
msg = SpamResponse(9, body={'a': 'b'}, seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result,
|
||||
"SpamResponse(request_seq=9, success=True, body=BODY(a='b'), seq=10)") # noqa
|
||||
|
||||
def test_repr_subclass_error_minimal(self):
|
||||
class SpamResponse(Response):
|
||||
COMMAND = 'spam'
|
||||
|
||||
msg = SpamResponse(9, success=False, message='oops!', seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result,
|
||||
"SpamResponse(request_seq=9, success=False, message='oops!', seq=10)") # noqa
|
||||
|
||||
def test_repr_subclass_error_full(self):
|
||||
class SpamResponse(Response):
|
||||
COMMAND = 'spam'
|
||||
class ERROR_BODY(FieldsNamespace): # noqa
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
msg = SpamResponse(9, success=False, message='oops!', body={'a': 'b'},
|
||||
seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result,
|
||||
"SpamResponse(request_seq=9, success=False, message='oops!', body=ERROR_BODY(a='b'), seq=10)") # noqa
|
||||
|
||||
def test_as_data_minimal(self):
|
||||
msg = Response(9, command='spam', seq=10)
|
||||
data = msg.as_data()
|
||||
|
||||
self.assertEqual(data, {
|
||||
'type': 'response',
|
||||
'seq': 10,
|
||||
'request_seq': 9,
|
||||
'command': 'spam',
|
||||
'success': True,
|
||||
})
|
||||
|
||||
def test_as_data_full(self):
|
||||
class Spam(Response):
|
||||
COMMAND = 'spam'
|
||||
class BODY(FieldsNamespace): # noqa
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
msg = Spam(9, body={'a': 'b'}, seq=10)
|
||||
data = msg.as_data()
|
||||
|
||||
self.assertEqual(data, {
|
||||
'type': 'response',
|
||||
'seq': 10,
|
||||
'request_seq': 9,
|
||||
'command': 'spam',
|
||||
'success': True,
|
||||
'body': {'a': 'b'},
|
||||
})
|
||||
|
||||
def test_as_data_error_minimal(self):
|
||||
msg = Response(9, command='spam', success=False, message='oops!',
|
||||
seq=10)
|
||||
data = msg.as_data()
|
||||
|
||||
self.assertEqual(data, {
|
||||
'type': 'response',
|
||||
'seq': 10,
|
||||
'request_seq': 9,
|
||||
'command': 'spam',
|
||||
'success': False,
|
||||
'message': 'oops!',
|
||||
})
|
||||
|
||||
def test_as_data_error_full(self):
|
||||
class Spam(Response):
|
||||
COMMAND = 'spam'
|
||||
class ERROR_BODY(FieldsNamespace): # noqa
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
msg = Spam(9, success=False, body={'a': 'b'}, message='oops!', seq=10)
|
||||
data = msg.as_data()
|
||||
|
||||
self.assertEqual(data, {
|
||||
'type': 'response',
|
||||
'seq': 10,
|
||||
'request_seq': 9,
|
||||
'command': 'spam',
|
||||
'success': False,
|
||||
'message': 'oops!',
|
||||
'body': {'a': 'b'},
|
||||
})
|
||||
|
||||
|
||||
class EventTests(unittest.TestCase):
|
||||
|
||||
def test_from_data_without_body(self):
|
||||
data = {
|
||||
'type': 'event',
|
||||
'seq': 10,
|
||||
'event': 'spam',
|
||||
}
|
||||
msg = Event.from_data(**data)
|
||||
|
||||
self.assertEqual(msg.type, 'event')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.event, 'spam')
|
||||
self.assertIsNone(msg.body)
|
||||
|
||||
def test_from_data_with_body(self):
|
||||
class Spam(Event):
|
||||
class BODY(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
data = {
|
||||
'type': 'event',
|
||||
'seq': 10,
|
||||
'event': 'spam',
|
||||
'body': {'a': 'b'},
|
||||
}
|
||||
msg = Spam.from_data(**data)
|
||||
|
||||
self.assertEqual(msg.type, 'event')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.event, 'spam')
|
||||
self.assertEqual(msg.body, {'a': 'b'})
|
||||
|
||||
def test_defaults(self): # no args
|
||||
class Spam(Event, FakeMsg):
|
||||
SEQ = 10
|
||||
EVENT = 'spam'
|
||||
|
||||
msg = Spam()
|
||||
|
||||
self.assertEqual(msg.type, 'event')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.event, 'spam')
|
||||
self.assertIsNone(msg.body)
|
||||
|
||||
def test_all_args(self):
|
||||
class Spam(Event):
|
||||
class BODY(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
msg = Spam(event='spam', body={'a': 'b'}, seq=10, type='event')
|
||||
|
||||
self.assertEqual(msg.type, 'event')
|
||||
self.assertEqual(msg.seq, 10)
|
||||
self.assertEqual(msg.event, 'spam')
|
||||
self.assertEqual(msg.body, {'a': 'b'})
|
||||
|
||||
def test_no_body_not_required(self):
|
||||
class Spam(Event):
|
||||
EVENT = 'spam'
|
||||
BODY = True
|
||||
BODY_REQUIRED = False
|
||||
|
||||
msg = Spam()
|
||||
|
||||
self.assertIsNone(msg.body)
|
||||
|
||||
def test_no_args(self):
|
||||
with self.assertRaises(TypeError):
|
||||
Event()
|
||||
|
||||
def test_coercion_body(self):
|
||||
class Spam(Event):
|
||||
EVENT = 'spam'
|
||||
class BODY(FieldsNamespace): # noqa
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
body = [('a', 'b')]
|
||||
msg = Spam(body=body)
|
||||
|
||||
self.assertEqual(msg.body, {'a': 'b'})
|
||||
|
||||
with self.assertRaises(TypeError):
|
||||
Spam(event='spam', body=11)
|
||||
|
||||
def test_validation(self):
|
||||
# event
|
||||
|
||||
with self.assertRaises(TypeError):
|
||||
# missing
|
||||
Event(event=None)
|
||||
with self.assertRaises(TypeError):
|
||||
# missing
|
||||
Event(event='')
|
||||
|
||||
class Other1(Event):
|
||||
EVENT = 'eggs'
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
# does not match
|
||||
Other1(event='spam')
|
||||
|
||||
# body
|
||||
|
||||
class Other2(Event):
|
||||
class BODY(FieldsNamespace):
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
with self.assertRaises(ValueError):
|
||||
# unexpected
|
||||
Event(event='spam', body=11)
|
||||
with self.assertRaises(TypeError):
|
||||
# missing (implicitly required)
|
||||
Other2(9, command='spam')
|
||||
with self.assertRaises(TypeError):
|
||||
# missing (explicitly required)
|
||||
Other2.BODY_REQUIRED = True
|
||||
Other2(9, command='spam')
|
||||
|
||||
def test_repr_minimal(self):
|
||||
msg = Event(event='spam', seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result, "Event(event='spam', seq=10)")
|
||||
|
||||
def test_repr_full(self):
|
||||
msg = Event(event='spam', seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result, "Event(event='spam', seq=10)")
|
||||
|
||||
def test_repr_subclass_minimal(self):
|
||||
class SpamEvent(Event):
|
||||
EVENT = 'spam'
|
||||
|
||||
msg = SpamEvent(seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result, 'SpamEvent(seq=10)')
|
||||
|
||||
def test_repr_subclass_full(self):
|
||||
class SpamEvent(Event):
|
||||
EVENT = 'spam'
|
||||
class BODY(FieldsNamespace): # noqa
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
msg = SpamEvent(body={'a': 'b'}, seq=10)
|
||||
result = repr(msg)
|
||||
|
||||
self.assertEqual(result, "SpamEvent(body=BODY(a='b'), seq=10)")
|
||||
|
||||
def test_as_data_minimal(self):
|
||||
msg = Event(event='spam', seq=10)
|
||||
data = msg.as_data()
|
||||
|
||||
self.assertEqual(data, {
|
||||
'type': 'event',
|
||||
'seq': 10,
|
||||
'event': 'spam',
|
||||
})
|
||||
|
||||
def test_as_data_full(self):
|
||||
class Spam(Event):
|
||||
EVENT = 'spam'
|
||||
class BODY(FieldsNamespace): # noqa
|
||||
FIELDS = [
|
||||
Field('a'),
|
||||
]
|
||||
|
||||
msg = Spam(body={'a': 'b'}, seq=10)
|
||||
data = msg.as_data()
|
||||
|
||||
self.assertEqual(data, {
|
||||
'type': 'event',
|
||||
'seq': 10,
|
||||
'event': 'spam',
|
||||
'body': {'a': 'b'},
|
||||
})
|
||||
@@ -1,126 +0,0 @@
import unittest
|
||||
|
||||
from debugger_protocol.messages import requests
|
||||
|
||||
|
||||
class RequestsTests(unittest.TestCase):
|
||||
|
||||
def test_implicit___all__(self):
|
||||
names = set(name
|
||||
for name in vars(requests)
|
||||
if not name.startswith('__'))
|
||||
|
||||
self.assertEqual(names, {
|
||||
'ErrorResponse',
|
||||
'RunInTerminalRequest',
|
||||
'RunInTerminalResponse',
|
||||
'InitializeRequest',
|
||||
'InitializeResponse',
|
||||
'ConfigurationDoneRequest',
|
||||
'ConfigurationDoneResponse',
|
||||
'LaunchRequest',
|
||||
'LaunchResponse',
|
||||
'AttachRequest',
|
||||
'AttachResponse',
|
||||
'RestartRequest',
|
||||
'RestartResponse',
|
||||
'DisconnectRequest',
|
||||
'DisconnectResponse',
|
||||
'SetBreakpointsRequest',
|
||||
'SetBreakpointsResponse',
|
||||
'SetFunctionBreakpointsRequest',
|
||||
'SetFunctionBreakpointsResponse',
|
||||
'SetExceptionBreakpointsRequest',
|
||||
'SetExceptionBreakpointsResponse',
|
||||
'ContinueRequest',
|
||||
'ContinueResponse',
|
||||
'NextRequest',
|
||||
'NextResponse',
|
||||
'StepInRequest',
|
||||
'StepInResponse',
|
||||
'StepOutRequest',
|
||||
'StepOutResponse',
|
||||
'StepBackRequest',
|
||||
'StepBackResponse',
|
||||
'ReverseContinueRequest',
|
||||
'ReverseContinueResponse',
|
||||
'RestartFrameRequest',
|
||||
'RestartFrameResponse',
|
||||
'GotoRequest',
|
||||
'GotoResponse',
|
||||
'PauseRequest',
|
||||
'PauseResponse',
|
||||
'StackTraceRequest',
|
||||
'StackTraceResponse',
|
||||
'ScopesRequest',
|
||||
'ScopesResponse',
|
||||
'VariablesRequest',
|
||||
'VariablesResponse',
|
||||
'SetVariableRequest',
|
||||
'SetVariableResponse',
|
||||
'SourceRequest',
|
||||
'SourceResponse',
|
||||
'ThreadsRequest',
|
||||
'ThreadsResponse',
|
||||
'ModulesRequest',
|
||||
'ModulesResponse',
|
||||
'LoadedSourcesRequest',
|
||||
'LoadedSourcesResponse',
|
||||
'EvaluateRequest',
|
||||
'EvaluateResponse',
|
||||
'StepInTargetsRequest',
|
||||
'StepInTargetsResponse',
|
||||
'GotoTargetsRequest',
|
||||
'GotoTargetsResponse',
|
||||
'CompletionsRequest',
|
||||
'CompletionsResponse',
|
||||
'ExceptionInfoRequest',
|
||||
'ExceptionInfoResponse',
|
||||
})
|
||||
|
||||
|
||||
# TODO: Add tests for every request/response type.
|
||||
|
||||
#class TestBase:
|
||||
#
|
||||
# NAME = None
|
||||
# EVENT = None
|
||||
# BODY = None
|
||||
# BODY_MIN = None
|
||||
#
|
||||
# def test_event_full(self):
|
||||
# event = self.EVENT(self.BODY, seq=9)
|
||||
#
|
||||
# self.assertEqual(event.event, self.NAME)
|
||||
# self.assertEqual(event.body, self.BODY)
|
||||
#
|
||||
# def test_event_minimal(self):
|
||||
# event = self.EVENT(self.BODY_MIN, seq=9)
|
||||
#
|
||||
# self.assertEqual(event.body, self.BODY_MIN)
|
||||
#
|
||||
# def test_event_empty_body(self):
|
||||
# if self.BODY_MIN:
|
||||
# with self.assertRaises(TypeError):
|
||||
# self.EVENT({}, seq=9)
|
||||
#
|
||||
# def test_from_data(self):
|
||||
# event = self.EVENT.from_data(
|
||||
# type='event',
|
||||
# seq=9,
|
||||
# event=self.NAME,
|
||||
# body=self.BODY,
|
||||
# )
|
||||
#
|
||||
# self.assertEqual(event.body, self.BODY)
|
||||
#
|
||||
# def test_as_data(self):
|
||||
# event = self.EVENT(self.BODY, seq=9)
|
||||
# data = event.as_data()
|
||||
#
|
||||
# self.assertEqual(data, {
|
||||
# 'type': 'event',
|
||||
# 'seq': 9,
|
||||
# 'event': self.NAME,
|
||||
# 'body': self.BODY,
|
||||
# })
|
||||
|
|
@@ -1,20 +0,0 @@
import urllib.error


class StubOpener:

    def __init__(self, *files):
        self.files = list(files)
        self.calls = []

    def open(self, *args):
        self.calls.append(args)

        file = self.files.pop(0)
        if file is None:
            if args[0].startswith('http'):
                raise urllib.error.HTTPError(args[0], 404, 'Not Found',
                                             None, None)
            else:
                raise FileNotFoundError
        return file
@@ -1,126 +0,0 @@
import contextlib
|
||||
import io
|
||||
import sys
|
||||
from textwrap import dedent
|
||||
import unittest
|
||||
|
||||
from .helpers import StubOpener
|
||||
from debugger_protocol.schema.vendored import FILENAME as VENDORED, METADATA
|
||||
from debugger_protocol.schema.__main__ import (
|
||||
COMMANDS, handle_download, handle_check)
|
||||
|
||||
|
||||
class Outfile:
|
||||
|
||||
def __init__(self, initial):
|
||||
self.written = initial
|
||||
|
||||
def write(self, data):
|
||||
self.written += data
|
||||
return len(data)
|
||||
|
||||
def __enter__(self):
|
||||
return self
|
||||
|
||||
def __exit__(self, *args):
|
||||
pass
|
||||
|
||||
|
||||
class CommandRegistryTests(unittest.TestCase):
|
||||
|
||||
def test_commands(self):
|
||||
self.assertEqual(set(COMMANDS), {
|
||||
'download',
|
||||
'check',
|
||||
})
|
||||
|
||||
|
||||
class internal_redirect_stderr:
|
||||
"""Context manager for temporarily redirecting stderr to another file
|
||||
"""
|
||||
|
||||
def __init__(self, new_target):
|
||||
self._new_target = new_target
|
||||
self._old_targets = []
|
||||
|
||||
def __enter__(self):
|
||||
self._old_targets.append(sys.stderr)
|
||||
sys.stderr = self._new_target
|
||||
return self._new_target
|
||||
|
||||
def __exit__(self, exctype, excinst, exctb):
|
||||
sys.stderr = self._old_targets.pop()
|
||||
|
||||
|
||||
class HandleDownloadTests(unittest.TestCase):
|
||||
|
||||
def test_default_args(self):
|
||||
schemafile = io.BytesIO(b'<a schema>')
|
||||
outfile = Outfile(b'')
|
||||
buf = io.BytesIO(
|
||||
b'[{"sha": "fc2395ca3564fb2afded8d90ddbe38dad1bf86f1"}]')
|
||||
metafile = Outfile('')
|
||||
opener = StubOpener(schemafile, outfile, buf, metafile)
|
||||
|
||||
stdout = io.StringIO()
|
||||
try:
|
||||
redirect_stderr = contextlib.redirect_stderr
|
||||
except AttributeError:
|
||||
redirect_stderr = internal_redirect_stderr
|
||||
|
||||
with contextlib.redirect_stdout(stdout):
|
||||
with redirect_stderr(stdout):
|
||||
handle_download(
|
||||
_open=opener.open, _open_url=opener.open)
|
||||
metadata = '\n'.join(line
|
||||
for line in metafile.written.splitlines()
|
||||
if not line.startswith('downloaded: '))
|
||||
|
||||
self.assertEqual(outfile.written, b'<a schema>')
|
||||
self.assertEqual(metadata, dedent("""
|
||||
upstream: https://github.com/Microsoft/vscode-debugadapter-node/raw/master/debugProtocol.json
|
||||
revision: fc2395ca3564fb2afded8d90ddbe38dad1bf86f1
|
||||
checksum: e778c3751f9d0bceaf8d5aa81e2c659f
|
||||
""").strip()) # noqa
|
||||
self.assertEqual(stdout.getvalue(), dedent("""\
|
||||
downloading the schema file from https://github.com/Microsoft/vscode-debugadapter-node/raw/master/debugProtocol.json...
|
||||
...schema file written to {}.
|
||||
saving the schema metadata...
|
||||
...metadata written to {}.
|
||||
""").format(VENDORED, METADATA)) # noqa
|
||||
|
||||
|
||||
class HandleCheckTests(unittest.TestCase):
|
||||
|
||||
def test_default_args(self):
|
||||
metadata = dedent("""
|
||||
upstream: https://github.com/x/y/raw/master/z
|
||||
revision: fc2395ca3564fb2afded8d90ddbe38dad1bf86f1
|
||||
checksum: e778c3751f9d0bceaf8d5aa81e2c659f
|
||||
downloaded: 2018-01-09 13:10:59 (UTC)
|
||||
""")
|
||||
opener = StubOpener(
|
||||
io.StringIO(metadata),
|
||||
io.BytesIO(b'<a schema>'), # local
|
||||
io.StringIO(metadata),
|
||||
io.BytesIO(b'<a schema>'), # upstream
|
||||
io.BytesIO(
|
||||
b'[{"sha": "fc2395ca3564fb2afded8d90ddbe38dad1bf86f1"}]'),
|
||||
)
|
||||
|
||||
stdout = io.StringIO()
|
||||
try:
|
||||
redirect_stderr = contextlib.redirect_stderr
|
||||
except AttributeError:
|
||||
redirect_stderr = internal_redirect_stderr
|
||||
|
||||
with contextlib.redirect_stdout(stdout):
|
||||
with redirect_stderr(stdout):
|
||||
handle_check(
|
||||
_open=opener.open, _open_url=opener.open)
|
||||
|
||||
self.assertEqual(stdout.getvalue(), dedent("""\
|
||||
checking local schema file...
|
||||
comparing with upstream schema file...
|
||||
schema file okay
|
||||
"""))
|
||||
|
|
@@ -1,22 +0,0 @@
import io
import unittest

from .helpers import StubOpener
from debugger_protocol.schema.file import SchemaFileError, read_schema


class ReadSchemaTests(unittest.TestCase):

    def test_success(self):
        schemafile = io.BytesIO(b'<a schema>')
        opener = StubOpener(schemafile)

        data = read_schema('schema.json', _open=opener.open)

        self.assertEqual(data, b'<a schema>')

    def test_file_missing(self):
        opener = StubOpener(None)

        with self.assertRaises(SchemaFileError):
            read_schema('schema.json', _open=opener.open)
@@ -1,210 +0,0 @@
from datetime import datetime
|
||||
import io
|
||||
import os.path
|
||||
from textwrap import dedent
|
||||
import unittest
|
||||
|
||||
from .helpers import StubOpener
|
||||
from debugger_protocol.schema.upstream import URL as UPSTREAM
|
||||
from debugger_protocol.schema.metadata import (
|
||||
open_metadata, read_metadata,
|
||||
MetadataError, Metadata)
|
||||
|
||||
|
||||
class Stringlike:
|
||||
|
||||
def __init__(self, value):
|
||||
self.value = value
|
||||
|
||||
def __str__(self):
|
||||
return self.value
|
||||
|
||||
|
||||
class Hash(Stringlike):
|
||||
pass
|
||||
|
||||
|
||||
class OpenMetadataTests(unittest.TestCase):
|
||||
|
||||
def test_success(self):
|
||||
expected = object()
|
||||
opener = StubOpener(expected)
|
||||
schemadir = os.path.join('x', 'y', 'z', '')
|
||||
metafile, filename = open_metadata(schemadir + 'schema.json',
|
||||
_open=opener.open)
|
||||
|
||||
self.assertIs(metafile, expected)
|
||||
self.assertEqual(filename, schemadir + 'UPSTREAM')
|
||||
|
||||
def test_file_missing(self):
|
||||
metafile = None
|
||||
opener = StubOpener(metafile)
|
||||
|
||||
with self.assertRaises(MetadataError):
|
||||
open_metadata('schema.json', _open=opener.open)
|
||||
|
||||
|
||||
class ReadMetadataTests(unittest.TestCase):
|
||||
|
||||
def test_success(self):
|
||||
metafile = io.StringIO(dedent("""
|
||||
upstream: https://x.y.z/schema.json
|
||||
revision: abcdef0123456789
|
||||
checksum: deadbeefdeadbeefdeadbeefdeadbeef
|
||||
downloaded: 2018-01-09 13:10:59 (UTC)
|
||||
"""))
|
||||
opener = StubOpener(metafile)
|
||||
schemadir = os.path.join('x', 'y', 'z', '')
|
||||
meta, filename = read_metadata(schemadir + 'schema.json',
|
||||
_open=opener.open)
|
||||
|
||||
self.assertEqual(meta,
|
||||
Metadata('https://x.y.z/schema.json',
|
||||
'abcdef0123456789',
|
||||
'deadbeefdeadbeefdeadbeefdeadbeef',
|
||||
datetime(2018, 1, 9, 13, 10, 59),
|
||||
))
|
||||
self.assertEqual(filename, schemadir + 'UPSTREAM')
|
||||
|
||||
def test_file_missing(self):
|
||||
metafile = None
|
||||
opener = StubOpener(metafile)
|
||||
|
||||
with self.assertRaises(MetadataError):
|
||||
read_metadata('schema.json', _open=opener.open)
|
||||
|
||||
def test_file_invalid(self):
|
||||
metafile = io.StringIO('<bogus>')
|
||||
opener = StubOpener(metafile)
|
||||
|
||||
with self.assertRaises(MetadataError):
|
||||
read_metadata('schema.json', _open=opener.open)
|
||||
|
||||
|
||||
class MetadataTests(unittest.TestCase):
|
||||
|
||||
def test_parse_minimal(self):
|
||||
expected = Metadata('https://x.y.z/schema.json',
|
||||
'abcdef0123456789',
|
||||
'deadbeefdeadbeefdeadbeefdeadbeef',
|
||||
datetime(2018, 1, 9, 13, 10, 59),
|
||||
)
|
||||
meta = Metadata.parse(dedent("""
|
||||
upstream: https://x.y.z/schema.json
|
||||
revision: abcdef0123456789
|
||||
checksum: deadbeefdeadbeefdeadbeefdeadbeef
|
||||
downloaded: 2018-01-09 13:10:59 (UTC)
|
||||
"""))
|
||||
|
||||
self.assertEqual(meta, expected)
|
||||
|
||||
def test_parse_with_whitespace_and_comments(self):
|
||||
expected = Metadata('https://x.y.z/schema.json',
|
||||
'abcdef0123456789',
|
||||
'deadbeefdeadbeefdeadbeefdeadbeef',
|
||||
datetime(2018, 1, 9, 13, 10, 59),
|
||||
)
|
||||
meta = Metadata.parse(dedent("""
|
||||
|
||||
# generated by x.y.z
|
||||
upstream: https://x.y.z/schema.json
|
||||
|
||||
revision: abcdef0123456789
|
||||
checksum: deadbeefdeadbeefdeadbeefdeadbeef
|
||||
downloaded: 2018-01-09 13:10:59 (UTC)
|
||||
|
||||
# done!
|
||||
|
||||
""")) # noqa
|
||||
|
||||
self.assertEqual(meta, expected)
|
||||
|
||||
def test_parse_roundtrip_from_object(self):
|
||||
orig = Metadata('https://x.y.z/schema.json',
|
||||
'abcdef0123456789',
|
||||
'deadbeefdeadbeefdeadbeefdeadbeef',
|
||||
datetime(2018, 1, 9, 13, 10, 59),
|
||||
)
|
||||
meta = Metadata.parse(
|
||||
orig.format())
|
||||
|
||||
self.assertEqual(meta, orig)
|
||||
|
||||
def test_parse_roundtrip_from_string(self):
|
||||
orig = dedent("""\
|
||||
upstream: https://x.y.z/schema.json
|
||||
revision: abcdef0123456789
|
||||
checksum: deadbeefdeadbeefdeadbeefdeadbeef
|
||||
downloaded: 2018-01-09 13:10:59 (UTC)
|
||||
""")
|
||||
data = (Metadata.parse(orig)
|
||||
).format()
|
||||
|
||||
self.assertEqual(data, orig)
|
||||
|
||||
def test_coercion_noop(self):
|
||||
meta = Metadata('https://x.y.z/schema.json',
|
||||
'abcdef0123456789',
|
||||
'deadbeefdeadbeefdeadbeefdeadbeef',
|
||||
datetime(2018, 1, 9, 13, 10, 59),
|
||||
)
|
||||
|
||||
self.assertEqual(meta, (
|
||||
'https://x.y.z/schema.json',
|
||||
'abcdef0123456789',
|
||||
'deadbeefdeadbeefdeadbeefdeadbeef',
|
||||
datetime(2018, 1, 9, 13, 10, 59),
|
||||
))
|
||||
|
||||
def test_coercion_change_all(self):
|
||||
meta = Metadata(Stringlike('https://x.y.z/schema.json'),
|
||||
Hash('abcdef0123456789'),
|
||||
Hash('deadbeefdeadbeefdeadbeefdeadbeef'),
|
||||
'2018-01-09 13:10:59 (UTC)',
|
||||
)
|
||||
|
||||
self.assertEqual(meta, (
|
||||
'https://x.y.z/schema.json',
|
||||
'abcdef0123456789',
|
||||
'deadbeefdeadbeefdeadbeefdeadbeef',
|
||||
datetime(2018, 1, 9, 13, 10, 59),
|
||||
))
|
||||
|
||||
def test_validation_fail(self):
|
||||
baseargs = [
|
||||
'https://x.y.z/schema.json',
|
||||
'abcdef0123456789',
|
||||
'deadbeefdeadbeefdeadbeefdeadbeef',
|
||||
datetime(2018, 1, 9, 13, 10, 59),
|
||||
]
|
||||
for i in range(len(baseargs)):
|
||||
with self.subTest(baseargs[i]):
|
||||
args = list(baseargs)
|
||||
args[i] = ''
|
||||
with self.assertRaises(ValueError):
|
||||
Metadata(*args)
|
||||
|
||||
def test_url(self):
|
||||
meta = Metadata(UPSTREAM,
|
||||
'abcdef0123456789',
|
||||
'deadbeefdeadbeefdeadbeefdeadbeef',
|
||||
datetime(2018, 1, 9, 13, 10, 59),
|
||||
)
|
||||
url = meta.url
|
||||
|
||||
self.assertEqual(url, 'https://github.com/Microsoft/vscode-debugadapter-node/raw/abcdef0123456789/debugProtocol.json') # noqa
|
||||
|
||||
def test_format(self):
|
||||
meta = Metadata('https://x.y.z/schema.json',
|
||||
'abcdef0123456789',
|
||||
'deadbeefdeadbeefdeadbeefdeadbeef',
|
||||
datetime(2018, 1, 9, 13, 10, 59),
|
||||
)
|
||||
formatted = meta.format()
|
||||
|
||||
self.assertEqual(formatted, dedent("""\
|
||||
upstream: https://x.y.z/schema.json
|
||||
revision: abcdef0123456789
|
||||
checksum: deadbeefdeadbeefdeadbeefdeadbeef
|
||||
downloaded: 2018-01-09 13:10:59 (UTC)
|
||||
"""))
|
||||
|
|
@@ -1,60 +0,0 @@
from datetime import datetime
import io
import unittest

from .helpers import StubOpener
from debugger_protocol.schema.file import SchemaFileError
from debugger_protocol.schema.metadata import Metadata
from debugger_protocol.schema.upstream import (
    download, read)


class DownloadTests(unittest.TestCase):

    def test_success(self):
        now = datetime.utcnow()
        infile = io.BytesIO(b'<a schema>')
        outfile = io.BytesIO()
        buf = io.BytesIO(
            b'[{"sha": "fc2395ca3564fb2afded8d90ddbe38dad1bf86f1"}]')
        meta = download('https://github.com/x/y/raw/master/z',
                        infile,
                        outfile,
                        _now=(lambda: now),
                        _open_url=(lambda _: buf),
                        )
        rcvd = outfile.getvalue()

        self.assertEqual(meta, Metadata(
            'https://github.com/x/y/raw/master/z',
            'fc2395ca3564fb2afded8d90ddbe38dad1bf86f1',
            'e778c3751f9d0bceaf8d5aa81e2c659f',
            now,
        ))
        self.assertEqual(rcvd, b'<a schema>')


class ReadSchemaTests(unittest.TestCase):

    def test_success(self):
        schemafile = io.BytesIO(b'<a schema>')
        buf = io.BytesIO(
            b'[{"sha": "fc2395ca3564fb2afded8d90ddbe38dad1bf86f1"}]')
        opener = StubOpener(schemafile, buf)
        data, meta = read('https://github.com/x/y/raw/master/z',
                          _open_url=opener.open)

        self.assertEqual(data, b'<a schema>')
        self.assertEqual(meta, Metadata(
            'https://github.com/x/y/raw/master/z',
            'fc2395ca3564fb2afded8d90ddbe38dad1bf86f1',
            'e778c3751f9d0bceaf8d5aa81e2c659f',
            meta.downloaded,
        ))

    def test_resource_missing(self):
        schemafile = None
        opener = StubOpener(schemafile)

        with self.assertRaises(SchemaFileError):
            read('schema.json', _open_url=opener.open)
@@ -1,35 +0,0 @@
import io
import unittest

from debugger_protocol.schema._util import get_revision, get_checksum


class GetRevisionTests(unittest.TestCase):

    def test_github(self):
        buf = io.BytesIO(
            b'[{"sha": "fc2395ca3564fb2afded8d90ddbe38dad1bf86f1"}]')
        revision = get_revision('https://github.com/x/y/raw/master/z',
                                _open_url=lambda _: buf)

        self.assertEqual(revision, 'fc2395ca3564fb2afded8d90ddbe38dad1bf86f1')

    def test_unrecognized_url(self):
        revision = get_revision('https://localhost/schema.json',
                                _open_url=lambda _: io.BytesIO())

        self.assertEqual(revision, '<unknown>')


class GetChecksumTests(unittest.TestCase):

    def test_checksums(self):
        checksums = {
            b'': 'd41d8cd98f00b204e9800998ecf8427e',
            b'spam': 'e09f6a7593f8ae3994ea57e1117f67ec',
        }
        for data, expected in checksums.items():
            with self.subTest(data):
                checksum = get_checksum(data)

                self.assertEqual(checksum, expected)
@@ -1,154 +0,0 @@
import io
|
||||
from textwrap import dedent
|
||||
import unittest
|
||||
|
||||
from .helpers import StubOpener
|
||||
from debugger_protocol.schema.file import SchemaFileError
|
||||
from debugger_protocol.schema.metadata import MetadataError
|
||||
from debugger_protocol.schema.vendored import (
|
||||
SchemaFileMismatchError, check_local, check_upstream)
|
||||
|
||||
|
||||
class CheckLocalTests(unittest.TestCase):

    def test_match(self):
        metafile = io.StringIO(dedent("""
            upstream: https://x.y.z/schema.json
            revision: abcdef0123456789
            checksum: e778c3751f9d0bceaf8d5aa81e2c659f
            downloaded: 2018-01-09 13:10:59 (UTC)
            """))
        schemafile = io.BytesIO(b'<a schema>')
        opener = StubOpener(metafile, schemafile)

        # This does not fail.
        check_local('schema.json', _open=opener.open)

    def test_mismatch(self):
        metafile = io.StringIO(dedent("""
            upstream: https://x.y.z/schema.json
            revision: abcdef0123456789
            checksum: abc2
            downloaded: 2018-01-09 13:10:59 (UTC)
            """))
        schemafile = io.BytesIO(b'<a schema>')
        opener = StubOpener(metafile, schemafile)

        with self.assertRaises(SchemaFileMismatchError) as cm:
            check_local('schema.json', _open=opener.open)
        self.assertEqual(str(cm.exception),
                         ("schema file 'schema.json' does not match "
                          'metadata file (checksum mismatch: '
                          "'e778c3751f9d0bceaf8d5aa81e2c659f' != 'abc2')"))

    def test_metafile_missing(self):
        metafile = None
        schemafile = io.BytesIO(b'<a schema>')
        opener = StubOpener(metafile, schemafile)

        with self.assertRaises(MetadataError):
            check_local('schema.json', _open=opener.open)

    def test_metafile_invalid(self):
        metafile = io.StringIO('<bogus>')
        metafile.name = '/x/y/z/UPSTREAM'
        schemafile = io.BytesIO(b'<a schema>')
        opener = StubOpener(metafile, schemafile)

        with self.assertRaises(MetadataError):
            check_local('schema.json', _open=opener.open)

    def test_schemafile_missing(self):
        metafile = io.StringIO(dedent("""
            upstream: https://x.y.z/schema.json
            revision: abcdef0123456789
            checksum: e778c3751f9d0bceaf8d5aa81e2c659f
            downloaded: 2018-01-09 13:10:59 (UTC)
            """))
        schemafile = None
        opener = StubOpener(metafile, schemafile)

        with self.assertRaises(SchemaFileError):
            check_local('schema.json', _open=opener.open)


class CheckUpstream(unittest.TestCase):

    def test_match(self):
        metafile = io.StringIO(dedent("""
            upstream: https://github.com/x/y/raw/master/z
            revision: fc2395ca3564fb2afded8d90ddbe38dad1bf86f1
            checksum: e778c3751f9d0bceaf8d5aa81e2c659f
            downloaded: 2018-01-09 13:10:59 (UTC)
            """))
        schemafile = io.BytesIO(b'<a schema>')
        buf = io.BytesIO(
            b'[{"sha": "fc2395ca3564fb2afded8d90ddbe38dad1bf86f1"}]')
        opener = StubOpener(metafile, schemafile, buf)

        # This does not fail.
        check_upstream('schema.json',
                       _open=opener.open, _open_url=opener.open)

    def test_revision_mismatch(self):
        metafile = io.StringIO(dedent("""
            upstream: https://github.com/x/y/raw/master/z
            revision: abc2
            checksum: e778c3751f9d0bceaf8d5aa81e2c659f
            downloaded: 2018-01-09 13:10:59 (UTC)
            """))
        schemafile = io.BytesIO(b'<a schema>')
        buf = io.BytesIO(
            b'[{"sha": "fc2395ca3564fb2afded8d90ddbe38dad1bf86f1"}]')
        opener = StubOpener(metafile, schemafile, buf)

        with self.assertRaises(SchemaFileMismatchError) as cm:
            check_upstream('schema.json',
                           _open=opener.open, _open_url=opener.open)
        self.assertEqual(str(cm.exception),
                         ("local schema file 'schema.json' does not match "
                          "upstream 'https://github.com/x/y/raw/master/z' "
                          "(revision mismatch: 'abc2' != 'fc2395ca3564fb2afded8d90ddbe38dad1bf86f1')")) # noqa

    def test_checksum_mismatch(self):
        metafile = io.StringIO(dedent("""
            upstream: https://github.com/x/y/raw/master/z
            revision: fc2395ca3564fb2afded8d90ddbe38dad1bf86f1
            checksum: abc2
            downloaded: 2018-01-09 13:10:59 (UTC)
            """))
        schemafile = io.BytesIO(b'<a schema>')
        buf = io.BytesIO(
            b'[{"sha": "fc2395ca3564fb2afded8d90ddbe38dad1bf86f1"}]')
        opener = StubOpener(metafile, schemafile, buf)

        with self.assertRaises(SchemaFileMismatchError) as cm:
            check_upstream('schema.json',
                           _open=opener.open, _open_url=opener.open)
        self.assertEqual(str(cm.exception),
                         ("local schema file 'schema.json' does not match "
                          "upstream 'https://github.com/x/y/raw/master/z' "
                          "(checksum mismatch: 'abc2' != 'e778c3751f9d0bceaf8d5aa81e2c659f')")) # noqa

    def test_metafile_missing(self):
        metafile = None
        opener = StubOpener(metafile)

        with self.assertRaises(MetadataError):
            check_upstream('schema.json',
                           _open=opener.open, _open_url=opener.open)

    def test_url_resource_missing(self):
        metafile = io.StringIO(dedent("""
            upstream: https://github.com/x/y/raw/master/z
            revision: fc2395ca3564fb2afded8d90ddbe38dad1bf86f1
            checksum: abc2
            downloaded: 2018-01-09 13:10:59 (UTC)
            """))
        #schemafile = io.BytesIO(b'<a schema>')
        schemafile = None
        opener = StubOpener(metafile, schemafile)

        with self.assertRaises(SchemaFileError):
            check_upstream('schema.json',
                           _open=opener.open, _open_url=opener.open)

@@ -3,7 +3,7 @@
# for license information.

from __future__ import print_function, with_statement, absolute_import
from pytests.helpers.session import DebugSession
from tests.helpers.session import DebugSession
import pytest

@pytest.mark.parametrize('run_as', ['file', 'module', 'code'])

@@ -6,9 +6,9 @@ from __future__ import print_function, with_statement, absolute_import

import os
import pytest
from pytests.helpers.session import DebugSession
from pytests.helpers.pathutils import get_test_root
from pytests.helpers.timeline import Event
from tests.helpers.session import DebugSession
from tests.helpers.pathutils import get_test_root
from tests.helpers.timeline import Event


@pytest.mark.parametrize('wait_for_attach', ['waitOn', 'waitOff'])

@@ -6,8 +6,8 @@ from __future__ import print_function, with_statement, absolute_import

import pytest
import sys
from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event


@pytest.mark.parametrize('run_as', ['file', 'module', 'code'])

@@ -10,10 +10,10 @@ import pytest
import sys
import re

from pytests.helpers.pathutils import get_test_root
from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event
from pytests.helpers.pattern import ANY, Path
from tests.helpers.pathutils import get_test_root
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event
from tests.helpers.pattern import ANY, Path


BP_TEST_ROOT = get_test_root('bp')

@@ -5,9 +5,9 @@
from __future__ import print_function, with_statement, absolute_import

import pytest
from pytests.helpers.pattern import ANY
from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event
from tests.helpers.pattern import ANY
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event


expected_at_line = {

@@ -6,9 +6,9 @@ from __future__ import print_function, with_statement, absolute_import

import os.path
import pytest
from pytests.helpers.pattern import ANY
from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event
from tests.helpers.pattern import ANY
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event


@pytest.mark.parametrize('start_method', ['attach_socket_cmdline', 'attach_socket_import'])

@@ -8,11 +8,11 @@ import os.path
import pytest
import sys

from pytests.helpers.pattern import ANY, Path
from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event
from pytests.helpers.pathutils import get_test_root
from pytests.helpers.webhelper import get_url_from_str, get_web_content, wait_for_connection
from tests.helpers.pattern import ANY, Path
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event
from tests.helpers.pathutils import get_test_root
from tests.helpers.webhelper import get_url_from_str, get_web_content, wait_for_connection

DJANGO1_ROOT = get_test_root('django1')
DJANGO1_MANAGE = os.path.join(DJANGO1_ROOT, 'app.py')

@@ -6,9 +6,9 @@ from __future__ import print_function, with_statement, absolute_import

import sys

from pytests.helpers.pattern import ANY
from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event
from tests.helpers.pattern import ANY
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event


def test_variables_and_evaluate(pyfile, run_as, start_method):

@@ -6,9 +6,9 @@ from __future__ import print_function, with_statement, absolute_import

import pytest

from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event
from pytests.helpers.pattern import ANY, Path
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event
from tests.helpers.pattern import ANY, Path


@pytest.mark.parametrize('raised', ['raisedOn', 'raisedOff'])

@@ -9,11 +9,11 @@ import platform
import pytest
import sys

from pytests.helpers.pattern import ANY, Path
from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event
from pytests.helpers.webhelper import get_web_content, wait_for_connection
from pytests.helpers.pathutils import get_test_root
from tests.helpers.pattern import ANY, Path
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event
from tests.helpers.webhelper import get_web_content, wait_for_connection
from tests.helpers.pathutils import get_test_root


FLASK1_ROOT = get_test_root('flask1')

@@ -8,9 +8,9 @@ import platform
import pytest
import sys

from pytests.helpers.pattern import ANY
from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event, Request, Response
from tests.helpers.pattern import ANY
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event, Request, Response


@pytest.mark.timeout(30)

@@ -3,9 +3,9 @@
# for license information.

from __future__ import print_function, with_statement, absolute_import
from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event
from pytests.helpers.pattern import ANY
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event
from tests.helpers.pattern import ANY


def test_with_no_output(pyfile, run_as, start_method):

@@ -6,9 +6,9 @@ from __future__ import print_function, with_statement, absolute_import

import os
from shutil import copyfile
from pytests.helpers.pattern import Path
from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event
from tests.helpers.pattern import Path
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event


def test_with_path_mappings(pyfile, tmpdir, run_as, start_method):

@@ -9,10 +9,10 @@ import pytest

import ptvsd

from pytests.helpers import print
from pytests.helpers.pattern import ANY
from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event
from tests.helpers import print
from tests.helpers.pattern import ANY
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event


@pytest.mark.parametrize('run_as', ['file', 'module', 'code'])

@@ -8,9 +8,9 @@ import platform
import pytest
import sys

from pytests.helpers.pattern import ANY
from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event
from tests.helpers.pattern import ANY
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event


@pytest.mark.parametrize('start_method', ['launch'])

@@ -4,8 +4,8 @@

from __future__ import print_function, with_statement, absolute_import
import pytest
from pytests.helpers.timeline import Event
from pytests.helpers.session import DebugSession
from tests.helpers.timeline import Event
from tests.helpers.session import DebugSession


@pytest.mark.parametrize('count', [1, 3])

@@ -5,9 +5,9 @@
from __future__ import print_function, with_statement, absolute_import

import pytest
from pytests.helpers.session import DebugSession
from pytests.helpers.timeline import Event
from pytests.helpers.pattern import Path
from tests.helpers.session import DebugSession
from tests.helpers.timeline import Event
from tests.helpers.pattern import Path


@pytest.mark.parametrize('module', [True, False])

@@ -1,3 +1,59 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See LICENSE in the project root
# for license information.

def noop(*args, **kwargs):
    """Do nothing."""
from __future__ import print_function, with_statement, absolute_import

import os
import sys
import threading
import time
import traceback


if sys.version_info >= (3, 5):
    clock = time.monotonic
else:
    clock = time.clock


timestamp_zero = clock()

def timestamp():
    return clock() - timestamp_zero


def dump_stacks():
    """Dump the stacks of all threads except the current thread"""
    current_ident = threading.current_thread().ident
    for thread_ident, frame in sys._current_frames().items():
        if thread_ident == current_ident:
            continue
        for t in threading.enumerate():
            if t.ident == thread_ident:
                thread_name = t.name
                thread_daemon = t.daemon
                break
        else:
            thread_name = '<unknown>'
        print('Stack of %s (%s) in pid %s; daemon=%s' % (thread_name, thread_ident, os.getpid(), thread_daemon))
        print(''.join(traceback.format_stack(frame)))


def dump_stacks_in(secs):
    """Invokes dump_stacks() on a background thread after waiting.

    Can be called from debugged code before the point after which it hangs,
    to determine the cause of the hang while debugging a test.
    """

    def dumper():
        time.sleep(secs)
        dump_stacks()

    thread = threading.Thread(target=dumper)
    thread.daemon = True
    thread.start()


from .printer import print
@@ -1,146 +0,0 @@
|
|||
import contextlib
|
||||
from io import StringIO, BytesIO
|
||||
import sys
|
||||
|
||||
from . import noop
|
||||
|
||||
|
||||
if sys.version_info < (3,):
|
||||
Buffer = BytesIO
|
||||
else:
|
||||
Buffer = StringIO
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def captured_stdio(out=None, err=None):
|
||||
if out is None and err is None:
|
||||
out = err = Buffer()
|
||||
else:
|
||||
if out is True:
|
||||
out = Buffer()
|
||||
elif out is False:
|
||||
out = None
|
||||
if err is True:
|
||||
err = Buffer()
|
||||
elif err is False:
|
||||
err = None
|
||||
|
||||
orig = sys.stdout, sys.stderr
|
||||
if out is not None:
|
||||
sys.stdout = out
|
||||
if err is not None:
|
||||
sys.stderr = err
|
||||
try:
|
||||
yield out, err
|
||||
finally:
|
||||
sys.stdout, sys.stderr = orig
|
||||
|
||||
|
||||
def iter_lines(read, sep=b'\n', stop=noop):
|
||||
"""Yield each sep-delimited line.
|
||||
|
||||
If EOF is hit, the loop is stopped, or read() returns b'' then
|
||||
EOFError is raised with exc.remainder set to any bytes left in the
|
||||
buffer.
|
||||
"""
|
||||
first = sep[0]
|
||||
line = b''
|
||||
while True:
|
||||
try:
|
||||
if stop():
|
||||
raise EOFError()
|
||||
c = read(1)
|
||||
if not c:
|
||||
raise EOFError()
|
||||
except EOFError as exc:
|
||||
exc.buffered = line
|
||||
raise
|
||||
line += c
|
||||
if c != first:
|
||||
continue
|
||||
|
||||
for want in sep[1:]:
|
||||
try:
|
||||
if stop():
|
||||
raise EOFError()
|
||||
c = read(1)
|
||||
if not c:
|
||||
raise EOFError()
|
||||
except EOFError as exc:
|
||||
exc.buffered = line
|
||||
raise
|
||||
line += c
|
||||
if c != want:
|
||||
break
|
||||
else:
|
||||
# EOL
|
||||
yield line
|
||||
line = b''
|
||||
|
||||
|
||||
def iter_lines_buffered(read, sep=b'\n', initial=b'', stop=noop):
|
||||
"""Yield (line, remainder) for each sep-delimited line.
|
||||
|
||||
If EOF is hit, the loop is stopped, or read() returns b'' then
|
||||
EOFError is raised with exc.remainder set to any bytes left in the
|
||||
buffer.
|
||||
"""
|
||||
gap = len(sep)
|
||||
# TODO: Use a bytearray?
|
||||
buf = b''
|
||||
data = initial
|
||||
while True:
|
||||
try:
|
||||
line = data[:data.index(sep)]
|
||||
except ValueError:
|
||||
buf += data
|
||||
try:
|
||||
if stop():
|
||||
raise EOFError()
|
||||
# ConnectionResetError (errno 104) likely means the
|
||||
# client was never able to establish a connection.
|
||||
# TODO: Handle ConnectionResetError gracefully.
|
||||
data = read(1024)
|
||||
if not data:
|
||||
raise EOFError()
|
||||
if buf and buf[-1:] == b'\r':
|
||||
data = buf + data
|
||||
buf = b''
|
||||
except EOFError as exc:
|
||||
exc.remainder = buf
|
||||
raise
|
||||
else:
|
||||
# EOL
|
||||
data = data[len(line) + gap:]
|
||||
yield buf + line, data
|
||||
buf = b''
|
||||
|
||||
|
||||
def read_buffered(read, numbytes, initial=b'', stop=noop):
|
||||
"""Return (data, remainder) with read().
|
||||
|
||||
If EOF is hit, the loop is stopped, or read() returns b'' then
|
||||
EOFError is raised with exc.buffered set to any bytes left in the
|
||||
buffer.
|
||||
"""
|
||||
# TODO: Use a bytearray?
|
||||
buf = initial
|
||||
while len(buf) < numbytes:
|
||||
try:
|
||||
if stop():
|
||||
raise EOFError()
|
||||
data = read(1024)
|
||||
if not data:
|
||||
raise EOFError()
|
||||
except EOFError as exc:
|
||||
exc.buffered = buf
|
||||
raise
|
||||
buf += data
|
||||
return buf[:numbytes], buf[numbytes:]
|
||||
|
||||
|
||||
def write_all(write, data, stop=noop):
|
||||
"""Keep writing until all the data is written."""
|
||||
while data and not stop():
|
||||
sent = write(data)
|
||||
data = data[sent:]
|
||||
|
|
@@ -1,11 +0,0 @@
|
|||
PROG = 'eggs'
|
||||
PORT_ARGS = ['--port', '8888']
|
||||
PYDEVD_DEFAULT_ARGS = ['--qt-support=auto']
|
||||
|
||||
|
||||
def _get_args(*args, **kwargs):
|
||||
ptvsd_extras = kwargs.get('ptvsd_extras', [])
|
||||
prog = [kwargs.get('prog', PROG)]
|
||||
port = kwargs.get('port', PORT_ARGS)
|
||||
pydevd_args = kwargs.get('pydevd', PYDEVD_DEFAULT_ARGS)
|
||||
return prog + port + ptvsd_extras + pydevd_args + list(args)
|
||||
|
|
@@ -1,64 +0,0 @@
|
|||
import sys
|
||||
|
||||
|
||||
class Counter(object):
|
||||
"""An introspectable, dynamic alternative to itertools.count()."""
|
||||
|
||||
def __init__(self, start=0, step=1):
|
||||
self._start = int(start)
|
||||
self._step = int(step)
|
||||
|
||||
def __repr__(self):
|
||||
return '{}(start={}, step={})'.format(
|
||||
type(self).__name__,
|
||||
self.peek(),
|
||||
self._step,
|
||||
)
|
||||
|
||||
def __iter__(self):
|
||||
return self
|
||||
|
||||
def __next__(self):
|
||||
try:
|
||||
self._last += self._step
|
||||
except AttributeError:
|
||||
self._last = self._start
|
||||
return self._last
|
||||
|
||||
if sys.version_info[0] == 2:
|
||||
next = __next__
|
||||
|
||||
@property
|
||||
def start(self):
|
||||
return self._start
|
||||
|
||||
@property
|
||||
def step(self):
|
||||
return self._step
|
||||
|
||||
@property
|
||||
def last(self):
|
||||
try:
|
||||
return self._last
|
||||
except AttributeError:
|
||||
return None
|
||||
|
||||
def peek(self, iterations=1):
|
||||
"""Return the value that will be used next."""
|
||||
try:
|
||||
last = self._last
|
||||
except AttributeError:
|
||||
last = self._start - self._step
|
||||
return last + self._step * iterations
|
||||
|
||||
def reset(self, start=None):
|
||||
"""Set the next value to the given one.
|
||||
|
||||
If no value is provided then the previous start value is used.
|
||||
"""
|
||||
if start is not None:
|
||||
self._start = int(start)
|
||||
try:
|
||||
del self._last
|
||||
except AttributeError:
|
||||
pass
|
||||
|
|
@@ -1,288 +0,0 @@
|
|||
import os
|
||||
import os.path
|
||||
import socket
|
||||
import time
|
||||
|
||||
from ptvsd.socket import Address
|
||||
from ptvsd._util import Closeable, ClosedError
|
||||
from .proc import Proc
|
||||
from .. import PROJECT_ROOT
|
||||
|
||||
|
||||
COPIED_ENV = [
|
||||
'PYTHONHASHSEED',
|
||||
|
||||
# Windows
|
||||
#'ALLUSERSPROFILE',
|
||||
#'APPDATA',
|
||||
#'CLIENTNAME',
|
||||
#'COMMONPROGRAMFILES',
|
||||
#'COMMONPROGRAMFILES(X86)',
|
||||
#'COMMONPROGRAMW6432',
|
||||
#'COMPUTERNAME',
|
||||
#'COMSPEC',
|
||||
#'DRIVERDATA',
|
||||
#'HOMEDRIVE',
|
||||
#'HOMEPATH',
|
||||
#'LOCALAPPDATA',
|
||||
#'LOGONSERVER',
|
||||
#'NUMBER_OF_PROCESSORS',
|
||||
#'OS',
|
||||
#'PATH',
|
||||
#'PATHEXT',
|
||||
#'PROCESSOR_ARCHITECTURE',
|
||||
#'PROCESSOR_IDENTIFIER',
|
||||
#'PROCESSOR_LEVEL',
|
||||
#'PROCESSOR_REVISION',
|
||||
#'PROGRAMDATA',
|
||||
#'PROGRAMFILES',
|
||||
#'PROGRAMFILES(X86)',
|
||||
#'PROGRAMW6432',
|
||||
#'PSMODULEPATH',
|
||||
#'PUBLIC',
|
||||
#'SESSIONNAME',
|
||||
'SYSTEMDRIVE',
|
||||
'SYSTEMROOT',
|
||||
#'TEMP',
|
||||
#'TMP',
|
||||
#'USERDOMAIN',
|
||||
#'USERDOMAIN_ROAMINGPROFILE',
|
||||
#'USERNAME',
|
||||
#'USERPROFILE',
|
||||
'WINDIR',
|
||||
]
|
||||
|
||||
SERVER_READY_TIMEOUT = 3.0 # seconds
|
||||
|
||||
try:
|
||||
ConnectionRefusedError
|
||||
except Exception:
|
||||
class ConnectionRefusedError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
def _copy_env(verbose=False, env=None):
|
||||
variables = {k: v for k, v in os.environ.items() if k in COPIED_ENV}
|
||||
# TODO: Be smarter about the seed?
|
||||
variables.setdefault('PYTHONHASHSEED', '1234')
|
||||
if verbose:
|
||||
variables.update({
|
||||
'PTVSD_DEBUG': '1',
|
||||
'PTVSD_SOCKET_TIMEOUT': '1',
|
||||
})
|
||||
if env is not None:
|
||||
variables.update(env)
|
||||
|
||||
# Ensure Project root is always in current path.
|
||||
python_path = variables.get('PYTHONPATH', None)
|
||||
if python_path is None:
|
||||
variables['PYTHONPATH'] = PROJECT_ROOT
|
||||
else:
|
||||
variables['PYTHONPATH'] = os.pathsep.join([PROJECT_ROOT, python_path])
|
||||
|
||||
return variables
|
||||
|
||||
|
||||
def wait_for_socket_server(addr, timeout=SERVER_READY_TIMEOUT):
|
||||
start_time = time.time()
|
||||
while True:
|
||||
try:
|
||||
sock = socket.create_connection((addr.host, addr.port))
|
||||
sock.close()
|
||||
time.sleep(0.1)  # wait for the daemon to detect the socket close.
|
||||
return
|
||||
except Exception:
|
||||
pass
|
||||
time.sleep(0.1)
|
||||
if time.time() - start_time > timeout:
|
||||
raise ConnectionRefusedError('Timeout waiting for connection')
|
||||
|
||||
|
||||
def wait_for_port_to_free(port, timeout=3.0):
|
||||
start_time = time.time()
|
||||
while True:
|
||||
try:
|
||||
time.sleep(0.5)
|
||||
sock = socket.create_connection(('localhost', port))
|
||||
sock.close()
|
||||
except Exception:
|
||||
return
|
||||
time.sleep(0.1)
|
||||
if time.time() - start_time > timeout:
|
||||
raise ConnectionRefusedError('Timeout waiting for port to be free')
|
||||
|
||||
|
||||
class DebugAdapter(Closeable):
|
||||
|
||||
VERBOSE = False
|
||||
#VERBOSE = True
|
||||
|
||||
PORT = 8888
|
||||
|
||||
# generic factories
|
||||
|
||||
@classmethod
|
||||
def start(cls, argv, env=None, cwd=None, **kwargs):
|
||||
def new_proc(argv, addr, **kwds):
|
||||
env_vars = _copy_env(verbose=cls.VERBOSE, env=env)
|
||||
argv = list(argv)
|
||||
cls._ensure_addr(argv, addr)
|
||||
return Proc.start_python_module(
|
||||
'ptvsd', argv, env=env_vars, cwd=cwd, **kwds)
|
||||
|
||||
return cls._start(new_proc, argv, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def start_wrapper_script(cls, filename, argv, env=None, cwd=None,
|
||||
**kwargs): # noqa
|
||||
def new_proc(argv, addr, **kwds):
|
||||
env_vars = _copy_env(verbose=cls.VERBOSE, env=env)
|
||||
return Proc.start_python_script(
|
||||
filename, argv, env=env_vars, cwd=cwd, **kwds)
|
||||
|
||||
return cls._start(new_proc, argv, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def start_wrapper_module(cls,
|
||||
modulename,
|
||||
argv,
|
||||
env=None,
|
||||
cwd=None,
|
||||
**kwargs): # noqa
|
||||
def new_proc(argv, addr, **kwds):
|
||||
env_vars = _copy_env(verbose=cls.VERBOSE, env=env)
|
||||
return Proc.start_python_module(
|
||||
modulename, argv, env=env_vars, cwd=cwd, **kwds)
|
||||
|
||||
return cls._start(new_proc, argv, **kwargs)
|
||||
|
||||
# specific factory cases
|
||||
|
||||
@classmethod
|
||||
def start_nodebug(cls, addr, name, kind='script', **kwargs):
|
||||
if kind == 'script':
|
||||
argv = ['--nodebug', name]
|
||||
elif kind == 'module':
|
||||
argv = ['--nodebug', '-m', name]
|
||||
else:
|
||||
raise NotImplementedError
|
||||
return cls.start(argv, addr=addr, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def start_as_server(cls, addr, *args, **kwargs):
|
||||
addr = Address.as_server(*addr)
|
||||
return cls._start_as(addr, *args, server=False, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def start_as_client(cls, addr, *args, **kwargs):
|
||||
addr = Address.as_client(*addr)
|
||||
return cls._start_as(addr, *args, server=False, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def start_for_attach(cls, addr, *args, **kwargs):
|
||||
srvtimeout = kwargs.pop('srvtimeout', SERVER_READY_TIMEOUT)
|
||||
addr = Address.as_server(*addr)
|
||||
adapter = cls._start_as(addr, *args, server=True, **kwargs)
|
||||
if srvtimeout is not None:
|
||||
wait_for_socket_server(addr, timeout=srvtimeout)
|
||||
return adapter
|
||||
|
||||
@classmethod
|
||||
def _start_as(cls,
|
||||
addr,
|
||||
name,
|
||||
kind='script',
|
||||
extra=None,
|
||||
server=False,
|
||||
**kwargs):
|
||||
argv = []
|
||||
if server:
|
||||
argv += ['--server']
|
||||
if kwargs.pop('wait', True):
|
||||
argv += ['--wait']
|
||||
if kind == 'script':
|
||||
argv += [name]
|
||||
elif kind == 'module':
|
||||
argv += ['-m', name]
|
||||
else:
|
||||
raise NotImplementedError
|
||||
if extra:
|
||||
argv += list(extra)
|
||||
return cls.start(argv, addr=addr, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def start_embedded(cls, addr, filename, argv=[], **kwargs):
|
||||
# ptvsd.enable_attach() slows things down, so we must wait longer.
|
||||
srvtimeout = kwargs.pop('srvtimeout', SERVER_READY_TIMEOUT + 2)
|
||||
addr = Address.as_server(*addr)
|
||||
with open(filename, 'r+') as scriptfile:
|
||||
content = scriptfile.read()
|
||||
# TODO: Handle this case somehow?
|
||||
assert 'ptvsd.enable_attach' in content
|
||||
adapter = cls.start_wrapper_script(
|
||||
filename, argv=argv, addr=addr, **kwargs)
|
||||
if srvtimeout is not None:
|
||||
wait_for_socket_server(addr, timeout=srvtimeout)
|
||||
return adapter
|
||||
|
||||
@classmethod
|
||||
def _start(cls, new_proc, argv, addr=None, **kwargs):
|
||||
addr = Address.from_raw(addr, defaultport=cls.PORT)
|
||||
proc = new_proc(argv, addr, **kwargs)
|
||||
return cls(proc, addr, owned=True)
|
||||
|
||||
@classmethod
|
||||
def _ensure_addr(cls, argv, addr):
|
||||
if '--host' in argv:
|
||||
raise ValueError("unexpected '--host' in argv")
|
||||
if '--port' in argv:
|
||||
raise ValueError("unexpected '--port' in argv")
|
||||
if '--client' in argv:
|
||||
raise ValueError("unexpected '--client' in argv")
|
||||
host, port = addr
|
||||
|
||||
argv.insert(0, str(port))
|
||||
argv.insert(0, '--port')
|
||||
|
||||
argv.insert(0, host)
|
||||
argv.insert(0, '--host')
|
||||
if not addr.isserver:
|
||||
argv.insert(0, '--client')
|
||||
|
||||
def __init__(self, proc, addr, owned=False):
|
||||
super(DebugAdapter, self).__init__()
|
||||
assert isinstance(proc, Proc)
|
||||
self._proc = proc
|
||||
self._addr = addr
|
||||
|
||||
@property
|
||||
def address(self):
|
||||
return self._addr
|
||||
|
||||
@property
|
||||
def pid(self):
|
||||
return self._proc.pid
|
||||
|
||||
@property
|
||||
def output(self):
|
||||
# TODO: Decode here?
|
||||
return self._proc.output
|
||||
|
||||
@property
|
||||
def exitcode(self):
|
||||
return self._proc.exitcode
|
||||
|
||||
def wait(self, *argv):
|
||||
self._proc.wait(*argv)
|
||||
|
||||
# internal methods
|
||||
|
||||
def _close(self):
|
||||
if self._proc is not None:
|
||||
try:
|
||||
self._proc.close()
|
||||
except ClosedError:
|
||||
pass
|
||||
if self.VERBOSE:
|
||||
lines = self.output.decode('utf-8').splitlines()
|
||||
print(' + ' + '\n + '.join(lines))
|
||||
|
|
@@ -1,270 +0,0 @@
|
|||
from __future__ import absolute_import
|
||||
|
||||
import os
|
||||
import traceback
|
||||
import warnings
|
||||
|
||||
from ptvsd.socket import Address
|
||||
from ptvsd._util import new_hidden_thread, Closeable, ClosedError
|
||||
from .debugadapter import DebugAdapter, wait_for_socket_server
|
||||
from .debugsession import DebugSession
|
||||
|
||||
# TODO: Add a helper function to start a remote debugger for testing
|
||||
# remote debugging?
|
||||
|
||||
|
||||
class _LifecycleClient(Closeable):
|
||||
|
||||
SESSION = DebugSession
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
addr=None,
|
||||
port=8888,
|
||||
breakpoints=None,
|
||||
connecttimeout=1.0,
|
||||
):
|
||||
super(_LifecycleClient, self).__init__()
|
||||
self._addr = Address.from_raw(addr, defaultport=port)
|
||||
self._connecttimeout = connecttimeout
|
||||
self._adapter = None
|
||||
self._session = None
|
||||
|
||||
self._breakpoints = breakpoints
|
||||
|
||||
@property
|
||||
def adapter(self):
|
||||
return self._adapter
|
||||
|
||||
@property
|
||||
def session(self):
|
||||
return self._session
|
||||
|
||||
def start_debugging(self, launchcfg):
|
||||
if self.closed:
|
||||
raise RuntimeError('debug client closed')
|
||||
if self._adapter is not None:
|
||||
raise RuntimeError('debugger already running')
|
||||
assert self._session is None
|
||||
|
||||
raise NotImplementedError
|
||||
|
||||
def stop_debugging(self):
|
||||
if self.closed:
|
||||
raise RuntimeError('debug client closed')
|
||||
if self._adapter is None:
|
||||
raise RuntimeError('debugger not running')
|
||||
|
||||
if self._session is not None:
|
||||
self._detach()
|
||||
|
||||
try:
|
||||
self._adapter.close()
|
||||
except ClosedError:
|
||||
pass
|
||||
self._adapter = None
|
||||
|
||||
def attach_pid(self, pid, **kwargs):
|
||||
if self.closed:
|
||||
raise RuntimeError('debug client closed')
|
||||
if self._adapter is None:
|
||||
raise RuntimeError('debugger not running')
|
||||
if self._session is not None:
|
||||
raise RuntimeError('already attached')
|
||||
|
||||
raise NotImplementedError
|
||||
|
||||
def attach_socket(self, addr=None, adapter=None, **kwargs):
|
||||
if self.closed:
|
||||
raise RuntimeError('debug client closed')
|
||||
if adapter is None:
|
||||
adapter = self._adapter
|
||||
elif self._adapter is not None:
|
||||
raise RuntimeError('already using managed adapter')
|
||||
if adapter is None:
|
||||
raise RuntimeError('debugger not running')
|
||||
if self._session is not None:
|
||||
raise RuntimeError('already attached')
|
||||
|
||||
if addr is None:
|
||||
addr = adapter.address
|
||||
self._attach(addr, **kwargs)
|
||||
return self._session
|
||||
|
||||
def detach(self, adapter=None):
|
||||
if self.closed:
|
||||
raise RuntimeError('debug client closed')
|
||||
if self._session is None:
|
||||
raise RuntimeError('not attached')
|
||||
if adapter is None:
|
||||
adapter = self._adapter
|
||||
assert adapter is not None
|
||||
if not self._session.is_client:
|
||||
raise RuntimeError('detach not supported')
|
||||
|
||||
self._detach()
|
||||
|
||||
# internal methods
|
||||
|
||||
def _close(self):
|
||||
if self._session is not None:
|
||||
try:
|
||||
self._session.close()
|
||||
except ClosedError:
|
||||
pass
|
||||
if self._adapter is not None:
|
||||
try:
|
||||
self._adapter.close()
|
||||
except ClosedError:
|
||||
pass
|
||||
|
||||
def _launch(self,
|
||||
argv,
|
||||
script=None,
|
||||
wait_for_connect=None,
|
||||
detachable=True,
|
||||
env=None,
|
||||
cwd=None,
|
||||
**kwargs):
|
||||
if script is not None:
|
||||
def start(*args, **kwargs):
|
||||
return DebugAdapter.start_wrapper_script(
|
||||
script, *args, **kwargs)
|
||||
else:
|
||||
start = DebugAdapter.start
|
||||
new_addr = Address.as_server if detachable else Address.as_client
|
||||
addr = new_addr(None, self._addr.port)
|
||||
self._adapter = start(argv, addr=addr, env=env, cwd=cwd)
|
||||
|
||||
if wait_for_connect:
|
||||
wait_for_connect()
|
||||
else:
|
||||
try:
|
||||
wait_for_socket_server(addr)
|
||||
except Exception:
|
||||
# If we fail to connect, print out the adapter output.
|
||||
self._adapter.VERBOSE = True
|
||||
raise
|
||||
self._attach(addr, **kwargs)
|
||||
|
||||
def _attach(self, addr, **kwargs):
|
||||
if addr is None:
|
||||
addr = self._addr
|
||||
assert addr.host == 'localhost'
|
||||
self._session = self.SESSION.create_client(addr, **kwargs)
|
||||
|
||||
def _detach(self):
|
||||
session = self._session
|
||||
if session is None:
|
||||
return
|
||||
self._session = None
|
||||
try:
|
||||
session.close()
|
||||
except ClosedError:
|
||||
pass
|
||||
|
||||
|
||||
class DebugClient(_LifecycleClient):
|
||||
"""A high-level abstraction of a debug client (i.e. editor)."""
|
||||
|
||||
# TODO: Manage breakpoints, etc.
|
||||
# TODO: Add debugger methods here (e.g. "pause").
|
||||
|
||||
|
||||
class EasyDebugClient(DebugClient):
|
||||
def start_detached(self, argv):
|
||||
"""Start an adapter in a background process."""
|
||||
if self.closed:
|
||||
raise RuntimeError('debug client closed')
|
||||
if self._adapter is not None:
|
||||
raise RuntimeError('debugger already running')
|
||||
assert self._session is None
|
||||
|
||||
# TODO: Launch, handshake and detach?
|
||||
self._adapter = DebugAdapter.start(argv, port=self._port)
|
||||
return self._adapter
|
||||
|
||||
def host_local_debugger(self,
|
||||
argv,
|
||||
script=None,
|
||||
env=None,
|
||||
cwd=None,
|
||||
**kwargs): # noqa
|
||||
if self.closed:
|
||||
raise RuntimeError('debug client closed')
|
||||
if self._adapter is not None:
|
||||
raise RuntimeError('debugger already running')
|
||||
assert self._session is None
|
||||
addr = ('localhost', self._addr.port)
|
||||
|
||||
self._run_server_ex = None
|
||||
|
||||
def run():
|
||||
try:
|
||||
self._session = self.SESSION.create_server(addr, **kwargs)
|
||||
except Exception:
|
||||
self._run_server_ex = traceback.format_exc()
|
||||
|
||||
t = new_hidden_thread(
|
||||
target=run,
|
||||
name='test.client',
|
||||
)
|
||||
t.start()
|
||||
|
||||
def wait():
|
||||
t.join(timeout=self._connecttimeout)
|
||||
if t.is_alive():
|
||||
warnings.warn('timed out waiting for connection')
|
||||
if self._session is None:
|
||||
message = 'unable to connect after {} secs'.format( # noqa
|
||||
self._connecttimeout)
|
||||
if self._run_server_ex is None:
|
||||
raise Exception(message)
|
||||
else:
|
||||
message = message + os.linesep + self._run_server_ex # noqa
|
||||
raise Exception(message)
|
||||
|
||||
# The adapter will close when the connection does.
|
||||
|
||||
self._launch(
|
||||
argv,
|
||||
script=script,
|
||||
wait_for_connect=wait,
|
||||
detachable=False,
|
||||
env=env,
|
||||
cwd=cwd)
|
||||
|
||||
return self._adapter, self._session
|
||||
|
||||
def launch_script(self, filename, *argv, **kwargs):
|
||||
if self.closed:
|
||||
raise RuntimeError('debug client closed')
|
||||
if self._adapter is not None:
|
||||
raise RuntimeError('debugger already running')
|
||||
assert self._session is None
|
||||
|
||||
argv = [
|
||||
filename,
|
||||
] + list(argv)
|
||||
if kwargs.pop('nodebug', False):
|
||||
argv.insert(0, '--nodebug')
|
||||
if kwargs.pop('wait', True):
|
||||
argv.insert(0, '--wait')
|
||||
self._launch(argv, **kwargs)
|
||||
return self._adapter, self._session
|
||||
|
||||
def launch_module(self, module, *argv, **kwargs):
|
||||
if self.closed:
|
||||
raise RuntimeError('debug client closed')
|
||||
if self._adapter is not None:
|
||||
raise RuntimeError('debugger already running')
|
||||
assert self._session is None
|
||||
|
||||
argv = [
|
||||
'-m',
|
||||
module,
|
||||
] + list(argv)
|
||||
if kwargs.pop('nodebug', False):
|
||||
argv.insert(0, '--nodebug')
|
||||
self._launch(argv, **kwargs)
|
||||
return self._adapter, self._session
|
||||
|
|
@@ -8,7 +8,7 @@ from __future__ import print_function, with_statement, absolute_import
# the code that is executed under debugger as part of the test (e.g. via @pyfile).
# PYTHONPATH has an entry appended to it that allows these modules to be imported
# directly from such code, i.e. "import backchannel". Consequently, these modules
# should not assume that any other code from pytests/ is importable.
# should not assume that any other code from tests/ is importable.


# Ensure that __file__ is always absolute.
@@ -1,411 +0,0 @@
|
|||
from __future__ import absolute_import, print_function
|
||||
|
||||
import contextlib
|
||||
import json
|
||||
import socket
|
||||
import sys
|
||||
import time
|
||||
import threading
|
||||
import warnings
|
||||
|
||||
from ptvsd._util import new_hidden_thread, Closeable, ClosedError
|
||||
from .message import (
|
||||
raw_read_all as read_messages,
|
||||
raw_write_one as write_message
|
||||
)
|
||||
from .socket import (
|
||||
Connection, create_server, create_client, close,
|
||||
recv_as_read, send_as_write,
|
||||
timeout as socket_timeout)
|
||||
from .threading import get_locked_and_waiter
|
||||
from .vsc import parse_message
|
||||
|
||||
|
||||
class DebugSessionConnection(Closeable):
|
||||
|
||||
VERBOSE = False
|
||||
#VERBOSE = True
|
||||
|
||||
TIMEOUT = 5.0
|
||||
|
||||
@classmethod
|
||||
def create_client(cls, addr, **kwargs):
|
||||
def connect(addr, timeout):
|
||||
sock = create_client()
|
||||
for _ in range(int(timeout * 10)):
|
||||
try:
|
||||
sock.connect(addr)
|
||||
except (OSError, socket.error):
|
||||
if cls.VERBOSE:
|
||||
print('+', end='')
|
||||
sys.stdout.flush()
|
||||
time.sleep(0.1)
|
||||
else:
|
||||
break
|
||||
else:
|
||||
raise RuntimeError('could not connect')
|
||||
return sock
|
||||
return cls._create(connect, addr, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def create_server(cls, addr, **kwargs):
|
||||
def connect(addr, timeout):
|
||||
server = create_server(addr)
|
||||
with socket_timeout(server, timeout):
|
||||
client, _ = server.accept()
|
||||
return Connection(client, server)
|
||||
return cls._create(connect, addr, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def _create(cls, connect, addr, timeout=None):
|
||||
if timeout is None:
|
||||
timeout = cls.TIMEOUT
|
||||
sock = connect(addr, timeout)
|
||||
if cls.VERBOSE:
|
||||
print('connected')
|
||||
self = cls(sock, ownsock=True)
|
||||
self._addr = addr
|
||||
return self
|
||||
|
||||
def __init__(self, sock, ownsock=False):
|
||||
super(DebugSessionConnection, self).__init__()
|
||||
self._sock = sock
|
||||
self._ownsock = ownsock
|
||||
|
||||
@property
|
||||
def is_client(self):
|
||||
try:
|
||||
return self._sock.server is None
|
||||
except AttributeError:
|
||||
return True
|
||||
|
||||
def iter_messages(self):
|
||||
if self.closed:
|
||||
raise RuntimeError('connection closed')
|
||||
|
||||
def stop():
|
||||
return self.closed
|
||||
read = recv_as_read(self._sock)
|
||||
for msg, _, _ in read_messages(read, stop=stop):
|
||||
if self.VERBOSE:
|
||||
print(repr(msg))
|
||||
yield parse_message(msg)
|
||||
|
||||
def send(self, req):
|
||||
if self.closed:
|
||||
raise RuntimeError('connection closed')
|
||||
|
||||
def stop():
|
||||
return self.closed
|
||||
write = send_as_write(self._sock)
|
||||
body = json.dumps(req)
|
||||
write_message(write, body, stop=stop)
|
||||
|
||||
# internal methods
|
||||
|
||||
def _close(self):
|
||||
if self._ownsock:
|
||||
close(self._sock)
|
||||
|
||||
|
||||
class DebugSession(Closeable):
|
||||
|
||||
VERBOSE = False
|
||||
#VERBOSE = True
|
||||
|
||||
HOST = 'localhost'
|
||||
PORT = 8888
|
||||
|
||||
TIMEOUT = None
|
||||
|
||||
@classmethod
|
||||
def create_client(cls, addr=None, **kwargs):
|
||||
if addr is None:
|
||||
addr = (cls.HOST, cls.PORT)
|
||||
conn = DebugSessionConnection.create_client(
|
||||
addr,
|
||||
timeout=kwargs.get('timeout'),
|
||||
)
|
||||
return cls(conn, owned=True, **kwargs)
|
||||
|
||||
@classmethod
|
||||
def create_server(cls, addr=None, **kwargs):
|
||||
if addr is None:
|
||||
addr = (cls.HOST, cls.PORT)
|
||||
conn = DebugSessionConnection.create_server(addr, **kwargs)
|
||||
return cls(conn, owned=True, **kwargs)
|
||||
|
||||
def __init__(self, conn, seq=1000, handlers=(), timeout=None, owned=False):
|
||||
super(DebugSession, self).__init__()
|
||||
self._conn = conn
|
||||
self._seq = seq
|
||||
self._timeout = timeout
|
||||
self._owned = owned
|
||||
|
||||
self._handlers = []
|
||||
for handler in handlers:
|
||||
if callable(handler):
|
||||
self._add_handler(handler)
|
||||
else:
|
||||
self._add_handler(*handler)
|
||||
self._received = []
|
||||
self._listenerthread = new_hidden_thread(
|
||||
target=self._listen,
|
||||
name='test.session',
|
||||
)
|
||||
self._listenerthread.start()
|
||||
|
||||
@property
|
||||
def is_client(self):
|
||||
return self._conn.is_client
|
||||
|
||||
@property
|
||||
def received(self):
|
||||
return list(self._received)
|
||||
|
||||
def _create_request(self, command, **args):
|
||||
seq = self._seq
|
||||
self._seq += 1
|
||||
return {
|
||||
'type': 'request',
|
||||
'seq': seq,
|
||||
'command': command,
|
||||
'arguments': args,
|
||||
}
|
||||
|
||||
def send_request(self, command, **args):
|
||||
if self.closed:
|
||||
raise RuntimeError('session closed')
|
||||
|
||||
wait = args.pop('wait', False)
|
||||
req = self._create_request(command, **args)
|
||||
if self.VERBOSE:
|
||||
msg = parse_message(req)
|
||||
print(' <-', msg)
|
||||
|
||||
if wait:
|
||||
with self.wait_for_response(req) as resp:
|
||||
self._conn.send(req)
|
||||
resp_awaiter = AwaitableResponse(req, lambda: resp["msg"])
|
||||
else:
|
||||
resp_awaiter = self._get_awaiter_for_request(req, **args)
|
||||
self._conn.send(req)
|
||||
return resp_awaiter
|
||||
|
||||
def add_handler(self, handler, **kwargs):
|
||||
if self.closed:
|
||||
raise RuntimeError('session closed')
|
||||
|
||||
self._add_handler(handler, **kwargs)
|
||||
|
||||
@contextlib.contextmanager
|
||||
def wait_for_event(self, event, **kwargs):
|
||||
if self.closed:
|
||||
raise RuntimeError('session closed')
|
||||
result = {'msg': None}
|
||||
|
||||
def match(msg):
|
||||
result['msg'] = msg
|
||||
return msg.type == 'event' and msg.event == event
|
||||
handlername = 'event {!r}'.format(event)
|
||||
with self._wait_for_message(match, handlername, **kwargs):
|
||||
yield result
|
||||
|
||||
def get_awaiter_for_event(self, event, condition=lambda msg: True, **kwargs): # noqa
|
||||
if self.closed:
|
||||
raise RuntimeError('session closed')
|
||||
result = {'msg': None}
|
||||
|
||||
def match(msg):
|
||||
result['msg'] = msg
|
||||
return msg.type == 'event' and msg.event == event and condition(msg) # noqa
|
||||
handlername = 'event {!r}'.format(event)
|
||||
evt = self._get_message_handle(match, handlername)
|
||||
|
||||
return AwaitableEvent(event, lambda: result["msg"], evt)
|
||||
|
||||
def _get_awaiter_for_request(self, req, **kwargs):
|
||||
if self.closed:
|
||||
raise RuntimeError('session closed')
|
||||
|
||||
try:
|
||||
command, seq = req.command, req.seq
|
||||
except AttributeError:
|
||||
command, seq = req['command'], req['seq']
|
||||
result = {'msg': None}
|
||||
|
||||
def match(msg):
|
||||
if msg.type != 'response':
|
||||
return False
|
||||
result['msg'] = msg
|
||||
return msg.request_seq == seq
|
||||
handlername = 'response (cmd:{} seq:{})'.format(command, seq)
|
||||
evt = self._get_message_handle(match, handlername)
|
||||
|
||||
return AwaitableResponse(req, lambda: result["msg"], evt)
|
||||
|
||||
@contextlib.contextmanager
|
||||
def wait_for_response(self, req, **kwargs):
|
||||
if self.closed:
|
||||
raise RuntimeError('session closed')
|
||||
|
||||
try:
|
||||
command, seq = req.command, req.seq
|
||||
except AttributeError:
|
||||
command, seq = req['command'], req['seq']
|
||||
result = {'msg': None}
|
||||
|
||||
def match(msg):
|
||||
if msg.type != 'response':
|
||||
return False
|
||||
result['msg'] = msg
|
||||
return msg.request_seq == seq
|
||||
handlername = 'response (cmd:{} seq:{})'.format(command, seq)
|
||||
with self._wait_for_message(match, handlername, **kwargs):
|
||||
yield result
|
||||
|
||||
# internal methods
|
||||
|
||||
def _close(self):
|
||||
if self._owned:
|
||||
try:
|
||||
self._conn.close()
|
||||
except ClosedError:
|
||||
pass
|
||||
if self._listenerthread != threading.current_thread():
|
||||
self._listenerthread.join(timeout=1.0)
|
||||
if self._listenerthread.is_alive():
|
||||
warnings.warn('session listener still running')
|
||||
self._check_handlers()
|
||||
|
||||
def _listen(self):
|
||||
eof = None
|
||||
try:
|
||||
for msg in self._conn.iter_messages():
|
||||
if self.VERBOSE:
|
||||
print(' ->', msg)
|
||||
self._receive_message(msg)
|
||||
except EOFError as ex:
|
||||
# Handle EOF outside of except to avoid unnecessary chaining.
|
||||
eof = ex
|
||||
if eof:
|
||||
remainder = getattr(eof, 'remainder', b'')
|
||||
if remainder:
|
||||
self._receive_message(remainder)
|
||||
try:
|
||||
self.close()
|
||||
except ClosedError:
|
||||
pass
|
||||
|
||||
def _receive_message(self, msg):
|
||||
for i, handler in enumerate(list(self._handlers)):
|
||||
handle_message, _, _ = handler
|
||||
handled = handle_message(msg)
|
||||
try:
|
||||
msg, handled = handled
|
||||
except TypeError:
|
||||
pass
|
||||
if handled:
|
||||
self._handlers.remove(handler)
|
||||
break
|
||||
self._received.append(msg)
|
||||
|
||||
def _add_handler(self, handle_msg, handlername=None, required=True):
|
||||
self._handlers.append(
|
||||
(handle_msg, handlername, required))
|
||||
|
||||
def _check_handlers(self):
|
||||
unhandled = []
|
||||
for handle_msg, name, required in self._handlers:
|
||||
if not required:
|
||||
continue
|
||||
unhandled.append(name or repr(handle_msg))
|
||||
if unhandled:
|
||||
raise RuntimeError('unhandled: {}'.format(unhandled))
|
||||
|
||||
@contextlib.contextmanager
|
||||
def _wait_for_message(self, match, handlername, timeout=None):
|
||||
if timeout is None:
|
||||
timeout = self.TIMEOUT
|
||||
lock, wait = get_locked_and_waiter()
|
||||
|
||||
def handler(msg):
|
||||
if not match(msg):
|
||||
return msg, False
|
||||
lock.release()
|
||||
return msg, True
|
||||
self._add_handler(handler, handlername)
|
||||
try:
|
||||
yield
|
||||
finally:
|
||||
wait(timeout or self._timeout, handlername, fail=True)
|
||||
|
||||
def _get_message_handle(self, match, handlername):
|
||||
event = threading.Event()
|
||||
|
||||
def handler(msg):
|
||||
if not match(msg):
|
||||
return msg, False
|
||||
event.set()
|
||||
return msg, True
|
||||
self._add_handler(handler, handlername, False)
|
||||
return event
|
||||
|
||||
|
||||
class Awaitable(object):
|
||||
|
||||
@classmethod
|
||||
def wait_all(cls, *awaitables):
|
||||
timeout = 3.0
|
||||
messages = []
|
||||
for _ in range(int(timeout * 10)):
|
||||
time.sleep(0.1)
|
||||
messages = []
|
||||
not_ready = (a for a in awaitables if a._event is not None and not a._event.is_set()) # noqa
|
||||
for awaitable in not_ready:
|
||||
if isinstance(awaitable, AwaitableEvent):
|
||||
messages.append('Event {}'.format(awaitable.name))
|
||||
else:
|
||||
messages.append('Response {}'.format(awaitable.name))
|
||||
if len(messages) == 0:
|
||||
return
|
||||
else:
|
||||
raise TimeoutError('Timeout waiting for {}'.format(','.join(messages))) # noqa
|
||||
|
||||
def __init__(self, name, event=None):
|
||||
self._event = event
|
||||
self.name = name
|
||||
|
||||
def wait(self, timeout=1.0):
|
||||
if self._event is None:
|
||||
return
|
||||
if not self._event.wait(timeout):
|
||||
message = 'Timeout waiting for '
|
||||
if isinstance(self, AwaitableEvent):
|
||||
message += 'Event {}'.format(self.name)
|
||||
else:
|
||||
message += 'Response {}'.format(self.name)
|
||||
raise TimeoutError(message)
|
||||
|
||||
|
||||
class AwaitableResponse(Awaitable):
|
||||
|
||||
def __init__(self, req, result_getter, event=None):
|
||||
super(AwaitableResponse, self).__init__(req["command"], event)
|
||||
self.req = req
|
||||
self._result_getter = result_getter
|
||||
|
||||
@property
|
||||
def resp(self):
|
||||
return self._result_getter()
|
||||
|
||||
|
||||
class AwaitableEvent(Awaitable):
|
||||
|
||||
def __init__(self, name, result_getter, event=None):
|
||||
super(AwaitableEvent, self).__init__(name, event)
|
||||
self._result_getter = result_getter
|
||||
|
||||
@property
|
||||
def event(self):
|
||||
return self._result_getter()
|
||||
|
|
@@ -1,78 +0,0 @@
|
|||
from . import noop
|
||||
from ._io import iter_lines_buffered, write_all
|
||||
|
||||
|
||||
class HeaderError(Exception):
|
||||
"""Some header-related problem."""
|
||||
|
||||
|
||||
class HeaderLineError(HeaderError):
|
||||
"""A problem with an encoded header line."""
|
||||
|
||||
|
||||
class DecodeError(HeaderLineError):
|
||||
"""Trouble decoding a header line."""
|
||||
|
||||
|
||||
def decode(line):
|
||||
"""Return (name, value) for the given encoded header line."""
|
||||
if line[-2:] == b'\r\n':
|
||||
line = line[:-2]
|
||||
if not line:
|
||||
return None, None
|
||||
line = line.decode('ascii', 'replace')
|
||||
name, sep, value = line.partition(':')
|
||||
if not sep:
|
||||
raise DecodeError(line)
|
||||
return name, value
|
||||
|
||||
|
||||
def encode(name, value):
|
||||
"""Return the encoded header line."""
|
||||
return '{}: {}\r\n'.format(name, value).encode('ascii')
|
||||
|
||||
|
||||
def read_one(read, **kwargs):
|
||||
"""Return ((name, value), remainder) for the next header from read()."""
|
||||
lines = iter_lines_buffered(read, sep=b'\r\n', **kwargs)
|
||||
for line, remainder in lines:
|
||||
if not line:
|
||||
return None, remainder
|
||||
return decode(line), remainder
|
||||
|
||||
|
||||
def write_one(write, name, value, stop=noop):
|
||||
"""Send the header."""
|
||||
line = encode(name, value)
|
||||
return write_all(write, line, stop=stop)
|
||||
|
||||
|
||||
#def recv_header(sock, stop=(lambda: None), timeout=5.0):
|
||||
# """Return (name, value) for the next header."""
|
||||
# line = b''
|
||||
# with socket.timeout(sock, timeout):
|
||||
# while not stop():
|
||||
# c = sock.recv(1)
|
||||
# if c == b'\r':
|
||||
# c = sock.recv(1)
|
||||
# if c == b'\n':
|
||||
# break
|
||||
# line += b'\r'
|
||||
# line += c
|
||||
# else:
|
||||
# line += c
|
||||
# line = line.decode('ascii', 'replace')
|
||||
# if not line:
|
||||
# return None, None
|
||||
# name, sep, value = line.partition(':')
|
||||
# if not sep:
|
||||
# raise ValueError('bad header line {!r}'.format(line))
|
||||
# return name, value
|
||||
#
|
||||
#
|
||||
#def send_header(sock, name, value):
|
||||
# """Send the header."""
|
||||
# line = '{}: {}\r\n'.format(name, value).encode('ascii')
|
||||
# while line:
|
||||
# sent = sock.send(line)
|
||||
# line = line[sent:]
|
||||
|
|
@@ -1,82 +0,0 @@
|
|||
from __future__ import absolute_import
|
||||
|
||||
try:
|
||||
from http.server import BaseHTTPRequestHandler, HTTPServer
|
||||
except ImportError:
|
||||
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
|
||||
|
||||
from ptvsd._util import new_hidden_thread
|
||||
|
||||
|
||||
class Server:
|
||||
"""Wraps an http.server.HTTPServer in a thread."""
|
||||
|
||||
def __init__(self, handler, host='', port=8000):
|
||||
self.handler = handler
|
||||
self._addr = (host, port)
|
||||
self._server = None
|
||||
self._thread = None
|
||||
|
||||
@property
|
||||
def address(self):
|
||||
host, port = self._addr
|
||||
if host == '':
|
||||
host = 'localhost'
|
||||
return '{}:{}'.format(host, port)
|
||||
|
||||
def start(self):
|
||||
if self._server is not None:
|
||||
raise RuntimeError('already started')
|
||||
self._server = HTTPServer(self._addr, self.handler)
|
||||
self._thread = new_hidden_thread(
|
||||
target=(lambda: self._server.serve_forever()),
|
||||
name='test.http',
|
||||
)
|
||||
self._thread.start()
|
||||
|
||||
def stop(self):
|
||||
if self._server is None:
|
||||
raise RuntimeError('not running')
|
||||
self._server.shutdown()
|
||||
self._thread.join()
|
||||
self._server.server_close()
|
||||
self._thread = None
|
||||
self._server = None
|
||||
|
||||
def __enter__(self):
|
||||
self.start()
|
||||
return self
|
||||
|
||||
def __exit__(self, *args):
|
||||
self.stop()
|
||||
|
||||
|
||||
def json_file_handler(data):
|
||||
"""Return an HTTP handler that always serves the given JSON bytes."""
|
||||
|
||||
class HTTPHandler(BaseHTTPRequestHandler):
|
||||
def do_GET(self):
|
||||
self.send_response(200)
|
||||
self.send_header('Content-Type', b'application/json')
|
||||
self.send_header('Content-Length',
|
||||
str(len(data)).encode('ascii'))
|
||||
self.end_headers()
|
||||
self.wfile.write(data)
|
||||
|
||||
def log_message(self, *args, **kwargs):
|
||||
pass
|
||||
|
||||
return HTTPHandler
|
||||
|
||||
|
||||
def error_handler(code, msg):
|
||||
"""Return an HTTP handler that always returns the given error code."""
|
||||
|
||||
class HTTPHandler(BaseHTTPRequestHandler):
|
||||
def do_GET(self):
|
||||
self.send_error(code, msg)
|
||||
|
||||
def log_message(self, *args, **kwargs):
|
||||
pass
|
||||
|
||||
return HTTPHandler
|
||||
|
|
@@ -1,105 +0,0 @@
|
|||
import inspect
|
||||
import os
|
||||
import os.path
|
||||
import time
|
||||
|
||||
|
||||
class LockTimeoutError(RuntimeError):
|
||||
pass
|
||||
|
||||
|
||||
##################################
|
||||
# lock files
|
||||
|
||||
# TODO: Support a nonce for lockfiles?
|
||||
|
||||
def _acquire_lockfile(filename, timeout):
|
||||
# Wait until it does not exist.
|
||||
for _ in range(int(timeout * 10) + 1):
|
||||
if not os.path.exists(filename):
|
||||
break
|
||||
time.sleep(0.1)
|
||||
else:
|
||||
if os.path.exists(filename):
|
||||
raise LockTimeoutError(
|
||||
'timed out waiting for lockfile %r' % filename)
|
||||
# Create the file.
|
||||
with open(filename, 'w'):
|
||||
pass
|
||||
|
||||
|
||||
def _release_lockfile(filename):
|
||||
try:
|
||||
os.remove(filename)
|
||||
except OSError:
|
||||
if not os.path.exists(filename):
|
||||
raise RuntimeError('lockfile not held')
|
||||
# TODO: Fail here?
|
||||
pass
|
||||
|
||||
|
||||
_ACQUIRE_LOCKFILE = """
|
||||
# <- START ACQUIRE LOCKFILE SCRIPT ->
|
||||
import os.path
|
||||
import time
|
||||
class LockTimeoutError(RuntimeError):
|
||||
pass
|
||||
%s
|
||||
_acquire_lockfile({!r}, {!r})
|
||||
# <- END ACQUIRE LOCKFILE SCRIPT ->
|
||||
""" % inspect.getsource(_acquire_lockfile).strip()
|
||||
|
||||
_RELEASE_LOCKFILE = """
|
||||
# <- START RELEASE LOCKFILE SCRIPT ->
|
||||
import os
|
||||
import os.path
|
||||
%s
|
||||
_release_lockfile({!r})
|
||||
# <- END RELEASE LOCKFILE SCRIPT ->
|
||||
""" % inspect.getsource(_release_lockfile).strip()
|
||||
|
||||
|
||||
class Lockfile(object):
|
||||
"""A wrapper around a lock file."""
|
||||
|
||||
def __init__(self, filename):
|
||||
self._filename = filename
|
||||
|
||||
def __repr__(self):
|
||||
return '{}(filename={!r})'.format(
|
||||
type(self).__name__,
|
||||
self._filename,
|
||||
)
|
||||
|
||||
def __str__(self):
|
||||
return self._filename
|
||||
|
||||
@property
|
||||
def filename(self):
|
||||
return self._filename
|
||||
|
||||
def acquire(self, timeout=5.0):
|
||||
_acquire_lockfile(self._filename, timeout)
|
||||
|
||||
def acquire_script(self, timeout=5.0):
|
||||
return _ACQUIRE_LOCKFILE.format(self._filename, timeout)
|
||||
|
||||
def release(self):
|
||||
_release_lockfile(self._filename)
|
||||
|
||||
def release_script(self):
|
||||
return _RELEASE_LOCKFILE.format(self._filename)
|
||||
|
||||
def wait_for_script(self):
|
||||
"""Return (done script, wait func) after acquiring."""
|
||||
def wait(**kwargs):
|
||||
self.acquire(**kwargs)
|
||||
self.release()
|
||||
self.acquire()
|
||||
return self.release_script(), wait
|
||||
|
||||
def wait_in_script(self, **kwargs):
|
||||
"""Return (done func, wait script) after acquiring."""
|
||||
script = self.acquire_script(**kwargs) + self.release_script()
|
||||
self.acquire()
|
||||
return self.release, script
|
||||
|
|
@@ -1,129 +0,0 @@
|
|||
from . import noop
|
||||
from ._io import write_all, read_buffered
|
||||
from .header import read_one as read_header, write_one as write_header
|
||||
|
||||
|
||||
def raw_read_all(read, initial=b'', stop=noop):
|
||||
"""Yield (msg, headers, remainder) for each message read."""
|
||||
headers = {}
|
||||
remainder = initial
|
||||
while not stop():
|
||||
header, remainder = read_header(read, initial=remainder, stop=stop)
|
||||
if header is not None:
|
||||
name, value = header
|
||||
headers[name] = value
|
||||
continue
|
||||
|
||||
# end-of-headers
|
||||
numbytes = int(headers['Content-Length'])
|
||||
data, remainder = read_buffered(read, numbytes, initial=remainder,
|
||||
stop=stop)
|
||||
msg = data.decode('utf-8', 'replace')
|
||||
yield msg, headers, remainder
|
||||
headers = {}
|
||||
|
||||
|
||||
def raw_write_one(write, body, stop=noop, **headers):
|
||||
"""Write the message."""
|
||||
body = body.encode('utf-8')
|
||||
headers.setdefault('Content-Length', len(body))
|
||||
for name, value in headers.items():
|
||||
write_header(write, name, value, stop=stop)
|
||||
write_all(write, b'\r\n')
|
||||
write_all(write, body)
|
||||
|
||||
|
||||
def assert_messages_equal(received, expected):
|
||||
if received != expected:
|
||||
try:
|
||||
from itertools import zip_longest
|
||||
except ImportError:
|
||||
from itertools import izip_longest as zip_longest
|
||||
|
||||
msg = ['']
|
||||
msg.append('Received:')
|
||||
for r in received:
|
||||
msg.append(str(r))
|
||||
msg.append('')
|
||||
|
||||
msg.append('Expected:')
|
||||
for r in expected:
|
||||
msg.append(str(r))
|
||||
msg.append('')
|
||||
|
||||
msg.append('Diff by line')
|
||||
for i, (a, b) in enumerate(
|
||||
zip_longest(received, expected, fillvalue=None)):
|
||||
if a == b:
|
||||
msg.append(' %2d: %s' % (i, a,))
|
||||
else:
|
||||
msg.append('!%2d: %s != %s' % (i, a, b))
|
||||
|
||||
raise AssertionError('\n'.join(msg))
|
||||
|
||||
|
||||
def assert_contains_messages(received, expected):
|
||||
error_message = ['']
|
||||
received_copy = list(msg._replace(seq=0) for msg in received)
|
||||
expected_copy = list(msg._replace(seq=0) for msg in expected)
|
||||
received_messages = '\nReceived:\n' + \
|
||||
'\n'.join(str(msg) for msg in received_copy)
|
||||
for msg in expected_copy:
|
||||
if msg in received_copy:
|
||||
del received_copy[received_copy.index(msg)]
|
||||
else:
|
||||
error_message.append('Not found:')
|
||||
error_message.append(str(msg))
|
||||
|
||||
if len(error_message) > 1:
|
||||
expected_messages = '\nExpected:\n' + \
|
||||
'\n'.join(str(msg) for msg in expected_copy)
|
||||
raise AssertionError('\n'.join(error_message) +
|
||||
received_messages +
|
||||
expected_messages)
|
||||
|
||||
|
||||
def assert_is_subset(received_message, expected_message):
|
||||
message = [
|
||||
'Subset comparison failed',
|
||||
'Received: {}'.format(received_message),
|
||||
'Expected: {}'.format(expected_message),
|
||||
]
|
||||
|
||||
def assert_is_subset(received, expected, current_path=''):
|
||||
try:
|
||||
if received == expected:
|
||||
return
|
||||
elif type(expected) is dict:
|
||||
try:
|
||||
iterator = expected.iteritems()
|
||||
except AttributeError:
|
||||
iterator = expected.items()
|
||||
parent_path = current_path
|
||||
for pkey, pvalue in iterator:
|
||||
current_path = '{}.{}'.format(parent_path, pkey)
|
||||
assert_is_subset(received[pkey], pvalue, current_path)
|
||||
elif type(expected) is list:
|
||||
parent_path = current_path
|
||||
for i, pvalue in enumerate(expected):
|
||||
current_path = '{}[{}]'.format(parent_path, i)
|
||||
assert_is_subset(received[i], pvalue, current_path)
|
||||
else:
|
||||
if received != expected:
|
||||
raise ValueError
|
||||
return True
|
||||
except ValueError:
|
||||
message.append('Path: body{}'.format(current_path))
|
||||
message.append('Received:{}'.format(received))
|
||||
message.append('Expected:{}'.format(expected))
|
||||
raise AssertionError('\n'.join(message))
|
||||
except KeyError:
|
||||
message.append('Key not found: body{}'.format(current_path))
|
||||
raise AssertionError('\n'.join(message))
|
||||
except IndexError:
|
||||
message.append('Index not found: body{}'.format(current_path))
|
||||
raise AssertionError('\n'.join(message))
|
||||
|
||||
received = received_message.body if hasattr(received_message, 'body') else received_message # noqa
|
||||
expected = expected_message.body if hasattr(expected_message, 'body') else expected_message # noqa
|
||||
assert_is_subset(received, expected)
|
||||
|
|
@@ -9,8 +9,8 @@ import ptvsd.compat


def get_test_root(name):
    pytests_dir = os.path.dirname(os.path.dirname(__file__))
    p = os.path.join(pytests_dir, 'func', 'testfiles', name)
    tests_dir = os.path.dirname(os.path.dirname(__file__))
    p = os.path.join(tests_dir, 'func', 'testfiles', name)
    if os.path.exists(p):
        return p
    return None