## Summary
Closes: https://github.com/astral-sh/ty/issues/551
This PR adds support for step 4 of the overload call evaluation
algorithm which states that:
> If the argument list is compatible with two or more overloads,
determine whether one or more of the overloads has a variadic parameter
(either `*args` or `**kwargs`) that maps to a corresponding argument
that supplies an indeterminate number of positional or keyword
arguments. If so, eliminate overloads that do not have a variadic
parameter.
With that, the overload call evaluation algorithm is now implemented end to
end, as described in the typing spec.
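A minimal sketch of the behavior (the exact revealed type is illustrative):
```py
from typing import overload

@overload
def f(x: int, y: int) -> str: ...
@overload
def f(*args: int) -> int: ...
def f(*args: int) -> str | int:
    return 0

def g(args: list[int]) -> None:
    # `*args` supplies an indeterminate number of positional arguments, so the
    # overload without a variadic parameter is eliminated.
    reveal_type(f(*args))  # revealed: int
```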
## Test Plan
Expand the overload call test suite.
## Summary
Closes: https://github.com/astral-sh/ty/issues/1236
This PR fixes a bug where the variadic argument wouldn't match against
the variadic parameter in certain scenarios.
This happened because I hadn't realized that the `all_elements` iterator
doesn't keep returning the variable-length element (which is correct
behavior; I just wasn't aware of it at the time).
I don't think we can use the `resize` method here because we don't yet know
how many parameters this variadic argument matches against, since this is
where the actual parameter matching occurs.
## Test Plan
Expand test cases to consider a few more combinations of arguments and
parameters which are variadic.
## Summary
This applies the trick that we use for `builtins.open` to similar
functions that have the same problem. The reason is that the problem
would otherwise become even more pronounced once we add understanding of
the implicit type of `self` parameters, because then something like
`(base_path / "test.bin").open("rb")` also leads to a wrong return type
and can result in false positives.
## Test Plan
New Markdown tests
## Summary
This PR adds support for unpacking `**kwargs` argument.
This can be matched against any standard (positional or keyword),
keyword-only, or keyword-variadic parameter that hasn't been matched yet.
This PR also special-cases `TypedDict`: because the key names and the
corresponding value types are known, we can be more precise in the matching
and type-checking steps. In the future, this special casing will be extended
to cover `ParamSpec` as well.
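A small example of the `TypedDict` special casing (illustrative):
```py
from typing import TypedDict

class Person(TypedDict):
    name: str
    age: int

def create(name: str, age: int) -> None: ...

def f(person: Person) -> None:
    # The keys of `Person` are known, so `name` and `age` are matched to the
    # corresponding parameters and their value types are checked precisely.
    create(**person)
```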
Part of astral-sh/ty#247
## Test Plan
Add test cases for various scenarios.
## Summary
https://github.com/astral-sh/ruff/pull/20165 added a lot of false
positives around calls to `builtins.open()`, because our missing support
for PEP-613 type aliases means that we don't understand typeshed's
overloads for `builtins.open()` at all yet, and therefore always select
the first overload. This didn't use to matter very much, but now that we
have a much stricter implementation of protocol assignability/subtyping
it matters a lot, because most of the stdlib functions dealing with I/O
(`pickle`, `marshal`, `io`, `json`, etc.) are annotated in typeshed as
taking in protocols of some kind.
In lieu of full PEP-613 support, which is blocked on various things and
might not land in time for our next alpha release, this PR adds some
temporary special-casing for `builtins.open()` to avoid the false
positives. We just infer `Todo` for anything that isn't meant to match
typeshed's first `open()` overload. This should be easy to rip out again
once we have proper support for PEP-613 type aliases, which hopefully
should be pretty soon!
## Test Plan
Added an mdtest
## Summary
This PR addresses an issue for a variadic argument when involved in
argument type expansion of overload call evaluation.
The issue is that expanding the variadic argument can produce argument
lists of different arities. For example, with `*args: tuple[int] |
tuple[int, str]`, expansion unpacks the variadic argument into 1 and 2
elements respectively. This means the parameter matching performed initially
isn't sufficient, and each expanded argument list needs to redo parameter
matching.
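A sketch of the kind of call affected (the revealed type is illustrative):
```py
from typing import overload

@overload
def f(x: int) -> int: ...
@overload
def f(x: int, y: str) -> str: ...
def f(x: int, y: str | None = None) -> int | str:
    return x if y is None else y

def g(args: tuple[int] | tuple[int, str]) -> None:
    # Expanding `args` produces a one-element and a two-element argument list,
    # each of which must redo parameter matching against the overloads.
    reveal_type(f(*args))  # revealed: int | str
```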
This is currently done by redoing the parameter matching directly,
maintaining the state of argument forms (and the conflicting forms), and
updating the `Bindings` values if it changes.
Closes: astral-sh/ty#735
## Test Plan
Update existing mdtest.
## Summary
This PR fixes various TODOs around overload calls when a variadic
argument is used.
The bug existed because the specialization didn't account for unpacking the
type of the variadic argument.
This is fixed by expanding `MatchedArgument` to contain the type of that
argument _only_ when it is a variadic argument. The reason is that there's a
split in when argument types are inferred -- non-variadic argument types are
inferred using `infer_argument_types` _after_ parameter matching, while the
variadic argument type is inferred _during_ parameter matching. And
`MatchedArgument` is populated _during_ parameter matching, which means the
unpacking needs to happen there as well.
This split seems a bit inconsistent but I don't want to spend a lot of
time on trying to merge them such that all argument type inference
happens in a single place. I might look into it while adding support for
`**kwargs`.
## Test Plan
Update existing tests by resolving the todos.
The ecosystem changes look correct to me except for the `slice` call, but
that seems unrelated to this PR, as we infer `slice[Any, Any, Any]` for a
`slice(1, 2, 3)` call on `main` as well
([playground](https://play.ty.dev/9eacce00-c7d5-4dd5-a932-4265cb2bb4f6)).
## Summary
This PR limits the argument type expansion size for an overload call
evaluation to 512.
The chosen limit is somewhat arbitrary: I took Pyright's limit of 256 as a
reference and doubled it as a starting point.
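To illustrate the growth the limit guards against (a hypothetical sketch;
the exact diagnostic wording differs):
```py
from typing import overload

@overload
def f(*args: int) -> int: ...
@overload
def f(*args: str) -> str: ...
def f(*args): ...

def g(x: int | str) -> None:
    # Each union-typed argument doubles the number of expanded argument lists:
    # with 10 such arguments there are 2**10 = 1024 combinations, which exceeds
    # the 512 limit, so expansion stops and a diagnostic is reported.
    f(x, x, x, x, x, x, x, x, x, x)  # error: [no-matching-overload]
```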
Initially, I started by trying to refactor the entire argument type
expansion to be lazy. Currently, expanding a single argument at any position
eagerly creates the combinations (argument lists) and returns them as a
`Vec<CallArguments>`. I thought we could make this lazier by converting the
return type of `expand` from `Iterator<Item = Vec<CallArguments>>` to
`Iterator<Item = Iterator<Item = CallArguments>>`, but that proved difficult
to implement, mainly because we **need** to keep the previous expansion
around to generate the next one, which is the main reason
`std::iter::successors` is used in the first place.
Another approach would be to eagerly expand all the argument types and then
use `combinations` from `itertools` to generate the combinations, but we
would need to find the "boundary" between argument lists produced by
expanding the argument at position 1 versus position 2, because that's
important for the algorithm.
Closes: https://github.com/astral-sh/ty/issues/868
## Test Plan
Add test case to demonstrate the limit along with the diagnostic
snapshot stating that the limit has been reached.
## Summary
Part of: https://github.com/astral-sh/ty/issues/868
This PR adds a heuristic to avoid argument type expansion if it's going
to eventually lead to no matching overload.
This is done by checking whether the non-expandable argument types are
assignable to the corresponding annotated parameter types. If one of them is
not assignable in any of the remaining overloads, then argument type
expansion isn't going to help.
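A sketch of the kind of call the heuristic short-circuits (illustrative):
```py
from typing import overload

@overload
def f(x: int, flag: bool) -> int: ...
@overload
def f(x: str, flag: bool) -> str: ...
def f(x, flag): ...

def g(x: int | str) -> None:
    # The second argument is not expandable and `None` is not assignable to
    # `bool` in either remaining overload, so expanding `x` can never produce
    # a match and is skipped.
    f(x, None)  # error: [no-matching-overload]
```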
## Test Plan
Add mdtest that would otherwise take a long time because of the number
of arguments that it would need to expand (30).
## Summary
Closes: https://github.com/astral-sh/ty/issues/669
(This turned out to be simpler than I thought :))
## Test Plan
Update existing test cases.
### Ecosystem report
Most of them arise because ty now infers more precise return types for
overloaded calls, and a lot of the relevant types are defined using type
aliases. Here are some examples:
<details><summary>Details</summary>
<p>
> attrs (https://github.com/python-attrs/attrs)
> + tests/test_make.py:146:14: error[unresolved-attribute] Type
`Literal[42]` has no attribute `default`
> - Found 555 diagnostics
> + Found 556 diagnostics
This is accurate now that we infer the type as `Literal[42]` instead of
`Unknown` (Pyright infers it as `int`)
> optuna (https://github.com/optuna/optuna)
> + optuna/_gp/search_space.py:181:53: error[invalid-argument-type]
Argument to function `_round_one_normalized_param` is incorrect:
Expected `tuple[int | float, int | float]`, found `tuple[Unknown |
ndarray[Unknown, <class 'float'>], Unknown | ndarray[Unknown, <class
'float'>]]`
> + optuna/_gp/search_space.py:181:83: error[invalid-argument-type]
Argument to function `_round_one_normalized_param` is incorrect:
Expected `int | float`, found `Unknown | ndarray[Unknown, <class
'float'>]`
> + tests/gp_tests/test_search_space.py:109:13:
error[invalid-argument-type] Argument to function
`_unnormalize_one_param` is incorrect: Expected `tuple[int | float, int
| float]`, found `Unknown | ndarray[Unknown, <class 'float'>]`
> + tests/gp_tests/test_search_space.py:110:13:
error[invalid-argument-type] Argument to function
`_unnormalize_one_param` is incorrect: Expected `int | float`, found
`Unknown | ndarray[Unknown, <class 'float'>]`
> - Found 559 diagnostics
> + Found 563 diagnostics
Same as above where ty is now inferring a more precise type like
`Unknown | ndarray[tuple[int, int], <class 'float'>]` instead of just
`Unknown` as before
> jinja (https://github.com/pallets/jinja)
> + src/jinja2/bccache.py:298:39: error[invalid-argument-type] Argument
to bound method `write_bytecode` is incorrect: Expected `IO[bytes]`,
found `_TemporaryFileWrapper[str]`
> - Found 186 diagnostics
> + Found 187 diagnostics
This requires support for type aliases to match the correct overload.
> hydra-zen (https://github.com/mit-ll-responsible-ai/hydra-zen)
> + src/hydra_zen/wrapper/_implementations.py:945:16:
error[invalid-return-type] Return type does not match returned value:
expected `DataClass_ | type[@Todo(type[T] for protocols)] | ListConfig |
DictConfig`, found `@Todo(unsupported type[X] special form) | (((...) ->
Any) & dict[Unknown, Unknown]) | (DataClass_ & dict[Unknown, Unknown]) |
dict[Any, Any] | (ListConfig & dict[Unknown, Unknown]) | (DictConfig &
dict[Unknown, Unknown]) | (((...) -> Any) & list[Unknown]) | (DataClass_
& list[Unknown]) | list[Any] | (ListConfig & list[Unknown]) |
(DictConfig & list[Unknown])`
> + tests/annotations/behaviors.py:60:28: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/behaviors.py:64:21: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/declarations.py:167:17: error[call-non-callable]
Object of type `Path` is not callable
> + tests/annotations/declarations.py:524:17:
error[unresolved-attribute] Type `<class 'int'>` has no attribute
`_target_`
> - Found 561 diagnostics
> + Found 566 diagnostics
Same as above, this requires support for type aliases to match the
correct overload.
> paasta (https://github.com/yelp/paasta)
> + paasta_tools/utils.py:4188:19: warning[redundant-cast] Value is
already of type `list[str]`
> - Found 888 diagnostics
> + Found 889 diagnostics
This is correct.
> colour (https://github.com/colour-science/colour)
> + colour/plotting/diagrams.py:448:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/diagrams.py:462:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/models.py:419:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:230:9: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:474:13: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:495:17: error[invalid-argument-type]
Argument to bound method `__init__` is incorrect: Expected
`Sequence[@Todo(Support for `typing.TypeAlias`)]`, found
`ndarray[tuple[int, int, int], dtype[Unknown]]`
> + colour/plotting/temperature.py:513:13: error[invalid-argument-type]
Argument to bound method `text` is incorrect: Expected `int | float`,
found `ndarray[@Todo(Support for `typing.TypeAlias`), dtype[Unknown]]`
> + colour/plotting/temperature.py:514:13: error[invalid-argument-type]
Argument to bound method `text` is incorrect: Expected `int | float`,
found `ndarray[@Todo(Support for `typing.TypeAlias`), dtype[Unknown]]`
> - Found 480 diagnostics
> + Found 488 diagnostics
Most of them are correct except for the last two diagnostics, which I'm not
sure about: the code is indexing into an `np.ndarray` type (which is
inferred correctly), but I think it might be picking up an incorrect
overload for the `__getitem__` method.
Scipy's diagnostics also require support for type aliases to pick the
correct overload.
</p>
</details>
## Summary
- Add support for the return types of `async` functions
- Add type inference for `await` expressions
- Add support for `async with` / async context managers
- Add support for `yield from` expressions
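For example (a minimal sketch; revealed types are illustrative):
```py
class Resource:
    async def __aenter__(self) -> "Resource":
        return self
    async def __aexit__(self, *args: object) -> None: ...

async def get_number() -> int:
    return 42

async def main() -> None:
    reveal_type(await get_number())  # revealed: int
    async with Resource() as resource:
        reveal_type(resource)  # revealed: Resource
```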
This PR is generally lacking proper error handling in some cases (e.g.
illegal `__await__` attributes). I'm planning to work on this in a
follow-up.
part of https://github.com/astral-sh/ty/issues/151
closes https://github.com/astral-sh/ty/issues/736
## Ecosystem
There are a lot of true positives on `prefect` which look similar to:
```diff
prefect (https://github.com/PrefectHQ/prefect)
+ src/integrations/prefect-aws/tests/workers/test_ecs_worker.py:406:12: error[unresolved-attribute] Type `str` has no attribute `status_code`
```
This is due to a wrong return type annotation
[here](e926b8c4c1/src/integrations/prefect-aws/tests/workers/test_ecs_worker.py (L355-L391)).
```diff
mitmproxy (https://github.com/mitmproxy/mitmproxy)
+ test/mitmproxy/addons/test_clientplayback.py:18:1: error[invalid-argument-type] Argument to function `asynccontextmanager` is incorrect: Expected `(...) -> AsyncIterator[Unknown]`, found `def tcp_server(handle_conn, **server_args) -> Unknown | tuple[str, int]`
```
[This](a4d794c59a/test/mitmproxy/addons/test_clientplayback.py (L18-L19))
is a true positive. That function should return
`AsyncIterator[Address]`, not `Address`.
I looked through almost all of the other new diagnostics and they all
look like known problems or true positives.
## Typing conformance
The typing conformance diff looks good.
## Test Plan
New Markdown tests
This PR updates our iterator protocol machinery to return a tuple spec
describing the elements that are returned, instead of a type. That
allows us to track heterogeneous iterators more precisely, and
consolidates the logic in unpacking and splatting, which are the two
places where we can take advantage of that more precise information.
(Other iterator consumers, like `for` loops, have to collapse the
iterated elements down to a single type regardless, and we provide a new
helper method on `TupleSpec` to perform that summarization.)
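A sketch of the distinction (revealed types are illustrative):
```py
def f(t: tuple[int, str]) -> None:
    # Unpacking and splatting consume the tuple spec element by element, so
    # each target gets a precise type.
    x, y = t
    reveal_type(x)  # revealed: int
    reveal_type(y)  # revealed: str

    # A `for` loop still collapses the iterated elements to a single type.
    for element in t:
        reveal_type(element)  # revealed: int | str
```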
## Summary
Add more precise type inference for a limited set of `isinstance(…)`
calls, i.e. return `Literal[True]` if we can be sure that this is the
correct result. This improves exhaustiveness checking / reachability
analysis for if-elif-else chains with `isinstance` checks. For example:
```py
def is_number(x: int | str) -> bool:  # no "can implicitly return `None`" error here anymore
if isinstance(x, int):
return True
elif isinstance(x, str):
return False
# code here is now detected as being unreachable
```
This PR also adds a new test suite for exhaustiveness checking.
## Test Plan
New Markdown tests
### Ecosystem analysis
The removed diagnostics look good. There's [one
case](f52c4f1afd/torchvision/io/video_reader.py (L125-L143))
where a "true positive" is removed in unreachable code. `src` is
annotated as being of type `str`, but there is an `elif isinstance(src,
bytes)` branch, which we now detect as unreachable. And so the
diagnostic inside that branch is silenced. I don't think this is a
problem, especially once we have a "graying out" feature, or a lint that
warns about unreachable code.
This PR updates our call binding logic to handle splatted arguments.
Complicating matters is that we have separated call bind analysis into
two phases: parameter matching and type checking. Parameter matching
looks at the arity of the function signature and call site, and assigns
arguments to parameters. Importantly, we don't yet know the type of each
argument! This is needed so that we can decide whether to infer the type
of each argument as a type form or value form, depending on the
requirements of the parameter that the argument was matched to.
This is an issue when splatting an argument, since we need to know how
many elements the splatted argument contains to know how many positional
parameters to match it against. And to know how many elements the
splatted argument has, we need to know its type.
To get around this, we now make the assumption that splatted arguments
can only be used with value-form parameters. (If you end up splatting an
argument into a type-form parameter, we will silently pass in its
value-form type instead.) That allows us to preemptively infer the
(value-form) type of any splatted argument, so that we have its arity
available during parameter matching. We defer inference of non-splatted
arguments until after parameter matching has finished, as before.
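A sketch of why the arity is needed during parameter matching (illustrative):
```py
def f(x: int, y: str, z: bytes) -> None: ...

def g(pair: tuple[str, bytes]) -> None:
    # To know that `*pair` covers the parameters `y` and `z`, parameter
    # matching needs the arity of the splatted argument, so its (value-form)
    # type is inferred before matching rather than after.
    f(1, *pair)
```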
We reuse a lot of the new tuple machinery to make this happen — in
particular resizing the tuple spec representing the number of arguments
passed in with the tuple length representing the number of parameters
the splat was matched with.
This work also shows that we might need to change how we are performing
argument expansion during overload resolution. At the moment, when we
expand parameters, we assume that each argument will still be matched to
the same parameters as before, and only retry the type-checking phase.
With splatted arguments, this is no longer the case, since the inferred
arity of each union element might be different than the arity of the
union as a whole, which can affect how many parameters the splatted
argument is matched to. See the regression test case in
`mdtest/call/function.md` for more details.
## Summary
Implement expansion of enums into unions of enum literals (and the
reverse operation). For the enum below, this allows us to understand
that `Color = Literal[Color.RED, Color.GREEN, Color.BLUE]`, or that
`Color & ~Literal[Color.RED] = Literal[Color.GREEN, Color.BLUE]`. This
helps in exhaustiveness checking, which is why we see some removed
`assert_never` false positives. And since exhaustiveness checking also
helps with understanding terminal control flow, we also see a few
removed `invalid-return-type` and `possibly-unresolved-reference` false
positives. This PR also adds expansion of enums in overload resolution
and type narrowing constructs.
```py
from enum import Enum
from typing_extensions import Literal, assert_never
from ty_extensions import Intersection, Not, static_assert, is_equivalent_to
class Color(Enum):
RED = 1
GREEN = 2
BLUE = 3
type Red = Literal[Color.RED]
type Green = Literal[Color.GREEN]
type Blue = Literal[Color.BLUE]
static_assert(is_equivalent_to(Red | Green | Blue, Color))
static_assert(is_equivalent_to(Intersection[Color, Not[Red]], Green | Blue))
def color_name(color: Color) -> str: # no error here (we detect that this can not implicitly return None)
if color is Color.RED:
return "Red"
elif color is Color.GREEN:
return "Green"
elif color is Color.BLUE:
return "Blue"
else:
assert_never(color) # no error here
```
## Performance
I avoided an initial regression here for large enums, but the
`UnionBuilder` and `IntersectionBuilder` parts can certainly still be
optimized. We might want to use the same technique that we also use for
unions of other literals. I didn't see any problems in our benchmarks so
far, so this is not included yet.
## Test Plan
Many new Markdown tests
## Summary
Add a new `Type::EnumLiteral(…)` variant and infer this type for member
accesses on enums.
**Example**: No more `@Todo` types here:
```py
from enum import Enum
class Answer(Enum):
YES = 1
NO = 2
def is_yes(self) -> bool:
return self == Answer.YES
reveal_type(Answer.YES) # revealed: Literal[Answer.YES]
reveal_type(Answer.YES == Answer.NO) # revealed: Literal[False]
reveal_type(Answer.YES.is_yes()) # revealed: bool
```
## Test Plan
* Many new Markdown tests for the new type variant
* Added enum literal types to property tests, ran property tests
## Ecosystem analysis
Summary: Lots of false positives removed. All of the new diagnostics are
either new true positives (the majority) or known problems.
Details:
```diff
AutoSplit (https://github.com/Toufool/AutoSplit)
+ error[call-non-callable] src/capture_method/__init__.py:137:9: Method `__getitem__` of type `bound method CaptureMethodDict.__getitem__(key: Never, /) -> type[CaptureMethodBase]` is not callable on object of type `CaptureMethodDict`
+ error[call-non-callable] src/capture_method/__init__.py:147:9: Method `__getitem__` of type `bound method CaptureMethodDict.__getitem__(key: Never, /) -> type[CaptureMethodBase]` is not callable on object of type `CaptureMethodDict`
+ error[call-non-callable] src/capture_method/__init__.py:148:1: Method `__getitem__` of type `bound method CaptureMethodDict.__getitem__(key: Never, /) -> type[CaptureMethodBase]` is not callable on object of type `CaptureMethodDict`
```
New true positives. That `__getitem__` method is apparently annotated
with `Never` to prevent developers from using it.
```diff
dd-trace-py (https://github.com/DataDog/dd-trace-py)
+ error[invalid-assignment] ddtrace/vendor/psutil/_common.py:29:5: Object of type `None` is not assignable to `Literal[AddressFamily.AF_INET6]`
+ error[invalid-assignment] ddtrace/vendor/psutil/_common.py:33:5: Object of type `None` is not assignable to `Literal[AddressFamily.AF_UNIX]`
```
Arguably true positives:
e0a772c28b/ddtrace/vendor/psutil/_common.py (L29)
```diff
ignite (https://github.com/pytorch/ignite)
+ error[invalid-argument-type] tests/ignite/engine/test_custom_events.py:190:34: Argument to bound method `__call__` is incorrect: Expected `((...) -> Unknown) | None`, found `Literal["123"]`
+ error[invalid-argument-type] tests/ignite/engine/test_custom_events.py:220:37: Argument to function `default_event_filter` is incorrect: Expected `Engine`, found `None`
+ error[invalid-argument-type] tests/ignite/engine/test_custom_events.py:220:43: Argument to function `default_event_filter` is incorrect: Expected `int`, found `None`
+ error[call-non-callable] tests/ignite/engine/test_custom_events.py:561:9: Object of type `CustomEvents` is not callable
+ error[invalid-argument-type] tests/ignite/metrics/test_frequency.py:50:38: Argument to bound method `attach` is incorrect: Expected `Events`, found `CallableEventWithFilter`
```
All true positives. Some of them are inside `pytest.raises(TypeError,
…)` blocks 🙃
```diff
meson (https://github.com/mesonbuild/meson)
+ error[invalid-argument-type] unittests/internaltests.py:243:51: Argument to bound method `__init__` is incorrect: Expected `bool`, found `Literal[MachineChoice.HOST]`
+ error[invalid-argument-type] unittests/internaltests.py:271:51: Argument to bound method `__init__` is incorrect: Expected `bool`, found `Literal[MachineChoice.HOST]`
```
New true positives. Enum literals can not be assigned to `bool`, even if
their value types are `0` and `1`.
```diff
poetry (https://github.com/python-poetry/poetry)
+ error[invalid-assignment] src/poetry/console/exceptions.py:101:5: Object of type `Literal[""]` is not assignable to `InitVar[str]`
```
New false positive, missing support for `InitVar`.
```diff
prefect (https://github.com/PrefectHQ/prefect)
+ error[invalid-argument-type] src/integrations/prefect-dask/tests/test_task_runners.py:193:17: Argument is incorrect: Expected `StateType`, found `Literal[StateType.COMPLETED]`
```
This is confusing. There are two definitions
([one](74d8cd93ee/src/prefect/client/schemas/objects.py (L89-L100)),
[two](https://github.com/PrefectHQ/prefect/blob/main/src/prefect/server/schemas/states.py#L40))
of the `StateType` enum. Here, we're trying to assign one to the other.
I don't think that should be allowed, so this is a true positive (?).
```diff
python-htmlgen (https://github.com/srittau/python-htmlgen)
+ error[invalid-assignment] test_htmlgen/form.py:51:9: Object of type `str` is not assignable to attribute `autocomplete` of type `Autocomplete | None`
+ error[invalid-assignment] test_htmlgen/video.py:38:9: Object of type `str` is not assignable to attribute `preload` of type `Preload | None`
```
True positives. [The stubs are
wrong](01e3b911ac/htmlgen/form.pyi (L8-L10)).
These should not contain type annotations, but rather just `OFF = ...`.
```diff
rotki (https://github.com/rotki/rotki)
+ error[invalid-argument-type] rotkehlchen/tests/unit/test_serialization.py:62:30: Argument to bound method `deserialize` is incorrect: Expected `str`, found `Literal[15]`
```
New true positive.
```diff
vision (https://github.com/pytorch/vision)
+ error[unresolved-attribute] test/test_extended_models.py:302:17: Type `type[WeightsEnum]` has no attribute `DEFAULT`
+ error[unresolved-attribute] test/test_extended_models.py:302:58: Type `type[WeightsEnum]` has no attribute `DEFAULT`
```
Also new true positives. No `DEFAULT` member exists on `WeightsEnum`.
## Summary
Allow declared-only class-level attributes to be accessed on the class:
```py
class C:
attr: int
C.attr # this is now allowed
```
closes https://github.com/astral-sh/ty/issues/384
closes https://github.com/astral-sh/ty/issues/553
## Ecosystem analysis
* We see many removed `unresolved-attribute` false-positives for code
that makes use of sqlalchemy, as expected (see changes for `prefect`)
* We see many removed `call-non-callable` false-positives for uses of
`pytest.skip` and similar, as expected
* Most new diagnostics seem to be related to cases like the following,
where we previously inferred `int` for `Derived().x`, but now we infer
`int | None`. I think this should be a
conflicting-declarations/bad-override error anyway? The new behavior may
even be preferred here?
```py
class Base:
x: int | None
class Derived(Base):
def __init__(self):
self.x: int = 1
```
## Summary
Having a recursive type method to check whether a type is fully static
is inefficient, unnecessary, and makes us overly strict about subtyping
relations.
It's inefficient because we end up re-walking the same types many times
to check for fully-static-ness.
It's unnecessary because we can check relations involving the dynamic type
appropriately, depending on whether the relation is subtyping or
assignability.
We use the subtyping relation to simplify unions and intersections. We
can usefully consider that `S <: T` for gradual types also, as long as
it remains true that `S | T` is equivalent to `T` and `S & T` is
equivalent to `S`.
One conservative definition (implemented here) that satisfies this
requirement is that we consider `S <: T` if, for every possible pair of
materializations `S'` and `T'`, `S' <: T'`. Or, put differently, the top
materialization of `S` (`S+` -- the union of all possible
materializations of `S`) is a subtype of the bottom materialization of
`T` (`T-` -- the intersection of all possible materializations of `T`).
In the most basic cases we can usefully say that `Any <: object` and
that `Never <: Any`, and we can handle more complex cases inductively
from there.
This definition of subtyping for gradual subtypes is not reflexive
(`Any` is not a subtype of `Any`).
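A sketch of the basic cases in mdtest style, using the `ty_extensions`
introspection helpers (illustrative):
```py
from typing import Any
from typing_extensions import Never
from ty_extensions import is_assignable_to, is_subtype_of, static_assert

# `Any+` is `object`, which is a subtype of `object`.
static_assert(is_subtype_of(Any, object))
# `Never` is a subtype of `Any-` (which is `Never`).
static_assert(is_subtype_of(Never, Any))
# Not reflexive: `Any+` (`object`) is not a subtype of `Any-` (`Never`) ...
static_assert(not is_subtype_of(Any, Any))
# ... but assignability is unaffected.
static_assert(is_assignable_to(Any, Any))
```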
As a corollary, we also remove `is_gradual_equivalent_to` --
`is_equivalent_to` now has the meaning that `is_gradual_equivalent_to`
used to have. If necessary, we could restore an
`is_fully_static_equivalent_to` or similar (which would not do an
`is_fully_static` pre-check of the types, but would instead pass a
relation-kind enum down through a recursive equivalence check, similar
to `has_relation_to`), but so far this doesn't appear to be necessary.
Credit to @JelleZijlstra for the observation that `is_fully_static` is
unnecessary and overly restrictive on subtyping.
There is another possible definition of gradual subtyping: instead of
requiring that `S+ <: T-`, we could instead require that `S+ <: T+` and
`S- <: T-`. In other words, instead of requiring all materializations of
`S` to be a subtype of every materialization of `T`, we just require
that every materialization of `S` be a subtype of _some_ materialization
of `T`, and that every materialization of `T` be a supertype of some
materialization of `S`. This definition also preserves the core
invariant that `S <: T` implies that `S | T = T` and `S & T = S`, and it
restores reflexivity: under this definition, `Any` is a subtype of
`Any`, and for any equivalent types `S` and `T`, `S <: T` and `T <: S`.
But unfortunately, this definition breaks transitivity of subtyping,
because nominal subclasses in Python use assignability ("consistent
subtyping") to define acceptable overrides. This means that we may have
a class `A` with `def method(self) -> Any` and a subtype `B(A)` with
`def method(self) -> int`, since `int` is assignable to `Any`. This
means that if we have a protocol `P` with `def method(self) -> Any`, we
would have `B <: A` (from nominal subtyping) and `A <: P` (`Any` is a
subtype of `Any`), but not `B <: P` (`int` is not a subtype of `Any`).
Breaking transitivity of subtyping is not tenable, so we don't use this
definition of subtyping.
## Test Plan
Existing tests (modified in some cases to account for updated
semantics.)
Stable property tests pass at a million iterations:
`QUICKCHECK_TESTS=1000000 cargo test -p ty_python_semantic -- --ignored
types::property_tests::stable`
### Changes to property test type generation
Since we no longer have a method of categorizing built types as
fully-static or not-fully-static, I had to add a previously-discussed
feature to the property tests so that some tests can build types that
are known by construction to be fully static, because there are still
properties that only apply to fully-static types (for example,
reflexiveness of subtyping.)
## Changes to handling of `*args, **kwargs` signatures
This PR "discovered" that, once we allow non-fully-static types to
participate in subtyping under the above definitions, `(*args: Any,
**kwargs: Any) -> Any` is now a subtype of `() -> object`. This is true,
if we take a literal interpretation of the former signature: all
materializations of the parameters `*args: Any, **kwargs: Any` can
accept zero arguments, making the former signature a subtype of the
latter. But the spec actually says that `*args: Any, **kwargs: Any`
should be interpreted as equivalent to `...`, and that makes a
difference here: `(...) -> Any` is not a subtype of `() -> object`,
because (unlike a literal reading of `(*args: Any, **kwargs: Any)`),
`...` can materialize to _any_ signature, including a signature with
required positional arguments.
This matters for this PR because it makes the "any two types are both
assignable to their union" property test fail if we don't implement the
equivalence to `...`: `FunctionType.__call__` has the signature
`(*args: Any, **kwargs: Any) -> Any`, and if we take that at face value it's
a subtype of `() -> object`, making `FunctionType` a subtype of
`() -> object` -- but then a function with a required argument is also a
subtype of `FunctionType`, yet not a subtype of `() -> object`. So I went
ahead and implemented the equivalence to `...` in this PR.
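A sketch of the distinction using `Callable` forms (illustrative):
```py
from typing import Any, Callable
from ty_extensions import is_subtype_of, static_assert

# Read literally, `(*args: Any, **kwargs: Any) -> Any` can always accept zero
# arguments, which would make it a subtype of `() -> object`. Interpreted as
# `(...) -> Any`, it can materialize to a signature with required parameters,
# so it is not a subtype:
static_assert(not is_subtype_of(Callable[..., Any], Callable[[], object]))
```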
## Ecosystem analysis
* Most of the ecosystem report are cases of improved union/intersection
simplification. For example, we can now simplify a union like `bool |
(bool & Unknown) | Unknown` to simply `bool | Unknown`, because we can
now observe that every possible materialization of `bool & Unknown` is
still a subtype of `bool` (whereas before we would set aside `bool &
Unknown` as a not-fully-static type.) This is clearly an improvement.
* The `possibly-unresolved-reference` errors in sockeye, pymongo,
ignite, scrapy and others are true positives for conditional imports
that were formerly silenced by bogus conflicting-declarations (which we
currently don't issue a diagnostic for), because we considered two
different declarations of `Unknown` to be conflicting (we used
`is_equivalent_to` not `is_gradual_equivalent_to`). In this PR that
distinction disappears and all equivalence is gradual, so a declaration
of `Unknown` no longer conflicts with a declaration of `Unknown`, which
then results in us surfacing the possibly-unbound error.
* We will now issue "redundant cast" for casting from a typevar with a
gradual bound to the same typevar (the hydra-zen diagnostic). This seems
like an improvement.
* The new diagnostics in bandersnatch are interesting. For some reason
primer in CI seems to be checking bandersnatch on Python 3.10 (not yet
sure why; this doesn't happen when I run it locally). But bandersnatch
uses `enum.StrEnum`, which doesn't exist on 3.10. That makes the `class
SimpleDigest(StrEnum)` a class that inherits from `Unknown` (and
bypasses our current TODO handling for accessing attributes on enum
classes, since we don't recognize it as an enum class at all). This PR
improves our understanding of assignability to classes that inherit from
`Any` / `Unknown`, and we now recognize that a string literal is not
assignable to a class inheriting `Any` or `Unknown`.
## Summary
Note this modifies the diagnostics a bit. Previously performing
subscript access on something like `NotSubscriptable1 |
NotSubscriptable2` would report the full type as not being
subscriptable:
```
[non-subscriptable] "Cannot subscript object of type `NotSubscriptable1 | NotSubscriptable2` with no `__getitem__` method"
```
Now each erroneous constituent has a separate error:
```
[non-subscriptable] "Cannot subscript object of type `NotSubscriptable2` with no `__getitem__` method"
[non-subscriptable] "Cannot subscript object of type `NotSubscriptable1` with no `__getitem__` method"
```
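For example (illustrative):
```py
class NotSubscriptable1: ...
class NotSubscriptable2: ...

def f(x: NotSubscriptable1 | NotSubscriptable2) -> None:
    # Each non-subscriptable constituent of the union now gets its own
    # diagnostic, rather than one diagnostic for the whole union type.
    x[0]  # error: [non-subscriptable]
```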
Closes https://github.com/astral-sh/ty/issues/625
## Test Plan
mdtest
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
Add support for `@staticmethod`s. Overall, the changes are very similar
to #16305.
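A small example of what's now understood (revealed types are illustrative):
```py
class C:
    @staticmethod
    def add(x: int, y: int) -> int:
        return x + y

reveal_type(C.add(1, 2))  # revealed: int
reveal_type(C().add(1, 2))  # revealed: int
```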
#18587 will be dependent on this PR for a potential fix of
https://github.com/astral-sh/ty/issues/207.
mypy_primer will look bad since the new code allows ty to check more
code.
## Test Plan
Added new Markdown tests. Please comment if there are any missing tests
that I should add; thank you.
## Summary
This PR improves the way diagnostics are reported for an invalid call to an
overloaded function.
If any step of the overload call evaluation algorithm yields a matching
overload but type checking of the arguments against it fails, a
`no-matching-overload` diagnostic is misleading: there is a matching
overload; it's the arguments passed that are invalid per its signature. So
this PR surfaces the diagnostics on the matching overload directly.
It also provides additional context, specifically the matching overload
where this error occurred and other non-matching overloads. Consider the
following example:
```py
from typing import overload
@overload
def f() -> None: ...
@overload
def f(x: int) -> int: ...
@overload
def f(x: int, y: int) -> int: ...
def f(x: int | None = None, y: int | None = None) -> int | None:
return None
f("a")
```
We get:
<img width="857" alt="Screenshot 2025-06-18 at 11 07 10"
src="https://github.com/user-attachments/assets/8dbcaf13-2a74-4661-aa94-1225c9402ea6"
/>
## Test Plan
Update test cases, resolve existing todos and validate the updated
snapshots.
## Summary
Closes: astral-sh/ty#552
This PR adds support for step 5 of the overload call evaluation
algorithm which specifies:
> For all arguments, determine whether all possible materializations of
the argument’s type are
> assignable to the corresponding parameter type for each of the
remaining overloads. If so,
> eliminate all of the subsequent remaining overloads.
The algorithm works in two parts:
1. Find the participating parameter indexes. These are the parameters whose
types aren't gradually equivalent to one or more parameter types at the
same index in other overloads.
2. Loop over each overload and check whether it would be the _final_
overload for the argument types, i.e., the remaining overloads will never be
matched against these argument types.
For step 1, the participating parameter indexes are computed by comparing
whether all the parameter types at the corresponding index across the
overloads are **gradually equivalent**.
Step 2 of the algorithm is described in [this
comment](https://github.com/astral-sh/ty/issues/552#issuecomment-2969165421).
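A sketch of the effect (the revealed type is illustrative):
```py
from typing import Any, overload

@overload
def f(x: object) -> int: ...
@overload
def f(x: str) -> str: ...
def f(x: object) -> int | str:
    return 0

def g(x: Any) -> None:
    # Every materialization of `Any` is assignable to `object`, so the first
    # overload eliminates the remaining ones and the call is not ambiguous.
    reveal_type(f(x))  # revealed: int
```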
## Test Plan
Update the overload call tests.
## Summary
Part of astral-sh/ty#104, closes: astral-sh/ty#468
This PR implements argument type expansion, which is step 3 of the overload
call evaluation algorithm.
Specifically, this step is taken if type checking resolves to no matching
overload and there are argument types that can be expanded.
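A sketch of the kind of call this helps with (the revealed type is
illustrative):
```py
from typing import overload

@overload
def f(x: int) -> int: ...
@overload
def f(x: str) -> str: ...
def f(x: int | str) -> int | str:
    return x

def g(x: int | str) -> None:
    # Neither overload accepts `int | str` directly; expanding the argument
    # type into `int` and `str` lets each expanded argument list match an
    # overload, and the results are unioned.
    reveal_type(f(x))  # revealed: int | str
```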
## Test Plan
Add new test cases.
## Ecosystem analysis
This PR removes 174 `no-matching-overload` false positives -- I looked
at a lot of them and they all are false positives.
One thing I'm not able to understand: in
2b7e3adf27/sphinx/ext/autodoc/preserve_defaults.py (L179)
both ty and Pyright infer the type of `value` as `str | None`, which is
correct, but only ty raises an `invalid-argument-type` error while Pyright
doesn't. The constructor of `DefaultValue` has a declared parameter type of
`str`, so the call is invalid.
There are a few false positives resulting from the fact that ty doesn't
implement narrowing on attribute expressions.
## Summary
Fix some issues with subtyping/assignability for instances vs callables.
We need to look up dunders on the class, not the instance, and we should
limit our logic here to delegating to the type of `__call__`, so it
doesn't get out of sync with the calls we allow.
Also, we were just entirely missing assignability handling for
`__call__` implemented as anything other than a normal bound method
(though we had it for subtyping.)
A first step towards considering what else we want to change in
https://github.com/astral-sh/ty/issues/491
## Test Plan
mdtests
---------
Co-authored-by: med <medioqrity@gmail.com>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Resolves [#290](https://github.com/astral-sh/ty/issues/290).
All arguments, synthesized or not, are now accounted for in
`too-many-positional-arguments`'s error message.
Consider this example:
```python
class C:
def foo(self): ...
C().foo(1) # !!!
```
Previously, ty would say:
> Too many positional arguments to bound method foo: expected 0, got 1
After this change, it will say:
> Too many positional arguments to bound method foo: expected 1, got 2
This is what Python itself does too:
```text
Traceback (most recent call last):
File "<python-input-0>", line 3, in <module>
C().foo()
~~~~~~~^^
TypeError: C.foo() takes 0 positional arguments but 1 was given
```
## Test Plan
Markdown tests.
This makes one very simple change: we report all call binding
errors from each union variant.
This does result in duplicate-seeming diagnostics, for example when two
union variants are invalid for the same reason.
## Summary
Fixes: astral-sh/ty#159
This PR adds support for using `Self` in methods.
When the type of an annotation is `TypingSelf`, it is converted to a type
variable based on:
https://typing.python.org/en/latest/spec/generics.html#self
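A small example in the style of the spec's tests (revealed types are
illustrative):
```py
from typing_extensions import Self

class Shape:
    def set_scale(self, scale: float) -> Self:
        self.scale = scale
        return self

class Circle(Shape):
    def set_radius(self, radius: float) -> Self:
        self.radius = radius
        return self

reveal_type(Circle().set_scale(0.5))  # revealed: Circle
reveal_type(Circle().set_scale(0.5).set_radius(2.7))  # revealed: Circle
```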
I skipped protocols for now because they had more problems and the tests
were not useful.
I also need to create a follow-up PR that implicitly assumes the `self`
parameter has type `Self`.
To infer the type in the `in_type_expression` method, I needed the scope ID
and the semantic index to be available. I used the idea from
[this PR](https://github.com/astral-sh/ruff/pull/17589/files) to pass
additional context to this method.
I also think this context is needed everywhere `in_type_expression` is
called, because `Self` can appear there, so I didn't split the method into
one version with context and one without.
## Test Plan
Added new tests from spec.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Carl Meyer <carl@astral.sh>