## Summary
We currently perform a subtyping check instead of the intended subclass
check (and the subtyping check is confusingly named `is_subclass_of`).
This showed up in https://github.com/astral-sh/ruff/pull/21070.
## Summary
Before this PR, we would emit diagnostics like "Invalid key access" for
a TypedDict literal with an invalid key, which doesn't make sense since
there's no "access" in that case. This PR just adjusts the wording to be
more general, and adjusts the documentation of the lint rule too.
I noticed this in the playground and thought it would be a quick fix. As
usual, it turned out to be a bit more subtle than I expected, but for
now I chose to punt on the complexity. We may ultimately want to have
different rules for invalid subscript vs invalid TypedDict literal,
because an invalid key in a TypedDict literal is low severity: it's a
typo detector, but not actually a type error. But then there's another
wrinkle there: if the TypedDict is `closed=True`, then it _is_ a type
error. So would we want to separate the open and closed cases into
separate rules, too? I decided to leave this as a question for the future.
If we wanted to use separate rules, or use specific wording for each
case instead of the generalized wording I chose here, that would also
involve a bit of extra work to distinguish the cases, since we use a
generic set of functions for reporting these errors.
## Test Plan
Added and updated mdtests.
This is a second take at the implicit imports approach, allowing `from .
import submodule` in an `__init__.pyi` to create the
`mypackage.submodule` attribute everywhere.
This implementation operates inside the `available_submodule_attributes` subsystem instead of as a re-export rule.
The upside of this is that we are no longer purely syntactic: absolute `from` imports that happen to target submodules also work (an intentional, discussed deviation from pyright, which demands a relative `from` import), and we don't re-export functions or classes.
The downside(?) is that star imports no longer see these attributes (which may be good or bad). I believe it wouldn't be a huge lift to make this work with star imports, but it would require some non-trivial reworking.
I've also intentionally made `import mypackage.submodule` not trigger
this rule although it's trivial to change that.
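To make the mechanism concrete, here is a minimal sketch of the shape this enables (`mypackage`/`submodule` are of course placeholder names):
```py
# mypackage/__init__.pyi
from . import submodule  # makes `submodule` available as an attribute of `mypackage`

# elsewhere:
import mypackage

mypackage.submodule  # resolves, without an explicit `import mypackage.submodule`
```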
I've tried to cover as many relevant cases as possible for discussion in
the new test file I've added (there are some random overlaps with
existing tests but trying to add them piecemeal felt confusing and
weird, so I just made a dedicated file for this extension to the rules).
Fixes https://github.com/astral-sh/ty/issues/133
## Summary
Fixes https://github.com/astral-sh/ty/issues/1427
This PR fixes a regression introduced in alpha.24 where non-dataclass
children of generic dataclasses lost generic type parameter information
during `__init__` synthesis.
The issue occurred because when looking up inherited members in the MRO,
the child class's `inherited_generic_context` was correctly passed down,
but `own_synthesized_member()` (which synthesizes dataclass `__init__`
methods) didn't accept this parameter. It only used
`self.inherited_generic_context(db)`, which returned the parent's
context instead of the child's.
The fix threads the child's generic context through to the synthesis
logic, allowing proper generic type inference for inherited dataclass
constructors.
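A minimal sketch of the affected pattern (class names are illustrative; the exact repro is in the linked issue):
```py
from dataclasses import dataclass

@dataclass
class Pair[T]:
    first: T
    second: T

class IntPair(Pair[int]):  # a non-dataclass child of a generic dataclass
    pass

# The `__init__` synthesized for `Pair` should again see `T = int` here:
p = IntPair(1, 2)
reveal_type(p.first)  # int
```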
## Test Plan
- Added regression test for non-dataclass inheriting from generic
dataclass
- Verified the exact repro case from the issue now works
- All 277 mdtest tests passing
- Clippy clean
- Manually verified with Python runtime, mypy, and pyright - all accept
this code pattern
## Verification
Tested against multiple type checkers:
- ✅ Python runtime: Code works correctly
- ✅ mypy: No issues found
- ✅ pyright: 0 errors, 0 warnings
- ✅ ty alpha.23: Worked (before regression)
- ❌ ty alpha.24: Regression
- ✅ ty with this fix: Works correctly
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: David Peter <mail@david-peter.de>
It's possible for a constraint to mention two typevars. For instance, in
the body of
```py
def f[S: int, T: S](): ...
```
the baseline constraint set would be `(T ≤ S) ∧ (S ≤ int)`. That is, `S`
must specialize to some subtype of `int`, and `T` must specialize to a
subtype of the type that `S` specializes to.
This PR updates the new "constraint implication" relationship from
#21010 to work on these kinds of constraint sets. For instance, in the
example above, we should be able to see that `T ≤ int` must always hold:
```py
def f[S, T]():
    constraints = ConstraintSet.range(Never, S, int) & ConstraintSet.range(Never, T, S)
    static_assert(constraints.implies_subtype_of(T, int))  # now succeeds!
```
This did not require major changes to the implementation of
`implies_subtype_of`. That method already relies on how our `simplify`
and `domain` methods expand a constraint set to include the transitive
closure of the constraints that it mentions, and to mark certain
combinations of constraints as impossible. Previously, that transitive
closure logic only looked at pairs of constraints that constrain the
same typevar. (For instance, to notice that `(T ≤ bool) ∧ ¬(T ≤ int)` is
impossible.)
Now we also look at pairs of constraints that constrain different
typevars, if one of the constraints is bound by the other — that is,
pairs of the form `T ≤ S` and `S ≤ something`, or `S ≤ T` and `something
≤ S`. In those cases, transitivity lets us add a new derived constraint
that `T ≤ something` or `something ≤ T`, respectively. Having done that,
our existing `implies_subtype_of` logic finds and takes into account
that derived constraint.
## Summary
We weren't correctly modeling it as a `staticmethod` in all cases,
leading us to incorrectly infer that the `cls` argument would be bound
if it was accessed on an instance (rather than the class object).
## Test Plan
Added mdtests that fail on `main`. The primer output also looks good!
## Summary
Adds proper type narrowing and reachability analysis for matching on
non-inferable type variables bound to enums. For example:
```py
from enum import Enum

class Answer(Enum):
    NO = 0
    YES = 1

    def is_yes(self) -> bool:  # no error here!
        match self:
            case Answer.YES:
                return True
            case Answer.NO:
                return False
```
closes https://github.com/astral-sh/ty/issues/1404
## Test Plan
Added regression tests
## Summary
We previously didn't understand `range` and wrote these custom
`IntIterable`/`IntIterator` classes for tests. We can now remove them
and make the tests shorter in some places.
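For example, iteration over `range` is now understood directly, so tests can be written roughly like this instead of going through a hand-rolled iterator class:
```py
for i in range(3):
    reveal_type(i)  # int
```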
## Summary
Infer a type for unannotated `self` parameters in decorated methods / properties.
closes https://github.com/astral-sh/ty/issues/1448
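A rough sketch of the kind of code this improves (the exact revealed types may differ):
```py
class Config:
    debug: bool = False

    @property
    def verbose(self):
        # `self` is unannotated and the method is decorated, but it now gets an
        # inferred type, so `self.debug` resolves.
        return self.debug

reveal_type(Config().verbose)  # bool
```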
## Test Plan
Existing tests, some new tests.
This PR updates the mdtests that test how our generics solver interacts
with our new constraint set implementation. Because the rendering of a
constraint set can get long, this standardizes on putting the `revealed`
assertion on a separate line. We also add a `static_assert` test for
each constraint set to verify that they are all coerced into simple
`bool`s correctly.
This is a pure reformatting (not even a refactoring!) that changes no
behavior. I've pulled it out of #20093 to reduce the amount of effort
that will be required to review that PR.
We have several functions in `ty_extensions` for testing our constraint
set implementation. This PR refactors those functions so that they are
all methods of the `ConstraintSet` class, rather than being standalone
top-level functions. 🎩 to @sharkdp for pointing out that
`KnownBoundMethod` gives us what we need to implement that!
This PR adds the new **_constraint implication_** relationship between
types, aka `is_subtype_of_given`, which tests whether one type is a
subtype of another _assuming that the constraints in a particular
constraint set hold_.
For concrete types, constraint implication is exactly the same as
subtyping. (A concrete type is any fully static type that is not a
typevar. It can _contain_ a typevar, though — `list[T]` is considered
concrete.)
The interesting case is typevars. The other typing relationships (TODO:
will) all "punt" on the question when considering a typevar, by
translating the desired relationship into a constraint set. At some
point, though, we need to resolve a constraint set; at that point, we
can no longer punt on the question. Unlike with concrete types, the
answer will depend on the constraint set that we are considering.
That PR title might be a bit inscrutable.
Consider the two constraints `T ≤ bool` and `T ≤ int`. Since `bool ≤
int`, by transitivity `T ≤ bool` implies `T ≤ int`. (Every type that is
a subtype of `bool` is necessarily also a subtype of `int`.) That means
that `T ≤ bool ∧ T ≰ int` is an impossible combination of constraints,
and is therefore not a valid input to any BDD. We say that that
assignment is not in the _domain_ of the BDD.
The implication `T ≤ bool → T ≤ int` can be rewritten as `T ≰ bool ∨ T ≤
int`. (That's the definition of implication.) If we construct that
constraint set in an mdtest, we should get a constraint set that is
always satisfiable. Previously, that constraint set would correctly
_display_ as `always`, but a `static_assert` on it would fail.
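As a sketch of that mdtest scenario (I'm assuming `~` and `|` work on constraint sets the way `&` does in the existing tests; the exact spelling may differ):
```py
from typing_extensions import Never
from ty_extensions import ConstraintSet, static_assert

def f[T]():
    t_le_bool = ConstraintSet.range(Never, T, bool)
    t_le_int = ConstraintSet.range(Never, T, int)
    # `T ≰ bool ∨ T ≤ int`, i.e. the implication `T ≤ bool → T ≤ int`
    implication = ~t_le_bool | t_le_int
    static_assert(implication)  # displayed as `always`, and now also asserts as true
```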
The underlying cause is that our `is_always_satisfied` method would only
test if the BDD was the `AlwaysTrue` terminal node. `T ≰ bool ∨ T ≤ int`
does not simplify that far, because we purposefully keep around those
constraints in the BDD structure so that it's easier to compare against
other BDDs that reference those constraints.
To fix this, we need a more nuanced definition of "always satisfied".
Instead of evaluating to `true` for _every_ input, we only need it to
evaluate to `true` for every _valid_ input — that is, every input in its
domain.
## Summary
Infer a type of `Self` for unannotated `self` parameters in methods of
classes.
part of https://github.com/astral-sh/ty/issues/159
closes https://github.com/astral-sh/ty/issues/1081
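A small sketch of what this changes (the precise rendering of `Self` may differ):
```py
class Shelf:
    def identity(self):
        reveal_type(self)  # roughly `Self@identity`
        return self

reveal_type(Shelf().identity())  # Shelf
```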
## Conformance tests changes
```diff
+enums_member_values.py:85:9: error[invalid-assignment] Object of type `int` is not assignable to attribute `_value_` of type `str`
```
A true positive ✔️
```diff
-generics_self_advanced.py:35:9: error[type-assertion-failure] Argument does not have asserted type `Self@method2`
-generics_self_basic.py:14:9: error[type-assertion-failure] Argument does not have asserted type `Self@set_scale`
```
Two false positives going away ✔️
```diff
+generics_syntax_infer_variance.py:82:9: error[invalid-assignment] Cannot assign to final attribute `x` on type `Self@__init__`
```
This looks like a true positive to me, even if it's not marked with `#
E` ✔️
```diff
+protocols_explicit.py:56:9: error[invalid-assignment] Object of type `tuple[int, int, str]` is not assignable to attribute `rgb` of type `tuple[int, int, int]`
```
True positive ✔️
```diff
+protocols_explicit.py:85:9: error[invalid-attribute-access] Cannot assign to ClassVar `cm1` from an instance of type `Self@__init__`
```
This looks like a true positive to me, even if it's not marked with `#
E`. But this is consistent with our understanding of `ClassVar`, I
think. ✔️
```diff
+qualifiers_final_annotation.py:52:9: error[invalid-assignment] Cannot assign to final attribute `ID4` on type `Self@__init__`
+qualifiers_final_annotation.py:65:9: error[invalid-assignment] Cannot assign to final attribute `ID7` on type `Self@method1`
```
New true positives ✔️
```diff
+qualifiers_final_annotation.py:52:9: error[invalid-assignment] Cannot assign to final attribute `ID4` on type `Self@__init__`
+qualifiers_final_annotation.py:57:13: error[invalid-assignment] Cannot assign to final attribute `ID6` on type `Self@__init__`
+qualifiers_final_annotation.py:59:13: error[invalid-assignment] Cannot assign to final attribute `ID6` on type `Self@__init__`
```
This is a new false positive, but that's a pre-existing issue on main
(if you annotate with `Self`):
https://play.ty.dev/3ee1c56d-7e13-43bb-811a-7a81e236e6ab ❌ => reported
as https://github.com/astral-sh/ty/issues/1409
## Ecosystem
* There are 5931 new `unresolved-attribute` and 3292 new `possibly-missing-attribute` errors, way too many to look at all of them. I randomly sampled 15 of these errors and found:
* 13 instances where there was simply no such attribute that we could
plausibly see. Sometimes [I didn't find it
anywhere](8644d886c6/openlibrary/plugins/openlibrary/tests/test_listapi.py (L33)).
Sometimes it was set externally on the object. Sometimes there was some
[`setattr` dynamicness going
on](a49f6b927d/setuptools/wheel.py (L88-L94)).
I would consider all of them to be true positives.
* 1 instance where [attribute was set on `obj` in
`__new__`](9e87b44fd4/sympy/tensor/array/array_comprehension.py (L45C1-L45C36)),
which we don't support yet
* 1 instance [where the attribute was defined via `__slots__`
](e250ec0fc8/lib/spack/spack/vendor/pyrsistent/_pdeque.py (L48C5-L48C14))
* I see 44 instances [of the false positive
above](https://github.com/astral-sh/ty/issues/1409) with `Final`
instance attributes being set in `__init__`. I don't think this should
block this PR.
## Test Plan
New Markdown tests.
---------
Co-authored-by: Shaygan Hooshyari <sh.hooshyari@gmail.com>
This PR adds another useful simplification when rendering constraint
sets: `T = int` instead of `T = int ∧ T ≠ str`. (The "smaller"
constraint `T = int` implies the "larger" constraint `T ≠ str`.
Constraint set clauses are intersections, and if one constraint in a
clause implies another, we can throw away the "larger" constraint.)
While we're here, we also normalize the bounds of a constraint, so that
we equate e.g. `T ≤ int | str` with `T ≤ str | int`, and change the
ordering of BDD variables so that all constraints with the same typevar
are ordered adjacent to each other.
Lastly, we also add a new `display_graph` helper method that prints out
the full graph structure of a BDD.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Fall back to `C[Divergent]` if we are trying to specialize `C[T]` with a
type that itself already contains deeply nested specialized generic
classes. This is a way to prevent infinite recursion for cases like
`self.x = [self.x]` where type inference for the implicit instance
attribute would not converge.
closes https://github.com/astral-sh/ty/issues/1383
closes https://github.com/astral-sh/ty/issues/837
## Test Plan
Regression tests.
## Summary
We currently panic in the seemingly rare case where the type of a
default value of a parameter depends on the callable itself:
```py
class C:
    def f(self: C):
        self.x = lambda a=self.x: a
```
Types of default values are only used for display reasons, and it's
unclear if we even want to track them (or if we should rather track the
actual value). So it didn't seem to me that we should spend a lot of
effort (and runtime) trying to achieve a theoretically correct type here
(which would be infinite).
Instead, we simply replace *nested* default types with `Unknown`, i.e.
only if the type of the default value is a callable itself.
closes https://github.com/astral-sh/ty/issues/1402
## Test Plan
Regression tests
## Summary
Only run the "pull types" test after performing the "actual" mdtest. We
observed that the order matters. There is currently one mdtest which
panics when checked in the CLI or the playground. With this change, it
also panics in the mdtest suite.
reopens https://github.com/astral-sh/ty/issues/837?
## Summary
- Type checkers (and type-checker authors) think in terms of types, but
I think most Python users think in terms of values. Rather than saying
that a _type_ `X` "has no attribute `foo`" (which I think sounds strange
to many users), say that "an object of type `X` has no attribute `foo`"
- Special-case certain types so that the diagnostic messages read more
like normal English: rather than saying "Type `<class 'Foo'>` has no
attribute `bar`" or "Object of type `<class 'Foo'>` has no attribute
`bar`", just say "Class `Foo` has no attribute `bar`"
## Test Plan
Mdtests and snapshots updated
## Summary
This PR implements a new semantic syntax error for alternative `match` patterns that bind different names.
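For example, a sketch of the kind of code this now flags:
```py
point = (3, 0)

match point:
    case (0, y) | (x, 0):  # error: alternative patterns bind different names
        print("point lies on an axis")
```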
## Test Plan
I have written inline tests as directed in #17412
---------
Signed-off-by: 11happy <soni5happy@gmail.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
Derived from #20900
Implement `VarianceInferable` for `KnownInstanceType` (especially for
`KnownInstanceType::TypeAliasType`).
The variance of a type alias matches its value type. In normal usage,
type aliases are expanded to value types, so the variance of a type
alias can be obtained without implementing this. However, for example,
if we want to display the variance when hovering over a type alias, we
need to be able to obtain the variance of the type alias itself (cf.
#20900).
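A small illustrative sketch of the rule (the aliases here are made up):
```py
from typing import Callable

type Producer[T] = Callable[[], T]      # value type uses T covariantly
type Consumer[T] = Callable[[T], None]  # value type uses T contravariantly
type Box[T] = list[T]                   # list is invariant in T, so the alias is too
```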
## Test Plan
I couldn't come up with a way to test this in mdtest, so I'm testing it
in a test submodule at the end of `types.rs`.
I also added a test to `mdtest/generics/pep695/variance.md`, but it
passes without the changes in this PR.
## Summary
Support `dataclass_transform` when used on a (base) class.
## Typing conformance
* The changes in `dataclasses_transform_class.py` look good, just a few
mistakes due to missing `alias` support.
* I didn't look closely at the changes in
`dataclasses_transform_converter.py` since we don't support `converter`
yet.
## Ecosystem impact
The impact looks huge, but it's concentrated on a single project (ibis).
Their setup looks more or less like this:
* the real `Annotatable`:
d7083c2c96/ibis/common/grounds.py (L100-L101)
* the real `DataType`:
d7083c2c96/ibis/expr/datatypes/core.py (L161-L179)
* the real `Array`:
d7083c2c96/ibis/expr/datatypes/core.py (L1003-L1006)
```py
from typing import dataclass_transform

@dataclass_transform()
class Annotatable:
    pass

class DataType(Annotatable):
    nullable: bool = True

class Array[T](DataType):
    value_type: T
```
They expect something like `Array([1, 2])` to work, but ty, pyright,
mypy, and pyrefly would all expect there to be a first argument for the
`nullable` field on `DataType`. I don't really understand on what
grounds they expect the `nullable` field to be excluded from the
signature, but this seems to be the main reason for the new diagnostics
here. Not sure if related, but it looks like their typing setup is not
really complete
(https://github.com/ibis-project/ibis/issues/6844#issuecomment-1868274770,
this thread also mentions `dataclass_transform`).
## Test Plan
Update pre-existing tests.
Detect legacy namespace packages and treat them like namespace packages
when looking them up as the *parent* of the module we're interested in.
In all other cases treat them like a regular package.
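For reference, a legacy namespace package is typically marked in its `__init__.py` with one of these patterns (sketch):
```py
# pkgutil-style legacy namespace package
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)

# pkg_resources-style (the other common variant):
# __import__("pkg_resources").declare_namespace(__name__)
```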
(This PR is coauthored by @MichaReiser in a shared coding session)
Fixes https://github.com/astral-sh/ty/issues/838
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
Prefer the declared type for collection literals, e.g.,
```py
x: list[Any] = [1, "2", (3,)]
reveal_type(x) # list[Any]
```
This solves a large part of https://github.com/astral-sh/ty/issues/136
for invariant generics, where respecting the declared type is a lot more
important. It also means that annotating a dict literal with `dict[_, Any]` is a way out of https://github.com/astral-sh/ty/issues/1248.
## Summary
Use the declared type of variables as type context for the RHS of assignment expressions, e.g.,
```py
x: list[int | str]
x = [1]
reveal_type(x) # revealed: list[int | str]
```
## Summary
Ignore the type context when specializing a generic call if it leads to
an unnecessarily wide return type. For example, [the example mentioned
here](https://github.com/astral-sh/ruff/pull/20796#issuecomment-3403319536)
works as expected after this change:
```py
def id[T](x: T) -> T:
    return x

def _(i: int):
    x: int | None = id(i)
    y: int | None = i

    reveal_type(x)  # revealed: int
    reveal_type(y)  # revealed: int
```
I also extended our usage of `filter_disjoint_elements` to tuple and typed-dict inference, which resolves https://github.com/astral-sh/ty/issues/1266.
## Summary
Add support for the `field_specifiers` parameter on
`dataclass_transform` decorator calls.
closes https://github.com/astral-sh/ty/issues/1068
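A minimal sketch of what this enables (the decorator and `model_field` function below are made up, and the decorator is a runtime no-op here, so only the type-checker view is illustrated):
```py
from typing import Any, dataclass_transform

def model_field(*, default: Any = ..., init: bool = True) -> Any: ...

@dataclass_transform(field_specifiers=(model_field,))
def model[T](cls: type[T]) -> type[T]:
    return cls

@model
class User:
    id: int = model_field(default=0, init=False)
    name: str = model_field(default="")

User(name="amy")  # ok: `id` is excluded from the synthesized `__init__`
User(id=1)        # error: `id` is not a parameter of the synthesized `__init__`
```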
## Conformance test results
All true positives ✔️
## Ecosystem analysis
* `trio`: this is the kind of change that I would expect from this PR.
The code makes use of a dataclass `Outcome` with a `_unwrapped: bool =
attr.ib(default=False, eq=False, init=False)` field that is excluded
from the `__init__` signature, so we now see a bunch of
constructor-call-related errors going away.
* `home-assistant/core`: They have a `domain: str = attr.ib(init=False,
repr=False)` field and then use
```py
@domain.default
def _domain_default(self) -> str:
    # …
```
This accesses the `default` attribute on `dataclasses.Field[…]` with a
type of `default: _T | Literal[_MISSING_TYPE.MISSING]`, so we get those
"Object of type `_MISSING_TYPE` is not callable" errors. I don't really
understand how that is supposed to work. Even if `_MISSING_TYPE` would
be absent from that union, what does this try to call? pyright also
issues an error and it doesn't seem to work at runtime? So this looks
like a true positive?
* `attrs`: Similar here. There are some new diagnostics on code that
tries to access `.validator` on a field. This *does* work at runtime,
but I'm not sure how that is supposed to type-check (without a [custom
plugin](2c6c395935/mypy/plugins/attrs.py (L575-L602))).
pyright errors on this as well.
* A handful of new false positives because we don't support `alias` yet
## Test Plan
Updated tests.
Summary
--
This PR unifies the two different ways Ruff and ty construct syntax
errors. Ruff has been storing the primary message in the diagnostic
itself, while ty attached the message to the primary annotation:
```
> ruff check try.py
invalid-syntax: name capture `x` makes remaining patterns unreachable
--> try.py:2:10
|
1 | match 42:
2 | case x: ...
| ^
3 | case y: ...
|
Found 1 error.
> uvx ty check try.py
WARN ty is pre-release software and not ready for production use. Expect to encounter bugs, missing features, and fatal errors.
Checking ------------------------------------------------------------ 1/1 files
error[invalid-syntax]
--> try.py:2:10
|
1 | match 42:
2 | case x: ...
| ^ name capture `x` makes remaining patterns unreachable
3 | case y: ...
|
Found 1 diagnostic
```
I think there are benefits to both approaches, and I do like ty's
version, but I feel like we should pick one (and it might help with
#20901 eventually). I slightly prefer Ruff's version, so I went with
that. Hopefully this isn't too controversial, but I'm happy to close
this if it is.
Note that this shouldn't change any other diagnostic formats in ty
because
[`Diagnostic::primary_message`](98d27c4128/crates/ruff_db/src/diagnostic/mod.rs (L177))
was already falling back to the primary annotation message if the
diagnostic message was empty. As a result, I think this change will
partially resolve the FIXME therein.
Test Plan
--
Existing tests with updated snapshots
This is the ultra-minimal implementation of
* https://github.com/astral-sh/ty/issues/296
that was previously discussed as a good starting point. In particular we
don't actually bother trying to figure out the exact python versions,
but we still mention "hey btw for No Reason At All... you're on python
3.10" when you try to access something that has a definition rooted in
the stdlib that we believe exists sometimes.
This is a drive-by improvement that I stumbled backwards into while
looking into
* https://github.com/astral-sh/ty/issues/296
I was writing some simple tests for "thing not in old version of stdlib"
diagnostics and checked what was added in 3.14, and saw
`compression.zstd` and to my surprise discovered that `import
compression.zstd` and `from compression import zstd` had completely
different quality diagnostics.
This is because `compression` and `compression.zstd` were *both*
introduced in 3.14, and so per VERSIONS policy only an entry for
`compression` was added, and so we don't actually have any definite info
on `compression.zstd` and give up on producing a diagnostic. However the
`from compression import zstd` form fails on looking up `compression`
and we *do* have an exact match for that, so it gets a better
diagnostic!
(aside: I have now learned about the VERSIONS format and I *really* wish
they would just enumerate all the submodules but, oh well!)
The fix is, when handling an import failure, if we fail to find an exact
match *we requery with the parent module*. In cases like
`compression.zstd` this lets us at least identify that, hey, not even
`compression` exists, and luckily that fixes the whole issue. In cases
where the parent module and submodule were introduced at different times, we may discover that the parent module is in-range; that's fine, we just don't produce the richer stdlib diagnostic.
## Summary
`dataclasses.field` and field-specifier functions of commonly used
libraries like `pydantic`, `attrs`, and `SQLAlchemy` all return the
default type for the field (or `Any`) instead of an actual `Field`
instance, even if this is not what happens at runtime. Let's make use of
this fact and assume that *all* field specifiers return the type of the
default value of the field.
For standard dataclasses, this leads to more or less the same outcome
(see test diff for details), but this change is important for 3rd party
dataclass-transformers.
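A concrete illustration with the standard library, matching how typeshed already models `dataclasses.field`:
```py
from dataclasses import field

# typeshed's overloads make `field(default=0)` evaluate to `int` at type-check time,
# even though it is a `Field` instance at runtime until the decorator runs.
reveal_type(field(default=0))  # int
```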
## Test Plan
Tested the consequences of this change on the field-specifiers branch as
well.