This refactors the importer abstraction to use a shared
`Insertion`. This is mostly just moving some code around
with some slight tweaks.
The plan here is to keep the rest of the importing code
in `ruff_linter` and then write something ty-specific on
top of `Insertion`. This ends up sharing some code, but
not as much as would be ideal. In particular, the
`ruff_linter` importer is pretty tightly coupled with
ruff's semantic model. So to share the code, we'd need to
abstract over that.
## Summary
This PR wires up the GitHub output format moved to `ruff_db` in #20320
to the ty CLI.
It's a bit smaller than the GitLab version (#20155) because some of the
helpers were already in place, but I did factor out a few
`DisplayDiagnosticConfig` constructor calls in Ruff. I also exposed the
`GithubRenderer` and a wrapper `DisplayGithubDiagnostics` type because
we needed a way to configure the program name displayed in the GitHub
diagnostics. This was previously hard-coded to `Ruff`:
<img width="675" height="247" alt="image"
src="https://github.com/user-attachments/assets/592da860-d2f5-4abd-bc5a-66071d742509"
/>
Another option would be to drop the program name in the output format,
but I think it can be helpful in workflows with multiple programs
emitting annotations (such as Ruff and ty!).
## Test Plan
New CLI test, and a manual test with `--config 'terminal.output-format =
"github"'`
## Summary
Catch infinite recursion in binary-compare inference.
Fixes the stack overflow in `graphql-core` in mypy-primer.
## Test Plan
Added two tests that stack-overflowed before this PR.
## Summary
Use `Type::Divergent` to short-circuit diverging types in type
expressions. This avoids panicking in a wide variety of cases of
recursive type expressions.
Avoids many panics (but not yet all -- I'll be tracking down the rest)
from https://github.com/astral-sh/ty/issues/256 by falling back to
Divergent. For many of these recursive type aliases, we'd like to
support them properly (i.e. really understand the recursive nature of
the type, not just fall back to Divergent) but that will be future work.
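For illustration, a hypothetical recursive type alias of the kind described (not taken verbatim from the issue) that now falls back to `Divergent` instead of panicking:
```python
from typing import Union

# A recursive (string-forward-referenced) type alias: inferring the type
# expression revisits `JSON` indefinitely, which previously panicked and now
# short-circuits to `Divergent`.
JSON = Union[str, int, float, None, list["JSON"], dict[str, "JSON"]]


def parse(raw: str) -> JSON:
    ...
```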
This switches `Type::has_divergent_type` from using `any_over_type` to a
custom set of visit methods, because `any_over_type` visits more than we
need to visit, and exercises some lazy attributes of the type, causing
significantly more work. This change means this diff doesn't regress
perf; it even reclaims some of the perf regression from
https://github.com/astral-sh/ruff/pull/20333.
## Test Plan
Added mdtest for recursive type alias that panics on main.
Verified that we can now type-check `packaging` (and projects depending
on it) without panic; this will allow moving a number of mypy-primer
projects from `bad.txt` to `good.txt` in a subsequent PR.
## Summary
This PR implements F406
(https://docs.astral.sh/ruff/rules/undefined-local-with-nested-import-star-usage/)
as a semantic syntax error.
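For example, a minimal sketch of the pattern the rule targets (intentionally invalid code):
```python
def load_settings():
    # F406: `from ... import *` is only allowed at module level; this is now
    # reported as a semantic syntax error (CPython itself also rejects it).
    from os import *
```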
## Test Plan
I have written inline tests as directed in #17412.
---------
Signed-off-by: 11happy <soni5happy@gmail.com>
- Convert panics to diagnostics with id `Panic`, severity `Fatal`, and
the error as the diagnostic message, annotated with a `Span` with empty
code block and no range.
- Update the post-linting diagnostic handling to track the maximum
severity seen, and print the "report a bug in ruff" message only if the
max severity was `Fatal`.
This depends on the sorting changes since it creates diagnostics with no
range specified.
Previously, we used a very fine-grained representation for individual
constraints: each constraint was _either_ a range constraint, a
not-equivalent constraint, or an incomparable constraint. These three
pieces are enough to represent all of the "real" constraints we need to
create — range constraints and their negation.
However, it meant that we weren't picking up as many chances to simplify
constraint sets as we could. Our simplification logic depends on being
able to look at _pairs_ of constraints or clauses to see if they
simplify relative to each other. With our fine-grained representation,
we could easily encounter situations that we should have been able to
simplify, but that would require looking at three or more individual
constraints.
For instance, negating a range constraint would produce:
```
¬(Base ≤ T ≤ Super) = ((T ≤ Base) ∧ (T ≠ Base)) ∨ (T ≁ Base) ∨
((Super ≤ T) ∧ (T ≠ Super)) ∨ (T ≁ Super)
```
That is, `T` must be (strictly) less than `Base`, (strictly) greater
than `Super`, or incomparable to either.
If we unioned that negation back together with the original constraint,
we should get `always`, since `x ∨ ¬x` is always true, no matter what
`x` is. But instead we would get:
```
(Base ≤ T ≤ Super) ∨ ((T ≤ Base) ∧ (T ≠ Base)) ∨ (T ≁ Base) ∨ ((Super ≤ T) ∧ (T ≠
Super)) ∨ (T ≁ Super)
```
No pair of elements would simplify relative to each other, because we'd
have to look at all five union elements to see that together they do in
fact combine to `always`.
The fine-grained representation was nice, because it made it easier to
[work out the math](https://dcreager.net/theory/constraints/) for
intersections and unions of each kind of constraint. But being able to
simplify is more important, since the example above comes up immediately
in #20093 when trying to handle constrained typevars.
The fix in this PR is to go back to a more coarse-grained
representation, where each individual constraint consists of a positive
range (which might be `always` / `Never ≤ T ≤ object`), and zero or more
negative ranges. The intuition is to think of a constraint as a region
of the type space (representable as a range) with zero or more "holes"
removed from it.
With this representation, negating a range constraint produces:
```
¬(Base ≤ T ≤ Super) = (always ∧ ¬(Base ≤ T ≤ Super))
```
(That looks trivial, because it is! We just move the positive range to
the negative side.)
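As a sketch (glossing over the exact pairwise steps the simplifier takes), unioning a range constraint with its negation now only requires comparing a single pair of constraints:
```
  (Base ≤ T ≤ Super) ∨ (always ∧ ¬(Base ≤ T ≤ Super))
= (Base ≤ T ≤ Super) ∨ always          [x ∨ (y ∧ ¬x) = x ∨ y]
= always
```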
The math is not that much harder than before, because there are only
three combinations to consider (each for intersection and union) —
though the fact that there can be multiple holes in a constraint does
require some nested loops. But the mdtest suite gives me confidence that
this is not introducing any new issues, and it definitely removes a
troublesome TODO.
(As an aside, this change also means that we are back to having each
clause contain no more than one individual constraint for any typevar.
This turned out to be important, because part of our simplification
logic was also depending on that!)
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
This mainly removes an internal inconsistency, where we didn't remove
the `Self` type variable when eagerly binding `Self` to an instance
type. It has no observable effect, apparently.
Builds on top of https://github.com/astral-sh/ruff/pull/20328.
## Test Plan
None
## Summary
Fixes https://github.com/astral-sh/ty/issues/1161
Include `NamedTupleFallback` members in `NamedTuple` instance
completions.
- Augment instance-attribute completions on `NamedTuple` instances by
merging in members from
`_typeshed._type_checker_internals.NamedTupleFallback` (see the sketch below).
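A minimal sketch of the kind of completion this enables (names like `_replace` and `_asdict` come from the typeshed fallback class):
```python
from typing import NamedTuple


class Point(NamedTuple):
    x: int
    y: int


p = Point(1, 2)
# Completing on `p.` should now offer fallback members such as `_replace`,
# `_asdict`, and `_fields` in addition to `x` and `y`.
p._replace(x=3)
```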
## Test Plan
Adds a minimal completion test `namedtuple_fallback_instance_methods`
---------
Co-authored-by: David Peter <mail@david-peter.de>
## Summary
This project was [recently removed from
mypy_primer](https://github.com/astral-sh/ruff/pull/20378), so we need
to remove it from `good.txt` in order for ecosystem-analyzer to work
correctly.
## Test Plan
Run mypy_primer and ecosystem-analyzer on this branch.
## Summary
https://github.com/astral-sh/ruff/pull/20165 added a lot of false
positives around calls to `builtins.open()`, because our missing support
for PEP-613 type aliases means that we don't understand typeshed's
overloads for `builtins.open()` at all yet, and therefore always select
the first overload. This didn't use to matter very much, but now that we
have a much stricter implementation of protocol assignability/subtyping
it matters a lot, because most of the stdlib functions dealing with I/O
(`pickle`, `marshal`, `io`, `json`, etc.) are annotated in typeshed as
taking in protocols of some kind.
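A hypothetical example of the kind of false positive described, assuming only the first (text-mode) `open()` overload is selected:
```python
import pickle

with open("data.bin", "wb") as f:
    # With only the first `open()` overload understood, `f` was inferred as a
    # text-mode file object, so passing it where `pickle.dump` expects a
    # binary write protocol produced a false positive.
    pickle.dump({"key": "value"}, f)
```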
In lieu of full PEP-613 support, which is blocked on various things and
might not land in time for our next alpha release, this PR adds some
temporary special-casing for `builtins.open()` to avoid the false
positives. We just infer `Todo` for anything that isn't meant to match
typeshed's first `open()` overload. This should be easy to rip out again
once we have proper support for PEP-613 type aliases, which hopefully
should be pretty soon!
## Test Plan
Added an mdtest
## Summary
Fixes https://github.com/astral-sh/ty/issues/377.
We were treating any function as being assignable to any callback
protocol, because we were trying to figure out a type's `Callable`
supertype by looking up the `__call__` attribute on the type's
meta-type. But a function-literal's meta-type is `types.FunctionType`,
and `types.FunctionType.__call__` is `(...) -> Any`, which is not very
helpful!
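A sketch of the kind of unsoundness this fixes (hypothetical names):
```python
from typing import Protocol


class IntCallback(Protocol):
    def __call__(self, x: int) -> int: ...


def shout(message: str) -> str:
    return message.upper()


# Previously accepted, because `type(shout)` is `types.FunctionType` and its
# `__call__` is typed as `(...) -> Any`; now the function's own signature is
# compared against the protocol, so this assignment is an error.
callback: IntCallback = shout
```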
While working on this PR, I also realised that assignability between
class-literals and callback protocols was somewhat broken too, so I
fixed that at the same time.
## Test Plan
Added mdtests
## Summary
Resolves #20266
A frozen dataclass attribute's default can be the instantiation of a
nested frozen dataclass as well as a non-nested one.
### Problem explanation
The `function_call_in_dataclass_default` function is invoked during the
"defined scope" stage, after all scopes have been processed. At this
point, the semantic model references the top-level scope. When
`SemanticModel::lookup_attribute` executes, it searches for bindings in
the top-level module scope rather than the class scope, resulting in an
error.
To solve this issue, the lookup should be evaluated through the class
scope.
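A hypothetical sketch of the pattern from the issue (a nested frozen dataclass instantiated as a default):
```python
from dataclasses import dataclass


@dataclass
class Settings:
    @dataclass(frozen=True)
    class Retry:
        attempts: int = 3

    # `Retry()` is a call in a dataclass default, but it constructs a frozen
    # dataclass, so it should be allowed. The false positive arose because
    # `Retry` was looked up in the module scope instead of the class scope.
    retry: Retry = Retry()
```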
## Test Plan
- Added test case from issue
Co-authored-by: Igor Drokin <drokinii1017@gmail.com>
## Summary
Fixes #19842
Prevent an infinite loop between I002 and UP026.
- Implement isort-aware handling for UP026 (deprecated mock import).
- Add CLI integration tests in `crates/ruff/tests/lint.rs`.
## Test Plan
I have added two integration tests
`pyupgrade_up026_respects_isort_required_import_fix` and
`pyupgrade_up026_respects_isort_required_import_from_fix` in
`crates/ruff/tests/lint.rs`.
## Summary
This looks like it should fix the errors that we've been seeing in sympy
in recent mypy-primer runs.
## Test Plan
I wasn't able to reproduce the sympy failures locally; it looks like
there is probably a dependency on the order in which files are checked.
So I don't have a minimal reproducible example, and wasn't able to add a
test :/ Obviously I would be happier if we could commit a regression
test here, but since the change is straightforward and clearly
desirable, I'm not sure how many hours it's worth trying to track it
down.
Mypy-primer is still failing in CI on this PR, because it fails on the
"old" ty commit already (i.e. on main). But it passes [on a no-op PR
stacked on top of this](https://github.com/astral-sh/ruff/pull/20370),
which strongly suggests this PR fixes the problem.
## Summary
Resolves #20282
Makes the rule fix always unsafe, because the replacement may not be
semantically equivalent to the original expression, potentially changing
the behavior of the code.
Updated docstring with examples.
## Test Plan
- Added two tests from issue and regenerated the snapshot
---------
Co-authored-by: Igor Drokin <drokinii1017@gmail.com>
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
Fixes #20204
Recognize t-strings, generators, and lambdas in RUF016
- Accept boolean literals as valid index and slice bounds (see the example below).
- Add TString, Generator, and Lambda to `CheckableExprType`.
- Expand RUF016.py fixture and update snapshots accordingly.
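A rough illustration of the boolean-index change, wrapped in a function so the intentionally invalid line never executes:
```python
def examples() -> None:
    # Booleans are valid indices (`bool` is a subclass of `int`), so this
    # line is no longer reported by RUF016.
    first = [10, 20, 30][True]

    # Indexing a list literal with a string is still an invalid index type
    # and is reported by RUF016.
    second = [10, 20, 30]["key"]
```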
Our token-based rules and `noqa` extraction used an `Indexer` that kept
track of f-string ranges but not t-strings. We've updated the `Indexer`
and downstream uses thereof to handle both f-strings and t-strings.
Most of the diff is renaming and adding tests.
Note that much of the "new" logic gets to be naive because the lexer has
already ensured that f- and t-string "starts" are paired with their
respective "ends", even amidst nesting and so on.
Finally: one could imagine wanting to know if a given interpolated
string range corresponds to an f-string or a t-string, but I didn't find
a place where we actually needed this.
Closes #20310
## Summary
This PR adds support for building loongarch64 binaries in CI. As such
support has been merged in uv (astral-sh/uv#15387), it's time to consider
adding it to ruff.
Please note that as Ubuntu is not yet available for loongarch64, I have
elected to use a Debian Trixie container maintained by community
members. In addition, as Debian's pip does not allow installing modules
system-wide, I have modified the workflow to install additional modules
in a virtual environment.
Since the workflow is shared between all targets, the only way to handle
this difference (between Debian and Ubuntu) is just to install pip in a
venv for all targets. If there is a better (and less intrusive) way to
work around this, please let me know.
## Test Plan
Tests are included in CI, and the loongarch64 artifacts built in [this
workflow](5012547154) have been smoke-tested.
## Summary
This PR addresses an issue with a variadic argument involved in
argument type expansion during overload call evaluation.
The issue is that the expansion of the variadic argument could result in
argument lists of different arities. For example, with `*args: tuple[int] |
tuple[int, str]`, the expansion would lead to the variadic argument
being unpacked into 1 and 2 elements respectively. This means that the
parameter matching that was performed initially isn't sufficient and
each expanded argument list would need to redo the parameter matching
again.
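A sketch of the scenario, using hypothetical overloads:
```python
from typing import overload


@overload
def f(x: int) -> int: ...
@overload
def f(x: int, y: str) -> str: ...
def f(x: int, y: str | None = None) -> int | str:
    return x if y is None else y


def call(args: tuple[int] | tuple[int, str]) -> None:
    # Expanding the union on `args` produces argument lists of arity 1 and 2,
    # so parameter matching against the overloads must be redone for each
    # expanded list.
    f(*args)
```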
This is currently done by redoing the parameter matching directly,
maintaining the state of argument forms (and the conflicting forms), and
updating the `Bindings` values if they change.
Closes: astral-sh/ty#735
## Test Plan
Update existing mdtest.
This PR removes the `Constraints` trait. We removed the `bool`
implementation several weeks back, and are using `ConstraintSet`
everywhere. There have been discussions about trying to include the
reason for an assignability failure as part of the result, but there are
no concrete plans to do so soon, and it's not clear that we'll
need the `Constraints` trait to do that. (We can ideally just update the
`ConstraintSet` type directly.)
In the meantime, this just complicates the code for no good reason.
This PR is a pure refactoring, and contains no behavioral changes.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This is the GitHub analog to #20117. This PR prepares to add a GitHub
output format to ty by moving the implementation from `ruff_linter` to
`ruff_db`. Hopefully this one is a bit easier to review
commit-by-commit. Almost all of the refactoring this time is in the
first commit, then the second commit adds the new `OutputFormat` variant
and moves the file into `ruff_db`. The third commit is just a small
touch up to use a private method that accommodates ty files so that we
can run the tests and update/move the snapshots.
I had to push a fourth commit to fix and test diagnostics without a
span/file.
## Test Plan
Existing tests
Previously, `Type::object` would find the definition of the `object`
class in typeshed, load that in (to produce a `ClassLiteral` and
`ClassType`), and then create a `NominalInstance` of that class.
It's possible that we are using a typeshed that doesn't define `object`.
We will not be able to do much useful work with that kind of typeshed,
but it's still a possibility that we have to support at least without
panicking. Previously, we would handle this situation by falling back on
`Unknown`.
In most cases, that's a perfectly fine fallback! But `object` is also
our top type — the type of all values. `Unknown` is _not_ an acceptable
stand-in for the top type.
This PR adds a new `NominalInstance` variant for "instances of
`object`". Unlike other nominal instances, we do not need to load in
`object`'s `ClassType` to instantiate this variant. We will use this new
variant even when the current typeshed does not define an `object`
class, ensuring that we have a fully static representation of our top
type at all times.
There are several operations that need access to a nominal instance's
class, and for this new `object` variant we load it lazily only when
it's needed. That means this operation is now fallible, since this is
where the "typeshed doesn't define `object`" failure shows up.
This new approach also has the benefit of avoiding some salsa cycles
that were cropping up while I was debugging #20093, since the new
constraint set representation was trying to instantiate `Type::object`
while in the middle of processing its definition in typeshed. Cycle
handling was kicking in correctly and returning the `Unknown` fallback
mentioned above. But the constraint set implementation depends on
`Type::object` being a distinct and fully static type, highlighting that
this is a correctness fix, not just an optimization.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Use `Type::Divergent` to avoid "too many iterations" panic on an
infinitely-nested tuple in an implicit instance attribute.
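A hypothetical example of the pattern: an implicit instance attribute whose inferred tuple type would otherwise keep nesting.
```python
class History:
    def __init__(self) -> None:
        self.snapshots = ()

    def push(self) -> None:
        # Each call wraps the previous value in another tuple, so naive
        # inference of the implicit attribute `self.snapshots` would keep
        # producing more deeply nested tuple types; it now short-circuits to
        # `Divergent` instead of hitting the iteration limit.
        self.snapshots = (self.snapshots,)
```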
The regression here is from checking all tuple elements to see if they
contain a Divergent type. It's 5% on one project, 1% on another, and
zero on the rest. I spent some time looking into eliminating this
regression by tracking a flag on inference results to note if they could
possibly contain any Divergent type, but this doesn't really work --
there are too many different ways a type containing a Divergent type
could enter an inference result. Still thinking about whether there are
other ways to reduce this. One option: if we see certain kinds of
non-atomic types that are commonly expensive to check for Divergent, we
could make `has_divergent_type` a Salsa query on those types.
## Test Plan
Added mdtest.
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
The debug representation isn't as useful as calling `.display(db)`, but
it's still kind of annoying when `dbg!()` calls don't compile locally
because the compiler can't guarantee that an object of type
`impl Constraints` implements `Debug`.
## Summary
Fixes #20235
- Fix `RUF102` to properly handle rule redirects when validating noqa codes.
- Update `code_is_valid` to check redirect targets before determining validity.
- Add a test case for rule redirects (TCH002 in this case; see the sketch below).
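A minimal sketch of the redirect case (TCH002 redirects to TC002):
```python
# `TCH002` is a redirected (renamed) rule code; RUF102 previously flagged it
# as unknown, but it is now resolved through the redirect and accepted.
import collections  # noqa: TCH002
```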
## Test Plan
I have added a test case for rule redirects to
`crates/ruff_linter/resources/test/fixtures/ruff/RUF102.py`.