## Summary
Adds preliminary support for `NamedTuple`s, including:
* No false positives when constructing a `NamedTuple` object
* Correct signature for the synthesized `__new__` method, i.e. proper
checking of constructor calls
* A patched MRO (`NamedTuple` => `tuple`), mainly to make type inference
of named attributes possible, but also to better reflect the runtime
MRO.
All of this works:
```py
from typing import NamedTuple
class Person(NamedTuple):
    id: int
    name: str
    age: int | None = None

alice = Person(1, "Alice", 42)
alice = Person(id=1, name="Alice", age=42)

reveal_type(alice.id)  # revealed: int
reveal_type(alice.name)  # revealed: str
reveal_type(alice.age)  # revealed: int | None

# error: [missing-argument]
Person(3)

# error: [too-many-positional-arguments]
Person(3, "Eve", 99, "extra")

# error: [invalid-argument-type]
Person(id="3", name="Eve")
```
Not included:
* type inference for index-based access.
* support for the functional `MyTuple = NamedTuple("MyTuple", […])`
syntax
## Test Plan
New Markdown tests
## Ecosystem analysis
```
Diagnostic Analysis Report
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┓
┃ Diagnostic ID ┃ Severity ┃ Removed ┃ Added ┃ Net Change ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━┩
│ lint:call-non-callable │ error │ 0 │ 3 │ +3 │
│ lint:call-possibly-unbound-method │ warning │ 0 │ 4 │ +4 │
│ lint:invalid-argument-type │ error │ 0 │ 72 │ +72 │
│ lint:invalid-context-manager │ error │ 0 │ 2 │ +2 │
│ lint:invalid-return-type │ error │ 0 │ 2 │ +2 │
│ lint:missing-argument │ error │ 0 │ 46 │ +46 │
│ lint:no-matching-overload │ error │ 19121 │ 0 │ -19121 │
│ lint:not-iterable │ error │ 0 │ 6 │ +6 │
│ lint:possibly-unbound-attribute │ warning │ 13 │ 32 │ +19 │
│ lint:redundant-cast │ warning │ 0 │ 1 │ +1 │
│ lint:unresolved-attribute │ error │ 0 │ 10 │ +10 │
│ lint:unsupported-operator │ error │ 3 │ 9 │ +6 │
│ lint:unused-ignore-comment │ warning │ 15 │ 4 │ -11 │
├───────────────────────────────────┼──────────┼─────────┼───────┼────────────┤
│ TOTAL │ │ 19152 │ 191 │ -18961 │
└───────────────────────────────────┴──────────┴─────────┴───────┴────────────┘
Analysis complete. Found 13 unique diagnostic IDs.
Total diagnostics removed: 19152
Total diagnostics added: 191
Net change: -18961
```
I uploaded the full ecosystem diff (ignoring the 19k
`no-matching-overload` diagnostics)
[here](https://shark.fish/diff-namedtuple.html).
* There are some new `missing-argument` false positives which come from
the fact that named tuples are often created using unpacking, as in
`MyNamedTuple(*fields)`, which we do not understand yet.
* There are some new `unresolved-attribute` false positives, because
methods like `_replace` are not available yet.
* Lots of the `invalid-argument-type` diagnostics look like true
positives.
---------
Co-authored-by: Douglas Creager <dcreager@dcreager.net>
## Summary
This PR updates the existing overload matching methods to return an
iterator of all the matched overloads instead.
This will be useful once the overload call evaluation algorithm is
implemented, which should provide an accurate picture of all the matched
overloads. The return type would then be picked from either the only
matched overload or the first of the matched overloads.
An earlier version of this PR tried to check whether using an
intersection of the return types from the matched overloads would help
reduce the false positives, but that's not enough. [This
comment](https://github.com/astral-sh/ruff/pull/17618#issuecomment-2842891696)
keeps the ecosystem analysis for that change for posterity.
> [!NOTE]
>
> The best way to review this PR is by hiding the whitespace changes,
> because there are two instances where a large match expression is
> indented to be inside a loop over matching overloads.
>
> <img width="1207" alt="Screenshot 2025-04-28 at 15 12 16"
src="https://github.com/user-attachments/assets/e06cbfa4-04fa-435f-84ef-4e5c3c5626d1"
/>
## Test Plan
Make sure existing test cases are unaffected and that there are no ecosystem changes.
## Summary
This does not fix anything yet, as the names are not changed, but it
lays the foundation for the fix.
## Test Plan
The existing test fixture should already cover this change.
## Summary
Part of #17412
Starred expressions cannot be used as values in assignment expressions.
Add a new semantic syntax error to catch such instances.
Note that we already have
`ParseErrorType::InvalidStarredExpressionUsage` to catch some starred
expression errors during parsing, but that does not cover top level
assignment expressions.
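A small illustration, based on the examples referenced in the test plan below:
```py
data = [1, 2, 3]

_ = *data     # flagged by the new semantic syntax error
_ = (*data,)  # fine: the starred expression is part of a tuple display
```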
## Test Plan
- Added new inline tests for the new rule
- Found some examples marked as "valid" in existing tests (`_ = *data`),
which are not really valid (per this new rule) and updated them
- An existing inline test, `assign_stmt_invalid_value_expr`, had
instances of `*` expressions that would be deemed invalid by this new
rule; these were converted to tuples so that they do not trigger the
new rule.
## Summary
Model the lookup of `__new__` without going through
`Type::try_call_dunder`. The `__new__` method is only looked up on the
constructed type itself, not on the meta-type.
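A rough illustration of the difference (the class names here are made up for the example):
```py
class Meta(type):
    def __new__(mcls, name, bases, namespace):
        # Used when creating the class `C` itself, not its instances.
        return super().__new__(mcls, name, bases, namespace)

class C(metaclass=Meta):
    def __new__(cls, x: int):
        return super().__new__(cls)

C(1)  # the constructor call is checked against `C.__new__`, not `Meta.__new__`
```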
This now removes ~930 false positives across the ecosystem (vs 255 for
https://github.com/astral-sh/ruff/pull/17662). It introduces 30 new
false positives related to the construction of enums via something like
`Color = enum.Enum("Color", ["RED", "GREEN"])`. This is expected,
because we don't handle custom metaclass `__call__` methods. The fact
that we previously didn't emit diagnostics there was a coincidence (we
incorrectly called `EnumMeta.__new__`, and since we don't fully
understand its signature, that happened to work with `str`, `list`
arguments).
closes #17462
## Test Plan
Regression test
## Summary
Part of #15383.
As per the spec
(https://typing.python.org/en/latest/spec/overload.html#invalid-overload-definitions):
For `@staticmethod` and `@classmethod`:
> If one overload signature is decorated with `@staticmethod` or
`@classmethod`, all overload signatures must be similarly decorated. The
implementation, if present, must also have a consistent decorator. Type
checkers should report an error if these conditions are not met.
For `@final` and `@override`:
> If a `@final` or `@override` decorator is supplied for a function with
overloads, the decorator should be applied only to the overload
implementation if it is present. If an overload implementation isn’t
present (for example, in a stub file), the `@final` or `@override`
decorator should be applied only to the first overload. Type checkers
should enforce these rules and generate an error when they are violated.
If a `@final` or `@override` decorator follows these rules, a type
checker should treat the decorator as if it is present on all overloads.
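A sketch of the first rule in practice (the error comment is illustrative, not the exact diagnostic text):
```py
from typing import overload

class C:
    @overload
    @staticmethod
    def f(x: int) -> int: ...
    @overload
    def f(x: str) -> str: ...  # error: this overload is missing `@staticmethod`
    @staticmethod
    def f(x: int | str) -> int | str:
        return x
```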
## Test Plan
Update existing tests; add snapshots.
## Summary
As mentioned in the spec
(https://typing.python.org/en/latest/spec/overload.html#invalid-overload-definitions),
part of #15383:
> The `@overload`-decorated definitions must be followed by an overload
implementation, which does not include an `@overload` decorator. Type
checkers should report an error or warning if an implementation is
missing. Overload definitions within stub files, protocols, and on
abstract methods within abstract base classes are exempt from this
check.
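For example, outside of a stub file (and outside protocols and abstract methods) this should now be reported (error comment is illustrative):
```py
from typing import overload

@overload
def f(x: int) -> int: ...
@overload
def f(x: str) -> str: ...
# error: overloaded function `f` is missing an implementation
```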
## Test Plan
Remove TODOs from the test; create one diagnostic snapshot.
Re: #17526
## Summary
Adds tests to red knot and `linter.rs` for the semantic syntax.
Specifically add tests for `ReboundComprehensionVariable`,
`DuplicateTypeParameter`, and `MultipleCaseAssignment`.
Refactor `test_async_comprehension_in_sync_comprehension` into
`test_semantic_error` to make it more general for all semantic syntax
test cases.
## Test Plan
This is a test.
## Question
I'm happy to contribute more tests in the coming days.
Should that happen here or should we merge this PR such that the
refactor `test_async_comprehension_in_sync_comprehension` →
`test_semantic_error` is available on main and others can chime in, too?
## Summary
Part of #15383, this PR adds the core infrastructure to check for
invalid overloads and adds a diagnostic to raise if there are < 2
overloads for a given definition.
### Design notes
The requirements to check the overloads are:
* Requires `FunctionType`, which has the `to_overloaded` method
* The `FunctionType` **should** be for the function that is either the
implementation or the last overload if the implementation doesn't exist
* Avoid checking any `FunctionType` that is part of an overload chain
* Consider visibility constraints
This required a couple of iterations to make sure all of the above
requirements are fulfilled.
#### 1. Use a set to deduplicate
The logic would first collect all the `FunctionType`s that are part of
the overload chain, except for the implementation or the last overload if
the implementation doesn't exist. Then, when iterating over all the
function declarations within the scope, we'd avoid checking these
functions. But this approach would fail to consider visibility
constraints, as certain overloads _can_ be behind a version check. Those
aren't part of the overload chain, but they aren't a separate overload
chain either.
<details><summary>Implementation:</summary>
<p>
```rs
fn check_overloaded_functions(&mut self) {
    let function_definitions = || {
        self.types
            .declarations
            .iter()
            .filter_map(|(definition, ty)| {
                // Filter out function literals that result from anything other than a function
                // definition e.g., imports.
                if let DefinitionKind::Function(function) = definition.kind(self.db()) {
                    ty.inner_type()
                        .into_function_literal()
                        .map(|ty| (ty, definition.symbol(self.db()), function.node()))
                } else {
                    None
                }
            })
    };

    // A set of all the functions that are part of an overloaded function definition except for
    // the implementation function and the last overload in case the implementation doesn't
    // exist. This allows us to collect all the function definitions that need to be skipped
    // when checking for invalid overload usages.
    let mut overloads: HashSet<FunctionType<'db>> = HashSet::default();

    for (function, _, _) in function_definitions() {
        let Some(overloaded) = function.to_overloaded(self.db()) else {
            continue;
        };
        if overloaded.implementation.is_some() {
            overloads.extend(overloaded.overloads.iter().copied());
        } else if let Some((_, previous_overloads)) = overloaded.overloads.split_last() {
            overloads.extend(previous_overloads.iter().copied());
        }
    }

    for (function, _, function_node) in function_definitions() {
        let Some(overloaded) = function.to_overloaded(self.db()) else {
            continue;
        };
        if overloads.contains(&function) {
            continue;
        }
        // At this point, the `function` variable is either the implementation function or the
        // last overloaded function if the implementation doesn't exist.
        if overloaded.overloads.len() < 2 {
            if let Some(builder) = self
                .context
                .report_lint(&INVALID_OVERLOAD, &function_node.name)
            {
                let mut diagnostic = builder.into_diagnostic(format_args!(
                    "Function `{}` requires at least two overloads",
                    &function_node.name
                ));
                if let Some(first_overload) = overloaded.overloads.first() {
                    diagnostic.annotate(
                        self.context
                            .secondary(first_overload.focus_range(self.db()))
                            .message(format_args!("Only one overload defined here")),
                    );
                }
            }
        }
    }
}
```
</p>
</details>
#### 2. Define a `predecessor` query
The `predecessor` query would return the previous `FunctionType` for the
given `FunctionType`, i.e., the current logic would be extracted into a
query instead. This could then be used to make sure that we check the
entire overload chain only once. The way this would've been implemented
is to have a `to_overloaded` implementation that takes the root of the
overload chain instead of the leaf. But this would require updates to
the use-def map to somehow be able to return the _following_ functions
for a given definition.
#### 3. Create a successor link
This is what Pyrefly uses: we'd create a forward link between two
functions that are involved in an overload chain. This means that for a
given function, we can get the successor function. This could be used to
find the _leaf_ of the overload chain, which can then be used with the
`to_overloaded` method to get the entire overload chain. But this would
also require updating the use-def map to be able to "see" the
_following_ function.
### Implementation
This leads us to the final implementation in this PR, which finds the
overloaded functions as follows:
* Collect all the **function symbols** that are defined **and** called
within the same file; these could potentially be overloaded functions
* Use the public bindings to get the leaf of the overload chain, and use
that to get the entire overload chain via `to_overloaded` and perform
the check
This has a limitation: if a function redefines an overload, the shadowed
overload will not be checked. For example:
```py
from typing import overload
@overload
def f() -> None: ...
@overload
def f(x: int) -> int: ...
# The above overload will not be checked as the below function with the same name
# shadows it
def f(*args: int) -> int: ...
```
## Test Plan
Update existing mdtest and add snapshot diagnostics.
## Summary
@sharkdp and I realised in our 1:1 this morning that our control flow
for `assert` statements isn't quite accurate at the moment. Namely, for
something like this:
```py
def _(x: int | None):
assert x is None, reveal_type(x)
```
we currently reveal `None` for `x` here, but this is incorrect. In
actual fact, the `msg` expression of an `assert` statement (the
expression after the comma) will only be evaluated if the test (`x is
None`) evaluates to `False`. As such, we should be adding a constraint
of `~None` to `x` in the `msg` expression, which should simplify the
inferred type of `x` to `int` in that context (`(int | None) & ~None` ->
`int`).
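In other words, after this change the example above behaves as follows (the revealed type shows the expected narrowing):
```py
def _(x: int | None):
    # The `msg` expression is only evaluated when the test is false,
    # so `x` is narrowed by `~None` here.
    assert x is None, reveal_type(x)  # revealed: int
```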
## Test Plan
Mdtests added.
---------
Co-authored-by: David Peter <mail@david-peter.de>
## Summary
We were previously recording wrong reachability constraints for negative
branches. Instead of `[cond] AND (NOT [True])` below, we were recording
`[cond] AND (NOT ([cond] AND [True]))`, i.e. we were negating not just
the last predicate, but the `AND`-ed reachability constraint from the last
clause. With this fix, we now record the correct constraints for the
example from #17723:
```py
def _(cond: bool):
    if cond:
        # reachability: [cond]
        if True:
            # reachability: [cond] AND [True]
            pass
        else:
            # reachability: [cond] AND (NOT [True])
            x
```
closes #17723
## Test Plan
* Regression test.
* Verified the ecosystem changes
## Summary
Part of #15383, this PR adds `is_equivalent_to` support for overloaded
callables.
This is mainly done by delegating to the subtyping check: two types `A`
and `B` are considered equivalent if `A` is a subtype of `B` and `B` is
a subtype of `A`.
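For intuition, two overloaded callables like the following are mutual subtypes and therefore equivalent (a sketch; the actual Markdown tests may phrase this differently):
```py
from typing import overload

@overload
def f(x: int) -> int: ...
@overload
def f(x: str) -> str: ...
def f(x: int | str) -> int | str:
    return x

@overload
def g(x: int) -> int: ...
@overload
def g(x: str) -> str: ...
def g(x: int | str) -> int | str:
    return x

# The callable types of `f` and `g` are mutual subtypes, hence equivalent.
```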
## Test Plan
Add test cases for overloaded callables in `is_equivalent_to.md`
## Summary
Includes minor changes to the semantic type inference to help detect the
return type of a function call.
Fixes#17691
## Test Plan
Snapshot tests
## Summary
Subtyping was already modeled, but assignability also needs an explicit
branch. Removes 921 ecosystem false positives.
## Test Plan
New Markdown tests.
We are currently representing type variables using a `KnownInstance`
variant, which wraps a `TypeVarInstance` that contains the information
about the typevar (name, bounds, constraints, default type). We were
previously only constructing that type for PEP 695 typevars. This PR
constructs that type for legacy typevars as well.
It also detects functions that are generic because they use legacy
typevars in their parameter list. With the existing logic for inferring
specializations of function calls (#17301), that means that we are
correctly detecting that the definition of `reveal_type` in the typeshed
is generic, and inferring the correct specialization of `_T` for each
call site.
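A small illustration of what this enables (the comment describes the expected inference, not verbatim output):
```py
from typing import TypeVar

_T = TypeVar("_T")

def identity(x: _T) -> _T:
    return x

def f(x: int) -> None:
    identity(x)  # `_T` is specialized to `int` at this call site
```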
This does not yet handle legacy generic classes; that will come in a
follow-on PR.
A small PR that just updates the various settings/configurations to
allow Python 3.14. At the moment selecting that target version will
have no impact compared to Python 3.13 - except that a warning
is emitted if the user does so with `preview` disabled.
## Summary
Apply auto-fixes to cases where the names have changed in Airflow 3 in
AIR302, and split the huge test cases into separate test cases based on
provider.
## Test Plan
The test cases have been split into multiple cases for easier checking.
Summary
--
This PR resolves https://github.com/astral-sh/ruff/issues/9761 by adding
a linter configuration option to disable
`typing_extensions` imports. As mentioned [here], it would be ideal if
we could
detect whether or not `typing_extensions` is available as a dependency
automatically, but this seems like a much easier fix in the meantime.
The default for the new option, `typing-extensions`, is `true`,
preserving the current behavior. Setting it to `false` makes the new
`Checker::typing_importer` method (refactored from the
`Checker::import_from_typing` method in
https://github.com/astral-sh/ruff/pull/17340) return `None`, which is
then handled specially by each rule that calls it.
I considered some alternatives to a config option, such as checking if
`typing_extensions` has been imported or checking for a `TYPE_CHECKING`
block we could use, but I think defaulting to allowing
`typing_extensions` imports and allowing the user to disable this with
an option is both simple to implement and pretty intuitive.
[here]:
https://github.com/astral-sh/ruff/issues/9761#issuecomment-2790492853
Test Plan
--
New linter tests exercising several combinations of Python versions and
the new config option for PYI019. I also added tests for the other
affected rules, but only in the case where the new config option is
enabled. The rules' existing tests also cover the default case.
This is done in what appears to be the same way as Ruff: we get the CWD,
strip the prefix from the path if possible, and use that. If stripping
the prefix fails, then we print the full path as-is.
Fixes #17233
## Summary
Removes ~850 diagnostics related to assignability of callable types,
where the callable-being-assigned-to has a "Todo signature", which
should probably accept any left hand side callable/signature.
This PR promotes the fix applicability of [readlines-in-for
(FURB129)](https://docs.astral.sh/ruff/rules/readlines-in-for/#readlines-in-for-furb129)
to always safe.
In the original PR (https://github.com/astral-sh/ruff/pull/9880), the
author marked the rule as unsafe because Ruff's type inference couldn't
quite guarantee that we had an `IOBase` object in hand. Some false
positives were recorded in the test fixture. However, before the PR was
merged, Charlie added the necessary type inference and the false
positives went away.
According to the [Python
documentation](https://docs.python.org/3/library/io.html#io.IOBase), I
believe this fix is safe for any proper implementation of `IOBase`:
>[IOBase](https://docs.python.org/3/library/io.html#io.IOBase) (and its
subclasses) supports the iterator protocol, meaning that an
[IOBase](https://docs.python.org/3/library/io.html#io.IOBase) object can
be iterated over yielding the lines in a stream. Lines are defined
slightly differently depending on whether the stream is a binary stream
(yielding bytes), or a text stream (yielding character strings). See
[readline()](https://docs.python.org/3/library/io.html#io.IOBase.readline)
below.
and then in the [documentation for
`readlines`](https://docs.python.org/3/library/io.html#io.IOBase.readlines):
>Read and return a list of lines from the stream. hint can be specified
to control the number of lines read: no more lines will be read if the
total size (in bytes/characters) of all lines so far exceeds hint. [...]
>Note that it’s already possible to iterate on file objects using for
line in file: ... without calling file.readlines().
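For reference, a sketch of the kind of rewrite the FURB129 fix performs:
```py
with open("data.txt") as f:
    for line in f.readlines():  # FURB129: `readlines-in-for`
        print(line)

with open("data.txt") as f:
    for line in f:  # after the fix: iterate over the file object directly
        print(line)
```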
I believe that a careful reading of our [versioning
policy](https://docs.astral.sh/ruff/versioning/#version-changes)
requires that this change be deferred to a minor release - but please
correct me if I'm wrong!
This PR collects all behavior gated under preview into a new module
`ruff_linter::preview` that exposes functions like
`is_my_new_feature_enabled` - just as is done in the formatter crate.
## Summary
Do not emit errors when defining `TypedDict`s:
```py
from typing_extensions import TypedDict

# No error here
class Person(TypedDict):
    name: str
    age: int | None

# No error for this alternative syntax
Message = TypedDict("Message", {"id": int, "content": str})
```
## Ecosystem analysis
* Removes ~ 450 false positives for `TypedDict` definitions.
* Changes a few diagnostic messages.
* Adds a few (< 10) false positives, for example:
```diff
+ error[lint:unresolved-attribute] /tmp/mypy_primer/projects/hydra-zen/src/hydra_zen/structured_configs/_utils.py:262:5: Type `Literal[DataclassOptions]` has no attribute `__required_keys__`
+ error[lint:unresolved-attribute] /tmp/mypy_primer/projects/hydra-zen/src/hydra_zen/structured_configs/_utils.py:262:42: Type `Literal[DataclassOptions]` has no attribute `__optional_keys__`
```
* New true positive
4f8263cd7f/corporate/lib/remote_billing_util.py (L155-L157)
```diff
+ error[lint:invalid-assignment] /tmp/mypy_primer/projects/zulip/corporate/lib/remote_billing_util.py:155:5: Object of type `RemoteBillingIdentityDict | LegacyServerIdentityDict | None` is not assignable to `LegacyServerIdentityDict | None`
```
## Test Plan
New Markdown tests