## Summary
`bool()` is equal to `False`, and we infer `Literal[False]` for it. This
means that the test here will fail as soon as we treat the body of
this `if` as unreachable.
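For illustration, a minimal sketch of the inference in question (the actual test referenced above is not reproduced here):
```py
x = bool()
reveal_type(x)  # revealed: Literal[False]

if x:
    # Once `Literal[False]` is treated as always-falsy, this branch becomes
    # unreachable and anything asserted inside it would no longer be checked.
    ...
```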
## Summary
Fix panics related to expressions without inferred types in invalid
syntax examples like:
```py
x: f"Literal[{1 + 2}]" = 3
```
where the `1 + 2` expression (and its sub-expressions) inside the
annotation did not have an inferred type.
## Test Plan
Added new corpus test.
## Summary
This is about the easiest patch that I can think of. It has a drawback
in that there is no real guarantee this won't happen again. I think this
might be acceptable, given that all of this is a temporary thing.
And we also add a new CI job to prevent regressions like this in the
future.
For the record though, I'm listing alternative approaches I thought of:
- We could get rid of the debug/release distinction and just add `@Todo`
type metadata everywhere. This has possible effects on runtime. The main
reason I didn't follow through with this is that the size of `Type`
increases. We would either have to adapt the `assert_eq_size!` test or
get rid of it. Even if we add messages everywhere and get rid of the
file-and-line-variant in the enum, it's not enough to get back to the
current release-mode size of `Type`.
- We could generally discard `@Todo` meta information when using it in
tests. I think this would be a huge drawback. I like that we can have
the actual messages in the mdtest. And make sure we get the expected
`@Todo` type, not just any `@Todo`. It's also helpful when debugging
tests.
closes #14594
## Test Plan
```
cargo nextest run --release
```
Fix #14558
## Summary
- Add `typing.NoReturn` and `typing.Never` to known instances and infer
them as `Type::Never`
- Add `is_assignable_to` cases for `Type::Never`
I skipped emitting a diagnostic for when a function is annotated as
`NoReturn` but actually returns.
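For illustration, a sketch of the behavior this enables (the function names below are made up, not taken from the conformance tests):
```py
from typing import Never, NoReturn

def stop() -> NoReturn:
    raise RuntimeError("stop")

def consume(x: Never) -> None:
    # Both `NoReturn` and `Never` are inferred as `Type::Never`, and `Never`
    # is assignable to any other type.
    y: int = x
```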
## Test Plan
Added tests from
https://github.com/python/typing/blob/main/conformance/tests/specialtypes_never.py
except from generics and checking if the return value of the function
and the annotations match.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
Closes #14588
```py
x: Literal[42, "hello"] = 42 if bool_instance() else "hello"
reveal_type(x) # revealed: Literal[42] | Literal["hello"]
_ = ... if isinstance(x, str) else ...
# The `isinstance` test incorrectly narrows the type of `x`.
# As a result, `x` is revealed as Literal["hello"], but it should remain Literal[42, "hello"].
reveal_type(x) # revealed: Literal["hello"]
```
## Test Plan
mdtest included!
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This fix addresses panics related to invalid syntax like the following
where a `break` statement is used in a nested definition inside a
loop:
```py
while True:
    def b():
        x: int
        break
```
closes #14342
## Test Plan
* New corpus regression tests.
* New unit test to make sure we handle nested while loops correctly.
This test is passing on `main`, but can easily fail if the
`is_inside_loop` state isn't properly saved/restored.
## Summary
Add support for (non-generic) type aliases. The main motivation behind
this was to get rid of panics involving expressions in (generic) type
aliases. But it turned out the best way to fix it was to implement
(partial) support for type aliases.
```py
type IntOrStr = int | str
reveal_type(IntOrStr) # revealed: typing.TypeAliasType
reveal_type(IntOrStr.__name__) # revealed: Literal["IntOrStr"]
x: IntOrStr = 1
reveal_type(x) # revealed: Literal[1]
def f() -> None:
    reveal_type(x) # revealed: int | str
```
## Test Plan
- Updated corpus test allow list to reflect that we don't panic anymore.
- Added Markdown-based test for type aliases (`type_alias.md`)
## Summary
Fixes a panic related to sub-expressions of `typing.Union` where we fail
to store a type for the `int, str` tuple-expression in code like this:
```py
x: Union[int, str] = 1
```
relates to [my
comment](https://github.com/astral-sh/ruff/pull/14499#discussion_r1851794467)
on #14499.
## Test Plan
New corpus test
## Summary
Adds meta information to `Type::Todo`, allowing developers to easily
trace back the origin of a particular `@Todo` type they encounter.
Instead of `Type::Todo`, we now write either `type_todo!()`, which
creates a `@Todo[path/to/source.rs:123]` type with file and line
information, or `type_todo!("PEP 604 unions not supported")`,
which creates a variant with a custom message.
`Type::Todo` now contains a `TodoType` field. In release mode, this is
just a zero-sized struct, in order not to create any overhead. In debug
mode, this is an `enum` that contains the meta information.
`Type` implements `Copy`, which means that `TodoType` also needs to be
copyable. This limits the design space. We could intern `TodoType`, but
I discarded this option, as it would require us to have access to the
salsa DB everywhere we want to use `Type::Todo`. And it would have made
the macro invocations less ergonomic (requiring us to pass `db`).
So for now, the meta information is simply a `&'static str` / `u32` for
the file/line variant, or a `&'static str` for the custom message.
Anything involving a chain/backtrace of several `@Todo`s or similar is
therefore currently not implemented. Also because we currently don't see
any direct use cases for this, and because all of this will eventually
go away.
Note that the size of `Type` increases from 16 to 24 bytes, but only in
debug mode.
## Test Plan
- Observed the changes in Markdown tests.
- Added custom messages for all `Type::Todo`s that were revealed in the
tests.
- Ran red knot in release and debug mode on the following Python file:
```py
def f(x: int) -> int:
    reveal_type(x)
```
Prints `@Todo` in release mode and `@Todo(function parameter type)` in
debug mode.
Fix #14498
## Summary
This PR adds `typing.Union` support.
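A minimal sketch of the annotations this covers (not taken verbatim from the new tests):
```py
from typing import Union

x: Union[int, str] = 1
y: Union[int, None] = None

def f() -> Union[int, str]:
    return 1
```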
## Test Plan
I created new tests in mdtest.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
closes #14279
### Limitations of the Current Implementation
#### Incorrect Error Propagation
In the current implementation of lexicographic comparisons, if the
result of an `Eq` operation is `Ambiguous`, the comparison stops
immediately, returning a `bool` instance. While this may yield correct
inferences, it fails to capture unsupported-operation errors that might
occur in subsequent comparisons.
```py
class A: ...
(int_instance(), A()) < (int_instance(), A()) # should error
```
#### Weak Inference in Specific Cases
> Example: `(int_instance(), "foo") == (int_instance(), "bar")`
> Current result: `bool`
> Expected result: `Literal[False]`
`Eq` and `NotEq` have unique behavior in lexicographic comparisons
compared to other operators. Specifically:
- For `Eq`, if any non-equal pair exists within the tuples being
compared, we can immediately conclude that the tuples are not equal.
- For `NotEq`, if any equal pair exists, we can conclude that the tuples
are unequal.
```py
a = (str_instance(), int_instance(), "foo")
reveal_type(a == a) # revealed: bool
reveal_type(a != a) # revealed: bool
b = (str_instance(), int_instance(), "bar")
reveal_type(a == b) # revealed: bool # should be Literal[False]
reveal_type(a != b) # revealed: bool # should be Literal[True]
```
#### Incorrect Support for Non-Boolean Rich Comparisons
In CPython, aside from `==` and `!=`, tuple comparisons return a
non-boolean result as-is. Tuples do not convert the value into `bool`.
Note: If all pairwise `==` comparisons between elements in the tuples
return Truthy, the comparison then considers the tuples' lengths.
Regardless of the return type of the dunder methods, the final result
can still be a boolean.
```py
from __future__ import annotations
class A:
    def __eq__(self, o: object) -> str:
        return "hello"
    def __ne__(self, o: object) -> bytes:
        return b"world"
    def __lt__(self, o: A) -> float:
        return 3.14
a = (A(), A())
reveal_type(a == a) # revealed: bool
reveal_type(a != a) # revealed: bool
reveal_type(a < a) # revealed: bool # should be: `float | Literal[False]`
```
### Key Changes
One of the major changes is that comparisons no longer end with a `bool`
result when a pairwise `Eq` result is `Ambiguous`. Instead, the function
attempts to infer all possible cases and unions the results. This
improvement allows for more robust type inference and better error
detection.
Additionally, as the function is now optimized for tuple comparisons,
the name has been changed from the more general
`infer_lexicographic_comparison` to `infer_tuple_rich_comparison`.
## Test Plan
mdtest included
## Summary
Previously, we panicked on expressions like `f"{v:{f'0.2f'}}"` because
we did not infer types for expressions nested inside format spec
elements.
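Spelled out as a standalone snippet, the failing case described above looks like this:
```py
v = 3.14159
# The format spec here is itself a nested f-string expression (`{f'0.2f'}`);
# types for expressions nested inside format spec elements were previously
# not inferred, which caused the panic.
s = f"{v:{f'0.2f'}}"
```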
## Test Plan
```
cargo nextest run -p red_knot_workspace -- --ignored linter_af linter_gz
```
## Summary
Add type narrowing for `type(x) is C` conditions (and `else` clauses of
`type(x) is not C` conditionals):
```py
if type(x) is A:
    reveal_type(x) # revealed: A
else:
    reveal_type(x) # revealed: A | B
```
closes: #14431, part of: #13694
## Test Plan
New Markdown-based tests.
## Summary
This patches up various missing paths where sub-expressions of type
annotations previously had no type attached. Examples include:
```py
tuple[int, str]
#     ~~~~~~~~
type[MyClass]
#    ~~~~~~~
Literal["foo"]
#       ~~~~~
Literal["foo", Literal[1, 2]]
#              ~~~~~~~~~~~~~
Literal[1, "a", random.illegal(sub[expr + ession])]
#               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
## Test Plan
```
cargo nextest run -p red_knot_workspace -- --ignored linter_af linter_gz
```
## Summary
This PR adds support for parsing and inferring types within string
annotations.
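For example (a sketch, not taken verbatim from the new tests):
```py
class Foo: ...

# The annotation is a string, but it is parsed and its type is inferred just
# like a regular annotation, so `x` is declared as `Foo`.
x: "Foo" = Foo()

def f() -> "list[Foo]":
    return []
```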
### Implementation (attempt 1)
This is preserved in
6217f48924.
The implementation here would separate the inference of string
annotations in the deferred query. This requires the following:
* Two ways of evaluating the deferred definitions - lazily and eagerly.
* An eager evaluation occurs right outside the definition query which in
this case would be in `binding_ty` and `declaration_ty`.
* A lazy evaluation occurs on demand like using the
`definition_expression_ty` to determine the function return type and
class bases.
* The above point means that when trying to get the binding type for a
variable in an annotated assignment, the definition query won't include
the type. So, it'll require going through the deferred query to get the
type.
This has the following limitations:
* Nested string annotations, although not necessarily a useful feature, are
difficult to implement unless we convert the implementation into an
infinite loop.
* Partial string annotations require a complex layout because inferring
the types for the stringified and non-stringified parts of the annotation
is done in separate queries. This means we need to maintain additional
information.
### Implementation (attempt 2)
This is the final diff in this PR.
The implementation here does the complete inference of string annotations
in the same definition query by maintaining certain state while trying
to infer different parts of an expression and taking decisions
accordingly. These are:
* Allow names that are part of a string annotation to not exist in the
symbol table. For example, in `x: "Foo"`, if the `Foo` symbol is not
defined then it won't exist in the symbol table even though it's being
used. This is an invariant which is allowed only for symbols in a
string annotation.
* Similarly, name lookup is updated to do the same, and if the symbol
doesn't exist, then it's considered unbound.
* Store the final type of a string annotation on the string expression
itself and not on any of the sub-expressions that are created after
parsing. This is because those sub-expressions won't exist in the
semantic index.
Design document:
https://www.notion.so/astral-sh/String-Annotations-12148797e1ca801197a9f146641e5b71?pvs=4
Closes: #13796
## Test Plan
* Add various test cases in our markdown framework
* Run `red_knot` on LibCST (contains a lot of string annotations,
specifically
https://github.com/Instagram/LibCST/blob/main/libcst/matchers/_matcher_base.py),
FastAPI (good amount of annotated code including `typing.Literal`) and
compare against the `main` branch output
## Summary
Add a typed representation of function signatures (parameters and return
type) and infer it correctly from a function.
Convert existing usage of function return types to use the signature
representation.
This does not yet add inferred types for parameters within function body
scopes based on the annotations, but it should be easy to add as a next
step.
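As a rough illustration of what the signature representation covers (a sketch; the comments describe assumptions, not test output):
```py
def f(x: int, y: str = "hello") -> bool:
    # Annotations are captured in the function's signature (parameters and
    # return type), but they are not yet propagated into the body scope.
    return True

# Call sites can now derive the return type from the typed signature.
result = f(1)
```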
Part of #14161 and #13693.
## Test Plan
Added tests.
## Summary
This fixes several panics related to invalid assignment targets. All of
these previously led to a crash:
```py
(x.y := 1) # only name-expressions are valid targets of named expressions
([x, y] := [1, 2]) # same
(x, y): tuple[int, int] = (2, 3) # tuples are not valid targets for annotated assignments
(x, y) += 2 # tuples are not valid targets for augmented assignments
```
closes #14321, closes #14322
## Test Plan
I symlinked four files from `crates/ruff_python_parser/resources` into
the red knot corpus, as they seemed like ideal test files for this exact
scenario. I think eventually, it might be a good idea to simply include *all*
invalid-syntax examples from the parser tests into red knot's corpus (I believe
we're actually not too far from that goal). Or expand the scope of the corpus
test to this directory. Then we can get rid of these symlinks again.
## Summary
This avoids a panic inside `TypeInferenceBuilder::infer_type_parameters`
when encountering generic type aliases:
```py
type ListOrSet[T] = list[T] | set[T]
```
To fix this properly, we would have to treat type aliases as being their own
annotation scope [1]. The left-hand side is a definition for the type parameter
`T`, which is being used in the special annotation scope on the right-hand side,
similar to how it works for generic functions and classes.
[1] https://docs.python.org/3/reference/compound_stmts.html#generic-type-aliases
closes #14307
## Test Plan
Added new example to the corpus.
When we look up the types of class bases or keywords (`metaclass`), we
currently do this little dance: if there are type params, then look up
the type using `SemanticModel` in the type-params scope; if not, look up
the type directly in the definition's own scope, with support for
deferred types.
With inference of function parameter types, I'm now adding another case
of this same dance, so I'm motivated to make it a bit more ergonomic.
Add support to `definition_expression_ty` to handle any sub-expression
of a definition, whether it is in the definition's own scope or in a
type-params sub-scope.
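For illustration, the two cases this unifies (the class names are made up):
```py
class Base: ...

# Without type parameters: the base-class expression `Base` is resolved in
# the class definition's own scope.
class Plain(Base): ...

# With type parameters: `Base` and the `metaclass` keyword are resolved in
# the implicit type-params sub-scope introduced by `[T]`.
class WithParams[T](Base, metaclass=type): ...
```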
Related to both #13693 and #14161.
## Summary
Use the memory address to uniquely identify AST nodes, instead of
relying on source range and kind. The latter fails for ASTs resulting
from invalid syntax examples. See #14313 for details.
Also results in a 1-2% speedup
(67349cf55f)
closes #14313
## Review
Here are the places where we use `NodeKey` directly or indirectly (via
`ExpressionNodeKey` or `DefinitionNodeKey`):
```rs
// semantic_index.rs
pub(crate) struct SemanticIndex<'db> {
// [...]
/// Map expressions to their corresponding scope.
scopes_by_expression: FxHashMap<ExpressionNodeKey, FileScopeId>,
/// Map from a node creating a definition to its definition.
definitions_by_node: FxHashMap<DefinitionNodeKey, Definition<'db>>,
/// Map from a standalone expression to its [`Expression`] ingredient.
expressions_by_node: FxHashMap<ExpressionNodeKey, Expression<'db>>,
// [...]
}
// semantic_index/builder.rs
pub(super) struct SemanticIndexBuilder<'db> {
// [...]
scopes_by_expression: FxHashMap<ExpressionNodeKey, FileScopeId>,
definitions_by_node: FxHashMap<ExpressionNodeKey, Definition<'db>>,
expressions_by_node: FxHashMap<ExpressionNodeKey, Expression<'db>>,
}
// semantic_index/ast_ids.rs
pub(crate) struct AstIds {
/// Maps expressions to their expression id.
expressions_map: FxHashMap<ExpressionNodeKey, ScopedExpressionId>,
/// Maps expressions which "use" a symbol (that is, [`ast::ExprName`]) to a use id.
uses_map: FxHashMap<ExpressionNodeKey, ScopedUseId>,
}
pub(super) struct AstIdsBuilder {
expressions_map: FxHashMap<ExpressionNodeKey, ScopedExpressionId>,
uses_map: FxHashMap<ExpressionNodeKey, ScopedUseId>,
}
```
## Test Plan
Added two failing examples to the corpus.
## Summary
Fixes a failing debug assertion that triggers for the following code:
```py
match some_int:
    case x:=2:
        pass
```
closes #14305
## Test Plan
Added problematic code example to corpus.
## Summary
- Emit diagnostics when looking up (possibly) unbound attributes
- More explicit test assertions for unbound symbols
- Review remaining call sites of `Symbol::ignore_possibly_unbound`. Most
of them are something like `builtins_symbol(self.db,
"Ellipsis").ignore_possibly_unbound().unwrap_or(Type::Unknown)` which
look okay to me, unless we want to emit additional diagnostics. There is
one additional case in enum literal handling, which has a TODO comment
anyway.
part of #14022
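For illustration, the shape of code that now produces a possibly-unbound-attribute diagnostic (a sketch under assumed semantics, not taken from the new tests):
```py
def coin_flip() -> bool:
    return True

class C:
    if coin_flip():
        attr: int = 1

# `attr` is only conditionally bound on `C`, so this lookup is reported as
# possibly unbound.
x = C.attr
```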
## Test Plan
New MD tests for (possibly) unbound attributes.
## Summary
This adds a new diagnostic when possibly unbound symbols are imported.
The `TODO` comment had a question mark, so I'm not sure if this is
really something that we want.
This does not touch the un*declared* case, yet.
relates to: #14022
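For illustration, the pattern that triggers the new diagnostic (a sketch; the module and symbol names are made up):
```py
# maybe_defined.py
import sys

if sys.platform == "linux":
    symbol = 1

# main.py
# `symbol` is only conditionally bound in `maybe_defined`, so importing it is
# flagged as possibly unbound.
from maybe_defined import symbol
```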
## Test Plan
Updated already existing tests with new diagnostics
## Summary
Apart from one small functional change, this is mostly a refactoring of
the `Symbol` API:
- Rename `as_type` to the more explicit `ignore_possibly_unbound`, no
functional change
- Remove `unwrap_or_unknown` in favor of the more explicit
`.ignore_possibly_unbound().unwrap_or(Type::Unknown)`, no functional
change
- Consistently call it "possibly unbound" (not "may be unbound")
- Rename `replace_unbound_with` to `or_fall_back_to` and properly handle
boundness of the fallback. This is the only functional change (it did not
have any impact on existing tests).
relates to: #14022
## Test Plan
New unit tests for `Symbol::or_fall_back_to`
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>