This reverts https://github.com/astral-sh/ruff/pull/13799, and restores
the previous behavior, which I think was the most pragmatic and useful
version of the divide-by-zero error, if we are going to emit it at all.
In general, a type checker _does_ emit diagnostics when it can detect
something that will definitely be a problem for some inhabitants of a
type, but not others. For example, `x.foo` if `x` is typed as `object`
is a type error, even though some inhabitants of the type `object` will
have a `foo` attribute! The correct fix is to make your type annotations
more precise, so that `x` is assigned a type which definitely has the
`foo` attribute.
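For illustration, here is a minimal sketch of that analogy (the `HasFoo`
protocol and the error wording are hypothetical, not actual checker output):
```py
from typing import Protocol

class HasFoo(Protocol):
    foo: int

def f(x: object) -> None:
    x.foo  # error: `object` has no attribute `foo`

def g(x: HasFoo) -> None:
    x.foo  # fine: the more precise annotation guarantees the attribute
```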
If we emit divide-by-zero errors at all, they should follow the same
logic. Dividing an inhabitant of the type `int` by zero may not raise an
error at runtime, if that inhabitant is an instance of a subclass of
`builtins.int` that overrides division; but more likely, it will. If you
don't want the diagnostic, you can clarify your type annotations to
require an instance of your safe subclass.
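A minimal sketch of that workaround, using a hypothetical `SafeInt`
subclass that overrides division:
```py
class SafeInt(int):
    def __truediv__(self, other: object) -> int:
        # Hypothetical override that never raises ZeroDivisionError.
        return 0

def f(x: SafeInt) -> None:
    # The more precise annotation requires the safe subclass, so (per the
    # behavior described here) no divide-by-zero diagnostic is emitted.
    x / 0
```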
Because the Python type system doesn't have the ability to explicitly
reflect the fact that divide-by-zero is an error in type annotations
(e.g. for `int.__truediv__`), or conversely to declare a type as safe
from divide-by-zero, or include a "nonzero integer" type which it is
always safe to divide by, the analogy doesn't fully apply. You can't
explicitly mark your subclass of `int` as safe from divide-by-zero; we
just semi-arbitrarily choose to silence the diagnostic for subclasses,
to avoid false positives.
Also, if we fully followed the above logic, we'd have to error on every
`int / int` because the RHS `int` might be zero! But this would likely
cause too many false positives, because of the lack of a "nonzero
integer" type.
So this is just a pragmatic choice to emit the diagnostic when it is
very likely to be an error. It's unclear how useful this diagnostic is
in practice, but this version of it is at least very unlikely to cause
harm.
If the LHS is just `int` or `float` type, that type includes custom
subclasses which can arbitrarily override division behavior, so we
shouldn't emit a divide-by-zero error in those cases.
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Add type inference for comparisons involving union types. For example:
```py
one_or_two = 1 if flag else 2
reveal_type(one_or_two <= 2) # revealed: Literal[True]
reveal_type(one_or_two <= 1) # revealed: bool
reveal_type(one_or_two <= 0) # revealed: Literal[False]
```
closes #13779
## Test Plan
See `resources/mdtest/comparison/unions.md`
## Summary
Fixes the bug described in #13514, where an unbound public type defaulted
to the type or `Unknown`, whereas it should only be the type if unbound.
## Test Plan
Added a new test case
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
This PR implements comparisons for (tuple, tuple).
It will close #13688 and complete an item in #13618 once merged.
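For a rough illustration of the kind of comparisons this covers (the
revealed types are assumptions based on lexicographic comparison of
literal elements, not verified output):
```py
# Lexicographic comparisons of tuples with literal elements could, in
# principle, be inferred as boolean literals:
reveal_type((1, 2) < (1, 3))   # e.g. Literal[True]
reveal_type((1, 2) >= (3, 4))  # e.g. Literal[False]
```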
## Test Plan
Basic tests are included for (tuple, tuple) comparisons.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
This PR adds support for unpacking a tuple expression in an assignment
statement where the target expression can be a tuple or a list (the
allowed sequence targets).
The implementation introduces a new `infer_assignment_target`, which can
then be used for other targets like the ones in `for` loops as well; it
delegates to `infer_definition`. The final implementation uses a
recursive function that visits the target expression in source order,
matching each element against the variable node that corresponds to the
definition, while keeping track of the corresponding position in the
type of the assignment value.
The logic also accounts for the number of elements on both sides, so
that the targets still match up even when a starred expression creates a
gap in between. For example, given `(a, *b, c) = (1, 2, 3)`, the type of
`a` will be `Literal[1]` and the type of `c` will be `Literal[3]`.
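A small sketch of the described behavior (the revealed types follow the
examples above and should be read as illustrative rather than verified
output):
```py
(a, b) = (1, "two")
reveal_type(a)  # revealed: Literal[1]
reveal_type(b)  # revealed: Literal["two"]

(x, *y, z) = (1, 2, 3)
reveal_type(x)  # revealed: Literal[1]
reveal_type(z)  # revealed: Literal[3]
```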
There are a couple of follow-ups that can be done:
* Use this logic for other target positions like `for` loops
* Add diagnostics for mismatched lengths between the LHS and RHS
## Test Plan
Add various test cases using the new markdown test framework.
Validate that existing test cases pass.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
Porting infer tests to the new markdown test framework.
Link to the corresponding issue: #13696
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
- Fix a bug with `… is not …` type guards.
Previously, in an example like
```py
x = [1]
y = [1]
if x is not y:
reveal_type(x)
```
we would infer a type of `list[int] & ~list[int]` (i.e. `Never`) for
`x` inside the conditional (instead of `list[int]`), since we built a
(negative) intersection with the type of the right-hand side (`y`).
However, as this example shows, this assumption can only be made for
singleton types (types with a single inhabitant) such as `None`.
- Add support for `… is …` type guards.
closes #13715
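A brief sketch of both behaviors for a singleton type like `None`
(illustrative; the exact revealed types are assumptions consistent with
the description above):
```py
def f(x: str | None) -> None:
    if x is None:
        reveal_type(x)  # revealed: None
    if x is not None:
        reveal_type(x)  # revealed: str

def g(x: list[int], y: list[int]) -> None:
    if x is not y:
        # `list[int]` is not a singleton type, so no narrowing applies.
        reveal_type(x)  # revealed: list[int]
```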
## Test Plan
Moved existing `narrow_…` tests to Markdown-based tests and added new
ones (including a regression test for the bug described above). Note
that this will create some conflicts with
https://github.com/astral-sh/ruff/pull/13719. I tried to establish the
correct organizational structure as proposed in
https://github.com/astral-sh/ruff/pull/13719#discussion_r1800188105
## Summary
Adds a markdown-based test framework for writing tests of type inference
and type checking. Fixes #11664.
Implements the basic required features. A markdown test file is a suite
of tests; each test can contain one or more Python files, with an
optionally specified path/name. The test writes all files to an
in-memory file system, runs red-knot, and matches the resulting
diagnostics against `Type: ` and `Error: ` assertions embedded in the
Python source as comments.
We will want to add features like incremental tests, setting custom
configuration for tests, writing non-Python files, testing syntax
errors, capturing full diagnostic output, etc. There's also plenty of
room for improved UX (colored output?).
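For a rough idea of what an embedded test snippet might look like, here
is a sketch; the assertion comment style is assumed from the examples
elsewhere in this document and may not match the framework exactly:
```py
x = 1
reveal_type(x)  # revealed: Literal[1]

# An assignment that should produce a diagnostic; the exact format for
# error assertions is assumed here, not verified:
y: int = "str"  # error: [invalid-assignment]
```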
## Test Plan
Lots of tests!
Sample of the current output when a test fails:
```
Running tests/inference.rs (target/debug/deps/inference-7c96590aa84de2a4)
running 1 test
test inference::path_1_resources_inference_numbers_md ... FAILED
failures:
---- inference::path_1_resources_inference_numbers_md stdout ----
inference/numbers.md - Numbers - Floats
/src/test.py
line 2: unexpected error: [invalid-assignment] "Object of type `Literal["str"]` is not assignable to `int`"
thread 'inference::path_1_resources_inference_numbers_md' panicked at crates/red_knot_test/src/lib.rs:60:5:
Some tests failed.
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
inference::path_1_resources_inference_numbers_md
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.19s
error: test failed, to rerun pass `-p red_knot_test --test inference`
```
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>