
## Summary

Part of #15383, this PR adds support for overloaded callables. Typing spec: https://typing.python.org/en/latest/spec/overload.html

Specifically, it does the following:

1. Update the `FunctionType::signature` method to return signatures from a possibly overloaded callable using a new `FunctionSignature` enum
2. Update `CallableType` to accommodate overloaded callables by changing the inner type to `Box<[Signature]>`
3. Update the relation methods on `CallableType` with logic specific to overloads
4. Update the display of callable types to show a list of signatures enclosed in parentheses
5. Update the `CallableTypeOf` special form to recognize overloaded callables
6. Update subtyping, assignability, and the fully-static check to account for callables (equivalence is planned as a follow-up)

(2) needs to be done in this PR because otherwise I'd need to add a workaround for `into_callable_type`, and I thought it would be best to include it here.

For (2), another possible design would be to convert `CallableType` into an enum with two variants, `CallableType::Single` and `CallableType::Overload`, but I decided to go with `Box<[Signature]>` for now to (a) mirror the `overload` field on `CallableSignature` and (b) avoid a larger refactor in this PR. Splitting the two kinds of callables more cleanly could be done in a follow-up.

### Design

There were two main candidates for how to represent the overloaded definitions:

1. Include it in the existing infrastructure, which is what this PR does by recognizing all the signatures within the `FunctionType::signature` method
2. Create a new `Overload` type variant

<details><summary>For context, this is what I had in mind with the new type variant:</summary>
<p>

```rs
pub enum Type {
    FunctionLiteral(FunctionType),
    Overload(OverloadType),
    BoundMethod(BoundMethodType),
    ...
}

pub struct OverloadType {
    // FunctionLiteral or BoundMethod
    overloads: Box<[Type]>,
    // FunctionLiteral or BoundMethod
    implementation: Option<Type>
}

pub struct BoundMethodType {
    kind: BoundMethodKind,
    self_instance: Type,
}

pub enum BoundMethodKind {
    Function(FunctionType),
    Overload(OverloadType),
}
```

</p>
</details>

The main reasons to choose (1) are the simplicity of the implementation and reuse of the existing infrastructure. It also avoids the complications a new type variant would bring, specifically around the distinction between functions and methods, which would require the overload type to store `Type` instead.

### Implementation

The core logic is how to collect all the overloaded functions. This PR does it by recording a **use** on the `Identifier` node that represents the function name in the use-def map. This is then used to fetch the previous symbol with the same name. This way, the signatures are propagated from top to bottom (from the first overload down to the final overload or the implementation) with each function / method. For example:

```py
from typing import overload

@overload
def foo(x: int) -> int: ...
@overload
def foo(x: str) -> str: ...
def foo(x: int | str) -> int | str:
    return x
```

Here, each definition of `foo` knows about all the signatures that come before it. So, the first overload only sees itself, the second sees the first and itself, and so on until the implementation or the final overload.

This approach required some updates, specifically recognizing the `Identifier` node to record the function use, because it doesn't use `ExprName`.
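For illustration, here is a minimal, self-contained sketch of the kind of enum described in (1): a function exposes either a single signature or the ordered list of `@overload` signatures collected so far. The field shapes, the `display` member, and the `iter` helper are assumptions made for this write-up, not the actual red-knot types.

```rs
// Minimal sketch only: the shapes of `Signature` and `FunctionSignature`
// here are hypothetical stand-ins, not the actual ruff definitions.
#[derive(Debug)]
struct Signature {
    // Elided: parameter and return type information; a display string stands in.
    display: String,
}

/// A function's signature(s): either a single signature, or the ordered list
/// of `@overload` signatures collected so far (top to bottom).
#[derive(Debug)]
enum FunctionSignature {
    Single(Signature),
    Overloaded(Vec<Signature>),
}

impl FunctionSignature {
    /// Iterate over all signatures regardless of variant.
    fn iter(&self) -> std::slice::Iter<'_, Signature> {
        match self {
            FunctionSignature::Single(signature) => std::slice::from_ref(signature).iter(),
            FunctionSignature::Overloaded(signatures) => signatures.iter(),
        }
    }
}

fn main() {
    let overloaded = FunctionSignature::Overloaded(vec![
        Signature { display: "(x: int) -> int".to_string() },
        Signature { display: "(x: str) -> str".to_string() },
    ]);
    // Prints each collected overload signature in definition order.
    for signature in overloaded.iter() {
        println!("{}", signature.display);
    }
}
```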
## Test Plan

Update existing test cases which were limited by the missing overload support, and add test cases for the following:

* Valid overloads as functions, methods, generics, and version-specific definitions
* Invalid overloads as stated in https://typing.python.org/en/latest/spec/overload.html#invalid-overload-definitions (implementation will be done in a follow-up)
* Various relations: fully static, subtyping, and assignability (others in a follow-up)

## Ecosystem changes

_WIP_

After going through the ecosystem changes (there are a lot!), here's what I've found:

We need an assignability check between a callable type and a class literal, because a lot of builtins are defined as classes in typeshed whose constructor method is overloaded, e.g., `map`, `sorted`, `list.sort`, `max`, `min` with the `key` parameter, `collections.abc.defaultdict`, etc. (https://github.com/astral-sh/ruff/issues/17343). This makes up most of the ecosystem diff, **roughly 70 diagnostics**. For example:

```py
from collections import defaultdict

# red-knot: No overload of bound method `__init__` matches arguments [lint:no-matching-overload]
defaultdict(int)

# red-knot: No overload of bound method `__init__` matches arguments [lint:no-matching-overload]
defaultdict(list)

class Foo:
    def __init__(self, x: int):
        self.x = x

# red-knot: No overload of function `__new__` matches arguments [lint:no-matching-overload]
map(Foo, ["a", "b", "c"])
```

Duplicate diagnostics in unpacking (https://github.com/astral-sh/ruff/issues/16514) account for **~16 diagnostics**.

Support for the `callable` builtin, which requires `TypeIs` support, accounts for **5 diagnostics**. For example:

```py
from typing import Any

def _(x: Any | None) -> None:
    if callable(x):
        # red-knot: `Any | None`
        # Pyright: `(...) -> object`
        # mypy: `Any`
        # pyrefly: `(...) -> object`
        reveal_type(x)
```

Narrowing on `assert` accounts for **11 diagnostics**. This is being worked on in https://github.com/astral-sh/ruff/pull/17345. For example:

```py
import re

match = re.search("", "")
assert match
match.group()  # error: [possibly-unbound-attribute]
```

Others:

* `Self`: 2
* Type aliases: 6
* Generics: 3
* Protocols: 13
* Unpacking in comprehension: 1 (https://github.com/astral-sh/ruff/pull/17396)

## Performance

Refer to https://github.com/astral-sh/ruff/pull/17366#issuecomment-2814053046.
# Binary operations on integers
## Basic Arithmetic
```py
reveal_type(2 + 1)  # revealed: Literal[3]
reveal_type(3 - 4)  # revealed: Literal[-1]
reveal_type(3 * -1)  # revealed: Literal[-3]
reveal_type(-3 // 3)  # revealed: Literal[-1]
reveal_type(-3 / 3)  # revealed: float
reveal_type(5 % 3)  # revealed: Literal[2]

# error: [unsupported-operator] "Operator `+` is unsupported between objects of type `Literal[2]` and `Literal["f"]`"
reveal_type(2 + "f")  # revealed: Unknown

def lhs(x: int):
    reveal_type(x + 1)  # revealed: int
    reveal_type(x - 4)  # revealed: int
    reveal_type(x * -1)  # revealed: int
    reveal_type(x // 3)  # revealed: int
    reveal_type(x / 3)  # revealed: int | float
    reveal_type(x % 3)  # revealed: int

def rhs(x: int):
    reveal_type(2 + x)  # revealed: int
    reveal_type(3 - x)  # revealed: int
    reveal_type(3 * x)  # revealed: int
    reveal_type(-3 // x)  # revealed: int
    reveal_type(-3 / x)  # revealed: int | float
    reveal_type(5 % x)  # revealed: int

def both(x: int):
    reveal_type(x + x)  # revealed: int
    reveal_type(x - x)  # revealed: int
    reveal_type(x * x)  # revealed: int
    reveal_type(x // x)  # revealed: int
    reveal_type(x / x)  # revealed: int | float
    reveal_type(x % x)  # revealed: int
```
## Power
For power, if the result fits in the int literal type, it will be a `Literal` type. Otherwise, the outcome is `int`.
```py
largest_u32 = 4_294_967_295

reveal_type(2**2)  # revealed: Literal[4]
reveal_type(1 ** (largest_u32 + 1))  # revealed: int
reveal_type(2**largest_u32)  # revealed: int

def variable(x: int):
    reveal_type(x**2)  # revealed: int
    # TODO: should be `Any` (overload 5 on `__pow__`), requires correct overload matching
    reveal_type(2**x)  # revealed: int
    # TODO: should be `Any` (overload 5 on `__pow__`), requires correct overload matching
    reveal_type(x**x)  # revealed: int
```
If the second argument is `<0`, a `float` is returned at runtime. If the first argument is `<0` but the second argument is `>=0`, an `int` is still returned:
```py
reveal_type(1**0)  # revealed: Literal[1]
reveal_type(0**1)  # revealed: Literal[0]
reveal_type(0**0)  # revealed: Literal[1]
reveal_type((-1) ** 2)  # revealed: Literal[1]
reveal_type(2 ** (-1))  # revealed: float
reveal_type((-1) ** (-1))  # revealed: float
```
## Division by Zero
This error is really outside the current Python type system, because e.g. `int.__truediv__` and friends are not annotated to indicate that it's an error, and we don't even have a facility to permit such an annotation. So arguably divide-by-zero should be a lint error rather than a type checker error. But we choose to go ahead and error in the cases that are very likely to be an error: dividing something typed as `int` or `float` by something known to be `Literal[0]`.

This isn't definitely an error, because the object typed as `int` or `float` could be an instance of a custom subclass which overrides division behavior to handle zero without error. But if this unusual case occurs, the error can be avoided by explicitly typing the dividend as that safe custom subclass; we only emit the error if the LHS type is exactly `int` or `float`, not if it's a subclass.
```py
a = 1 / 0  # error: "Cannot divide object of type `Literal[1]` by zero"
reveal_type(a)  # revealed: float

b = 2 // 0  # error: "Cannot floor divide object of type `Literal[2]` by zero"
reveal_type(b)  # revealed: int

c = 3 % 0  # error: "Cannot reduce object of type `Literal[3]` modulo zero"
reveal_type(c)  # revealed: int

# error: "Cannot divide object of type `int` by zero"
reveal_type(int() / 0)  # revealed: int | float

# error: "Cannot divide object of type `Literal[1]` by zero"
reveal_type(1 / False)  # revealed: float

# error: [division-by-zero] "Cannot divide object of type `Literal[True]` by zero"
True / False

# error: [division-by-zero] "Cannot divide object of type `Literal[True]` by zero"
bool(1) / False

# error: "Cannot divide object of type `float` by zero"
reveal_type(1.0 / 0)  # revealed: int | float

class MyInt(int): ...

# No error for a subclass of int
reveal_type(MyInt(3) / 0)  # revealed: int | float
```