This PR extracts a lot of the complex logic in the `match_parameters`
and `check_types` methods of our call binding machinery into separate
helper types. This is setup for #18996, which will update this logic to
handle variadic arguments. To do so, it is helpful to have the
per-argument logic extracted into a method that we can call repeatedly
for each _element_ of a variadic argument.
This should be a pure refactoring, with no behavioral changes.
This PR updates our unpacking assignment logic to use the new tuple
machinery. As a result, we can now unpack variable-length tuples
correctly.
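A minimal sketch of the kind of unpacking this enables (the revealed types are illustrative):
```py
def f(t: tuple[int, *tuple[str, ...]]):
    first, *rest = t
    reveal_type(first)  # revealed: int
    reveal_type(rest)   # revealed: list[str]
```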
As part of this, the `TupleSpec` classes have been renamed to `Tuple`,
and can now contain any element (Rust) type, not just `Type<'db>`. The
unpacker uses a tuple of `UnionBuilder`s to maintain the types that will
be assigned to each target, as we iterate through potentially many union
elements on the rhs. We also add a new consuming iterator for tuples,
and update the `all_elements` methods to wrap the result in an enum
(similar to `itertools::Position`) letting you know which part of the
tuple each element appears in. I also added a new
`UnionBuilder::try_build`, which lets you specify a different fallback
type if the union contains no elements.
## Summary
Ensure that we correctly infer calls such as `tuple((1, 2))`,
`tuple(range(42))`, etc. Ensure that we emit errors on invalid calls
such as `tuple[int, str]()`.
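A hedged sketch of the intended behavior (the exact rendering of the inferred types may differ):
```py
reveal_type(tuple((1, 2)))     # a two-element tuple type
reveal_type(tuple(range(42)))  # a homogeneous tuple such as tuple[int, ...]

tuple[int, str]()  # error: a two-element tuple cannot be constructed from zero arguments
```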
## Test Plan
Mdtests
## Summary
Under preview 🧪 I've expanded rule `PYI016` to also flag type
union duplicates containing `None` and `Optional`.
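For example, a hedged sketch of code the expanded rule would flag:
```py
from typing import Optional

def f(x: Optional[int] | None): ...  # PYI016: `None` is already part of `Optional[int]`
```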
## Test Plan
Examples/tests have been added. I've made sure that the existing
examples did not change unless preview is enabled.
## Relevant Issues
* https://github.com/astral-sh/ruff/issues/18508 (discussing
introducing/extending a rule to flag `Optional[None]`)
* https://github.com/astral-sh/ruff/issues/18546 (where I discussed this
addition with @AlexWaygood)
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
Co-authored-by: Brent Westbrook <brentrwestbrook@gmail.com>
## Summary
I think this should be the last step before combining `OldDiagnostic`
and `ruff_db::Diagnostic`. We can't store a `NoqaCode` on
`ruff_db::Diagnostic`, so I converted the `noqa_code` field to an
`Option<String>` and then propagated this change to all of the callers.
I tried to use `&str` everywhere it was possible, so I think the
remaining `to_string` calls are necessary. I spent some time trying to
convert _everything_ to `&str` but ran into lifetime issues, especially
in the `FixTable`. Maybe we can take another look at that if it causes a
performance regression, but hopefully these paths aren't too hot. We
also avoid some `to_string` calls, so it might even out a bit too.
## Test Plan
Existing tests
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Most of the work here was doing some light refactoring to facilitate
sensible testing. That is, we don't want every test to have to list
every builtin, so we add some structure to the completion type returned.
Tests can now filter based on whether a completion is a builtin or not.
Otherwise, builtins are found using the existing infrastructure for
`object.attr` completions (where we hard-code the module name
`builtins`).
I did consider changing the sort order based on whether a completion
suggestion was a builtin or not. In particular, it seemed like it might
be a good idea to sort builtins after other scope-based completions,
but before the dunder and sunder attributes. Namely, it seems likely
that there is an inverse correlation between the size of a scope and
the likelihood of an item in that scope being used at any given point.
So it *might* be a good idea to prioritize the likelier candidates in
the completions returned.
Additionally, the number of items introduced by adding builtins is quite
large. So I wondered whether mixing them in with everything else would
become too noisy.
However, it's not totally clear to me that this is the right thing to
do. Right now, I feel like there is a very obvious lexicographic
ordering that makes "finding" the right suggestion to activate
potentially easier than if the ranking mechanism is less clear.
(Technically, the dunder and sunder attributes are not sorted
lexicographically, but I'd put forward that most folks don't have an
intuitive understanding of where `_` ranks lexicographically with
respect to "regular" letters. Moreover, since dunder and sunder
attributes are all grouped together, I think the ordering here ends up
being very obvious after even a quick glance.)
## Summary
Setting `TY_MEMORY_REPORT=full` will generate and print a memory usage
report to the CLI after a `ty check` run:
```
=======SALSA STRUCTS=======
`Definition` metadata=7.24MB fields=17.38MB count=181062
`Expression` metadata=4.45MB fields=5.94MB count=92804
`member_lookup_with_policy_::interned_arguments` metadata=1.97MB fields=2.25MB count=35176
...
=======SALSA QUERIES=======
`File -> ty_python_semantic::semantic_index::SemanticIndex`
metadata=11.46MB fields=88.86MB count=1638
`Definition -> ty_python_semantic::types::infer::TypeInference`
metadata=24.52MB fields=86.68MB count=146018
`File -> ruff_db::parsed::ParsedModule`
metadata=0.12MB fields=69.06MB count=1642
...
=======SALSA SUMMARY=======
TOTAL MEMORY USAGE: 577.61MB
struct metadata = 29.00MB
struct fields = 35.68MB
memo metadata = 103.87MB
memo fields = 409.06MB
```
Eventually, we should integrate these numbers into CI in some form. The
one limitation currently is that heap allocations in salsa structs (e.g.
interned values) are not tracked, but memoized values should have full
coverage. We may also want a peak memory usage counter (that accounts
for non-salsa memory), but that is relatively simple to profile manually
(e.g. `time -v ty check`) and would require a compile-time option to
avoid runtime overhead.
## Summary
This PR also suppresses the fix if the assignment expression target
shadows one of the lambda's parameters.
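A minimal, hypothetical sketch of the shadowing case in which the fix is now suppressed:
```py
# The walrus target `x` rebinds the lambda's own parameter, so the fix is skipped.
f = lambda x: (x := x + 1) + x
```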
Fixes #18675
## Test Plan
Add regression tests.
## Summary
Part of #15584
This PR adds a fix safety section to [fast-api-non-annotated-dependency
(FAST002)](https://docs.astral.sh/ruff/rules/fast-api-non-annotated-dependency/#fast-api-non-annotated-dependency-fast002).
It also re-words the availability section since I found it confusing.
The lint/fix was added in #11579 as always unsafe.
No reasoning is given in the original PR/code as to why this was chosen.
Example of why the fix is unsafe:
https://play.ruff.rs/3bd0566e-1ef6-4cec-ae34-3b07cd308155
```py
from fastapi import Depends, FastAPI, Query

app = FastAPI()

# Fix will remove the parameter default value
@app.get("/items/")
async def read_items(commons: dict = Depends(common_parameters)):
    return commons

# Fix will delete comment and change default parameter value
@app.get("/items/")
async def read_items_1(q: str = Query(  # This comment will be deleted
        default="rick")):
    return q
```
After fixing both instances of `FAST002`:
```py
from fastapi import Depends, FastAPI, Query
from typing import Annotated

app = FastAPI()

# Fix will remove the parameter default value
@app.get("/items/")
async def read_items(commons: Annotated[dict, Depends(common_parameters)]):
    return commons

# Fix will delete comment and change default parameter value
@app.get("/items/")
async def read_items_1(q: Annotated[str, Query()] = "rick"):
    return q
```
It turns out that astral-sh/ty#18692 also fixed astral-sh/ty#203. This
PR adds a regression test for it. (Locally, I "unfixed" the bug and
confirmed that this is actually a regression test.)
Fixes astral-sh/ty#203
It turns out that `annotate-snippets` doesn't do a great job of
consistently handling tabs. The intent of the implementation is clearly
to expand tabs into 4 ASCII whitespace characters. But there are a few
places where the column computation wasn't taking this expansion into
account. In particular, the `unicode-width` crate returns `None` for a
`\t` input, and `annotate-snippets` would in turn treat this as either
zero columns or one column. Both are wrong.
In patching this, it caused one of the existing `annotate-snippets`
tests to fail. I spent a fair bit of time on it trying to fix it before
coming to the conclusion that the test itself was wrong. In particular,
the annotation ranges are 4 bytes off. However, when the range was
wrong, the buggy code was rendering the example as intended since `\t`
characters were treated as taking up zero columns of space. Now that
they are correctly computed as taking up 4 columns of space, the offsets
of the test needed to be adjusted.
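To illustrate the column rule the fix enforces, here is the tab-expansion logic transposed to a small Python sketch (setting aside wide characters, which `unicode-width` handles separately):
```py
def display_width(line: str, tab_width: int = 4) -> int:
    """Compute display columns; a tab always expands to `tab_width` columns."""
    # The bug was treating "\t" as zero or one column; the fix makes it 4.
    return sum(tab_width if ch == "\t" else 1 for ch in line)
```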
Fixes #670
## Summary
Adds a new micro-benchmark as a regression test for
https://github.com/astral-sh/ty/issues/627.
## Test Plan
Ran the benchmark on the parent commit of
89d915a1e3,
and verified that it took > 1s, while it takes ~10 ms after the fix.
## Summary
Format conflicting declared types as
```
`str`, `int` and `bytes`
```
Thanks to @AlexWaygood for the initial draft.
@dcreager, looking forward to your one-character follow-up PR.
## Summary
This PR includes a behavioral change to how we infer types for public
uses of symbols within a module. Where we would previously use the type
that a use at the end of the scope would see, we now consider all
reachable bindings and union the results:
```py
x = None

def f():
    reveal_type(x)  # previously `Unknown | Literal[1]`, now `Unknown | None | Literal[1]`

f()
x = 1
f()
```
This helps especially in cases where the end of the scope is not
reachable:
```py
def outer(x: int):
    def inner():
        reveal_type(x)  # previously `Unknown`, now `int`

    raise ValueError
```
This PR also proposes to skip the boundness analysis of public uses.
This is consistent with the "all reachable bindings" strategy, because
the implicit `x = <unbound>` binding is also always reachable, and we
would have to emit "possibly-unresolved" diagnostics for every public
use otherwise. Changing this behavior allows common use-cases like the
following to type check without any errors:
```py
def outer(flag: bool):
    if flag:
        x = 1

    def inner():
        print(x)  # previously: possibly-unresolved-reference, now: no error
```
closes https://github.com/astral-sh/ty/issues/210
closes https://github.com/astral-sh/ty/issues/607
closes https://github.com/astral-sh/ty/issues/699
## Follow up
It is now possible to resolve the following TODO, but I would like to do
that as a follow-up, because it requires some changes to how we treat
implicit attribute assignments, which could result in ecosystem changes
that I'd like to see separately.
315fb0f3da/crates/ty_python_semantic/src/semantic_index/builder.rs (L1095-L1117)
## Ecosystem analysis
[**Full report**](https://shark.fish/diff-public-types.html)
* This change obviously removes a lot of `possibly-unresolved-reference`
diagnostics (7818) because we do not analyze boundness for public uses
of symbols inside modules anymore.
* As the primary goal here, this change also removes a lot of
false-positive `unresolved-reference` diagnostics (231) in scenarios
like this:
```py
def _(flag: bool):
    if flag:
        x = 1

    def inner():
        x

    raise
```
* This change also introduces some new false positives for cases like:
```py
def _():
    x = None
    x = "test"

    def inner():
        x.upper()  # Attribute `upper` on type `Unknown | None | Literal["test"]` is possibly unbound
```
We have test cases for these situations and it's plausible that we can
improve this in a follow-up.
## Test Plan
New Markdown tests
## Summary
This function is huge, and hugely indented. This PR breaks most of it
out into two helper functions: `KnownFunction::check_call()` and
`KnownClass::check_call()`.
My immediate motivation is that we need to add yet more special cases to
this function in order to properly handle `tuple` instantiations and
instantiations of tuple subclasses. But I really don't relish the
thought of doing that with the function's current structure 😆
## Test Plan
Existing tests all pass. No new ones are added; this is a pure refactor
that should have no functional change.
## Summary
Here's the part that was split out of #18906. I wanted to move these
into the rule files since the rest of the rules in
`deferred_scope`/`statement` have that same structure of implementations
being in the rule definition file. It also resolves the dilemma of where
to put the comment, at least for these rules.
## Test Plan
N/A, no test/functionality affected
Summary
--
Closes #18849 by adding a `## Known issues` section describing the
potential performance issues when fixing nested iterables. I also
deleted the comment check since the fix is already unsafe and added a
note to the `## Fix safety` docs.
Test Plan
--
Existing tests, updated to allow a fix when comments are present since
the fix is already unsafe.
Summary
--
This PR resolves the easiest part of
https://github.com/astral-sh/ruff/issues/18502 by adding an autofix that
just adds `from __future__ import annotations` at the top of the file,
in the same way as FA102, which already has an identical unsafe fix.
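A hedged before/after sketch, assuming a file flagged by the rule:
```py
# Before
from typing import Optional

def f(x: Optional[int]) -> None: ...

# After: the fix only prepends the import; the annotations are left unchanged
from __future__ import annotations

from typing import Optional

def f(x: Optional[int]) -> None: ...
```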
Test Plan
--
Existing snapshots, updated to add the fixes.
## Summary
Add type narrowing inside comprehensions:
```py
def _(xs: list[int | None]):
    [reveal_type(x) for x in xs if x is not None]  # revealed: int
```
closes https://github.com/astral-sh/ty/issues/680
## Test Plan
* New Markdown tests
* Made sure the example from https://github.com/astral-sh/ty/issues/680
now checks without errors
* Made sure that all removed ecosystem diagnostics were actually false
positives
## Summary
From @ntBre
https://github.com/astral-sh/ruff/pull/18906#discussion_r2162843366 :
> This could be a good target for a follow-up PR, but we could fold
these `if checker.is_rule_enabled { checker.report_diagnostic` checks
into calls to `checker.report_diagnostic_if_enabled`. I didn't notice
these when adding that method.
>
> Also, the docs on `Checker::report_diagnostic_if_enabled` and
`LintContext::report_diagnostic_if_enabled` are outdated now that the
`Rule` conversion is basically free 😅
>
> No pressure to take on this refactor, just an idea if you're
interested!
This PR folds those calls. I also updated the doc comments by copying
from `report_diagnostic`.
Note: it seems odd to me that the doc comment for `Checker` says
`Diagnostic` while the one for `LintContext` says `OldDiagnostic`; I'm
not sure whether fixing that inconsistency needs a bigger docs change.
<details>
<summary>Python script to do the changes</summary>
This script assumes it is placed in the top level `ruff` directory (ie
next to `.git`/`crates`/`README.md`)
```py
import re
from pathlib import Path

ruff_crates = Path(__file__).parent / "crates"

for path in ruff_crates.rglob("**/*.rs"):
    with path.open(encoding="utf-8", newline="") as f:
        original_content = f.read()
    if "is_rule_enabled" not in original_content or "report_diagnostic" not in original_content:
        continue
    original_content_position = 0
    changed_content = ""
    for match in re.finditer(r"(?m)(?:^[ \n]*|(?<=(?P<else>else )))if[ \n]+checker[ \n]*\.is_rule_enabled\([ \n]*Rule::\w+[ \n]*\)[ \n]*{[ \n]*checker\.report_diagnostic\(", original_content):
        # Content between last match and start of this one is unchanged
        changed_content += original_content[original_content_position:match.start()]
        # If this was an else if, a { needs to be added at the start
        if match.group("else"):
            changed_content += "{"
        # This will result in bad formatting, but the precommit cargo format will handle it
        changed_content += "checker.report_diagnostic_if_enabled("
        # Depth tracking would fail if a string/comment included a { or }, but unlikely given the context
        depth = 1
        position = match.end()
        while depth > 0:
            if original_content[position] == "{":
                depth += 1
            if original_content[position] == "}":
                depth -= 1
            position += 1
        # position - 1 is the closing }
        changed_content += original_content[match.end():position - 1]
        # If this was an else if, a } needs to be added at the end
        if match.group("else"):
            changed_content += "}"
        # Skip the closing }
        original_content_position = position
        if original_content[original_content_position] == "\n":
            # If the } is followed by a \n, also skip it for better formatting
            original_content_position += 1
    # Add remaining content between last match and file end
    changed_content += original_content[original_content_position:]
    with path.open("w", encoding="utf-8", newline="") as f:
        f.write(changed_content)
```
</details>
## Test Plan
N/A, no tests/functionality affected.
## Summary
While making some of my other changes, I noticed that some of the lints
were missing comments with their lint code, or had the wrong code in the
comment. These comments are super useful since they make it easy to
quickly find the source code of a lint, so I decided to try to
normalize them.
Most of them were fairly straightforward, just adding a doc
comment/comment in the appropriate place.
I decided to make all of the `Pylint` rules use the `PL` prefix.
Previously it was split between unprefixed and prefixed codes, but I
normalized on the prefixed form since that's what the docs use, and a
prefixed code will show up in searches for the unprefixed code, while
the reverse is not true.
I also ran into a lot of rules with implementations in "non-standard"
places (where "standard" means inside a file matching the glob
`crates/ruff_linter/rules/*/rules/**/*.rs` and/or the same rule file
where the rule `struct`/`ViolationMetadata` is defined).
I decided to move all the implementations out of
`crates/ruff_linter/src/checkers/ast/analyze/deferred_scopes.rs` and
into their own files, since that is what the rest of the rules in
`deferred_scopes.rs` did, and those were just the outliers.
There were several rules which I did not end up moving, which you can
see as the extra paths I had to add to my python code besides the
"standard" glob. These rules are generally the error-type rules that
just wrap an error from the parser, and have very small
implementations/are very tightly linked to the module they are in, and
generally every rule of that type was implemented in module instead of
in the "standard" place.
Resolving that requires answering a question I don't think I'm equipped
to handle: Is the point of these comments to give quick access to the
rule definition/docs, or the rule implementation? For all the rules with
implementations in the "standard" location this isn't a problem, as they
are the same, but it is an issue for all of these error type rules. In
the end I chose to leave the implementations where they were, but I'm
not sure if that was the right choice.
<details>
<summary>Python script I wrote to find missing comments</summary>
This script assumes it is placed in the top level `ruff` directory (ie
next to `.git`/`crates`/`README.md`)
```py
import re
from copy import copy
from pathlib import Path

linter_to_code_prefix = {
    "Airflow": "AIR",
    "Eradicate": "ERA",
    "FastApi": "FAST",
    "Flake82020": "YTT",
    "Flake8Annotations": "ANN",
    "Flake8Async": "ASYNC",
    "Flake8Bandit": "S",
    "Flake8BlindExcept": "BLE",
    "Flake8BooleanTrap": "FBT",
    "Flake8Bugbear": "B",
    "Flake8Builtins": "A",
    "Flake8Commas": "COM",
    "Flake8Comprehensions": "C4",
    "Flake8Copyright": "CPY",
    "Flake8Datetimez": "DTZ",
    "Flake8Debugger": "T10",
    "Flake8Django": "DJ",
    "Flake8ErrMsg": "EM",
    "Flake8Executable": "EXE",
    "Flake8Fixme": "FIX",
    "Flake8FutureAnnotations": "FA",
    "Flake8GetText": "INT",
    "Flake8ImplicitStrConcat": "ISC",
    "Flake8ImportConventions": "ICN",
    "Flake8Logging": "LOG",
    "Flake8LoggingFormat": "G",
    "Flake8NoPep420": "INP",
    "Flake8Pie": "PIE",
    "Flake8Print": "T20",
    "Flake8Pyi": "PYI",
    "Flake8PytestStyle": "PT",
    "Flake8Quotes": "Q",
    "Flake8Raise": "RSE",
    "Flake8Return": "RET",
    "Flake8Self": "SLF",
    "Flake8Simplify": "SIM",
    "Flake8Slots": "SLOT",
    "Flake8TidyImports": "TID",
    "Flake8Todos": "TD",
    "Flake8TypeChecking": "TC",
    "Flake8UnusedArguments": "ARG",
    "Flake8UsePathlib": "PTH",
    "Flynt": "FLY",
    "Isort": "I",
    "McCabe": "C90",
    "Numpy": "NPY",
    "PandasVet": "PD",
    "PEP8Naming": "N",
    "Perflint": "PERF",
    "Pycodestyle": "",
    "Pydoclint": "DOC",
    "Pydocstyle": "D",
    "Pyflakes": "F",
    "PygrepHooks": "PGH",
    "Pylint": "PL",
    "Pyupgrade": "UP",
    "Refurb": "FURB",
    "Ruff": "RUF",
    "Tryceratops": "TRY",
}

ruff = Path(__file__).parent / "crates"
ruff_linter = ruff / "ruff_linter" / "src"

code_to_rule_name = {}
with open(ruff_linter / "codes.rs") as codes_file:
    for linter, code, rule_name in re.findall(
        # The (?<! skips ruff test rules
        # Only Preview|Stable rules are checked
        r"(?<!#\[cfg\(any\(feature = \"test-rules\", test\)\)\]\n) \((\w+), \"(\w+)\"\) => \(RuleGroup::(?:Preview|Stable), [\w:]+::(\w+)\)",
        codes_file.read(),
    ):
        code_to_rule_name[linter_to_code_prefix[linter] + code] = (rule_name, [])

ruff_linter_rules = ruff_linter / "rules"
for rule_file_path in [
    *ruff_linter_rules.rglob("*/rules/**/*.rs"),
    ruff / "ruff_python_parser" / "src" / "semantic_errors.rs",
    ruff_linter / "pyproject_toml.rs",
    ruff_linter / "checkers" / "noqa.rs",
    ruff_linter / "checkers" / "ast" / "mod.rs",
    ruff_linter / "checkers" / "ast" / "analyze" / "unresolved_references.rs",
    ruff_linter / "checkers" / "ast" / "analyze" / "expression.rs",
    ruff_linter / "checkers" / "ast" / "analyze" / "statement.rs",
]:
    with open(rule_file_path, encoding="utf-8") as f:
        rule_file_content = f.read()
    for code, (rule, _) in copy(code_to_rule_name).items():
        if rule in rule_file_content:
            if f"// {code}" in rule_file_content or f", {code}" in rule_file_content:
                del code_to_rule_name[code]
            else:
                code_to_rule_name[code][1].append(rule_file_path)

for code, rule in code_to_rule_name.items():
    print(code, rule[0])
    for path in rule[1]:
        print(path)
```
</details>
## Test Plan
N/A, no tests/functionality affected.
## Summary
Having a recursive type method to check whether a type is fully static
is inefficient, unnecessary, and makes us overly strict about subtyping
relations.
It's inefficient because we end up re-walking the same types many times
to check for fully-static-ness.
It's unnecessary because we can check relations involving the dynamic
type appropriately, depending on whether the relation is subtyping or
assignability.
We use the subtyping relation to simplify unions and intersections. We
can usefully consider that `S <: T` for gradual types also, as long as
it remains true that `S | T` is equivalent to `T` and `S & T` is
equivalent to `S`.
One conservative definition (implemented here) that satisfies this
requirement is that we consider `S <: T` if, for every possible pair of
materializations `S'` and `T'`, `S' <: T'`. Or put differently the top
materialization of `S` (`S+` -- the union of all possible
materializations of `S`) is a subtype of the bottom materialization of
`T` (`T-` -- the intersection of all possible materializations of `T`).
In the most basic cases we can usefully say that `Any <: object` and
that `Never <: Any`, and we can handle more complex cases inductively
from there.
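Stated symbolically (a restatement of the definition above, writing `mat(S)` for the set of all materializations of `S`):
```math
S <: T \;:\Longleftrightarrow\; \forall S' \in \mathrm{mat}(S),\ \forall T' \in \mathrm{mat}(T):\ S' <: T' \;\Longleftrightarrow\; S^{+} <: T^{-}
```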
This definition of subtyping for gradual types is not reflexive
(`Any` is not a subtype of `Any`).
As a corollary, we also remove `is_gradual_equivalent_to` --
`is_equivalent_to` now has the meaning that `is_gradual_equivalent_to`
used to have. If necessary, we could restore an
`is_fully_static_equivalent_to` or similar (which would not do an
`is_fully_static` pre-check of the types, but would instead pass a
relation-kind enum down through a recursive equivalence check, similar
to `has_relation_to`), but so far this doesn't appear to be necessary.
Credit to @JelleZijlstra for the observation that `is_fully_static` is
unnecessary and overly restrictive on subtyping.
There is another possible definition of gradual subtyping: instead of
requiring that `S+ <: T-`, we could instead require that `S+ <: T+` and
`S- <: T-`. In other words, instead of requiring all materializations of
`S` to be a subtype of every materialization of `T`, we just require
that every materialization of `S` be a subtype of _some_ materialization
of `T`, and that every materialization of `T` be a supertype of some
materialization of `S`. This definition also preserves the core
invariant that `S <: T` implies that `S | T = T` and `S & T = S`, and it
restores reflexivity: under this definition, `Any` is a subtype of
`Any`, and for any equivalent types `S` and `T`, `S <: T` and `T <: S`.
But unfortunately, this definition breaks transitivity of subtyping,
because nominal subclasses in Python use assignability ("consistent
subtyping") to define acceptable overrides. This means that we may have
a class `A` with `def method(self) -> Any` and a subtype `B(A)` with
`def method(self) -> int`, since `int` is assignable to `Any`. This
means that if we have a protocol `P` with `def method(self) -> Any`, we
would have `B <: A` (from nominal subtyping) and `A <: P` (`Any` is a
subtype of `Any`), but not `B <: P` (`int` is not a subtype of `Any`).
Breaking transitivity of subtyping is not tenable, so we don't use this
definition of subtyping.
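Spelled out as code, the transitivity-breaking scenario looks like this (a sketch; the `<:` judgments are the hypothetical ones discussed above, not diagnostics a checker reports):
```py
from typing import Any, Protocol

class A:
    def method(self) -> Any: ...

class B(A):
    # A valid override: `int` is assignable to `Any`.
    def method(self) -> int:
        return 0

class P(Protocol):
    def method(self) -> Any: ...

# Under the rejected definition: B <: A (nominal subtyping) and A <: P
# (`Any` would be a subtype of `Any`), but not B <: P (`int` is not a
# subtype of `Any`) -- so subtyping would not be transitive.
```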
## Test Plan
Existing tests (modified in some cases to account for updated
semantics.)
Stable property tests pass at a million iterations:
`QUICKCHECK_TESTS=1000000 cargo test -p ty_python_semantic -- --ignored
types::property_tests::stable`
### Changes to property test type generation
Since we no longer have a method of categorizing built types as
fully-static or not-fully-static, I had to add a previously-discussed
feature to the property tests so that some tests can build types that
are known by construction to be fully static, because there are still
properties that only apply to fully-static types (for example,
reflexiveness of subtyping.)
## Changes to handling of `*args, **kwargs` signatures
This PR "discovered" that, once we allow non-fully-static types to
participate in subtyping under the above definitions, `(*args: Any,
**kwargs: Any) -> Any` is now a subtype of `() -> object`. This is true,
if we take a literal interpretation of the former signature: all
materializations of the parameters `*args: Any, **kwargs: Any` can
accept zero arguments, making the former signature a subtype of the
latter. But the spec actually says that `*args: Any, **kwargs: Any`
should be interpreted as equivalent to `...`, and that makes a
difference here: `(...) -> Any` is not a subtype of `() -> object`,
because (unlike a literal reading of `(*args: Any, **kwargs: Any)`),
`...` can materialize to _any_ signature, including a signature with
required positional arguments.
This matters for this PR because it makes the "any two types are both
assignable to their union" property test fail if we don't implement the
equivalence to `...`. `FunctionType.__call__` has the signature
`(*args: Any, **kwargs: Any) -> Any`, and if we take that at face value
it's a subtype of `() -> object`, making `FunctionType` a subtype of
`() -> object` -- but then a function with a required argument is also a
subtype of `FunctionType`, but not a subtype of `() -> object`. So I
went ahead and implemented the equivalence to `...` in this PR.
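A sketch of the distinction, using only standard `typing` constructs:
```py
from typing import Any, Callable

def f(*args: Any, **kwargs: Any) -> Any: ...

# Read literally, every materialization of f's parameter list accepts zero
# arguments, so f would be a subtype of `() -> object`. Per the spec, though,
# the signature is equivalent to `Callable[..., Any]`, and `...` can
# materialize to a signature with required parameters:
def g(x: int) -> int:
    return x

h: Callable[..., Any] = g  # accepted: `...` is compatible with any parameter list
```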
## Ecosystem analysis
* Most of the ecosystem report are cases of improved union/intersection
simplification. For example, we can now simplify a union like `bool |
(bool & Unknown) | Unknown` to simply `bool | Unknown`, because we can
now observe that every possible materialization of `bool & Unknown` is
still a subtype of `bool` (whereas before we would set aside `bool &
Unknown` as a not-fully-static type.) This is clearly an improvement.
* The `possibly-unresolved-reference` errors in sockeye, pymongo,
ignite, scrapy and others are true positives for conditional imports
that were formerly silenced by bogus conflicting-declarations (which we
currently don't issue a diagnostic for), because we considered two
different declarations of `Unknown` to be conflicting (we used
`is_equivalent_to` not `is_gradual_equivalent_to`). In this PR that
distinction disappears and all equivalence is gradual, so a declaration
of `Unknown` no longer conflicts with a declaration of `Unknown`, which
then results in us surfacing the possibly-unbound error.
* We will now issue "redundant cast" for casting from a typevar with a
gradual bound to the same typevar (the hydra-zen diagnostic). This seems
like an improvement.
* The new diagnostics in bandersnatch are interesting. For some reason
primer in CI seems to be checking bandersnatch on Python 3.10 (not yet
sure why; this doesn't happen when I run it locally). But bandersnatch
uses `enum.StrEnum`, which doesn't exist on 3.10. That makes the `class
SimpleDigest(StrEnum)` a class that inherits from `Unknown` (and
bypasses our current TODO handling for accessing attributes on enum
classes, since we don't recognize it as an enum class at all). This PR
improves our understanding of assignability to classes that inherit from
`Any` / `Unknown`, and we now recognize that a string literal is not
assignable to a class inheriting `Any` or `Unknown`.
Add property test generators for the new variable-length tuples. This
covers homogeneous tuples as well.
The property tests did their job! This identified several fixes we
needed to make to various type property methods.
cf https://github.com/astral-sh/ruff/pull/18600#issuecomment-2993764471
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This PR expands PGH005 to also check for AsyncMock methods in the same
vein. E.g., currently `assert mock.not_called` is flagged; this PR adds
checks for the corresponding async assertions, such as `assert
mock.not_awaited()`.
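For example, a hedged sketch of the newly flagged pattern:
```py
from unittest import mock

m = mock.AsyncMock()
assert m.not_awaited()  # PGH005: likely a typo for `m.assert_not_awaited()`
```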
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>