## Summary
This PR moves `empty_parenthesized` such that it's a peer to
`parenthesized`, and changes the API to better match that of
`parenthesized` (takes `&str` rather than `StaticText`, has a
`with_dangling_comments` method, etc.).
It may be intentionally _not_ part of `parentheses.rs`, but to me
they're so similar that it makes more sense for them to be in the same
module, with the same API, etc.
## Summary
This PR adds support for `StmtMatch`, with stubs for `MatchCase`.
## Test Plan
Add a few additional test cases around the `match` statement, covering
comments and line breaks.
resolves: #6298
## Summary
Fix the name of the rule in the `extend-per-file-ignores` example in
`options.rs`.
The text said `E401`, but the configuration example listed `E402`. Just
a tiny mismatch.
## Test Plan
Just by eye :).
## Bug
Given
```python
x = () - (#
)
```
the comment is a dangling comment of the empty tuple. It's an
end-of-line comment, so it may move after the expression. It still
expands the parent, so the operator breaks:
```python
x = (
    ()
    - () #
)
```
In the next formatting pass, the comment is not a trailing tuple
comment but a trailing bin op comment, so the bin op doesn't break
anymore. The comment again expands the parent, so we still add the
superfluous parentheses:
```python
x = (
    () - () #
)
```
## Fix
The new formatting is to keep the comment on the empty tuple. This is a
lot uglier and again has additional outer parentheses, but it's stable:
```python
x = (
    ()
    - ( #
    )
)
```
## Alternatives
Black formats all the examples above as
```python
x = () - () #
```
which I find better.
I'd be happy about any suggestions for a better solution than the
current one. Mainly, I'd need a workaround for expand-parent taking
effect on the bin op, instead of first moving the comment to the end
and then applying expand-parent to the assign statement.
## Summary
Manually add the parentheses around tuple expressions for the autofix in
`B014`.
This is also done in various other autofixes, such as for
[`RUF005`](6df5ab4098/crates/ruff/src/rules/ruff/rules/collection_literal_concatenation.rs (L183-L184))
and [`UP024`](6df5ab4098/crates/ruff/src/rules/pyupgrade/rules/os_error_alias.rs (L137-L137)).
### Alternate Solution
An alternative solution would be to fix this in the `Generator` itself
by checking whether the tuple expression needs to be generated at the
top level or not. If so, then always add the parentheses.
```rust
} else if level == 0 {
    // Top-level tuples are always parenthesized.
    self.p("(");
    let mut first = true;
    for elt in elts {
        self.p_delim(&mut first, ", ");
        self.unparse_expr(elt, precedence::COMMA);
    }
    self.p_if(elts.len() == 1, ",");
    self.p(")");
}
```
## Test Plan
Add a regression test for this case in `B014`.
fixes: #6412
Extends #6289 to support moving type variable usage in type aliases to
PEP 695 syntax.
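For illustration, the kind of conversion targeted here (a sketch; the
exact fix output may differ):
```python
from typing import TypeVar, TypeAlias

T = TypeVar("T")

# Before:
IntOrT: TypeAlias = int | T

# After (PEP 695 syntax; note the `T = TypeVar("T")` declaration above
# is left in place by this rule):
type IntOrT[T] = int | T
```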
Does not remove the (possibly unused) type variable declaration.
Presumably this is handled by other rules, but that isn't working for me.
Does not handle type variables with bounds or variance declarations yet.
Part of #4617
## Summary
We have some logic in the expression analyzer method to avoid
re-checking the inner `Union` in `Union[Union[...]]`, since the methods
that analyze `Union` expressions already recurse. Elsewhere, we have
logic to avoid re-checking the inner `|` in `int | (int | str)`, for the
same reason.
This PR unifies that logic into a single method _and_ ensures that, just
as we recurse over both `Union` and `|`, we also detect that we're in
_either_ kind of nested union.
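For example, both of these spell a union nested inside a union; the
inner union is already covered when the outer one is analyzed, so it
shouldn't be visited (or flagged) again:
```python
from typing import Union

x: Union[Union[int, str], float]  # subscript form
y: int | (int | str)              # PEP 604 form
```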
Closes https://github.com/astral-sh/ruff/issues/6285.
## Test Plan
Added some new snapshots.
## Summary
We can anticipate earlier that this will error, so we should avoid
flagging the error at all. Specifically, we're talking about cases like
`"{1} {0}".format(*args)"`, in which we'd need to reorder the arguments
in order to remove the `1` and `0`, but we _can't_ reorder the arguments
since they're not statically analyzable.
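A minimal illustration of why there's no safe fix:
```python
args = ["world", "hello"]
print("{1} {0}".format(*args))  # prints "hello world"

# Dropping the explicit indices would require reordering the arguments,
# e.g. "{} {}".format(args[1], args[0]), but `*args` isn't statically
# analyzable, so the arguments can't be reordered safely.
```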
Closes https://github.com/astral-sh/ruff/issues/6388.
## Summary
I noticed some deviations in how we treat dangling comments that hug the
opening parenthesis for function definitions.
For example, given:
```python
def f( # first
    # second
): # third
    ...
```
We currently format as:
```python
def f(
    # first
    # second
): # third
    ...
```
This PR adds the proper opening-parenthesis dangling comment handling
for function parameters. Specifically, as with all other parenthesized
nodes, we now detect that dangling comment in `placement.rs` and handle
it in `parameters.rs`. We have to take some care in that file, since we
have multiple "kinds" of dangling comments, but I added a bunch of test
cases that we now format identically to Black.
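For the example above, the output now matches Black (shown here for
illustration):
```python
def f(  # first
    # second
):  # third
    ...
```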
## Test Plan
`cargo test`
Before:
- `zulip`: 0.99388
- `django`: 0.99784
- `warehouse`: 0.99504
- `transformers`: 0.99404
- `cpython`: 0.75913
- `typeshed`: 0.74364
After:
- `zulip`: 0.99386
- `django`: 0.99784
- `warehouse`: 0.99504
- `transformers`: 0.99404
- `cpython`: 0.75913
- `typeshed`: 0.74409
Meaningful improvement on `typeshed`, minor decrease on `zulip`.
Closes https://github.com/astral-sh/ruff/issues/6068
These commits are kind of a mess as I did some stumbling around here.
Unrolls the formatting of chained boolean operations to prevent nested
grouping, which gives us Black-compatible formatting where each boolean
operation is on its own line.
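An illustrative sketch of the resulting style (names invented):
```python
first_condition = second_condition = third_condition = True

# When the expression breaks, every operand lands on its own line,
# rather than nested sub-groups breaking independently:
if (
    first_condition
    and second_condition
    and third_condition
):
    ...
```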
## Summary
This PR modifies our `can_omit_optional_parentheses` rules to ensure
that if we see a call followed by an attribute, we treat that as an
attribute access rather than a splittable call expression.
This in turn ensures that we wrap like:
```python
ct_match = aaaaaaaaaaact_id == self.get_content_type(
obj=rel_obj, using=instance._state.db
)
```
For calls, but:
```python
ct_match = (
    aaaaaaaaaaact_id == self.get_content_type(obj=rel_obj, using=instance._state.db).id
)
```
For calls with trailing attribute accesses.
Closes https://github.com/astral-sh/ruff/issues/6065.
## Test Plan
Similarity index before:
- `zulip`: 0.99436
- `django`: 0.99779
- `warehouse`: 0.99504
- `transformers`: 0.99403
- `cpython`: 0.75912
- `typeshed`: 0.72293
And after:
- `zulip`: 0.99436
- `django`: 0.99780
- `warehouse`: 0.99504
- `transformers`: 0.99404
- `cpython`: 0.75913
- `typeshed`: 0.72293
## Summary
This PR leverages the unified function definition node to add precise
AST node types to `MemberKind`, which is used to power our docstring
definition tracking (e.g., classes and functions, whether they're
methods or functions or nested functions and so on, whether they have a
docstring, etc.). It was painful to do this in the past because the
function variants needed to support a union anyway, but storing precise
nodes removes like a dozen panics.
No behavior changes -- purely a refactor.
## Test Plan
`cargo test`
## Summary
Per the suggestion in
https://github.com/astral-sh/ruff/discussions/6183, this PR removes
`AsyncWith`, `AsyncFor`, and `AsyncFunctionDef`, replacing them with an
`is_async` field on the non-async variants of those structs. Unlike an
interpreter, we _generally_ have identical handling for these nodes, so
separating them into distinct variants adds complexity from which we
don't really benefit. This can be seen below, where we get to remove a
_ton_ of code related to adding generic `Any*` wrappers, and a ton of
duplicate branches for these cases.
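In Python terms, the merged handling looks like this (a sketch; node
names per the ruff AST):
```python
async def f(): ...  # was a distinct AsyncFunctionDef node
def g(): ...        # FunctionDef

# Both now map to a single FunctionDef node, distinguished by an
# `is_async` flag; likewise for `with`/`async with` and `for`/`async for`.
```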
## Test Plan
`cargo test` is unchanged, apart from parser snapshots.
## Summary
See discussion in
https://github.com/astral-sh/ruff/pull/6351#discussion_r1284996979. We
can remove `RefEquality` entirely and instead use a text offset for
statement keys, since no two statements can start at the same text
offset.
## Test Plan
`cargo test`
## Summary
This PR adds support for help end escape commands in the lexer.
### What are "help end escape commands"?
First, the escape commands are special IPython syntax which enhances
the functionality of the IPython REPL. There are 9 escape kinds, each
recognized by the token present at the start of the command (`?`, `??`,
`!`, `!!`, etc.).
Here, the help command uses either the `?` or `??` token at the start
(`?str.replace`, for example). Those two tokens are also supported when
they're at the end of the command (`str.replace?`), but the other
tokens aren't supported in that position.
There are two main types of help end escape commands:
1. Ending with either `?` or `??`, but also starting with one of the
escape tokens (`%matplotlib?`)
2. A stricter version of (1) which doesn't start with any escape
tokens (`str.replace?`)
This PR adds support for (1) while (2) will be supported in the parser.
### Priority
Now, if a command both starts and ends with an escape token, how do we
decide its kind? This is where priority comes into the picture. It's
simple, as there's only one rule: `?`/`??` at the end takes priority
over any other escape token, and all of the other tokens are at the
same priority. Remember that only `?`/`??` at the end is considered
valid.
This is mainly useful when someone wants to invoke the help command on
the magic command itself. For example, in `%matplotlib?`, the help
command takes priority, which means that we want help for the
`matplotlib` magic function instead of calling the magic function
itself.
### Specification
Here's where things get a bit tricky. What if there are question mark
tokens at both ends? How do we decide whether it's the `Help` (`?`)
kind or the `Help2` (`??`) kind?
| # | Command | Value | Kind |
| --- | --- | --- | --- |
| 1 | `?foo?` | `foo` | `Help` |
| 2 | `??foo?` | `foo` | `Help` |
| 3 | `?foo??` | `foo` | `Help2` |
| 4 | `??foo??` | `foo` | `Help2` |
| 5 | `???foo??` | `foo` | `Help2` |
| 6 | `??foo???` | `foo???` | `Help2` |
| 7 | `???foo???` | `?foo???` | `Help2` |
Looking at the above table:
- The question mark tokens on the right take priority over the ones on
the left, but only if the number of question marks on the right is 1 or 2.
- If there are more than 2 question mark tokens on the right side, then
the left side is used to determine the kind instead.
- If the right side is used to determine the kind, then all of the
question marks and whitespace on the left side are excluded from the
`value`; but if it's the other way around, then all of the extra
question marks are part of the `value`.
### References
- IPython implementation using the regex:
292e3a2345/IPython/core/inputtransformer2.py (L454-L462)
- Priorities:
292e3a2345/IPython/core/inputtransformer2.py (L466-L469)
## Test Plan
Add a bunch of test cases for the lexer and verify that it matches the
behavior of the IPython transformer.
resolves: #6357
## Summary
This PR fixes the performance degradation introduced in
https://github.com/astral-sh/ruff/pull/6345. Instead of using the
generic `Nodes` structs, we now use separate `Statement` and
`Expression` structs. Importantly, we can avoid tracking a bunch of
state for expressions that we need for statements: we don't need to track
reference-to-ID pointers (we just have no use-case for this -- I'd
actually like to remove this from statements too, but we need it for
branch detection right now), we don't need to track depth, etc.
In my testing, this entirely removes the regression on all-rules, and
gets us down to 2ms slower on the default rules (as a crude hyperfine
benchmark, so this is within margin of error IMO).
No behavioral changes.
## Summary
This PR attempts to draw a clearer divide between "methods that take
(e.g.) an expression or statement as input" and "methods that rely on
the _current_ expression or statement" in the semantic model, by
renaming methods like `stmt()` to `current_statement()`.
This had led to confusion in the past. For example, prior to this PR, we
had `scope()` (which returns the current scope), and `parent_scope`,
which returns the parent _of a scope that's passed in_. Now, the API is
clearer: `current_scope` returns the current scope, and `parent_scope`
takes a scope as argument and returns its parent.
Per above, I also changed `stmt` to `statement` and `expr` to
`expression`.
## Summary
Given:
```python
[ # comment
    first,
    second,
    third
] # another comment
```
We were adding a hard line break as part of the formatting of `#
comment`, which led to the following formatting:
```python
[first, second, third] # comment
# another comment
```
Closes https://github.com/astral-sh/ruff/issues/6367.
## Summary
Fixes an instability whereby this:
```python
def get_recent_deployments(threshold_days: int) -> Set[str]:
    # Returns a list of deployments not older than threshold days
    # including `/root/zulip` directory if it exists.
    recent = set()
    threshold_date = datetime.datetime.now() - datetime.timedelta( # noqa: DTZ005
        days=threshold_days
    )
```
Was being formatted as:
```python
def get_recent_deployments(threshold_days: int) -> Set[str]:
    # Returns a list of deployments not older than threshold days
    # including `/root/zulip` directory if it exists.
    recent = set()
    threshold_date = (
        datetime.datetime.now()
        - datetime.timedelta(days=threshold_days) # noqa: DTZ005
    )
```
Which was in turn being formatted as:
```python
def get_recent_deployments(threshold_days: int) -> Set[str]:
    # Returns a list of deployments not older than threshold days
    # including `/root/zulip` directory if it exists.
    recent = set()
    threshold_date = (
        datetime.datetime.now() - datetime.timedelta(days=threshold_days) # noqa: DTZ005
    )
```
The second-to-third formatting still differs from Black because we
aren't taking the line suffix into account when splitting
(https://github.com/astral-sh/ruff/issues/6377), but the first
formatting is correct and should be unchanged (i.e., the
first-to-second formatting was incorrect, and is fixed here).
## Test Plan
`cargo run --bin ruff_dev -- format-dev --stability-check ../zulip`
## Summary
When we iterate over the AST for analysis, we often process nodes in a
"deferred" manner. For example, if we're analyzing a function, we push
the function body onto a deferred stack, along with a snapshot of the
current semantic model state. Later, when we analyze the body, we
restore the semantic model state from the snapshot. This ensures that we
know the correct scope, hierarchy of statement parents, etc., when we go
to analyze the function body.
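For example (a sketch of why the snapshot matters):
```python
def f():
    return x  # the body is deferred; when it's analyzed later, the
              # snapshot must restore f's scope and statement parents

x = 1  # already bound by the time the deferred body is analyzed
```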
Historically, we _haven't_ included the _expression_ hierarchy in the
model snapshot -- so we track the current expression parents in the
visitor, but we never save and restore them when processing deferred
nodes. This can lead to subtle bugs, in that methods like
`expr_parent()` aren't guaranteed to be correct, if you're in a deferred
visitor.
This PR migrates expression tracking to mirror statement tracking
exactly. So we push all expressions onto an `IndexVec`, and include the
current expression on the snapshot. This ensures that `expr_parent()`
and related methods are "always correct" rather than "sometimes
correct".
There's a performance cost here, both at runtime and in terms of memory
consumption (we now store an additional pointer for every expression).
In my hyperfine testing, it's about a 1% performance decrease for
all-rules on CPython (up to 533.8ms, from 528.3ms) and a 4% performance
decrease for default-rules on CPython (up to 212ms, from 204ms).
However... I think this is worth it given the incorrectness of our
current approach. In the future, we may want to reconsider how we do
these upward traversals (e.g., with something like a red-green tree).
(**Note**: in https://github.com/astral-sh/ruff/pull/6351, the slowdown
seems to be entirely removed.)
Changes:
- Fixes typo and repeated phrase in `DTZ002`
- Adds docs for `DTZ003`
- Adds docs for `DTZ004`
- Adds an example for Python <= 3.10 in `DTZ001`
Related to: https://github.com/astral-sh/ruff/issues/2646
## Summary
Historically, we've stored "qualified names" on our
`BindingKind::Import`, `BindingKind::SubmoduleImport`, and
`BindingKind::ImportFrom` structs. In Ruff, a "qualified name" is a
dot-separated path to a symbol. For example, given `import foo.bar`, the
"qualified name" would be `"foo.bar"`; and given `from foo.bar import
baz`, the "qualified name" would be `foo.bar.baz`.
This PR modifies the `BindingKind` structs to instead store _call paths_
rather than qualified names. So in the examples above, we'd store
`["foo", "bar"]` and `["foo", "bar", "baz"]`. It turns out that this
more efficient given our data access patterns. Namely, we frequently
need to convert the qualified name to a call path (whenever we call
`resolve_call_path`), and it turns out that we do this operation enough
that those conversations show up on benchmarks.
There are a few other advantages to using call paths, rather than
qualified names:
1. The size of `BindingKind` is reduced from 32 to 24 bytes, since we no
longer need to store a `String` (only a boxed slice).
2. All three import types are more consistent, since they now all store
a boxed slice, rather than some storing an `&str` and some storing a
`String` (for `BindingKind::ImportFrom`, we needed to allocate a
`String` to create the qualified name, but the call path is a slice of
static elements that don't require that allocation).
3. A lot of code gets simpler, in part because we now do call path
resolution "earlier". Most notably, for relative imports (`from .foo
import bar`), we store the _resolved_ call path rather than the relative
call path, so the semantic model doesn't have to deal with that
resolution. (Note that `resolve_call_path` itself is now simpler, with
fewer branches.)
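To illustrate the last point, a sketch (module layout assumed):
```python
# In a file `pkg/mod.py`:
from .foo import bar

# Before: a relative path was stored and resolved at lookup time.
# After: the resolved call path ["pkg", "foo", "bar"] is stored up front.
```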
In my testing, this change improves the all-rules benchmark by another
4-5% on top of the improvements mentioned in #6047.
## Summary
Update the `F841` autofix to not remove line magic expressions.
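A sketch of the kind of case covered (hypothetical notebook cell; line
magics only lex in `Mode::Jupyter`):
```python
def compute():
    # F841 flags `timings` as unused, but the fix must remove only the
    # binding, not the `%timeit` line magic expression itself:
    timings = %timeit -o sum(range(100))
```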
## Test Plan
Added test cases for assignment statements with and without type
annotations.
fixes: #6116
## Summary
Enable using the new `Mode::Jupyter` for the tokenizer/parser to parse
Jupyter line magic tokens.
The individual calls to the lexer (i.e., `lex_starts_at`) made by
various rules should consider the context of the source code (is this
content from a Jupyter Notebook?). Thus, a new field, `source_type` (of
type `PySourceType`), is added to `Checker` and passed around as an
argument to the relevant functions. This is then used to determine the
`Mode` for the lexer.
## Test Plan
Add new test cases to make sure that the magic statement is considered
while generating the diagnostic and autofix:
* For `I001`, if there's a magic statement in between two import blocks,
they should be sorted independently, as shown below
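For example (illustrative):
```python
import sys
import os

%matplotlib inline

import re
import json
```
gets sorted as two independent blocks:
```python
import os
import sys

%matplotlib inline

import json
import re
```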
fixes: #6090
## Summary
As of version
[23.1.0](2a86db8271/CHANGELOG.md (L158-L160)),
`flake8-pyi` removed the rule `Y027`.
The errors that previously resulted in `PYI027` are now emitted by
`PYI022` (`UP035`).
ref: #848
## Test Plan
Add new test cases.
## Summary
Previously, this panicked on methods like:
```python
@classmethod
def bad_posonly_class_method(cls: type[_S], /) -> _S: ... # PYI019
```
We checked whether there were any positional-only or non-positional
arguments, but then performed an unsafe access on `parameters.args`.
Closes https://github.com/astral-sh/ruff/issues/6349.
## Test Plan
`cargo test` (verified that `main` panics on the new fixtures)
These rules assume that the docstring is on its own line. pydocstyle
treats them inconsistently, so I'm just going to disable them in this
case.
Closes https://github.com/astral-sh/ruff/issues/6329.
## Summary
Fixes some comprehension formatting by avoiding creating the group for
the comprehension itself (so that if it breaks, all parts break on their
own lines, e.g. the `for` and the `if` clauses).
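For example, when a comprehension breaks, each clause now lands on its
own line (illustrative):
```python
collection = range(10)

result = [
    element * 2
    for element in collection
    if element % 2 == 0
]
```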
Closes https://github.com/astral-sh/ruff/issues/6063.
## Test Plan
Bunch of new fixtures.