## Summary
This PR refactors unary expression parsing with the following changes:
* Ability to get `OperatorPrecedence` from a unary operator (`UnaryOp`)
* Implement methods on `TokenKind`
* Add `as_unary_operator` which returns an `Option<UnaryOp>`
* Add `as_unary_arithmetic_operator` which returns an `Option<UnaryOp>`
(used for pattern parsing)
* Rename `is_unary` to `is_unary_arithmetic_operator` (used in the
linter)
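For reference, a couple of Python snippets that exercise unary-operator precedence (purely illustrative, not taken from the test suite):

```python
# Unary `-` binds looser than `**`, so this parses as `-(2 ** 2)`.
assert -2 ** 2 == -4

# `not` binds looser than comparisons, so this parses as `not (1 == 2)`.
assert not 1 == 2
```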
resolves: #10752
## Test Plan
Verify that the existing test cases pass and that there are no ecosystem
changes; run the Python-based fuzzer on 3,000 random inputs and run it on
dozens of open-source repositories.
## Summary
This PR refactors the binary expression parsing to make it more readable
and easier to understand. It draws inspiration from the suggested edits in
the linked messages in #10752.
### Changes
* Ability to get the precedence of an operator
* From a boolean operator (`BoolOp`) to `OperatorPrecedence`
* From a binary operator (`Operator`) to `OperatorPrecedence`
* No comparison operator because all of them have the same precedence
* Implement methods on `TokenKind` to convert it to an appropriate
operator enum
* Add `as_boolean_operator` which returns an `Option<BoolOp>`
* Add `as_binary_operator` which returns an `Option<Operator>`
* No `as_comparison_operator` because it requires lookahead and I'm not
sure if `token.as_comparison_operator(peek)` is a good way to implement
it
* Introduce `BinaryLikeOperator`
* Constructed from two tokens using the methods from the second point
* Add a `precedence` method using the conversion methods mentioned in the
first point
* Make most of the functions in `TokenKind` private to the module
* Use `self` instead of `&self` for `TokenKind`
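As a quick Python-level illustration of the precedence relationships these enums encode (not part of the diff itself):

```python
# `*` binds tighter than `+`, and arithmetic binds tighter than comparisons.
assert 1 + 2 * 3 == 7            # parsed as (1 + (2 * 3)) == 7

# `and` binds tighter than `or`.
assert True or False and False   # parsed as True or (False and False)

# Bitwise `&` binds tighter than `|`.
assert (1 | 2 & 3) == 3          # parsed as (1 | (2 & 3)) == 3
```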
fixes: #11072
## Test Plan
Refer #11088
## Summary
This PR does a few things, but the main change is that it makes
associativity a property of operator precedence.
1. Rename `Precedence` -> `OperatorPrecedence`
2. Rename `parse_expression_with_precedence` ->
`parse_binary_expression_or_higher`
3. Move `current_binding_power` to `OperatorPrecedence::try_from_tokens`
[^1]
4. Add an `OperatorPrecedence::is_right_associative` method (see the
illustration below)
5. Move from `increment_precedence` to using `<=` / `<` to check if the
parsing loop needs to stop [^2]
[^1]: Another alternative would be to have two separate methods to avoid
lookahead as it's required only for one case (`not in`). So,
`try_from_current_token(current).or_else(|| try_from_next_token(current,
peek))`
[^2]: This will allow us to easily make the refactors mentioned in
#10752
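As a small Python-level illustration of why associativity matters (purely for context): `**` is the right-associative case, while most binary operators associate to the left.

```python
# `**` is right-associative: parsed as 2 ** (3 ** 2), not (2 ** 3) ** 2.
assert 2 ** 3 ** 2 == 512

# `-` is left-associative: parsed as (10 - 4) - 3.
assert 10 - 4 - 3 == 3
```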
## Test Plan
Make sure the precedence parsing algorithm is still correct by running
the test suite, fuzz testing it and running it against a dozen or so
open-source repositories.
## Summary
This PR adds a new `ExpressionContext` struct which is used in
expression parsing.
This solves the following problem:
1. Allowing starred expressions with different precedence
2. Allowing yield expressions in certain contexts
3. Remove ambiguity with `in` keyword when parsing a `for ... in`
statement
For context, (1) was solved by adding `parse_star_expression_list` and
`parse_star_expression_or_higher` in #10623, (2) was solved by adding
`parse_yield_expression_or_else` in #10809, and (3) was fixed in #11009.
All of the mentioned functions have been removed in favor of the context
flags.
As mentioned in #11009, an ideal solution would be to implement an
expression context, which is what this PR implements. This is passed
around as a function parameter, and the call stack is used to automatically
reset the context.
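To make (1) and (2) concrete, a couple of illustrative Python snippets (the `in` ambiguity from (3) is discussed in the recovery section below):

```python
# (1) Starred expressions are valid in some positions but not others,
#     and with different precedence depending on the context.
first, *rest = [1, 2, 3]   # star target in an assignment
print(*rest)               # star arguments in a call

# (2) Yield expressions are only valid in certain contexts,
#     e.g. inside a function body.
def gen():
    received = yield 1
```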
### Recovery
How should the parser recover if the target expression is invalid when
an expression can consume the `in` keyword?
1. Should the `in` keyword be part of the target expression?
2. Or, should the expression parsing stop as soon as `in` keyword is
encountered, no matter the expression?
For example:
```python
for yield x in y: ...
# Here, should this be parsed as
for (yield x) in (y): ...
# Or
for (yield x in y): ...
# where the `in iter` part is missing
```
Or, for binary expression parsing:
```python
for x or y in z: ...
# Should this be parsed as
for (x or y) in z: ...
# Or
for (x or y in z): ...
# where the `in iter` part is missing
```
This need not be solved now, but it is very easy to change. For context,
this PR does the following:
* For binary, comparison, and unary expressions, stop at `in`
* For lambda and yield expressions, consume the `in`
## Test Plan
1. Add test cases for the `for ... in` statement and verify the
snapshots
2. Make sure the existing test suite passes
3. Run the fuzzer on around 3,000 generated source files
4. Run the updated logic on a dozen or so open source repositories
(codename "parser-checkouts")
## Summary
Fixes #11059
Several major editors don't support [pull
diagnostics](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#textDocument_pullDiagnostics),
a method of sending diagnostics to the client that was introduced in
version `3.17` of the specification. Until now, `ruff server` has only
used pull diagnostics, which resulted in diagnostics not being available
on Neovim and Helix, which don't support pull diagnostics yet (though
Neovim `0.10` will have support for this).
`ruff server` will now utilize the older method of sending diagnostics,
known as 'publish diagnostics', when pull diagnostics aren't supported
by the client. This involves re-linting a document every time it is
opened or modified, and then sending the diagnostics generated from that
lint to the client via the `textDocument/publishDiagnostics`
notification.
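For illustration, this is roughly the shape of such a notification, written here as a Python dict (the URI, range, and message are made up; the fields follow the LSP `Diagnostic` type):

```python
notification = {
    "method": "textDocument/publishDiagnostics",
    "params": {
        "uri": "file:///example/project/app.py",
        "version": 3,  # the document version the diagnostics were computed for
        "diagnostics": [
            {
                "range": {
                    "start": {"line": 0, "character": 7},
                    "end": {"line": 0, "character": 9},
                },
                "severity": 2,  # warning
                "code": "F401",
                "source": "Ruff",
                "message": "`os` imported but unused",
            }
        ],
    },
}
```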
## Test Plan
The easiest way to test that this PR works is to check if diagnostics
show up on Neovim `<=0.9`.
## Summary
Fixes #10463
Add `FURB192` which detects violations like this:
```python
# Bad
a = sorted(l)[0]
# Good
a = min(l)
```
There is a caveat that @Skylion007 has pointed out, which is that
violations with `reverse=True` technically aren't compatible with this
change, in the edge case where the unstable behavior is intended. For
example:
```python
from operator import itemgetter
data = [('red', 1), ('blue', 1), ('red', 2), ('blue', 2)]
min(data, key=itemgetter(0)) # ('blue', 1)
sorted(data, key=itemgetter(0))[0] # ('blue', 1)
sorted(data, key=itemgetter(0), reverse=True)[-1] # ('blue', 2)
```
This seems like a rare edge case, but I can make the `reverse=True`
fixes unsafe if that's best.
## Test Plan
This is unit tested.
## References
https://github.com/dosisod/refurb/pull/333/files
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
The behavior of `operator.itemgetter` changes when there's more than one
argument: `operator.itemgetter(0)` yields `r[0]`, rather than
`(r[0],)`.
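In other words (a quick illustration):

```python
from operator import itemgetter

r = ("a", "b", "c")
itemgetter(0)(r)     # 'a'        -- a single index returns the item itself
itemgetter(0, 2)(r)  # ('a', 'c') -- multiple indices return a tuple
```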
Closes https://github.com/astral-sh/ruff/issues/11075.
## Summary
There is no class or type named `integer` in Python, so I updated the docs
to remove the backticks from these references, so that they read as the
plain word "integer" rather than a code reference.
## Summary
Move `blanket-noqa` rule from the token checker to the noqa checker.
This allows us to make use of the line directives already computed in
the noqa checker.
## Test Plan
Verified test results are unchanged.
Resolves #10187
<details>
<summary>Old PR description; accurate through commit e86dd7d; probably
best to leave this fold closed</summary>
## Description of change
In the case of a printf-style format string with only one %-placeholder
and a variable on the right (e.g. `"%s" % var`):
* The new behavior attempts to dereference the variable and then match
on the bound expression to distinguish between a 1-tuple (fix), n-tuple
(bug 🐛), or a non-tuple (fix). Dereferencing is via
`analyze::typing::find_binding_value`.
* If the variable cannot be dereferenced, then the type-analysis routine
is called to distinguish only tuple (no-fix) or non-tuple (fix). Type
analysis is via `analyze::typing::is_tuple`.
* If any of the above fails, the rule still fires, but no fix is
offered.
## Alternatives
* If the reviewers think that singling out the 1-tuple case is too
complicated, I will remove that.
* The ecosystem results show that no new fixes are detected. So I could
probably delete all the variable dereferencing code and code that tries
to generate fixes, tbh.
## Changes to existing behavior
**All the previous rule-firings and fixes are unchanged except for** the
"false negatives" in
`crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_1.py`. Those
previous "false negatives" are now true positives and so I moved them to
`crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py`.
<details>
<summary>Existing false negatives that are now true positives</summary>
```
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:134:1: UP031 Use format specifiers instead of percent format
|
133 | # UP031 (no longer false negatives)
134 | 'Hello %s' % bar
| ^^^^^^^^^^^^^^^^ UP031
135 |
136 | 'Hello %s' % bar.baz
|
= help: Replace with format specifiers
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:136:1: UP031 Use format specifiers instead of percent format
|
134 | 'Hello %s' % bar
135 |
136 | 'Hello %s' % bar.baz
| ^^^^^^^^^^^^^^^^^^^^ UP031
137 |
138 | 'Hello %s' % bar['bop']
|
= help: Replace with format specifiers
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:138:1: UP031 Use format specifiers instead of percent format
|
136 | 'Hello %s' % bar.baz
137 |
138 | 'Hello %s' % bar['bop']
| ^^^^^^^^^^^^^^^^^^^^^^^ UP031
|
= help: Replace with format specifiers
```
One of them newly offers a fix.
```
# UP031 (no longer false negatives)
-'Hello %s' % bar
+'Hello {}'.format(bar)
```
This fix occurs because the new code dereferences `bar` to where it was
defined earlier in the file as a non-tuple:
```python
bar = {"bar": y}
```
---
</details>
## Behavior requiring new tests
Additionally, we now handle a few cases that we didn't previously test.
These cases are when a string has a single %-placeholder and the
righthand operand to the modulo operator is a variable **which can be
dereferenced.** One of those was shown in the previous section (the
"dereference non-tuple" case).
<details>
<summary>New cases handled</summary>
```
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:126:1: UP031 [*] Use format specifiers instead of percent format
|
125 | t1 = (x,)
126 | "%s" % t1
| ^^^^^^^^^ UP031
127 | # UP031: deref t1 to 1-tuple, offer fix
|
= help: Replace with format specifiers
crates/ruff_linter/resources/test/fixtures/pyupgrade/UP031_0.py:130:1: UP031 Use format specifiers instead of percent format
|
129 | t2 = (x,y)
130 | "%s" % t2
| ^^^^^^^^^ UP031
131 | # UP031: deref t2 to n-tuple, this is a bug
|
= help: Replace with format specifiers
```
One of these offers a fix.
```
t1 = (x,)
-"%s" % t1
+"{}".format(t1[0])
# UP031: deref t1 to 1-tuple, offer fix
```
The other doesn't offer a fix because it's a bug.
---
</details>
---
</details>
## Changes to existing behavior
In the case of a string with a single %-placeholder and a single
ambiguous righthand argument to the modulo operator (e.g. `"%s" % var`),
the rule now fires and offers a fix. We explain this in the "fix
safety" section of the updated documentation.
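For example (illustrative; the fix shown assumes `var` isn't bound to a tuple, which is exactly the ambiguity the fix-safety section covers):

```python
# Before: now flagged by UP031
"%s" % var

# After applying the suggested fix
"{}".format(var)
```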
## Documentation changes
I swapped the order of the "known problems" and the "examples" sections
so that the examples which describe the rule are first, before the
exceptions to the rule are described. I also tweaked the language to be
more explicit, as I had trouble understanding the documentation at
first. The "known problems" section is now "fix safety" but the content
is largely similar.
The diff of the documentation changes is a little difficult to follow
unless you look at the individual commits.
## Summary
- Properly fix the race condition identified in
https://github.com/astral-sh/ruff/pull/11039. Instead of running the
version of Ruff we're testing by invoking `cargo run --release` on each
generated source file, we either (1) accept a path to an executable on
the command line or (2) if that's not specified, we run `cargo build
--release` once at the start and then invoke the executable found in
`target/release/ruff` directly.
- Now that the race condition is properly fixed, remove the workaround
for the race condition added in
https://github.com/astral-sh/ruff/pull/11039.
- Also allow users to pass in an executable to compare against for the
`--only-new-bugs` argument (previously it was hardcoded to always
compare against the version of Ruff installed into the Python
environment)
- Use `argparse.RawDescriptionHelpFormatter` as the formatter class
rather than `argparse.RawTextHelpFormatter`. This means that long help
texts for the individual arguments will be wrapped to a sensible width.
- On completion of the script, indicate success or failure of the script
overall by raising `SystemExit` with the appropriate exit code.
- Add myself as a codeowner for the script
## Summary
I happened to notice that we box `TypeParams` on `StmtClassDef` but not
on `StmtFunctionDef` and wondered why, since `StmtFunctionDef` is bigger
and sets the size of `Stmt`.
@charliermarsh found that at the time we started boxing type params on
classes, classes were the largest statement type (see #6275), but that's
no longer true.
So boxing type-params also on functions reduces the overall size of
`Stmt`.
## Test Plan
The `<=` size tests are a bit irritating (since their failure doesn't
tell you the actual size), but I manually confirmed that the size is
actually 120 now.
Occasionally you intentionally have iterables of differing lengths. The
rule permits this by explicitly adding `strict=False`, but this was not
documented.
## Summary
The rule does not currently document how to avoid it when having
differing length iterables is intentional. This PR adds that to the rule
documentation.
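For example (an illustrative snippet along the lines of what the docs now show; the variable names are made up):

```python
names = ["a", "b", "c"]
values = [1, 2]

# The differing lengths are intentional; `strict=False` states that
# explicitly and keeps the rule from firing.
pairs = list(zip(names, values, strict=False))  # [('a', 1), ('b', 2)]
```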
## Summary
This fixes a bug where the parser would panic when there is a "gap" in
the token source.
What's a gap? It's when the end offset of the previous token isn't flush
with the start offset of the current token. The check uses `<=` instead of
just `==` because there could be whitespace between the two tokens. For
example:
```python
# last token end
# | current token (newline) start
# v v
def foo \n
# ^
# assume there's trailing whitespace here
```
Or, there could be tokens that are considered "trivia" and thus aren't
emitted by the token source. These are comments and non-logical newlines.
For example:
```python
# last token end
# v
def foo # comment\n
# ^ current token (newline) start
```
In either of the above cases, there's a "gap" between the end of the
last token and start
of the current token.
## Test Plan
Add test cases and update the snapshots.
## Summary
This PR adds a new `Clause::Case` and uses it to parse the body of a
`case` block. Earlier, it was using `Match` which would give an
incorrect error message like:
```
|
1 | match subject:
2 | case 1:
3 | case 2: ...
| ^^^^ Syntax Error: Expected an indented block after `match` statement
|
```
## Test Plan
Add test case and update the snapshot.
Add pylint rule invalid-hash-returned (PLE0309)
See https://github.com/astral-sh/ruff/issues/970 for rules
Test Plan: `cargo test`
TBD: from the description: "Strictly speaking `bool` is a subclass of
`int`, thus returning `True`/`False` is valid. To be consistent with
other rules (e.g.
[PLE0305](https://github.com/astral-sh/ruff/pull/10962)
invalid-index-returned), ruff will raise, compared to pylint which will
not raise."
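For illustration, the kind of method the rule targets (the class names are hypothetical):

```python
class Bad:
    def __hash__(self):
        return "not an int"  # PLE0309: `__hash__` should return an integer


class Good:
    def __hash__(self):
        return 1
```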
## Summary
This PR fixes the bug in with-items parsing where the parser would fail to
recognize that the parenthesized expression is part of a larger binary
expression.
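A minimal illustration of the kind of input that previously failed (the names are made up; the snippet only needs to parse):

```python
# The leading `(` is ambiguous: it could start a parenthesized list of with
# items, but here it's just a parenthesized operand of a larger `+` expression.
with (a) + b as c:
    ...
```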
## Test Plan
Add test cases and verify the snapshots.
## Summary
This PR fixes the bug in parenthesized with-items parsing where an `if`
expression would result in a syntax error.
The reason is that once we identify that the ambiguous left parenthesis
belongs to the context expression, the parser converts the parsed with item
into an equivalent expression. Then, the parser continues to parse any
postfix expressions. Now, attribute, subscript, and call are taken into
account as they're grouped in `parse_postfix_expression`, but the `if`
expression has its own parsing function.
Use `parse_if_expression` once all postfix expressions have been parsed.
Ideally, I think that `if` could be included in postfix expression
parsing as they can be chained as well (`x if True else y if True else
z`).
## Test Plan
Add test cases and verify the snapshots.
## Summary
This PR fixes a bug in the new parser which involves the parser context
w.r.t. the `for` statement. This is specifically around the `in` keyword, which
can be present in the target expression and shouldn't be considered to
be part of the `for` statement header. Ideally it should use a context
which is passed between functions, thus using a call stack to set /
unset a specific variant which will be done in a follow-up PR as it
requires some amount of refactor.
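A minimal illustration (hypothetical names; the snippet only needs to parse): the first `in` below belongs to the subscript inside the target expression, not to the `for ... in` header.

```python
for d[x in y] in iterable:
    ...
```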
## Test Plan
Add test cases and update the snapshots.
(Supersedes #9152, authored by @LaBatata101)
## Summary
This PR replaces the current parser, generated from LALRPOP, with a
hand-written recursive descent parser.
It also updates the grammar for [PEP
646](https://peps.python.org/pep-0646/) so that the parser outputs the
correct AST. For example, in `data[*x]`, the index expression is now a
tuple with a single starred expression instead of just a starred
expression.
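For example, using CPython's `ast` module as a reference for the expected shape (requires Python 3.11+, where PEP 646 syntax is accepted):

```python
import ast

node = ast.parse("data[*x]", mode="eval").body
assert isinstance(node.slice, ast.Tuple)            # the index is a tuple ...
assert isinstance(node.slice.elts[0], ast.Starred)  # ... wrapping the starred expression
```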
Beyond the performance improvements, the parser is also error resilient
and can provide better error messages. The behavior as seen by any
downstream tools isn't changed. That is, the linter and formatter can
still assume that the parser will _stop_ at the first syntax error. This
will be updated in the following months.
For more details about the change here, refer to the PR corresponding to
the individual commits and the release blog post.
## Test Plan
Write _lots_ and _lots_ of tests for both valid and invalid syntax and
verify the output.
## Acknowledgements
- @MichaReiser for reviewing 100+ parser PRs and continuously providing
guidance throughout the project
- @LaBatata101 for initiating the transition to a hand-written parser in
#9152
- @addisoncrump for implementing the fuzzer which helped
[catch](https://github.com/astral-sh/ruff/pull/10903)
[a](https://github.com/astral-sh/ruff/pull/10910)
[lot](https://github.com/astral-sh/ruff/pull/10966)
[of](https://github.com/astral-sh/ruff/pull/10896)
[bugs](https://github.com/astral-sh/ruff/pull/10877)
---------
Co-authored-by: Victor Hugo Gomes <labatata101@linuxmail.org>
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
The following client settings have been introduced to the language
server:
* `lint.preview`
* `format.preview`
* `lint.select`
* `lint.extendSelect`
* `lint.ignore`
* `exclude`
* `lineLength`
`exclude` and `lineLength` apply to both the linter and formatter.
This does not actually use the settings yet, but makes them available
for future use.
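For illustration, roughly how a client might supply these settings (a hypothetical sketch written as a Python dict; the exact schema may differ):

```python
initialization_options = {
    "settings": {
        "lineLength": 100,
        "exclude": ["third_party", "**/generated"],
        "lint": {
            "preview": True,
            "select": ["E", "F"],
            "extendSelect": ["I"],
            "ignore": ["E501"],
        },
        "format": {"preview": True},
    }
}
```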
## Test Plan
Snapshot tests have been updated.
## Summary
A setup guide has been written for NeoVim under a new
`crates/ruff_server/docs/setup` folder, where future setup guides will
also go. This setup guide was adapted from the [`ruff-lsp`
guide](https://github.com/astral-sh/ruff-lsp?tab=readme-ov-file#example-neovim).
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
Add pylint rule invalid-length-returned (PLE0303)
See https://github.com/astral-sh/ruff/issues/970 for rules
Test Plan: `cargo test`
TBD: from the description: "Strictly speaking `bool` is a subclass of
`int`, thus returning `True`/`False` is valid. To be consistent with
other rules (e.g.
[PLE0305](https://github.com/astral-sh/ruff/pull/10962)
invalid-index-returned), ruff will raise, compared to pylint which will
not raise."
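For illustration, the kind of method this rule targets (the class names are hypothetical):

```python
class Bad:
    def __len__(self):
        return "0"  # PLE0303: `__len__` should return a non-negative integer


class Good:
    def __len__(self):
        return 0
```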
## Summary
If the user is analyzing a script (i.e., we have no module path), it
seems reasonable to use the script name when trying to identify paths to
objects defined _within_ the script.
Closes https://github.com/astral-sh/ruff/issues/10960.
## Test Plan
Ran:
```shell
check --isolated --select=B008 \
--config 'lint.flake8-bugbear.extend-immutable-calls=["test.A"]' \
test.py
```
On:
```python
class A: pass
def f(a=A()):
pass
```