## Summary
The implementation here differed from the non-`stdin` version; it's now
more consistent.
## Test Plan
```
❯ cat Untitled.ipynb | cargo run -p ruff_cli -- check --stdin-filename Untitled.ipynb --diff -n
Finished dev [unoptimized + debuginfo] target(s) in 0.11s
Running `target/debug/ruff check --stdin-filename Untitled.ipynb --diff -n`
--- Untitled.ipynb:cell 2
+++ Untitled.ipynb:cell 2
@@ -1 +0,0 @@
-import os
--- Untitled.ipynb:cell 4
+++ Untitled.ipynb:cell 4
@@ -1 +0,0 @@
-import sys
```
## Summary
This PR fixes the bug where the formatter would panic if a class/function with
decorators had a suppression comment.
The fix is to use the correct start location to find the `async`/`def`/`class`
keyword when decorators are present, which is the end of the last
decorator.
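A hedged illustration of the kind of input involved (the exact reproduction may differ): a decorated function carrying a formatter suppression comment.
```python
def decorator(func):
    return func

@decorator
def f():  # fmt: skip
    pass
```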
## Test Plan
Add test cases for the fix and update the snapshots.
- Only trigger for immediately adjacent `isinstance()` calls with the same
target (see the example below).
- Preserve the order of the `or` conditions.
Two existing tests changed:
- One was incorrectly reordering the `or` conditions, and is now correct.
- Another was combining two non-adjacent `isinstance()` calls. That's safe
enough in that example, but it isn't safe to do in general, and it feels
low-value to come up with a heuristic for when it is safe, so it seems
better not to combine the calls in that case.
Fixes https://github.com/astral-sh/ruff/issues/7797
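As a hedged sketch of the intended behavior, only a pair of immediately adjacent `isinstance()` calls on the same target is merged:
```python
x = 3.14

# Immediately adjacent `isinstance()` calls on the same target...
if isinstance(x, int) or isinstance(x, float):
    print("numeric")

# ...are merged into a single call with a tuple of types:
if isinstance(x, (int, float)):
    print("numeric")
```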
## Summary
We now list each changed file when running with `--check`.
Closes https://github.com/astral-sh/ruff/issues/7782.
## Test Plan
```
❯ cargo run -p ruff_cli -- format foo.py --check
Compiling ruff_cli v0.0.292 (/Users/crmarsh/workspace/ruff/crates/ruff_cli)
Finished dev [unoptimized + debuginfo] target(s) in 1.41s
Running `target/debug/ruff format foo.py --check`
warning: `ruff format` is a work-in-progress, subject to change at any time, and intended only for experimentation.
Would reformat: foo.py
1 file would be reformatted
```
## Summary
Check that the sequence type is a list, set, dict, or tuple before
recommending replacing the `enumerate(...)` call with `range(len(...))`.
Document behaviour so users are aware of the type inference limitation
leading to false negatives.
Closes #7656.
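As a hedged illustration of the behavior (not taken from the rule's test suite): only the first loop, whose argument can be inferred to be a list, gets the `range(len(...))` suggestion; the second is a false negative due to the type-inference limitation.
```python
books = ["dune", "hobbit"]  # inferred as a list: the suggestion applies
for index, _ in enumerate(books):
    print(index)

def count(items):  # parameter of unknown type: no suggestion (false negative)
    for index, _ in enumerate(items):
        print(index)
```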
## Summary
This PR fixes a bug in the lexer for f-string format spec where it would
consider the `{{` (double curly braces) as an escape pattern.
This is not the case, as is evident from the
[PEP](https://peps.python.org/pep-0701/#how-to-produce-these-new-tokens)
as well, but I had missed the part:
> [..]
> * **If in “format specifier mode” (see step 3), an opening brace ({)
or a closing brace (}).**
> * If not in “format specifier mode” (see step 3), an opening brace ({)
or a closing brace (}) that is not immediately followed by another
opening/closing brace.
## Test Plan
Add a test case to verify the fix and update the snapshot.
fixes: #7778
## Summary
Two of the three listed examples were wrong: one was semantically
incorrect, another was _correct_ but not actually within the scope of
the rule.
Good motivation for us to start linting documentation examples :)
Closes https://github.com/astral-sh/ruff/issues/7773.
## Summary
This change fixes an error in the documentation where `cr-lf` was
displayed as `crlf`; entering the latter into the configuration file
would cause Ruff to break.
## Test Plan
I ran the tests locally, ran the documentation server locally, and
verified the edit.
### [Documentation
Site](https://docs.astral.sh/ruff/settings/#format-line-ending)

### Local

## Summary
We'll revert to the crates.io release once it's up-to-date, but
better to get this out now that Python 3.12 is released.
## Test Plan
`cargo test`
## Summary
This PR enables `ruff format` to format Jupyter notebooks.
Most of the work is contained in a new `format_source` method that
formats a generic `SourceKind`, then returns `Some(transformed)` if the
source required formatting, or `None` otherwise.
Closes https://github.com/astral-sh/ruff/issues/7598.
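A conceptual sketch of the control flow (the real `format_source` is Rust; the helpers and plain-string "source kinds" below are hypothetical stand-ins):
```python
# Conceptual sketch only; these helpers stand in for the real Rust code paths.
def format_notebook_cells(source: str) -> str:
    return source  # placeholder: format each code cell, rebuild the JSON

def format_python_module(source: str) -> str:
    return source.rstrip() + "\n"  # placeholder: run the Python formatter

def format_source(source: str, is_ipynb: bool) -> str | None:
    transformed = format_notebook_cells(source) if is_ipynb else format_python_module(source)
    # Mirror the `Some(transformed)` / `None` contract: report a result only
    # when formatting actually changed the source.
    return transformed if transformed != source else None
```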
## Test Plan
Ran `cat foo.py | cargo run -p ruff_cli -- format --stdin-filename
Untitled.ipynb`; verified that the console showed a reasonable error:
```console
warning: Failed to read notebook Untitled.ipynb: Expected a Jupyter Notebook, which must be internally stored as JSON, but this file isn't valid JSON: EOF while parsing a value at line 1 column 0
```
Ran `cat Untitled.ipynb | cargo run -p ruff_cli -- format
--stdin-filename Untitled.ipynb`; verified that the JSON output
contained formatted source code.
## Summary
When writing back notebooks via `stdout`, we need to write back the
entire JSON content, not _just_ the fixed source code. Otherwise,
writing the output _back_ to the file will yield an invalid notebook.
Closes https://github.com/astral-sh/ruff/issues/7747
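A hedged sketch of the round-trip requirement (not Ruff's actual code): the stdout payload has to be the complete notebook document so that piping it back into the `.ipynb` file yields valid JSON.
```python
# Conceptual sketch: read the whole notebook, fix only the code cells, and
# write the complete JSON document back out, not just the fixed Python source.
import json
import sys

def fix(source):  # hypothetical stand-in for the real fix step
    return source

notebook = json.load(sys.stdin)
for cell in notebook["cells"]:
    if cell["cell_type"] == "code":
        cell["source"] = fix(cell["source"])
json.dump(notebook, sys.stdout)
```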
## Test Plan
`cargo test`
## Summary
It turns out that _some_ identifiers can contain newlines --
specifically, dot-delimited import identifiers, like:
```python
import foo\
.bar
```
At present, we print all identifiers verbatim, which causes us to retain
the `\` in the formatted output. This also leads to violating some debug
assertions (see the linked issue, though that's a symptom of this
formatting failure).
This PR adds detection for import identifiers that contain newlines, and
formats them via `text` (slow) rather than `source_code_slice` (fast) in
those cases.
Closes https://github.com/astral-sh/ruff/issues/7734.
## Test Plan
`cargo test`
## Summary
There's no way for users to fix this warning if they're intentionally
using an "invalid" PEP 593 annotation, as is the case in CPython. This
is a symptom of having warnings that aren't themselves diagnostics. If
we want this to be user-facing, we should add a diagnostic for it!
## Test Plan
Ran `cargo run -p ruff_cli -- check foo.py -n` on:
```python
from typing import Annotated
Annotated[int]
```
## Summary
If we have, e.g.:
```python
sum((
factor.dims for factor in bases
), [])
```
We generate three edits: two insertions (for the `operator` and
`functools` imports), and then one replacement (for the `sum` call
itself). We need to ensure that the insertions come before the
replacement; otherwise, the edits will appear overlapping and
out-of-order.
Closes https://github.com/astral-sh/ruff/issues/7718.
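For reference, the generated edits for the example above produce output roughly like the following (hedged; the exact fix output may differ), which is why the two zero-width import insertions must sort before the replacement of the `sum` call:
```python
import functools
import operator

bases = []  # placeholder so the sketch runs; stands in for the example above

functools.reduce(operator.iadd, (
    factor.dims for factor in bases
), [])
```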
## Summary
This PR fixes a bug where, if a Windows newline (`\r\n`) character was
escaped, only the `\r` was consumed and not the `\n`, leading to an
unterminated string error.
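A hedged illustration: in a file with Windows (CRLF) line endings, the trailing backslash escapes the newline, and the lexer must consume both the `\r` and the `\n` that follow it; consuming only the `\r` left the string looking unterminated.
```python
s = "spam \
eggs"
assert s == "spam eggs"
```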
## Test Plan
Add new test cases to check the newline escapes.
fixes: #7632
## Summary
This PR fixes the bug where the value of a string node includes the
escaped Mac/Windows newline character.
Note that the token value still includes it; it's only removed when
parsing the string content.
## Test Plan
Add new test cases for the string node type to check that the escapes
aren't being included in the string value.
fixes: #7723
## Summary
This PR modifies the `line-too-long` and `doc-line-too-long` rules to
ignore lines that are too long due to the presence of a pragma comment
(e.g., `# type: ignore` or `# noqa`). That is, if a line only exceeds
the limit due to the pragma comment, it will no longer be flagged as
"too long". This behavior mirrors that of the formatter, thus ensuring
that we don't flag lines under E501 that the formatter would otherwise
avoid wrapping.
As a concrete example, given a line length of 88, the following would
_no longer_ be considered an E501 violation:
```python
# The string literal is 88 characters, including quotes.
"shape:shape:shape:shape:shape:shape:shape:shape:shape:shape:shape:shape:shape:shape:sh" # type: ignore
```
This, however, would:
```python
# The string literal is 89 characters, including quotes.
"shape:shape:shape:shape:shape:shape:shape:shape:shape:shape:shape:shape:shape:shape:sha" # type: ignore
```
In addition to mirroring the formatter, this also means that adding a
pragma comment (like `# noqa`) won't _cause_ additional violations to
appear (namely, E501). It's very common for users to add a `# type:
ignore` or similar to a line, only to find that they then have to add a
suppression comment _after_ it that wasn't required before, as in `# type:
ignore # noqa: E501`.
Closes https://github.com/astral-sh/ruff/issues/7471.
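A hedged sketch of the check (the pattern and width logic are simplified relative to the real implementation): measure the line only up to a trailing pragma comment, and flag it if that prefix alone exceeds the limit.
```python
import re

PRAGMA = re.compile(r"#\s*(?:noqa|type:)")

def is_too_long(line: str, limit: int = 88) -> bool:
    match = PRAGMA.search(line)
    # Ignore the pragma comment itself when measuring the line.
    effective = line[: match.start()] if match else line
    return len(effective.rstrip()) > limit
```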
## Test Plan
`cargo test`
## Summary
This PR fixes the bug where the `NotebookIndex` was not being computed
when
using stdin as the input source.
## Test Plan
On `main`, the diagnostic output doesn't include the cell number when
using stdin; after this fix, it does.
### `main`
```console
$ cat ~/playground/ruff/notebooks/test.ipynb | cargo run --bin ruff -- check --isolated --no-cache - --stdin-filename ~/playground/ruff/notebooks/test.ipynb
/Users/dhruv/playground/ruff/notebooks/test.ipynb:2:8: F401 [*] `math` imported but unused
/Users/dhruv/playground/ruff/notebooks/test.ipynb:7:8: F811 Redefinition of unused `random` from line 1
/Users/dhruv/playground/ruff/notebooks/test.ipynb:8:8: F401 [*] `pprint` imported but unused
/Users/dhruv/playground/ruff/notebooks/test.ipynb:12:4: F632 [*] Use `==` to compare constant literals
/Users/dhruv/playground/ruff/notebooks/test.ipynb:13:38: F632 [*] Use `==` to compare constant literals
Found 5 errors.
[*] 4 potentially fixable with the --fix option.
```
### `dhruv/notebook-index-stdin`
```console
$ cat ~/playground/ruff/notebooks/test.ipynb | cargo run --bin ruff -- check --isolated --no-cache - --stdin-filename ~/playground/ruff/notebooks/test.ipynb
/Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 3:2:8: F401 [*] `math` imported but unused
/Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 5:1:8: F811 Redefinition of unused `random` from line 1
/Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 5:2:8: F401 [*] `pprint` imported but unused
/Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 6:2:4: F632 [*] Use `==` to compare constant literals
/Users/dhruv/playground/ruff/notebooks/test.ipynb:cell 6:3:38: F632 [*] Use `==` to compare constant literals
Found 5 errors.
[*] 4 potentially fixable with the --fix option.
```
## Summary
This PR implements a variety of optimizations to improve performance of
the Eradicate rule, which always shows up in all-rules benchmarks and
bothers me. (These improvements are not hugely important, but it was
kind of a fun Friday thing to spend a bit of time on.)
The improvements include:
- Doing cheaper work first (checking for some explicit substrings
upfront).
- Using `aho-corasick` to speed up an exact substring search.
- Merging multiple regular expressions using a `RegexSet`.
- Removing some unnecessary `\s*` and other pieces from the regular
expressions (since we already trim strings before matching on them).
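A rough Python sketch of the overall strategy (the actual code uses the Rust `aho-corasick` and `RegexSet` crates, and the patterns below are illustrative rather than the rule's real pattern set):
```python
import re

# Cheap work first: a plain substring scan is far cheaper than any regex.
QUICK_HITS = ("import ", "print(", "return", "=")

# Several per-pattern regexes merged into a single alternation; note the lack
# of leading `\s*`, since the input is trimmed before matching.
MERGED = re.compile(
    r"^(?:import\s+\w+|from\s+\w+\s+import\b|print\(|return\b|\w+\s*=\s*\S+)"
)

def looks_like_commented_out_code(comment: str) -> bool:
    stripped = comment.lstrip("#").strip()
    if not any(hit in stripped for hit in QUICK_HITS):
        return False
    return MERGED.match(stripped) is not None
```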
## Test Plan
I benchmarked this function in a standalone crate using a variety of
cases. Criterion reports that this version is up to ~84% faster, and
most cases are at least 50% faster:
```
Eradicate/Detection/# Warn if we are installing over top of an existing installation. This can
time: [101.84 ns 102.32 ns 102.82 ns]
change: [-77.166% -77.062% -76.943%] (p = 0.00 < 0.05)
Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) high mild
Eradicate/Detection/#from foo import eradicate
time: [74.872 ns 75.096 ns 75.314 ns]
change: [-84.180% -84.131% -84.079%] (p = 0.00 < 0.05)
Performance has improved.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
Eradicate/Detection/# encoding: utf8
time: [46.522 ns 46.862 ns 47.237 ns]
change: [-29.408% -28.918% -28.471%] (p = 0.00 < 0.05)
Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
6 (6.00%) high mild
1 (1.00%) high severe
Eradicate/Detection/# Issue #999
time: [16.942 ns 16.994 ns 17.058 ns]
change: [-57.243% -57.064% -56.815%] (p = 0.00 < 0.05)
Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
2 (2.00%) high mild
1 (1.00%) high severe
Eradicate/Detection/# type: ignore
time: [43.074 ns 43.163 ns 43.262 ns]
change: [-17.614% -17.390% -17.152%] (p = 0.00 < 0.05)
Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
3 (3.00%) high mild
2 (2.00%) high severe
Eradicate/Detection/# user_content_type, _ = TimelineEvent.objects.using(db_alias).get_or_create(
time: [209.40 ns 209.81 ns 210.23 ns]
change: [-32.806% -32.630% -32.470%] (p = 0.00 < 0.05)
Performance has improved.
Eradicate/Detection/# this is = to that :(
time: [72.659 ns 73.068 ns 73.473 ns]
change: [-68.884% -68.775% -68.655%] (p = 0.00 < 0.05)
Performance has improved.
Found 9 outliers among 100 measurements (9.00%)
7 (7.00%) high mild
2 (2.00%) high severe
Eradicate/Detection/#except Exception:
time: [92.063 ns 92.366 ns 92.691 ns]
change: [-64.204% -64.052% -63.909%] (p = 0.00 < 0.05)
Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
2 (2.00%) high mild
2 (2.00%) high severe
Eradicate/Detection/#print(1)
time: [68.359 ns 68.537 ns 68.725 ns]
change: [-72.424% -72.356% -72.278%] (p = 0.00 < 0.05)
Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) low mild
1 (1.00%) high mild
Eradicate/Detection/#'key': 1 + 1,
time: [79.604 ns 79.865 ns 80.135 ns]
change: [-69.787% -69.667% -69.549%] (p = 0.00 < 0.05)
Performance has improved.
```
## Summary
The parser now uses the raw source code as global context and slices
into it to parse debug text. It turns out we were always passing in the
_old_ source code, so when code was fixed, we were making invalid
accesses. This PR modifies the call to use the _fixed_ source code,
which will always be consistent with the tokens.
Closes https://github.com/astral-sh/ruff/issues/7711.
## Test Plan
`cargo test`
## Summary
This wasn't necessary in the past, since we _only_ applied this rule to
bodies that contained two statements, one of which was a `pass`. Now
that it applies to any `pass` in a block with multiple statements, we
can run into situations in which we remove both passes, and so need to
apply the fixes in isolation.
See:
https://github.com/astral-sh/ruff/issues/7455#issuecomment-1741107573.
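A hedged illustration of the situation: both `pass` statements below are flagged, and removing both in a single pass would leave the body empty (a syntax error), so the fixes must be applied in isolation.
```python
def f():
    pass
    pass
```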
## Summary
The markdown documentation was present, but in the wrong place, so it was
not displaying on the website. I moved it and added some references.
Related to #2646.
## Test Plan
`python scripts/check_docs_formatted.py`
I previously attempted to repair these tests in
https://github.com/astral-sh/ruff/pull/6992, but I don't think we should
prioritize that; instead, I would like to remove this dead code.
## Summary
Extend `unnecessary-pass` (`PIE790`) to trigger on all unnecessary
`pass` statements by checking for `pass` statements in any class or
function body with more than one statement.
Closes #7600.
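A hedged illustration of what the extended rule now flags, beyond the original docstring-plus-`pass` case:
```python
class Foo:
    def bar(self):
        ...

    pass  # unnecessary: the class body already contains another statement
```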
## Test Plan
`cargo test`
Part of #1646.
## Summary
Implement `S505`
([`weak_cryptographic_key`](https://bandit.readthedocs.io/en/latest/plugins/b505_weak_cryptographic_key.html))
rule from `bandit`.
For this rule, `bandit` [reports the issue
with](https://github.com/PyCQA/bandit/blob/1.7.5/bandit/plugins/weak_cryptographic_key.py#L47-L56):
- medium severity for DSA/RSA < 2048 bits and EC < 224 bits
- high severity for DSA/RSA < 1024 bits and EC < 160 bits
Since Ruff does not handle severities for `bandit`-related rules, we
could either report the issue for key sizes below the medium-severity
thresholds, or only for those below the high-severity thresholds. Two
reasons led me to choose the first option:
- a medium severity issue is still a security issue we would want to
report to the user, who can then decide to either handle the issue or
ignore it
- `bandit` [maps the EC key algorithms to their respective key lengths in
bits](https://github.com/PyCQA/bandit/blob/1.7.5/bandit/plugins/weak_cryptographic_key.py#L112-L133),
but there is no value below 160 bits, so technically `bandit` would
never report high-severity issues for EC keys, only medium ones
Another consideration: as noted above, for EC key algorithms, `bandit`
maintains a mapping from algorithm to key length. In the Ruff
implementation, I instead went with an explicit list of EC algorithms
known to be vulnerable (which are thus reported), rather than
implementing a mapping to retrieve the associated key length and compare
it with the minimum value.
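As a hedged illustration (not taken from the snapshot tests), inputs along these lines are expected to be flagged, using the `cryptography` package's key-generation APIs:
```python
from cryptography.hazmat.primitives.asymmetric import dsa, ec, rsa

dsa.generate_private_key(key_size=1024)                         # flagged: < 2048 bits
rsa.generate_private_key(public_exponent=65537, key_size=1024)  # flagged: < 2048 bits
ec.generate_private_key(curve=ec.SECT163R2())                   # flagged: known-weak curve
rsa.generate_private_key(public_exponent=65537, key_size=4096)  # not flagged
```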
## Test Plan
Snapshot tests from
https://github.com/PyCQA/bandit/blob/1.7.5/examples/weak_cryptographic_key_sizes.py.