## Summary
As the title says, this PR removes the `Message::to_rule` method by
replacing related uses of `Rule` with `NoqaCode` (or the rule's name in
the case of the cache). Where it seemed a `Rule` was really needed, we
convert back to the `Rule` by parsing either the rule name (with
`str::parse`) or the `NoqaCode` (with `Rule::from_code`).
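As a toy illustration of those two conversions (the `Rule` enum below is a stand-in for ruff's real one, with a single example mapping):

```rust
use std::str::FromStr;

// Stand-in for ruff's real `Rule`; only the two conversion paths matter here.
#[derive(Debug, PartialEq)]
enum Rule {
    UnusedImport,
}

impl FromStr for Rule {
    type Err = ();

    // Enables the `str::parse` path from a rule name.
    fn from_str(name: &str) -> Result<Self, Self::Err> {
        match name {
            "unused-import" => Ok(Rule::UnusedImport),
            _ => Err(()),
        }
    }
}

impl Rule {
    // Mirrors the `Rule::from_code` path from a noqa code.
    fn from_code(code: &str) -> Result<Self, ()> {
        match code {
            "F401" => Ok(Rule::UnusedImport),
            _ => Err(()),
        }
    }
}

fn main() {
    let by_name: Rule = "unused-import".parse().unwrap();
    let by_code = Rule::from_code("F401").unwrap();
    assert_eq!(by_name, by_code);
}
```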
I thought this was kind of like cheating and that it might not resolve
this part of Micha's
[comment](https://github.com/astral-sh/ruff/pull/18391#issuecomment-2933764275):
> because we can't add Rule to Diagnostic or **have it anywhere in our
shared rendering logic**
but after looking again, the only remaining `Rule` conversion in
rendering code is for the SARIF output format. The other two non-test
`Rule` conversions are for caching and writing a fix summary, which I
don't think fall into the shared rendering logic. That leaves the SARIF
format as the only real problem, but maybe we can delay that for now.
The motivation here is that we won't be able to store a `Rule` on the
new `Diagnostic` type, but we should be able to store a `NoqaCode`,
likely as a string.
## Test Plan
Existing tests
## [Benchmarks](https://codspeed.io/astral-sh/ruff/branches/brent%2Fremove-to-rule)
Almost no perf regression, only -1% on
`linter/default-rules[large/dataset.py]`.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This is a practice I followed on previous projects. Should hopefully
further help developers who want to update the documentation.
The big downside is that it's annoying to see this *as a user of the
documentation* if you read the Markdown file directly rather than in
the browser. But I'd argue that those files don't really follow the
original Markdown spirit anyway, with all the inline HTML.
## Summary
I decided to disable the new
[`needless_continue`](https://rust-lang.github.io/rust-clippy/master/index.html#needless_continue)
rule because I often found the explicit `continue` more readable than
an empty block or having to invert the condition of another branch.
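For illustration, here's a hypothetical loop of the shape the lint flags (the names and bodies are made up):

```rust
fn process_all(items: &[i32]) {
    for &item in items {
        if item >= 0 {
            println!("processing {item}");
        } else {
            // `needless_continue` fires here: the `continue` is redundant
            // because the loop would move on anyway. I find the explicit
            // form clearer than an empty block or an inverted condition.
            continue;
        }
    }
}

fn main() {
    process_all(&[1, -2, 3]);
}
```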
## Test Plan
`cargo test`
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This is part of the preparation for detecting syntax errors in the
parser from https://github.com/astral-sh/ruff/pull/16090/. As suggested
in [this
comment](https://github.com/astral-sh/ruff/pull/16090/#discussion_r1953084509),
I started working on a `ParseOptions` struct that could be stored in the
parser. For this initial refactor, I only made it hold the existing
`Mode` option, but for syntax errors, we will also need it to have a
`PythonVersion`. For that use case, I'm picturing something like a
`ParseOptions::with_python_version` method, so you can extend the
current calls to something like
```rust
ParseOptions::from(mode).with_python_version(settings.target_version)
```
But I thought it was worth adding `ParseOptions` alone without changing
any other behavior first.
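Concretely, here's a self-contained sketch of that shape; the names mirror the PR, but the bodies are illustrative rather than the real implementation:

```rust
// Illustrative only: a `ParseOptions` that holds the existing `Mode` and,
// later, a `PythonVersion` set via a builder-style method.
#[derive(Clone, Copy, Debug, Default)]
enum Mode {
    #[default]
    Module,
    Expression,
}

#[derive(Clone, Copy, Debug, Default)]
struct PythonVersion(u8, u8);

#[derive(Clone, Copy, Debug, Default)]
struct ParseOptions {
    mode: Mode,
    python_version: PythonVersion,
}

impl From<Mode> for ParseOptions {
    fn from(mode: Mode) -> Self {
        Self { mode, ..Self::default() }
    }
}

impl ParseOptions {
    fn with_python_version(mut self, python_version: PythonVersion) -> Self {
        self.python_version = python_version;
        self
    }
}

fn main() {
    let options = ParseOptions::from(Mode::Module).with_python_version(PythonVersion(3, 12));
    println!("{options:?}");
}
```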
Most of the diff is just updating call sites taking `Mode` to take
`ParseOptions::from(Mode)` or those taking `PySourceType`s to take
`ParseOptions::from(PySourceType)`. The interesting changes are in the
new `parser/options.rs` file and smaller parts of `parser/mod.rs` and
`ruff_python_parser/src/lib.rs`.
## Test Plan
Existing tests, this should not change any behavior.
## Summary
Adds a JSON schema generation step for Red Knot. This PR doesn't yet add
a publishing step because it's still a bit early for that.
## Test plan
I tested the schema in Zed, VS Code and PyCharm:
* PyCharm: You have to manually add a schema mapping (settings JSON
Schema Mappings)
* Zed and VS code support the inline schema specification
```toml
#:schema /Users/micha/astral/ruff/knot.schema.json
[environment]
extra-paths = []
[rules]
call-possibly-unbound-method = "error"
unknown-rule = "error"
# duplicate-base = "error"
```
```json
{
  "$schema": "file:///Users/micha/astral/ruff/knot.schema.json",
  "environment": {
    "python-version": "3.13",
    "python-platform": "linux2"
  },
  "rules": {
    "unknown-rule": "error"
  }
}
```
https://github.com/user-attachments/assets/a18fcd96-7cbe-4110-985b-9f1935584411
The schema overall works, but all editors have their own quirks:
* PyCharm: Hovering a name always shows the section description instead
of the description of the specific setting. But it's the same for other
settings in `pyproject.toml` files 🤷
* VS Code (JSON): Using the generated schema in a JSON file gives
exactly the experience I want
* VS Code (TOML):
  * Properties with multiple possible values are repeated during
auto-completion without giving any hint how they're different.
  * The property description mushes together the description of the
property and the value, which looks sort of ridiculous.
  * Auto-completion and documentation hovering work (except for the
limitations mentioned above)
* Zed:
  * Very similar to VS Code, with the exception that it uses the
description attribute to distinguish settings with multiple possible
values
I don't think there's much we can do here other than hope (or help)
editors improve their auto-completion. The same shortcomings also apply
to Ruff, so this isn't something new. For now, I think this is good
enough.
## Summary
I look at the token stream a lot, not specifically in the playground
but in the terminal output, and it's annoying to scroll a lot to find a
specific location. Most of the information is also redundant.
The final format we end up with is `<kind> <range> (flags = ...)`,
e.g., `String 0..4 (flags = BYTE_STRING)`, where the flags part is only
populated if any flags are set.
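Here's a toy reproduction of that format (the `Token` struct is hypothetical, not the real one):

```rust
use std::fmt;

// Hypothetical token: kind, byte range, and a set of flags.
struct Token {
    kind: &'static str,
    start: u32,
    end: u32,
    flags: Vec<&'static str>,
}

impl fmt::Display for Token {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} {}..{}", self.kind, self.start, self.end)?;
        // The flags suffix only appears when at least one flag is set.
        if !self.flags.is_empty() {
            write!(f, " (flags = {})", self.flags.join(" | "))?;
        }
        Ok(())
    }
}

fn main() {
    let token = Token { kind: "String", start: 0, end: 4, flags: vec!["BYTE_STRING"] };
    assert_eq!(token.to_string(), "String 0..4 (flags = BYTE_STRING)");
}
```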
## Summary
This PR updates the entire parser stack in multiple ways:
### Make the lexer lazy
* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473
Previously, Ruff's lexer would act as an iterator. The parser would
collect all the tokens in a vector first and then process the tokens to
create the syntax tree.
The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
to filter out the trivia tokens[^1]. Now, the parser asks the token
source for the next token, and only then does the lexer continue and
emit it. This means that the lexer needs to be aware of the "current"
token. When `next_token` is called, the current token is updated with
the newly lexed token.
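As a toy model of that pull-based flow (all types here are made up; the real lexer is far more involved):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum TokenKind {
    Word,
    EndOfFile,
}

// Hypothetical lazy lexer: it tracks a "current" token and only lexes
// when the parser (via the token source) asks for the next one.
struct Lexer<'src> {
    source: &'src str,
    offset: usize,
    current: TokenKind,
}

impl<'src> Lexer<'src> {
    fn new(source: &'src str) -> Self {
        Self { source, offset: 0, current: TokenKind::EndOfFile }
    }

    /// Lex exactly one token on demand and make it the current token.
    fn next_token(&mut self) -> TokenKind {
        let rest = &self.source[self.offset..];
        let trimmed = rest.trim_start();
        self.offset += rest.len() - trimmed.len();
        self.current = if trimmed.is_empty() {
            TokenKind::EndOfFile
        } else {
            let end = trimmed.find(char::is_whitespace).unwrap_or(trimmed.len());
            self.offset += end;
            TokenKind::Word
        };
        self.current
    }
}

fn main() {
    // Nothing is lexed up front; tokens are pulled one at a time.
    let mut lexer = Lexer::new("del x");
    assert_eq!(lexer.next_token(), TokenKind::Word);
    assert_eq!(lexer.next_token(), TokenKind::Word);
    assert_eq!(lexer.next_token(), TokenKind::EndOfFile);
}
```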
The main motivation for making the lexer lazy is to allow re-lexing a
token in a different context. This is going to be really useful for
making the parser error resilient. For example, currently the emitted
tokens remain the same even if the parser can recover from an unclosed
parenthesis. This matters because the lexer emits a
`NonLogicalNewline` in a parenthesized context but a normal `Newline`
in a non-parenthesized context. These different kinds of newlines are
also used to emit the indentation tokens, which are important for the
parser as they're used to determine the start and end of a block.
Additionally, this allows us to implement the following functionalities:
1. Checkpoint-rewind infrastructure: The idea here is to create a
checkpoint and continue lexing. At a later point, the lexer can be
rewound back to that checkpoint.
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer, but it contains owned data, which makes it expensive to
clone. The new `TokenKind` enum just represents the type of the token,
which is very cheap.
This brings up the question of how the parser will get the owned value
that was stored on `Tok`. This is solved by introducing a new
`TokenValue` enum which contains only the subset of token kinds that
carry an owned value. It is stored on the lexer and requested by the
parser when it wants to process the data. For example:
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L1260-L1262)
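Illustrative shapes only (not the real definitions): `Tok` carried owned data, while `TokenKind` is a cheap, copyable discriminant and the owned data moves into a `TokenValue` that the parser requests on demand:

```rust
// Cheap discriminant: no owned data, trivially copyable.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum TokenKind {
    Name,
    String,
    Newline,
}

// Owned data for the subset of tokens that carry a value.
#[derive(Debug, Default)]
enum TokenValue {
    #[default]
    None,
    Name(Box<str>),
    String(Box<str>),
}

// Hypothetical lexer state: the current token's kind and its value.
struct Lexer {
    current_kind: TokenKind,
    current_value: TokenValue,
}

impl Lexer {
    /// Hand the owned value for the current token to the parser.
    fn take_value(&mut self) -> TokenValue {
        std::mem::take(&mut self.current_value)
    }
}

fn main() {
    let mut lexer = Lexer {
        current_kind: TokenKind::Name,
        current_value: TokenValue::Name("x".into()),
    };
    // The parser checks the cheap kind, then requests the owned value
    // only when it actually needs to process the data.
    if lexer.current_kind == TokenKind::Name {
        let value = lexer.take_value();
        println!("{value:?}");
    }
}
```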
[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`
### Remove `SoftKeywordTransformer`
* https://github.com/astral-sh/ruff/pull/11441
* https://github.com/astral-sh/ruff/pull/11459
* https://github.com/astral-sh/ruff/pull/11442
* https://github.com/astral-sh/ruff/pull/11443
* https://github.com/astral-sh/ruff/pull/11474
For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser, using infinite
lookahead to classify a soft keyword as a keyword or an identifier.
This is a brilliant idea, as it basically wraps the existing `Lexer`
and works on top of it, which means that the logic for lexing and
re-lexing a soft keyword remains separate. The change here is to
remove the `SoftKeywordTransformer` and let the parser determine this
based on context, lookahead, and speculative parsing.
* **Context:** The transformer needs to know whether the lexer is at a
statement position or a simple-statement position. This is because a
`match` token starts a compound statement while a `type` token starts a
simple statement. **The parser already knows this.**
* **Lookahead:** Now that the parser knows the context, it can perform
a lookahead of up to two tokens to classify the soft keyword. The logic
for this is mentioned in the PRs implementing it for the `type` and
`match` soft keywords.
* **Speculative parsing:** This is where the checkpoint-rewind
infrastructure helps. For the `match` soft keyword, there are certain
cases that we can't classify based on lookahead. The idea here is to
create a checkpoint and keep parsing. Based on whether the parsing was
successful and what tokens are ahead, we can classify the remaining
cases. Refer to #11443 for more details.
If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer to
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L487-L491).
The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.
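As a minimal, self-contained sketch of the checkpoint-rewind idea (all names and the token representation are made up, not the real parser):

```rust
struct Checkpoint(usize);

// Toy parser over a pre-tokenized slice, standing in for the real one.
struct Parser<'a> {
    tokens: &'a [&'a str],
    pos: usize,
}

impl<'a> Parser<'a> {
    fn checkpoint(&self) -> Checkpoint {
        Checkpoint(self.pos)
    }

    fn rewind(&mut self, checkpoint: Checkpoint) {
        self.pos = checkpoint.0;
    }

    /// Speculatively parse `match` as a statement; rewind on failure.
    fn classify_match(&mut self) -> &'static str {
        let checkpoint = self.checkpoint();
        self.pos += 1; // consume `match`
        // Crude speculative check: a match statement needs a subject
        // followed by a colon.
        let is_statement = self.tokens.get(self.pos).is_some()
            && self.tokens.get(self.pos + 1) == Some(&":");
        if !is_statement {
            // `match` was an identifier after all; rewind and re-parse.
            self.rewind(checkpoint);
            return "identifier";
        }
        "keyword"
    }
}

fn main() {
    let mut stmt = Parser { tokens: &["match", "x", ":"], pos: 0 };
    assert_eq!(stmt.classify_match(), "keyword");

    let mut expr = Parser { tokens: &["match", "="], pos: 0 };
    assert_eq!(expr.classify_match(), "identifier");
}
```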
### Update the parser API
* https://github.com/astral-sh/ruff/pull/11494
* https://github.com/astral-sh/ruff/pull/11505
Now that the lexer is in sync with the parser, and the parser helps to
determine whether a soft keyword is a keyword or an identifier, the
lexer cannot be used on its own. The reason is that it's not sensitive
to the context (which is correct). This means that the parser API needs
to be updated to not allow any access to the lexer.
Previously, there were multiple ways to parse the source code:
1. Passing the source code itself
2. Or, passing the tokens
Now that the lexer and parser work together, the API corresponding to
(2) cannot exist. The final API is mentioned in this PR's description:
https://github.com/astral-sh/ruff/pull/11494.
### Refactor the downstream tools (linter and formatter)
* https://github.com/astral-sh/ruff/pull/11511
* https://github.com/astral-sh/ruff/pull/11515
* https://github.com/astral-sh/ruff/pull/11529
* https://github.com/astral-sh/ruff/pull/11562
* https://github.com/astral-sh/ruff/pull/11592
And, the final set of changes involves updating all references to the
lexer and the `Tok` enum. This was done in two parts:
1. Update all the references in a way that doesn't require any changes
from this PR i.e., it can be done independently
* https://github.com/astral-sh/ruff/pull/11402
* https://github.com/astral-sh/ruff/pull/11406
* https://github.com/astral-sh/ruff/pull/11418
* https://github.com/astral-sh/ruff/pull/11419
* https://github.com/astral-sh/ruff/pull/11420
* https://github.com/astral-sh/ruff/pull/11424
2. Update all the remaining references to use the changes made in this
PR
For (2), various strategies were used:
1. Introduce a new `Tokens` struct which wraps the token vector and
adds methods to query certain subsets of tokens (see the sketch after
this list). These include:
    1. `up_to_first_unknown`, which replaces the `tokenize` function
    2. `in_range` and `after`, which replace the `lex_starts_at`
       function, where the former returns the tokens within the given
       range while the latter returns all the tokens after the given
       offset
2. Introduce a new `TokenFlags`, a set of flags for querying certain
information from a token. Currently, this information is limited to
string-type tokens, but it can be expanded to include other information
in the future as needed. https://github.com/astral-sh/ruff/pull/11578
3. Move the `CommentRanges` to the parsed output, because this
information is common to both the linter and the formatter. This
removes the need for the `tokens_and_ranges` function.
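A hypothetical, self-contained sketch of the `Tokens` wrapper from (1), using plain `u32` offsets in place of the real range types:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum TokenKind {
    Name,
    Unknown,
}

#[derive(Clone, Copy, Debug)]
struct Token {
    kind: TokenKind,
    start: u32,
    end: u32,
}

// Wraps the token vector and exposes the query methods described above.
struct Tokens(Vec<Token>);

impl Tokens {
    /// Tokens up to (but not including) the first `Unknown` token.
    fn up_to_first_unknown(&self) -> &[Token] {
        let end = self
            .0
            .iter()
            .position(|token| token.kind == TokenKind::Unknown)
            .unwrap_or(self.0.len());
        &self.0[..end]
    }

    /// Tokens that start at or after the given offset.
    fn after(&self, offset: u32) -> &[Token] {
        let first = self
            .0
            .iter()
            .position(|token| token.start >= offset)
            .unwrap_or(self.0.len());
        &self.0[first..]
    }

    /// Tokens that fall entirely within `start..end`.
    fn in_range(&self, start: u32, end: u32) -> &[Token] {
        let tokens = self.after(start);
        let last = tokens
            .iter()
            .position(|token| token.end > end)
            .unwrap_or(tokens.len());
        &tokens[..last]
    }
}

fn main() {
    let tokens = Tokens(vec![
        Token { kind: TokenKind::Name, start: 0, end: 3 },
        Token { kind: TokenKind::Name, start: 4, end: 7 },
        Token { kind: TokenKind::Unknown, start: 8, end: 9 },
    ]);
    assert_eq!(tokens.up_to_first_unknown().len(), 2);
    assert_eq!(tokens.after(4).len(), 2);
    assert_eq!(tokens.in_range(0, 7).len(), 2);
}
```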
## Test Plan
- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects
### Running this change on dozens of open-source projects
Refer to the PR description to get the list of open source projects used
for testing.
Now, the following tests were done between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`
**Conclusion: all outputs were the same.**
## What's next?
The next step is to introduce re-lexing logic and update the parser to
feed recovery information to the lexer so that it can emit the correct
tokens. This moves us one step closer to error resilience in the parser
and gives Ruff the ability to lint even when the source code contains
syntax errors.
## Summary
Some contributors have referenced settings in their documentation
without adding those settings to an options section, which has led to
some rendering issues (#10427). This PR addresses this by looking for
potential inline links to settings, cross-checking them against the
options sections, and linking them anyway if they are not found.
Resolves #10427.
## Test Plan
Manually verified that the correct modifications were made and no docs
were broken.
## Summary
I used `cargo-shear` (see
[tweet](https://twitter.com/boshen_c/status/1770106165923586395)) to
remove some unused dependencies that `cargo udeps` wasn't reporting.
## Test Plan
`cargo test`
## Summary
The upstream category check here
fd26b29986/crates/ruff_linter/src/upstream_categories.rs (L54-L65)
was not working because the code is actually "E0001", not "PLE0001".
I changed it so that it detects the upstream category correctly.
I also sorted the upstream categories alphabetically, so that the
document generation will be deterministic.
## Test Plan
I compared the diff before and after the change.
Fixes #8368
Fixes https://github.com/astral-sh/ruff/issues/9186
## Summary
Arbitrary TOML strings can be provided via the command-line to override
configuration options in `pyproject.toml` or `ruff.toml`. As an example:
to run over typeshed and respect typeshed's `pyproject.toml`, but
override a specific isort setting and enable an additional pep8-naming
setting:
```
cargo run -- check ../typeshed --no-cache --config ../typeshed/pyproject.toml --config "lint.isort.combine-as-imports=false" --config "lint.extend-select=['N801']"
```
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Co-authored-by: Zanie Blue <contact@zanie.dev>
Similar to https://github.com/astral-sh/ruff/pull/9689 — retains removed
rules for better error messages and documentation but removed rules
_cannot_ be used in any context.
Removes PLR1706, which serves as a useful test case and is something
we want to accomplish in #9680 anyway. The rule was in preview, so we
do not need to deprecate it first.
Closes https://github.com/astral-sh/ruff/issues/9007
## Test plan
<img width="1110" alt="Rules table"
src="ac9fa682-623c-44aa-8e51-d8ab0d308355">
<img width="1110" alt="Rule page"
src="05850b2d-7ca5-49bb-8df8-bb931bab25cd">
Adds a new `Deprecated` rule group in addition to `Stable` and
`Preview`.
Deprecated rules:
- Warn on explicit selection without preview
- Error on explicit selection with preview
- Are excluded when selected by prefix with preview
Deprecates `TRY200`, `ANN101`, and `ANN102` as a proof of concept. We
can consider deprecating them separately.
Previously, without the 'wrap_help' feature enabled, Clap would not do
any auto-wrapping of help text. For help text with long lines, this
tends to lead to non-ideal formatting. It can be especially difficult to
read when the width of the terminal is smaller.
This commit enables 'wrap_help', which will automatically cause Clap to
query the terminal size and wrap according to that. Or, if the terminal
size cannot be determined, it will default to a maximum line width of
100.
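For reference, enabling it is just a Cargo feature on the dependency (the version shown is illustrative):

```toml
# Illustrative Cargo.toml excerpt; only the "wrap_help" feature matters here.
[dependencies]
clap = { version = "4", features = ["wrap_help"] }
```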
Ref https://github.com/astral-sh/ruff/pull/9599#discussion_r1464992692
## Summary
Long ago, we had a single `ruff` crate. We started to break that up, and
at some point, we wanted to separate the CLI from the core library. So
we created `ruff_cli`, which created a `ruff` binary. Later, the `ruff`
crate was renamed to `ruff_linter` and further broken up into additional
crates.
(This is all from memory -- I didn't bother to look through the history
to ensure that this is 100% correct :))
Now that `ruff` no longer exists, this PR renames `ruff_cli` to `ruff`.
The primary benefit is that the binary target and the crate name are now
the same, which helps with downstream tooling like `cargo-dist`, and
also removes some complexity from the crate and `Cargo.toml` itself.
## Test Plan
- Ran `rm -rf target/release`.
- Ran `cargo build --release`.
- Verified that `./target/release/ruff` was created.
## Summary
Sort of a random PR to make the coupling between `pyproject_config` and
`resolver` more explicit by passing it to the `Resolver`, rather than
threading it through to each individual method.
## Summary
This PR modifies our `Cargo.toml` files to use workspace dependencies
for _all_ dependencies, rather than the status quo of sporadically
trying to use workspace dependencies for those dependencies that are
used across multiple crates. I find the current situation more confusing
and harder to manage, since we have a mix of workspace and crate-local
dependencies, whereas this setup consistently uses the same approach for
all dependencies.
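As a minimal illustration of the pattern (the crate name and version here are examples, not Ruff's actual dependency set):

```toml
# Root Cargo.toml: declare each dependency once for the whole workspace.
[workspace.dependencies]
serde = { version = "1.0", features = ["derive"] }

# Each member crate's Cargo.toml then references it:
# [dependencies]
# serde = { workspace = true }
```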
## Summary
I always found it odd that we had to pass this in, since it's really
higher-level context for the error. The awkwardness is further evidenced
by the fact that we pass in fake values everywhere (even outside of
tests). The source path isn't actually used to display the error; it's
only accessed elsewhere to _re-display_ the error in certain cases. This
PR modifies the code to pass the path directly in those cases instead.
## Summary
If a rule ends with a trailing placeholder (like "Use {target}"), that
gets interpreted as an HTML attribute, adding `target="target"` to the
node. This PR escapes such cases. In reality, they're rare, since we
almost always wrap placeholders in backticks, which avoids this problem
-- but in some cases, they are in fact correct to be un-backticked.
Closes https://github.com/astral-sh/ruff/issues/9288.
## Test Plan
<img width="673" alt="Screen Shot 2023-12-28 at 9 33 40 AM"
src="14aaa168-c802-4258-b82d-217a85b42ebf">
Update to [Rust
1.74](https://blog.rust-lang.org/2023/11/16/Rust-1.74.0.html) and use
the new clippy lints table.
The update itself introduced a new clippy lint about superfluous hashes
in raw strings; the offending hashes got removed.
I moved our lint config from `rustflags` to the newly stabilized
[workspace.lints](https://doc.rust-lang.org/stable/cargo/reference/workspaces.html#the-lints-table).
One consequence is that we have to use `unsafe_code = "warn"` instead
of `"forbid"`, because the latter now actually bans unsafe code:
```
error[E0453]: allow(unsafe_code) incompatible with previous forbid
--> crates/ruff_source_file/src/newlines.rs:62:17
|
62 | #[allow(unsafe_code)]
| ^^^^^^^^^^^ overruled by previous forbid
|
= note: `forbid` lint level was set on command line
```
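For reference, the shape of the new configuration (an illustrative excerpt, not Ruff's exact tables):

```toml
# Root Cargo.toml: lint levels now live in the workspace lints table.
[workspace.lints.rust]
unsafe_code = "warn"

# Each member crate opts in via its own Cargo.toml:
# [lints]
# workspace = true
```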
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>