Commit graph

168 commits

Carl Meyer
fa5b19d4b6
[red-knot] use a simpler builtin in the benchmark (#12393)
In preparation for supporting resolving builtins, simplify the benchmark
so it doesn't look up `str`, which is actually a complex builtin to deal
with because it inherits `Sequence[str]`.

Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
2024-07-18 14:04:33 -07:00
Micha Reiser
91338ae902
[red-knot] Add basic workspace support (#12318) 2024-07-17 11:34:21 +02:00
Micha Reiser
ac04380f36
[red-knot] Rename FileSystem to System (#12214) 2024-07-09 07:20:51 +00:00
Micha Reiser
9d61727289
[red-knot] Exclude drop time in benchmark (#12218) 2024-07-06 17:35:00 +02:00
Alex Waygood
a62a432a48
[red-knot] Respect typeshed's VERSIONS file when resolving stdlib modules (#12141) 2024-07-05 22:43:31 +00:00
Micha Reiser
e2e0889a30
[red-knot] Add very basic benchmark (#12182) 2024-07-04 15:29:00 +00:00
Dhruv Manilawala
72b6c26101
Simplify LinterResult, avoid cloning ParseError (#11903)
## Summary

Follow-up to #11902

This PR simplifies the `LinterResult` struct by avoiding the generic and
not storing the `ParseError`.

This is possible because the callers already have access to the
`ParseError` via the `Parsed` output. This also means that we can
simplify the return type of `check_path` and avoid the generic `T` on
`LinterResult`.
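
A rough sketch of the shape of this change (field names approximated, not
the exact `ruff_linter` definitions):

```rust
// Before: generic over the payload and carrying the parse error itself.
struct LinterResultBefore<T> {
    data: T,
    error: Option<ParseError>,
}

// After: no generic and no stored error; callers read any parse error
// from the `Parsed` output they already hold.
struct LinterResult {
    messages: Vec<Message>,
}

// Stand-ins so the sketch is self-contained.
struct ParseError;
struct Message;
```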

## Test Plan

`cargo insta test`
2024-06-27 13:44:11 +02:00
Dhruv Manilawala
549cc1e437
Build CommentRanges outside the parser (#11792)
## Summary

This PR updates the parser to stop building the `CommentRanges`;
instead, it'll be built by the linter and the formatter when required.

For the linter, it'll be built and owned by the `Indexer`, while for the
formatter it'll be built from the `Tokens` struct and passed as an
argument.
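
A minimal sketch of the idea, using invented types in place of the real
`ruff_python_parser` ones:

```rust
#[derive(Clone, Copy, PartialEq)]
enum TokenKind { Comment, Other }

#[derive(Clone, Copy)]
struct TextRange { start: u32, end: u32 }

struct Token { kind: TokenKind, range: TextRange }

/// Collect the ranges of all comment tokens. The linter's `Indexer` and
/// the formatter would each run something like this over the parsed tokens.
fn comment_ranges(tokens: &[Token]) -> Vec<TextRange> {
    tokens
        .iter()
        .filter(|token| token.kind == TokenKind::Comment)
        .map(|token| token.range)
        .collect()
}
```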

## Test Plan

`cargo insta test`
2024-06-09 09:55:17 +00:00
Dhruv Manilawala
bf5b62edac
Maintain synchronicity between the lexer and the parser (#11457)
## Summary

This PR updates the entire parser stack in multiple ways:

### Make the lexer lazy

* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473

Previously, Ruff's lexer would act as an iterator. The parser would
collect all the tokens in a vector first and then process the tokens to
create the syntax tree.

The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
to filter out the trivia tokens[^1]. Now, the parser will ask the token
source to get the next token and only then the lexer will continue and
emit the token. This means that the lexer needs to be aware of the
"current" token. When `next_token` is called, the current token is
updated with the newly lexed token.
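
A simplified sketch of this pull-based setup (all names invented; the
real implementation lives in `ruff_python_parser`):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum TokenKind { Name, Newline, NonLogicalNewline, Comment, EndOfFile }

struct Lexer<'src> {
    source: &'src str,
    offset: usize,
    /// The most recently lexed token.
    current: TokenKind,
}

impl<'src> Lexer<'src> {
    /// Lex one more token and make it the current token.
    fn next_token(&mut self) -> TokenKind {
        self.current = self.lex_one();
        self.current
    }

    fn lex_one(&mut self) -> TokenKind {
        // Real lexing elided: consume the remaining input.
        self.offset = self.source.len();
        TokenKind::EndOfFile
    }
}

/// Wrapper that skips the trivia tokens, so the parser never sees them.
struct TokenSource<'src> {
    lexer: Lexer<'src>,
}

impl<'src> TokenSource<'src> {
    fn next_token(&mut self) -> TokenKind {
        loop {
            match self.lexer.next_token() {
                TokenKind::Comment | TokenKind::NonLogicalNewline => continue,
                token => return token,
            }
        }
    }
}
```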

The main motivation for making the lexer lazy is to allow re-lexing a token
in a different context. This is going to be really useful for making the
parser error resilient. For example, currently the emitted tokens
remain the same even if the parser can recover from an unclosed
parenthesis. This matters because the lexer emits a
`NonLogicalNewline` in a parenthesized context but a normal `Newline` in a
non-parenthesized context. These different kinds of newlines are also used
to emit the indentation tokens, which are important for the parser as
they're used to determine the start and end of a block.
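
The context dependence can be boiled down to a sketch like this (a
simplification; the real lexer tracks more state):

```rust
#[derive(Debug, PartialEq)]
enum NewlineKind { Logical, NonLogical }

/// Inside parentheses a newline doesn't end the statement, so the lexer
/// must emit a different token kind there.
fn newline_kind(paren_depth: u32) -> NewlineKind {
    if paren_depth > 0 {
        NewlineKind::NonLogical
    } else {
        NewlineKind::Logical
    }
}

fn main() {
    assert_eq!(newline_kind(0), NewlineKind::Logical); // ends the statement
    assert_eq!(newline_kind(1), NewlineKind::NonLogical); // inside `(...)`
}
```

If the parser recovers from an unclosed `(`, the nesting depth the lexer
assumed was wrong, which is why being able to re-lex the token matters.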

Additionally, this allows us to implement the following functionalities:
1. Checkpoint-rewind infrastructure: The idea here is to create a
checkpoint and continue lexing. At a later point, this checkpoint can be
used to rewind the lexer back to the provided checkpoint.
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer but contains owned data, which makes it expensive to
clone. The new `TokenKind` enum just represents the type of the token,
which is very cheap.

This brings up the question of how the parser will get the owned value
that was stored on `Tok`. This is solved by introducing a new
`TokenValue` enum which only contains the subset of token kinds that
have an owned value. This is stored on the lexer and is requested by the
parser when it wants to process the data. For example:
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L1260-L1262)

[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`
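
A sketch of that split (variant and method names are assumptions, not
the exact ones):

```rust
/// Cheap, copyable: just the kind of every token.
#[derive(Clone, Copy, Debug, PartialEq)]
enum TokenKind { Name, String, Int, Newline }

/// Owned data for the few token kinds that carry any.
enum TokenValue {
    None,
    Name(Box<str>),
    String(Box<str>),
    Int(i64),
}

struct Lexer {
    /// The value of the current token; lives on the lexer, not the token.
    current_value: TokenValue,
}

impl Lexer {
    /// Called by the parser only when it actually needs the owned data.
    fn take_value(&mut self) -> TokenValue {
        std::mem::replace(&mut self.current_value, TokenValue::None)
    }
}
```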

### Remove `SoftKeywordTransformer`

* https://github.com/astral-sh/ruff/pull/11441
* https://github.com/astral-sh/ruff/pull/11459
* https://github.com/astral-sh/ruff/pull/11442
* https://github.com/astral-sh/ruff/pull/11443
* https://github.com/astral-sh/ruff/pull/11474

For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser which uses infinite
lookahead to classify a soft keyword as a keyword or an identifier. This
is a brilliant idea as it basically wraps the existing Lexer and works
on top of it which means that the logic for lexing and re-lexing a soft
keyword remains separate. The change here is to remove
`SoftKeywordTransformer` and let the parser determine this based on
context, lookahead and speculative parsing.

* **Context:** The transformer needs to know whether the lexer is at a
statement position or a simple-statement position.
This is because a `match` token starts a compound statement while a
`type` token starts a simple statement. **The parser already knows
this.**
* **Lookahead:** Now that the parser knows the context, it can perform
a lookahead of up to two tokens to classify the soft keyword. The logic
for this is mentioned in the PRs implementing it for the `type` and
`match` soft keywords (sketched below for `type`).
* **Speculative parsing:** This is where the checkpoint-rewind
infrastructure helps. For the `match` soft keyword, there are certain
cases that we can't classify based on lookahead. The idea here is to
create a checkpoint and keep parsing. Based on whether the parsing was
successful and what tokens are ahead we can classify the remaining
cases. Refer to #11443 for more details.

If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer to
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L487-L491).

The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.
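
To make the lookahead part concrete, here is a sketch for `type` (a
simplification of the real classification rules):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum TokenKind { Name, Equal, LeftBracket, Newline }

/// At a simple-statement position, treat `type` as a keyword only if the
/// next two tokens look like the start of a type alias statement, i.e.
/// `type X = ...` or `type X[...] = ...`; otherwise it's an identifier.
fn type_is_keyword(lookahead: [TokenKind; 2]) -> bool {
    lookahead[0] == TokenKind::Name
        && matches!(lookahead[1], TokenKind::Equal | TokenKind::LeftBracket)
}

fn main() {
    // `type X = int` -> keyword
    assert!(type_is_keyword([TokenKind::Name, TokenKind::Equal]));
    // `type = 1` -> `type` is an ordinary identifier
    assert!(!type_is_keyword([TokenKind::Equal, TokenKind::Newline]));
}
```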

### Update the parser API

* https://github.com/astral-sh/ruff/pull/11494
* https://github.com/astral-sh/ruff/pull/11505

Now that the lexer is in sync with the parser, and the parser helps to
determine whether a soft keyword is a keyword or an identifier, the
lexer cannot be used on its own. The reason is that it's not
context-sensitive (which is correct). This means that the parser
API needs to be updated to not allow any access to the lexer.

Previously, there were multiple ways to parse the source code:
1. Passing the source code itself
2. Or, passing the tokens

Now that the lexer and parser are working together, the API
corresponding to (2) cannot exist. The final API is described in the
PR description: https://github.com/astral-sh/ruff/pull/11494.
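
The approximate shape of the new entry point (see the PR description for
the exact signatures):

```rust
use ruff_python_parser::parse_module;

fn example(source: &str) {
    let parsed = parse_module(source).expect("source contains a syntax error");
    let _module = parsed.syntax(); // the AST
    let _tokens = parsed.tokens(); // the tokens the parser consumed
}
```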

### Refactor the downstream tools (linter and formatter)

* https://github.com/astral-sh/ruff/pull/11511
* https://github.com/astral-sh/ruff/pull/11515
* https://github.com/astral-sh/ruff/pull/11529
* https://github.com/astral-sh/ruff/pull/11562
* https://github.com/astral-sh/ruff/pull/11592

And, the final set of changes involves updating all references to the
lexer and the `Tok` enum. This was done in two parts:
1. Update all the references in a way that doesn't require any changes
from this PR i.e., it can be done independently
	* https://github.com/astral-sh/ruff/pull/11402
	* https://github.com/astral-sh/ruff/pull/11406
	* https://github.com/astral-sh/ruff/pull/11418
	* https://github.com/astral-sh/ruff/pull/11419
	* https://github.com/astral-sh/ruff/pull/11420
	* https://github.com/astral-sh/ruff/pull/11424
2. Update all the remaining references to use the changes made in this
PR

For (2), there were various strategies used:
1. Introduce a new `Tokens` struct which wraps the token vector and adds
methods to query a certain subset of tokens (see the sketch after this
list). These include:
	1. `up_to_first_unknown`, which replaces the `tokenize` function
	2. `in_range` and `after`, which replace the `lex_starts_at` function,
where the former returns the tokens within the given range while the
latter returns all the tokens after the given offset
2. Introduce a new `TokenFlags` which is a set of flags to query certain
information from a token. Currently, this information is limited to
string-type tokens but can be expanded to include other information
in the future as needed. https://github.com/astral-sh/ruff/pull/11578
3. Move the `CommentRanges` to the parsed output because this
information is common to both the linter and the formatter. This removes
the need for the `tokens_and_ranges` function.
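
A sketch of the `Tokens` queries from item 1 (simplified types; the real
methods handle ranges more carefully):

```rust
#[derive(PartialEq)]
enum TokenKind { Unknown, Other }

struct Token {
    kind: TokenKind,
    start: u32,
}

struct Tokens(Vec<Token>);

impl Tokens {
    /// Tokens up to and including the first `Unknown` token, mirroring
    /// what the old `tokenize` function returned.
    fn up_to_first_unknown(&self) -> &[Token] {
        match self.0.iter().position(|t| t.kind == TokenKind::Unknown) {
            Some(index) => &self.0[..=index],
            None => &self.0[..],
        }
    }

    /// All tokens starting at or after `offset`, replacing `lex_starts_at`.
    fn after(&self, offset: u32) -> &[Token] {
        let start = self.0.partition_point(|t| t.start < offset);
        &self.0[start..]
    }
}
```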

## Test Plan

- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects

### Running this change on dozens of open-source projects

Refer to the PR description to get the list of open source projects used
for testing.

Now, the following tests were done between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`

**Conclusion: all outputs were the same.**

## What's next?

The next step is to introduce re-lexing logic and update the parser to
feed the recovery information to the lexer so that it can emit the
correct token. This moves us one step closer to having error resilience
in the parser and gives Ruff the ability to lint even if the
source code contains syntax errors.
2024-06-03 18:23:50 +05:30
Micha Reiser
921bc15542
use owned ast and tokens in bench (#11598) 2024-05-29 18:10:32 +02:00
Dhruv Manilawala
50f14d017e
Use tokenize for linter benchmark (#11417)
## Summary

This PR updates the linter benchmark to use the `tokenize` function
instead of the lexer.

The linter expects the token list to be up to and including the first
error, which is what the `ruff_python_parser::tokenize` function returns.

This was not a problem before because the benchmarks only use valid
Python code.
2024-05-14 10:28:40 -04:00
Charlie Marsh
af60d539ab
Move sub-crates to workspace dependencies (#11407)
## Summary

This matches the setup we use in `uv` and allows for consistency in the
`Cargo.toml` files.
2024-05-13 14:37:50 +00:00
Micha Reiser
341c2698a7
Run doctests as part of CI pipeline (#9939) 2024-02-12 10:18:58 +01:00
Micha Reiser
47ad7b4500
Approximate tokens len (#9546) 2024-01-19 17:39:37 +01:00
Charlie Marsh
b983ab1c6c
Update contributing docs to use cargo bench -p ruff_benchmark (#9535)
## Summary

I found that `cargo benchmark lexer` didn't work as expected:

```shell
❯ cargo benchmark lexer
    Finished bench [optimized] target(s) in 0.08s
     Running benches/formatter.rs (target/release/deps/formatter-4e1d9bf9d3ba529d)
     Running benches/linter.rs (target/release/deps/linter-e449086ddfd8ad8c)
```

Turns out that `cargo bench -p ruff_benchmark` is now recommended over
`cargo benchmark`, so updating the docs to reflect that.
2024-01-15 14:57:30 -05:00
Micha Reiser
f192c72596
Remove type parameter from parse_* methods (#9466) 2024-01-11 19:41:19 +01:00
Charlie Marsh
9073220887
Make all dependencies workspace dependencies (#9333)
## Summary

This PR modifies our `Cargo.toml` files to use workspace dependencies
for _all_ dependencies, rather than the status quo of sporadically
trying to use workspace dependencies for those dependencies that are
used across multiple crates. I find the current situation more confusing
and harder to manage, since we have a mix of workspace and crate-local
dependencies, whereas this setup consistently uses the same approach for
all dependencies.
2024-01-02 13:41:59 +00:00
Charlie Marsh
e80260a3c5
Remove source path from parser errors (#9322)
## Summary

I always found it odd that we had to pass this in, since it's really
higher-level context for the error. The awkwardness is further evidenced
by the fact that we pass in fake values everywhere (even outside of
tests). The source path isn't actually used to display the error; it's
only accessed elsewhere to _re-display_ the error in certain cases. This
PR modifies the code to pass the path directly in those cases instead.
2023-12-30 20:33:05 +00:00
Charlie Marsh
9d6444138b
Remove lexing and parsing from the linter benchmark (#9264)
## Summary

This PR adds some helper structs to the linter paths to enable passing
in the pre-computed tokens and parsed source code during benchmarking,
to remove lexing and parsing from the overall linter benchmark
measurement. We already remove parsing for the formatter, and we have
separate benchmarks for the lexer and the parser, so this should make it
much easier to measure linter performance changes.
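
The gist with criterion, sketched (the `parse`/`lint` stand-ins are
placeholders, not the real ruff helpers):

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_linter(c: &mut Criterion) {
    let source = std::fs::read_to_string("benchmark.py").unwrap();
    // Done once, outside the measured closure: not part of the timing.
    let parsed = parse(&source);
    c.bench_function("linter", |b| {
        // Only the lint pass itself is measured.
        b.iter(|| lint(&source, &parsed));
    });
}

// Placeholders so the sketch is self-contained.
struct Parsed;
fn parse(_source: &str) -> Parsed { Parsed }
fn lint(_source: &str, _parsed: &Parsed) -> usize { 0 }

criterion_group!(benches, bench_linter);
criterion_main!(benches);
```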
2023-12-23 16:43:11 -05:00
dependabot[bot]
f5d4676c13
Bump ureq from 2.8.0 to 2.9.1 (#8993) 2023-12-04 09:53:25 -06:00
dependabot[bot]
df69dc9f8d
Bump url from 2.4.1 to 2.5.0 (#8994) 2023-12-04 10:16:39 -05:00
Micha Reiser
506be68782
Enable Preview mode for formatter benchmarks (#8944) 2023-12-01 10:02:59 +00:00
Micha Reiser
2314b9aaca
Add benchmark for running all rules including preview rules (#8865) 2023-11-29 04:26:25 +00:00
dependabot[bot]
04f0625d23
Bump codspeed-criterion-compat from 2.3.1 to 2.3.3 (#8784) 2023-11-20 11:10:48 +00:00
konsti
14e65afdc6
Update to Rust 1.74 and use new clippy lints table (#8722)
Update to [Rust
1.74](https://blog.rust-lang.org/2023/11/16/Rust-1.74.0.html) and use
the new clippy lints table.

The update itself introduced a new clippy lint about superfluous hashes
in raw strings; the offending hashes got removed.

I moved our lint config from `rustflags` to the newly stabilized
[workspace.lints](https://doc.rust-lang.org/stable/cargo/reference/workspaces.html#the-lints-table).
One consequence is that we have to use `unsafe_code = "warn"` instead of
`"forbid"` because the latter now actually bans unsafe code:

```
error[E0453]: allow(unsafe_code) incompatible with previous forbid
  --> crates/ruff_source_file/src/newlines.rs:62:17
   |
62 |         #[allow(unsafe_code)]
   |                 ^^^^^^^^^^^ overruled by previous forbid
   |
   = note: `forbid` lint level was set on command line
```
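
A self-contained illustration of the difference (`forbid` refuses any
later `allow`, while `warn` permits it):

```rust
#![warn(unsafe_code)] // with forbid(unsafe_code), the allow below is E0453

#[allow(unsafe_code)]
fn first_byte(bytes: &[u8]) -> u8 {
    // SAFETY: illustrative only; the caller passes a non-empty slice.
    unsafe { *bytes.get_unchecked(0) }
}

fn main() {
    assert_eq!(first_byte(b"abc"), b'a');
}
```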

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-11-16 18:12:46 -05:00
dependabot[bot]
82c3c513d2
Bump codspeed-criterion-compat from 2.3.0 to 2.3.1 (#8508) 2023-11-06 14:32:40 +00:00
dependabot[bot]
8cd09c88d3
Bump codspeed-criterion-compat from 2.2.0 to 2.3.0 (#8128)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-23 10:04:19 +01:00
dependabot[bot]
7d7e0824af
Bump ureq from 2.7.1 to 2.8.0 (#7756) 2023-10-02 09:49:59 -04:00
Dhruv Manilawala
e62e245c61
Add support for PEP 701 (#7376)
## Summary

This PR adds support for PEP 701 in Ruff. This is a rollup PR of all the
other individual PRs. The separate PRs were created for logic separation
and code reviews. Refer to each pull request for a detailed description of
the change.

Refer to the PR description for the list of pull requests within this PR.

## Test Plan

### Formatter ecosystem checks

Explanation for the change in ecosystem check:
https://github.com/astral-sh/ruff/pull/7597#issue-1908878183

#### `main`

```
| project      | similarity index  | total files       | changed files     |
|--------------|------------------:|------------------:|------------------:|
| cpython      |           0.76083 |              1789 |              1631 |
| django       |           0.99983 |              2760 |                36 |
| transformers |           0.99963 |              2587 |               319 |
| twine        |           1.00000 |                33 |                 0 |
| typeshed     |           0.99983 |              3496 |                18 |
| warehouse    |           0.99967 |               648 |                15 |
| zulip        |           0.99972 |              1437 |                21 |
```

#### `dhruv/pep-701`

```
| project      | similarity index  | total files       | changed files     |
|--------------|------------------:|------------------:|------------------:|
| cpython      |           0.76051 |              1789 |              1632 |
| django       |           0.99983 |              2760 |                36 |
| transformers |           0.99963 |              2587 |               319 |
| twine        |           1.00000 |                33 |                 0 |
| typeshed     |           0.99983 |              3496 |                18 |
| warehouse    |           0.99967 |               648 |                15 |
| zulip        |           0.99972 |              1437 |                21 |
```
2023-09-29 02:55:39 +00:00
konsti
4d16e2308d
Formatter and parser refactoring (#7569)
I got confused and refactored a bit; now the naming should be more
consistent. This is the basis for the range formatting work.

Changes:
* `format_module` -> `format_module_source` (format a string)
* `format_node` -> `format_module_ast` (format a program parsed into an
AST)
* Added `parse_ok_tokens` that takes `Token` instead of `Result<Token>`
* Call the source code `source` consistently
* Added a `tokens_and_ranges` helper
* `python_ast` -> `module` (because that's the type)
2023-09-26 15:29:43 +02:00
Micha Reiser
b34278e0cd
Introduce LinterSettings
## Stack Summary

This stack splits `Settings` into `FormatterSettings` and `LinterSettings` and moves it into `ruff_workspace`. This change is necessary to add the `FormatterSettings` to `Settings` without adding `ruff_python_formatter` as a dependency to `ruff_linter` (and the linter should not contain the formatter settings). 

A quick overview of our settings struct at play:

* `Options`: 1:1 representation of the options in the `pyproject.toml` or `ruff.toml`.  Used for deserialization.
* `Configuration`: Resolved `Options`, potentially merged from multiple configurations (when using `extend`). The representation is very close if not identical to the `Options`.
* `Settings`: The resolved configuration that uses a data format optimized for reading. Optional fields are initialized with their default values. Initialized by `Configuration::into_settings`.
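
A sketch of those three layers (field names invented to keep it short):

```rust
/// 1:1 with `pyproject.toml` / `ruff.toml`; everything optional.
struct Options {
    line_length: Option<usize>,
}

/// Resolved `Options`, possibly merged from several configs via `extend`.
struct Configuration {
    line_length: Option<usize>,
}

/// Read-optimized: defaults are filled in, nothing is optional.
struct LinterSettings {
    line_length: usize,
}

impl Configuration {
    fn into_settings(self) -> LinterSettings {
        LinterSettings {
            line_length: self.line_length.unwrap_or(88),
        }
    }
}
```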

The goal of this stack is to split `Settings` into tool-specific resolved `Settings` that are independent of each other. This has the advantage that the individual crates don't need to know anything about the other tools. The downside is that information gets duplicated between `Settings`. Right now the duplication is minimal (`line-length`, `tab-width`) but we may need to come up with a solution if more expensive data needs sharing.

This stack focuses on `Settings`. Splitting `Configuration` into some smaller structs is something I'll follow up on later. 

## PR Summary

This PR extracts the linter-specific settings into a new `LinterSettings` struct and adds it as a `linter` field to the `Settings` struct. This is in preparation for moving `Settings` from `ruff_linter` to `ruff_workspace`.

## Test Plan

`cargo test`
2023-09-20 17:02:34 +02:00
Charlie Marsh
5849a75223
Rename ruff crate to ruff_linter (#7529) 2023-09-20 08:38:27 +02:00
dependabot[bot]
4e2769a16c
Bump mimalloc from 0.1.38 to 0.1.39 (#7402)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-15 08:46:55 +00:00
Zanie Blue
24b848a4ea
Enable preview mode during benchmarks (#7208)
Split out of https://github.com/astral-sh/ruff/pull/7195 so benchmark
changes from enabling additional rules can be reviewed separately.
2023-09-11 14:09:33 -05:00
Zanie Blue
6566d00295
Update rule selection to respect preview mode (#7195)
## Summary

Extends work in #7046 (some relevant discussion there)

Changes:

- All nursery rules are now referred to as preview rules
- Documentation for the nursery is updated to describe preview
- Adds a "PREVIEW" selector for preview rules
  - This is primarily to allow `--preview --ignore PREVIEW --extend-select
FOO001,BAR200`
- Using `--preview` enables preview rules that match selectors

Notable decisions:

- Preview rules are not selectable by their rule code without enabling
preview (see the sketch below)
- Retains the "NURSERY" selector for backwards compatibility
- Nursery rules are selectable by their rule code for backwards
compatibility

Additional work:

- Selection of preview rules without the "--preview" flag should display
a warning
- Use of deprecated nursery selection behavior should display a warning
- Nursery selection should be removed after some time
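
The selection rule that falls out of these decisions, sketched with
invented types:

```rust
struct Rule {
    preview: bool,
}

/// A selected preview rule only runs when preview mode is enabled,
/// whether it was selected by its own code or by a selector.
fn is_enabled(rule: &Rule, selected: bool, preview_mode: bool) -> bool {
    selected && (!rule.preview || preview_mode)
}

fn main() {
    let preview_rule = Rule { preview: true };
    assert!(!is_enabled(&preview_rule, true, false)); // needs --preview
    assert!(is_enabled(&preview_rule, true, true));
}
```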

## Test Plan

Manual confirmation (i.e., we don't have any preview rules yet, just
nursery rules, so I added a preview rule for manual testing)

New unit tests

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-09-11 12:28:39 -05:00
Micha Reiser
9cb5ce750e
Remove IO based lints from linter benchmark (#7240) 2023-09-08 13:39:50 +02:00
Micha Reiser
d9544a2d37
Disable default criterion features (#7241) 2023-09-08 10:03:14 +00:00
Dhruv Manilawala
154fe7bdcc
Add lexer benchmark (#7132) 2023-09-04 13:18:36 +00:00
Micha Reiser
edfd888bd6
Add unicode benchmark (#7002) 2023-08-30 09:57:57 +02:00
Micha Reiser
ed1b4122d0
Use Codspeed for continuous benchmarking (#6896) 2023-08-26 16:34:35 +02:00
Micha Reiser
34b2ae73b4
Extend BestFitting with mode (#6814) 2023-08-23 17:23:45 +02:00
Micha Reiser
ea72d5feba
Refactor SourceKind to store file content (#6640) 2023-08-18 13:45:38 +00:00
Charlie Marsh
7f7df852e8
Remove some extraneous newlines in Cargo.toml (#6577) 2023-08-14 23:39:41 +00:00
Micha Reiser
51ae47ad56
Remove lex and parsing from formatter benchmark (#6547) 2023-08-14 10:25:37 +02:00
Dhruv Manilawala
32fa05765a
Use Jupyter mode while parsing Notebook files (#5552)
## Summary

Enable using the new `Mode::Jupyter` for the tokenizer/parser to parse
Jupyter line magic tokens.

The individual calls to the lexer, i.e., `lex_starts_at`, done by various
rules should consider the context of the source code (is this content
from a Jupyter Notebook?). Thus, a new field `source_type` (of type
`PySourceType`) is added to `Checker`, which is passed around as an
argument to the relevant functions. This is then used to determine the
`Mode` for the lexer.
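
Roughly (variant names approximate the real `PySourceType` and `Mode`
enums):

```rust
#[derive(Clone, Copy)]
enum PySourceType { Python, Stub, Ipynb }

#[derive(Clone, Copy)]
enum Mode { Module, Jupyter }

/// Pick the lexer/parser mode from the source type carried by `Checker`.
fn parser_mode(source_type: PySourceType) -> Mode {
    match source_type {
        PySourceType::Ipynb => Mode::Jupyter,
        PySourceType::Python | PySourceType::Stub => Mode::Module,
    }
}
```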

## Test Plan

Add new test cases to make sure that the magic statement is considered
while generating the diagnostic and autofix:
* For `I001`, if there's a magic statement in between two import blocks,
they should be sorted independently

fixes: #6090
2023-08-05 00:32:07 +00:00
konsti
1031bb6550
Formatter: Add SourceType to context to enable special formatting for stub files (#6331)
**Summary** This adds information about whether we're in a `.py` Python
source file or a `.pyi` stub file, to enable people working on #5822 and
related issues.

I'm not completely happy with `Default` for something that depends on
the input.

**Test Plan** None; this is currently unused. I'm leaving this to the first
implementation of stub-file-specific formatting.

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2023-08-04 11:52:26 +00:00
Micha Reiser
debfca3a11
Remove Parse trait (#6235) 2023-08-01 18:35:03 +02:00
Micha Reiser
40f54375cb
Pull in RustPython parser (#6099) 2023-07-27 09:29:11 +00:00
Micha Reiser
2cf00fee96
Remove parser dependency from ruff-python-ast (#6096) 2023-07-26 17:47:22 +02:00
Dhruv Manilawala
7e6b472c5b
Make lint_only aware of the source kind (#5876) 2023-07-19 09:29:35 +05:30