Commit graph

195 commits

Author SHA1 Message Date
K
47653ca88a
[formatter] Fix missing blank lines before decorated classes in .pyi files (#18888)
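An illustrative stub (hedged: reconstructed from the commit title, not the PR diff), assuming the fix restores the blank line before a decorated class in stub files:

```pyi
from typing import final

def helper() -> None: ...

@final
class Config: ...  # the blank line above was previously dropped in .pyi files
```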
Co-authored-by: Micha Reiser <micha@reiser.io>
2025-06-24 16:25:44 +02:00
Ibraheem Ahmed
c9dff5c7d5
[ty] AST garbage collection (#18482)
## Summary

Garbage collect ASTs once we are done checking a given file. Queries
with a cross-file dependency on the AST will reparse the file on demand.
This reduces ty's peak memory usage by ~20-30%.

The primary change of this PR is adding a `node_index` field to every
AST node, that is assigned by the parser. `ParsedModule` can use this to
create a flat index of AST nodes any time the file is parsed (or
reparsed). This allows `AstNodeRef` to simply index into the current
instance of the `ParsedModule`, instead of storing a pointer directly.

The indices are somewhat hackily (using an atomic integer) assigned by
the `parsed_module` query instead of by the parser directly. Assigning
the indices in source-order in the (recursive) parser turns out to be
difficult, and collecting the nodes during semantic indexing is
impossible as `SemanticIndex` does not hold onto a specific
`ParsedModuleRef`, which the pointers in the flat AST are tied to. This
means that we have to do an extra AST traversal to assign and collect
the nodes into a flat index, but the small performance impact (~3% on
cold runs) seems worth it for the memory savings.

Part of https://github.com/astral-sh/ty/issues/214.
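A minimal Python sketch of the flat-index idea using the standard `ast` module (Ruff's implementation is in Rust; `index_nodes` and `NodeRef` are hypothetical names): each node gets a deterministic index, so a reference can survive the AST being dropped and reparsed.

```python
import ast

def index_nodes(module: ast.Module) -> list[ast.AST]:
    """Walk the AST once, assigning each node a deterministic index."""
    nodes: list[ast.AST] = []
    for node in ast.walk(module):
        node.node_index = len(nodes)  # mirrors the new `node_index` field
        nodes.append(node)
    return nodes

class NodeRef:
    """Refers to a node by index instead of by pointer, so the AST can be
    garbage collected and reparsed without invalidating the reference."""

    def __init__(self, index: int) -> None:
        self.index = index

    def resolve(self, nodes: list[ast.AST]) -> ast.AST:
        return nodes[self.index]

source = "x = 1\ny = 2\n"
ref = NodeRef(index_nodes(ast.parse(source))[1].node_index)
# ... the parsed AST may be dropped here to reclaim memory ...
nodes = index_nodes(ast.parse(source))  # reparse on demand
assert isinstance(ref.resolve(nodes), ast.Assign)
```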
2025-06-13 08:40:11 -04:00
Dylan
9bbf4987e8
Implement template strings (#17851)
This PR implements template strings (t-strings) in the parser and
formatter for Ruff.

Minimal changes necessary to compile were made in other parts of the code (e.g. ty, the linter, etc.). These will be covered properly in follow-up PRs.
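For illustration, the new syntax (template strings as proposed in PEP 750; hedged, since this PR covers only parsing and formatting):

```python
name = "World"
greeting = t"Hello {name}!"  # requires Python 3.14+; evaluates to a
                             # template object rather than a plain str
```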
2025-05-30 15:00:56 -05:00
Micha Reiser
9ae698fe30
Switch to Rust 2024 edition (#18129) 2025-05-16 13:25:28 +02:00
Micha Reiser
196e4befba
Update MSRV to 1.85 and toolchain to 1.87 (#18126) 2025-05-16 09:19:55 +02:00
Micha Reiser
fa628018b2
Use #[expect(lint)] over #[allow(lint)] where possible (#17822) 2025-05-03 21:20:31 +02:00
Alex Waygood
25920fe489
Rename ExprStringLiteral::as_unconcatenated_string() to ExprStringLiteral::as_single_part_string() (#16253) 2025-02-19 16:06:57 +00:00
Brent Westbrook
a9efdea113
Use ast::PythonVersion internally in the formatter and linter (#16170)
## Summary

This PR updates the formatter and linter to use the `PythonVersion`
struct from the `ruff_python_ast` crate internally. While this doesn't
remove the need for the `linter::PythonVersion` enum, it does remove the
`formatter::PythonVersion` enum, limits the linter's use to deserializing
from CLI arguments and config files, and moves most of the remaining
methods to the `ast::PythonVersion` struct.

## Test Plan

Existing tests, with some inputs and outputs updated to reflect the new
(de)serialization format. I think these are test-specific and shouldn't
affect any external (de)serialization.

---------

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
2025-02-18 12:03:13 -05:00
Douglas Creager
98ef564170
Remove AstNode and AnyNode (#15479)
While looking into potential AST optimizations, I noticed the `AstNode`
trait and `AnyNode` type aren't used anywhere in Ruff or Red Knot. It
looks like they might be historical artifacts of previous ways of
consuming AST nodes?

- `AstNode::cast`, `AstNode::cast_ref`, and `AstNode::can_cast` are not
used anywhere.
- Since `cast_ref` isn't needed anymore, the `Ref` associated type isn't
either.

This is a pure refactoring, with no intended behavior changes.
2025-01-17 17:11:00 -05:00
Micha Reiser
424b720c19
Ruff 2025 style guide (#13906)
Closes #13371
2025-01-09 10:20:06 +01:00
Micha Reiser
b63c2e126b
Upgrade Rust toolchain to 1.83 (#14677) 2024-11-29 12:05:05 +00:00
Dhruv Manilawala
f3dac27e9a
Fix f-string formatting in assignment statement (#14454)
## Summary

fixes: #13813

This PR fixes a bug in formatting an assignment statement whose value is
an f-string.

This is resolved by using custom best-fit layouts if the f-string is (a)
not already a flat f-string (a flat f-string cannot become multiline) and
(b) not a multiline string (a multiline string cannot be flattened). So,
the custom layout is used in cases like the following:
```py
aaaaaaaaaaaaaaaaaa = f"testeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee{
    expression}moreeeeeeeeeeeeeeeee"
```
This example satisfies both conditions: (a) its layout is `FStringLayout::Multiline`, and (b) it is not a multiline string.

There are various other examples in the PR diff along with additional
explanation and context as code comments.

## Test Plan

Add multiple test cases for various scenarios.
2024-11-26 15:07:18 +05:30
Micha Reiser
443fd3b660
Disallow single-line implicit concatenated strings (#13928)
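For context, an implicitly concatenated string joins adjacent literals into one value; this change concerns how the formatter treats such concatenations when they are written on a single line (illustrative snippet, not taken from the PR):

```python
# Two adjacent literals form a single implicitly concatenated string:
message = "implicit " "concatenation"
assert message == "implicit concatenation"
```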
2024-11-03 11:49:26 +00:00
Micha Reiser
cf0f5e1318
Fix formatting of single with-item with trailing comment (#14005) 2024-11-01 09:08:06 +01:00
Micha Reiser
73ee72b665
Join implicit concatenated strings when they fit on a line (#13663) 2024-10-24 11:52:22 +02:00
Micha Reiser
fc661e193a
Normalize implicit concatenated f-string quotes per part (#13539) 2024-10-08 09:59:17 +00:00
Micha Reiser
531ebf6dff
Fix parentheses around return type annotations (#13381) 2024-09-20 09:23:53 +02:00
Micha Reiser
138e70bd5c
Upgrade to Rust 1.80 (#12586) 2024-07-30 19:18:08 +00:00
konsti
9a817a2922
Insert empty line between suite and alternative branch after def/class (#12294)
When there is a function or class definition at the end of a suite
followed by the beginning of an alternative block, we have to insert a
single empty line between them.

In the if-else-statement example below, we insert an empty line after
the `foo` in the if-block, but none after the else-block `foo`, since in
the latter case the enclosing suite already adds empty lines.

```python
if sys.version_info >= (3, 10):
    def foo():
        return "new"
else:
    def foo():
        return "old"
class Bar:
    pass
```
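
Based on the description above, the formatted output would look roughly like this (the single empty line before `else:` is the new insertion; the blank lines before `class Bar` come from the regular top-level rules):

```python
if sys.version_info >= (3, 10):
    def foo():
        return "new"

else:
    def foo():
        return "old"


class Bar:
    pass
```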

To do so, we track whether the current suite is the last one in the
current statement with a new option on the suite kind.

Fixes #12199

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2024-07-15 12:59:33 +02:00
Dhruv Manilawala
549cc1e437
Build CommentRanges outside the parser (#11792)
## Summary

This PR updates the parser so that it no longer builds the
`CommentRanges`; instead, the linter and the formatter build it when
required.

For the linter, it'll be built and owned by the `Indexer`, while for the
formatter it'll be built from the `Tokens` struct and passed as an
argument.

## Test Plan

`cargo insta test`
2024-06-09 09:55:17 +00:00
Dhruv Manilawala
bf5b62edac
Maintain synchronicity between the lexer and the parser (#11457)
## Summary

This PR updates the entire parser stack in multiple ways:

### Make the lexer lazy

* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473

Previously, Ruff's lexer would act as an iterator. The parser would
collect all the tokens in a vector first and then process the tokens to
create the syntax tree.

The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
to filter out the trivia tokens[^1]. Now, the parser will ask the token
source to get the next token and only then the lexer will continue and
emit the token. This means that the lexer needs to be aware of the
"current" token. When the `next_token` is called, the current token will
be updated with the newly lexed token.

The main motivation to make the lexer lazy is to allow re-lexing a token
in a different context. This is going to be really useful for making the
parser error-resilient. For example, currently the emitted tokens remain
the same even if the parser can recover from an unclosed parenthesis.
This matters because the lexer emits a `NonLogicalNewline` in a
parenthesized context but a normal `Newline` in a non-parenthesized
context. These different kinds of newlines are also used to emit the
indentation tokens, which the parser needs to determine the start and
end of a block.

Additionally, this allows us to implement the following functionalities:
1. Checkpoint - rewind infrastructure: The idea here is to create a
checkpoint and continue lexing. At a later point, this checkpoint can be
used to rewind the lexer back to the provided checkpoint (see the sketch
at the end of this section).
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer but it contains owned data which makes it expensive to
clone. The new `TokenKind` enum just represents the type of token which
is very cheap.

This raises the question of how the parser will get the owned value that
was stored on `Tok`. This is solved by introducing a new `TokenValue`
enum that contains only the subset of token kinds that carry an owned
value. It is stored on the lexer and requested by the parser when it
wants to process the data. For example:
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L1260-L1262)

[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`
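
A Python sketch of the lazy-lexing and checkpoint-rewind ideas described above (Ruff's implementation is in Rust; `SimpleLexer` and its token pattern are hypothetical stand-ins):

```python
import re
from dataclasses import dataclass

TOKEN = re.compile(r"\s*(\d+|[A-Za-z_]\w*|\S)")

@dataclass
class Checkpoint:
    offset: int
    current: str | None

class SimpleLexer:
    """Lexes one token at a time, on demand, and tracks the current token."""

    def __init__(self, source: str) -> None:
        self.source = source
        self.offset = 0
        self.current: str | None = None

    def next_token(self) -> str | None:
        m = TOKEN.match(self.source, self.offset)
        if m is None:
            self.current = None
        else:
            self.offset = m.end()
            self.current = m.group(1)
        return self.current

    def checkpoint(self) -> Checkpoint:
        return Checkpoint(self.offset, self.current)

    def rewind(self, cp: Checkpoint) -> None:
        self.offset, self.current = cp.offset, cp.current

lexer = SimpleLexer("match x:")
assert lexer.next_token() == "match"
cp = lexer.checkpoint()          # speculative parsing starts here
assert lexer.next_token() == "x"
lexer.rewind(cp)                 # ...and can rewind if it fails
assert lexer.next_token() == "x"
```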

### Remove `SoftKeywordTransformer`

* https://github.com/astral-sh/ruff/pull/11441
* https://github.com/astral-sh/ruff/pull/11459
* https://github.com/astral-sh/ruff/pull/11442
* https://github.com/astral-sh/ruff/pull/11443
* https://github.com/astral-sh/ruff/pull/11474

For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser which uses infinite
lookahead to classify a soft keyword as a keyword or an identifier. This
is a brilliant idea as it basically wraps the existing Lexer and works
on top of it which means that the logic for lexing and re-lexing a soft
keyword remains separate. The change here is to remove
`SoftKeywordTransformer` and let the parser determine this based on
context, lookahead and speculative parsing.

* **Context:** The transformer needs to know the position of the lexer
between it being at a statement position or a simple statement position.
This is because a `match` token starts a compound statement while a
`type` token starts a simple statement. **The parser already knows
this.**
* **Lookahead:** Now that the parser knows the context, it can perform
lookahead of up to two tokens to classify the soft keyword. The logic
for this is described in the PRs implementing it for the `type` and
`match` soft keywords.
* **Speculative parsing:** This is where the checkpoint - rewind
infrastructure helps. For `match` soft keyword, there are certain cases
for which we can't classify based on lookahead. The idea here is to
create a checkpoint and keep parsing. Based on whether the parsing was
successful and what tokens are ahead we can classify the remaining
cases. Refer to #11443 for more details.

If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L487-L491).

The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.
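
The ambiguity being classified, in Python terms: `match` is only a keyword when it starts a match statement, which can require looking past a full expression to the trailing colon (illustrative snippet):

```python
match = 1          # simple statement position: `match` is an identifier
print(match + 1)

match (1, 2):      # `match (1, 2)` could be a call; only the trailing `:`
    case (x, y):   # reveals that `match` is a keyword here
        print(x, y)
```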

### Update the parser API

* https://github.com/astral-sh/ruff/pull/11494
* https://github.com/astral-sh/ruff/pull/11505

Now that the lexer is in sync with the parser, and the parser helps to
determine whether a soft keyword is a keyword or an identifier, the
lexer can no longer be used on its own, because it isn't sensitive to the
context (which is correct). This means that the parser API needs to be
updated to disallow any access to the lexer.

Previously, there were multiple ways to parse the source code:
1. Passing the source code itself
2. Or, passing the tokens

Now that the lexer and parser work together, the API corresponding to
(2) can no longer exist. The final API is described in this PR:
https://github.com/astral-sh/ruff/pull/11494.

### Refactor the downstream tools (linter and formatter)

* https://github.com/astral-sh/ruff/pull/11511
* https://github.com/astral-sh/ruff/pull/11515
* https://github.com/astral-sh/ruff/pull/11529
* https://github.com/astral-sh/ruff/pull/11562
* https://github.com/astral-sh/ruff/pull/11592

And, the final set of changes involves updating all references of the
lexer and `Tok` enum. This was done in two-parts:
1. Update all the references in a way that doesn't require any changes
from this PR i.e., it can be done independently
	* https://github.com/astral-sh/ruff/pull/11402
	* https://github.com/astral-sh/ruff/pull/11406
	* https://github.com/astral-sh/ruff/pull/11418
	* https://github.com/astral-sh/ruff/pull/11419
	* https://github.com/astral-sh/ruff/pull/11420
	* https://github.com/astral-sh/ruff/pull/11424
2. Update all the remaining references to use the changes made in this
PR

For (2), various strategies were used:
1. Introduce a new `Tokens` struct which wraps the token vector and adds
methods to query a certain subset of tokens (see the sketch after this
list). These include:
	1. `up_to_first_unknown`, which replaces the `tokenize` function
	2. `in_range` and `after`, which replace the `lex_starts_at` function:
the former returns the tokens within the given range, while the latter
returns all the tokens after the given offset
2. Introduce a new `TokenFlags`, a set of flags to query certain
information from a token. Currently, this information is limited to
string-type tokens but can be expanded to include other information in
the future as needed. https://github.com/astral-sh/ruff/pull/11578
3. Move the `CommentRanges` to the parsed output, because this
information is common to both the linter and the formatter. This removes
the need for the `tokens_and_ranges` function.
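
A Python sketch of the `in_range`/`after` query API described in (1) (hypothetical `Token`/`Tokens` types; Ruff's are Rust structs over byte offsets):

```python
import bisect
from dataclasses import dataclass

@dataclass
class Token:
    kind: str
    start: int  # byte offset
    end: int

class Tokens:
    def __init__(self, tokens: list[Token]) -> None:
        self._tokens = tokens
        self._starts = [t.start for t in tokens]

    def after(self, offset: int) -> list[Token]:
        """All tokens that start at or after the given offset."""
        return self._tokens[bisect.bisect_left(self._starts, offset):]

    def in_range(self, start: int, end: int) -> list[Token]:
        """Tokens contained within [start, end)."""
        lo = bisect.bisect_left(self._starts, start)
        hi = bisect.bisect_left(self._starts, end)
        return self._tokens[lo:hi]

tokens = Tokens([Token("name", 0, 1), Token("op", 2, 3), Token("number", 4, 5)])
assert [t.kind for t in tokens.after(2)] == ["op", "number"]
assert [t.kind for t in tokens.in_range(0, 4)] == ["name", "op"]
```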

## Test Plan

- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects

### Running this change on dozens of open-source projects

Refer to the PR description to get the list of open source projects used
for testing.

Now, the following tests were done between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`

**Conclusion: all outputs were the same.**

## What's next?

The next step is to introduce re-lexing logic and update the parser to
feed the recovery information to the lexer so that it can emit the
correct token. This moves us one step closer to having error resilience
in the parser and provides Ruff the possibility to lint even if the
source code contains syntax errors.
2024-06-03 18:23:50 +05:30
Micha Reiser
9b6d2ce1f2
Fix incorrect placement of trailing stub function comments (#11632) 2024-05-31 12:06:17 +00:00
Micha Reiser
6a1e555537
Upgrade to Rust 1.78 (#11260) 2024-05-03 12:46:21 +00:00
Carl Meyer
845ba7cf5f
Make ImportFrom level just a u32 (#11170) 2024-04-26 20:38:35 -06:00
Carl Meyer
c80b9a4a90
Reduce size of Stmt from 144 to 120 bytes (#11051)
## Summary

I happened to notice that we box `TypeParams` on `StmtClassDef` but not
on `StmtFunctionDef` and wondered why, since `StmtFunctionDef` is bigger
and sets the size of `Stmt`.

@charliermarsh found that at the time we started boxing type params on
classes, classes were the largest statement type (see #6275), but that's
no longer true.

So boxing type-params also on functions reduces the overall size of
`Stmt`.

## Test Plan

The `<=` size tests are a bit irritating (since their failure doesn't
tell you the actual size), but I manually confirmed that the size is
actually 120 now.
2024-04-19 17:02:17 -06:00
Micha Reiser
b64f2ea401
Formatter: Improve single-with item formatting for Python 3.8 or older (#10276)
## Summary

This PR changes how we format `with` statements with a single with item
for Python 3.8 or older. This change is not compatible with Black.

This is how we format a single-item with statement today 

```python
def run(data_path, model_uri):
    with pyspark.sql.SparkSession.builder.config(
        key="spark.python.worker.reuse", value=True
    ).config(key="spark.ui.enabled", value=False).master(
        "local-cluster[2, 1, 1024]"
    ).getOrCreate():
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```

This is different than how we would format the same expression if it is
inside any other clause header (`while`, `if`, ...):

```python
def run(data_path, model_uri):
    while (
        pyspark.sql.SparkSession.builder.config(
            key="spark.python.worker.reuse", value=True
        )
        .config(key="spark.ui.enabled", value=False)
        .master("local-cluster[2, 1, 1024]")
        .getOrCreate()
    ):
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))

```

Which seems inconsistent to me. 

This PR changes the formatting of the single-item with Python 3.8 or
older to match that of other clause headers.

```python
def run(data_path, model_uri):
    with (
        pyspark.sql.SparkSession.builder.config(
            key="spark.python.worker.reuse", value=True
        )
        .config(key="spark.ui.enabled", value=False)
        .master("local-cluster[2, 1, 1024]")
        .getOrCreate()
    ):
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```

According to our versioning policy, this style change is gated behind a
preview flag.

## Test Plan

See added tests.

2024-03-08 23:56:02 +00:00
Micha Reiser
4bce801065
Fix unstable with-items formatting (#10274)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/10267

The issue with the current formatting is that the formatter flips
between `SingleParenthesizedContextManager` and
`ParenthesizeIfExpands` or `SingleWithTarget`, because the layouts use
incompatible formatting (`SingleParenthesizedContextManager`:
`maybe_parenthesize_expression(context)` vs. `ParenthesizeIfExpands`:
`parenthesize_if_expands(item)` and `SingleWithTarget`:
`optional_parentheses(item)`).

The fix is to ensure that the layouts between which the formatter flips
when adding or removing parentheses are the same. I do this by
introducing a new `SingleWithoutTarget` layout that uses the same
formatting as `SingleParenthesizedContextManager` if it has no target
and prefer `SingleWithoutTarget` over using `ParenthesizeIfExpands` or
`SingleWithTarget`.

## Formatting change

The downside is that we now use `maybe_parenthesize_expression` over
`parenthesize_if_expands` for expressions where
`can_omit_optional_parentheses` returns `false`. This can lead to stable
formatting changes. I only found one formatting change in our ecosystem
check and, unfortunately, it is necessary to fix the instability (and
instability fixes are okay to have as part of minor changes according to
our versioning policy).

The benefit of the change is that `with` items with a single context
manager and without a target are now formatted identically to how the
same expression would be formatted in other clause headers.

## Test Plan

I ran the ecosystem check locally
2024-03-08 23:48:47 +00:00
Micha Reiser
a56d42f183
Refactor with statement formatting to have explicit layouts (#10296)
## Summary

This PR refactors the with item formatting to use more explicit layouts
to make it easier to understand the different formatting cases.

The benefit of the explicit layout is that it makes it easier to reason
about layout transitions between format runs. For example, today it's
possible that `SingleWithTarget` or `ParenthesizeIfExpands` add
parentheses around the with items for `with aaaaaaaaaa + bbbbbbbbbbbb:
pass`, resulting in `with (aaaaaaaaaa + bbbbbbbbbbbb): pass`. The
problem with this is that the next formatting pass uses the
`SingleParenthesizedContextExpression` layout that uses
`maybe_parenthesize_expression` which is different from
`parenthesize_if_expands(&expr)` or `optional_parentheses(&expr)`.

## Test Plan

`cargo test`

I ran the ecosystem checks locally and there are no changes.
2024-03-08 18:40:39 -05:00
Micha Reiser
a6f32ddc5e
Ruff 2024.2 style (#9639) 2024-02-29 09:30:54 +01:00
Jane Lewis
0293908b71
Implement RUF028 to detect useless formatter suppression comments (#9899)
Fixes #6611

## Summary

This lint rule spots comments that are _intended_ to suppress or enable
the formatter, but will be ignored by the Ruff formatter.

We borrow some functions the formatter uses for determining comment
placement / putting them in context within an AST.

The analysis function uses an AST visitor to visit each comment and
attach it to the AST. It then uses that context to check:
1. Is this comment in an expression?
2. Does this comment have bad placement? (e.g. a `# fmt: skip` above a
function instead of at the end of a line)
3. Is this comment redundant?
4. Does this comment actually suppress any code?
5. Does this comment have ambiguous placement? (e.g. a `# fmt: off`
above an `else:` block)

If any of these are true, a violation is thrown. The reported reason
depends on the order of the above check-list: in other words, a `# fmt:
skip` comment on its own line within a list expression will be reported
as being in an expression, since that reason takes priority.

The lint suggests removing the comment as an unsafe fix, regardless of
the reason.
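
Two comments the rule would flag, based on the cases listed above (illustrative code; `condition`, `compute`, and `fallback` are placeholders):

```python
if condition:
    compute()
# This `fmt: off` sits above an `else:` block, so its placement is ambiguous:
# fmt: off
else:
    fallback()

values = [
    1,
    2,
    # A `fmt: skip` on its own line inside an expression is ignored:
    # fmt: skip
]
```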

## Test Plan

A snapshot test has been created.
2024-02-28 19:21:06 +00:00
Micha Reiser
edfe8421ec
Disable top-level docstring formatting for notebooks (#9957) 2024-02-12 18:14:02 +00:00
Shaygan Hooshyari
b47f85eb69
Preview Style: Format module level docstring (#9725)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-02-05 15:03:34 +00:00
Micha Reiser
4f7fb566f0
Range formatting: Fix invalid syntax after parenthesizing expression (#9751) 2024-02-02 17:56:25 +01:00
Micha Reiser
ce14f4dea5
Range formatting API (#9635) 2024-01-31 11:13:37 +01:00
Dhruv Manilawala
541aef4e6c
Implement blank_line_after_nested_stub_class preview style (#9155)
## Summary

This PR implements the `blank_line_after_nested_stub_class` preview
style in the formatter.

The logic is divided into 3 parts:
1. In between preceding and following nodes at top level and nested
suite
2. When there's a trailing comment after the class
3. When there is no following node from (1) which is the case when it's
the last or the only node in a suite

We handle (3) with `FormatLeadingAlternateBranchComments`.
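
An illustrative stub showing the preview style, assuming it inserts a blank line after the nested class body so the following sibling is separated:

```pyi
class Outer:
    class Nested:
        x: int

    y: str
```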

## Test Plan

- Add new test cases and update existing snapshots
- Checked the `typeshed` diff

fixes: #8891
2024-01-31 00:09:38 +05:30
Charlie Marsh
e80260a3c5
Remove source path from parser errors (#9322)
## Summary

I always found it odd that we had to pass this in, since it's really
higher-level context for the error. The awkwardness is further evidenced
by the fact that we pass in fake values everywhere (even outside of
tests). The source path isn't actually used to display the error; it's
only accessed elsewhere to _re-display_ the error in certain cases. This
PR modifies the code to pass the path directly in those cases.
2023-12-30 20:33:05 +00:00
Micha Reiser
9cc257ee7d
Improve dummy_implementations preview style formatting (#9240) 2023-12-22 03:44:14 +00:00
Micha Reiser
a06723da2b
Parenthesize multi-context managers (#9222) 2023-12-22 03:41:03 +00:00
Micha Reiser
fa2c37b411
Parenthesize long type annotations in annotated assignments (#9210) 2023-12-22 03:33:47 +00:00
Micha Reiser
3cc719bd74
Use named preview test functions (#9239) 2023-12-22 00:23:04 +00:00
Micha Reiser
ef4bd8d5ff
Fix: Avoid parenthesizing subscript targets and values (#9209) 2023-12-20 23:42:25 +00:00
Dhruv Manilawala
09296e3e3c
Implement no_blank_line_before_class_docstring preview style (#9154)
## Summary

This PR implements the `no_blank_line_before_class_docstring` preview
style.
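
An illustrative before/after for this preview style (hedged: reconstructed from the rule's name rather than the PR diff):

```python
# Before
class Foo:

    """A docstring preceded by a blank line."""

# After formatting with the preview style
class Foo:
    """A docstring preceded by a blank line."""
```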

## Test Plan

Update existing snapshots.

### Formatter ecosystem

`main`

| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75804 | 1799 | 1648 |
| django | 0.99984 | 2772 | 34 |
| home-assistant | 0.99955 | 10596 | 213 |
| poetry | 0.99905 | 321 | 15 |
| transformers | 0.99967 | 2657 | 324 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99980 | 3669 | 18 |
| warehouse | 0.99976 | 654 | 14 |
| zulip | 0.99958 | 1459 | 36 |

`dhruv/no-blank-line-docstring`

| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75804 | 1799 | 1648 |
| django | 0.99984 | 2772 | 34 |
| home-assistant | 0.99955 | 10596 | 213 |
| poetry | 0.99905 | 321 | 15 |
| transformers | 0.99967 | 2657 | 324 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99980 | 3669 | 18 |
| warehouse | 0.99976 | 654 | 14 |
| zulip | 0.99958 | 1459 | 36 |

fixes: #8888
2023-12-19 00:43:20 -06:00
Micha Reiser
c8d6958d15
Add new with and match sequence test cases (#9128)
## Summary

Add new test cases for `with_item` and `match` sequence that demonstrate how long headers break. 

Removes one use of `optional_parentheses` in a position where it is known that the parentheses always need to be added.

## Test Plan

cargo test
2023-12-15 11:45:13 +09:00
Dhruv Manilawala
189e947808
Split string formatting to individual nodes (#9058)
This PR splits the string formatting code in the formatter to be handled
by the respective nodes.

Previously, the string formatting was done through a single
`FormatString` interface. Now, the nodes themselves are responsible for
formatting.

The following changes were made:
1. Remove `StringLayout::ImplicitStringConcatenationInBinaryLike` and
inline the call to `FormatStringContinuation`. Previously, the
binary-like formatting would delegate to `FormatString`, which would
then delegate to `FormatStringContinuation`; this removes the
intermediary steps.
2. Add formatter implementation for `FStringPart` which delegates it to
the respective string literal or f-string node.
3. Add `ExprStringLiteralKind` which is either `String` or `Docstring`.
If it's a docstring variant, then the string expression would not be
implicitly concatenated. This is guaranteed by the
`DocstringStmt::try_from_expression` constructor.
4. Add `StringLiteralKind` which is either a `String`, `Docstring` or
`InImplicitlyConcatenatedFString`. The last variant is for when the
string literal is implicitly concatenated with an f-string (`"foo" f"bar
{x}"`).
5. Remove `FormatString`.
6. Extract the f-string quote detection as a standalone function which
is public to the crate. This is used to detect the quote to be used for
an f-string at the expression level (`ExprFString` or
`FormatStringContinuation`).


### Formatter ecosystem result

**This PR**

| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75804 | 1799 | 1648 |
| django | 0.99984 | 2772 | 34 |
| home-assistant | 0.99955 | 10596 | 214 |
| poetry | 0.99905 | 321 | 15 |
| transformers | 0.99967 | 2657 | 324 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99980 | 3669 | 18 |
| warehouse | 0.99976 | 654 | 14 |
| zulip | 0.99958 | 1459 | 36 |

**main**

| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75804 | 1799 | 1648 |
| django | 0.99984 | 2772 | 34 |
| home-assistant | 0.99955 | 10596 | 214 |
| poetry | 0.99905 | 321 | 15 |
| transformers | 0.99967 | 2657 | 324 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99980 | 3669 | 18 |
| warehouse | 0.99976 | 654 | 14 |
| zulip | 0.99958 | 1459 | 36 |
2023-12-14 12:55:10 -06:00
Micha Reiser
45f603000d
prefer_splitting_right_hand_side_of_assignments preview style (#8943) 2023-12-13 03:43:23 +00:00
Andrew Gallant
b972455ac7
ruff_python_formatter: implement "dynamic" line width mode for docstring code formatting (#9098)
## Summary

This PR changes the internal `docstring-code-line-width` setting to
additionally accept a string value `dynamic`. When `dynamic` is set, the
line width is dynamically adjusted when reformatting code snippets in
docstrings based on the indent level of the docstring. The result is
that the reformatted lines from the code snippet should not exceed the
"global" line width configuration for the surrounding source.

This PR does not change the default behavior, although I suspect the
default should probably be `dynamic`.
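
Assuming the setting is exposed as it is in current Ruff configuration, enabling the dynamic mode would look like this in `pyproject.toml`:

```toml
[tool.ruff.format]
docstring-code-format = true          # reformat code snippets in docstrings
docstring-code-line-width = "dynamic" # adjust width by the docstring's indent
```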

## Test Plan

I added a new configuration to the existing docstring code tests and
also added a new set of tests dedicated to the new `dynamic` mode.
2023-12-12 09:58:07 -05:00
Micha Reiser
d0d88d9375
Fix handling of trailing target comment (#9051) 2023-12-08 05:00:36 +00:00
Micha Reiser
0bf0aa28ac
Inline trailing comments for type alias similar to assignments (#8941) 2023-12-04 05:27:04 +00:00
Micha Reiser
8088c5367a
Refactor the comment handling of a statement's last expression (#8920) 2023-12-04 05:12:12 +00:00
Charlie Marsh
6fe8f8a272
Avoid unstable formatting in ellipsis-only body with trailing comment (#8984)
## Summary

We should avoid inlining the ellipsis in:

```python
def h():
    ...
    # bye
```

Just as we omit the ellipsis in:

```python
def h():
    # bye
    ...
```

Closes https://github.com/astral-sh/ruff/issues/8905.
2023-12-03 19:15:40 -05:00