Commit graph

383 commits

Hamir Mahal
8b3da1867e
refactor: remove unnecessary string hashes (#13250) 2024-09-18 19:08:59 +02:00
Micha Reiser
6ac61d7b89
Fix placement of inline parameter comments (#13379) 2024-09-18 08:26:06 +02:00
Micha Reiser
d86e5ad031
Update Black tests (#13375) 2024-09-17 11:16:50 +02:00
Micha Reiser
ed238e0c76
Fix incorrect placement of leading function comment with type params (#12447) 2024-07-22 14:17:00 +02:00
konsti
9a817a2922
Insert empty line between suite and alternative branch after def/class (#12294)
When there is a function or class definition at the end of a suite
followed by the beginning of an alternative block, we have to insert a
single empty line between them.

In the if-else-statement example below, we insert an empty line after
the `foo` in the if-block, but none after the else-block `foo`, since in
the latter case the enclosing suite already adds empty lines.

```python
if sys.version_info >= (3, 10):
    def foo():
        return "new"
else:
    def foo():
        return "old"
class Bar:
    pass
```
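
For illustration, the formatted output would then look like the sketch below: one empty line is inserted after the if-block `foo`, while the module-level suite itself separates the else-block `foo` from `class Bar` with two empty lines.

```python
if sys.version_info >= (3, 10):
    def foo():
        return "new"

else:
    def foo():
        return "old"


class Bar:
    pass
```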

To do so, we track whether the current suite is the last one in the
current statement with a new option on the suite kind.

Fixes #12199

---------

Co-authored-by: Micha Reiser <micha@reiser.io>
2024-07-15 12:59:33 +02:00
Micha Reiser
bd01004a42
Use space separator before parenthesized expressions in comprehensions with leading comments. (#12282) 2024-07-11 22:38:12 +02:00
Micha Reiser
5806bc915d
Fix formatter instability for lines only consisting of zero-width characters (#11748) 2024-06-05 17:55:14 +02:00
Dhruv Manilawala
bf5b62edac
Maintain synchronicity between the lexer and the parser (#11457)
## Summary

This PR updates the entire parser stack in multiple ways:

### Make the lexer lazy

* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473

Previously, Ruff's lexer would act as an iterator. The parser would
collect all the tokens in a vector first and then process the tokens to
create the syntax tree.

The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
to filter out the trivia tokens[^1]. Now, the parser will ask the token
source to get the next token and only then the lexer will continue and
emit the token. This means that the lexer needs to be aware of the
"current" token. When the `next_token` is called, the current token will
be updated with the newly lexed token.

The main motivation for making the lexer lazy is to allow re-lexing a
token in a different context. This is going to be really useful for
making the parser error resilient. For example, currently the emitted
tokens remain the same even if the parser can recover from an unclosed
parenthesis. This matters because the lexer emits a
`NonLogicalNewline` in a parenthesized context but a normal `Newline` in
a non-parenthesized context. These different kinds of newlines also
drive the emission of the indentation tokens, which the parser uses to
determine the start and end of a block.

Additionally, this allows us to implement the following functionalities:
1. Checkpoint - rewind infrastructure: The idea here is to create a
checkpoint and continue lexing. At a later point, this checkpoint can be
used to rewind the lexer back to it (see the sketch at the end of this
section).
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier.
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer, but it contains owned data, which makes it expensive to
clone. The new `TokenKind` enum just represents the type of token, which
is very cheap.

This brings up the question of how the parser will get the owned value
that was stored on `Tok`. This is solved by introducing a new
`TokenValue` enum which contains only the subset of token kinds that
carry an owned value. It is stored on the lexer and is requested by the
parser when it wants to process the data. For example:
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L1260-L1262)

[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`
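
To illustrate the checkpoint - rewind idea from (1), here is a minimal Python sketch. It is not Ruff's Rust implementation, and the tokenization is deliberately simplistic; only the on-demand lexing and the checkpoint/rewind shape are the point.

```python
import re

TOKEN = re.compile(r"\s*(\w+|\S)")


class LazyLexer:
    """Emits one token at a time and remembers the "current" token."""

    def __init__(self, source: str) -> None:
        self.source = source
        self.offset = 0
        self.current = None

    def next_token(self) -> str | None:
        # Lex on demand: nothing is produced until the parser asks for it.
        match = TOKEN.match(self.source, self.offset)
        if match is None:
            self.current = None  # end of input
        else:
            self.offset = match.end()
            self.current = match.group(1)
        return self.current

    def checkpoint(self) -> tuple[int, str | None]:
        # Capture everything needed to restore the lexer state later.
        return (self.offset, self.current)

    def rewind(self, checkpoint: tuple[int, str | None]) -> None:
        self.offset, self.current = checkpoint


lexer = LazyLexer("match (x): pass")
saved = lexer.checkpoint()
assert lexer.next_token() == "match"
lexer.rewind(saved)  # speculative parsing failed; retry in another context
assert lexer.checkpoint() == saved
```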

### Remove `SoftKeywordTransformer`

* https://github.com/astral-sh/ruff/pull/11441
* https://github.com/astral-sh/ruff/pull/11459
* https://github.com/astral-sh/ruff/pull/11442
* https://github.com/astral-sh/ruff/pull/11443
* https://github.com/astral-sh/ruff/pull/11474

For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser using infinite
lookahead to classify a soft keyword as a keyword or an identifier. This
is a brilliant idea as it basically wraps the existing Lexer and works
on top of it, which means that the logic for lexing and re-lexing a soft
keyword remains separate. The change here is to remove
`SoftKeywordTransformer` and let the parser determine this based on
context, lookahead, and speculative parsing.

* **Context:** The transformer needs to know whether the lexer is at a
statement position or a simple-statement position. This is because a
`match` token starts a compound statement while a `type` token starts a
simple statement. **The parser already knows this.**
* **Lookahead:** Now that the parser knows the context, it can perform a
lookahead of up to two tokens to classify the soft keyword. The logic
for this is described in the PRs implementing it for the `type` and
`match` soft keywords.
* **Speculative parsing:** This is where the checkpoint - rewind
infrastructure helps. For `match` soft keyword, there are certain cases
for which we can't classify based on lookahead. The idea here is to
create a checkpoint and keep parsing. Based on whether the parsing was
successful and what tokens are ahead we can classify the remaining
cases. Refer to #11443 for more details.

If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer to
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L487-L491).

The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.
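
For illustration, the same leading token can open a compound statement or begin an ordinary expression statement, which is why context and lookahead are needed to classify it (runnable on Python 3.10+):

```python
import re

command = "quit"

# Here `match` is a (soft) keyword opening a compound statement ...
match command:
    case "quit":
        print("bye")

# ... and here the very same token is an ordinary identifier.
match = re.match(r"q.*", command)
print(match.group(0) if match else "no match")
```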

### Update the parser API

* https://github.com/astral-sh/ruff/pull/11494
* https://github.com/astral-sh/ruff/pull/11505

Now that the lexer is in sync with the parser, and the parser helps to
determine whether a soft keyword is a keyword or an identifier, the
lexer can no longer be used on its own, because it isn't sensitive to
the context (which is correct). This means that the parser
API needs to be updated to not allow any access to the lexer.

Previously, there were multiple ways to parse the source code:
1. Passing the source code itself
2. Or, passing the tokens

Now that the lexer and the parser work together, the API
corresponding to (2) can no longer exist. The final API is described in
this PR description: https://github.com/astral-sh/ruff/pull/11494.

### Refactor the downstream tools (linter and formatter)

* https://github.com/astral-sh/ruff/pull/11511
* https://github.com/astral-sh/ruff/pull/11515
* https://github.com/astral-sh/ruff/pull/11529
* https://github.com/astral-sh/ruff/pull/11562
* https://github.com/astral-sh/ruff/pull/11592

And, the final set of changes involves updating all references to the
lexer and the `Tok` enum. This was done in two parts:
1. Update all the references in a way that doesn't require any changes
from this PR i.e., it can be done independently
	* https://github.com/astral-sh/ruff/pull/11402
	* https://github.com/astral-sh/ruff/pull/11406
	* https://github.com/astral-sh/ruff/pull/11418
	* https://github.com/astral-sh/ruff/pull/11419
	* https://github.com/astral-sh/ruff/pull/11420
	* https://github.com/astral-sh/ruff/pull/11424
2. Update all the remaining references to use the changes made in this
PR

For (2), various strategies were used:
1. Introduce a new `Tokens` struct which wraps the token vector and adds
methods to query a certain subset of tokens (see the sketch after this
list). These include:
	1. `up_to_first_unknown`, which replaces the `tokenize` function
	2. `in_range` and `after`, which replace the `lex_starts_at` function,
where the former returns the tokens within the given range while the
latter returns all the tokens after the given offset
2. Introduce a new `TokenFlags` which is a set of flags to query certain
information from a token. Currently, this information is limited to
string-type tokens but can be expanded to include other information in
the future as needed. https://github.com/astral-sh/ruff/pull/11578
3. Move the `CommentRanges` to the parsed output because this
information is common to both the linter and the formatter. This removes
the need for the `tokens_and_ranges` function.
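
A rough Python analogue of the `Tokens` queries described in (1). The real struct is written in Rust, and the exact boundary semantics sketched here (inclusive vs. exclusive ends) are assumptions for illustration:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Token:
    kind: str
    start: int  # offset of the token start
    end: int    # offset past the token end


class Tokens:
    def __init__(self, tokens: list[Token]) -> None:
        self._tokens = tokens

    def up_to_first_unknown(self) -> list[Token]:
        # Replaces `tokenize`: stop at the first unknown (error) token.
        result = []
        for token in self._tokens:
            if token.kind == "Unknown":
                break
            result.append(token)
        return result

    def in_range(self, start: int, end: int) -> list[Token]:
        # Tokens fully contained in the given range.
        return [t for t in self._tokens if t.start >= start and t.end <= end]

    def after(self, offset: int) -> list[Token]:
        # All tokens at or after the given offset.
        return [t for t in self._tokens if t.start >= offset]
```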

## Test Plan

- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects

### Running this change on dozens of open-source projects

Refer to the PR description to get the list of open source projects used
for testing.

Now, the following tests were done between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of default rule selection
3. Compare the output of `--select=ALL`

**Conclusion: all outputs were the same.**

## What's next?

The next step is to introduce re-lexing logic and update the parser to
feed the recovery information to the lexer so that it can emit the
correct token. This moves us one step closer to having error resilience
in the parser and gives Ruff the ability to lint even if the
source code contains syntax errors.
2024-06-03 18:23:50 +05:30
Micha Reiser
9b6d2ce1f2
Fix incorrect placement of trailing stub function comments (#11632) 2024-05-31 12:06:17 +00:00
Dimitri Papadopoulos Orfanos
3b0584449d
Fix a few typos found by codespell (#11404)
## Summary

Just fix typos.

## Test Plan

CI jobs.

---------

Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
2024-05-13 13:22:35 +00:00
Dhruv Manilawala
77a72ecd38
Avoid multiline expression if format specifier is present (#11123)
## Summary

This PR fixes the bug where the formatter would format an f-string and
could potentially change the AST.

For a triple-quoted f-string, the element can't be broken across
multiple lines if it has a format specifier because otherwise the
newline would be treated as part of the format specifier.

Given the following f-string:
```python
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {
    variable:.3f} ddddddddddddddd eeeeeeee"""
```

The formatter sees that the f-string is already multiline, so it
assumes that it can contain line breaks, i.e., be broken into multiple
lines. But in this specific case we can't format it as:

```python
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {
    variable:.3f
} ddddddddddddddd eeeeeeee"""
```
                     
Because the format specifier string would become ".3f\n", which is not
the original string (`.3f`).

If the original source code already contained newlines, they'll be
preserved. For example:
```python
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {
    variable:.3f
} ddddddddddddddd eeeeeeee"""
```

The above will be formatted as:
```python
f"""aaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbb ccccccccccc {variable:.3f
} ddddddddddddddd eeeeeeee"""
```

Note that the newline after `.3f` is part of the format specifier which
needs to be preserved.
The Python version is irrelevant in this case.

fixes: #10040 

## Test Plan

Add some test cases to verify this behavior.
2024-04-26 13:34:38 +00:00
Jelle Zijlstra
cd3e319538
Add support for PEP 696 syntax (#11120) 2024-04-26 09:47:29 +02:00
Dhruv Manilawala
13ffb5bc19
Replace LALRPOP parser with hand-written parser (#10036)
(Supersedes #9152, authored by @LaBatata101)

## Summary

This PR replaces the current parser generated from LALRPOP to a
hand-written recursive descent parser.

It also updates the grammar for [PEP
646](https://peps.python.org/pep-0646/) so that the parser outputs the
correct AST. For example, in `data[*x]`, the index expression is now a
tuple with a single starred expression instead of just a starred
expression.
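
For reference, CPython's own parser (3.11+, where PEP 646 landed) produces the same shape, which can be checked with the standard `ast` module:

```python
import ast

# The subscript in `data[*x]` parses as a tuple containing one starred
# element, not as a bare starred expression.
tree = ast.parse("data[*x]", mode="eval")
print(ast.dump(tree.body.slice))
# -> roughly: Tuple(elts=[Starred(value=Name(id='x', ...), ...)], ...)
```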

Beyond the performance improvements, the parser is also error resilient
and can provide better error messages. The behavior as seen by any
downstream tools isn't changed. That is, the linter and formatter can
still assume that the parser will _stop_ at the first syntax error. This
will be updated in the following months.

For more details about the change here, refer to the PR corresponding to
the individual commits and the release blog post.

## Test Plan

Write _lots_ and _lots_ of tests for both valid and invalid syntax and
verify the output.

## Acknowledgements

- @MichaReiser for reviewing 100+ parser PRs and continuously providing
guidance throughout the project
- @LaBatata101 for initiating the transition to a hand-written parser in
#9152
- @addisoncrump for implementing the fuzzer which helped
[catch](https://github.com/astral-sh/ruff/pull/10903)
[a](https://github.com/astral-sh/ruff/pull/10910)
[lot](https://github.com/astral-sh/ruff/pull/10966)
[of](https://github.com/astral-sh/ruff/pull/10896)
[bugs](https://github.com/astral-sh/ruff/pull/10877)

---------

Co-authored-by: Victor Hugo Gomes <labatata101@linuxmail.org>
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-04-18 17:57:39 +05:30
Micha Reiser
9d705a4414
Fix subscript comment placement with parenthesized value (#10496)
## Summary

This is a follow-up to https://github.com/astral-sh/ruff/pull/10492

I incorrectly assumed that `subscript.value.end()` always points past
the value. However, this isn't the case for parenthesized values where
the end "ends" before the parentheses.

## Test Plan

I added new tests for the parenthesized case.
2024-03-20 20:30:22 +00:00
Micha Reiser
954a48b129
Fix unstable formatting for trailing subscript end-of-line comment (#10492)
## Summary

This PR fixes an instability where formatting a subscript
whose `slice` is not an `ExprSlice` and which has a trailing
end-of-line comment after its opening `[` required two formatting passes
to be stable.

The fix is to associate the trailing end-of-line comment as dangling
comment on `[` to preserve its position, similar to how Ruff does it for
other parenthesized expressions.
This also matches how trailing end-of-line subscript comments are
handled when the `slice` is an `ExprSlice`.
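
A hypothetical input exhibiting the construct (the comment after `[` is the dangling comment in question):

```python
data = {"key": 1}
x = data[  # trailing end-of-line comment after the opening bracket
    "key"
]
```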

Fixes https://github.com/astral-sh/ruff/issues/10355

## Versioning

Shipping this as part of a patch release is fine because:

* It fixes a stability issue
* It doesn't impact already formatted code because Ruff would already
have moved the comment to the end of the line (instead of preserving
it)

## Test Plan

Added tests
2024-03-20 18:12:10 +01:00
Auguste Lalande
3ed707f245
Spellcheck & grammar (#10375)
## Summary

I used `codespell` and `gramma` to identify misspellings and grammar
errors throughout the codebase and fixed them. I tried not to make any
controversial changes, but feel free to revert as you see fit.
2024-03-13 02:34:23 +00:00
Micha Reiser
b64f2ea401
Formatter: Improve single-with item formatting for Python 3.8 or older (#10276)
## Summary

This PR changes how we format `with` statements with a single with item
for Python 3.8 or older. This change is not compatible with Black.

This is how we format a single-item with statement today 

```python
def run(data_path, model_uri):
    with pyspark.sql.SparkSession.builder.config(
        key="spark.python.worker.reuse", value=True
    ).config(key="spark.ui.enabled", value=False).master(
        "local-cluster[2, 1, 1024]"
    ).getOrCreate():
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```

This is different from how we would format the same expression if it is
inside any other clause header (`while`, `if`, ...):

```python
def run(data_path, model_uri):
    while (
        pyspark.sql.SparkSession.builder.config(
            key="spark.python.worker.reuse", value=True
        )
        .config(key="spark.ui.enabled", value=False)
        .master("local-cluster[2, 1, 1024]")
        .getOrCreate()
    ):
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))

```

Which seems inconsistent to me. 

This PR changes the formatting of the single-item `with` statement for
Python 3.8 or older to match that of other clause headers.

```python
def run(data_path, model_uri):
    with (
        pyspark.sql.SparkSession.builder.config(
            key="spark.python.worker.reuse", value=True
        )
        .config(key="spark.ui.enabled", value=False)
        .master("local-cluster[2, 1, 1024]")
        .getOrCreate()
    ):
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```

According to our versioning policy, this style change is gated behind a
preview flag.

## Test Plan

See added tests.

2024-03-08 23:56:02 +00:00
Micha Reiser
4bce801065
Fix unstable with-items formatting (#10274)
## Summary

Fixes https://github.com/astral-sh/ruff/issues/10267

The issue with the current formatting is that the formatter flips
between the `SingleParenthesizedContextManager` and
`ParenthesizeIfExpands` or `SingleWithTarget` layouts because they use
incompatible formatting (`SingleParenthesizedContextManager`:
`maybe_parenthesize_expression(context)` vs. `ParenthesizeIfExpands`:
`parenthesize_if_expands(item)` and `SingleWithTarget`:
`optional_parentheses(item)`).

The fix is to ensure that the layouts between which the formatter flips
when adding or removing parentheses are the same. I do this by
introducing a new `SingleWithoutTarget` layout that uses the same
formatting as `SingleParenthesizedContextManager` if it has no target
and prefer `SingleWithoutTarget` over using `ParenthesizeIfExpands` or
`SingleWithTarget`.

## Formatting change

The downside is that we now use `maybe_parenthesize_expression` over
`parenthesize_if_expands` for expressions where
`can_omit_optional_parentheses` returns `false`. This can lead to stable
formatting changes. I only found one formatting change in our ecosystem
check and, unfortunately, this is necessary to fix the instability (and
instability fixes are okay to have as part of minor changes according to
our versioning policy).

The benefit of the change is that `with` items with a single context
manager and without a target are now formatted identically to how the
same expression would be formatted in other clause headers.

## Test Plan

I ran the ecosystem check locally
2024-03-08 23:48:47 +00:00
Micha Reiser
965adbed4b
Fix trailing kwargs end of line comment after slash (#10297)
## Summary

Fixes the handling of end-of-line comments that belong to `**kwargs`
when the `**kwargs` comes after a slash.

The issue was that we failed to include the `**kwargs` start position
when determining the start of the next node coming after the `/`.
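
A hypothetical example of the affected construct:

```python
def f(
    positional_only,
    /,
    **kwargs,  # this end-of-line comment belongs to **kwargs
): ...
```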

Fixes https://github.com/astral-sh/ruff/issues/10281

## Test Plan

Added test
2024-03-08 14:45:26 +00:00
Micha Reiser
dcc92f50cf
Update black tests (#10166) 2024-02-29 10:00:51 +01:00
Micha Reiser
a6f32ddc5e
Ruff 2024.2 style (#9639) 2024-02-29 09:30:54 +01:00
Dhruv Manilawala
72bf1c2880
Preview minimal f-string formatting (#9642)
## Summary

_This is a preview-only feature and is available using the `--preview`
command-line flag._

With the implementation of [PEP 701] in Python 3.12, f-strings can now
be broken into multiple lines, can contain comments, and can re-use the
same quote character. Currently, no other Python formatter formats the
f-strings so there's some discussion which needs to happen in defining
the style used for f-string formatting. Relevant discussion:
https://github.com/astral-sh/ruff/discussions/9785

The goal for this PR is to add minimal support for f-string formatting.
This means formatting the expression within the replacement field without
introducing any major style changes.

### Newlines

The heuristic for adding newlines is similar to that of
[Prettier](https://prettier.io/docs/en/next/rationale.html#template-literals):
the formatter only splits an expression in the replacement
field across multiple lines if there is already a line break within the
replacement field.

In other words, the formatter would not add any newlines unless they
were already present, i.e., added by the user. This makes
breaking any expression inside an f-string optional and in control of
the user. For example,

```python
# We wouldn't break this
aaaaaaaaaaa = f"asaaaaaaaaaaaaaaaa { aaaaaaaaaaaa + bbbbbbbbbbbb + ccccccccccccccc } cccccccccc"

# But, we would break the following as there's already a newline
aaaaaaaaaaa = f"asaaaaaaaaaaaaaaaa {
	aaaaaaaaaaaa + bbbbbbbbbbbb + ccccccccccccccc } cccccccccc"
```


If there are comments in any of the replacement fields of the f-string,
then it will always be a multi-line f-string, in which case the formatter
prefers to break expressions, i.e., introduce newlines. For example,

```python
x = f"{ # comment
    a }"
```

### Quotes

The logic for formatting quotes remains unchanged: the existing logic
determines the necessary quote char and applies it accordingly.

Now, if the expression inside an f-string is itself string-like, then
we need to make sure to preserve the existing quote and not change it to
the preferred quote unless the target version is 3.12 or later. For example,

```python
f"outer {'inner'} outer"

# For pre 3.12, preserve the single quote
f"outer {'inner'} outer"

# While for 3.12 and later, the quotes can be changed
f"outer {"inner"} outer"
```

But, for triple-quoted strings, we can re-use the same quote char unless
the inner string is itself a triple-quoted string.

```python
f"""outer {"inner"} outer"""  # valid
f"""outer {'''inner'''} outer"""  # preserve the single quote char for the inner string
```

### Debug expressions

If debug expressions are present in the replacement field of an
f-string, then the whitespace needs to be preserved as it will be
rendered as-is (for example, `f"{ x = }"`). If there are any nested
f-strings, then the whitespace in them needs to be preserved as well,
which means that we'll stop formatting the f-string as soon as we
encounter a debug expression.

```python
f"outer {   x =  !s  :.3f}"
#                  ^^
#                  We can remove these whitespaces
```

Now, the whitespace doesn't need to be preserved around conversion spec
and format specifiers, so we'll format them as usual but we won't be
formatting any nested f-string within the format specifier.

### Miscellaneous

- The
[`hug_parens_with_braces_and_square_brackets`](https://github.com/astral-sh/ruff/issues/8279)
preview style isn't implemented w.r.t. the f-string curly braces.
- The
[indentation](https://github.com/astral-sh/ruff/discussions/9785#discussioncomment-8470590)
is always relative to the statement containing the f-string

## Test Plan

* Add new test cases
* Review existing snapshot changes
* Review the ecosystem changes

[PEP 701]: https://peps.python.org/pep-0701/
2024-02-16 20:28:11 +05:30
Micha Reiser
edfe8421ec
Disable top-level docstring formatting for notebooks (#9957) 2024-02-12 18:14:02 +00:00
Micha Reiser
8657a392ff
Docstring formatting: Preserve tab indentation when using indent-style=tabs (#9915) 2024-02-12 16:09:13 +01:00
Micha Reiser
4946a1876f
Stabilize quote-style preserve (#9922) 2024-02-12 09:30:07 +00:00
Micha Reiser
fe7d965334
Reduce Result<Tok, LexicalError> size by using Box<str> instead of String (#9885) 2024-02-08 20:36:22 +00:00
Shaygan Hooshyari
b47f85eb69
Preview Style: Format module level docstring (#9725)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-02-05 15:03:34 +00:00
Micha Reiser
80fc02e7d5
Don't trim last empty line in docstrings (#9813) 2024-02-05 13:29:24 +00:00
Micha Reiser
4f7fb566f0
Range formatting: Fix invalid syntax after parenthesizing expression (#9751) 2024-02-02 17:56:25 +01:00
Micha Reiser
ce14f4dea5
Range formatting API (#9635) 2024-01-31 11:13:37 +01:00
Dhruv Manilawala
541aef4e6c
Implement blank_line_after_nested_stub_class preview style (#9155)
## Summary

This PR implements the `blank_line_after_nested_stub_class` preview
style in the formatter.

The logic is divided into 3 parts:
1. In between the preceding and following nodes, at the top level and in
nested suites
2. When there's a trailing comment after the class
3. When there is no following node from (1), which is the case when it's
the last or the only node in a suite

We handle (3) with `FormatLeadingAlternateBranchComments`.
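
A sketch of the style's effect in a stub file, assuming this is the intended output shape (hypothetical snippet):

```python
class Outer:
    class Nested: ...

    # the preview style requires the blank line above after the nested class
    value: int
```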

## Test Plan

- Add new test cases and update existing snapshots
- Checked the `typeshed` diff

fixes: #8891
2024-01-31 00:09:38 +05:30
Micha Reiser
3c7fea769c
Show source-type in formatter snapshot tests with options (#9699) 2024-01-30 10:08:50 +00:00
Micha Reiser
0045032905
Set source type: Stub for black tests with options (#9674) 2024-01-29 15:54:30 +01:00
Micha Reiser
91046e4c81
Preserve indent around multiline strings (#9637) 2024-01-26 08:18:30 +01:00
Micha Reiser
395bf3dc98
Fix the input for black's line ranges test file (#9622) 2024-01-23 10:40:23 +00:00
Micha Reiser
58fcd96ac1
Update Black Tests (#9455) 2024-01-10 12:09:34 +00:00
Micha Reiser
ac02d3aedd
Hug multiline-strings preview style (#9243) 2024-01-10 12:47:34 +01:00
Charlie Marsh
ba71772d93
Parenthesize breaking named expressions in match guards (#9396)
## Summary

This is an attempt to solve
https://github.com/astral-sh/ruff/issues/9394 by avoiding breaks in
named expressions when invalid.
2024-01-08 14:47:01 +00:00
Charlie Marsh
60ba7a7c0d
Allow # fmt: skip with interspersed same-line comments (#9395)
## Summary

This is similar to https://github.com/astral-sh/ruff/pull/8876, but more
limited in scope:

1. It only applies to `# fmt: skip` (like Black). Like `# isort: on`, `#
fmt: on` needs to be on its own line (still).
2. It only delimits on `#`, so you can do `# fmt: skip # noqa`, but not
`# fmt: skip - some other content` or `# fmt: skip; noqa`.

If we want to support the `;`-delimited version, we should revisit
later, since we don't support that in the linter (so `# fmt: skip; noqa`
wouldn't register a `noqa`).
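
Concretely (illustrative examples based on the rules above):

```python
# Recognized: another `#`-delimited comment may follow the suppression comment.
x = [1,  2,   3]  # fmt: skip # noqa: E501

# Not recognized: trailing content without a `#` delimiter ...
y = [1,  2,   3]  # fmt: skip - keep this alignment

# ... and the `;`-delimited form is also unsupported for now.
z = [1,  2,   3]  # fmt: skip; noqa: E501
```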

Closes https://github.com/astral-sh/ruff/issues/8892.
2024-01-04 19:39:37 -05:00
Dimitri Papadopoulos Orfanos
d04d49cc0e
Fix typos found by codespell (#9346)
## Summary

Fix typos found by
[codespell](https://github.com/codespell-project/codespell).

## Test Plan

CI tests.
2024-01-02 02:08:15 +00:00
Charlie Marsh
e80260a3c5
Remove source path from parser errors (#9322)
## Summary

I always found it odd that we had to pass this in, since it's really
higher-level context for the error. The awkwardness is further evidenced
by the fact that we pass in fake values everywhere (even outside of
tests). The source path isn't actually used to display the error; it's
only accessed elsewhere to _re-display_ the error in certain cases. This
PR modifies the code to instead pass the path directly in those cases.
2023-12-30 20:33:05 +00:00
Micha Reiser
5d4825b60f
Normalise hex and unicode escape sequences in strings (#9280) 2023-12-28 09:06:58 +08:00
Micha Reiser
9cc257ee7d
Improve dummy_implementations preview style formatting (#9240) 2023-12-22 03:44:14 +00:00
Micha Reiser
a06723da2b
Parenthesize multi-context managers (#9222) 2023-12-22 03:41:03 +00:00
Micha Reiser
fa2c37b411
Parenthesize long type annotations in annotated assignments (#9210) 2023-12-22 03:33:47 +00:00
Micha Reiser
d835b28d01
Show preview changes for tests with options (#9223) 2023-12-21 23:36:19 +00:00
Micha Reiser
c6d8076034
Set target versions in Black tests (#9221) 2023-12-21 04:20:17 +00:00
Micha Reiser
8cb7950102
Add target_version to formatter options (#9220) 2023-12-21 04:05:58 +00:00
Micha Reiser
ef4bd8d5ff
Fix: Avoid parenthesizing subscript targets and values (#9209) 2023-12-20 23:42:25 +00:00
Dhruv Manilawala
09296e3e3c
Implement no_blank_line_before_class_docstring preview style (#9154)
## Summary

This PR implements the `no_blank_line_before_class_docstring` preview
style.
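
A sketch of the style's effect, assuming an input with a blank line between the class header and its docstring:

```python
# Input
class MyClass:

    """Docstring for MyClass."""


# Output under the preview style: the blank line is removed
class MyClass:
    """Docstring for MyClass."""
```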

## Test Plan

Update existing snapshots.

### Formatter ecosystem

`main`

| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75804 | 1799 | 1648 |
| django | 0.99984 | 2772 | 34 |
| home-assistant | 0.99955 | 10596 | 213 |
| poetry | 0.99905 | 321 | 15 |
| transformers | 0.99967 | 2657 | 324 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99980 | 3669 | 18 |
| warehouse | 0.99976 | 654 | 14 |
| zulip | 0.99958 | 1459 | 36 |

`dhruv/no-blank-line-docstring`

| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75804 | 1799 | 1648 |
| django | 0.99984 | 2772 | 34 |
| home-assistant | 0.99955 | 10596 | 213 |
| poetry | 0.99905 | 321 | 15 |
| transformers | 0.99967 | 2657 | 324 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99980 | 3669 | 18 |
| warehouse | 0.99976 | 654 | 14 |
| zulip | 0.99958 | 1459 | 36 |

fixes: #8888
2023-12-19 00:43:20 -06:00