Commit graph

38 commits

Alex Waygood
7caf0d064a
Simplify formatting of strings by using flags from the AST nodes (#10489) 2024-03-20 16:16:54 +00:00
Alex Waygood
162d2eb723
Track casing of r-string prefixes in the tokenizer and AST (#10314)
Co-authored-by: Micha Reiser <micha@reiser.io>
2024-03-18 17:18:04 +00:00
Alex Waygood
92e6026446
Apply NFKC normalization to unicode identifiers in the lexer (#10412) 2024-03-18 11:56:56 +00:00
Alex Waygood
c504d7ab11
Track quoting style in the tokenizer (#10256) 2024-03-08 08:40:06 +00:00
Micha Reiser
fe7d965334
Reduce Result<Tok, LexicalError> size by using Box<str> instead of String (#9885) 2024-02-08 20:36:22 +00:00
Charlie Marsh
6fffde72e7
Use memchr for string lexing (#9888)
## Summary

On `main`, string lexing consists of walking through the string
character-by-character to search for the closing quote (with some
nuance: we also need to skip escaped characters, and error if we see
newlines in non-triple-quoted strings). This PR rewrites `lex_string` to
instead use `memchr` to search for the closing quote, which is
significantly faster. On my machine, at least, the `globals.py`
benchmark (which contains a lot of docstrings) gets 40% faster...

```text
lexer/numpy/globals.py  time:   [3.6410 µs 3.6496 µs 3.6585 µs]
                        thrpt:  [806.53 MiB/s 808.49 MiB/s 810.41 MiB/s]
                 change:
                        time:   [-40.413% -40.185% -39.984%] (p = 0.00 < 0.05)
                        thrpt:  [+66.623% +67.181% +67.822%]
                        Performance has improved.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high mild
lexer/unicode/pypinyin.py
                        time:   [12.422 µs 12.445 µs 12.467 µs]
                        thrpt:  [337.03 MiB/s 337.65 MiB/s 338.27 MiB/s]
                 change:
                        time:   [-9.4213% -9.1930% -8.9586%] (p = 0.00 < 0.05)
                        thrpt:  [+9.8401% +10.124% +10.401%]
                        Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
  1 (1.00%) high mild
  2 (2.00%) high severe
lexer/pydantic/types.py time:   [107.45 µs 107.50 µs 107.56 µs]
                        thrpt:  [237.11 MiB/s 237.24 MiB/s 237.35 MiB/s]
                 change:
                        time:   [-4.0108% -3.7005% -3.3787%] (p = 0.00 < 0.05)
                        thrpt:  [+3.4968% +3.8427% +4.1784%]
                        Performance has improved.
Found 7 outliers among 100 measurements (7.00%)
  2 (2.00%) high mild
  5 (5.00%) high severe
lexer/numpy/ctypeslib.py
                        time:   [46.123 µs 46.165 µs 46.208 µs]
                        thrpt:  [360.36 MiB/s 360.69 MiB/s 361.01 MiB/s]
                 change:
                        time:   [-19.313% -18.996% -18.710%] (p = 0.00 < 0.05)
                        thrpt:  [+23.016% +23.451% +23.935%]
                        Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
  3 (3.00%) low mild
  1 (1.00%) high mild
  4 (4.00%) high severe
lexer/large/dataset.py  time:   [231.07 µs 231.19 µs 231.33 µs]
                        thrpt:  [175.87 MiB/s 175.97 MiB/s 176.06 MiB/s]
                 change:
                        time:   [-2.0437% -1.7663% -1.4922%] (p = 0.00 < 0.05)
                        thrpt:  [+1.5148% +1.7981% +2.0864%]
                        Performance has improved.
Found 10 outliers among 100 measurements (10.00%)
  5 (5.00%) high mild
  5 (5.00%) high severe
```
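
A hedged sketch of the technique (it assumes the `memchr` crate; this is not ruff's actual `lex_string`): rather than stepping one character at a time, jump straight to the next quote or backslash.

```rust
fn find_closing_quote(rest: &str, quote: u8) -> Option<usize> {
    let bytes = rest.as_bytes();
    let mut offset = 0;
    while offset <= bytes.len() {
        // Jump to the next interesting byte instead of walking every char.
        let index = memchr::memchr2(quote, b'\\', &bytes[offset..])?;
        let pos = offset + index;
        if bytes[pos] == b'\\' {
            // Skip the escaped character; the real lexer also errors on
            // newlines in non-triple-quoted strings.
            offset = pos + 2;
        } else {
            return Some(pos);
        }
    }
    None
}
```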
2024-02-08 17:23:06 +00:00
Seo Sanghyeon
df7fb95cbc
Index multiline f-strings (#9837)
Fix #9777.
2024-02-05 21:25:33 -05:00
Micha Reiser
21f2d0c90b
Add an explicit fast path for whitespace to is_identifier_continuation (#9532) 2024-01-16 08:23:43 +00:00
Micha Reiser
94968fedd5
Use Rust 1.75 toolchain (#9437) 2024-01-08 18:03:16 +01:00
Charlie Marsh
f0d43dafcf
Ignore trailing quotes for unclosed l-brace errors (#9388)
## Summary

Given:

```python
F"{"ڤ
```

We try to locate the "unclosed left brace" error by subtracting the
quote size from the lexer offset -- so we subtract 1 from the end of the
source, which puts us in the middle of a Unicode character. I don't
think we should try to adjust the offset in this way, since there can be
content _after_ the quote. For example, with the advent of PEP 701, this
string could reasonably be fixed as:

```python
F"{"ڤ"}"
```
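
The byte-offset hazard in miniature (an illustration, not ruff's code):

```rust
fn main() {
    // `ڤ` (U+06A4) is two bytes in UTF-8, so subtracting one byte from
    // the end of the source lands inside the character.
    let source = "F\"{\"\u{06A4}";
    assert!(!source.is_char_boundary(source.len() - 1));
}
```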

Closes https://github.com/astral-sh/ruff/issues/9379.
2024-01-04 05:00:55 +00:00
Charlie Marsh
97e9d3c54f
Use Display for formatter parse errors (#9316)
## Summary

This helps a bit with (but does not close) the issues described in
https://github.com/astral-sh/ruff/issues/9311. E.g., now, we at least
see: `error: Failed to format main.py: source contains syntax errors:
invalid syntax. Got unexpected token '=' at byte offset 20`.
2023-12-29 22:26:57 +00:00
Charlie Marsh
9d6444138b
Remove lexing and parsing from the linter benchmark (#9264)
## Summary

This PR adds some helper structs to the linter paths to enable passing
in the pre-computed tokens and parsed source code during benchmarking,
to remove lexing and parsing from the overall linter benchmark
measurement. We already remove parsing for the formatter, and we have
separate benchmarks for the lexer and the parser, so this should make it
much easier to measure linter performance changes.
2023-12-23 16:43:11 -05:00
Dhruv Manilawala
47d80f29a7
Lexer start of line is false only for Mode::Expression (#8880)
## Summary

This PR fixes a bug in the lexer where `Mode::Ipython` wasn't being
considered when initializing the soft keyword transformer that wraps the
lexer. This meant that if the source code started with either the `match`
or `type` keyword, the keyword was considered a name token instead. For
example,

```python
match foo:
    case bar:
        pass
```

This would transform the `match` keyword into an identifier if the mode
is `Ipython`.

The fix is to reverse the condition in the soft keyword initializer so
that any new mode is, by default, treated as starting at the beginning of
a line.
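
A minimal sketch of the reversed condition, with a stand-in `Mode` enum rather than the actual transformer code:

```rust
enum Mode {
    Module,
    Interactive,
    Expression,
    Ipython,
}

// Instead of listing the modes that start at a line boundary (which
// silently excluded `Mode::Ipython`), treat `Mode::Expression` as the
// only exception, so any newly added mode gets the correct default.
fn starts_at_line_start(mode: &Mode) -> bool {
    !matches!(mode, Mode::Expression)
}

fn main() {
    assert!(starts_at_line_start(&Mode::Ipython));
    assert!(!starts_at_line_start(&Mode::Expression));
}
```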

## Test Plan

Add a new test case for `Mode::Ipython` and verify the snapshot.

fixes: #8870
2023-11-28 20:38:25 +00:00
konsti
14e65afdc6
Update to Rust 1.74 and use new clippy lints table (#8722)
Update to [Rust
1.74](https://blog.rust-lang.org/2023/11/16/Rust-1.74.0.html) and use
the new clippy lints table.

The update itself introduced a new clippy lint about superfluous hashes
in raw strings; the offending hashes were removed.

I moved our lint config from `rustflags` to the newly stabilized
[workspace.lints](https://doc.rust-lang.org/stable/cargo/reference/workspaces.html#the-lints-table).
One consequence is that we have to use `unsafe_code = "warn"` instead of
`"forbid"` because the latter now actually bans unsafe code:

```
error[E0453]: allow(unsafe_code) incompatible with previous forbid
  --> crates/ruff_source_file/src/newlines.rs:62:17
   |
62 |         #[allow(unsafe_code)]
   |                 ^^^^^^^^^^^ overruled by previous forbid
   |
   = note: `forbid` lint level was set on command line
```

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2023-11-16 18:12:46 -05:00
Andrew Gallant
6a1fa4778f
Reject more syntactically invalid Python programs (#8524)
## Summary

This commit adds some additional error checking to the parser such that
assignments that are invalid syntax are rejected. This covers the
obvious cases like `5 = 3` and some not-so-obvious cases like
`x + y = 42`.

This does add an additional recursive call to the parser for the cases
handling assignments. I had initially been concerned about doing this,
but `set_context` is already doing recursion during assignments, so I
didn't feel as though this was changing any fundamental performance
characteristics of the parser. (Also, in practice, I would expect any
such recursion here to be quite shallow since the recursion is done on
the target of an assignment. Such things are rarely nested much in
practice.)
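
A hedged sketch of the idea with a toy AST (not ruff's types): walk the assignment target recursively and reject expressions that can never be assigned to.

```rust
enum Expr {
    Name(&'static str),
    Number(i64),
    BinOp(Box<Expr>, Box<Expr>),
    Tuple(Vec<Expr>),
}

// The recursion mirrors the nesting of the target, which is rarely
// deep in practice.
fn is_valid_target(expr: &Expr) -> bool {
    match expr {
        Expr::Name(_) => true,
        Expr::Tuple(elements) => elements.iter().all(is_valid_target),
        // Rejects targets like `5 = 3` and `x + y = 42`.
        Expr::Number(_) | Expr::BinOp(..) => false,
    }
}

fn main() {
    let lhs = Expr::BinOp(Box::new(Expr::Name("x")), Box::new(Expr::Name("y")));
    assert!(!is_valid_target(&lhs)); // `x + y = 42` is invalid
    assert!(is_valid_target(&Expr::Tuple(vec![Expr::Name("a"), Expr::Name("b")])));
}
```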

Fixes #6895

## Test Plan

I've added unit tests covering every case that is detected as invalid on
an `Expr`.
2023-11-07 07:16:06 -05:00
Carter Snook
e2b5c6ac5f
perf(parser): use memchr for lexing comments (#8193) 2023-10-27 02:07:43 +01:00
Dhruv Manilawala
709abd534a
Fix lexing single-quoted f-string with multi-line format spec (#7787)
## Summary

Reported at https://github.com/python/cpython/issues/110259

## Test Plan

Add test cases for the fix and update the snapshots
2023-10-05 23:12:09 +05:30
Dhruv Manilawala
69b8136463
Avoid curly brace escape in f-string format spec (#7780)
## Summary

This PR fixes a bug in the lexer for f-string format spec where it would
consider the `{{` (double curly braces) as an escape pattern.

This is not the case as evident by the
[PEP](https://peps.python.org/pep-0701/#how-to-produce-these-new-tokens)
as well but I missed the part:

> [..]
> * **If in “format specifier mode” (see step 3), an opening brace ({)
or a closing brace (}).**
> * If not in “format specifier mode” (see step 3), an opening brace ({)
or a closing brace (}) that is not immediately followed by another
opening/closing brace.
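
A hedged sketch of the distinction (an assumption about the shape of the check, not the actual lexer code):

```rust
// Inside "format specifier mode", `{` and `}` are always individual
// tokens (they open or close nested replacement fields), so `{{` must
// not be treated as an escaped brace there.
fn brace_is_escape(in_format_spec: bool, current: char, next: Option<char>) -> bool {
    !in_format_spec && matches!((current, next), ('{', Some('{')) | ('}', Some('}')))
}

fn main() {
    assert!(brace_is_escape(false, '{', Some('{'))); // f"{{..." escapes
    assert!(!brace_is_escape(true, '{', Some('{'))); // but not in a spec
}
```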

## Test Plan

Add a test case to verify the fix and update the snapshot.

fixes: #7778
2023-10-03 19:38:03 +05:30
Dhruv Manilawala
e91ffe3e93
Consume the escaped Windows newline (\r\n) for FStringMiddle (#7722)
## Summary

This PR fixes a bug where, if a Windows newline (`\r\n`) character was
escaped, only the `\r` was consumed and not the `\n`, leading to an
unterminated string error.
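
A hedged illustration of the fix (not the exact lexer code): treat `\r\n` after a backslash as a single escaped newline.

```rust
// `rest` begins immediately after the escaping backslash; return how
// many bytes the escaped newline occupies.
fn escaped_newline_len(rest: &str) -> usize {
    if rest.starts_with("\r\n") {
        2 // consume both bytes of the Windows newline
    } else if rest.starts_with('\n') || rest.starts_with('\r') {
        1
    } else {
        0
    }
}

fn main() {
    assert_eq!(escaped_newline_len("\r\nrest of string"), 2);
}
```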

## Test Plan

Add new test cases to check the newline escapes.

fixes: #7632
2023-10-01 07:58:20 +05:30
Dhruv Manilawala
e62e245c61
Add support for PEP 701 (#7376)
## Summary

This PR adds support for PEP 701 in Ruff. It is a rollup of the
individual PRs, which were kept separate for logical separation and
easier code review. Refer to each pull request for a detailed description
of its change.

Refer to the PR description for the list of pull requests within this PR.

## Test Plan

### Formatter ecosystem checks

Explanation for the change in ecosystem check:
https://github.com/astral-sh/ruff/pull/7597#issue-1908878183

#### `main`

```
| project      | similarity index  | total files       | changed files     |
|--------------|------------------:|------------------:|------------------:|
| cpython      |           0.76083 |              1789 |              1631 |
| django       |           0.99983 |              2760 |                36 |
| transformers |           0.99963 |              2587 |               319 |
| twine        |           1.00000 |                33 |                 0 |
| typeshed     |           0.99983 |              3496 |                18 |
| warehouse    |           0.99967 |               648 |                15 |
| zulip        |           0.99972 |              1437 |                21 |
```

#### `dhruv/pep-701`

```
| project      | similarity index  | total files       | changed files     |
|--------------|------------------:|------------------:|------------------:|
| cpython      |           0.76051 |              1789 |              1632 |
| django       |           0.99983 |              2760 |                36 |
| transformers |           0.99963 |              2587 |               319 |
| twine        |           1.00000 |                33 |                 0 |
| typeshed     |           0.99983 |              3496 |                18 |
| warehouse    |           0.99967 |               648 |                15 |
| zulip        |           0.99972 |              1437 |                21 |
```
2023-09-29 02:55:39 +00:00
Charlie Marsh
f45281345d
Include radix base prefix in large number representation (#7700)
## Summary

When lexing a number like `0x995DC9BBDF1939FA` that exceeds our small
number representation, we were only storing the portion after the base
(in this case, `995DC9BBDF1939FA`). When using that representation in
code generation, this could lead to invalid syntax, since
`995DC9BBDF1939FA` on its own is not a valid integer.

This PR modifies the code to store the full span, including the radix
prefix.
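
A hedged sketch of the decision (not ruff's code): try the small representation first and, on overflow, store the full source text with the radix prefix included.

```rust
fn lex_hex_int(text: &str) -> Result<i64, Box<str>> {
    // `text` still contains the `0x` prefix.
    i64::from_str_radix(&text[2..], 16).map_err(|_| text.into())
}

fn main() {
    assert_eq!(lex_hex_int("0xFF"), Ok(255));
    // Exceeds i64, so the *full* span is kept for code generation.
    assert_eq!(
        lex_hex_int("0x995DC9BBDF1939FA"),
        Err("0x995DC9BBDF1939FA".into())
    );
}
```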

See:
https://github.com/astral-sh/ruff/issues/7455#issuecomment-1739802958.

## Test Plan

`cargo test`
2023-09-28 20:38:06 +00:00
Charlie Marsh
93b5d8a0fb
Implement our own small-integer optimization (#7584)
## Summary

This is a follow-up to #7469 that attempts to achieve similar gains, but
without introducing malachite. Instead, this PR removes the `BigInt`
type altogether, instead opting for a simple enum that allows us to
store small integers directly and only allocate for values greater than
`i64`:

```rust
/// A Python integer literal. Represents both small (fits in an `i64`) and large integers.
#[derive(Clone, PartialEq, Eq, Hash)]
pub struct Int(Number);

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub enum Number {
    /// A "small" number that can be represented as an `i64`.
    Small(i64),
    /// A "large" number that cannot be represented as an `i64`.
    Big(Box<str>),
}

impl std::fmt::Display for Number {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Number::Small(value) => write!(f, "{value}"),
            Number::Big(value) => write!(f, "{value}"),
        }
    }
}
```

We typically don't care about numbers greater than `isize` -- our only
uses are comparisons against small constants (like `1`, `2`, `3`, etc.),
so there's no real loss of information, except in one or two rules where
we're now a little more conservative (with the worst-case being that we
don't flag, e.g., an `itertools.pairwise` that uses an extremely large
value for the slice start constant). For simplicity, a few diagnostics
now show a dedicated message when they see integers that are out of the
supported range (e.g., `outdated-version-block`).

An additional benefit here is that we get to remove a few dependencies,
especially `num-bigint`.
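
A hedged sketch of how such an `Int` might be constructed and consumed; `from_decimal_str` and `as_i64` are hypothetical helpers, not part of the PR:

```rust
impl Int {
    /// Hypothetical constructor: allocate only when the literal
    /// overflows an `i64`.
    fn from_decimal_str(text: &str) -> Int {
        match text.parse::<i64>() {
            Ok(value) => Int(Number::Small(value)),
            Err(_) => Int(Number::Big(text.into())),
        }
    }

    /// Comparisons against small constants only need the `Small` case.
    fn as_i64(&self) -> Option<i64> {
        match &self.0 {
            Number::Small(value) => Some(*value),
            Number::Big(_) => None,
        }
    }
}
```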

## Test Plan

`cargo test`
2023-09-25 15:13:21 +00:00
Micha Reiser
8ce138760a
Emit LexError for dedent to incorrect level (#7638) 2023-09-25 11:45:44 +01:00
Chammika Mannakkara
e41b08f1d0
Fix typo in infinite (#7614) 2023-09-23 11:19:36 +00:00
Dhruv Manilawala
a41bb2733f
Add range to lexer test snapshots (#7265)
## Summary

This PR updates the lexer test snapshots to include the range value as
well. This is mainly a mechanical refactor.

### Motivation

The main motivation is so that we can verify that the ranges are valid
and do not overlap.

## Test Plan

`cargo test`
2023-09-11 19:12:46 +00:00
Dhruv Manilawala
f5701fcc63
Use snapshots for remaining lexer tests (#7264)
## Summary

This PR updates the remaining lexer test cases to use the snapshots.
This is mainly a mechanical refactor.

## Motivation

The main motivation is so that when we add the token range values to the
test case output, it's easier to update the test cases.

The reason they were not using snapshots before was the usage of the
`test_case` macro, which is mainly used for the different EOL test cases. If we
just generated the snapshots directly, then the snapshot names would be suffixed
with `-1`, `-2`, etc., as the test function is still the same. So, we'll create
the snapshots ourselves with the platform name for the respective EOL
test cases.

## Test Plan

`cargo test`
2023-09-12 00:16:38 +05:30
Micha Reiser
7440e54ec6
Avoid allocating in lex_decimal (#7252) 2023-09-11 06:37:25 +00:00
Micha Reiser
842ff0212e
Add Lexer emoji test case (#7213) 2023-09-07 10:02:50 +00:00
Micha Reiser
f1a4eb9c28
Use the unicode-ident crate (#7212) 2023-09-07 08:19:25 +00:00
Micha Reiser
171b66cb43
Lexer: Add skip whitespace fastpath (#7184) 2023-09-06 16:14:01 +02:00
Dhruv Manilawala
1adde24133
Rename parser mode from Jupyter to Ipython (#7153) 2023-09-05 14:12:26 +00:00
Dhruv Manilawala
4d49d5e845
Add eat_char2 for the lexer (#6968)
## Summary

This PR adds a new helper method on the `Cursor` called `eat_char2`
which is similar to `eat_char` but accepts 2 characters instead of 1. It'll
`bump` the cursor twice if both characters are found on lookahead.
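
A minimal sketch of the helper, assuming a rustc-style cursor; the field and method names are assumptions, not ruff's exact API:

```rust
struct Cursor<'a> {
    chars: std::str::Chars<'a>,
}

impl<'a> Cursor<'a> {
    // Peek at the next two characters without advancing.
    fn first(&self) -> Option<char> {
        self.chars.clone().next()
    }
    fn second(&self) -> Option<char> {
        self.chars.clone().nth(1)
    }
    fn bump(&mut self) -> Option<char> {
        self.chars.next()
    }
    /// Bumps the cursor twice if both characters match on lookahead.
    fn eat_char2(&mut self, c1: char, c2: char) -> bool {
        if self.first() == Some(c1) && self.second() == Some(c2) {
            self.bump();
            self.bump();
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut cursor = Cursor { chars: "\r\nx".chars() };
    assert!(cursor.eat_char2('\r', '\n'));
    assert_eq!(cursor.bump(), Some('x'));
}
```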

## Test Plan

`cargo test`
2023-08-29 17:18:02 +05:30
Dhruv Manilawala
9c98416b96
Avoid lexer infinite loop on invalid input (#6937)
## Summary

This PR fixes a bug which sends the lexer into an infinite loop on
invalid input. The code in question is `[1`, where the nesting is never
closed. This means that the lexer will keep emitting the `Err` token
forever.
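
A toy illustration of the hazard and the shape of a fix (an assumption, not ruff's actual change):

```rust
// An iterator that keeps yielding `Err` will loop forever if the caller
// drains it to end-of-input; fusing after the first error breaks the loop.
struct ErrFuse<I> {
    inner: I,
    errored: bool,
}

impl<I, T, E> Iterator for ErrFuse<I>
where
    I: Iterator<Item = Result<T, E>>,
{
    type Item = Result<T, E>;

    fn next(&mut self) -> Option<Self::Item> {
        if self.errored {
            return None; // stop after the first error
        }
        let item = self.inner.next();
        if let Some(Err(_)) = &item {
            self.errored = true;
        }
        item
    }
}

fn main() {
    // Simulates lexing `[1`: the nesting never closes, so errors repeat.
    let endless = std::iter::repeat_with(|| Err::<u32, &str>("unclosed `[`"));
    let tokens: Vec<_> = ErrFuse { inner: endless, errored: false }.collect();
    assert_eq!(tokens.len(), 1); // terminates instead of looping forever
}
```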

## Test Plan

Add a test case which collects all the tokens from the lexer. This just
makes sure that it doesn't go into an infinite loop.
2023-08-28 17:21:38 +05:30
Dhruv Manilawala
fb7caf43c8
Update lexer tests to use snapshots (#6658)
## Summary

This PR updates the lexer tests to use the snapshot testing framework.
It also
makes the following changes:
* Remove the use of macros in the lexer tests
* Use `test_case` for EOL tests

## Test Plan

```
cargo test --package ruff_python_parser --lib --all-features -- lexer::tests --no-capture
```
2023-08-22 18:23:19 +00:00
Dhruv Manilawala
6a64f2289b
Rename Magic* to IpyEscape* (#6395)
## Summary

This PR renames the `MagicCommand` token to `IpyEscapeCommand` and the
`MagicKind` type to `IpyEscapeKind` to better reflect the purpose of the
token and type. Similarly, it renames the AST nodes from `LineMagic` to
`IpyEscapeCommand`, prefixed with `Stmt`/`Expr` wherever necessary.

It also renames `jupyter_magic` to `ipython_escape_commands` in various
function names.

The mode value is still `Mode::Jupyter` because the escape commands are
part of the IPython syntax but the lexing/parsing is done for a Jupyter
notebook.

### Motivation behind the rename

* The IPython codebase defines it as "EscapeCommand" / "Escape Sequences":
  * Escape sequences: 292e3a2345/IPython/core/inputtransformer2.py (L329-L333)
  * Escape command: 292e3a2345/IPython/core/inputtransformer2.py (L410-L411)
* The word "magic" is used mainly for the actual magic commands i.e.,
the ones starting with `%`/`%%`
(https://ipython.readthedocs.io/en/stable/interactive/reference.html#magic-command-system).
So, this avoids any confusion between the Magic token (`%`, `%%`) and
the escape command itself.
## Test Plan

* `cargo test` to make sure all renames are done correctly.
* `grep` for `jupyter_escape`/`magic` to make sure all renames are done
correctly.
2023-08-09 13:28:18 +00:00
Dhruv Manilawala
e257c5af32
Add support for help end IPython escape commands (#6358)
## Summary

This PR adds support for a stricter version of help end escape
commands[^1] in the parser. By stricter, I mean that the escape tokens
are only at the end of the command and there are no tokens at the start.
This makes it difficult to implement in the lexer without doing a lot of
lookahead or keeping track of previous tokens.

Now, as we're adding this in the parser, the lexer needs to recognize
and emit a new token for `?`. So, a `Question` token is added, which is
recognized only in `Jupyter` mode.

The conditions applied are the same as the ones in the original
implementation in the IPython codebase (which uses a regex):
* There can only be either 1 or 2 question mark(s) at the end
* The node before the question mark can be a `Name`, `Attribute`,
`Subscript` (only with integer constants in slice position), or any
combination of the 3 nodes.
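
A hedged sketch of that node check with a stand-in AST (not ruff's types):

```rust
enum Node {
    Name,
    Attribute { value: Box<Node> },
    Subscript { value: Box<Node>, slice_is_int_constant: bool },
}

// Any combination of the three nodes is allowed, as long as every
// subscript slice is an integer constant.
fn is_valid_help_target(node: &Node) -> bool {
    match node {
        Node::Name => true,
        Node::Attribute { value } => is_valid_help_target(value),
        Node::Subscript { value, slice_is_int_constant } => {
            *slice_is_int_constant && is_valid_help_target(value)
        }
    }
}

fn main() {
    // Something shaped like `foo.bar[0]?` would be valid.
    let valid = Node::Subscript {
        value: Box::new(Node::Attribute { value: Box::new(Node::Name) }),
        slice_is_int_constant: true,
    };
    assert!(is_valid_help_target(&valid));
}
```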

## Test Plan

Added test cases for various combination of the possible nodes in the
command value position and update the snapshots.

fixes: #6359
fixes: #5030 (This is the final piece)

[^1]: https://github.com/astral-sh/ruff/pull/6272#issue-1833094281
2023-08-09 10:28:52 +05:30
Dhruv Manilawala
e4a4660925
Support help end escape command with priority (#6272)
## Summary

This PR adds support for help end escape command in the lexer.

### What are "help end escape commands"?

First, escape commands are special IPython syntax which enhances the
functionality of the IPython REPL. There are 9 escape kinds, recognized
by the tokens present at the start of the command (`?`, `??`, `!`, `!!`,
etc.).

Here, the help command is using either the `?` or `??` token at the
start (`?str.replace` for example). Those 2 tokens are also supported
when they're at the end of the command (`str.replace?`), but the other
tokens aren't supported in that position.

There are mainly two types of help end escape commands:
1. Ending with either `?` or `??`, but it also starts with one of the
escape tokens (`%matplotlib?`)
2. A stricter version of (1) which doesn't start with any escape token
(`str.replace?`)

This PR adds support for (1) while (2) will be supported in the parser.

### Priority

Now, if the command starts and ends with an escape token, how do we
decide the kind of this command? This is where priority comes into the
picture. It's simple, as there's only one rule: `?`/`??` at the end takes
priority over any other escape token, and all of the other tokens are at
the same priority. Remember that only `?`/`??` at the end is considered
valid.

This is mainly useful in the case where someone would want to invoke the
help command on the magic command itself. For example, in `%matplotlib?`
the help command takes priority, which means that we want help for the
`matplotlib` magic function instead of calling the magic function
itself.

### Specification

Here's where things get a bit tricky. What if there are question mark
tokens at both ends? How do we decide if it's the `Help` (`?`) kind or
the `Help2` (`??`) kind?

|     | Magic       | Value     | Kind    |
| --- | ---         | ---       | ---     |
| 1   | `?foo?`     | `foo`     | `Help`  |
| 2   | `??foo?`    | `foo`     | `Help`  |
| 3   | `?foo??`    | `foo`     | `Help2` |
| 4   | `??foo??`   | `foo`     | `Help2` |
| 5   | `???foo??`  | `foo`     | `Help2` |
| 6   | `??foo???`  | `foo???`  | `Help2` |
| 7   | `???foo???` | `?foo???` | `Help2` |

Looking at the above table:

- The question mark tokens on the right take priority over the ones on
the left, but only if the number of question marks on the right is 1 or 2.
- If there are more than 2 question mark tokens on the right side, then
the left side is used to determine the kind instead.
- If the right side determines the kind, then all of the question marks
and whitespace on the left side are ignored in the `value`; if it's the
other way around, then all of the extra question marks are part of the
`value`.
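
A hedged sketch of the decision procedure implied by the table (names and signature are assumptions):

```rust
fn classify(command: &str) -> (String, &'static str) {
    let trailing = command.len() - command.trim_end_matches('?').len();
    if trailing == 1 || trailing == 2 {
        // The right side wins: drop the trailing token and ignore every
        // leading question mark (and whitespace) in the value.
        let value = command[..command.len() - trailing]
            .trim_start_matches('?')
            .trim_start()
            .to_string();
        (value, if trailing == 1 { "Help" } else { "Help2" })
    } else {
        // More than two trailing `?`: the left side decides the kind, and
        // the extra trailing question marks stay part of the value.
        let leading = command.len() - command.trim_start_matches('?').len();
        let kind = if leading == 1 { "Help" } else { "Help2" };
        (command[leading.min(2)..].to_string(), kind)
    }
}

fn main() {
    assert_eq!(classify("??foo?"), ("foo".to_string(), "Help")); // row 2
    assert_eq!(classify("???foo???"), ("?foo???".to_string(), "Help2")); // row 7
}
```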

### References

- IPython implementation using the regex:
292e3a2345/IPython/core/inputtransformer2.py (L454-L462)
- Priorities:
292e3a2345/IPython/core/inputtransformer2.py (L466-L469)

## Test Plan

Add a bunch of test cases for the lexer and verify that it matches the
behavior of the IPython transformer.

resolves: #6357
2023-08-07 21:01:02 +05:30
Micha Reiser
40f54375cb
Pull in RustPython parser (#6099) 2023-07-27 09:29:11 +00:00