Maintain synchronicity between the lexer and the parser (#11457)

## Summary

This PR updates the entire parser stack in multiple ways:

### Make the lexer lazy

* https://github.com/astral-sh/ruff/pull/11244
* https://github.com/astral-sh/ruff/pull/11473

Previously, Ruff's lexer acted as an iterator: the parser first collected
all the tokens into a vector and then processed them to build the syntax
tree.

The first task in this project is to update the entire parsing flow to
make the lexer lazy. This includes the `Lexer`, `TokenSource`, and
`Parser`. For context, the `TokenSource` is a wrapper around the `Lexer`
to filter out the trivia tokens[^1]. Now, the parser asks the token
source for the next token, and only then does the lexer advance and emit
it. This means that the lexer needs to be aware of the "current" token:
when `next_token` is called, the current token is updated with the newly
lexed one.
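
As a minimal, self-contained sketch of this pull-based flow (all types
here are simplified stand-ins, not the actual definitions):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TokenKind {
    Name,
    Comment,
    NonLogicalNewline,
    EndOfFile,
}

/// Stand-in lexer: tracks a "current" token and lexes on demand.
struct Lexer {
    pending: std::vec::IntoIter<TokenKind>, // placeholder for real lexing
    current: TokenKind,
}

impl Lexer {
    /// Lexes exactly one more token and makes it the current token.
    fn next_token(&mut self) -> TokenKind {
        self.current = self.pending.next().unwrap_or(TokenKind::EndOfFile);
        self.current
    }
}

/// Wrapper that skips trivia so the parser only sees meaningful tokens.
struct TokenSource {
    lexer: Lexer,
}

impl TokenSource {
    fn next_token(&mut self) -> TokenKind {
        loop {
            match self.lexer.next_token() {
                TokenKind::Comment | TokenKind::NonLogicalNewline => continue,
                token => return token,
            }
        }
    }
}

fn main() {
    let lexer = Lexer {
        pending: vec![TokenKind::Comment, TokenKind::Name].into_iter(),
        current: TokenKind::EndOfFile,
    };
    let mut source = TokenSource { lexer };
    // The comment is lexed but never surfaces to the parser.
    assert_eq!(source.next_token(), TokenKind::Name);
}
```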

The main motivation for making the lexer lazy is to allow re-lexing a
token in a different context, which will be really useful for making the
parser error resilient. For example, currently the emitted tokens remain
the same even if the parser recovers from an unclosed parenthesis. This
matters because the lexer emits a `NonLogicalNewline` in a parenthesized
context but a regular `Newline` in a non-parenthesized context (for
instance, the newline in `(1,\n2)` is non-logical, while the one in
`x = 1\ny = 2` is logical). These different kinds of newlines are also
used to emit the indentation tokens, which the parser relies on to
determine the start and end of a block.

Additionally, this allows us to implement the following functionality:
1. Checkpoint-rewind infrastructure: the idea is to create a checkpoint
and continue lexing; at a later point, the checkpoint can be used to
rewind the lexer back to that state.
2. Remove the `SoftKeywordTransformer` and instead use lookahead or
speculative parsing to determine whether a soft keyword is a keyword or
an identifier.
3. Remove the `Tok` enum. The `Tok` enum represents the tokens emitted
by the lexer, but it contains owned data, which makes it expensive to
clone. The new `TokenKind` enum represents only the type of the token,
which is very cheap to copy.

This raises the question of how the parser will get the owned value that
was stored on `Tok`. This is solved by introducing a new `TokenValue`
enum that contains only the subset of token kinds that carry an owned
value. It is stored on the lexer and requested by the parser when it
wants to process the data. For example:
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L1260-L1262)
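
As a rough, self-contained sketch of the split (the variant set below is
illustrative; only the pattern matters):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TokenKind {
    Name,
    Newline,
}

/// Owned payloads for the subset of tokens that carry any.
#[derive(Debug, Default)]
enum TokenValue {
    /// The current token has no owned payload (e.g. `Newline`).
    #[default]
    None,
    /// The payload of a `Name` token.
    Name(Box<str>),
}

struct Lexer {
    current_kind: TokenKind,
    current_value: TokenValue,
}

impl Lexer {
    /// The parser requests the owned data only when it needs it; the
    /// value is moved out (leaving `TokenValue::None`), never cloned.
    fn take_value(&mut self) -> TokenValue {
        std::mem::take(&mut self.current_value)
    }
}

fn main() {
    let mut lexer = Lexer {
        current_kind: TokenKind::Name,
        current_value: TokenValue::Name("x".into()),
    };
    assert_eq!(lexer.current_kind, TokenKind::Name);
    assert!(matches!(lexer.take_value(), TokenValue::Name(_)));
    assert!(matches!(lexer.take_value(), TokenValue::None));
}
```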

[^1]: Trivia tokens are `NonLogicalNewline` and `Comment`

### Remove `SoftKeywordTransformer`

* https://github.com/astral-sh/ruff/pull/11441
* https://github.com/astral-sh/ruff/pull/11459
* https://github.com/astral-sh/ruff/pull/11442
* https://github.com/astral-sh/ruff/pull/11443
* https://github.com/astral-sh/ruff/pull/11474

For context,
https://github.com/RustPython/RustPython/pull/4519/files#diff-5de40045e78e794aa5ab0b8aacf531aa477daf826d31ca129467703855408220
added support for soft keywords in the parser using infinite lookahead
to classify a soft keyword as a keyword or an identifier. This was a
clever design: it wrapped the existing `Lexer` and worked on top of it,
keeping the logic for lexing and re-lexing a soft keyword separate. The
change here is to remove the `SoftKeywordTransformer` and let the parser
make this classification based on context, lookahead, and speculative
parsing.

* **Context:** The transformer needed to know whether the lexer was at a
statement position or a simple-statement position, because a `match`
token starts a compound statement while a `type` token starts a simple
statement. **The parser already knows this.**
* **Lookahead:** Now that the parser knows the context, it can perform a
lookahead of up to two tokens to classify the soft keyword. The logic
for this is described in the PRs implementing it for the `type` and
`match` soft keywords.
* **Speculative parsing:** This is where the checkpoint-rewind
infrastructure helps. For the `match` soft keyword, certain cases can't
be classified by lookahead alone. The idea is to create a checkpoint and
keep parsing; based on whether the parsing succeeded and which tokens
lie ahead, the remaining cases can be classified. Refer to #11443 for
more details.
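
A self-contained toy of the checkpoint-rewind pattern (the real
`ParserCheckpoint` also saves the token-source state, the current token
ID, the previous token end, and the recovery context; this only shows
the shape):

```rust
struct Parser {
    pos: usize,
    errors: Vec<String>,
}

struct Checkpoint {
    pos: usize,
    errors_len: usize,
}

impl Parser {
    fn checkpoint(&self) -> Checkpoint {
        Checkpoint {
            pos: self.pos,
            errors_len: self.errors.len(),
        }
    }

    fn rewind(&mut self, checkpoint: Checkpoint) {
        self.pos = checkpoint.pos;
        // Errors reported while speculating are discarded on rewind.
        self.errors.truncate(checkpoint.errors_len);
    }

    /// Parse ahead speculatively; if the attempt fails, restore the
    /// parser as if nothing had happened.
    fn speculate<T>(&mut self, parse: impl FnOnce(&mut Parser) -> Option<T>) -> Option<T> {
        let checkpoint = self.checkpoint();
        let result = parse(self);
        if result.is_none() {
            self.rewind(checkpoint);
        }
        result
    }
}

fn main() {
    let mut p = Parser { pos: 0, errors: vec![] };
    let failed = p.speculate(|p| -> Option<()> {
        p.pos += 3; // consume some tokens...
        p.errors.push("unexpected token".to_string()); // ...hit an error
        None
    });
    assert!(failed.is_none());
    assert_eq!(p.pos, 0); // fully rewound
    assert!(p.errors.is_empty());
}
```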

If the soft keyword is being parsed in an identifier context, it'll be
converted to an identifier and the emitted token will be updated as
well. Refer to
8196720f80/crates/ruff_python_parser/src/parser/expression.rs (L487-L491).

The `case` soft keyword doesn't require any special handling because
it'll be a keyword only in the context of a match statement.

### Update the parser API

* https://github.com/astral-sh/ruff/pull/11494
* https://github.com/astral-sh/ruff/pull/11505

Now that the lexer is in sync with the parser, and the parser helps
determine whether a soft keyword is a keyword or an identifier, the
lexer can no longer be used on its own. The reason is that it's not
sensitive to the context (which is by design). This means that the
parser API needs to be updated to disallow any direct access to the
lexer.

Previously, there were two ways to parse the source code:
1. Passing the source code itself
2. Passing the tokens produced by the lexer

Now that the lexer and parser work together, the API corresponding to
(2) can no longer exist. The final API is described in the PR
description of https://github.com/astral-sh/ruff/pull/11494.
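
For illustration, a hedged sketch of the resulting usage (entry-point
and accessor names as introduced around #11494/#11505; treat the exact
signatures as approximate):

```rust
use ruff_python_parser::parse_module;

fn example(source: &str) {
    // The only input is source text; there is no longer an entry point
    // that accepts a pre-lexed token vector.
    match parse_module(source) {
        Ok(parsed) => {
            // The parsed output bundles the AST together with the
            // tokens the lexer emitted along the way.
            let _module = parsed.syntax();
            let _tokens = parsed.tokens();
        }
        Err(error) => eprintln!("syntax error: {error}"),
    }
}
```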

### Refactor the downstream tools (linter and formatter)

* https://github.com/astral-sh/ruff/pull/11511
* https://github.com/astral-sh/ruff/pull/11515
* https://github.com/astral-sh/ruff/pull/11529
* https://github.com/astral-sh/ruff/pull/11562
* https://github.com/astral-sh/ruff/pull/11592

Finally, update all references to the lexer and the `Tok` enum. This was
done in two parts:
1. Update all the references that don't require any changes from this
PR, i.e., those that can be done independently:
	* https://github.com/astral-sh/ruff/pull/11402
	* https://github.com/astral-sh/ruff/pull/11406
	* https://github.com/astral-sh/ruff/pull/11418
	* https://github.com/astral-sh/ruff/pull/11419
	* https://github.com/astral-sh/ruff/pull/11420
	* https://github.com/astral-sh/ruff/pull/11424
2. Update all the remaining references to use the changes made in this
PR

For (2), several strategies were used:
1. Introduce a new `Tokens` struct that wraps the token vector and adds
methods to query certain subsets of tokens (see the usage sketch after
this list). These include:
	1. `up_to_first_unknown`, which replaces the `tokenize` function
	2. `in_range` and `after`, which replace the `lex_starts_at` function;
the former returns the tokens within the given range while the latter
returns all the tokens after the given offset
2. Introduce a new `TokenFlags`, a set of flags for querying certain
information from a token. Currently, this information is limited to
string-type tokens but can be expanded to include other information in
the future as needed. https://github.com/astral-sh/ruff/pull/11578
3. Move the `CommentRanges` to the parsed output, because this
information is common to both the linter and the formatter. This removes
the need for the `tokens_and_ranges` function.
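
A hedged usage sketch of those queries (method names from the list
above; exact signatures may differ):

```rust
use ruff_python_parser::parse_module;
use ruff_text_size::{TextRange, TextSize};

fn example(source: &str) {
    let parsed = parse_module(source).expect("valid source");
    let tokens = parsed.tokens();

    // Replacement for the old `tokenize`: stop at the first `Unknown`
    // token.
    let _valid = tokens.up_to_first_unknown();

    // Replacements for `lex_starts_at`: the tokens within a range, or
    // all the tokens after an offset.
    let _slice = tokens.in_range(TextRange::new(TextSize::new(0), TextSize::new(10)));
    let _rest = tokens.after(TextSize::new(10));
}
```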

## Test Plan

- [x] Update and verify the test snapshots
- [x] Make sure the entire test suite is passing
- [x] Make sure there are no changes in the ecosystem checks
- [x] Run the fuzzer on the parser
- [x] Run this change on dozens of open-source projects

### Running this change on dozens of open-source projects

Refer to the PR description for the list of open-source projects used
for testing.

The following comparisons were made between `main` and this branch:
1. Compare the output of `--select=E999` (syntax errors)
2. Compare the output of the default rule selection
3. Compare the output of `--select=ALL`

**Conclusion: all outputs were identical.**

## What's next?

The next step is to introduce re-lexing logic and update the parser to
feed recovery information to the lexer so that it can emit the correct
tokens. This moves us one step closer to error resilience in the parser
and gives Ruff the ability to lint even when the source code contains
syntax errors.