mirror of
https://github.com/astral-sh/ruff.git
synced 2025-07-24 13:33:50 +00:00
Implement re-lexing logic for better error recovery (#11845)
## Summary

This PR implements the re-lexing logic in the parser. This logic is only applied when recovering from an error during list parsing, and works as follows:

1. During list parsing, if an unexpected token is encountered and the parser detects that an outer context can understand it (and thus recover from it), it invokes the re-lexing logic in the lexer.
2. This logic first checks whether the lexer is in a parenthesized context and returns early if it's not. Thus, the logic is a no-op if the lexer isn't in a parenthesized context.
3. It then reduces the nesting level by 1. It shouldn't reset it to 0, because otherwise recovery from nested list parsing would be incorrect.
4. Next, it tries to find the last newline character, going backwards from the current position of the lexer. It skips over whitespace, but if it encounters any character other than a newline or whitespace, it aborts.
5. If there is a newline character, it needs to be re-lexed in a logical context, which means the lexer needs to emit it as a `Newline` token instead of `NonLogicalNewline`.
6. If re-lexing gives a different token than the current one, the token source needs to update its token collection to remove all the tokens that come after the new current position.

It turns out that list parsing isn't happy with these results, so it requires some re-arranging such that the following two errors are raised correctly:

1. Expected comma
2. Recovery context error

For (1), the following scenarios need to be considered:

* Missing comma between two elements
* Half-parsed element because the grammar doesn't allow it (for example, named expressions)

For (2), the following scenarios need to be considered:

1. The parser is at a comma, which means there's a missing element; otherwise the comma would've been consumed by the first `eat` call above. (The parser doesn't take the re-lexing route on a comma token.)
2. It's the first element and the current token is not a comma, which means it's an invalid element.

resolves: #11640

## Test Plan

- [x] Update existing test snapshots and validate them
- [x] Add additional test cases specific to the re-lexing logic and validate the snapshots
- [x] Run the fuzzer on 3000+ valid inputs
- [x] Run the fuzzer on invalid inputs
- [x] Run the parser on various open source projects
- [x] Make sure the ecosystem changes are none
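The backward scan described in step 4 can be sketched as a small standalone function. This is a minimal sketch under stated assumptions, not ruff's actual implementation: the function name and the byte-offset interface are hypothetical, and the real lexer works over its own source representation.

```rust
/// Hypothetical sketch of step 4: walk backwards from `pos`, skipping
/// spaces and tabs. Returns the offset of the preceding newline if only
/// whitespace separates it from `pos`; any other character aborts the scan.
fn find_preceding_newline(source: &str, pos: usize) -> Option<usize> {
    let bytes = source.as_bytes();
    let mut i = pos;
    while i > 0 {
        i -= 1;
        match bytes[i] {
            b' ' | b'\t' => continue,  // skip whitespace
            b'\n' => return Some(i),   // found the newline to re-lex
            _ => return None,          // non-whitespace character: abort
        }
    }
    None
}

fn main() {
    // The lexer stopped after the indentation that follows a newline.
    assert_eq!(find_preceding_newline("(a,\n    ", 8), Some(3));
    // No newline between the scan start and the last real character.
    assert_eq!(find_preceding_newline("(a, b", 5), None);
    println!("ok");
}
```

If a newline is found this way, it can then be re-emitted as a logical `Newline` token per step 5.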
This commit is contained in:
parent
1f654ee729
commit
8499abfa7f
43 changed files with 1593 additions and 212 deletions
```diff
@@ -1,4 +1,4 @@
-use ruff_text_size::{TextRange, TextSize};
+use ruff_text_size::{Ranged, TextRange, TextSize};
 
 use crate::lexer::{Lexer, LexerCheckpoint, LexicalError, Token, TokenFlags, TokenValue};
 use crate::{Mode, TokenKind};
```
```diff
@@ -58,6 +58,23 @@ impl<'src> TokenSource<'src> {
         self.lexer.take_value()
     }
 
+    /// Calls the underlying [`re_lex_logical_token`] method on the lexer and updates the token
+    /// vector accordingly.
+    ///
+    /// [`re_lex_logical_token`]: Lexer::re_lex_logical_token
+    pub(crate) fn re_lex_logical_token(&mut self) {
+        if self.lexer.re_lex_logical_token() {
+            let current_start = self.current_range().start();
+            while self
+                .tokens
+                .last()
+                .is_some_and(|last| last.start() >= current_start)
+            {
+                self.tokens.pop();
+            }
+        }
+    }
+
     /// Returns the next non-trivia token without consuming it.
     ///
     /// Use [`peek2`] to get the next two tokens.
```
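The token-vector truncation from step 6 can be illustrated in isolation. This is a sketch under simplified assumptions: tokens are modeled here as plain `(start, end)` offset pairs rather than ruff's `Token` type, and `truncate_from` is a hypothetical name.

```rust
/// Sketch of the truncation loop: pop every token that starts at or after
/// the re-lexed position, so the stored token collection stays consistent
/// with the lexer's new state.
fn truncate_from(tokens: &mut Vec<(u32, u32)>, current_start: u32) {
    while tokens.last().is_some_and(|last| last.0 >= current_start) {
        tokens.pop();
    }
}

fn main() {
    // Tokens covering offsets 0..9; re-lexing moved the current token
    // back to start at offset 4.
    let mut tokens = vec![(0, 1), (1, 4), (4, 5), (5, 9)];
    truncate_from(&mut tokens, 4);
    assert_eq!(tokens, vec![(0, 1), (1, 4)]);
    println!("ok");
}
```

Popping from the back works because token starts are monotonically increasing, so the loop stops at the first token that begins before the new current position.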