Use TokenSource to find new location for re-lexing (#12060)

## Summary

This PR splits the re-lexing logic into two parts:
1. `TokenSource`: responsible for finding the position the lexer needs to
be moved to
2. `Lexer`: responsible for reducing the nesting level and moving itself
to the new position if it recovered from a parenthesized context

This split makes it easy to find the new lexer position without needing
to re-implement the backwards lexing logic, which would otherwise have
to handle cases involving:
* Different kinds of newlines
* Line continuation character(s)
* Comments
* Whitespace
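The backward scan over the token vector can be sketched as follows. This is a minimal, self-contained sketch with hypothetical `Token`/`TokenKind` types (the real ones live in the parser crate); it mirrors the loop in the diff below: walk the already-emitted tokens backwards, remember the earliest `NonLogicalNewline` in the trailing run, skip comments, and stop at anything else.

```rust
#[derive(Clone, Copy, Debug)]
enum TokenKind {
    NonLogicalNewline,
    Comment,
    Name,
}

#[derive(Clone, Copy)]
struct Token {
    kind: TokenKind,
    start: usize,
}

/// Returns the start offset of the earliest `NonLogicalNewline` token in
/// the trailing `NonLogicalNewline`/`Comment` run of the token vector.
fn non_logical_newline_start(tokens: &[Token]) -> Option<usize> {
    let mut start = None;
    for token in tokens.iter().rev() {
        match token.kind {
            // Keep going: an earlier non-logical newline wins.
            TokenKind::NonLogicalNewline => start = Some(token.start),
            // Comments between newlines don't end the run.
            TokenKind::Comment => continue,
            // Any other token ends the trailing run.
            _ => break,
        }
    }
    start
}

fn main() {
    let tokens = [
        Token { kind: TokenKind::Name, start: 0 },
        Token { kind: TokenKind::NonLogicalNewline, start: 4 },
        Token { kind: TokenKind::Comment, start: 5 },
        Token { kind: TokenKind::NonLogicalNewline, start: 15 },
    ];
    // The earliest newline of the trailing run is at offset 4.
    println!("{:?}", non_logical_newline_start(&tokens)); // Some(4)
}
```

Because the token vector already normalized newlines, continuations, comments, and whitespace during forward lexing, none of those cases need special handling here.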

### F-strings

This change did reveal one thing about re-lexing f-strings. Consider the
following example:
```py
f'{'
#  ^
f'foo'
```

Here, the quote highlighted by the caret (`^`) is the start of a string
inside an f-string expression. This is an unterminated string, which
means the token emitted is actually `Unknown`. The parser tries to
recover from it, but there's no newline token in the vector, so the new
logic doesn't recover. The previous logic does recover because it looks
at the raw characters instead.

The parser is at the `FStringStart` token (the one for the second line)
when it calls into the re-lexing logic to recover from the unterminated
f-string on the first line. So, moving backwards, the first character
encountered is a newline character, but the first token encountered is
an `Unknown` token.
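To make the difference concrete, here is a simplified model of the two recovery strategies (the `TokenKind` variants and both helper functions are hypothetical, not the parser's actual API): the old logic scans raw characters backwards and finds the newline after `f'{'`, while the new logic scans tokens backwards and finds no newline token, because the unterminated inner string swallowed it into an `Unknown` token.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum TokenKind {
    FStringStart,
    Lbrace,
    Unknown,
    Newline,
}

/// Old approach: scan raw characters backwards for a newline.
fn char_based_recovery(src: &str, offset: usize) -> Option<usize> {
    src[..offset].rfind('\n')
}

/// New approach: scan emitted tokens backwards; only a `Newline` token counts.
fn token_based_recovery(tokens: &[TokenKind]) -> Option<usize> {
    tokens.iter().rposition(|kind| *kind == TokenKind::Newline)
}

fn main() {
    // f'{'
    // f'foo'
    let src = "f'{'\nf'foo'";

    // The character scan finds the newline at offset 4 ...
    println!("{:?}", char_based_recovery(src, src.len())); // Some(4)

    // ... but the tokens emitted for the first line are (roughly)
    // FStringStart, Lbrace, Unknown: no Newline token to find.
    let tokens = [TokenKind::FStringStart, TokenKind::Lbrace, TokenKind::Unknown];
    println!("{:?}", token_based_recovery(&tokens)); // None
}
```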

This is improved in #12067.

fixes: #12046 
fixes: #12036

## Test Plan

Update the snapshot and validate the changes.
This commit is contained in:
Dhruv Manilawala 2024-06-27 17:12:39 +05:30 committed by GitHub
parent e137c824c3
commit a4688aebe9
3 changed files with 49 additions and 82 deletions

```diff
@@ -60,12 +60,23 @@ impl<'src> TokenSource<'src> {
         self.lexer.take_value()
     }

-    /// Calls the underlying [`re_lex_logical_token`] method on the lexer and updates the token
-    /// vector accordingly.
+    /// Calls the underlying [`re_lex_logical_token`] method on the lexer with the new lexer
+    /// position and updates the token vector accordingly.
     ///
     /// [`re_lex_logical_token`]: Lexer::re_lex_logical_token
     pub(crate) fn re_lex_logical_token(&mut self) {
-        if self.lexer.re_lex_logical_token() {
+        let mut non_logical_newline_start = None;
+        for token in self.tokens.iter().rev() {
+            match token.kind() {
+                TokenKind::NonLogicalNewline => {
+                    non_logical_newline_start = Some(token.start());
+                }
+                TokenKind::Comment => continue,
+                _ => break,
+            }
+        }
+
+        if self.lexer.re_lex_logical_token(non_logical_newline_start) {
             let current_start = self.current_range().start();
             while self
                 .tokens
```