Mirror of https://github.com/python/cpython.git, synced 2025-07-07 19:35:27 +00:00
gh-102856: Python tokenizer implementation for PEP 701 (#104323)
This commit replaces the Python implementation of the tokenize module with one that reuses the real C tokenizer via a private extension module. The tokenize module now implements a compatibility layer that transforms tokens from the C tokenizer into the tokens the old Python tokenizer produced, for backward compatibility. Because the C tokenizer does not emit some tokens that the Python tokenizer provided (such as comments and non-semantic newlines), a new special mode has been added to the C tokenizer; it is currently used only via the extension module that exposes it to the Python layer. This mode forces the C tokenizer to emit these extra tokens and to attach the metadata needed to match the old Python implementation.

Co-authored-by: Pablo Galindo <pablogsal@gmail.com>
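The compatibility layer described above is observable from the unchanged public tokenize API: comments and non-semantic newlines still arrive as COMMENT and NL tokens even though they now originate in the C tokenizer. A minimal sketch (the snippet tokenized here is illustrative, not from the commit):

```python
import io
import tokenize

# A comment-only line produces a COMMENT token followed by a non-semantic
# NL token; the trailing blank line produces another NL. The assignment
# line ends in a logical NEWLINE instead.
source = "# a comment\nx = 1\n\n"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
names = [tokenize.tok_name[t.type] for t in tokens]
```

On current CPython these tokens are produced by the C tokenizer's special mode, but the shape of the output matches what the old pure-Python tokenizer emitted.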
parent: 3ed57e4995
commit: 6715f91edc
22 changed files with 424 additions and 374 deletions
Lib/token.py (generated) — 6 changed lines
@@ -67,10 +67,10 @@ SOFT_KEYWORD = 60
 FSTRING_START = 61
 FSTRING_MIDDLE = 62
 FSTRING_END = 63
+COMMENT = 64
+NL = 65
 # These aren't used by the C tokenizer but are needed for tokenize.py
-ERRORTOKEN = 64
-COMMENT = 65
-NL = 66
+ERRORTOKEN = 66
 ENCODING = 67
 N_TOKENS = 68
 # Special definitions for cooperation with parser
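Not part of the commit itself, but the effect of this renumbering can be checked from Python: COMMENT, NL and ERRORTOKEN live in the generated token module, and tokenize re-exports the same constants, so the two modules share one numbering (exact values depend on the Python version):

```python
import token
import tokenize

# The token module's tok_name table maps each constant back to its name.
comment_ok = token.tok_name[token.COMMENT] == "COMMENT"
nl_ok = token.tok_name[token.NL] == "NL"
# tokenize does `from token import *`, so its constants are the same objects.
shared = tokenize.COMMENT == token.COMMENT and tokenize.NL == token.NL
# All three tokens sit below N_TOKENS regardless of the exact numbering.
ordered = token.COMMENT < token.NL < token.N_TOKENS
```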