gh-104169: Refactor tokenizer into lexer and wrappers (#110684)

* The lexer, which includes the actual lexeme-producing logic, goes into
  the `lexer` directory.
* The wrappers, one per input mode (file, string, utf-8, and
  readline), go into the `tokenizer` directory and include the logic for
  creating a lexer instance and managing the buffer for each mode.
---------

Co-authored-by: Pablo Galindo <pablogsal@gmail.com>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Lysandros Nikolaou 2023-10-11 17:14:44 +02:00 committed by GitHub
parent eb50cd37ea
commit 01481f2dc1
29 changed files with 3185 additions and 2988 deletions


@@ -2,7 +2,8 @@
#include <errcode.h>
#include "pycore_pyerrors.h" // _PyErr_ProgramDecodedTextObject()
#include "tokenizer.h"
#include "lexer/state.h"
#include "lexer/lexer.h"
#include "pegen.h"
// TOKENIZER ERRORS