gh-116666: Add "token" glossary term (GH-130888)
Add glossary entry for `token`, and link to it.

Avoid talking about tokens in the SyntaxError intro (errors.rst); at this point, tokenization is too much of a technical detail. (Even to an advanced reader, the fact that a *single* token is highlighted isn't too relevant. Also, we don't need to guarantee that it's a single token.)

Co-authored-by: Adam Turner <9087854+AA-Turner@users.noreply.github.com>
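For context, the highlighting the message alludes to is the caret span shown under the offending source in a SyntaxError traceback. A minimal illustration (the invalid snippet is made up, and the exact message wording and caret placement vary across CPython versions):

    >>> 1 + * 2
      File "<stdin>", line 1
        1 + * 2
            ^
    SyntaxError: invalid syntax

The commit's point is that readers at this stage don't need to know the highlighted span corresponds to a token, nor should the docs promise that it is exactly one token.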
parent 863d54cbaf
commit 30d5205849

4 changed files with 28 additions and 11 deletions
@@ -8,8 +8,9 @@ Lexical analysis

 .. index:: lexical analysis, parser, token

 A Python program is read by a *parser*. Input to the parser is a stream of
-*tokens*, generated by the *lexical analyzer*. This chapter describes how the
-lexical analyzer breaks a file into tokens.
+:term:`tokens <token>`, generated by the *lexical analyzer* (also known as
+the *tokenizer*).
+This chapter describes how the lexical analyzer breaks a file into tokens.

 Python reads program text as Unicode code points; the encoding of a source file
 can be given by an encoding declaration and defaults to UTF-8, see :pep:`3120`
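The "stream of tokens" the revised paragraph describes can be observed directly with the standard-library tokenize module. A minimal sketch (generate_tokens and tok_name are real stdlib names; the sample source string is invented for illustration):

    import io
    import tokenize

    # Break a small piece of source text into tokens, mirroring what the
    # lexical analyzer does to a file before the parser sees it.
    source = "x = 1 + 2\n"
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        # Each TokenInfo carries a type, the matched text, and source positions.
        print(tokenize.tok_name[tok.type], repr(tok.string))

Running this prints roughly NAME 'x', OP '=', NUMBER '1', OP '+', NUMBER '2', followed by NEWLINE and ENDMARKER entries: the token stream that is handed to the parser.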