Turns out Neil didn't intend for *all* of his gen-branch work to get
committed.

tokenize.py:  I like these changes, and have tested them extensively
without even realizing it, so I just updated the docstring and the docs.

tabnanny.py:  Also liked this, but did a little code fiddling.  I should
really rewrite this to *exploit* generators, but that's near the bottom
of my effort/benefit scale so doubt I'll get to it anytime soon (it
would be most useful as a non-trivial example of ideal use of generators;
but test_generators.py has already grown plenty of food-for-thought
examples).

inspect.py:  I'm sure Ping intended for this to continue running even
under 1.5.2, so I reverted this to the last pre-gen-branch version.
The "bugfix" I checked in in-between was actually repairing a bug
*introduced* by the conversion to generators, so it's OK that the
reverted version doesn't reflect that checkin.
parent 88e66254f9
commit 4efb6e9643
4 changed files with 79 additions and 47 deletions
Lib/tokenize.py
@@ -1,13 +1,26 @@
 """Tokenization help for Python programs.
 
-This module exports a function called 'tokenize()' that breaks a stream of
+generate_tokens(readline) is a generator that breaks a stream of
 text into Python tokens.  It accepts a readline-like method which is called
-repeatedly to get the next line of input (or "" for EOF) and a "token-eater"
-function which is called once for each token found.  The latter function is
-passed the token type, a string containing the token, the starting and
-ending (row, column) coordinates of the token, and the original line.  It is
-designed to match the working of the Python tokenizer exactly, except that
-it produces COMMENT tokens for comments and gives type OP for all operators."""
+repeatedly to get the next line of input (or "" for EOF).  It generates
+5-tuples with these members:
+
+    the token type (see token.py)
+    the token (a string)
+    the starting (row, column) indices of the token (a 2-tuple of ints)
+    the ending (row, column) indices of the token (a 2-tuple of ints)
+    the original line (string)
+
+It is designed to match the working of the Python tokenizer exactly, except
+that it produces COMMENT tokens for comments and gives type OP for all
+operators
+
+Older entry points
+    tokenize_loop(readline, tokeneater)
+    tokenize(readline, tokeneater=printtoken)
+are the same, except instead of generating tokens, tokeneater is a callback
+function to which the 5 fields described above are passed as 5 arguments,
+each time a new token is found."""
 
 __author__ = 'Ka-Ping Yee <ping@lfw.org>'
 __credits__ = \
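The new docstring above describes the generator interface. As an
illustration (not part of the commit): a minimal sketch of driving
generate_tokens() with a readline callable, written with modern Python 3
names; the 5-tuple layout is the one the docstring lists.

    import io
    import tokenize

    source = "x = 1  # set x\n"
    readline = io.StringIO(source).readline

    # generate_tokens() yields 5-tuples:
    #   (type, string, (start_row, start_col), (end_row, end_col), line)
    for tok_type, tok_string, start, end, line in tokenize.generate_tokens(readline):
        print(tokenize.tok_name[tok_type], repr(tok_string), start, end)

The COMMENT token for "# set x" and the OP token for "=" show the two
documented departures from the C tokenizer's behavior.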
@@ -111,7 +124,7 @@ def tokenize(readline, tokeneater=printtoken):
     except StopTokenizing:
         pass
 
-# backwards compatible interface, probably not used
+# backwards compatible interface
 def tokenize_loop(readline, tokeneater):
     for token_info in generate_tokens(readline):
         apply(tokeneater, token_info)
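The hunk above shows why the older entry points survive for free:
tokenize_loop() just drives the generator and feeds each 5-tuple to the
tokeneater callback. A sketch of the same equivalence (illustrative
only, not from the commit; apply() is Python 2, so *-unpacking stands in
for it here):

    import io
    import tokenize

    def tokeneater(tok_type, tok_string, start, end, line):
        # The callback receives the same 5 fields the generator yields.
        print(start, end, tokenize.tok_name[tok_type], repr(tok_string))

    readline = io.StringIO("y = 2\n").readline

    # Modern equivalent of tokenize_loop(readline, tokeneater):
    for token_info in tokenize.generate_tokens(readline):
        tokeneater(*token_info)    # apply(tokeneater, token_info) in the 2001 code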