* Enumeration members are singletons, so copying them would be a no-op
* Avoid generating unnecessary `pass` statement
* Several trivial refactors
* Avoid building unnecessary intermediate lists, which are a slight waste of time and space
* Remove an unused import, an oversight from commit 8e6bf9e9
* `collections.abc.Mapping.get()` defaults to returning `None` when the key doesn't exist
* Just use unittest's `assertRaises` to specify the expected exception type, instead of catching every possible `Exception`, which could suppress legitimate errors and hide bugs (see the sketch after this list)
* We know for sure that the body of `CSTTypedTransformerFunctions` won't be empty, so don't bother handling the formally possible empty case
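A minimal sketch of the `assertRaises` pattern; the test case and exception type here are illustrative, not taken from LibCST's suite:

```python
import unittest


class ConversionTest(unittest.TestCase):
    def test_invalid_input_raises(self) -> None:
        # Assert the specific exception type; a bare `except Exception`
        # would also swallow unrelated bugs.
        with self.assertRaises(ValueError):
            int("not a number")


if __name__ == "__main__":
    unittest.main()
```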
This massive PR implements an alternative Python parser that will allow LibCST to parse Python 3.10's new grammar features. The parser is implemented in Rust, but it's turned off by default; set the `LIBCST_PARSER_TYPE` environment variable to `native` to enable it (see the sketch below). The PR also enables new CI steps that test just the Rust parser, as well as steps that produce binary wheels for a variety of CPython versions and platforms.
Note: this PR aims to be roughly feature-equivalent to the main branch, so it doesn't include new 3.10 syntax features. That will be addressed as a follow-up PR.
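A minimal sketch of opting in; this assumes the environment variable should be in place before the first parse, so it is set before importing `libcst`:

```python
import os

# Opt in to the Rust parser before importing/using libcst.
os.environ["LIBCST_PARSER_TYPE"] = "native"

import libcst

module = libcst.parse_module("x = 1\n")
print(module.code)  # LibCST round-trips the original source: "x = 1\n"
```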
The new parser is implemented in the `native/` directory and is organized into two Rust crates: `libcst_derive` contains some macros to facilitate various features of CST nodes, and `libcst` contains the `parser` itself (including the Python grammar), a `tokenizer` implementation by @bgw, and a very basic representation of CST `nodes`. Parsing is done by
1. **tokenizing** the input utf-8 string (bytes are not supported at the Rust layer; they are converted to utf-8 strings by the Python wrapper, as illustrated after this list)
2. running the **PEG parser** on the tokenized input, which also captures certain anchor tokens in the resulting syntax tree
3. using the anchor tokens to **inflate** the syntax tree into a proper CST
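At the Python API level, step 1's parenthetical looks like this: `libcst.parse_module` accepts both `str` and `bytes`, and bytes are decoded before the Rust tokenizer ever sees them (a minimal sketch, assuming the native parser is enabled as above):

```python
import libcst

# Both calls produce the same CST; the bytes input is decoded to a
# utf-8 string by the Python wrapper before tokenizing.
from_str = libcst.parse_module("x = 1\n")
from_bytes = libcst.parse_module(b"x = 1\n")
assert from_str.code == from_bytes.code
```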
Co-authored-by: Benjamin Woodruff <github@benjam.info>
Adopt `ufmt` for formatting, which ensures we won't have inconsistent black-vs-isort errors going forward. We can always format by running `ufmt format .` at the root, and check with `ufmt check .` in our CI actions.
* Add Python 3.9 to tox envlist
* Require newer typing_extensions for 3.9
For simplicity, use the new version in all cases.
* Improve default-version selection to work on 3.9
While we're at it, improve the code to work with a likely 3.10 by allowing multiple digits in the minor version, as sketched below.
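A minimal sketch of version parsing that tolerates a multi-digit minor version; the regex and function names are illustrative, not the actual LibCST code:

```python
import re
from typing import Tuple

_VERSION_RE = re.compile(r"^(?P<major>\d+)\.(?P<minor>\d+)$")


def parse_python_version(value: str) -> Tuple[int, int]:
    # Accept any number of digits in each component, so "3.10" parses
    # just as well as "3.9".
    match = _VERSION_RE.match(value)
    if match is None:
        raise ValueError(f"invalid python version: {value!r}")
    return (int(match.group("major")), int(match.group("minor")))


assert parse_python_version("3.9") == (3, 9)
assert parse_python_version("3.10") == (3, 10)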
Several of the Python 2 features are gated on these in addition to the version (like `with_statement`), and a refactoring tool like Bowler commonly needs this information anyway.
A program like `"pass\\\n"` is technically a program without a trailing newline, since a line continuation is defined as a `\` followed by a newline. We were misdetecting this as having a trailing newline, making it impossible to parse the continuation. Add some tests to verify this behavior and then fix the problem.
Note that this was found via hypothesis.
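A minimal check of the fixed behavior, assuming LibCST's usual exact round-trip guarantee:

```python
import libcst

# "pass\\\n" ends with a line continuation, not a trailing newline;
# after the fix it parses and round-trips exactly.
module = libcst.parse_module("pass\\\n")
assert module.code == "pass\\\n"
```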
Standardize on the convention that private modules (those we don't expect people to directly import) are prefixed with an underscore. Everything under a directory/module that has an underscore is considered private, unless it is re-exported from a non-underscored module. Most things are exported from libcst directly, but there are a few things in libcst.tool, libcst.codegen and libcst.metadata that are namespaced as such.
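For example (the private import path below is an implementation detail and may change):

```python
# Public: re-exported from the package root, safe to import.
import libcst

name = libcst.Name("x")

# Private: underscore-prefixed modules are implementation details.
# from libcst._nodes.expression import Name  # avoid relying on this path
```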
There were only grammar changes between 3.5 and 3.6, which are pretty trivial to put in. So, do that trivial bit and get us complete coverage of production Python 3 versions. This could use better testing, but it is good enough as-is, and we can address issues through GitHub's issue tracker.
This adds the ability to subdivide the grammar into different Python versions, as well as the ability to change the tokenizer per Python version. I use this to add support for Python 3.6, since this is a supported version we run on. Others can use this to add support for older or newer Python versions, regardless of tokenizer/grammar changes.
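At the API level, the version is selected through `PartialParserConfig`; a minimal sketch:

```python
import libcst

# Ask for the Python 3.6 grammar/tokenizer explicitly; f-strings need it.
module = libcst.parse_module(
    "f'{x}'\n",
    libcst.PartialParserConfig(python_version="3.6"),
)
print(module.code)
```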
Fork both of these and convert internal LibCST stuff to point at them.
Removes the TODO from the previous commit now that the tokenizer is in complete agreement.
Brings a copy of `utils.py` internal to LibCST, rewriting our uses of it to use our internal fork. A compatibility shim (`__iter__`) is currently included to make `PythonVersionInfo` compatible with Parso directly.
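A small illustration of what the `__iter__` shim enables; the import path is the internal fork and should be treated as private:

```python
from libcst._parser.parso.utils import PythonVersionInfo

# The shim lets the class unpack like Parso's plain version tuples.
major, minor = PythonVersionInfo(3, 8)
assert (major, minor) == (3, 8)
```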
Hypothesis found that when we have a statement like `pass\r`, we detect that `\r` is the default and parse the trailing newline as `Newline(None)`. However, when we render the statement back out again, since we don't have a module, we construct a default module, which treats `Newline(None)` as a `\n`, not a `\r`. So, when parsing statements or expressions, disable auto-inferring of the default newline and always assume the default rendered newline (`\n`), so that rendering a statement/expression back out behaves as expected.
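A minimal check of the intended behavior, assuming the statement is later rendered through a default `Module`:

```python
import libcst

# The "\r" no longer matches the assumed default ("\n"), so it is stored
# explicitly and survives rendering through a default Module.
stmt = libcst.parse_statement("pass\r")
assert libcst.Module(body=[stmt]).code == "pass\r"
```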
For anything that's not an internal logic error, conversion functions
should raise a ParserSyntaxError.
Internal logic errors should probably use an AssertionError or an assert
statement, but that's not as important and is out of scope for this PR.
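A sketch of the convention; the conversion function and its arguments are illustrative, and only `ParserSyntaxError` (with its keyword-only position arguments) comes from LibCST:

```python
from libcst import ParserSyntaxError


def convert_integer(token_text, lines, raw_line, raw_column):
    # User-facing syntax problems -> ParserSyntaxError.
    if not token_text.isdigit():
        raise ParserSyntaxError(
            f"invalid integer literal: {token_text!r}",
            lines=lines,
            raw_line=raw_line,
            raw_column=raw_column,
        )
    value = int(token_text)
    # Internal invariants -> assert; an AssertionError means a logic bug.
    assert value >= 0
    return value
```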
This removes the hard-coded logic about encountered/expected, and moves
it into a separate helper method.
Line and column can now be initialized lazily. We'll use this later to
raise `ParserSyntaxError`s inside of conversion functions, backfilling
the positions inside `_base_parser`, since conversion functions don't
always have access to position information.
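A sketch of the backfill pattern; every name here is illustrative rather than the actual `_base_parser` internals:

```python
class LazyPositionError(Exception):
    """An error whose position may be filled in after it is raised."""

    def __init__(self, message):
        super().__init__(message)
        self.raw_line = None  # backfilled by the parser if still None
        self.raw_column = None


def run_conversion(convert, value, position):
    # Conversion functions don't always know where they are in the source;
    # the parser backfills the position as the error bubbles up.
    try:
        return convert(value)
    except LazyPositionError as err:
        if err.raw_line is None:
            err.raw_line, err.raw_column = position
        raise
```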