Before this commit there was a single `parse_expr(u8)` method, which
was called both
1) from within the expression parser (to parse subexpressions consisting
of operators with higher precedence than the current one), and
2) from the top-down parser both
a) to parse true expressions (such as an item of the SELECT list or
the condition after WHERE or after ON), and
b) to parse sequences which are not exactly "expressions".
This commit starts cleaning that up by renaming the `parse_expr(u8)` method
to `parse_subexpr()` and using it only for (1) - i.e. usually passing a
non-zero precedence parameter.
The unintuitively named `parse()` method is renamed to `parse_expr()` - a
name which has just become available - and is used for (2a).
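To illustrate the split, here is a minimal, self-contained Pratt-parser
sketch - simplified, hypothetical types, not the actual parser code - showing
`parse_expr` as a thin wrapper over `parse_subexpr`, which threads the
precedence through the recursion:

    // Minimal sketch with hypothetical types; the real parser works on a
    // token stream and a much richer AST.
    #[derive(Debug)]
    enum Expr {
        Number(i64),
        BinaryOp { left: Box<Expr>, op: char, right: Box<Expr> },
    }

    struct Parser {
        tokens: Vec<char>, // e.g. ['1', '+', '2', '*', '3']
        index: usize,
    }

    impl Parser {
        /// Entry point used by the statement parser (case 2a): parse a whole
        /// expression - there is no enclosing operator, so precedence 0.
        fn parse_expr(&mut self) -> Expr {
            self.parse_subexpr(0)
        }

        /// Used recursively by the expression parser itself (case 1), usually
        /// with the non-zero precedence of the enclosing operator.
        fn parse_subexpr(&mut self, precedence: u8) -> Expr {
            let mut expr = self.parse_prefix();
            while self.peek_precedence() > precedence {
                let op = self.next_token();
                let right = self.parse_subexpr(Self::op_precedence(op));
                expr = Expr::BinaryOp { left: Box::new(expr), op, right: Box::new(right) };
            }
            expr
        }

        fn parse_prefix(&mut self) -> Expr {
            let t = self.next_token();
            Expr::Number(t.to_digit(10).unwrap() as i64)
        }

        fn peek_precedence(&self) -> u8 {
            self.tokens.get(self.index).map_or(0, |&t| Self::op_precedence(t))
        }

        fn op_precedence(op: char) -> u8 {
            match op { '+' | '-' => 10, '*' | '/' => 20, _ => 0 }
        }

        fn next_token(&mut self) -> char {
            let t = self.tokens[self.index];
            self.index += 1;
            t
        }
    }

    fn main() {
        let mut p = Parser { tokens: "1+2*3".chars().collect(), index: 0 };
        println!("{:?}", p.parse_expr()); // 1 + (2 * 3)
    }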
While reviewing the existing callers of `parse_expr`, four points to
follow up on were identified (marked "TBD (#)" in the commit):
1) Do not lose parens (e.g. `(1+2)*3`) when roundtripping
String->AST->String by using SQLNested.
2) Incorrect precedence of the unary NOT operator
3) `parse_table_factor` accepts any expression where a SELECT subquery
is expected.
4) parse_delete uses parse_expr() to retrieve a table name
These are dealt with in the commits to follow.
Parser::parse_sql() can now parse a semicolon-separated list of
statements, returning them in a Vec<SQLStatement>.
To support this we:
- Move handling of inter-statement tokens from the end of individual
statement parsers (`parse_select` and `parse_delete`; this was not
implemented for other top-level statements) to the common
statement-list parsing code (`parse_sql`);
- Change the "Unexpected token at end of ..." error - which had no tests
and prevented us from parsing successive statements - to
"Expected end of statement" (i.e. a delimiter - currently only ";" -
or EOF);
- Add PartialEq on ParserError so tests can assert_eq!() that parsing a
statement which does not terminate properly returns the expected error
(see the sketch below).
(The primary motivation was that it makes the tests more resilient to
the upcoming changes to the SQLSelectStatement to support `AS` aliases
and `UNION`.)
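For illustration, a self-contained sketch of the centralized statement-list
loop and of the kind of test this enables. The Token/SQLStatement/ParserError
types, the toy `SELECT <word>` statement grammar, and the exact error text are
simplified stand-ins, not the crate's actual definitions:

    #[derive(Debug, Clone, PartialEq)]
    enum Token {
        SemiColon,
        Word(String),
    }

    #[derive(Debug, PartialEq)]
    enum SQLStatement {
        SQLSelect(String), // stand-in: only `SELECT <word>` is understood here
    }

    #[derive(Debug, PartialEq)] // PartialEq is what lets tests assert_eq! on errors
    enum ParserError {
        ParserError(String),
    }

    struct Parser {
        tokens: Vec<Token>,
        index: usize,
    }

    impl Parser {
        fn peek(&self) -> Option<&Token> {
            self.tokens.get(self.index)
        }

        fn next_token(&mut self) -> Option<Token> {
            let token = self.tokens.get(self.index).cloned();
            self.index += 1;
            token
        }

        /// Stand-in for an individual statement parser: it no longer cares
        /// what follows the statement, so it composes into a list.
        fn parse_statement(&mut self) -> Result<SQLStatement, ParserError> {
            match (self.next_token(), self.next_token()) {
                (Some(Token::Word(kw)), Some(Token::Word(arg))) if kw == "SELECT" => {
                    Ok(SQLStatement::SQLSelect(arg))
                }
                other => Err(ParserError::ParserError(format!(
                    "Expected a statement, found: {:?}",
                    other
                ))),
            }
        }

        /// Inter-statement delimiters are handled once, here.
        fn parse_sql(mut self) -> Result<Vec<SQLStatement>, ParserError> {
            let mut stmts = Vec::new();
            loop {
                while self.peek() == Some(&Token::SemiColon) {
                    self.index += 1; // skip delimiters (empty statements are fine)
                }
                if self.peek().is_none() {
                    return Ok(stmts); // EOF ends the statement list
                }
                stmts.push(self.parse_statement()?);
                match self.peek() {
                    None | Some(&Token::SemiColon) => {}
                    Some(t) => {
                        return Err(ParserError::ParserError(format!(
                            "Expected end of statement, found: {:?}",
                            t
                        )))
                    }
                }
            }
        }
    }

    fn main() {
        let w = |s: &str| Token::Word(s.to_string());
        // "SELECT 1; SELECT 2" parses into two statements...
        let ok = Parser {
            tokens: vec![w("SELECT"), w("1"), Token::SemiColon, w("SELECT"), w("2")],
            index: 0,
        };
        assert_eq!(ok.parse_sql().unwrap().len(), 2);
        // ...while a missing delimiter yields an error value we can assert_eq! on.
        let bad = Parser {
            tokens: vec![w("SELECT"), w("1"), w("SELECT"), w("2")],
            index: 0,
        };
        assert_eq!(
            bad.parse_sql(),
            Err(ParserError::ParserError(
                "Expected end of statement, found: Word(\"SELECT\")".to_string()
            ))
        );
    }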
Also start using `&'static str` literals consistently instead of
String::from for the `let sql` test strings.
Continuing from https://github.com/andygrove/sqlparser-rs/pull/33#issuecomment-453060427
This stops the parser from accepting (and the AST from being able to
represent) SQL look-alike code that makes no sense, e.g.
SELECT ... FROM (CREATE TABLE ...) foo
SELECT ... FROM (1+CAST(...)) foo
Generally this makes the AST less "partially typed", by which I mean that
certain parts are strongly typed (e.g. a SELECT can only contain projections,
relations, etc.), while everything that didn't get its own type is
dumped into ASTNode, effectively untyped. After a few more fixes (yet
to be implemented), `ASTNode` could become an `SQLExpression`. The
Pratt-style expression parser (returning an SQLExpression) would be
invoked from the top-down parser in places where a generic expression
is expected (e.g. after SELECT <...>, WHERE <...>, etc.), while things
like select's `projection` and `relation` could be more appropriately
(narrowly) typed.
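A hedged sketch of that direction - the names and fields are illustrative
only, not the crate's definitions:

    // Illustrative only: roughly how the types could look once the split is
    // complete.
    enum SQLExpression {
        Identifier(String),
        Value(String),
        BinaryExpr {
            left: Box<SQLExpression>,
            op: String,
            right: Box<SQLExpression>,
        },
        // ... nested exprs, CASE, CAST, function calls, scalar subqueries, etc.
    }

    /// SELECT with narrowly typed parts instead of generic ASTNode children.
    struct SQLSelect {
        projection: Vec<SQLExpression>,
        relation: Option<SQLExpression>, // could later get its own "table factor" type
        selection: Option<SQLExpression>, // WHERE <...>
    }

    /// Only constructs that are valid as a whole statement.
    enum SQLStatement {
        SQLSelect(SQLSelect),
        // ... INSERT, UPDATE, DELETE, CREATE, ALTER, COPY
    }

    fn main() {
        // SELECT a FROM t WHERE a > 5
        let _query = SQLStatement::SQLSelect(SQLSelect {
            projection: vec![SQLExpression::Identifier("a".to_string())],
            relation: Some(SQLExpression::Identifier("t".to_string())),
            selection: Some(SQLExpression::BinaryExpr {
                left: Box::new(SQLExpression::Identifier("a".to_string())),
                op: ">".to_string(),
                right: Box::new(SQLExpression::Value("5".to_string())),
            }),
        });
    }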
Since the diff is quite large due to the necessarily large number of
mechanical changes, here's an overview:
1) Interface changes:
- A new AST enum - `SQLStatement` - is split out of ASTNode:
- The variants of the ASTNode enum that _only_ make sense as top-level
statements (INSERT, UPDATE, DELETE, CREATE, ALTER, COPY) are
_moved_ to the new enum, with no other changes.
- SQLSelect is _duplicated_: now available both as a variant in
SQLStatement::SQLSelect (top-level SELECT) and ASTNode:: (subquery).
- The main entry point (Parser::parse_sql) now expects an SQL statement
as input, and returns an `SQLStatement`.
2) Parser changes: instead of detecting the top-level constructs deep
down in the precedence parser (`parse_prefix`), we can now do it
right after setting up the parser in the `parse_sql` entry point
(SELECT, again, is kept in the expression parser to demonstrate how
subqueries could be implemented).
The rest of the parser changes are mechanical ASTNode -> SQLStatement
replacements resulting from the AST change.
3) Testing changes: for every test - depending on whether the input was
a complete statement or an expression - I used an appropriate helper
function:
- `verified` (parses SQL, checks that it round-trips, and returns
the AST) was replaced by `verified_stmt` or `verified_expr`.
- `parse_sql` (which returned AST without checking it round-tripped)
was replaced by:
- `parse_sql_expr` (same function, for expressions)
- `one_statement_parses_to` (formerly `parses_to`), extended to
deal with statements that are not expected to round-trip.
The weird name is to reduce further churn when implementing
multi-statement parsing.
- `verified_stmt` (in 4 test cases that actually round-tripped)
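A sketch of how the statement helpers relate to each other (the "parser"
here is a whitespace-normalizing stand-in so the example is self-contained;
the expression helpers follow the same pattern):

    use std::fmt;

    #[derive(Debug)]
    struct Stmt(Vec<String>); // stand-in AST: just a list of tokens

    fn parse_statement(sql: &str) -> Stmt {
        Stmt(sql.replace(',', " , ").split_whitespace().map(|s| s.to_string()).collect())
    }

    impl fmt::Display for Stmt {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            // serialize with canonical spacing, e.g. "t1 , t2" -> "t1, t2"
            write!(f, "{}", self.0.join(" ").replace(" ,", ","))
        }
    }

    /// Parse one statement and check that it serializes to `canonical`.
    fn one_statement_parses_to(sql: &str, canonical: &str) -> Stmt {
        let stmt = parse_statement(sql);
        assert_eq!(canonical, stmt.to_string());
        stmt
    }

    /// A "verified" statement must round-trip to exactly the input string.
    fn verified_stmt(sql: &str) -> Stmt {
        one_statement_parses_to(sql, sql)
    }

    fn main() {
        verified_stmt("SELECT * FROM t1, t2");
        // input that is not expected to round-trip verbatim:
        one_statement_parses_to("SELECT * FROM t1,t2", "SELECT * FROM t1, t2");
    }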
...as in `FROM foo bar WHERE bar.x > 1`.
To avoid ambiguity as to whether a token is an alias or a keyword, we
maintain a blacklist of keywords that can follow a "table factor", and
refuse to parse those as an alias. This "context-specific reserved
keyword" approach lets us accept more SQL that's valid in some dialects
than a list of globally reserved keywords would. Also, some dialects (e.g.
Oracle) apparently don't reserve certain keywords (like JOIN), while
presumably still not accepting them as aliases (`FROM foo JOIN` would have
to mean `FROM foo AS JOIN`).
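A minimal sketch of the idea - the helper name and the keyword list are
illustrative, not the parser's actual code:

    /// Keywords that may legally follow a table factor and therefore must
    /// not be swallowed as an alias (illustrative subset).
    const RESERVED_FOR_TABLE_ALIAS: &[&str] = &[
        "WHERE", "GROUP", "HAVING", "ORDER", "LIMIT",
        "JOIN", "INNER", "LEFT", "RIGHT", "ON",
    ];

    /// Decide whether the word following a table factor is an alias
    /// (`FROM foo bar`) or the start of the next clause (`FROM foo WHERE ...`).
    fn parse_optional_alias(next_word: Option<&str>) -> Option<String> {
        match next_word {
            Some(w) if !RESERVED_FOR_TABLE_ALIAS.contains(&w.to_uppercase().as_str()) => {
                Some(w.to_string())
            }
            _ => None,
        }
    }

    fn main() {
        assert_eq!(parse_optional_alias(Some("bar")), Some("bar".to_string()));
        assert_eq!(parse_optional_alias(Some("WHERE")), None);
        assert_eq!(parse_optional_alias(None), None);
    }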
A "table factor" (name borrowed from the ANSI SQL grammar) is a table
name or a derived table (subquery), followed by an optional `AS` and an
optional alias. (The alias is *not* optional for subqueries, but we
don't enforce that.) It can appear in the FROM/JOIN part of the query.
This commit:
- introduces ASTNode::TableFactor
- changes the parser to populate SQLSelect::relation and Join::relation
with ASTNode::TableFactor instead of the table name
- changes the parser to only accept subqueries or identifiers, not
arbitrary expressions in the "table factor" context
- changes the parser to always store the table name as an
SQLCompoundIdentifier (whether or not it was actually compound).
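A hedged sketch of the resulting AST shape - the names follow this
description, but the actual definitions may differ in detail:

    #[derive(Debug)]
    enum ASTNode {
        /// A table name, now always stored as a compound identifier
        /// (`foo` -> ["foo"], `foo.bar` -> ["foo", "bar"]).
        SQLCompoundIdentifier(Vec<String>),
        /// `<relation> [AS] [alias]` as it appears after FROM or in a JOIN;
        /// `relation` is restricted to an identifier or a subquery.
        TableFactor {
            relation: Box<ASTNode>,
            alias: Option<String>,
        },
        /// A subquery usable as a derived table: `FROM (SELECT ...) alias`.
        SQLSelect(SQLSelect),
        // ... the remaining expression variants
    }

    #[derive(Debug)]
    struct SQLSelect {
        projection: Vec<ASTNode>,
        relation: Option<Box<ASTNode>>, // now an ASTNode::TableFactor, not a bare name
        // ... joins, selection, etc.
    }

    fn main() {
        // `FROM foo.bar baz`
        let factor = ASTNode::TableFactor {
            relation: Box::new(ASTNode::SQLCompoundIdentifier(vec![
                "foo".to_string(),
                "bar".to_string(),
            ])),
            alias: Some("baz".to_string()),
        };
        println!("{:?}", factor);
    }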
This will allow re-using it for SQLStatement in a later commit.
(Also split the new struct into a separate file; other query-related
types will be moved there in a follow-up commit.)
Fold Token::{Keyword, Identifier, DoubleQuotedString} into one
Token::SQLWord, which has the necessary information (was it a
known keyword and/or was it quoted).
This lets the parser easily accept DoubleQuotedString (a quoted
identifier) everywhere it expects an Identifier in the same match
arm. (To complete support of quoted identifiers, or "delimited
identifiers" as the spec calls them, a TODO in parse_tablename()
ought to be addressed.)
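Roughly, the folded token carries something like the following (field names
are illustrative; the real SQLWord may differ):

    #[derive(Debug, Clone, PartialEq)]
    struct SQLWord {
        /// The word as written, without any enclosing quotes.
        value: String,
        /// Some('"') for a delimited identifier, None for a bare word.
        quote_style: Option<char>,
        /// The uppercased form if this is a known keyword, empty otherwise.
        keyword: String,
    }

    #[derive(Debug, Clone, PartialEq)]
    enum Token {
        SQLWord(SQLWord),
        Comma,
        // ... operators, literals, punctuation
    }

    /// A single match arm now accepts both bare and quoted identifiers.
    fn as_identifier(token: &Token) -> Option<&str> {
        match token {
            Token::SQLWord(w) => Some(w.value.as_str()),
            _ => None,
        }
    }

    fn main() {
        let quoted = Token::SQLWord(SQLWord {
            value: "order count".to_string(), // the delimited identifier "order count"
            quote_style: Some('"'),
            keyword: String::new(),
        });
        assert_eq!(as_identifier(&quoted), Some("order count"));
        assert_eq!(as_identifier(&Token::Comma), None);
    }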
As an aside, per <https://en.wikibooks.org/wiki/SQL_Dialects_Reference/Data_structure_definition/Delimited_identifiers>
sqlite seems to be the only one supporting single-quoted
identifiers ('identifier') - which is rather hairy, since such a
token can also be a string literal - and backtick-quoted identifiers
(`identifier`) seem to be supported only by MySQL. I didn't implement
either one.
This also allows the `parse`/`expect_keyword` machinery to be used
for non-reserved keywords: previously it relied on the keyword being
a Token::Keyword, which was not a Token::Identifier and so was not
accepted as one.
Now whether a keyword can be used as an identifier can be decided
by the parser. (I didn't add a blacklist of "reserved" keywords,
so that any keyword which doesn't have a special meaning in the
parser could be used as an identifier. The list of keywords in
the dialect could be re-used for that purpose at a later stage.)
i.e. ASC/DESC/unspecified - so that we don't lose information about
the source code.
Also, don't take any token other than ASC, DESC, or a comma to mean
'ascending'.
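A small sketch of what this means for the AST and the parser (names are
illustrative; the real SQLOrderByExpr may differ):

    struct SQLOrderByExpr {
        expr: String,      // stand-in for the expression AST node
        asc: Option<bool>, // Some(true) = ASC, Some(false) = DESC, None = not written
    }

    /// ASC/DESC set the direction explicitly; anything else (a comma, another
    /// clause, end of input) leaves it unspecified instead of being silently
    /// treated as 'ascending'.
    fn parse_order_direction(next_word: Option<&str>) -> Option<bool> {
        match next_word {
            Some("ASC") => Some(true),
            Some("DESC") => Some(false),
            _ => None,
        }
    }

    fn main() {
        let order_by = SQLOrderByExpr {
            expr: "lname".to_string(),
            asc: parse_order_direction(Some("DESC")),
        };
        assert_eq!(order_by.asc, Some(false));
        assert_eq!(parse_order_direction(Some("LIMIT")), None); // unspecified, not ASC
    }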
...as this syntax is not specific to the PostgreSQL dialect.
Also use verified() to assert that parsing + serializing results in the
original SQL string.
Its existence alongside SingleQuotedString simply doesn't make sense:
`'a string'` is a string literal, while `a string` is not a "value".
It's only used in the postgresql-specific tab-separated-values parser to
store the string representation of a field's value. For that use case,
Option<String> looks like a more appropriate choice than Value.
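For instance, each row of the PostgreSQL text-format (tab-separated) data
could be represented roughly like this (hypothetical names, not the actual
code):

    /// Each field is either NULL or its literal text.
    type TsvRow = Vec<Option<String>>;

    fn parse_tsv_line(line: &str) -> TsvRow {
        line.split('\t')
            .map(|field| {
                if field == r"\N" {
                    None // PostgreSQL's text-format representation of NULL
                } else {
                    Some(field.to_string())
                }
            })
            .collect()
    }

    fn main() {
        assert_eq!(
            parse_tsv_line("1\t\\N\tfoo"),
            vec![Some("1".to_string()), None, Some("foo".to_string())]
        );
    }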
...and parser support for the corresponding token, as "..." in SQL[*] is
not a string literal (which is how we currently parse it), but a quoted
identifier (which I intend to implement later).
[*] in all the RDBMSes I know of, except for sqlite, which has complex
rules in the name of "compatibility": https://www.sqlite.org/lang_keywords.html
Mainly by replacing `assert_eq!(sql, ast.to_string())` with a call to
the recently introduced `verified()` helper or using `parses_to()` where
the expected serialization differs from the original SQL string.
There was one case (parse_implicit_join) where the inputs differed:
let sql = "SELECT * FROM t1,t2";
//vs
let sql = "SELECT * FROM t1, t2";
and since we don't test the whitespace handling in other tests, I just
used the canonical representation as input.
Before this, the missing keywords THEN/WHEN/AS would be parsed as if they
were present in the text, since the code didn't check the return value of
consume_token() - see the upcoming commit.
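A sketch of the bug pattern and the obvious fix; `consume_token` here is a
simplified stand-in that returns whether the expected token was actually
present:

    struct Parser {
        tokens: Vec<String>,
        index: usize,
    }

    impl Parser {
        /// Advances only if the next token equals `expected`; the caller is
        /// responsible for checking the returned bool.
        fn consume_token(&mut self, expected: &str) -> bool {
            if self.tokens.get(self.index).map(String::as_str) == Some(expected) {
                self.index += 1;
                true
            } else {
                false
            }
        }

        fn parse_case_branch_buggy(&mut self) {
            // Bug: the result is ignored, so a missing THEN is silently accepted.
            self.consume_token("THEN");
            // ... parse the branch body
        }

        fn parse_case_branch_fixed(&mut self) -> Result<(), String> {
            // Fix: turn the missing keyword into a parse error.
            if !self.consume_token("THEN") {
                return Err(format!(
                    "Expected THEN, found: {:?}",
                    self.tokens.get(self.index)
                ));
            }
            // ... parse the branch body
            Ok(())
        }
    }

    fn main() {
        let mut p = Parser {
            tokens: vec!["WHEN".into(), "x".into(), "ELSE".into()],
            index: 2,
        };
        p.parse_case_branch_buggy(); // proceeds despite the missing THEN
        assert!(p.parse_case_branch_fixed().is_err()); // the checked version reports it
    }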