Beep boop.
What happened, you ask? I removed the dumb balancing algorithm I had
implemented in favor of SQLite's implementation, based on the B*-Tree [1],
where a page is kept 2/3 full instead of 1/2. It also tries to balance a
page by taking a maximum of 3 pages and distributing their cells evenly
between them.
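The gist of the redistribution step, as a rough sketch (hypothetical code, not limbo's actual balance routine, which balances by payload bytes and also fixes up divider cells in the parent):

```rust
/// Illustrative only: take the cells of up to three sibling pages and deal
/// them back out so the siblings end up roughly equally full. Cells are
/// counted here, not measured in bytes, to keep the sketch short.
type Cell = Vec<u8>;

fn balance_siblings(siblings: &mut [Vec<Cell>]) {
    assert!(!siblings.is_empty() && siblings.len() <= 3);
    // Pull every cell out of the sibling pages, preserving order.
    let mut cells: Vec<Cell> = Vec::new();
    for page in siblings.iter_mut() {
        cells.append(page);
    }
    // Deal them back out as evenly as possible (ceiling division).
    let per_page = (cells.len() + siblings.len() - 1) / siblings.len();
    for page in siblings.iter_mut() {
        let take = per_page.min(cells.len());
        page.extend(cells.drain(..take));
    }
}
```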
I've made some changes that are somewhat related:
* Moved most operations on pages out of BTreeCursor, because those
operations are based on a page, not on a cursor, and this makes them
easier to test.
* Fixed `write_u16` and `read_u16` call sites that didn't need an
implicit offset calculation. Added `write_u16_no_offset` and
`read_u16_no_offset` for those cases.
* Added some tests, including fuzz tests.
* Fixed some important operations: `compute_free_space`,
`defragment_page` and `drop_cell`.
[1] https://dl.acm.org/doi/10.1145/356770.356776
Closes#968
- Rename push_only.yml workflow to rust_perf.yml
- Move the 'bench' task from rust.yml to rust_perf.yml
- More Nyrkio configuration options exposed:
  - Team support (everyone in gh/tursodatabase can access Nyrkiö)
  - Public results
  - Alerting options: comment on the PR if a change is detected; don't fail the task
- Also add Nyrkio to analyse results from the clickbench task
Closes#959
A nice Python class was added in #941 for easily testing limbo's CLI
output. As the tests around extensions continue to grow, we should use
this as much as possible for non-Tcl/compatibility tests so they can
scale more easily without getting messy. I am going to add a whole bunch
of tests that explicitly exercise each kind of extension, both when
loaded statically and dynamically and when combined with other export
types, and I want to get the existing ones migrated over first.
Closes#1048
`SerialType::try_from()` was pretty high up in CPU profiles, so I asked
my dear friend Claude to optimize it away by using the serial type
integer value directly instead of constructing a fancy enumeration.
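For illustration, this is roughly what branching on the raw serial type looks like; the byte sizes follow the SQLite record format, but the function below is a simplified stand-in, not limbo's exact code:

```rust
// Content size in bytes for a given SQLite serial type, computed directly
// from the integer instead of converting to an enum first.
fn serial_type_size(serial_type: u64) -> usize {
    match serial_type {
        0 | 8 | 9 => 0,                       // NULL, constant 0, constant 1
        1 => 1,                               // 8-bit integer
        2 => 2,                               // 16-bit integer
        3 => 3,                               // 24-bit integer
        4 => 4,                               // 32-bit integer
        5 => 6,                               // 48-bit integer
        6 | 7 => 8,                           // 64-bit integer, 64-bit float
        n if n >= 12 && n % 2 == 0 => ((n - 12) / 2) as usize, // BLOB
        n if n >= 13 => ((n - 13) / 2) as usize,               // TEXT
        _ => 0,                               // 10 and 11 are reserved
    }
}
```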
Closes#1057
We currently have two value types, `Value` and `OwnedValue`. The
original thinking was that `Value` is the external type and `OwnedValue`
is the internal type. However, this just results in unnecessary
transformation between the types as data crosses the Limbo library
boundary.
Let's just follow SQLite here and consolidate on a single value type
(where `sqlite3_value` is just an alias for the internal `Mem` type).
The way this will eventually work is that we can have a bunch of
pre-allocated `OwnedValue` objects in `ProgramState` and basically
return a reference to them all the way to the application itself, which
extracts the actual value.
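A small sketch of that direction, with illustrative names rather than limbo's actual types:

```rust
// Illustrative only: the VM keeps a pool of pre-allocated values and the
// application receives borrows of them, so no conversion to a separate
// external `Value` type happens at the library boundary.
enum OwnedValue {
    Null,
    Integer(i64),
    Float(f64),
    Text(String),
    Blob(Vec<u8>),
}

struct ProgramState {
    registers: Vec<OwnedValue>, // allocated once, reused across rows
}

impl ProgramState {
    fn column(&self, idx: usize) -> &OwnedValue {
        // Hand back a reference; the caller extracts what it needs.
        &self.registers[idx]
    }
}
```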
Fixes#906
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#1056
```console
thread 'fuzz::tests::logical_expression_fuzz_run' panicked at tests\integration\fuzz\mod.rs:818:13:
assertion `left == right` failed: query: SELECT ( ( 3622873 || -8851250 ) * ( ( ( -124 ) + ( -5792536 ) ) ) ) = ( 179434259456392 < 65481085924370 ), limbo: [[Integer(1)]], sqlite: [[Integer(0)]]
left: [[Integer(1)]]
right: [[Integer(0)]]
```
This and a few other failing fuzzing tests were due to incorrectly
parsing numerics from strings. Some of our casting was done properly,
but it wasn't being applied to all cases where the behavior was needed.
The code was also attempting to parse `string[0..N]` N times, until
`string[0..N].parse()` would no longer succeed; this PR instead searches
for the index of the first illegal character and parses the resulting
slice once.
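A sketch of that single-pass idea (simplified: the real code also has to distinguish integers from reals and handle exponents, and the helper name here is made up):

```rust
// Find the end of the longest integer-looking prefix in one left-to-right
// scan, then parse that slice exactly once instead of retrying
// `s[0..n].parse()` for every n.
fn parse_integer_prefix(s: &str) -> Option<i64> {
    let s = s.trim_start();
    let end = s
        .char_indices()
        .take_while(|&(i, c)| c.is_ascii_digit() || (i == 0 && (c == '+' || c == '-')))
        .map(|(i, c)| i + c.len_utf8())
        .last()
        .unwrap_or(0);
    s[..end].parse::<i64>().ok()
}
```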
Tests were added for some of the edge cases that were previously failing.
This PR also adds a macro in vdbe/insn.rs that allows for a bit of
cleanup and reduces some matching.
Closes#1053
This PR started out as one to improve the API of extensions, but I ended
up building on top of it quite a bit and it just kept going. Sorry this
one is so large, but there wasn't really a good stopping point, as
stopping earlier would have left things in a broken state.
**VCreate**: Support for `CREATE VIRTUAL TABLE t USING vtab_module`
**VUpdate**: Support for `INSERT` and `DELETE` methods on virtual
tables.
SQLite uses the `xUpdate` function with the `VUpdate` opcode to handle
all insert/update/delete functionality in virtual tables, so you have to
just document that:
```
if args[0] == NULL: INSERT args[1] the values in args[2..]
if args[1] == NULL: DELETE args[0]
if args[0] != NULL && len(args) > 2: Update values=args[2..] rowid=args[0]
```
I know I asked @jussisaurio on Discord about this already, but it just
sucked so bad that I added some internal translation so we could expose
a [nice API](https://github.com/tursodatabase/limbo/pull/996/files#diff-3e8f8a660b11786745b48b528222d11671e9f19fa00a032a4eefb5412e8200d1R54)
and handle the logic ourselves while keeping with SQLite's opcodes.
I'll change it back if I have to, I just thought it was genuinely awful
to have to rely on comments to explain all that to extension authors.
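To make the tradeoff concrete, here is a hypothetical sketch of the kind of translation layer this adds; the real trait lives in the linked diff, and the names below are illustrative, not limbo's actual API:

```rust
// Illustrative value type; the real one has more variants.
enum Value {
    Null,
    Integer(i64),
}

// What the raw `VUpdate` argument convention gets translated into before it
// reaches an extension author.
enum VTabChange<'a> {
    Insert { values: &'a [Value] },             // args[0] == NULL
    Delete { rowid: i64 },                      // args[1] == NULL
    Update { rowid: i64, values: &'a [Value] }, // args[0] != NULL && len > 2
}

fn translate_vupdate(args: &[Value]) -> Option<VTabChange<'_>> {
    match args {
        // INSERT: args[0] is NULL, the new values start at args[2].
        [Value::Null, _, values @ ..] => Some(VTabChange::Insert { values }),
        // DELETE: args[1] is NULL, args[0] is the rowid to remove.
        [Value::Integer(rowid), Value::Null, ..] => Some(VTabChange::Delete { rowid: *rowid }),
        // UPDATE: args[0] is the rowid, the new values start at args[2].
        [Value::Integer(rowid), _, values @ ..] if !values.is_empty() => {
            Some(VTabChange::Update { rowid: *rowid, values })
        }
        _ => None,
    }
}
```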
The included extension is not meant to be a legitimately useful one; it
is there for testing purposes. I did something similar in #960 using a
test extension, so once they are both merged I will go back and combine
them into one, since a single extension can export many kinds at once,
and that will reduce the number of crates and therefore compile time.
Still TODO:
1. Remaining opcodes.
2. `UPDATE` (when we support the syntax).
3. `xConnect`: expose an API for a DB connection to a vtab so it can
perform arbitrary queries.
Closes#996
Normalize `offset_sec` in case it's outside its usual range.
It's done in a similar way in `sqlean`:
b9ba4429094da1f9c/src/time/time.c#L300-L306
Closes#1041
This PR fixes
[#1040](https://github.com/tursodatabase/limbo/issues/1040) and modifies
the `LIKE` function in the VDBE to work on expressions of all types,
like SQLite does.
Looking at how SQLite handles this, it gets the text value of the
expression regardless of its affinity. I used `exec_cast(exp, "TEXT")`
to achieve the same effect. Since most `LIKE` queries will probably be
done on `TEXT` expressions, I avoid casting the expression if it's
already `TEXT`. If either of the expressions is `NULL`, SQLite returns
nothing, i.e. `NULL`. I also changed the unreachable arm message from
`Like on non-text registers` to `Like failed`.
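A self-contained sketch of that flow; the enum, the casting helper and the naive matcher below are illustrative stand-ins for limbo's `OwnedValue`, `exec_cast` and LIKE implementation, not the actual code:

```rust
#[derive(Clone, Debug)]
enum Val {
    Null,
    Integer(i64),
    Real(f64),
    Text(String),
    Blob(Vec<u8>),
}

// CAST(expr AS TEXT), rendering values roughly the way SQLite prints them.
fn cast_to_text(v: &Val) -> String {
    match v {
        Val::Null => unreachable!("NULL is short-circuited before casting"),
        Val::Integer(i) => i.to_string(),
        Val::Real(r) => format!("{:?}", r), // keeps the trailing ".0" for 2.0
        Val::Text(s) => s.clone(),
        Val::Blob(b) => String::from_utf8_lossy(b).into_owned(),
    }
}

// Naive LIKE matcher: '%' matches any run of characters, '_' matches exactly
// one, comparison is ASCII case-insensitive.
fn like_match(pattern: &str, text: &str) -> bool {
    fn go(p: &[char], t: &[char]) -> bool {
        match p.first() {
            None => t.is_empty(),
            Some('%') => go(&p[1..], t) || (!t.is_empty() && go(p, &t[1..])),
            Some('_') => !t.is_empty() && go(&p[1..], &t[1..]),
            Some(c) => t.first() == Some(c) && go(&p[1..], &t[1..]),
        }
    }
    let p: Vec<char> = pattern.to_ascii_lowercase().chars().collect();
    let t: Vec<char> = text.to_ascii_lowercase().chars().collect();
    go(&p, &t)
}

fn exec_like(lhs: &Val, pattern: &Val) -> Val {
    // A NULL on either side yields NULL, as in SQLite.
    if matches!(lhs, Val::Null) || matches!(pattern, Val::Null) {
        return Val::Null;
    }
    // Cast to TEXT only when the operand is not TEXT already.
    let text = match lhs {
        Val::Text(s) => s.clone(),
        other => cast_to_text(other),
    };
    let pat = match pattern {
        Val::Text(s) => s.clone(),
        other => cast_to_text(other),
    };
    Val::Integer(like_match(&pat, &text) as i64)
}
```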
The following queries produced the same results in Limbo:
```
SQLite version 3.46.1 2024-08-13 09:16:08 (UTF-16 console I/O)
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> CREATE TABLE tbl (n NULL, i INTEGER, r REAL, t TEXT, b BLOB);
sqlite> INSERT INTO tbl VALUES(NULL,1,2.0,'a',X'0500');
sqlite> SELECT * FROM tbl;
|1|2.0|a|♣
sqlite> SELECT * FROM tbl WHERE n LIKE NULL;
sqlite> SELECT * FROM tbl WHERE n LIKE 'NULL';
sqlite> SELECT * FROM tbl WHERE n LIKE 1;
sqlite> SELECT * FROM tbl WHERE n LIKE 2.0;
sqlite> SELECT * FROM tbl WHERE n LIKE x'0500';
sqlite>
sqlite> SELECT * FROM tbl WHERE i LIKE NULL;
sqlite> SELECT * FROM tbl WHERE i LIKE 1;
|1|2.0|a|♣
sqlite> SELECT * FROM tbl WHERE i LIKE '1';
|1|2.0|a|♣
sqlite> SELECT * FROM tbl WHERE i LIKE 2.0;
sqlite> SELECT * FROM tbl WHERE i LIKE 1.0;
sqlite> SELECT * FROM tbl WHERE i LIKE x'0500';
sqlite>
sqlite> SELECT * FROM tbl WHERE r LIKE NULL;
sqlite> SELECT * FROM tbl WHERE r LIKE 2;
sqlite> SELECT * FROM tbl WHERE r LIKE 2.0;
|1|2.0|a|♣
sqlite> SELECT * FROM tbl WHERE r LIKE '2.0';
|1|2.0|a|♣
sqlite> SELECT * FROM tbl WHERE r LIKE 'a';
sqlite> SELECT * FROM tbl WHERE r LIKE x'0500';
sqlite>
sqlite> SELECT * FROM tbl WHERE t LIKE NULL;
sqlite> SELECT * FROM tbl WHERE t LIKE 1;
sqlite> SELECT * FROM tbl WHERE t LIKE 2.0;
sqlite> SELECT * FROM tbl WHERE t LIKE 'a';
|1|2.0|a|♣
sqlite> SELECT * FROM tbl WHERE t LIKE x'0500';
sqlite>
sqlite> SELECT * FROM tbl WHERE b LIKE NULL;
sqlite> SELECT * FROM tbl WHERE b LIKE 1;
sqlite> SELECT * FROM tbl WHERE b LIKE 2.0;
sqlite> SELECT * FROM tbl WHERE b LIKE 'a';
sqlite> SELECT * FROM tbl WHERE b LIKE x'0500';
|1|2.0|a|♣
sqlite> SELECT * FROM tbl WHERE b LIKE 'x''0500''';
sqlite> SELECT * FROM tbl WHERE b LIKE '♣';
sqlite>
sqlite> SELECT * FROM tbl WHERE 1 LIKE 1;
|1|2.0|a|♣
sqlite> SELECT * FROM tbl WHERE 2.0 LIKE 2.0;
|1|2.0|a|♣
sqlite> SELECT * FROM tbl WHERE 2.0 LIKE '2.0';
|1|2.0|a|♣
sqlite> SELECT * FROM tbl WHERE '2.0' LIKE 2.0;
|1|2.0|a|♣
sqlite> SELECT * FROM tbl WHERE '123.45' LIKE 123.45;
|1|2.0|a|♣
sqlite> SELECT * FROM tbl WHERE NULL LIKE NULL;
sqlite> SELECT * FROM tbl WHERE x'0500' LIKE x'0500';
|1|2.0|a|♣
sqlite> SELECT typeof(n), typeof(i), typeof(r), typeof(t), typeof(b) FROM tbl;
null|integer|real|text|blob
```
These queries are quite basic, though, and more testing could be done.
Closes#1044
Modified `cast_text_to_number` to be more compatible with SQLite. When
I was running some fuzz tests, I would eventually get errors due to
incorrect casting of text to `INTEGER` or `REAL`. Previously there were
two implementations of `cast_text_to_number` in the code: one in
`core/vdbe/insn.rs` and one in `core/vdbe/mod.rs`. I consolidated the
casting into a single function. Previously, the `mod.rs` function was
just calling `checked_cast_text_to_numeric`, which was used in the
`MustBeInt` opcode. Hopefully this fixes some of the CI testing issues
we are having. This was the query that prompted me to do this: `SELECT
( ( ( 878352367 ) <> ( 29 ) ) ) = ( ( ( -4309097 ) / ( -37 ||
-149680985265412 ) ) - 755066415 );`
Closes#1038
Closes#977
In order to properly get #960 merged and keep roughly the same API we
have now, we need to support URIs/query parameters for opening new
databases.
This PR doesn't attempt to implement anything useful with this; it only
handles parsing, but it will allow #960 to properly open a new file with
a specific VFS without having to entirely re-design the `open_file`
method/API. The existing option in that PR right now is less than ideal,
e.g. all of the existing methods already accept an `IO` impl:
```rust
pub fn open_file(io: Arc<dyn IO>, path: &str) -> Result<Arc<Database>> {
// or
pub fn open(
    io: Arc<dyn IO>,
    page_io: Rc<dyn DatabaseStorage>,
    wal: Rc<RefCell<dyn Wal>>,
    shared_wal: Arc<RwLock<WalFileShared>>,
    buffer_pool: Rc<BufferPool>,
) -> Result<Arc<Database>> {
```
Right now, most of the parsed query parameters are not options we
support yet, but I figured it's better to handle parsing them now and
use them later once we do support them.
Also, if this looks way overly complicated for what it does... that's
because the cross-platform edge cases are a super pain in the ass.
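For flavor, here is a stripped-down sketch of the parsing (SQLite-style `file:` URIs with a query string); it ignores the platform-specific path handling that makes the real code complicated, and the function name and example parameters are made up:

```rust
// Parse "file:path?key=val&key2=val2" into the path and its query parameters.
// Percent-decoding, Windows drive letters, fragments, etc. are left out here.
fn split_uri(uri: &str) -> (String, Vec<(String, String)>) {
    let rest = uri.strip_prefix("file:").unwrap_or(uri);
    let (path, query) = match rest.split_once('?') {
        Some((p, q)) => (p, Some(q)),
        None => (rest, None),
    };
    let params: Vec<(String, String)> = query
        .map(|q| {
            q.split('&')
                .filter_map(|kv| kv.split_once('='))
                .map(|(k, v)| (k.to_string(), v.to_string()))
                .collect()
        })
        .unwrap_or_default();
    (path.to_string(), params)
}

// e.g. split_uri("file:test.db?vfs=unix&mode=ro")
//      => ("test.db", [("vfs", "unix"), ("mode", "ro")])
```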
Closes#1039