This fixes a bug where the contents of the `n`-th repeater inside the
`ComponentContainer` would also show up in the `n`-th repeater *after*
the `ComponentContainer`.
The `ComponentContainer` is lowered to a component container and a
dynamic tree node. The tree node is managed manually, and I messed
up the repeater indices since there were no entries in the repeater
array for the embedded components. That made things hard to keep
consistent as components get merged.
This changes that: it lowers a `ComponentContainer` to something like this:
```slint
ComponentContainer {
    if false: Empty {}
}
```
This way the standard mechanisms make sure we have something to put into
the repeater list and that unscrews the indices, saving a bit of code along
the way.
The inserted repeated node is still marked as `is_component_placeholder`, so
that we can wire it up as needed.
The code would fail to compile because the property would not be seen as
used and would be removed, but its change callback would not.
Fixes #8269
Also fix a segfault in the added test: it initializes the change
callback (and therefore queries the properties) before the
`SharedGlobal` structure is fully initialized.
So we must only initialize the change callbacks on globals after the
`SharedGlobal` is fully initialized.
We try to only visit the bindings of used properties.
The problem is that when we visit the element, not all properties have
been marked as used yet. We have a chicken-and-egg problem.
So just visit all bindings, even for properties we haven't reached.
It is unfortunate that we have to do that; we'd need a much
deeper analysis to do this properly.
Fixes #8144
You can not create this expression manually, but there
is a pass in the compiler that adds it to all set
properties in a compilation run.
All it does is basically associate an id with an expression,
so that the interpreter can do something with that information
in a later step. Apart from that, it tries to
be as transparent as possible.
The LLR lowering removes that expression again, just so we can
be sure it does not end up in the generated live code.
This does some refactoring to allow builtin item functions to return a
value:
- builtin member functions are no longer `BuiltinFunction`; they are
  just normal `NamedReference`s
- Move the special case for them into the LLR/eval
Adds methods to change a `string`'s case to lowercase or uppercase.
They use Rust's `to_lowercase` and `to_uppercase` `String` methods.
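For illustration, a minimal .slint sketch of the new methods (the component and property names are made up; the output follows Rust's Unicode-aware case mapping):
```slint
export component Example inherits Text {
    property <string> greeting: "Hello World";
    // Renders "hello world / HELLO WORLD". Both methods return a new
    // string; the original property is left unchanged.
    text: greeting.to-lowercase() + " / " + greeting.to-uppercase();
}
```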
ChangeLog: Added string.to-lowercase and string.to-uppercase
Closes #7860
Add two new float-to-string conversion methods that mimic
JavaScript's `Number.toFixed()` and `Number.toPrecision()`.
They are implemented as `no_mangle` functions, similar to the already
existing float-to-shared-string conversion.
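A hedged usage sketch in .slint (component and property names are made up; the formatting shown follows the JavaScript semantics the methods mimic):
```slint
export component Example inherits Text {
    property <float> value: 3.14159;
    // to-fixed(2)     -> "3.14" (two digits after the decimal point)
    // to-precision(3) -> "3.14" (three significant digits)
    text: value.to-fixed(2) + " / " + value.to-precision(3);
}
```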
Closes #5822
`__CARGO_FIX_YOLO=1` is a hack, but it does help a lot with the tedious fixes where the result is fairly clear.
See https://rust-lang.github.io/rust-clippy/master/index.html#needless_return
```
__CARGO_FIX_YOLO=1 cargo clippy --fix --all-targets --workspace --exclude gstreamer-player --exclude i-slint-backend-linuxkms --exclude uefi-demo --exclude ffmpeg -- -A clippy::all -W clippy::needless_return
cargo fmt --all
```
`__CARGO_FIX_YOLO=1` is a hack, but it does help a lot with the tedious fixes where the result is fairly clear.
See https://rust-lang.github.io/rust-clippy/master/index.html#needless_lifetimes
```
__CARGO_FIX_YOLO=1 cargo clippy --fix --all-targets --workspace --exclude gstreamer-player --exclude i-slint-backend-linuxkms --exclude uefi-demo --exclude ffmpeg -- -A clippy::all -W clippy::needless_lifetimes
cargo fmt --all
```
`__CARGO_FIX_YOLO=1` is a hack, but it does help a lot with the tedious fixes where the result is fairly clear.
See https://rust-lang.github.io/rust-clippy/master/index.html#/needless_borrow
```
__CARGO_FIX_YOLO=1 cargo clippy --fix --all-targets --workspace --exclude gstreamer-player --exclude i-slint-backend-linuxkms --exclude uefi-demo --exclude ffmpeg -- -A clippy::all -W clippy::needless_borrow
cargo fmt --all
```
This is a hacky approach, but does help a lot with the tedious fixes.
See https://rust-lang.github.io/rust-clippy/master/index.html#/unnecessary_map_or
```
__CARGO_FIX_YOLO=1 cargo clippy --fix --all-targets --workspace --exclude gstreamer-player --exclude i-slint-backend-linuxkms --exclude uefi-demo --exclude ffmpeg -- -A clippy::all -W clippy::unnecessary_map_or
cargo fmt --all
```
type-safety+=1
This avoids, for example, using a sub-component index in an array of
instances or items.
It also self-documents what the index is for.
There are still a couple of casts from repeater-index to `u32` because we
do some hacks with the repeater-index number for component containers.
We also cast back to numbers in order to convert them to strings in the
generated code.
Introduce two new properties for string in .slint:
- `.is-empty`: Checks if a string is empty.
- `.character-count`: Retrieves the number of grapheme clusters (see
  https://www.unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries).
These additions enhance functionality and improve convenience when working with string properties.
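A small .slint sketch of the two properties (the surrounding component is made up for illustration):
```slint
export component Example inherits Text {
    property <string> name: "héllo";
    // .character-count counts grapheme clusters, so the accented letter
    // counts as one character; .is-empty avoids comparing against "".
    text: name.is-empty ? "empty" : "\{name.character-count} characters";
}
```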
Only the interpreter is implemented so far.
macOS won't work yet because we don't disable the default winit menubar.
The viewer doesn't support removing the `MenuBar` yet.
If the `@image-url()` expression of an `Image { source: @image-url("large-image.png"); ... }` gets inlined into geometry getters and other places that are called for every frame, then we might end up decoding the image on every frame if it isn't in the 5MB image decoder cache. It's better to rely on the `property <image>` of the `Image` for caching the decoded image, so don't inline those.
This fixes CPU time being constantly spent on decoding images on the home automation lock screen.
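For illustration, the kind of markup affected (the file name is just a stand-in; the getters in question are compiler-generated):
```slint
export component Photo inherits Rectangle {
    // The implicit width/height of an Image depend on its source. Before
    // this change, the @image-url(...) expression could be inlined into
    // such per-frame getters and re-decode the file whenever it fell out
    // of the decoder cache; now the decoded image stays cached in `source`.
    Image {
        source: @image-url("large-image.png");
    }
}
```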
SmolStr has an Arc internally for large strings. This allows
cheap copies of large strings, but we lose that ability
when we convert the SmolStr to a &str and then reconstruct a
SmolStr from that slice.
I was hoping for some larger gains here, considering the impact
of this code change, but it only removes ~50k allocations,
while the impact on the runtime is not noticeable at all.
Still, I believe this is the right thing to do.
Before:
```
allocations: 2338981
Time (mean ± σ): 988.3 ms ± 17.9 ms [User: 690.2 ms, System: 206.4 ms]
Range (min … max): 956.4 ms … 1016.3 ms 10 runs
```
After:
```
allocations: 2287723
Time (mean ± σ): 989.8 ms ± 23.2 ms [User: 699.2 ms, System: 197.6 ms]
Range (min … max): 945.3 ms … 1021.4 ms 10 runs
```
Popups are stored in a HashMap and are assigned an ID, so that `popup.close()` closes the correct popup and a single `PopupWindow` cannot be opened multiple times.
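A hedged .slint sketch of the behavior this enables (the component structure is made up):
```slint
export component Example inherits Window {
    popup := PopupWindow {
        Text { text: "I am a popup"; }
    }
    TouchArea {
        clicked => {
            // Showing the same popup twice does not open a second copy:
            // the window keeps a single entry per popup ID.
            popup.show();
            popup.show();
            // close() looks the popup up by that ID, so it closes exactly
            // this popup and not some other one that happens to be open.
            popup.close();
        }
    }
}
```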
This currently doesn't have public API to enable it yet.
TODO:
- Error handling in the compiler
- Public API in the compiler configuration
- Documentation
This is just for completeness; we "only" save ~13k allocations
with no measurable speed impact:
Before:
```
Time (mean ± σ): 1.019 s ± 0.033 s [User: 0.716 s, System: 0.203 s]
Range (min … max): 0.957 s … 1.061 s 10 runs
allocations: 2679001
```
After:
```
Time (mean ± σ): 1.015 s ± 0.015 s [User: 0.715 s, System: 0.201 s]
Range (min … max): 0.997 s … 1.038 s 10 runs
allocations: 2666889
```
This is rarely used, but using Rc here like elsewhere allows us to
elide a few unnecessary memory allocations when copying such types.
The speed impact is not measurable though. With heaptrack I see that
we get rid of the last ~7600 allocations in my benchmark when cloning
Type.
This makes copying such types much cheaper and will allow us to
intern common struct types in the future too. This further
drops the sample cost for langtype.rs from ~6.6% down to 4.0%.
We are now also able to share/intern common struct types.
Before:
```
Time (mean ± σ): 1.073 s ± 0.021 s [User: 0.759 s, System: 0.215 s]
Range (min … max): 1.034 s … 1.105 s 10 runs
allocations: 3074261
```
After:
```
Time (mean ± σ): 1.034 s ± 0.026 s [User: 0.733 s, System: 0.201 s]
Range (min … max): 1.000 s … 1.078 s 10 runs
allocations: 2917476
```
This allows us to cheaply copy the langtype::Type values which
contain such a type. The runtime impact is small and barely noticeable
but a sampling profiler shows a clear reduction in samples pointing
at langtype.rs, roughly reducing that from ~8.6% inclusive cost
down to 6.6% inclusive cost.
Furthermore, this allows us to share/intern common types.
Before:
```
Benchmark 1: ./target/release/slint-viewer ../slint-perf/app.slint
Time (mean ± σ): 1.089 s ± 0.026 s [User: 0.771 s, System: 0.216 s]
Range (min … max): 1.046 s … 1.130 s 10 runs
allocations: 3152149
```
After:
```
Time (mean ± σ): 1.073 s ± 0.021 s [User: 0.759 s, System: 0.215 s]
Range (min … max): 1.034 s … 1.105 s 10 runs
allocations: 3074261
```