* WIP: swrenderer: use fixed point for the pixmap font coordinate
* swrenderer: signed distance field: move the glyph to the middle
* swrenderer: round the advance instead of truncating it in distance field
* swrenderer: actually align the glyph on the sub-pixel precision
sub-pixel within the source.
* swrenderer: adapt the threshold for signed distance field
sqrt(2) is the distance to the diagonal; this seems to give sharper fonts
* Fix bug in the elision, and re-upload the screenshot
The screenshot changed because the advance is now rounded instead of
truncated
The idea of the live-preview is that it never causes disk
access itself, the LSP side handles all that for it.
With this in mind:
* Keep source code of invalidated files in the cache. This
way we will see whether we need to refresh the UI after the
LSP has read the data back from disk. This avoids quite
a bit of rerendering just because an unchanged buffer was
closed in the editor (e.g. because the editor switched buffers!)
* Always return `Some` from our file open fallback so that the
compiler does not fall back to reading data from disk
* Do not try to render if the main file has no source code yet.
The LSP will tell us about the sources in time
If the `image-url()` expression of an `Image { source: @image-url("large-image.png"); ... }` gets inlined into geometry getters and other places that are called for every frame, then we might end up decoding images every frame, if the image isn't in the 5MB image decoder cache. It's better to rely on the `property <image>` of the `Image` for caching the decoded image, so don't inline those.
This fixes CPU time being spent constantly on decoding images on the home automation lock screen.
The change in https://github.com/slint-ui/slint/pull/6747
invalidated the cache, but it was only reloaded when one of its dependents was reloaded.
We need to reload the cache for all open files so that LSP features continue to work on
open documents even if they receive no changes.
* creating a lookup table of colors based on the set in the Apple docs
then selecting from these colors rather than always typing hex
also added an in property for setting selected color (future)
* splitting out method of changing selection colour for later
* forgot to pull CupertinoColors from import
* Squashed commit of the following:
commit 4924aa908d6e039a7bf1f79ede3dc7c26f71007f
Author: szecket <szecket@magrittescow.com>
Date: Fri Nov 15 17:31:45 2024 -0500
use defined Palette for states
commit 80711ee7188f37b1b29ce11855b6a636d7a39306
Author: szecket <szecket@magrittescow.com>
Date: Fri Nov 15 17:29:51 2024 -0500
make control colour consistent with style and other controls
commit 1cfd39e6da6643600e8b553dfab2418c8552cdc4
Author: szecket <szecket@magrittescow.com>
Date: Fri Nov 15 13:58:07 2024 -0500
selection of controls when focused is not current cupertino style and too strong
commit 4bf4ae6ad385e118687f752362b34e079c03fe22
Author: szecket <szecket@magrittescow.com>
Date: Fri Nov 15 13:42:58 2024 -0500
make foreground color contrast when selected
* removing property that is only in cupertino
SmolStr has an Arc internally for large strings. This allows
cheap copies of large strings, but we lose that ability
when we convert the SmolStr to a &str and then reconstruct a
SmolStr from that slice.
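A std-only sketch of the same effect, using `Arc<str>` to stand in for SmolStr's internal representation of large strings (the names here are illustrative, not the patched code): cloning the shared handle is a refcount bump, while rebuilding from a `&str` slice forces a fresh allocation and copy.

```rust
use std::sync::Arc;

fn main() {
    // Stand-in for a heap-allocated SmolStr: the data lives behind an Arc.
    let large: Arc<str> = Arc::from("a long string stored behind a shared pointer");

    // Cloning the handle only bumps the refcount, no allocation:
    let cheap = Arc::clone(&large);
    assert_eq!(Arc::strong_count(&large), 2);

    // Going through &str and back loses the sharing:
    let slice: &str = &large;
    let rebuilt: Arc<str> = Arc::from(slice); // fresh allocation + copy
    assert_eq!(Arc::strong_count(&large), 2); // `rebuilt` is independent

    assert_eq!(&*cheap, &*rebuilt);
}
```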
I was hoping for some larger gains here, considering the impact
of this code change, but it only removes ~50k allocations,
while the impact on the runtime is not noticeable at all.
Still, I believe this is the right thing to do.
Before:
```
allocations: 2338981
Time (mean ± σ): 988.3 ms ± 17.9 ms [User: 690.2 ms, System: 206.4 ms]
Range (min … max): 956.4 ms … 1016.3 ms 10 runs
```
After:
```
allocations: 2287723
Time (mean ± σ): 989.8 ms ± 23.2 ms [User: 699.2 ms, System: 197.6 ms]
Range (min … max): 945.3 ms … 1021.4 ms 10 runs
```
Don't filter type imports by the file extension. Instead, if the import
statement has braces, always consider it a slint file, and if
not, consider it a foreign import.
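A minimal sketch of the described heuristic (hypothetical names, not the actual compiler code): the classification looks only at whether the import statement carries a brace-enclosed specifier list, and deliberately ignores the file extension.

```rust
// Hypothetical sketch: classify an import by the presence of a
// `{ ... }` specifier list, not by the imported file's extension.
#[derive(Debug, PartialEq)]
enum ImportKind {
    SlintFile,
    Foreign,
}

fn classify_import(has_brace_list: bool) -> ImportKind {
    // The extension is not consulted at all.
    if has_brace_list {
        ImportKind::SlintFile
    } else {
        ImportKind::Foreign
    }
}

fn main() {
    // `import { Foo } from "widgets.60";` -> slint file, despite the extension
    assert_eq!(classify_import(true), ImportKind::SlintFile);
    // `import "logo.png";` -> foreign import
    assert_eq!(classify_import(false), ImportKind::Foreign);
}
```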
The deduplicate_property_read pass was bailing out of the replacement if one
part of the conditional branch does an assignment. But the other part might
already have a partial assignment, so we must continue.
Fixes #6616
Instead of panicking.
The attempt to fix it in #6765 didn't work for C++.
Code generation might be hard for C++, so I thought it's better to error out.
Fixes #6760
The "tmpobj" variable was overwritten because the interpreter (contrary
to Rust and C++) doesn't have scopes for local variables, so local
variables of the same name would conflict.
(I think this could in theory be a problem in C++ and Rust as well, although I
haven't reproduced it.)
Other uses of StoreLocalVariable also make the number unique with a
counter.
Fixes #6721
Popups are stored in a HashMap and are assigned an ID, so that popup.close() closes the correct popup and a single PopupWindow cannot be opened multiple times.
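A small std-only sketch of that bookkeeping (hypothetical types and names, not Slint's actual internals): each opened popup gets a fresh ID keyed into a HashMap, so closing targets exactly one popup and a double-open can be detected.

```rust
use std::collections::HashMap;

// Hypothetical sketch of the described popup registry.
struct PopupRegistry {
    next_id: u64,
    open: HashMap<u64, &'static str>, // id -> popup name (placeholder payload)
}

impl PopupRegistry {
    fn new() -> Self {
        Self { next_id: 0, open: HashMap::new() }
    }

    /// Returns `None` if a popup with this name is already open.
    fn open_popup(&mut self, name: &'static str) -> Option<u64> {
        if self.open.values().any(|&n| n == name) {
            return None; // a single popup cannot be opened twice
        }
        let id = self.next_id;
        self.next_id += 1;
        self.open.insert(id, name);
        Some(id)
    }

    /// Closes exactly the popup with this id.
    fn close(&mut self, id: u64) -> bool {
        self.open.remove(&id).is_some()
    }
}

fn main() {
    let mut reg = PopupRegistry::new();
    let id = reg.open_popup("menu").unwrap();
    assert!(reg.open_popup("menu").is_none()); // already open
    assert!(reg.close(id));
    assert!(reg.open_popup("menu").is_some()); // can reopen after close
}
```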
This currently doesn't have public API to enable it yet.
TODO:
- Error handling in the compiler
- Public API in the compiler configuration
- Documentation
Instead of cloning the vector on every iteration level, pass the
scope in and out of the visitation function and push/pop the element
there as needed. This way we can operate on a single vector that
gets moved around, which removes a few thousand memory allocations.
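The pattern can be sketched like this (a simplified stand-in, not the actual visitation code): one mutable `Vec` is threaded through the recursion, with a push before visiting children and a pop afterwards, instead of cloning the scope at every level.

```rust
// Hypothetical tree node; the real code walks a syntax tree.
struct Node {
    name: String,
    children: Vec<Node>,
}

// The scope vector is passed in and out; push/pop keep it balanced,
// so no per-level clone is needed.
fn visit(node: &Node, scope: &mut Vec<String>, out: &mut Vec<usize>) {
    scope.push(node.name.clone());
    out.push(scope.len()); // placeholder "work" that reads the scope depth
    for child in &node.children {
        visit(child, scope, out); // note: no `scope.clone()` here
    }
    scope.pop();
}

fn main() {
    let tree = Node {
        name: "root".into(),
        children: vec![
            Node { name: "a".into(), children: vec![] },
            Node {
                name: "b".into(),
                children: vec![Node { name: "c".into(), children: vec![] }],
            },
        ],
    };
    let mut scope = Vec::new();
    let mut depths = Vec::new();
    visit(&tree, &mut scope, &mut depths);
    assert_eq!(depths, vec![1, 2, 2, 3]);
    assert!(scope.is_empty()); // pushes and pops are balanced
}
```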
The speed impact is not measurable, as the code also triggers rowan
API that is much more allocation-happy.
Still, I believe this patch is merge-worthy, as it also reduces
the code duplication a bit and is subjectively better, especially from a
performance point of view.
Instead of doing potentially multiple calls in the chained calls,
each of which would allocate in rowan, we now only call the iterator
function once and then leverage `find_map`. This is arguably even
more readable and it removes ~300k allocations and speeds up parsing.
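The shape of the change, sketched with std iterators (illustrative only: std iterators are lazy either way, whereas in rowan each chained accessor call allocates, which is what the patch avoids):

```rust
fn main() {
    let tokens = ["fn", "ident:foo", "brace", "ident:bar"];

    // Before: a filter step followed by a map step.
    let first_ident_chained = tokens
        .iter()
        .filter(|t| t.starts_with("ident:"))
        .map(|t| t.trim_start_matches("ident:"))
        .next();

    // After: build the iterator once and let `find_map` do
    // filter + map in a single combinator.
    let first_ident = tokens.iter().find_map(|t| t.strip_prefix("ident:"));

    assert_eq!(first_ident, Some("foo"));
    assert_eq!(first_ident, first_ident_chained);
}
```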
Before:
```
Time (mean ± σ): 930.7 ms ± 15.1 ms [User: 678.7 ms, System: 165.5 ms]
Range (min … max): 906.4 ms … 956.3 ms 10 runs
allocations: 2339130
```
After:
```
Time (mean ± σ): 914.6 ms ± 22.7 ms [User: 649.6 ms, System: 174.5 ms]
Range (min … max): 874.8 ms … 946.3 ms 10 runs
allocations: 2017915
```
This is just for completeness, we "only" save ~13k allocations
with no measurable speed impact:
Before:
```
Time (mean ± σ): 1.019 s ± 0.033 s [User: 0.716 s, System: 0.203 s]
Range (min … max): 0.957 s … 1.061 s 10 runs
allocations: 2679001
```
After:
```
Time (mean ± σ): 1.015 s ± 0.015 s [User: 0.715 s, System: 0.201 s]
Range (min … max): 0.997 s … 1.038 s 10 runs
allocations: 2666889
```
Instead of occupying multiple TLS slots, introduce a single type
that stores all the builtin function types in members. Use a macro
to define this struct then, which allows us to use a nice DSL to
define these function types, reducing the boiler plate significantly.
The downside is that we no longer have the ability to easily share
semantically equivalent function types (e.g. for `Round`, `Ceil`
and `Floor` etc.). Doing so would require us to introduce a separate
name for these types, and then use external matching to map the
BuiltinFunctions to the reduced list of types. The performance impact
is minimal though, so this is not done, to keep it simple (KISS).
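A toy version of the pattern (hypothetical macro and field names, not the actual DSL): a macro expands a small declaration list into one struct holding all the prebuilt function types, which then lives in a single thread-local slot.

```rust
// Hypothetical sketch: a macro DSL that generates one struct with a
// member per builtin function type, stored in a single TLS slot.
macro_rules! builtin_function_types {
    ($($name:ident => $signature:expr;)*) => {
        struct BuiltinFunctionTypes {
            $($name: (&'static str, &'static str),)* // (name, signature)
        }

        impl BuiltinFunctionTypes {
            fn new() -> Self {
                Self { $($name: (stringify!($name), $signature),)* }
            }
        }
    };
}

builtin_function_types! {
    round => "float -> float";
    ceil  => "float -> float";
    sqrt  => "float -> float";
}

thread_local! {
    // One TLS slot for everything, instead of one slot per function.
    static BUILTINS: BuiltinFunctionTypes = BuiltinFunctionTypes::new();
}

fn main() {
    BUILTINS.with(|b| {
        assert_eq!(b.round.0, "round");
        assert_eq!(b.ceil.1, "float -> float");
        assert_eq!(b.sqrt.0, "sqrt");
    });
}
```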
This is rarely used, but using Rc here like elsewhere allows us to
elide a few unnecessary memory allocations when copying such types.
The speed impact is not measurable though. With heaptrack I see that
we get rid of the last ~7600 allocations in my benchmark when cloning
Type.
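The effect can be sketched like this (hypothetical type, not the real `Type` definition): once the payload sits behind an `Rc`, cloning the containing type is a refcount bump rather than a deep copy.

```rust
use std::rc::Rc;

// Hypothetical stand-in for a function type whose argument list is
// shared behind an Rc instead of owned by value.
#[derive(Clone)]
struct FunctionType {
    args: Rc<Vec<String>>, // shared; `clone` does not re-allocate this
}

fn main() {
    let ty = FunctionType {
        args: Rc::new(vec!["int".into(), "float".into()]),
    };
    let copy = ty.clone();

    // Both clones point at the same allocation:
    assert!(Rc::ptr_eq(&ty.args, &copy.args));
    assert_eq!(Rc::strong_count(&ty.args), 2);
}
```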
These are known statically, so let's store them once in thread local
statics and return them from there.
Before:
```
Time (mean ± σ): 1.034 s ± 0.026 s [User: 0.733 s, System: 0.201 s]
Range (min … max): 1.000 s … 1.078 s 10 runs
allocations: 2917476
```
After:
```
Time (mean ± σ): 996.9 ms ± 17.7 ms [User: 708.9 ms, System: 202.9 ms]
Range (min … max): 977.8 ms … 1033.1 ms 10 runs
allocations: 2686677
```