Merge remote-tracking branch 'origin/trunk' into windows-linking

This commit is contained in:
Folkert de Vries 2022-08-02 14:11:02 +02:00
commit 19159d170a
45 changed files with 1849 additions and 991 deletions


@@ -71,6 +71,8 @@ To build the compiler, you need these installed:
* `libxkbcommon` - macOS seems to have it already; on Ubuntu or Debian you can get it with `apt-get install libxkbcommon-dev`
* On Debian/Ubuntu `sudo apt-get install pkg-config`
* LLVM, see below for version
* [rust](https://rustup.rs/)
* Also run `cargo install bindgen` after installing rust. You may need to open a new terminal.
To run the test suite (via `cargo test`), you additionally need to install:

Cargo.lock generated

@@ -3380,6 +3380,7 @@ dependencies = [
"roc_target",
"roc_types",
"roc_unify",
"roc_utils",
"serde_json",
"target-lexicon",
"tempfile",
@@ -3396,6 +3397,7 @@ dependencies = [
"roc_module",
"roc_region",
"roc_target",
"roc_utils",
"tempfile",
]

FAQ.md

@@ -1,6 +1,7 @@
# Frequently Asked Questions
# Why make a new editor instead of making an LSP plugin for VSCode, Vim or Emacs?
The Roc editor is one of the key areas where we want to innovate. Constraining ourselves to a plugin for existing editors would severely limit our possibilities for innovation.
A key part of our editor will be the use of plugins that are shipped with libraries. Think of a regex visualizer, parser debugger, or color picker. For library authors, it would be most convenient to write these plugins in Roc. Trying to dynamically load library plugins (written in Roc) in, for example, VSCode seems very difficult.
@@ -8,7 +9,7 @@ A key part of our editor will be the use of plugins that are shipped with librar
## Is there syntax highlighting for Vim/Emacs/VS Code or an LSP?
Not currently. Although they will presumably exist someday, while Roc is in the early days there's actually a conscious
effort to focus on the Roc Editor _instead of_ adding Roc support to other editors - specifically in order to give the Roc
Editor the best possible chance at kickstarting a virtuous cycle of plugin authorship.
This is an unusual approach, but there are more details in [this 2021 interview](https://youtu.be/ITrDd6-PbvY?t=212).
@@ -68,16 +69,18 @@ Both of these would make revising code riskier across the entire language, which
Another option would be to define that function equality always returns `False`. So both of these would evaluate
to `False`:
- `(\x -> x + 1) == (\x -> 1 + x)`
- `(\x -> x + 1) == (\x -> x + 1)`
This makes function equality effectively useless, while still technically allowing it. It has some other downsides:
- Now if you put a function inside a record, using `==` on that record will still type-check, but it will then return `False`. This could lead to bugs if you didn't realize you had accidentally put a function in there - for example, because you were actually storing a different type (e.g. an opaque type) and didn't realize it had a function inside it.
- If you put a function (or a value containing a function) into a `Dict` or `Set`, you'll never be able to get it out again. This is a common problem with [NaN](https://en.wikipedia.org/wiki/NaN), which is also defined not to be equal to itself.
The first of these problems could be addressed by having function equality always return `True` instead of `False` (since that way it would not affect other fields' equality checks in a record), but that design has its own problems:
- Although function equality is still useless, `(\x -> x + 1) == (\x -> x)` returns `True`. Even if it didn't lead to bugs in practice, this would certainly be surprising and confusing to beginners.
- Now if you put several different functions into a `Dict` or `Set`, only one of them will be kept; the others will be discarded or overwritten. This could cause bugs if a value stored a function internally, and then other functions relied on that internal function for correctness.
Each of these designs makes Roc a language that's some combination of more error-prone, more confusing, and more
brittle to change. Disallowing function equality at compile time eliminates all of these drawbacks.
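As a minimal hypothetical sketch (reusing the lambda syntax from the examples above), this is the kind of comparison the compiler rejects:

```
increment = \x -> x + 1
addOne = \x -> 1 + x

# Comparing these functions with `==` is a compile-time type error in Roc,
# rather than something that evaluates to `True` or `False` at runtime:
# increment == addOne
```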
@@ -107,12 +110,12 @@ To describe something that's neither an optional field nor an operation that can
more descriptive than something like `Maybe`. For example, if a record type has an `artist` field, but the artist
information may not be available, compare these three alternative ways to represent that:
- `artist : Maybe Artist`
- `artist : [Loading, Loaded Artist]`
- `artist : [Unspecified, Specified Artist]`
All three versions tell us that we might not have access to an `Artist`. However, the `Maybe` version doesn't
tell us why that might be. The `Loading`/`Loaded` version tells us we don't have one _yet_, because we're
still loading it, whereas the `Unspecified`/`Specified` version tells us we don't have one and shouldn't expect
to have one later if we wait, because it wasn't specified.
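As a hedged sketch of how the `Loading`/`Loaded` version might be consumed (assuming a hypothetical `Artist` record with a `name` field):

```
# artist : [Loading, Loaded Artist]
artistLabel = \artist ->
    when artist is
        Loading -> "(still loading)"
        Loaded a -> a.name
```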
@@ -135,8 +138,8 @@ _Since this is a FAQ answer, I'm going to assume familiarity with higher-kinded
A valuable aspect of Roc's type system is that it has decidable [principal](https://en.wikipedia.org/wiki/Principal_type)
type inference. This means that:
- At compile time, Roc can correctly infer the types for every expression in a program, even if you don't annotate any of the types.
- This inference always infers the most general type possible; you couldn't possibly add a valid type annotation that would make the type more flexible than the one that Roc would infer if you deleted the annotation.
It's been proven that any type system which supports either [higher-kinded polymorphism](https://www.cl.cam.ac.uk/~jdy22/papers/lightweight-higher-kinded-polymorphism.pdf) or [arbitrary-rank types](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/putting.pdf) cannot have decidable
principal type inference. With either of those features in the language, there will be situations where the compiler
@@ -152,9 +155,9 @@ sacrificing principal type inference to attain, so let's focus on the trade-offs
Supporting Rank-2 types in Roc has been discussed before, but it has several important downsides:
- It would increase the complexity of the language.
- It would make some compiler error messages more confusing (e.g. they might mention `forall` because that was the most general type that could be inferred, even if that wasn't helpful or related to the actual problem).
- It would substantially increase the complexity of the type checker, which would necessarily slow it down.
No implementation of Rank-2 types can remove any of these downsides. Thus far, we've been able to come up
with sufficiently nice APIs that only require Rank-1 types, and we haven't seen a really compelling use case
@@ -201,9 +204,9 @@ Culturally, to support HKP is to take a side, and to decline to support it is al
Given this, language designers have three options:
- Have HKP and have Monad in the standard library. Embrace them and build a culture and ecosystem around them.
- Have HKP and don't have Monad in the standard library. An alternate standard library built around monads will inevitably emerge, and both the community and ecosystem will divide themselves along pro-monad and anti-monad lines.
- Don't have HKP; build a culture and ecosystem around other things.
Considering that these are the only three options, I think the best choice for Roc—not only on a technical
level, but on a cultural level as well—is to make it clear that the plan is for Roc never to support HKP.
@@ -224,30 +227,30 @@ the result would be broken code and sadness.
So why does Roc have the specific syntax changes it does? Here are some brief explanations:
- `#` instead of `--` for comments - this allows [hashbang](https://senthilnayagan.medium.com/shebang-hashbang-10966b8f28a8)s to work without needing special syntax. That isn't a use case Elm supports, but it is one Roc is designed to support.
- `{}` instead of `()` for the unit type - Elm has both, and they can both be used as a unit type. Since `{}` has other uses in the type system, but `()` doesn't, I consider it redundant and took it out.
- `when`...`is` instead of `case`...`of` - I predict it will be easier for beginners to pick up, because usually the way I explain `case`...`of` to beginners is by saying the words "when" and "is" out loud - e.g. "when `color` is `Red`, it runs this first branch; when `color` is `Blue`, it runs this other branch..."
- `:` instead of `=` for record field definitions (e.g. `{ foo: bar }` where Elm syntax would be `{ foo = bar }`): I like `=` being reserved for definitions, and `:` is the most popular alternative.
- Backpassing syntax - since Roc is designed to be used for use cases like command-line apps, shell scripts, and servers, I expect chained effects to come up a lot more often than they do in Elm. I think backpassing is nice for those use cases, similarly to how `do` notation is nice for them in Haskell.
- Tag unions instead of Elm's custom types (aka algebraic data types). This isn't just a syntactic change; tag unions are mainly in Roc because they can facilitate errors being accumulated across chained effects, which (as noted a moment ago) I expect to be a lot more common in Roc than in Elm. If you have tag unions, you don't really need a separate language feature for algebraic data types, since closed tag unions essentially work the same way - aside from not giving you a way to selectively expose variants or define phantom types. Roc's opaque types language feature covers those use cases instead.
- No `::` operator, or `::` pattern matching for lists. Both of these are for the same reason: an Elm `List` is a linked list, so both prepending to it and removing an element from the front are very cheap operations. In contrast, a Roc `List` is a flat array, so both prepending to it and removing an element from the front are among the most expensive operations you can possibly do with it! To get good performance, this usage pattern should be encouraged in Elm and discouraged in Roc. Since having special syntax would encourage it, it would not be good for Roc to have that syntax!
- No `<|` operator. In Elm, I almost exclusively found myself wanting to use this in conjunction with anonymous functions (e.g. `foo <| \bar -> ...`) or conditionals (e.g. `foo <| if bar then ...`). In Roc you can do both of these without the `<|`. That means the main remaining use for `<|` is to reduce parentheses, but I tend to think `|>` is better at that (or else the parens are fine), so after the other syntactic changes, I considered `<|` an unnecessary stylistic alternative to `|>` or parens.
- The `|>` operator passes the expression before the `|>` as the _first_ argument to the function after the `|>` instead of as the last argument. See the section on currying for details on why this works this way.
- `:` instead of `type alias` - I like to avoid reserved keywords for terms that are desirable in userspace, so that people don't have to name things `typ` because `type` is a reserved keyword, or `clazz` because `class` is reserved. (I couldn't think of satisfactory alternatives for `as`, `when`, `is`, or `if` other than different reserved keywords. I could see an argument for `then`—and maybe even `is`—being replaced with a `->` or `=>` or something, but I don't anticipate missing either of those words much in userspace. `then` is used in JavaScript promises, but I think there are several better names for that function.)
- No underscores in variable names - I've seen Elm beginners reflexively use `snake_case` over `camelCase` and then need to un-learn the habit after the compiler accepted it. I'd rather have the compiler give feedback that this isn't the way to do it in Roc, and suggest a camelCase alternative. I've also seen underscores used for lazy naming, e.g. `foo` and then `foo_`. If lazy naming is the goal, `foo2` is just as concise as `foo_`, but `foo3` is more concise than `foo__`. So in a way, removing `_` is a forcing function for improved laziness. (Of course, more descriptive naming would be even better.)
- Trailing commas - I've seen people walk away (in some cases physically!) from Elm as soon as they saw the leading commas in collection literals. While I think they've made a mistake by not pushing past this aesthetic preference to give the language a chance, I also would prefer not to put them in a position to make such a mistake in the first place. Secondarily, while I'm personally fine with either style, between the two I prefer the look of trailing commas.
- The `!` unary prefix operator. I didn't want to have a `Basics` module (more on that in a moment), and without `Basics`, this would either need to be called fully-qualified (`Bool.not`) or else a module import of `Bool.{ not }` would be necessary. Both seemed less nice than supporting the `!` prefix that's common to so many widely-used languages, especially when we already have a unary prefix operator of `-` for negation (e.g. `-x`).
- `!=` for the inequality operator (instead of Elm's `/=`) - this one pairs more naturally with the `!` prefix operator and is also very common in other languages.
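A small hypothetical snippet showing a few of these choices together (`#` comments, `when`...`is`, and `:` in record literals):

```
# Describe a color (hypothetical example)
describe = \color ->
    when color is
        Red -> { label: "red", code: 1 }
        Blue -> { label: "blue", code: 2 }
        Other name -> { label: name, code: 0 }
```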
Roc also has a different standard library from Elm. Some of the differences come down to platforms and applications (e.g. having `Task` in Roc's standard library wouldn't make sense), but others do not. Here are some brief explanations:
- No `Basics` module. I wanted to have a simple rule of "all modules in the standard library are imported by default, and so are their exposed types," and that's it. Given that I wanted the comparison operators (e.g. `<`) to work only on numbers, it ended up that having `Num` and `Bool` modules meant that almost nothing would be left for a `Basics` equivalent in Roc except `identity` and `Never`. The Roc type `[]` (empty tag union) is equivalent to `Never`, so that wasn't necessary, and I generally think that `identity` is a good concept but a sign of an incomplete API whenever its use comes up in practice. For example, instead of calling `|> List.filterMap identity` I'd rather have access to a more self-descriptive function like `|> List.dropNothings`. With `Num` and `Bool`, and without `identity` and `Never`, there was nothing left in `Basics`.
- `Str` instead of `String` - after using the `str` type in Rust, I realized I had no issue whatsoever with the more concise name, especially since it was used in so many places (similar to `Msg` and `Cmd` in Elm) - so I decided to save a couple of letters.
- No function composition operators - I stopped using these in Elm so long ago, at one point I forgot they were in the language! See the FAQ entry on currying for details about why.
- No `Char`. What most people think of as a "character" is a rendered glyph. However, rendered glyphs are composed of [grapheme clusters](https://stackoverflow.com/a/27331885), which are a variable number of Unicode code points - and there's no upper bound on how many code points there can be in a single cluster. In a world of emoji, I think this makes `Char` error-prone and it's better to have `Str` be the only first-class unit. For convenience when working with Unicode code points (e.g. for performance-critical tasks like parsing), the single-quote syntax is sugar for the corresponding `U32` code point - for example, writing `'鹏'` is exactly the same as writing `40527`. Like Rust, you get a compiler error if you put something in single quotes that's not a valid [Unicode scalar value](http://www.unicode.org/glossary/#unicode_scalar_value).
- No `Debug.log` - the editor can do a better job at this, or you can write `expect x != x` to see what `x` is when the expectation fails. Using the editor means your code doesn't change, and using `expect` gives a natural reminder to remove the debugging code before shipping: the build will fail.
- No `Debug.todo` - instead you can write a type annotation with no implementation below it; the type checker will treat it normally, but attempting to use the value will cause a runtime exception. This is a feature I've often wanted in Elm, because I like prototyping APIs by writing out the types only, but then when I want the compiler to type-check them for me, I end up having to add `Debug.todo` in various places.
- No `Maybe`. See the "Why doesn't Roc have a `Maybe`/`Option`/`Optional` type" FAQ question.
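A hedged sketch illustrating two of the points above - the annotation-without-implementation pattern and the single-quote code point sugar (`parseVersion` and its types are hypothetical):

```
# Type-checks normally; actually calling parseVersion raises a runtime
# exception, which serves the same prototyping role as Elm's Debug.todo.
parseVersion : Str -> Result U32 [InvalidVersion]

# Single-quote syntax is sugar for the corresponding U32 code point:
expect '鹏' == 40527
```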
## Why aren't Roc functions curried by default?
@@ -259,15 +262,15 @@ by default" for the sake of brevity.
As I see it, currying has one major upside and several major downsides. The upside:
- It makes function calls more concise in some cases.
The downsides:
- It lowers error message quality, because there can no longer be an error for "function called with too few arguments." (Calling a function with fewer arguments is always valid in curried functions; the error you get instead will unavoidably be some other sort of type mismatch, and it will be up to you to figure out that the real problem was that you forgot an argument.)
- It makes the `|>` operator more error-prone in some cases.
- It makes higher-order function calls need more parentheses in some cases.
- It significantly increases the language's learning curve. (More on this later.)
- It facilitates pointfree function composition. (More on why this is listed as a downside later.)
There's also a downside that it would make runtime performance of compiled programs worse by default, There's also a downside that it would make runtime performance of compiled programs worse by default,
but I assume it would be possible to optimize that away at the cost of slightly longer compile times. but I assume it would be possible to optimize that away at the cost of slightly longer compile times.
@@ -284,8 +287,8 @@ In Roc, this code produces `"Hello, World!"`
    |> Str.concat "!"
```

-This is because Roc's `|>` operator uses the expression before the `|>` as the *first* argument to the function
+This is because Roc's `|>` operator uses the expression before the `|>` as the _first_ argument to the function
after it. For functions where both arguments have the same type, but it's obvious which argument goes where (e.g.
`Str.concat "Hello, " "World!"`, `List.concat [1, 2] [3, 4]`), this works out well. Another example would
be `|> Num.sub 1`, which subtracts 1 from whatever came before the `|>`.
@@ -318,7 +321,7 @@ This is a fundamental design tension. One argument order works well with `|>` (a
today) and with passing anonymous functions to higher-order functions, and the other works well with currying.
It's impossible to have both.

-Of note, one possible design is to have currying while also having `|>` pass the *last* argument instead of the first.
+Of note, one possible design is to have currying while also having `|>` pass the _last_ argument instead of the first.
This is what Elm does, and it makes pipeline-friendliness and curry-friendliness the same thing. However, it also
means that either `|> Str.concat "!"` would add the `"!"` to the front of the string, or else `Str.concat`'s
arguments would have to be flipped - meaning that `Str.concat "Hello, World" "!"` would evaluate to `"!Hello, World"`.
@@ -338,9 +341,9 @@ first pure functional programming language.

Here was my experience teaching currying:

-* The only way to avoid teaching it is to refuse to explain why multi-argument functions have multiple `->`s in them. (If you don't explain it, at least one student will ask about it - and many if not all of the others will wonder.)
-* Teaching currying properly takes a solid chunk of time, because it requires explaining partial application, explaining how curried functions facilitate partial application, how function signatures accurately reflect that they're curried, and going through examples for all of these.
-* Even after doing all this, and iterating on my approach each time to try to explain it more effectively than I had the time before, I'd estimate that under 50% of the class ended up actually understanding currying. I consistently heard that in practice it only "clicked" for most people after spending significantly more time writing code with it.
+- The only way to avoid teaching it is to refuse to explain why multi-argument functions have multiple `->`s in them. (If you don't explain it, at least one student will ask about it - and many if not all of the others will wonder.)
+- Teaching currying properly takes a solid chunk of time, because it requires explaining partial application, explaining how curried functions facilitate partial application, how function signatures accurately reflect that they're curried, and going through examples for all of these.
+- Even after doing all this, and iterating on my approach each time to try to explain it more effectively than I had the time before, I'd estimate that under 50% of the class ended up actually understanding currying. I consistently heard that in practice it only "clicked" for most people after spending significantly more time writing code with it.

This is not the end of the world, especially because it's easy enough to think "okay, I still don't totally get this
even after that explanation, but I can remember that function arguments are separated by `->` in this language
@@ -396,10 +399,35 @@ Currying facilitates the antipattern of pointfree function composition, which I

Stacking up all these downsides of currying against the one upside of making certain function calls more concise,
I concluded that it would be a mistake to have it in Roc.

-## Why are both rust and zig used?
-
-At the start of the project, we did not know zig well and it was not production ready. The reason zig entered the project because it has many different backends (wasm, various assembly formats, llvm IR) and can create code with minimal dependencies
-Rust has much more overhead in terms of code size. It's objectively not a lot, but it's less with zig.
-We think rust is a nicer language to work in for a project of this size. It has a type system that we're more familiar with, it has a package ecosystem and excellent tooling.
+## Will Roc ever have linear types, dependent types, refinement types, or uniqueness types?
+
+The plan is for Roc to never have linear types, dependent types, refinement types, or uniqueness types.
+
+Fast compile times are a primary goal for Roc, and a major downside of refinement types is an exponential increase in compile times. This rules out refinement types for Roc.
+
+If Roc were to have linear types or uniqueness types, they would move things that are currently behind-the-scenes performance optimizations into the type system. For them to be effective across the ecosystem, they couldn't really be opt-in; everyone would have to use them, even those for whom the current system of behind-the-scenes optimizations already met their performance needs without any added type system complexity. Since the overwhelming majority of use cases are expected to fall into that latter group, adding linear types or uniqueness types to Roc would be a net negative for the ecosystem.
+
+Dependent types are too risky of a bet for Roc to take. They have been implemented in programming languages for three decades, and for at least half that time period, it has been easy to find predictions that dependent types will be the future of type systems. Much harder to find are success stories of complex applications built with dependent types, which realized benefits that significantly outweighed the substantial complexity of introducing value semantics to a type system.
+
+Perhaps more success stories will emerge over time, but in the meantime it remains an open question whether dependent types are net beneficial in practice to application development. Further experimentation would be required to answer this question, and Roc is not the right language to do those experiments.
+
+## Will Roc's compiler ever be self-hosted? (That is, will it ever be written in Roc?)
+
+The plan is to never implement Roc's compiler in Roc.
+
+The goal is for Roc's compiler to deliver the best user experience possible. Compiler performance is strongly influenced by how memory is used, and there are many performance benefits to be gained from using a systems language like Rust, which offers more direct control over memory than Roc ever should.
+
+Roc isn't trying to be the best possible language for high-performance compiler development, but it is trying to have a high-performance compiler. The best tool for that job is a language other than Roc, so that's what we're using!
+
+## Why does Roc use both Rust and Zig?
+
+Roc's compiler has always been written in [Rust](https://www.rust-lang.org/). Roc's standard library was briefly written in Rust, but was soon rewritten in [Zig](https://ziglang.org/).
+
+There were a few reasons for this rewrite.
+
+1. We struggled to get Rust to emit LLVM bitcode in the format we needed, which is important so that LLVM can do whole-program optimizations across the standard library and compiled application.
+2. Since the standard library has to interact with raw generated machine code (or LLVM bitcode), the Rust code unavoidably needed `unsafe` annotations all over the place. This made one of Rust's biggest selling points inapplicable in this particular use case.
+3. Given that Rust's main selling points were inapplicable here (its package ecosystem being another), Zig's much faster compile times are a welcome benefit.
+4. Zig has more tools for working in a memory-unsafe environment, such as reporting memory leaks in tests. These have been helpful in finding bugs that are out of scope for safe Rust.
+
+The split of Rust for the compiler and Zig for the standard library has worked well so far, and there are no plans to change it.

View file

@@ -13,7 +13,7 @@
// use crate::pattern::{bindings_from_patterns, canonicalize_pattern, Pattern};
// use crate::procedure::References;
use roc_collections::all::{default_hasher, ImMap, MutMap, MutSet, SendMap};
-use roc_error_macros::{todo_abilities, todo_opaques};
+use roc_error_macros::{internal_error, todo_abilities};
use roc_module::ident::Lowercase;
use roc_module::symbol::Symbol;
use roc_parse::ast::{self, CommentOrNewline, Defs, TypeDef, TypeHeader, ValueDef as AstValueDef};
@@ -21,6 +21,7 @@ use roc_parse::pattern::PatternType;
use roc_problem::can::{Problem, RuntimeError, ShadowKind};
use roc_region::all::{Loc, Region};
use roc_types::subs::{VarStore, Variable};
+use roc_types::types::AliasKind;
use std::collections::HashMap;
use std::fmt::Debug;
use ven_graph::{strongly_connected_components, topological_sort_into_groups};
@@ -274,7 +275,7 @@ fn to_pending_def<'a>(
            }
        }

-        Type(TypeDef::Opaque { .. }) => todo_opaques!(),
+        Type(TypeDef::Opaque { .. }) => internal_error!("opaques not implemented"),
        Type(TypeDef::Ability { .. }) => todo_abilities!(),

        Value(AstValueDef::Expect { .. }) => todo!(),
@@ -341,6 +342,7 @@ fn from_pending_alias<'a>(
                typ: symbol,
                variable_region: loc_lowercase.region,
                variable_name: loc_lowercase.value.clone(),
+                alias_kind: AliasKind::Structural,
            });
        }
    }
@@ -373,7 +375,12 @@ fn from_pending_alias<'a>(
        scope.add_alias(env.pool, symbol, named, annotation_id);
    } else {
-        env.problem(Problem::CyclicAlias(symbol, name.region, vec![]));
+        env.problem(Problem::CyclicAlias(
+            symbol,
+            name.region,
+            vec![],
+            AliasKind::Structural,
+        ));
        return output;
    }
} else {

View file

@@ -8,7 +8,7 @@ use roc_can::num::{
    finish_parsing_base, finish_parsing_float, finish_parsing_num, ParsedNumResult,
};
use roc_collections::all::BumpMap;
-use roc_error_macros::todo_opaques;
+use roc_error_macros::internal_error;
use roc_module::symbol::{Interns, Symbol};
use roc_parse::ast::{StrLiteral, StrSegment};
use roc_parse::pattern::PatternType;
@@ -272,7 +272,7 @@ pub fn to_pattern2<'a>(
            }
        }

-        OpaqueRef(..) => todo_opaques!(),
+        OpaqueRef(..) => internal_error!("opaques not implemented"),

        Apply(tag, patterns) => {
            let can_patterns = PoolVec::with_capacity(patterns.len() as u32, env.pool);

View file

@@ -7,7 +7,7 @@ use roc_error_macros::todo_abilities;
use roc_module::ident::{Ident, Lowercase, TagName, Uppercase};
use roc_module::symbol::Symbol;
use roc_region::all::{Loc, Region};
-use roc_types::types::{Problem, RecordField};
+use roc_types::types::{AliasKind, Problem, RecordField};
use roc_types::{subs::Variable, types::ErrorType};

use crate::lang::env::Env;
@@ -793,6 +793,7 @@ fn to_type_apply<'a>(
            region,
            alias_needs: alias.targs.len() as u8,
            type_got: args.len() as u8,
+            alias_kind: AliasKind::Structural,
        });
        return error;
    }

View file

@@ -304,8 +304,11 @@ pub fn build_file<'a>(
        host_input_path.as_path().to_str().unwrap(),
        app_o_file.to_str().unwrap(),
    ];

+    let str_host_obj_path = bitcode::get_builtins_host_obj_path();
+
    if matches!(opt_level, OptLevel::Development) {
-        inputs.push(bitcode::BUILTINS_HOST_OBJ_PATH);
+        inputs.push(&str_host_obj_path);
    }

    let (mut child, _) = // TODO use lld

View file

@@ -336,31 +336,22 @@ pub fn test(matches: &ArgMatches, triple: Triple) -> io::Result<i32> {
    let path = Path::new(filename);

    // Spawn the root task
-    let path = path.canonicalize().unwrap_or_else(|err| {
-        use io::ErrorKind::*;
-        match err.kind() {
-            NotFound => {
-                let path_string = path.to_string_lossy();
-                // TODO these should use roc_reporting to display nicer error messages.
-                match matches.value_source(ROC_FILE) {
-                    Some(ValueSource::DefaultValue) => {
-                        eprintln!(
-                            "\nNo `.roc` file was specified, and the current directory does not contain a {} file to use as a default.\n\nYou can run `roc help` for more information on how to provide a .roc file.\n",
-                            DEFAULT_ROC_FILENAME
-                        )
-                    }
-                    _ => eprintln!("\nThis file was not found: {}\n\nYou can run `roc help` for more information on how to provide a .roc file.\n", path_string),
-                }
-                process::exit(1);
-            }
-            _ => {
-                todo!("TODO Gracefully handle opening {:?} - {:?}", path, err);
-            }
-        }
-    });
+    if !path.exists() {
+        let path_string = path.to_string_lossy();
+        // TODO these should use roc_reporting to display nicer error messages.
+        match matches.value_source(ROC_FILE) {
+            Some(ValueSource::DefaultValue) => {
+                eprintln!(
+                    "\nNo `.roc` file was specified, and the current directory does not contain a {} file to use as a default.\n\nYou can run `roc help` for more information on how to provide a .roc file.\n",
+                    DEFAULT_ROC_FILENAME
+                )
+            }
+            _ => eprintln!("\nThis file was not found: {}\n\nYou can run `roc help` for more information on how to provide a .roc file.\n", path_string),
+        }
+        process::exit(1);
+    }

    let arena = &arena;
    let target = &triple;
@@ -372,7 +363,7 @@ pub fn test(matches: &ArgMatches, triple: Triple) -> io::Result<i32> {
    let loaded = roc_load::load_and_monomorphize(
        arena,
-        path,
+        path.to_path_buf(),
        subs_by_module,
        target_info,
        // TODO: expose this from CLI?
@@ -439,10 +430,8 @@ pub fn test(matches: &ArgMatches, triple: Triple) -> io::Result<i32> {
        31 // red
    };

-    println!();
    println!(
-        "\x1B[{failed_color}m{failed}\x1B[39m failed and \x1B[32m{passed}\x1B[39m passed in {} ms.\n",
+        "\n\x1B[{failed_color}m{failed}\x1B[39m failed and \x1B[32m{passed}\x1B[39m passed in {} ms.\n",
        total_time.as_millis(),
    );
@@ -509,15 +498,11 @@ pub fn build(
    let path = Path::new(filename);

    // Spawn the root task
-    let path = path.canonicalize().unwrap_or_else(|err| {
-        use io::ErrorKind::*;
-        match err.kind() {
-            NotFound => {
-                let path_string = path.to_string_lossy();
-                // TODO these should use roc_reporting to display nicer error messages.
-                match matches.value_source(ROC_FILE) {
+    if !path.exists() {
+        let path_string = path.to_string_lossy();
+        // TODO these should use roc_reporting to display nicer error messages.
+        match matches.value_source(ROC_FILE) {
            Some(ValueSource::DefaultValue) => {
                eprintln!(
                    "\nNo `.roc` file was specified, and the current directory does not contain a {} file to use as a default.\n\nYou can run `roc help` for more information on how to provide a .roc file.\n",
@@ -527,19 +512,14 @@ pub fn build(
            _ => eprintln!("\nThis file was not found: {}\n\nYou can run `roc help` for more information on how to provide a .roc file.\n", path_string),
        }
        process::exit(1);
-            }
-            _ => {
-                todo!("TODO Gracefully handle opening {:?} - {:?}", path, err);
-            }
-        }
-    });
+    }

    let target_valgrind = matches.is_present(FLAG_VALGRIND);
    let res_binary_path = build_file(
        &arena,
        &triple,
-        path,
+        path.to_path_buf(),
        opt_level,
        emit_debug_info,
        emit_timings,

View file

@@ -351,6 +351,11 @@ pub fn root_dir() -> PathBuf {
    path.pop();
    path.pop();

+    // running cargo with --target will put us in the target dir
+    if path.ends_with("target") {
+        path.pop();
+    }
+
    path
}

View file

@@ -26,6 +26,7 @@ roc_gen_dev = { path = "../gen_dev", default-features = false }
roc_reporting = { path = "../../reporting" }
roc_error_macros = { path = "../../error_macros" }
roc_std = { path = "../../roc_std", default-features = false }
+roc_utils = { path = "../../utils" }
bumpalo = { version = "3.8.0", features = ["collections"] }
libloading = "0.7.1"
tempfile = "3.2.0"

View file

@@ -3,6 +3,7 @@ use libloading::{Error, Library};
use roc_builtins::bitcode;
use roc_error_macros::internal_error;
use roc_mono::ir::OptLevel;
+use roc_utils::get_lib_path;
use std::collections::HashMap;
use std::env;
use std::io;
@@ -66,12 +67,13 @@ pub fn link(
fn find_zig_str_path() -> PathBuf {
    // First try using the lib path relative to the executable location.
-    let exe_relative_str_path = std::env::current_exe()
-        .ok()
-        .and_then(|path| Some(path.parent()?.join("lib").join("str.zig")));
-    if let Some(exe_relative_str_path) = exe_relative_str_path {
-        if std::path::Path::exists(&exe_relative_str_path) {
-            return exe_relative_str_path;
+    let lib_path_opt = get_lib_path();
+
+    if let Some(lib_path) = lib_path_opt {
+        let zig_str_path = lib_path.join("str.zig");
+
+        if std::path::Path::exists(&zig_str_path) {
+            return zig_str_path;
        }
    }
@@ -87,7 +89,7 @@ fn find_zig_str_path() -> PathBuf {
        return zig_str_path;
    }

-    panic!("cannot find `str.zig`. Launch me from either the root of the roc repo or one level down(roc/examples, roc/cli...)")
+    panic!("cannot find `str.zig`. Check the source code in find_zig_str_path() to show all the paths I tried.")
}

fn find_wasi_libc_path() -> PathBuf {
@@ -124,7 +126,7 @@ pub fn build_zig_host_native(
            "build-exe",
            "-fPIE",
            shared_lib_path.to_str().unwrap(),
-            bitcode::BUILTINS_HOST_OBJ_PATH,
+            &bitcode::get_builtins_host_obj_path(),
        ]);
    } else {
        command.args(&["build-obj", "-fPIC"]);
@@ -231,7 +233,7 @@ pub fn build_zig_host_native(
            "build-exe",
            "-fPIE",
            shared_lib_path.to_str().unwrap(),
-            bitcode::BUILTINS_HOST_OBJ_PATH,
+            &bitcode::get_builtins_host_obj_path(),
        ]);
    } else {
        command.args(&["build-obj", "-fPIC"]);
@@ -343,7 +345,7 @@ pub fn build_c_host_native(
    if let Some(shared_lib_path) = shared_lib_path {
        command.args(&[
            shared_lib_path.to_str().unwrap(),
-            bitcode::BUILTINS_HOST_OBJ_PATH,
+            &bitcode::get_builtins_host_obj_path(),
            "-fPIE",
            "-pie",
            "-lm",
@@ -1199,7 +1201,7 @@ pub fn preprocess_host_wasm32(host_input_path: &Path, preprocessed_host_path: &P
    let mut command = Command::new(&zig_executable());
    let args = &[
        "wasm-ld",
-        bitcode::BUILTINS_WASM32_OBJ_PATH,
+        &bitcode::get_builtins_wasm32_obj_path(),
        host_input,
        WASI_LIBC_PATH,
        WASI_COMPILER_RT_PATH, // builtins need __multi3, __udivti3, __fixdfti

View file

@@ -10,6 +10,7 @@ roc_collections = { path = "../collections" }
roc_region = { path = "../region" }
roc_module = { path = "../module" }
roc_target = { path = "../roc_target" }
+roc_utils = { path = "../../utils" }
lazy_static = "1.4.0"

[build-dependencies]

View file

@@ -2608,10 +2608,10 @@ test "getScalarUnsafe" {
}

pub fn strCloneTo(
+    string: RocStr,
    ptr: [*]u8,
    offset: usize,
    extra_offset: usize,
-    string: RocStr,
) callconv(.C) usize {
    const WIDTH: usize = @sizeOf(RocStr);

    if (string.isSmallStr()) {

View file

@@ -4,6 +4,7 @@ use std::ffi::OsStr;
use std::fs;
use std::io;
use std::path::Path;
+use std::path::PathBuf;
use std::process::Command;
use std::str;
@@ -53,19 +54,9 @@ fn main() {
    #[cfg(not(windows))]
    const BUILTINS_HOST_FILE: &str = "builtins-host.o";

-    generate_object_file(
-        &bitcode_path,
-        "BUILTINS_HOST_O",
-        "object",
-        BUILTINS_HOST_FILE,
-    );
+    generate_object_file(&bitcode_path, "object", BUILTINS_HOST_FILE);

-    generate_object_file(
-        &bitcode_path,
-        "BUILTINS_WASM32_O",
-        "wasm32-object",
-        "builtins-wasm32.o",
-    );
+    generate_object_file(&bitcode_path, "wasm32-object", "builtins-wasm32.o");

    copy_zig_builtins_to_target_dir(&bitcode_path);
@@ -84,21 +75,10 @@ fn main() {
        .expect("Failed to delete temp dir zig_cache_dir.");
}

-fn generate_object_file(
-    bitcode_path: &Path,
-    env_var_name: &str,
-    zig_object: &str,
-    object_file_name: &str,
-) {
-    let out_dir = env::var_os("OUT_DIR").unwrap();
-    let dest_obj_path = Path::new(&out_dir).join(object_file_name);
+fn generate_object_file(bitcode_path: &Path, zig_object: &str, object_file_name: &str) {
+    let dest_obj_path = get_lib_dir().join(object_file_name);
    let dest_obj = dest_obj_path.to_str().expect("Invalid dest object path");

-    // set the variable (e.g. BUILTINS_HOST_O) that is later used in
-    // `compiler/builtins/src/bitcode.rs` to load the object file
-    println!("cargo:rustc-env={}={}", env_var_name, dest_obj);

    let src_obj_path = bitcode_path.join(object_file_name);
    let src_obj = src_obj_path.to_str().expect("Invalid src object path");
@@ -146,20 +126,29 @@ fn generate_bc_file(bitcode_path: &Path, zig_object: &str, file_name: &str) {
    );
}

-fn copy_zig_builtins_to_target_dir(bitcode_path: &Path) {
-    // To enable roc to find the zig biultins, we want them to be moved to a folder next to the roc executable.
-    // So if <roc_folder>/roc is the executable. The zig files will be in <roc_folder>/lib/*.zig
+pub fn get_lib_dir() -> PathBuf {
    // Currently we have the OUT_DIR variable which points to `/target/debug/build/roc_builtins-*/out/`.
    // So we just need to shed a 3 of the outer layers to get `/target/debug/` and then add `lib`.
    let out_dir = env::var_os("OUT_DIR").unwrap();
-    let target_profile_dir = Path::new(&out_dir)
+    let lib_path = Path::new(&out_dir)
        .parent()
        .and_then(|path| path.parent())
        .and_then(|path| path.parent())
        .unwrap()
        .join("lib");

+    // create dir of it does not exist
+    fs::create_dir_all(lib_path.clone()).expect("Failed to make lib dir.");
+
+    lib_path
+}
+
+fn copy_zig_builtins_to_target_dir(bitcode_path: &Path) {
+    // To enable roc to find the zig biultins, we want them to be moved to a folder next to the roc executable.
+    // So if <roc_folder>/roc is the executable. The zig files will be in <roc_folder>/lib/*.zig
+    let target_profile_dir = get_lib_dir();

    let zig_src_dir = bitcode_path.join("src");

    cp_unless_zig_cache(&zig_src_dir, &target_profile_dir).unwrap_or_else(|err| {

View file

@@ -1,16 +1,29 @@
use roc_module::symbol::Symbol;
use roc_target::TargetInfo;
+use roc_utils::get_lib_path;
use std::ops::Index;

-pub const BUILTINS_HOST_OBJ_PATH: &str = env!(
-    "BUILTINS_HOST_O",
-    "Env var BUILTINS_HOST_O not found. Is there a problem with the build script?"
-);
+pub fn get_builtins_host_obj_path() -> String {
+    let builtins_host_path = get_lib_path()
+        .expect("Failed to find lib dir.")
+        .join("builtins-host.o");
+
+    builtins_host_path
+        .into_os_string()
+        .into_string()
+        .expect("Failed to convert builtins_host_path to str")
+}

-pub const BUILTINS_WASM32_OBJ_PATH: &str = env!(
-    "BUILTINS_WASM32_O",
-    "Env var BUILTINS_WASM32_O not found. Is there a problem with the build script?"
-);
+pub fn get_builtins_wasm32_obj_path() -> String {
+    let builtins_wasm32_path = get_lib_path()
+        .expect("Failed to find lib dir.")
+        .join("builtins-wasm32.o");
+
+    builtins_wasm32_path
+        .into_os_string()
+        .into_string()
+        .expect("Failed to convert builtins_wasm32_path to str")
+}

#[derive(Debug, Default, Copy, Clone)]
pub struct IntrinsicName {

View file

@@ -566,6 +566,7 @@ fn can_annotation_help(
            region,
            alias_needs: alias.type_variables.len() as u8,
            type_got: args.len() as u8,
+            alias_kind: alias.kind,
        });
        return error;
    }

View file

@@ -713,7 +713,6 @@ pub struct PatternEq(
pub struct OpportunisticResolve {
    /// The specialized type of this lookup, to try to resolve.
    pub specialization_variable: Variable,
-    pub specialization_expectation: Index<Expected<Type>>,

    /// The ability member to try to resolve.
    pub member: Symbol,

View file

@@ -369,6 +369,7 @@ fn canonicalize_alias<'a>(
                typ: symbol,
                variable_region: loc_lowercase.region,
                variable_name: loc_lowercase.value.clone(),
+                alias_kind: AliasKind::Structural,
            });
        }
        AliasKind::Opaque => {
@@ -2688,6 +2689,7 @@ fn correct_mutual_recursive_type_alias<'a>(
            env,
            &mut alias.typ,
            alias_name,
+            alias.kind,
            alias.region,
            rest,
            can_still_report_error,
@@ -2870,7 +2872,7 @@ fn make_tag_union_recursive_help<'a, 'b>(
        }
        _ => {
            // take care to report a cyclic alias only once (not once for each alias in the cycle)
-            mark_cyclic_alias(env, typ, symbol, region, others, *can_report_cyclic_error);
+            mark_cyclic_alias(
+                env,
+                typ,
+                symbol,
+                alias_kind,
+                region,
+                others,
+                *can_report_cyclic_error,
+            );
            *can_report_cyclic_error = false;

            Cyclic
@@ -2882,6 +2892,7 @@ fn mark_cyclic_alias<'a>(
    env: &mut Env<'a>,
    typ: &mut Type,
    symbol: Symbol,
+    alias_kind: AliasKind,
    region: Region,
    others: Vec<Symbol>,
    report: bool,
@@ -2890,7 +2901,7 @@ fn mark_cyclic_alias<'a>(
    *typ = Type::Erroneous(problem);

    if report {
-        let problem = Problem::CyclicAlias(symbol, region, others);
+        let problem = Problem::CyclicAlias(symbol, region, others, alias_kind);
        env.problems.push(problem);
    }
}


@@ -1568,13 +1568,6 @@ fn canonicalize_var_lookup(
             output.references.insert_value_lookup(symbol);

             if scope.abilities_store.is_ability_member_name(symbol) {
-                // Is there a shadow implementation with the same name? If so, we might be in
-                // the def for that shadow. In that case add a value lookup of the shadow impl,
-                // so that it's marked as possibly-recursive.
-                if let Some(shadow) = scope.get_member_shadow(symbol) {
-                    output.references.insert_value_lookup(shadow.value);
-                }
-
                 AbilityMember(
                     symbol,
                     Some(scope.abilities_store.fresh_specialization_id()),


@@ -206,7 +206,6 @@ pub fn canonicalize_def_header_pattern<'a>(
         // Likely a specialization of an ability.
         Some(ability_member_name) => {
             output.references.insert_bound(symbol);
-            output.references.insert_value_lookup(ability_member_name);
             Pattern::AbilityMemberSpecialization {
                 ident: symbol,
                 specializes: ability_member_name,


@@ -439,9 +439,6 @@ pub fn constrain_expr(
             if let Some(specialization_id) = specialization_id {
                 env.resolutions_to_make.push(OpportunisticResolve {
                     specialization_variable: specialization_var,
-                    specialization_expectation: constraints.push_expected_type(
-                        Expected::NoExpectation(Type::Variable(specialization_var)),
-                    ),
                     member: symbol,
                     specialization_id,
                 });


@@ -1,17 +1,21 @@
-use crate::llvm::bitcode::call_bitcode_fn;
-use crate::llvm::build::{store_roc_value, Env};
+use crate::debug_info_init;
+use crate::llvm::bitcode::call_str_bitcode_fn;
+use crate::llvm::build::{get_tag_id, store_roc_value, Env};
 use crate::llvm::build_list::{self, incrementing_elem_loop};
-use crate::llvm::convert::basic_type_from_layout;
+use crate::llvm::convert::{basic_type_from_layout, RocUnion};
 use inkwell::builder::Builder;
-use inkwell::types::BasicType;
-use inkwell::values::{BasicValueEnum, IntValue, PointerValue};
+use inkwell::module::Linkage;
+use inkwell::types::{BasicMetadataTypeEnum, BasicType};
+use inkwell::values::{BasicValueEnum, FunctionValue, IntValue, PointerValue};
 use inkwell::AddressSpace;
 use roc_builtins::bitcode;
 use roc_module::symbol::Symbol;
 use roc_mono::layout::{Builtin, Layout, LayoutIds, UnionLayout};
 use roc_region::all::Region;

-use super::build::{load_symbol_and_layout, Scope};
+use super::build::{
+    add_func, load_roc_value, load_symbol_and_layout, use_roc_value, FunctionSpec, Scope,
+};

 #[derive(Debug, Clone, Copy)]
 struct Cursors<'ctx> {
@@ -204,19 +208,19 @@ fn build_clone<'a, 'ctx, 'env>(
             when_recursive,
         ),

-        Layout::Struct {
-            field_layouts: _, ..
-        } => {
-            if layout.safe_to_memcpy() {
-                build_copy(env, ptr, cursors.offset, value)
-            } else {
-                todo!()
-            }
-        }
+        Layout::Struct { field_layouts, .. } => build_clone_struct(
+            env,
+            layout_ids,
+            ptr,
+            cursors,
+            value,
+            field_layouts,
+            when_recursive,
+        ),

         Layout::LambdaSet(_) => unreachable!("cannot compare closures"),

-        Layout::Union(_union_layout) => {
+        Layout::Union(union_layout) => {
             if layout.safe_to_memcpy() {
                 let ptr = unsafe {
                     env.builder
@@ -230,24 +234,50 @@ fn build_clone<'a, 'ctx, 'env>(
                 store_roc_value(env, layout, ptr, value);

-                let width = value.get_type().size_of().unwrap();
-                env.builder
-                    .build_int_add(cursors.offset, width, "new_offset")
+                cursors.extra_offset
             } else {
-                todo!()
+                build_clone_tag(
+                    env,
+                    layout_ids,
+                    ptr,
+                    cursors,
+                    value,
+                    union_layout,
+                    WhenRecursive::Loop(union_layout),
+                )
             }
         }

-        /*
-        Layout::Boxed(inner_layout) => build_box_eq(
-            env,
-            layout_ids,
-            when_recursive,
-            lhs_layout,
-            inner_layout,
-            lhs_val,
-            rhs_val,
-        ),
+        Layout::Boxed(inner_layout) => {
+            // write the offset
+            build_copy(env, ptr, cursors.offset, cursors.extra_offset.into());
+
+            let source = value.into_pointer_value();
+            let value = load_roc_value(env, *inner_layout, source, "inner");
+
+            let inner_width = env
+                .ptr_int()
+                .const_int(inner_layout.stack_size(env.target_info) as u64, false);
+
+            let new_extra = env
+                .builder
+                .build_int_add(cursors.offset, inner_width, "new_extra");
+
+            let cursors = Cursors {
+                offset: cursors.extra_offset,
+                extra_offset: new_extra,
+            };
+
+            build_clone(
+                env,
+                layout_ids,
+                ptr,
+                cursors,
+                value,
+                *inner_layout,
+                when_recursive,
+            )
+        }

         Layout::RecursivePointer => match when_recursive {
             WhenRecursive::Unreachable => {
@@ -260,27 +290,249 @@ fn build_clone<'a, 'ctx, 'env>(
                 let bt = basic_type_from_layout(env, &layout);

                 // cast the i64 pointer to a pointer to block of memory
-                let field1_cast = env
-                    .builder
-                    .build_bitcast(lhs_val, bt, "i64_to_opaque")
-                    .into_pointer_value();
-
-                let field2_cast = env
-                    .builder
-                    .build_bitcast(rhs_val, bt, "i64_to_opaque")
-                    .into_pointer_value();
+                let field1_cast = env.builder.build_bitcast(value, bt, "i64_to_opaque");

-                build_tag_eq(
+                build_clone_tag(
                     env,
                     layout_ids,
+                    ptr,
+                    cursors,
+                    field1_cast,
+                    union_layout,
                     WhenRecursive::Loop(union_layout),
-                    &union_layout,
-                    field1_cast.into(),
-                    field2_cast.into(),
                 )
             }
         },
-        */
+    }
+}
#[allow(clippy::too_many_arguments)]
fn build_clone_struct<'a, 'ctx, 'env>(
env: &Env<'a, 'ctx, 'env>,
layout_ids: &mut LayoutIds<'a>,
ptr: PointerValue<'ctx>,
cursors: Cursors<'ctx>,
value: BasicValueEnum<'ctx>,
field_layouts: &[Layout<'a>],
when_recursive: WhenRecursive<'a>,
) -> IntValue<'ctx> {
let layout = Layout::struct_no_name_order(field_layouts);
if layout.safe_to_memcpy() {
build_copy(env, ptr, cursors.offset, value)
} else {
let mut cursors = cursors;
let structure = value.into_struct_value();
for (i, field_layout) in field_layouts.iter().enumerate() {
let field = env
.builder
.build_extract_value(structure, i as _, "extract")
.unwrap();
let field = use_roc_value(env, *field_layout, field, "field");
let new_extra = build_clone(
env,
layout_ids,
ptr,
cursors,
field,
*field_layout,
when_recursive,
);
let field_width = env
.ptr_int()
.const_int(field_layout.stack_size(env.target_info) as u64, false);
cursors.extra_offset = new_extra;
cursors.offset = env
.builder
.build_int_add(cursors.offset, field_width, "offset");
}
cursors.extra_offset
}
}
#[allow(clippy::too_many_arguments)]
fn build_clone_tag<'a, 'ctx, 'env>(
env: &Env<'a, 'ctx, 'env>,
layout_ids: &mut LayoutIds<'a>,
ptr: PointerValue<'ctx>,
cursors: Cursors<'ctx>,
value: BasicValueEnum<'ctx>,
union_layout: UnionLayout<'a>,
when_recursive: WhenRecursive<'a>,
) -> IntValue<'ctx> {
let layout = Layout::Union(union_layout);
let layout_id = layout_ids.get(Symbol::CLONE, &layout);
let fn_name = layout_id.to_symbol_string(Symbol::CLONE, &env.interns);
let function = match env.module.get_function(fn_name.as_str()) {
Some(function_value) => function_value,
None => {
let block = env.builder.get_insert_block().expect("to be in a function");
let di_location = env.builder.get_current_debug_location().unwrap();
let function_type = env.ptr_int().fn_type(
&[
env.context.i8_type().ptr_type(AddressSpace::Generic).into(),
env.ptr_int().into(),
env.ptr_int().into(),
BasicMetadataTypeEnum::from(value.get_type()),
],
false,
);
let function_value = add_func(
env.context,
env.module,
&fn_name,
FunctionSpec::known_fastcc(function_type),
Linkage::Private,
);
let subprogram = env.new_subprogram(&fn_name);
function_value.set_subprogram(subprogram);
env.dibuilder.finalize();
build_clone_tag_help(
env,
layout_ids,
union_layout,
when_recursive,
function_value,
);
env.builder.position_at_end(block);
env.builder
.set_current_debug_location(env.context, di_location);
function_value
}
};
let call = env.builder.build_call(
function,
&[
ptr.into(),
cursors.offset.into(),
cursors.extra_offset.into(),
value.into(),
],
"build_clone_tag",
);
call.set_call_convention(function.get_call_conventions());
let result = call.try_as_basic_value().left().unwrap();
result.into_int_value()
}
#[allow(clippy::too_many_arguments)]
fn build_clone_tag_help<'a, 'ctx, 'env>(
env: &Env<'a, 'ctx, 'env>,
layout_ids: &mut LayoutIds<'a>,
union_layout: UnionLayout<'a>,
when_recursive: WhenRecursive<'a>,
fn_val: FunctionValue<'ctx>,
) {
use bumpalo::collections::Vec;
let context = &env.context;
let builder = env.builder;
// Add a basic block for the entry point
let entry = context.append_basic_block(fn_val, "entry");
builder.position_at_end(entry);
debug_info_init!(env, fn_val);
// Add args to scope
// let arg_symbol = Symbol::ARG_1;
// tag_value.set_name(arg_symbol.as_str(&env.interns));
let mut it = fn_val.get_param_iter();
let ptr = it.next().unwrap().into_pointer_value();
let offset = it.next().unwrap().into_int_value();
let extra_offset = it.next().unwrap().into_int_value();
let tag_value = it.next().unwrap();
let cursors = Cursors {
offset,
extra_offset,
};
let parent = fn_val;
debug_assert!(tag_value.is_pointer_value());
use UnionLayout::*;
match union_layout {
NonRecursive(&[]) => {
// we're comparing empty tag unions; this code is effectively unreachable
env.builder.build_unreachable();
}
NonRecursive(tags) => {
let id = get_tag_id(env, parent, &union_layout, tag_value);
let switch_block = env.context.append_basic_block(parent, "switch_block");
env.builder.build_unconditional_branch(switch_block);
let mut cases = Vec::with_capacity_in(tags.len(), env.arena);
for (tag_id, field_layouts) in tags.iter().enumerate() {
let block = env.context.append_basic_block(parent, "tag_id_modify");
env.builder.position_at_end(block);
let raw_data_ptr = env
.builder
.build_struct_gep(
tag_value.into_pointer_value(),
RocUnion::TAG_DATA_INDEX,
"tag_data",
)
.unwrap();
let layout = Layout::struct_no_name_order(field_layouts);
let basic_type = basic_type_from_layout(env, &layout);
let data_ptr = env.builder.build_pointer_cast(
raw_data_ptr,
basic_type.ptr_type(AddressSpace::Generic),
"data_ptr",
);
let data = env.builder.build_load(data_ptr, "load_data");
let answer =
build_clone(env, layout_ids, ptr, cursors, data, layout, when_recursive);
env.builder.build_return(Some(&answer));
cases.push((id.get_type().const_int(tag_id as u64, false), block));
}
env.builder.position_at_end(switch_block);
match cases.pop() {
Some((_, default)) => {
env.builder.build_switch(id, default, &cases);
}
None => {
// we're serializing an empty tag union; this code is effectively unreachable
env.builder.build_unreachable();
}
}
}
         _ => todo!(),
     }
 }
@@ -329,14 +581,15 @@ fn build_clone_builtin<'a, 'ctx, 'env>(
         Builtin::Str => {
             //
-            call_bitcode_fn(
+            call_str_bitcode_fn(
                 env,
+                &[value],
                 &[
                     ptr.into(),
                     cursors.offset.into(),
                     cursors.extra_offset.into(),
-                    value,
                 ],
+                crate::llvm::bitcode::BitcodeReturns::Basic,
                 bitcode::STR_CLONE_TO,
             )
             .into_int_value()
@@ -380,10 +633,6 @@ fn build_clone_builtin<'a, 'ctx, 'env>(
                     "elements",
                 );

-                // where we write the elements' stack representation
-                // let element_offset = bd.build_alloca(env.ptr_int(), "element_offset");
-                // bd.build_store(element_offset, elements_start_offset);
-
                 // if the element has any pointers, we clone them to this offset
                 let rest_offset = bd.build_alloca(env.ptr_int(), "rest_offset");
@@ -404,26 +653,24 @@ fn build_clone_builtin<'a, 'ctx, 'env>(
                         bd.build_int_add(elements_start_offset, current_offset, "current_offset");
                     let current_extra_offset = bd.build_load(rest_offset, "element_offset");

-                    let offset = current_offset; // env.ptr_int().const_int(60, false);
-                    let extra_offset = current_extra_offset.into_int_value(); // env.ptr_int().const_int(60 + 24, false);
+                    let offset = current_offset;
+                    let extra_offset = current_extra_offset.into_int_value();
+
+                    let cursors = Cursors {
+                        offset,
+                        extra_offset,
+                    };

                     let new_offset = build_clone(
                         env,
                         layout_ids,
                         ptr,
-                        Cursors {
-                            // offset: current_offset,
-                            // extra_offset: current_extra_offset.into_int_value(),
-                            offset,
-                            extra_offset,
-                        },
+                        cursors,
                         element,
                         *elem,
                         when_recursive,
                     );

-                    // let new_offset = env.ptr_int().const_int(60 + 24 + 34, false);
                     bd.build_store(rest_offset, new_offset);
                 };


@@ -10,7 +10,8 @@ use roc_collections::MutMap;
 use roc_derive::SharedDerivedModule;
 use roc_error_macros::internal_error;
 use roc_module::symbol::ModuleId;
-use roc_solve::solve::{compact_lambda_sets_of_vars, Phase, Pools};
+use roc_solve::solve::Pools;
+use roc_solve::specialize::{compact_lambda_sets_of_vars, DerivedEnv, Phase};
 use roc_types::subs::{get_member_lambda_sets_at_region, Content, FlatType, LambdaSet};
 use roc_types::subs::{ExposedTypesStorageSubs, Subs, Variable};
 use roc_unify::unify::{unify as unify_unify, Env, Mode, Unified};
@@ -272,15 +273,18 @@ pub fn unify(
     let mut pools = Pools::default();
     let late_phase = LatePhase { home, abilities };

+    let derived_env = DerivedEnv {
+        derived_module,
+        exposed_types: exposed_by_module,
+    };
+
     let must_implement_constraints = compact_lambda_sets_of_vars(
         subs,
-        derived_module,
+        &derived_env,
         arena,
         &mut pools,
         lambda_sets_to_specialize,
         &late_phase,
-        exposed_by_module,
     );
     // At this point we can't do anything with must-implement constraints, since we're no
     // longer solving. We must assume that they were totally caught during solving.


@@ -1005,6 +1005,8 @@ define_builtins! {
         30 DEV_TMP5: "#dev_tmp5"
         31 ATTR_INVALID: "#attr_invalid"
+        32 CLONE: "#clone" // internal function that clones a value into a buffer
     }

     // Fake module for synthesizing and storing derived implementations
     1 DERIVED_SYNTH: "#Derived" => {


@@ -3999,9 +3999,10 @@ pub fn with_hole<'a>(
             }

             // creating a record from the var will unpack it if it's just a single field.
-            let layout = layout_cache
-                .from_var(env.arena, record_var, env.subs)
-                .unwrap_or_else(|err| panic!("TODO turn fn_var into a RuntimeError {:?}", err));
+            let layout = match layout_cache.from_var(env.arena, record_var, env.subs) {
+                Ok(layout) => layout,
+                Err(_) => return Stmt::RuntimeError("Can't create record with improper layout"),
+            };

             let field_symbols = field_symbols.into_bump_slice();


@@ -45,12 +45,13 @@ pub enum Problem {
         shadow: Loc<Ident>,
         kind: ShadowKind,
     },
-    CyclicAlias(Symbol, Region, Vec<Symbol>),
+    CyclicAlias(Symbol, Region, Vec<Symbol>, AliasKind),
     BadRecursion(Vec<CycleEntry>),
     PhantomTypeArgument {
         typ: Symbol,
         variable_region: Region,
         variable_name: Lowercase,
+        alias_kind: AliasKind,
     },
     UnboundTypeVariable {
         typ: Symbol,


@@ -5,3 +5,4 @@
 pub mod ability;
 pub mod module;
 pub mod solve;
+pub mod specialize;

File diff suppressed because it is too large


@@ -0,0 +1,782 @@
//! Module [specialize] resolves specialization lambda sets.
use std::collections::VecDeque;
use bumpalo::Bump;
use roc_can::{
abilities::{AbilitiesStore, ImplKey},
module::ExposedByModule,
};
use roc_collections::{VecMap, VecSet};
use roc_debug_flags::dbg_do;
#[cfg(debug_assertions)]
use roc_debug_flags::ROC_TRACE_COMPACTION;
use roc_derive::SharedDerivedModule;
use roc_derive_key::{DeriveError, DeriveKey};
use roc_error_macros::{internal_error, todo_abilities};
use roc_module::symbol::{ModuleId, Symbol};
use roc_types::{
subs::{
get_member_lambda_sets_at_region, Content, Descriptor, GetSubsSlice, LambdaSet, Mark,
OptVariable, Rank, Subs, SubsSlice, UlsOfVar, Variable,
},
types::{AliasKind, MemberImpl, Uls},
};
use roc_unify::unify::{unify, Env as UEnv, Mode, MustImplementConstraints};
use crate::solve::{deep_copy_var_in, introduce, Pools};
/// What phase in the compiler is reaching out to specialize lambda sets?
/// This is important to distinguish subtle differences in the behavior of the solving algorithm.
//
// TODO the APIs of this trait suck, this needs a nice cleanup.
pub trait Phase {
/// The regular type-solving phase, or during some later phase of compilation.
/// During the solving phase we must anticipate that some information is still unknown and react to
/// that; during late phases, we expect that all information is resolved.
const IS_LATE: bool;
fn with_module_abilities_store<T, F>(&self, module: ModuleId, f: F) -> T
where
F: FnMut(&AbilitiesStore) -> T;
/// Given a known lambda set's ambient function in an external module, copy that ambient
/// function into the given subs.
fn copy_lambda_set_ambient_function_to_home_subs(
&self,
external_lambda_set_var: Variable,
external_module_id: ModuleId,
home_subs: &mut Subs,
) -> Variable;
/// Find the ambient function var at a given region for an ability member definition (not a
/// specialization!), and copy that into the given subs.
fn get_and_copy_ability_member_ambient_function(
&self,
ability_member: Symbol,
region: u8,
home_subs: &mut Subs,
) -> Variable;
}
pub(crate) struct SolvePhase<'a> {
pub abilities_store: &'a AbilitiesStore,
}
impl Phase for SolvePhase<'_> {
const IS_LATE: bool = false;
fn with_module_abilities_store<T, F>(&self, _module: ModuleId, mut f: F) -> T
where
F: FnMut(&AbilitiesStore) -> T,
{
// During solving we're only aware of our module's abilities store.
f(self.abilities_store)
}
fn copy_lambda_set_ambient_function_to_home_subs(
&self,
external_lambda_set_var: Variable,
_external_module_id: ModuleId,
home_subs: &mut Subs,
) -> Variable {
// During solving we're only aware of our module's abilities store, the var must
// be in our module store. Even if the specialization lambda set comes from another
// module, we should have taken care to import it before starting solving in this module.
let LambdaSet {
ambient_function, ..
} = home_subs.get_lambda_set(external_lambda_set_var);
ambient_function
}
fn get_and_copy_ability_member_ambient_function(
&self,
ability_member: Symbol,
region: u8,
home_subs: &mut Subs,
) -> Variable {
// During solving we're only aware of our module's abilities store, the var must
// be in our module store. Even if the specialization lambda set comes from another
// module, we should have taken care to import it before starting solving in this module.
let member_def = self
.abilities_store
.member_def(ability_member)
.unwrap_or_else(|| {
internal_error!(
"{:?} is not resolved, or not an ability member!",
ability_member
)
});
let member_var = member_def.signature_var();
let region_lset = get_member_lambda_sets_at_region(home_subs, member_var, region);
let LambdaSet {
ambient_function, ..
} = home_subs.get_lambda_set(region_lset);
ambient_function
}
}
pub struct DerivedEnv<'a> {
pub derived_module: &'a SharedDerivedModule,
/// Exposed types needed by the derived module.
pub exposed_types: &'a ExposedByModule,
}
#[derive(Default)]
pub struct AwaitingSpecializations {
// What variables' specialized lambda sets in `uls_of_var` will be unlocked for specialization
// when an implementation key's specialization is resolved?
waiting: VecMap<ImplKey, VecSet<Variable>>,
uls_of_var: UlsOfVar,
}
impl AwaitingSpecializations {
pub fn remove_for_specialized(&mut self, subs: &Subs, impl_key: ImplKey) -> UlsOfVar {
let spec_variables = self
.waiting
.remove(&impl_key)
.map(|(_, set)| set)
.unwrap_or_default();
let mut result = UlsOfVar::default();
for var in spec_variables {
let target_lambda_sets = self
.uls_of_var
.remove_dependent_unspecialized_lambda_sets(subs, var);
result.extend(var, target_lambda_sets);
}
result
}
pub fn add(
&mut self,
impl_key: ImplKey,
var: Variable,
lambda_sets: impl IntoIterator<Item = Variable>,
) {
self.uls_of_var.extend(var, lambda_sets);
let waiting = self.waiting.get_or_insert(impl_key, Default::default);
waiting.insert(var);
}
pub fn union(&mut self, other: Self) {
for (impl_key, waiting_vars) in other.waiting {
let waiting = self.waiting.get_or_insert(impl_key, Default::default);
waiting.extend(waiting_vars);
}
self.uls_of_var.union(other.uls_of_var);
}
pub fn waiting_for(&self, impl_key: ImplKey) -> bool {
self.waiting.contains_key(&impl_key)
}
}
pub struct CompactionResult {
pub obligations: MustImplementConstraints,
pub awaiting_specialization: AwaitingSpecializations,
}
#[cfg(debug_assertions)]
fn trace_compaction_step_1(subs: &Subs, c_a: Variable, uls_a: &[Variable]) {
let c_a = roc_types::subs::SubsFmtContent(subs.get_content_without_compacting(c_a), subs);
let uls_a = uls_a
.iter()
.map(|v| {
format!(
"{:?}",
roc_types::subs::SubsFmtContent(subs.get_content_without_compacting(*v), subs)
)
})
.collect::<Vec<_>>()
.join(",");
eprintln!("===lambda set compaction===");
eprintln!(" concrete type: {:?}", c_a);
eprintln!(" step 1:");
eprintln!(" uls_a = {{ {} }}", uls_a);
}
#[cfg(debug_assertions)]
fn trace_compaction_step_2(subs: &Subs, uls_a: &[Variable]) {
let uls_a = uls_a
.iter()
.map(|v| {
format!(
"{:?}",
roc_types::subs::SubsFmtContent(subs.get_content_without_compacting(*v), subs)
)
})
.collect::<Vec<_>>()
.join(",");
eprintln!(" step 2:");
eprintln!(" uls_a' = {{ {} }}", uls_a);
}
#[cfg(debug_assertions)]
fn trace_compaction_step_3start() {
eprintln!(" step 3:");
}
#[cfg(debug_assertions)]
fn trace_compaction_step_3iter_start(
subs: &Subs,
iteration_lambda_set: Variable,
t_f1: Variable,
t_f2: Variable,
) {
let iteration_lambda_set = roc_types::subs::SubsFmtContent(
subs.get_content_without_compacting(iteration_lambda_set),
subs,
);
let t_f1 = roc_types::subs::SubsFmtContent(subs.get_content_without_compacting(t_f1), subs);
let t_f2 = roc_types::subs::SubsFmtContent(subs.get_content_without_compacting(t_f2), subs);
eprintln!(" - iteration: {:?}", iteration_lambda_set);
eprintln!(" {:?}", t_f1);
eprintln!(" ~ {:?}", t_f2);
}
#[cfg(debug_assertions)]
#[rustfmt::skip]
fn trace_compaction_step_3iter_end(subs: &Subs, t_f_result: Variable, skipped: bool) {
let t_f_result =
roc_types::subs::SubsFmtContent(subs.get_content_without_compacting(t_f_result), subs);
if skipped {
eprintln!(" SKIP");
}
eprintln!(" = {:?}\n", t_f_result);
}
macro_rules! trace_compact {
(1. $subs:expr, $c_a:expr, $uls_a:expr) => {{
dbg_do!(ROC_TRACE_COMPACTION, {
trace_compaction_step_1($subs, $c_a, $uls_a)
})
}};
(2. $subs:expr, $uls_a:expr) => {{
dbg_do!(ROC_TRACE_COMPACTION, {
trace_compaction_step_2($subs, $uls_a)
})
}};
(3start.) => {{
dbg_do!(ROC_TRACE_COMPACTION, { trace_compaction_step_3start() })
}};
(3iter_start. $subs:expr, $iteration_lset:expr, $t_f1:expr, $t_f2:expr) => {{
dbg_do!(ROC_TRACE_COMPACTION, {
trace_compaction_step_3iter_start($subs, $iteration_lset, $t_f1, $t_f2)
})
}};
(3iter_end. $subs:expr, $t_f_result:expr) => {{
dbg_do!(ROC_TRACE_COMPACTION, {
trace_compaction_step_3iter_end($subs, $t_f_result, false)
})
}};
(3iter_end_skipped. $subs:expr, $t_f_result:expr) => {{
dbg_do!(ROC_TRACE_COMPACTION, {
trace_compaction_step_3iter_end($subs, $t_f_result, true)
})
}};
}
#[inline(always)]
fn iter_concrete_of_unspecialized<'a>(
subs: &'a Subs,
c_a: Variable,
uls: &'a [Uls],
) -> impl Iterator<Item = &'a Uls> {
uls.iter()
.filter(move |Uls(var, _, _)| subs.equivalent_without_compacting(*var, c_a))
}
/// Gets the unique unspecialized lambda resolving to concrete type `c_a` in a list of
/// unspecialized lambda sets.
#[inline(always)]
fn unique_unspecialized_lambda(subs: &Subs, c_a: Variable, uls: &[Uls]) -> Option<Uls> {
let mut iter_concrete = iter_concrete_of_unspecialized(subs, c_a, uls);
let uls = iter_concrete.next()?;
debug_assert!(iter_concrete.next().is_none(), "multiple concrete");
Some(*uls)
}
#[must_use]
pub fn compact_lambda_sets_of_vars<P: Phase>(
subs: &mut Subs,
derived_env: &DerivedEnv,
arena: &Bump,
pools: &mut Pools,
uls_of_var: UlsOfVar,
phase: &P,
) -> CompactionResult {
let mut must_implement = MustImplementConstraints::default();
let mut awaiting_specialization = AwaitingSpecializations::default();
let mut uls_of_var_queue = VecDeque::with_capacity(uls_of_var.len());
uls_of_var_queue.extend(uls_of_var.drain());
// Suppose a type variable `a` with `uls_of_var` mapping `uls_a = {l1, ... ln}` has been instantiated to a concrete type `C_a`.
while let Some((c_a, uls_a)) = uls_of_var_queue.pop_front() {
let c_a = subs.get_root_key_without_compacting(c_a);
// 1. Let each `l` in `uls_a` be of form `[solved_lambdas + ... + C:f:r + ...]`.
// NB: There may be multiple unspecialized lambdas of form `C:f:r, C:f1:r1, ..., C:fn:rn` in `l`.
// In this case, let `t1, ... tm` be the other unspecialized lambdas not of form `C:_:_`,
// that is, none of which are now specialized to the type `C`. Then, deconstruct
// `l` such that `l' = [solved_lambdas + t1 + ... + tm + C:f:r]` and `l1 = [[] + C:f1:r1], ..., ln = [[] + C:fn:rn]`.
// Replace `l` with `l', l1, ..., ln` in `uls_a`, flattened.
// TODO: the flattening step described above
let uls_a = {
let mut uls = uls_a.into_vec();
// De-duplicate lambdas by root key.
uls.iter_mut().for_each(|v| *v = subs.get_root_key(*v));
uls.sort();
uls.dedup();
uls
};
trace_compact!(1. subs, c_a, &uls_a);
// The flattening step - remove lambda sets that don't reference the concrete var, and
// flatten lambda sets that reference it more than once.
let mut uls_a: Vec<_> = uls_a
.into_iter()
.flat_map(|lambda_set| {
let LambdaSet {
solved,
recursion_var,
unspecialized,
ambient_function,
} = subs.get_lambda_set(lambda_set);
let lambda_set_rank = subs.get_rank(lambda_set);
let unspecialized = subs.get_subs_slice(unspecialized);
// TODO: is it faster to traverse once, see if we only have one concrete lambda, and
// bail in that happy-path, rather than always splitting?
let (concrete, mut not_concrete): (Vec<_>, Vec<_>) = unspecialized
.iter()
.copied()
.partition(|Uls(var, _, _)| subs.equivalent_without_compacting(*var, c_a));
if concrete.len() == 1 {
// No flattening needs to be done, just return the lambda set as-is
return vec![lambda_set];
}
// Must flatten
concrete
.into_iter()
.enumerate()
.map(|(i, concrete_lambda)| {
let (var, unspecialized) = if i == 0 {
// The first lambda set contains one concrete lambda, plus all solved
// lambdas, plus all other unspecialized lambdas.
// l' = [solved_lambdas + t1 + ... + tm + C:f:r]
let unspecialized = SubsSlice::extend_new(
&mut subs.unspecialized_lambda_sets,
not_concrete
.drain(..)
.chain(std::iter::once(concrete_lambda)),
);
(lambda_set, unspecialized)
} else {
// All the other lambda sets consists only of their respective concrete
// lambdas.
// ln = [[] + C:fn:rn]
let unspecialized = SubsSlice::extend_new(
&mut subs.unspecialized_lambda_sets,
[concrete_lambda],
);
let var = subs.fresh(Descriptor {
content: Content::Error,
rank: lambda_set_rank,
mark: Mark::NONE,
copy: OptVariable::NONE,
});
(var, unspecialized)
};
subs.set_content(
var,
Content::LambdaSet(LambdaSet {
solved,
recursion_var,
unspecialized,
ambient_function,
}),
);
var
})
.collect()
})
.collect();
// 2. Now, each `l` in `uls_a` has a unique unspecialized lambda of form `C:f:r`.
// Sort `uls_a` primarily by `f` (arbitrary order), and secondarily by `r` in descending order.
uls_a.sort_by(|v1, v2| {
let unspec_1 = subs.get_subs_slice(subs.get_lambda_set(*v1).unspecialized);
let unspec_2 = subs.get_subs_slice(subs.get_lambda_set(*v2).unspecialized);
let Uls(_, f1, r1) = unique_unspecialized_lambda(subs, c_a, unspec_1).unwrap();
let Uls(_, f2, r2) = unique_unspecialized_lambda(subs, c_a, unspec_2).unwrap();
match f1.cmp(&f2) {
std::cmp::Ordering::Equal => {
// Order by descending order of region.
r2.cmp(&r1)
}
ord => ord,
}
});
trace_compact!(2. subs, &uls_a);
// 3. For each `l` in `uls_a` with unique unspecialized lambda `C:f:r`:
// 1. Let `t_f1` be the directly ambient function of the lambda set containing `C:f:r`. Remove `C:f:r` from `t_f1`'s lambda set.
// - For example, `(b' -[[] + Fo:f:2]-> {})` if `C:f:r=Fo:f:2`. Removing `Fo:f:2`, we get `(b' -[[]]-> {})`.
// 2. Let `t_f2` be the directly ambient function of the specialization lambda set resolved by `C:f:r`.
// - For example, `(b -[[] + b:g:1]-> {})` if `C:f:r=Fo:f:2`, running on example from above.
// 3. Unify `t_f1 ~ t_f2`.
trace_compact!(3start.);
for l in uls_a {
let compaction_result =
compact_lambda_set(subs, derived_env, arena, pools, c_a, l, phase);
match compaction_result {
OneCompactionResult::Compacted {
new_obligations,
new_lambda_sets_to_specialize,
} => {
must_implement.extend(new_obligations);
uls_of_var_queue.extend(new_lambda_sets_to_specialize.drain());
}
OneCompactionResult::MustWaitForSpecialization(impl_key) => {
awaiting_specialization.add(impl_key, c_a, [l])
}
}
}
}
CompactionResult {
obligations: must_implement,
awaiting_specialization,
}
}
enum OneCompactionResult {
Compacted {
new_obligations: MustImplementConstraints,
new_lambda_sets_to_specialize: UlsOfVar,
},
MustWaitForSpecialization(ImplKey),
}
#[must_use]
#[allow(clippy::too_many_arguments)]
fn compact_lambda_set<P: Phase>(
subs: &mut Subs,
derived_env: &DerivedEnv,
arena: &Bump,
pools: &mut Pools,
resolved_concrete: Variable,
this_lambda_set: Variable,
phase: &P,
) -> OneCompactionResult {
// 3. For each `l` in `uls_a` with unique unspecialized lambda `C:f:r`:
// 1. Let `t_f1` be the directly ambient function of the lambda set containing `C:f:r`. Remove `C:f:r` from `t_f1`'s lambda set.
// - For example, `(b' -[[] + Fo:f:2]-> {})` if `C:f:r=Fo:f:2`. Removing `Fo:f:2`, we get `(b' -[[]]-> {})`.
// 2. Let `t_f2` be the directly ambient function of the specialization lambda set resolved by `C:f:r`.
// - For example, `(b -[[] + b:g:1]-> {})` if `C:f:r=Fo:f:2`, from the algorithm's running example.
// 3. Unify `t_f1 ~ t_f2`.
let LambdaSet {
solved,
recursion_var,
unspecialized,
ambient_function: t_f1,
} = subs.get_lambda_set(this_lambda_set);
let target_rank = subs.get_rank(this_lambda_set);
debug_assert!(!unspecialized.is_empty());
let unspecialized = subs.get_subs_slice(unspecialized);
// 1. Let `t_f1` be the directly ambient function of the lambda set containing `C:f:r`.
let Uls(c, f, r) = unique_unspecialized_lambda(subs, resolved_concrete, unspecialized).unwrap();
debug_assert!(subs.equivalent_without_compacting(c, resolved_concrete));
// Now decide: do we
// - proceed with specialization
// - simply drop the specialization lambda set (due to an error)
// - or do we need to wait, because we don't know enough information for the specialization yet?
let specialization_decision = make_specialization_decision(subs, phase, c, f);
let specialization_key_or_drop = match specialization_decision {
SpecializeDecision::Specialize(key) => Ok(key),
SpecializeDecision::Drop => Err(()),
SpecializeDecision::PendingSpecialization(impl_key) => {
// Bail, we need to wait for the specialization to be known.
return OneCompactionResult::MustWaitForSpecialization(impl_key);
}
};
// 1b. Remove `C:f:r` from `t_f1`'s lambda set.
let new_unspecialized: Vec<_> = unspecialized
.iter()
.filter(|Uls(v, _, _)| !subs.equivalent_without_compacting(*v, resolved_concrete))
.copied()
.collect();
debug_assert_eq!(new_unspecialized.len(), unspecialized.len() - 1);
let t_f1_lambda_set_without_concrete = LambdaSet {
solved,
recursion_var,
unspecialized: SubsSlice::extend_new(
&mut subs.unspecialized_lambda_sets,
new_unspecialized,
),
ambient_function: t_f1,
};
subs.set_content(
this_lambda_set,
Content::LambdaSet(t_f1_lambda_set_without_concrete),
);
let specialization_key = match specialization_key_or_drop {
Ok(specialization_key) => specialization_key,
Err(()) => {
// Do nothing other than to remove the concrete lambda to drop from the lambda set,
// which we already did in 1b above.
trace_compact!(3iter_end_skipped. subs, t_f1);
return OneCompactionResult::Compacted {
new_obligations: Default::default(),
new_lambda_sets_to_specialize: Default::default(),
};
}
};
let specialization_ambient_function_var = get_specialization_lambda_set_ambient_function(
subs,
derived_env,
phase,
f,
r,
specialization_key,
target_rank,
);
let t_f2 = match specialization_ambient_function_var {
Ok(lset) => lset,
Err(()) => {
// Do nothing other than to remove the concrete lambda to drop from the lambda set,
// which we already did in 1b above.
trace_compact!(3iter_end_skipped. subs, t_f1);
return OneCompactionResult::Compacted {
new_obligations: Default::default(),
new_lambda_sets_to_specialize: Default::default(),
};
}
};
// Ensure the specialized ambient function we'll unify with is not a generalized one, but one
// at the rank of the lambda set being compacted.
let t_f2 = deep_copy_var_in(subs, target_rank, pools, t_f2, arena);
// 3. Unify `t_f1 ~ t_f2`.
trace_compact!(3iter_start. subs, this_lambda_set, t_f1, t_f2);
let (vars, new_obligations, new_lambda_sets_to_specialize, _meta) =
unify(&mut UEnv::new(subs), t_f1, t_f2, Mode::EQ)
.expect_success("ambient functions don't unify");
trace_compact!(3iter_end. subs, t_f1);
introduce(subs, target_rank, pools, &vars);
OneCompactionResult::Compacted {
new_obligations,
new_lambda_sets_to_specialize,
}
}
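Step 1b above filters the `unspecialized` slice down to everything except the single `Uls` whose variable resolved to the concrete type. A simplified, self-contained sketch of that removal, using plain `u32` variables and `==` where the compiler uses `subs.equivalent_without_compacting` (all names here are illustrative, not the compiler's API):

```rust
// Simplified stand-in for the compiler's `Uls(var, member, region)`.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Uls(u32, char, u8);

// Step 1b: drop the single unspecialized lambda whose variable resolved to
// the concrete type; exactly one entry must match, as the debug assertion
// in `compact_lambda_set` checks.
fn remove_resolved(unspecialized: &[Uls], resolved_concrete: u32) -> Vec<Uls> {
    let new: Vec<Uls> = unspecialized
        .iter()
        .filter(|Uls(v, _, _)| *v != resolved_concrete)
        .copied()
        .collect();
    debug_assert_eq!(new.len(), unspecialized.len() - 1);
    new
}

fn main() {
    let uls = [Uls(1, 'f', 2), Uls(7, 'g', 1)];
    // variable 1 resolved to a concrete type, so `1:f:2` is removed
    assert_eq!(remove_resolved(&uls, 1), vec![Uls(7, 'g', 1)]);
}
```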
#[derive(Debug)]
enum SpecializationTypeKey {
Opaque(Symbol),
Derived(DeriveKey),
Immediate(Symbol),
}
enum SpecializeDecision {
Specialize(SpecializationTypeKey),
Drop,
/// Only relevant during module solving of recursive defs - we don't yet know the
/// specialization type for a declared ability implementation, so we must hold off on
/// specialization.
PendingSpecialization(ImplKey),
}
fn make_specialization_decision<P: Phase>(
subs: &Subs,
phase: &P,
var: Variable,
ability_member: Symbol,
) -> SpecializeDecision {
use Content::*;
use SpecializationTypeKey::*;
match subs.get_content_without_compacting(var) {
Alias(opaque, _, _, AliasKind::Opaque) if opaque.module_id() != ModuleId::NUM => {
if P::IS_LATE {
SpecializeDecision::Specialize(Opaque(*opaque))
} else {
// Solving within a module.
phase.with_module_abilities_store(opaque.module_id(), |abilities_store| {
let impl_key = ImplKey {
opaque: *opaque,
ability_member,
};
match abilities_store.get_implementation(impl_key) {
None => {
// Doesn't specialize; an error will already be reported for this.
SpecializeDecision::Drop
}
Some(MemberImpl::Error | MemberImpl::Derived) => {
// TODO: probably not right, we may want to choose a derive decision!
SpecializeDecision::Specialize(Opaque(*opaque))
}
Some(MemberImpl::Impl(specialization_symbol)) => {
match abilities_store.specialization_info(*specialization_symbol) {
Some(_) => SpecializeDecision::Specialize(Opaque(*opaque)),
// If we expect a specialization impl but don't yet know it, we must hold off
// compacting the lambda set until the specialization is well-known.
None => SpecializeDecision::PendingSpecialization(impl_key),
}
}
}
})
}
}
Structure(_) | Alias(_, _, _, _) => {
// This is a structural type, find the name of the derived ability function it
// should use.
match roc_derive_key::Derived::encoding(subs, var) {
Ok(derived) => match derived {
roc_derive_key::Derived::Immediate(imm) => {
SpecializeDecision::Specialize(Immediate(imm))
// todo!("deal with lambda set extraction from immediates")
}
roc_derive_key::Derived::Key(derive_key) => {
SpecializeDecision::Specialize(Derived(derive_key))
}
},
Err(DeriveError::UnboundVar) => {
// not specialized yet, but that also means that it can't possibly be derivable
// at this point?
// TODO: is this right? Revisit if it causes us problems in the future.
SpecializeDecision::Drop
}
Err(DeriveError::Underivable) => {
// we should have reported an error for this; drop the lambda set.
SpecializeDecision::Drop
}
}
}
Error => SpecializeDecision::Drop,
FlexAbleVar(_, _)
| RigidAbleVar(..)
| FlexVar(..)
| RigidVar(..)
| RecursionVar { .. }
| LambdaSet(..)
| RangedNumber(..) => {
internal_error!("unexpected")
}
}
}
#[allow(clippy::too_many_arguments)]
fn get_specialization_lambda_set_ambient_function<P: Phase>(
subs: &mut Subs,
derived_env: &DerivedEnv,
phase: &P,
ability_member: Symbol,
lset_region: u8,
specialization_key: SpecializationTypeKey,
target_rank: Rank,
) -> Result<Variable, ()> {
match specialization_key {
SpecializationTypeKey::Opaque(opaque) => {
let opaque_home = opaque.module_id();
let external_specialized_lset =
phase.with_module_abilities_store(opaque_home, |abilities_store| {
let impl_key = roc_can::abilities::ImplKey {
opaque,
ability_member,
};
let opt_specialization =
abilities_store.get_implementation(impl_key);
match opt_specialization {
None => {
if P::IS_LATE {
internal_error!(
"expected to know a specialization for {:?}#{:?}, but it wasn't found",
opaque,
ability_member
);
} else {
// doesn't specialize, we'll have reported an error for this
Err(())
}
}
Some(member_impl) => match member_impl {
MemberImpl::Impl(spec_symbol) => {
let specialization =
abilities_store.specialization_info(*spec_symbol).expect("expected custom implementations to always have complete specialization info by this point");
let specialized_lambda_set = *specialization
.specialization_lambda_sets
.get(&lset_region)
.expect("lambda set region not resolved");
Ok(specialized_lambda_set)
}
MemberImpl::Derived => todo_abilities!(),
MemberImpl::Error => todo_abilities!(),
},
}
})?;
let specialized_ambient = phase.copy_lambda_set_ambient_function_to_home_subs(
external_specialized_lset,
opaque_home,
subs,
);
Ok(specialized_ambient)
}
SpecializationTypeKey::Derived(derive_key) => {
let mut derived_module = derived_env.derived_module.lock().unwrap();
let (_, _, specialization_lambda_sets) =
derived_module.get_or_insert(derived_env.exposed_types, derive_key);
let specialized_lambda_set = *specialization_lambda_sets
.get(&lset_region)
.expect("lambda set region not resolved");
let specialized_ambient = derived_module.copy_lambda_set_ambient_function_to_subs(
specialized_lambda_set,
subs,
target_rank,
);
Ok(specialized_ambient)
}
SpecializationTypeKey::Immediate(imm) => {
// Immediates are like opaques in that we can simply look up their type definition in
// the ability store, there is nothing new to synthesize.
//
// THEORY: if something can become an immediate, it will always be available in the
// local ability store, because the transformation is local (?)
let immediate_lambda_set_at_region =
phase.get_and_copy_ability_member_ambient_function(imm, lset_region, subs);
Ok(immediate_lambda_set_at_region)
}
}
}


@@ -6510,7 +6510,6 @@ mod solve_expr {
     }

     #[test]
-    #[ignore = "TODO: fix unification of derived types"]
     fn encode_record() {
         infer_queries!(
             indoc!(
@@ -6523,14 +6522,11 @@ mod solve_expr {
                 # ^^^^^^^^^
                 "#
             ),
-            @r#"
-            "Encoding#toEncoder(2) : { a : Str } -[[#Derived.toEncoder_{a}(0)]]-> Encoder fmt | fmt has EncoderFormatting",
-            "#
+            @"Encoding#toEncoder(2) : { a : Str } -[[#Derived.toEncoder_{a}(0)]]-> Encoder fmt | fmt has EncoderFormatting"
         )
     }
     #[test]
-    #[ignore = "TODO: fix unification of derived types"]
     fn encode_record_with_nested_custom_impl() {
         infer_queries!(
             indoc!(
@@ -6539,16 +6535,14 @@ mod solve_expr {
                 imports [Encode.{ toEncoder, Encoding, custom }]
                 provides [main] to "./platform"

-                A := {}
+                A := {} has [Encoding {toEncoder}]
                 toEncoder = \@A _ -> custom \b, _ -> b

                 main = toEncoder { a: @A {} }
                 # ^^^^^^^^^
                 "#
             ),
-            @r#"
-            "Encoding#toEncoder(2) : { a : A } -[[#Derived.toEncoder_{a}(0)]]-> Encoder fmt | fmt has EncoderFormatting",
-            "#
+            @"Encoding#toEncoder(2) : { a : A } -[[#Derived.toEncoder_{a}(0)]]-> Encoder fmt | fmt has EncoderFormatting"
         )
     }
@@ -6831,15 +6825,13 @@ mod solve_expr {
                 ping : a -> a | a has Bounce
                 pong : a -> a | a has Bounce

-                A := {} has [Bounce {ping, pong}]
+                A := {} has [Bounce {ping: pingA, pong: pongA}]

-                ping : A -> A
-                ping = \@A {} -> pong (@A {})
-                #^^^^{-1}        ^^^^
+                pingA = \@A {} -> pong (@A {})
+                #^^^^^{-1}        ^^^^

-                pong : A -> A
-                pong = \@A {} -> ping (@A {})
-                #^^^^{-1}        ^^^^
+                pongA = \@A {} -> ping (@A {})
+                #^^^^^{-1}        ^^^^

                 main =
                     a : A
@@ -6850,17 +6842,16 @@ mod solve_expr {
                 "#
             ),
             @r###"
-            A#ping(5) : A -[[ping(5)]]-> A
-            A#pong(6) : A -[[pong(6)]]-> A
-            A#pong(6) : A -[[pong(6)]]-> A
-            A#ping(5) : A -[[ping(5)]]-> A
-            A#ping(5) : A -[[ping(5)]]-> A
+            pingA : A -[[pingA(5)]]-> A
+            A#pong(6) : A -[[pongA(6)]]-> A
+            pongA : A -[[pongA(6)]]-> A
+            A#ping(5) : A -[[pingA(5)]]-> A
+            A#ping(5) : A -[[pingA(5)]]-> A
             "###
         )
     }
     #[test]
-    #[ignore = "TODO: this currently runs into trouble with ping and pong first being inferred as overly-general before recursive constraining"]
     fn resolve_mutually_recursive_ability_lambda_sets_inferred() {
         infer_queries!(
             indoc!(
@@ -6889,7 +6880,7 @@ mod solve_expr {
             ),
             @r###"
             A#ping(5) : A -[[ping(5)]]-> A
-            Bounce#pong(3) : A -[[pong(6)]]-> A
+            A#pong(6) : A -[[pong(6)]]-> A
             A#pong(6) : A -[[pong(6)]]-> A
             A#ping(5) : A -[[ping(5)]]-> A
             A#ping(5) : A -[[ping(5)]]-> A
@@ -7257,24 +7248,11 @@ mod solve_expr {
             #  ^
             "#
         ),
-        // TODO SERIOUS: Let generalization is broken here, and this is NOT correct!!
-        // Two problems:
-        //   - 1. `{}` always has its rank adjusted to the toplevel, which forces the rest
-        //        of the type to the toplevel, but that is NOT correct here!
-        //   - 2. During solving lambda set compaction cannot happen until an entire module
-        //        is solved, which forces resolved-but-not-yet-compacted lambdas in
-        //        unspecialized lambda sets to pull the rank into a lower, non-generalized
-        //        rank. Special-casing for that is a TERRIBLE HACK that interferes very
-        //        poorly with (1)
-        //
-        // We are BLOCKED on https://github.com/rtfeldman/roc/issues/3207 to make this work
-        // correctly!
-        // See also https://github.com/rtfeldman/roc/pull/3175, a separate, but similar problem.
         @r###"
         Fo#f(7) : Fo -[[f(7)]]-> (b -[[] + b:g(4):1]-> {}) | b has G
         Go#g(8) : Go -[[g(8)]]-> {}
-        h : Go -[[g(8)]]-> {}
-        Fo#f(7) : Fo -[[f(7)]]-> (Go -[[g(8)]]-> {})
+        h : b -[[] + b:g(4):1]-> {} | b has G
+        Fo#f(7) : Fo -[[f(7)]]-> (b -[[] + b:g(4):1]-> {}) | b has G
         h : Go -[[g(8)]]-> {}
         "###
     );

View file

@@ -95,7 +95,7 @@ fn build_wasm_test_host() {
     run_zig(&[
         "wasm-ld",
-        bitcode::BUILTINS_WASM32_OBJ_PATH,
+        &bitcode::get_builtins_wasm32_obj_path(),
         platform_path.to_str().unwrap(),
         WASI_COMPILER_RT_PATH,
         WASI_LIBC_PATH,


@@ -187,7 +187,7 @@ pub fn helper(
         // With the current method all methods are kept and it adds about 100k to all outputs.
         &[
             app_o_file.to_str().unwrap(),
-            bitcode::BUILTINS_HOST_OBJ_PATH,
+            &bitcode::get_builtins_host_obj_path(),
         ],
         LinkType::Dylib,
     )


@@ -361,6 +361,21 @@ impl UlsOfVar {
     fn rollback_to(&mut self, snapshot: UlsOfVarSnapshot) {
         *self = snapshot.0;
     }
+
+    pub fn remove_dependent_unspecialized_lambda_sets<'a>(
+        &'a mut self,
+        subs: &'a Subs,
+        var: Variable,
+    ) -> impl Iterator<Item = Variable> + 'a {
+        let utable = &subs.utable;
+        let root_var = utable.root_key_without_compacting(var);
+
+        self.0
+            .drain_filter(move |cand_var, _| {
+                utable.root_key_without_compacting(*cand_var) == root_var
+            })
+            .flat_map(|(_, lambda_set_vars)| lambda_set_vars.into_iter())
+    }
 }

 #[derive(Clone)]
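The added `remove_dependent_unspecialized_lambda_sets` uses the nightly `drain_filter` on a map keyed by type variables, removing every entry whose key has the same union-find root as `var` and yielding the lambda-set variables those entries held. A stable-Rust sketch of the same "remove by root, collect values" operation, with a plain function standing in for `root_key_without_compacting` (all names here are illustrative):

```rust
use std::collections::HashMap;

// Stand-in for `utable.root_key_without_compacting`: here, the "root" of a
// variable is just its value modulo 10 (purely illustrative).
fn root(var: u32) -> u32 {
    var % 10
}

// Remove every entry whose key is equivalent to `var` (same root) and return
// all lambda-set variables those entries held, mirroring the `drain_filter`
// + `flat_map` chain above.
fn remove_dependent(map: &mut HashMap<u32, Vec<u32>>, var: u32) -> Vec<u32> {
    let target = root(var);
    let keys: Vec<u32> = map.keys().copied().filter(|k| root(*k) == target).collect();
    let mut out = Vec::new();
    for k in keys {
        if let Some(vals) = map.remove(&k) {
            out.extend(vals);
        }
    }
    out
}

fn main() {
    let mut map = HashMap::from([(12, vec![100]), (22, vec![200, 201]), (13, vec![300])]);
    // 12 and 22 share root 2, so both entries are drained.
    let mut removed = remove_dependent(&mut map, 2);
    removed.sort();
    assert_eq!(removed, vec![100, 200, 201]);
    assert_eq!(map.len(), 1); // only key 13 remains
}
```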


@@ -1322,6 +1322,7 @@ impl Type {
                         region,
                         type_got: args.len() as u8,
                         alias_needs: alias.type_variables.len() as u8,
+                        alias_kind: AliasKind::Structural,
                     });
                     return;
                 }
@@ -2028,6 +2029,15 @@ pub enum AliasKind {
     Opaque,
 }

+impl AliasKind {
+    pub fn as_str(&self) -> &'static str {
+        match self {
+            AliasKind::Structural => "alias",
+            AliasKind::Opaque => "opaque",
+        }
+    }
+}
+
 #[derive(Clone, Debug, PartialEq)]
 pub struct AliasVar {
     pub name: Lowercase,
@@ -2104,6 +2114,7 @@ pub enum Problem {
         region: Region,
         type_got: u8,
         alias_needs: u8,
+        alias_kind: AliasKind,
     },
     InvalidModule,
     SolvedTypeError,
@@ -2661,6 +2672,9 @@ pub fn gather_fields_unsorted_iter(
             // TODO investigate apparently this one pops up in the reporting tests!
             RigidVar(_) => break,

+            // Stop on errors in the record
+            Error => break,
+
             _ => return Err(RecordFieldsError),
         }
     }


@@ -115,14 +115,4 @@ macro_rules! todo_abilities {
     };
 }

-#[macro_export]
-macro_rules! todo_opaques {
-    () => {
-        $crate::_incomplete_project!("Abilities (opaques)", 2463)
-    };
-    ($($arg:tt)+) => {
-        $crate::_incomplete_project!("Abilities (opaques)", 2463, $($arg)+)
-    };
-}
-
 // END LARGE SCALE PROJECTS


@@ -479,4 +479,119 @@ mod test {
             ),
         );
     }
#[test]
fn struct_with_strings() {
run_expect_test(
indoc!(
r#"
app "test" provides [main] to "./platform"
main = 0
expect
a = {
utopia: "Astra mortemque praestare gradatim",
brillist: "Profundum et fundamentum",
}
a != a
"#
),
indoc!(
r#"
This expectation failed:
5> expect
6> a = {
7> utopia: "Astra mortemque praestare gradatim",
8> brillist: "Profundum et fundamentum",
9> }
10>
11> a != a
When it failed, these variables had these values:
a : { brillist : Str, utopia : Str }
a = { brillist: "Profundum et fundamentum", utopia: "Astra mortemque praestare gradatim" }
"#
),
);
}
#[test]
fn box_with_strings() {
run_expect_test(
indoc!(
r#"
app "test" provides [main] to "./platform"
main = 0
expect
a = Box.box "Astra mortemque praestare gradatim"
b = Box.box "Profundum et fundamentum"
a == b
"#
),
indoc!(
r#"
This expectation failed:
5> expect
6> a = Box.box "Astra mortemque praestare gradatim"
7> b = Box.box "Profundum et fundamentum"
8>
9> a == b
When it failed, these variables had these values:
a : Box Str
a = Box.box "Astra mortemque praestare gradatim"
b : Box Str
b = Box.box "Profundum et fundamentum"
"#
),
);
}
#[test]
fn result_with_strings() {
run_expect_test(
indoc!(
r#"
app "test" provides [main] to "./platform"
main = 0
expect
a = Ok "Astra mortemque praestare gradatim"
b = Err "Profundum et fundamentum"
a == b
"#
),
indoc!(
r#"
This expectation failed:
5> expect
6> a = Ok "Astra mortemque praestare gradatim"
7> b = Err "Profundum et fundamentum"
8>
9> a == b
When it failed, these variables had these values:
a : [Ok Str]a
a = Ok "Astra mortemque praestare gradatim"
b : [Err Str]a
b = Err "Profundum et fundamentum"
"#
),
);
}
}


@@ -341,7 +341,5 @@ pub fn expect_mono_module_to_dylib<'a>(
         );
     }

-    env.module.print_to_file("/tmp/test.ll").unwrap();
-
     llvm_module_to_dylib(env.module, &target, opt_level).map(|lib| (lib, expects))
 }


@@ -26,7 +26,7 @@ fn main() {
     let output = Command::new(&zig_executable())
         .args([
             "wasm-ld",
-            bitcode::BUILTINS_WASM32_OBJ_PATH,
+            &bitcode::get_builtins_wasm32_obj_path(),
             platform_obj.to_str().unwrap(),
             WASI_COMPILER_RT_PATH,
             WASI_LIBC_PATH,


@@ -233,8 +233,10 @@ pub fn can_problem<'b>(
             title = DUPLICATE_NAME.to_string();
             severity = Severity::RuntimeError;
         }
-        Problem::CyclicAlias(symbol, region, others) => {
-            let answer = crate::error::r#type::cyclic_alias(alloc, lines, symbol, region, others);
+        Problem::CyclicAlias(symbol, region, others, alias_kind) => {
+            let answer = crate::error::r#type::cyclic_alias(
+                alloc, lines, symbol, region, others, alias_kind,
+            );

             doc = answer.0;
             title = answer.1;
@@ -244,6 +246,7 @@ pub fn can_problem<'b>(
             typ: alias,
             variable_region,
             variable_name,
+            alias_kind,
         } => {
             doc = alloc.stack([
                 alloc.concat([
@@ -251,10 +254,12 @@ pub fn can_problem<'b>(
                     alloc.type_variable(variable_name),
                     alloc.reflow(" type parameter is not used in the "),
                     alloc.symbol_unqualified(alias),
-                    alloc.reflow(" alias definition:"),
+                    alloc.reflow(" "),
+                    alloc.reflow(alias_kind.as_str()),
+                    alloc.reflow(" definition:"),
                 ]),
                 alloc.region(lines.convert_region(variable_region)),
-                alloc.reflow("Roc does not allow unused type alias parameters!"),
+                alloc.reflow("Roc does not allow unused type parameters!"),
                 // TODO add link to this guide section
                 alloc.tip().append(alloc.reflow(
                     "If you want an unused type parameter (a so-called \"phantom type\"), \


@@ -77,6 +77,7 @@ pub fn type_problem<'b>(
             region,
             type_got,
             alias_needs,
+            alias_kind,
         } => {
             let needed_arguments = if alias_needs == 1 {
                 alloc.reflow("1 type argument")
@@ -92,7 +93,9 @@ pub fn type_problem<'b>(
                 alloc.concat([
                     alloc.reflow("The "),
                     alloc.symbol_unqualified(symbol),
-                    alloc.reflow(" alias expects "),
+                    alloc.reflow(" "),
+                    alloc.reflow(alias_kind.as_str()),
+                    alloc.reflow(" expects "),
                     needed_arguments,
                     alloc.reflow(", but it got "),
                     found_arguments,
@@ -433,16 +436,21 @@ pub fn cyclic_alias<'b>(
     symbol: Symbol,
     region: roc_region::all::Region,
     others: Vec<Symbol>,
+    alias_kind: AliasKind,
 ) -> (RocDocBuilder<'b>, String) {
     let when_is_recursion_legal =
-        alloc.reflow("Recursion in aliases is only allowed if recursion happens behind a tagged union, at least one variant of which is not recursive.");
+        alloc.reflow("Recursion in ")
+            .append(alloc.reflow(alias_kind.as_str()))
+            .append(alloc.reflow("es is only allowed if recursion happens behind a tagged union, at least one variant of which is not recursive."));

     let doc = if others.is_empty() {
         alloc.stack([
             alloc
                 .reflow("The ")
                 .append(alloc.symbol_unqualified(symbol))
-                .append(alloc.reflow(" alias is self-recursive in an invalid way:")),
+                .append(alloc.reflow(" "))
+                .append(alloc.reflow(alias_kind.as_str()))
+                .append(alloc.reflow(" is self-recursive in an invalid way:")),
             alloc.region(lines.convert_region(region)),
             when_is_recursion_legal,
         ])
@@ -451,14 +459,18 @@ pub fn cyclic_alias<'b>(
             alloc
                 .reflow("The ")
                 .append(alloc.symbol_unqualified(symbol))
-                .append(alloc.reflow(" alias is recursive in an invalid way:")),
+                .append(alloc.reflow(" "))
+                .append(alloc.reflow(alias_kind.as_str()))
+                .append(alloc.reflow(" is recursive in an invalid way:")),
             alloc.region(lines.convert_region(region)),
             alloc
                 .reflow("The ")
                 .append(alloc.symbol_unqualified(symbol))
-                .append(alloc.reflow(
-                    " alias depends on itself through the following chain of definitions:",
-                )),
+                .append(alloc.reflow(" "))
+                .append(alloc.reflow(alias_kind.as_str()))
+                .append(
+                    alloc.reflow(" depends on itself through the following chain of definitions:"),
+                ),
             crate::report::cycle(
                 alloc,
                 4,


@@ -3112,7 +3112,7 @@ mod test_reporting {
         @r###"
         TOO MANY TYPE ARGUMENTS /code/proj/Main.roc

-        The `Num` alias expects 1 type argument, but it got 2 instead:
+        The `Num` opaque expects 1 type argument, but it got 2 instead:

         4  a : Num.Num Num.I64 Num.F64
                ^^^^^^^^^^^^^^^^^^^^^^^
@@ -3134,7 +3134,7 @@ mod test_reporting {
         @r###"
         TOO MANY TYPE ARGUMENTS /code/proj/Main.roc

-        The `Num` alias expects 1 type argument, but it got 2 instead:
+        The `Num` opaque expects 1 type argument, but it got 2 instead:

         4  f : Str -> Num.Num Num.I64 Num.F64
                       ^^^^^^^^^^^^^^^^^^^^^^^
@@ -3210,7 +3210,7 @@ mod test_reporting {
         4  Foo a : [Foo]
                ^

-        Roc does not allow unused type alias parameters!
+        Roc does not allow unused type parameters!

         Tip: If you want an unused type parameter (a so-called "phantom
         type"), read the guide section on phantom values.
@@ -10117,4 +10117,57 @@ All branches in an `if` must have the same type!
         determined to actually specialize `Id2`!
         "###
     );
test_report!(
mismatched_record_annotation,
indoc!(
r#"
x : { y : Str }
x = {}
x
"#
),
@r###"
TYPE MISMATCH /code/proj/Main.roc
Something is off with the body of the `x` definition:
4 x : { y : Str }
5 x = {}
^^
The body is a record of type:
{}
But the type annotation on `x` says it should be:
{ y : Str }
Tip: Looks like the y field is missing.
"###
);
test_report!(
cyclic_opaque,
indoc!(
r#"
Recursive := [Infinitely Recursive]
0
"#
),
@r###"
CYCLIC ALIAS /code/proj/Main.roc
The `Recursive` opaque is self-recursive in an invalid way:
4 Recursive := [Infinitely Recursive]
^^^^^^^^^
Recursion in opaquees is only allowed if recursion happens behind a
tagged union, at least one variant of which is not recursive.
"###
);
}


@@ -1,5 +1,5 @@
 use snafu::OptionExt;
-use std::{collections::HashMap, slice::SliceIndex};
+use std::{collections::HashMap, path::PathBuf, slice::SliceIndex};
 use util_error::{IndexOfFailedSnafu, KeyNotFoundSnafu, OutOfBoundsSnafu, UtilResult};

 pub mod util_error;
@@ -93,3 +93,30 @@ pub fn first_last_index_of<T: ::std::fmt::Debug + std::cmp::Eq>(
         .fail()
     }
 }
// Get the path of the lib folder.
// Runtime dependencies such as zig files and builtin_host.o are put in the lib folder.
pub fn get_lib_path() -> Option<PathBuf> {
let exe_relative_str_path_opt = std::env::current_exe().ok();
if let Some(exe_relative_str_path) = exe_relative_str_path_opt {
let mut curr_parent_opt = exe_relative_str_path.parent();
// this differs for regular build and nix releases, so we check in multiple spots.
for _ in 0..3 {
if let Some(curr_parent) = curr_parent_opt {
let lib_path = curr_parent.join("lib");
if std::path::Path::exists(&lib_path) {
return Some(lib_path);
} else {
curr_parent_opt = curr_parent.parent();
}
} else {
break;
}
}
}
None
}
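The added `get_lib_path` walks up to three ancestors of the current executable's path, returning the first `lib` directory it finds (the depth differs between regular builds and nix releases). The same search can be sketched against an arbitrary start path, which makes it easy to exercise with a temporary directory; `find_lib_dir` and `max_levels` here are illustrative names, not part of the codebase:

```rust
use std::path::{Path, PathBuf};

// Walk up to `max_levels` ancestors of `start`, returning the first `lib`
// subdirectory found, like `get_lib_path` does starting from
// `std::env::current_exe()` with three levels.
fn find_lib_dir(start: &Path, max_levels: usize) -> Option<PathBuf> {
    let mut current = start.parent();
    for _ in 0..max_levels {
        let parent = current?;
        let candidate = parent.join("lib");
        if candidate.exists() {
            return Some(candidate);
        }
        current = parent.parent();
    }
    None
}

fn main() -> std::io::Result<()> {
    // Build <tmp>/lib and <tmp>/bin/roc, then search upward from the "exe":
    // <tmp>/bin has no lib/ child, but its parent <tmp> does.
    let base = std::env::temp_dir().join("find_lib_dir_demo");
    std::fs::create_dir_all(base.join("lib"))?;
    std::fs::create_dir_all(base.join("bin"))?;
    let fake_exe = base.join("bin").join("roc");
    assert_eq!(find_lib_dir(&fake_exe, 3), Some(base.join("lib")));
    Ok(())
}
```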

default.nix Normal file

@@ -0,0 +1,86 @@
{ }:
# we only use this file to release a nix package; use flake.nix for development
let
rev = "f6342b8b9e7a4177c7e775cdbf38e1c1b43e7ab3"; # nixpkgs master
nixpkgs = builtins.fetchTarball {
url = "https://github.com/nixos/nixpkgs/tarball/${rev}";
sha256 = "JTiKsBT1BwMbtSUsvtSl8ffkiirby8FaujJVGV766Q8=";
};
pkgs = import nixpkgs { };
rustPlatform = pkgs.rustPlatform;
llvmPkgs = pkgs.llvmPackages_13;
# nix does not store libs in /usr/lib or /lib
nixGlibcPath = if pkgs.stdenv.isLinux then "${pkgs.glibc.out}/lib" else "";
in
rustPlatform.buildRustPackage {
pname = "roc";
version = "0.0.1";
src = pkgs.nix-gitignore.gitignoreSource [] ./.;
cargoSha256 = "sha256-cFzOcU982kANsZjx4YoLQOZSOYN3loj+5zowhWoBWM8=";
LLVM_SYS_130_PREFIX = "${llvmPkgs.llvm.dev}";
# required for zig
XDG_CACHE_HOME = "xdg_cache"; # prevents zig AccessDenied error github.com/ziglang/zig/issues/6810
# want to see backtrace in case of failure
RUST_BACKTRACE = 1;
# skip running rust tests, problems:
# building of example platforms requires network: Could not resolve host
# zig AccessDenied error github.com/ziglang/zig/issues/6810
# Once instance has previously been poisoned ??
doCheck = false;
nativeBuildInputs = (with pkgs; [
cmake
git
pkg-config
python3
llvmPkgs.clang
llvmPkgs.llvm.dev
zig
rust-bindgen
]);
buildInputs = (with pkgs; [
libffi
libiconv
libxkbcommon
libxml2
ncurses
zlib
cargo
makeWrapper # necessary for postBuild wrapProgram
]
++ lib.optionals pkgs.stdenv.isLinux [
alsa-lib
valgrind
vulkan-headers
vulkan-loader
vulkan-tools
vulkan-validation-layers
xorg.libX11
xorg.libXcursor
xorg.libXi
xorg.libXrandr
xorg.libxcb
]
++ lib.optionals pkgs.stdenv.isDarwin [
AppKit
CoreFoundation
CoreServices
CoreVideo
Foundation
Metal
Security
]);
# cp: to copy str.zig,list.zig...
# wrapProgram pkgs.stdenv.cc: to make ld available for compiler/build/src/link.rs
postInstall = ''
cp -r target/x86_64-unknown-linux-gnu/release/lib/. $out/lib
wrapProgram $out/bin/roc --set NIX_GLIBC_PATH ${nixGlibcPath} --prefix PATH : ${pkgs.lib.makeBinPath [ pkgs.stdenv.cc ]}
'';
}


@@ -3,6 +3,7 @@
   inputs = {
     nixpkgs.url = "github:nixos/nixpkgs/nixos-22.05";
+
     # rust from nixpkgs has some libc problems, this is patched in the rust-overlay
     rust-overlay = {
       url = "github:oxalica/rust-overlay";
@@ -24,7 +25,7 @@
   outputs = { self, nixpkgs, rust-overlay, zig, flake-utils, nixgl }:
     let
-      supportedSystems = [ "x86_64-linux" "x86_64-darwin" "aarch64-linux" "aarch64-darwin" ];
+      supportedSystems = [ "x86_64-linux" "x86_64-darwin" "aarch64-darwin" ];
     in
     flake-utils.lib.eachSystem supportedSystems (system:
       let
@@ -120,6 +121,7 @@
         buildInputs = sharedInputs ++ darwinInputs ++ linuxInputs ++ (if system == "x86_64-linux" then [ pkgs.nixgl.nixVulkanIntel ] else []);

         LLVM_SYS_130_PREFIX = "${llvmPkgs.llvm.dev}";
+        # nix does not store libs in /usr/lib or /lib
         NIX_GLIBC_PATH = if pkgs.stdenv.isLinux then "${pkgs.glibc.out}/lib" else "";

         LD_LIBRARY_PATH = with pkgs;
           lib.makeLibraryPath


@@ -5,3 +5,4 @@ components = [
   # for usages of rust-analyzer or similar tools inside `nix develop`
   "rust-src"
 ]
+targets = [ "x86_64-unknown-linux-gnu" ]