With code like
```
thenDo = \x, callback ->
    callback x

f = \{} ->
    code = 10u16
    bf = \{} ->
        thenDo code \_ -> bf {}
    bf {}
```
The lambda `\_ -> bf {}` must capture `bf`. Previously, this would not
happen correctly, because we assumed that mutually recursive functions
(including singleton recursive functions, like `bf` here) cannot capture
themselves.
Of course, that premise does not hold in general. Instead, we should have
mutually recursive functions capture the closure (haha, get it) of
values captured by all functions constituting the mutual recursion.
Then, any nested closures can capture outer recursive closures' values
appropriately.
This change also means we must update the interface of `Dict.empty` and
`Set.empty` from
```
Dict.empty : Dict k v
```
to
```
Dict.empty : {} -> Dict k v
```
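At call sites, this means the empty collections are now obtained by calling a thunk; a minimal before/after sketch:
```
# before: `Dict.empty` and `Set.empty` were plain values
# emptyDict = Dict.empty
# emptySet = Set.empty

# after: they are thunks and must be applied to an empty record
emptyDict = Dict.empty {}
emptySet = Set.empty {}
```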
Variables introduced in branch patterns should never be generalized in
the new weakening model. This implements that (an illustrative sketch
follows the list below). The strategy is:
- when we have a let-binding that should be weakened, do not introduce
its bound variables in a new (higher) rank
- instead, introduce them at the current rank, and also solve the
let-binding at the current rank
- if any of those variables should then be generalized relative to the
current rank, they will be so when the current rank is popped and
generalized
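A hypothetical sketch of the observable effect (names are illustrative):
```
id = \x -> x

n =
    when Ok id is
        Ok f ->
            # `f` is bound by a branch pattern, so it is weakened rather
            # than generalized: every use of `f` must agree on a single
            # type (using it at both U8 and Str here would be rejected)
            f 1u8 + f 2u8
        _ -> 0u8
```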
We must be careful to ensure that if unifying nested lambda sets
results in disjoint lambdas, the parent lambda sets are ultimately
treated disjointly as well.
Consider
```
v1: {} -[ foo ({} -[ bar Str ]-> {}) ]-> {}
~ v2: {} -[ foo ({} -[ bar U64 ]-> {}) ]-> {}
```
When considering unification of the nested sets
```
[ bar Str ]
~ [ bar U64 ]
```
we should not unify these sets, even disjointly, because that would
ultimately lead us to unifying
```
v1 ~ v2
=> {} -[ foo ({} -[ bar Str, bar U64 ]-> {}) ] -> {}
```
which is quite wrong - we do not have a single lambda `foo` that
captures either `bar captures: Str` or `bar captures: U64`; rather, we
have two different `foo` lambdas that capture different `bar`s. The
target unification is
```
v1 ~ v2
=> {} -[ foo ({} -[ bar Str ]-> {}),
         foo ({} -[ bar U64 ]-> {}) ]-> {}
```
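For reference, a hypothetical program shape (names mirror the notation
above and are illustrative) that could give rise to this unification:
```
makeV = \captured ->
    bar = \{} ->
        # `bar` captures `captured` (a Str for v1, a U64 for v2)
        _ = captured
        {}
    foo = \{} ->
        # `foo` captures `bar`
        bar {}
    foo

v1 = makeV "a string"
v2 = makeV 123u64

# unifying v1 ~ v2 (e.g. as branches of an `if`) must keep the two
# `foo`s, and the two `bar`s they capture, as distinct lambdas
v = if Bool.true then v1 else v2
```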
Closes #4712
The current type inference scheme is such that we first introduce the
types of annotated functions, then check their bodies without
additional re-generalization. As part of generalization, we also
perform occurs checks to fix up recursive tag unions.
However, type annotations can contain inference variables that are
neither part of the generalization scheme nor re-generalized later on,
and that can in fact end up forming the closure of a recursive type. If
we do not catch such cycles and break them into explicit recursive
types, things go bad soon after in later stages of the compiler.
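A hypothetical shape of program where this can arise (names are
illustrative): the inference variable in the annotation ends up
standing for a recursive tag union, which only an occurs check will
turn into an explicit recursive type.
```
toLinked : List a -> _
toLinked = \list ->
    # the `_` above is solved to the recursive tag union
    # [Nil, Cons a ...] built up by this fold
    List.walkBackwards list Nil \rest, elem -> Cons elem rest
```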
To deal with this, re-introduce the variables of recursive values after
we check their definitions, forcing an occurs check. This
re-introduction is benign because we already generalized the
appropriate type variables anyway.
Though, the introduction is somewhat unnecessary, and I have ideas on
how to make all of this simpler and more performant. That will come in
the future.
When constraining a recursive function like
```
f : _ -> {}
f = \_ -> f {}
```
our first step is to solve the value type of `f` relative to its
annotation. We have to be careful that the inference variable in the
signature of `f` is not generalized until after the body of `f` is
solved. Otherwise, we end up admitting polymorphic recursion.
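For intuition, a hedged sketch of what would be wrongly admitted if
that inference variable were generalized before the body is solved:
```
f : _ -> {}
f = \x ->
    # if `_` were generalized to `a -> {}` up front, this call would
    # instantiate `f` at a fresh type, and this polymorphic recursion
    # would be accepted
    f [x]
```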
If a specialization of an ability member has a lambda set that is not
reflected in the unspecialized lambda sets of the member's prototype
signature, then the specialization lambda set is deemed to be immaterial
to the specialization lambda set mapping, and we don't need to associate
it with a particular region from the prototype signature.
This can happen when an opaque contains functions that are more
specific than the generalized prototype signature; for example, when we
are defining a custom impl of an ability for an opaque that contains
functions.
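A rough, hypothetical sketch of such a program (names are illustrative;
this uses the `has`-based custom-implementation syntax):
```
# the opaque wraps a function; the wrapped function's lambda set has no
# counterpart region in the generalized prototype signature of `isEq`
Thunk := ({} -> Str) has [Eq { isEq: isEqThunk }]

isEqThunk : Thunk, Thunk -> Bool
isEqThunk = \@Thunk f, @Thunk g -> f {} == g {}
```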
Addresses a bug found in 8c3158c3e0
With fixpoint-fixing, we don't want to re-unify type variables that were
just fixed, because doing so may change their shapes in ways we just
explicitly set them up to avoid (fixpoint-fixing clobbers type variable
contents).
However, this restriction need only apply when we re-unify two type
variables that were both involved in the same fixpoint-fixing cycle. If
we have a type variable $T$ that was involved in fixpoint-fixing, and we
unify it with a $U$ that wasn't, we know that $U \notin \bar{T}$, where
$\bar{T}$ is the recursive closure of $T$. In these cases, we do want to
permit the usual in-band unification of $T \sim U$.