There are times when multiple concrete types may appear in
unspecialized lambda sets that are being unified. The primary case is
during monomorphization, when unspecialized lambda sets join at the same
time that concrete types get instantiated. Since lambda set
specialization and compaction happen only after unifications are
complete, unifications that monomorphize can induce the above-described
situation.
In these cases,
- unspecialized lambda sets that are due to equivalent type variables
can be compacted, since they are in fact the same specialization; a
sketch of this compaction follows the list.
- unspecialized lambda sets that are due to different type variables
cannot be compacted, even if their types unify, since they may point
to different specializations. For example, consider the unspecialized
lambda set `[[] + [A]:toEncoder:1 + [B]:toEncoder:1]` - this set wants
two encoders, one for `[A]` and one for `[B]`, which is materially
different from the set `[[] + [A, B]:toEncoder:1]`.
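As a minimal sketch of the first rule (with illustrative names; `Uls`,
`compact_uls`, and `root_of` are not the compiler's actual API),
compaction amounts to a dedup keyed on each variable's union-find root:
```rust
use std::collections::HashSet;

/// Illustrative stand-in for an unspecialized lambda set entry, written
/// `var:member:region` in the notation above (e.g. `[A]:toEncoder:1`).
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Uls {
    var: u32,    // the type variable the specialization is for
    member: u32, // interned ability member symbol, e.g. `toEncoder`
    region: u8,  // lambda set region within the member's signature
}

/// Compact entries that are equal after resolving each variable to its
/// union-find root: equivalent variables denote the same specialization.
/// Entries rooted at *different* variables are kept apart, even if their
/// types would unify, since they may pick different specializations.
fn compact_uls(entries: Vec<Uls>, root_of: impl Fn(u32) -> u32) -> Vec<Uls> {
    let mut seen = HashSet::new();
    let mut compacted = Vec::with_capacity(entries.len());
    for mut uls in entries {
        uls.var = root_of(uls.var);
        if seen.insert(uls) {
            compacted.push(uls);
        }
    }
    compacted
}
```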
When we unify variables during monomorphization, we must invalidate the
sections of the layout cache reached by those variables. Previously we
did this by recording the changed variables as those that were `merge`d.
However, this is not enough; we must also record all the parent types
they came from. The reason is that we may have something like
```
Alias (Foo, a) ~ Alias (Bar, U8)
```
where we will merge `a = U8` but we do not merge the aliases.
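One conservative shape of the fix, as a sketch with hypothetical names:
record every variable we descend through during unification, not only
the ones that end up `merge`d, so that the alias variables above `a` are
invalidated too.
```rust
type Variable = u32;

/// Variables whose layout cache entries must be invalidated after
/// unification. Hypothetical shape; the real bookkeeping differs in
/// detail.
#[derive(Default)]
struct ChangedVariables(Vec<Variable>);

impl ChangedVariables {
    /// Called for every pair of type variables unification descends
    /// through. In `Alias (Foo, a) ~ Alias (Bar, U8)`, this catches the
    /// two alias variables even though only `a` and `U8` get merged.
    fn record_descent(&mut self, left: Variable, right: Variable) {
        self.0.push(left);
        self.0.push(right);
    }

    /// Called when two variables are actually `merge`d (the old, and
    /// insufficient, notion of "changed").
    fn record_merge(&mut self, merged: Variable) {
        self.0.push(merged);
    }
}
```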
Closes #4919
🤦 When we check whether lambdas can be unified or must be
treated as disjoint, we previously had a code path that was
exponential in its runtime, regardless of whether it succeeded or failed.
Now it is linear, though degraded (with a clone) if it fails.
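One possible shape of the linear path, as a toy sketch (the real
unifier's state and rollback machinery are more involved): a single
pairwise pass over the captures, paying for a clone only to restore
state when the lambdas turn out to be disjoint.
```rust
use std::collections::HashMap;

type Var = u32;

/// Toy unifier state: each variable resolves to a concrete capture type.
#[derive(Clone)]
struct Env {
    captures: HashMap<Var, &'static str>,
}

enum Outcome {
    Unified,
    Disjoint,
}

impl Env {
    /// Toy stand-in for real (mutating) unification of two variables.
    fn unify(&mut self, a: Var, b: Var) -> bool {
        self.captures.get(&a) == self.captures.get(&b)
    }
}

/// Walk the two capture lists once. On the first failure, restore the
/// pre-attempt state (the clone that makes the failure path "degraded")
/// and report the lambdas as disjoint.
fn unify_or_disjoint(env: &mut Env, l1: &[Var], l2: &[Var]) -> Outcome {
    if l1.len() != l2.len() {
        return Outcome::Disjoint;
    }
    let saved = env.clone(); // only needed if we must roll back
    for (&a, &b) in l1.iter().zip(l2.iter()) {
        if !env.unify(a, b) {
            *env = saved; // undo partial unifications
            return Outcome::Disjoint;
        }
    }
    Outcome::Unified
}
```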
We must be careful to ensure that if unifying nested lambda sets
results in disjoint lambdas, the parent lambda sets are ultimately
treated as disjoint as well (a sketch follows the example below).
Consider
```
v1: {} -[ foo ({} -[ bar Str ]-> {}) ]-> {}
~ v2: {} -[ foo ({} -[ bar U64 ]-> {}) ]-> {}
```
When considering unification of the nested sets
```
[ bar Str ]
~ [ bar U64 ]
```
we should not unify these sets, even disjointly, because that would
ultimately lead us to unifying
```
v1 ~ v2
=> {} -[ foo ({} -[ bar Str, bar U64 ]-> {}) ]-> {}
```
which is quite wrong - we do not have a single lambda `foo` that
captures either `bar captures: Str` or `bar captures: U64`; we have two
different lambdas `foo` that capture different `bar`s. The target
unification is
```
v1 ~ v2
=> {} -[ foo ({} -[ bar Str ]-> {}),
         foo ({} -[ bar U64 ]-> {}) ]-> {}
```
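Reusing the toy types from the previous sketch, the propagation
requirement looks like this: a `Disjoint` result from the nested capture
sets must force the parent lambdas apart as well, yielding the two
separate `foo`s in the target unification above.
```rust
/// A lambda together with the variables it captures (toy shape).
struct Lambda {
    name: &'static str,
    captures: Vec<Var>,
}

/// If the nested capture sets unify, the parents collapse to one lambda;
/// if they are disjoint, the parents stay disjoint too.
fn unify_parents(env: &mut Env, p1: Lambda, p2: Lambda) -> Vec<Lambda> {
    match unify_or_disjoint(env, &p1.captures, &p2.captures) {
        Outcome::Unified => vec![p1],
        Outcome::Disjoint => vec![p1, p2],
    }
}
```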
Closes #4712
If a specialization of an ability member has a lambda set that is not
reflected in the unspecialized lambda sets of the member's prototype
signature, then that lambda set is deemed immaterial to the
specialization lambda set mapping, and we don't need to associate it
with a particular region from the prototype signature.
This can happen when an opaque type contains functions that are more
specific than the generalized prototype signature; for example, when we
define a custom impl for an opaque type that contains functions.
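A toy sketch of the resulting mapping rule (hypothetical names and
types; `Region` stands for a lambda set region index in the prototype
signature):
```rust
use std::collections::{HashMap, HashSet};

type Region = u8;
type Variable = u32;

/// Map each lambda set of the specialization to the prototype region it
/// specializes. Sets with no counterpart region in the prototype are
/// immaterial to the mapping and are simply skipped.
fn specialization_lambda_set_mapping(
    spec_lambda_sets: &[(Region, Variable)],
    prototype_regions: &HashSet<Region>,
) -> HashMap<Region, Variable> {
    spec_lambda_sets
        .iter()
        .copied()
        .filter(|(region, _)| prototype_regions.contains(region))
        .collect()
}
```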
Addresses a bug found in 8c3158c3e0
With fixpoint-fixing, we don't want to re-unify type variables that were
just fixed, because doing so may change their shapes in ways that we
explicitly just set them up to avoid (fixpoint-fixing clobbers type
variable contents).
However, this restriction need only apply when we re-unify two type
variables that were both involved in the same fixpoint-fixing cycle. If
we have a type variable $T$ that was involved in fixpoint-fixing, and we
unify it with a $U$ that wasn't, we know that $U \notin \bar{T}$, where
$\bar{T}$ is the recursive closure of $T$. In these cases, we do want to
permit the usual in-band unification of $T \sim U$.
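A sketch of the refined rule, assuming a hypothetical `cycles` record of
which fixpoint-fixing cycle (if any) each variable was fixed in:
```rust
use std::collections::HashMap;

type Variable = u32;
type CycleId = u32;

/// Skip re-unification only when *both* variables were fixed in the
/// *same* fixpoint-fixing cycle; re-unifying those could clobber the
/// shapes that fixing just pinned down.
fn must_skip_reunification(
    t: Variable,
    u: Variable,
    cycles: &HashMap<Variable, CycleId>,
) -> bool {
    match (cycles.get(&t), cycles.get(&u)) {
        // Same cycle: skip, the shapes were deliberately fixed together.
        (Some(c1), Some(c2)) => c1 == c2,
        // At least one side was untouched by fixpoint-fixing, so it
        // cannot occur in the other's recursive closure; the usual
        // in-band unification is safe.
        _ => false,
    }
}
```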