The new implementation is backported from the (unsent) json({key => value})
object template patches. This function serves the same role as
expect_expression_with(), so all callers are migrated to catch_aliases().
I think the recursive version was harder to follow, and I'll remove the
expect_<construct>_with() functions for the same reason.
We usually use "kebab-case" for config objects, but I'm not sure if that's a
good choice for data structs generally. Template keywords and methods are
"snake_case", and consumers of JSON outputs would also expect it (unless the
implementation language is Lisp-like). So I didn't add rename("kebab-case").
This doesn't matter for the Signature type, though.
This should produce better results at squash/split operations. Since "op diff"
targets can be flipped, this patch implements basic handling of reversed
predecessors graph. It should also work for sibling operations as long as there
aren't multiple greatest common ancestors.
resolve_transitive_edges() takes an additional "start" parameter. It might be
useful for omitting transitive edges in the evolution log. Since
walk_predecessors() usually tracks a single commit, it would be wasteful to
build a fully-resolved predecessors graph.
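For illustration, the resolution step can be sketched as a plain graph walk
(hypothetical names and a stdlib-only implementation, not jj's actual code): an
edge is dropped when its target is also reachable through another parent, and
only nodes reachable from "start" are processed.

```rust
use std::collections::{HashMap, HashSet};

// Is `to` reachable from `from` by following edges? (depth-first)
fn reachable(from: &str, to: &str, edges: &HashMap<&str, Vec<&str>>) -> bool {
    let mut stack = vec![from];
    let mut seen = HashSet::new();
    while let Some(n) = stack.pop() {
        if n == to {
            return true;
        }
        if seen.insert(n) {
            stack.extend(edges.get(n).into_iter().flatten().copied());
        }
    }
    false
}

// Resolve transitive edges, but only for nodes reachable from `start`:
// an edge n -> p is dropped if p is also reachable through another parent.
fn resolve_transitive_edges(
    start: &[&str],
    edges: &HashMap<&str, Vec<&str>>,
) -> HashMap<String, Vec<String>> {
    let mut resolved = HashMap::new();
    let mut stack: Vec<&str> = start.to_vec();
    let mut seen: HashSet<&str> = start.iter().copied().collect();
    while let Some(n) = stack.pop() {
        let parents = edges.get(n).cloned().unwrap_or_default();
        let mut kept = Vec::new();
        for p in &parents {
            let transitive = parents.iter().any(|&q| q != *p && reachable(q, *p, edges));
            if !transitive {
                kept.push(p.to_string());
            }
        }
        for &p in &parents {
            if seen.insert(p) {
                stack.push(p);
            }
        }
        resolved.insert(n.to_string(), kept);
    }
    resolved
}
```

Restricting the walk to "start" is what avoids building the fully-resolved
graph when only one commit is tracked.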
There are a couple of hand-rolled implementations of the roots..heads query, and
I'm going to add one more. It's wasteful to test these one by one.
The added iterator will have to scan the roots::heads range eagerly instead of
::roots. I think this is good for typical use cases, where the roots are closer
to the heads than to the root().
At a merge point, ancestor operations are visited in order of the parents list, so
I don't think head_ops should be sorted either. walk_ancestors([op]) should be
identical to [op] + walk_ancestors(op.parents()).
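A stdlib-only toy walk (hypothetical, not the actual implementation) shows why
the identity holds when heads are left unsorted: right after visiting op, the
stack holds exactly op's parents in list order, which is the same starting state
as walk_ancestors(op.parents()).

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical sketch of a depth-first ancestor walk that visits parents
// in list order and does not sort the heads.
fn walk_ancestors(heads: &[&str], parents: &HashMap<&str, Vec<&str>>) -> Vec<String> {
    let mut visited = HashSet::new();
    let mut out = Vec::new();
    // Push heads reversed so the first head is popped (visited) first.
    let mut stack: Vec<&str> = heads.iter().rev().copied().collect();
    while let Some(op) = stack.pop() {
        if !visited.insert(op) {
            continue;
        }
        out.push(op.to_string());
        for &p in parents.get(op).into_iter().flatten().rev() {
            stack.push(p);
        }
    }
    out
}
```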
This will be the initial step of the operation range iterator. I think it can
also be a building block for various filtered operation iterators (such as
operations filtered by bookmark changes).
topo_order_reverse_chunked() returns Result<[T], E> instead of [Result<T, E>]
because the former was easier to implement. I'm not sure which one would be more
generally useful, but I think the caller would do some post-processing, so
getting [T] might be handy.
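For what it's worth, the Result<[T], E> shape is also the one Rust's collect()
produces naturally; a generic sketch (not the actual topo_order_reverse_chunked()
signature):

```rust
// Collecting Result items into Result<Vec<_>, _> stops at the first Err,
// so on success the caller gets a plain Vec<T> to post-process.
fn collect_chunk<T, E>(items: impl IntoIterator<Item = Result<T, E>>) -> Result<Vec<T>, E> {
    items.into_iter().collect()
}
```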
This turns off the `clippy::cloned_ref_to_slice_refs` lint in some tests
and fixes it in others, for Rust 1.89+. This seems to make `cargo clippy
--workspace --all-targets --all-features` work in stable, beta, and
nightly (1.89).
This depends on the `rustversion` crate. Other than that, it's based on
Austin's https://github.com/jj-vcs/jj/pull/6705.
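The pattern looks roughly like this (a non-runnable fragment depending on the
rustversion crate; everything except the lint name is illustrative):

```rust
// rustversion::attr applies the inner attribute only when the compiler
// version matches, so older toolchains don't error on the unknown lint.
#[rustversion::attr(since(1.89), allow(clippy::cloned_ref_to_slice_refs))]
#[test]
fn example_test() {
    let commit = String::from("c1");
    // Code of the shape &[x.clone()] is what the lint targets on 1.89+.
    let refs = &[commit.clone()];
    assert_eq!(refs.len(), 1);
}
```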
Co-authored-by: Austin Seipp <aseipp@pobox.com>
Only the expression trees are generated, which is all I needed to copy the doc
example. The commit DAG could also be generated, but doing that would mean the
commits would have to be created within a proptest loop, I think, and that would
make the shrinking process super slow.
Since the extension point is now provided by SymbolResolverExtension, this
abstraction isn't useful anymore. This change will probably help implement
commit/change_id(pattern) functions. If the resolver were an abstract type, we
would have to add resolve_commit/change_id() methods separately.
If a commit or change prefix is already ambiguous in the small
disambiguation set, it should be ambiguous in the full repo too, so we
should not have to attempt the lookup in the full repo.
There's the corner case that the disambiguation set contains a hidden
commit, making the lookup ambiguous in the disambiguation set but
unambiguous among all visible commits. I don't think we need to worry
about this case. Users should not configure such disambiguation sets,
and even if they do, it will just result in a graceful failure.
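The short-circuit can be sketched with plain string prefixes (hypothetical
types; the real resolver works on commit/change id indexes):

```rust
#[derive(Debug, PartialEq)]
enum Resolution {
    NoMatch,
    Unique(String),
    Ambiguous,
}

// Resolve a prefix against a flat list of ids.
fn resolve(prefix: &str, ids: &[&str]) -> Resolution {
    let mut found = None;
    for id in ids {
        if id.starts_with(prefix) {
            if found.is_some() {
                return Resolution::Ambiguous;
            }
            found = Some(id.to_string());
        }
    }
    found.map_or(Resolution::NoMatch, Resolution::Unique)
}

// Try the small disambiguation set first; fall back to the full repo only
// when the prefix matched nothing there.
fn resolve_with_disambiguation(prefix: &str, small: &[&str], full: &[&str]) -> Resolution {
    match resolve(prefix, small) {
        Resolution::NoMatch => resolve(prefix, full),
        r => r, // Unique or Ambiguous: no full-repo lookup needed
    }
}
```

Because the disambiguation set is a subset of the full repo, Ambiguous in the
small set implies Ambiguous in the full set, so returning early is safe (modulo
the hidden-commit corner case above).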
The semantics are similar to experimental.directaccess=true in Mercurial. Hidden
revisions and their ancestors become temporarily available. This means all() is
not exactly the same as ::visible_heads(). The latter never includes hidden
revisions.
We could instead transform all() to (all() | referenced_commits). However, this
wouldn't work if expressions like ::hidden are intersected/united with filters,
all(), etc.
Fixes #5871
This adds the proptest crate for property-based testing as well as the
proptest-state-machine crate as direct dev dependencies of jj-cli and as
dependencies of the internal testutils crate.
Within testutils, a `proptest` module provides a reference state
machine which models the working copy as a map from path to `DirEntry`.
Directories are not represented explicitly, but are implicit in the
ancestors of entries.
The possible transitions of this state machine are for now limited to
the creation of new files (including replacements of existing files
or directories) and a `Commit` operation which the SUT can use to
snapshot a reference state. Additional transitions (moving files,
modifying file contents incrementally, ...) and states (symlinks,
submodules, conflicts, ...) may be added in the future.
This reference state machine is then applied to the builtin merge-tool's
test suite:
- The initial state is always an empty root directory.
- The `Commit` operation creates `MergedTree` from the current state.
- Each step of the way, the same test logic as in the manual
`test_edit_diff_builtin*` tests is run to check that splitting off
none or all of the changes results in the left or right tree,
respectively. The "right" tree corresponds to the current state,
whereas the "left" tree refers to the last "committed" tree.
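A dependency-free sketch of the reference model (hypothetical names; the real
module builds on proptest-state-machine's ReferenceStateMachine trait):

```rust
use std::collections::BTreeMap;

// The working copy is modeled as a map from path to entry; directories are
// not represented explicitly, only implied by the ancestors of entries.
#[derive(Debug, PartialEq)]
enum DirEntry {
    File(String),
}

#[derive(Debug, Default, PartialEq)]
struct RefState {
    entries: BTreeMap<String, DirEntry>,
}

enum Transition {
    CreateFile { path: String, contents: String },
    Commit,
}

fn apply(mut state: RefState, t: &Transition) -> RefState {
    match t {
        Transition::CreateFile { path, contents } => {
            // A new file at `path` replaces a file at any ancestor path and
            // anything under `path` itself (a file replacing a directory).
            state.entries.retain(|p, _| {
                !path.starts_with(&format!("{p}/")) && !p.starts_with(&format!("{path}/"))
            });
            state.entries.insert(path.clone(), DirEntry::File(contents.clone()));
        }
        // Commit is where the SUT would snapshot the current state into a
        // tree; the reference state itself is unchanged.
        Transition::Commit => {}
    }
    state
}
```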
Co-authored-by: Waleed Khan <me@waleedkhan.name>
This macro in the style of `assert_eq!` compares two trees based
on their `MergedTreeId`s. In case they do not compare equal, the
corresponding trees are dumped in the panic message.
Like `assert_eq!`, the macro accepts a custom format string which will
be included in the panic message.
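The shape is roughly this (toy Tree type standing in for real trees and
MergedTreeIds):

```rust
// Toy stand-in for a tree identified by an id.
#[derive(Debug)]
struct Tree {
    id: u64,
    label: &'static str,
}

// assert_eq!-style macro: compares by id and dumps both trees on mismatch.
// An optional trailing format string is included in the panic message.
macro_rules! assert_trees_eq {
    ($left:expr, $right:expr $(,)?) => {
        assert_trees_eq!($left, $right, "tree ids differ")
    };
    ($left:expr, $right:expr, $($arg:tt)+) => {{
        let (l, r) = (&$left, &$right);
        if l.id != r.id {
            panic!("{}\n left: {:#?}\nright: {:#?}", format_args!($($arg)+), l, r);
        }
    }};
}
```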
The previous patch works, but it seemed weird that the result depends on the
order. For example, `f1 & f2 & s` is rewritten to `s & filter(f1 & f2)`, whereas
`s & f1 & f2` was rewritten to `(s & f1) & f2`. This patch normalizes them.
This partially reverts the change in 9a7ca8edb5 "revset: add optimization pass
to flatten intersections." Thanks to the flatten_intersections() step, we no
longer need to process compound right-hand-side expressions.
This follows up 0a9ab49dc5 "revset: do not reinterpret set&filter intersection
as filter." As Scott discovered, 0a9ab49dc5 doesn't handle nested intersection
of filters.
With this patch, `f1&f2` is evaluated as `filter_within(all(), f1&f2)` instead
of `filter_within(filter_within(all(), f1), f2)`. The evaluation cost should be
the same, since we now have a ResolvedPredicateExpression::Intersection node.
This was originally suggested by Scott Taylor in #6679. I thought filter
intersections would have been externalized to the set intersections, but that
was wrong. Since union of filter intersections (e.g. `(f1 | f2 & f3) & s4`) is
evaluated as `filter_within(s4, f1 | f2 & f3)`, it doesn't make sense to convert
`f2 & f3` back to set intersection `filter_within(all(), f2) & f3`.
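The normalization amounts to partitioning a flattened intersection into set
operands and filter operands, so operand order stops mattering; a sketch with a
hypothetical mini-AST (not jj's RevsetExpression):

```rust
// Hypothetical mini-AST: a revset expression is a set, a filter, or an
// intersection of two expressions.
#[derive(Clone, Debug, PartialEq)]
enum Expr {
    Set(&'static str),
    Filter(&'static str),
    Intersection(Box<Expr>, Box<Expr>),
}

// Split a (flattened) intersection into set operands and filter operands.
// `f1 & f2 & s` and `s & f1 & f2` both yield sets = [s], filters = [f1, f2],
// i.e. both normalize to `s & filter(f1 & f2)`.
fn partition(expr: &Expr, sets: &mut Vec<Expr>, filters: &mut Vec<Expr>) {
    match expr {
        Expr::Intersection(a, b) => {
            partition(a, sets, filters);
            partition(b, sets, filters);
        }
        Expr::Filter(_) => filters.push(expr.clone()),
        Expr::Set(_) => sets.push(expr.clone()),
    }
}
```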
The output looks better if the graph has long parallel history. "--limit=N" is
applied after sorting for consistency with "jj log". The doc also mentions that.
Since TopoGroupedGraphIterator emits predecessors in reverse order at squash
point, we no longer need to tweak the visiting order by walk_predecessors().
This will help propagate errors from the accumulated predecessors graph. "jj op
diff" will collect predecessors from the operation range and resolve transitive
predecessors within them. To translate a CommitId back to an OperationId, we
would have to inspect each operation again. Since this error should be caused
only by data corruption or an implementation bug, the error content isn't so
important.