mirror of https://github.com/jj-vcs/jj.git, synced 2025-12-23 06:01:01 +00:00
The `Backend::read_file()` method is async, but it returns a `Box<dyn Read>`, and reading from that trait is blocking. That's fine with the local Git backend, but it can be slow for remote backends. For example, our backend at Google reads file chunks 1 MiB at a time from the server. That means reading lots of small files concurrently works fine, since the whole file contents are returned by the first `Read::read()` call (they were fetched when `Backend::read_file()` was issued). However, when reading files larger than one chunk, we end up blocking on the next `Read::read()` call. I haven't verified that this is actually a problem at Google, but fixing this blocking is something we should do eventually anyway.

This patch makes `Backend::read_file()` return a `Pin<Box<dyn AsyncRead>>` instead, so implementations can be async in the read part too.

Since `AsyncRead` is not yet standardized, we have to choose between the one from `futures` and the one from `tokio`. I went with the one from `tokio` because an earlier version of this patch used `tokio::fs` for some reads. Then I realized that doing that means we have to use a tokio runtime, meaning that we can't safely keep our existing `pollster::FutureExt::block_on()` calls. If we start depending on tokio's specific runtime, I think we would first want to remove all the `block_on()` calls; I'll leave that for later. I think at this point we could equally well use `futures::io::AsyncRead`, but I also don't know of a reason to prefer that.
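To illustrate the shape of the change, here is a std-only sketch of a poll-based reader. The `AsyncRead` trait below is a simplified stand-in for `tokio::io::AsyncRead` (the real trait uses a `ReadBuf`, but the polling shape is similar), and `read_file`, `ChunkedFile`, and `block_on` are hypothetical names, not jj's actual API; the point is that a backend serving data one chunk per poll no longer blocks the caller between chunks.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Simplified stand-in for tokio::io::AsyncRead (std-only sketch).
trait AsyncRead {
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut [u8],
    ) -> Poll<std::io::Result<usize>>;
}

// A file whose contents arrive in fixed-size "chunks", as from a remote
// backend. With a blocking `Box<dyn Read>`, each chunk boundary would
// stall the caller; with poll-based reads, the runtime can interleave work.
struct ChunkedFile {
    data: Vec<u8>,
    pos: usize,
    chunk_size: usize,
}

impl AsyncRead for ChunkedFile {
    fn poll_read(
        mut self: Pin<&mut Self>,
        _cx: &mut Context<'_>,
        buf: &mut [u8],
    ) -> Poll<std::io::Result<usize>> {
        if self.pos >= self.data.len() {
            return Poll::Ready(Ok(0)); // EOF
        }
        // Serve at most one chunk per poll, as a remote backend might.
        let end = (self.pos + self.chunk_size).min(self.data.len());
        let n = (end - self.pos).min(buf.len());
        buf[..n].copy_from_slice(&self.data[self.pos..self.pos + n]);
        self.pos += n;
        Poll::Ready(Ok(n))
    }
}

// Hypothetical backend method mirroring the new signature:
// async, and returning `Pin<Box<dyn AsyncRead>>` instead of `Box<dyn Read>`.
async fn read_file(contents: &[u8]) -> Pin<Box<dyn AsyncRead>> {
    Box::pin(ChunkedFile { data: contents.to_vec(), pos: 0, chunk_size: 4 })
}

// No-op waker, enough to drive this always-ready example without a runtime.
fn noop_waker() -> Waker {
    const VTABLE: RawWakerVTable = RawWakerVTable::new(
        |_| RawWaker::new(std::ptr::null(), &VTABLE),
        |_| {},
        |_| {},
        |_| {},
    );
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Minimal block_on, standing in for `pollster::FutureExt::block_on()`.
fn block_on<F: Future>(fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let mut reader = block_on(read_file(b"hello, async world"));
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut out = Vec::new();
    let mut buf = [0u8; 16];
    loop {
        // Under a real runtime this would be `reader.read(&mut buf).await`;
        // here we poll directly since our reader never returns Pending.
        match reader.as_mut().poll_read(&mut cx, &mut buf) {
            Poll::Ready(Ok(0)) => break,
            Poll::Ready(Ok(n)) => out.extend_from_slice(&buf[..n]),
            Poll::Ready(Err(e)) => panic!("{e}"),
            Poll::Pending => continue,
        }
    }
    assert_eq!(out, b"hello, async world");
    println!("read {} bytes in chunks of 4", out.len());
}
```

Each `poll_read` returns at most one chunk, so a large file takes several polls, but none of them block the thread the way the next `Read::read()` call did.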
Examples in this directory:

- custom-backend
- custom-command
- custom-commit-templater
- custom-global-flag
- custom-operation-templater
- custom-working-copy