Compare commits

...

51 commits

Author SHA1 Message Date
devolutionsbot
87f8d073c8
chore(release): prepare for publishing (#1020)
2025-12-20 11:42:23 +00:00
Will Warner
bd2aed7686
feat(ironrdp-tls)!: return x509_cert::Certificate from upgrade() (#1054)
This allows client applications to verify details of the certificate,
possibly with the user, when connecting to a server using TLS.
2025-12-18 04:14:51 +00:00
dependabot[bot]
b50b648344
build(deps): bump the patch group across 1 directory with 3 updates (#1055)
2025-12-16 02:09:49 -05:00
dependabot[bot]
113284a053
build(deps): bump uuid from 1.18.1 to 1.19.0 (#1056) 2025-12-16 02:09:12 -05:00
glamberson
d587b0c4c1
fix(cliprdr): allow servers to announce clipboard ownership (#1053)
Servers can now send Format List PDU via initiate_copy() regardless of
internal state. The existing state machine was designed for clients
where clipboard initialization must complete before announcing
ownership.

MS-RDPECLIP Section 2.2.3.1 specifies that Format List PDU is sent by
either client or server when the local clipboard is updated. Servers
should be able to announce clipboard changes immediately after channel
negotiation.

This change enables RDP servers to properly announce clipboard ownership
by bypassing the Initialization/Ready state check when R::is_server() is
true. Client behavior remains unchanged.
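The state check described above can be sketched as follows; the type and method names here are hypothetical stand-ins, not the actual IronRDP cliprdr API:

```rust
// Hypothetical sketch of the server bypass; real IronRDP types differ.
#[derive(Debug, PartialEq)]
enum CliprdrState {
    Initialization,
    Ready,
}

struct Cliprdr {
    state: CliprdrState,
    is_server: bool,
}

impl Cliprdr {
    /// Returns true when a Format List PDU may be sent.
    fn can_initiate_copy(&self) -> bool {
        // Servers may announce clipboard ownership right after channel
        // negotiation, regardless of internal state; clients must still
        // wait until initialization is complete.
        self.is_server || self.state == CliprdrState::Ready
    }
}

fn main() {
    let server = Cliprdr { state: CliprdrState::Initialization, is_server: true };
    let client = Cliprdr { state: CliprdrState::Initialization, is_server: false };
    assert!(server.can_initiate_copy());
    assert!(!client.can_initiate_copy());
}
```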

Co-authored-by: lamco-office <office@lamco.io>
2025-12-11 16:49:07 +00:00
dependabot[bot]
0903c9ae75
build(deps): bump the patch group across 1 directory with 4 updates (#1051)
2025-12-09 00:18:06 -05:00
dependabot[bot]
f31af061b9
build(deps): bump rstest from 0.25.0 to 0.26.1 (#1052) 2025-12-09 00:17:45 -05:00
Benoît Cortier
7123150b63
fix(ffi): preserve full error context across FFI boundary (#1050)
Replaced the generic `From` implementation that used `to_string()` with specific implementations for each error type, ensuring source error chains are preserved. IronRDP errors now use `.report()` for full context, while standard library errors are converted to `anyhow::Error` for proper alternate formatting with `{:#}`.
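The difference matters because `to_string()` prints only the top-level error, while walking `Error::source()` (as anyhow's `{:#}` formatting does) keeps the chain. A minimal std-only sketch, not the actual FFI code:

```rust
use std::error::Error;
use std::fmt;

// Toy error with a source, to show what `to_string()` drops.
#[derive(Debug)]
struct ConnectError {
    source: std::io::Error,
}

impl fmt::Display for ConnectError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "connection failed")
    }
}

impl Error for ConnectError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(&self.source)
    }
}

/// Format an error together with its full source chain.
fn report(err: &dyn Error) -> String {
    let mut out = err.to_string();
    let mut cur = err.source();
    while let Some(src) = cur {
        out.push_str(": ");
        out.push_str(&src.to_string());
        cur = src.source();
    }
    out
}

fn main() {
    let err = ConnectError {
        source: std::io::Error::new(std::io::ErrorKind::TimedOut, "handshake timed out"),
    };
    // `to_string()` drops the cause; `report` preserves it.
    assert_eq!(err.to_string(), "connection failed");
    assert_eq!(report(&err), "connection failed: handshake timed out");
}
```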
2025-12-05 15:43:28 +00:00
Richard Markiewicz
632ad86f67
chore: bump iOS nuget package to net9.0 (#1049)
The `net8.0-ios` workload is out of support and won't build
out-of-the-box anymore. Bump the target framework to `net9.0-ios`.
2025-12-04 11:45:49 -05:00
Richard Markiewicz
da5db5bf11
chore(release): prepare for Devolutions.IronRdp v2025.12.4.0 (#1048)
We still don't have a means to set the nuget version directly in the
workflow. I thought of adding it, but on closer inspection we have logic
to handle package and product versions differently (the same is true of
sspi-rs etc.), and it probably needs a closer look. It can be troublesome
to deploy a newer nuget package that doesn't increment the assembly
versions: for example (it shouldn't be an issue for Devolutions, but
maybe for other consumers), Windows installers generally might not
overwrite a DLL if the version number is not newer than what is already
installed.

For now, I just bump the package version manually.
2025-12-04 15:11:34 +00:00
Richard Markiewicz
a2af587e60
fix(cliprdr): prevent window class registration error on multiple sessions (#1047)
When starting a second clipboard session, `RegisterClassA` would fail
with `ERROR_CLASS_ALREADY_EXISTS` because window classes are global to
the process. The code now checks whether the class is already registered
before attempting registration, allowing multiple `WinClipboard`
instances to coexist.
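The check-before-register pattern can be illustrated with a process-global table as a stand-in for the Win32 window-class registry; this is a simplified analogue, not the actual `RegisterClassA` call:

```rust
use std::sync::Mutex;

// Process-global analogue of the Win32 window-class table (illustrative).
static CLASSES: Mutex<Vec<String>> = Mutex::new(Vec::new());

/// Old behavior: fails on a second registration, as `RegisterClassA`
/// does with ERROR_CLASS_ALREADY_EXISTS.
fn register_class(name: &str) -> Result<(), &'static str> {
    let mut classes = CLASSES.lock().unwrap();
    if classes.iter().any(|c| c == name) {
        return Err("class already exists");
    }
    classes.push(name.to_owned());
    Ok(())
}

/// New behavior: check first, so an already-registered class is not an
/// error and multiple clipboard sessions can share it.
fn ensure_class_registered(name: &str) -> Result<(), &'static str> {
    let mut classes = CLASSES.lock().unwrap();
    if !classes.iter().any(|c| c == name) {
        classes.push(name.to_owned());
    }
    Ok(())
}

fn main() {
    assert!(register_class("WinClipboardClass").is_ok());
    assert!(register_class("WinClipboardClass").is_err()); // the old failure
    assert!(ensure_class_registered("WinClipboardClass").is_ok()); // the fix
}
```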
2025-12-04 07:38:05 +00:00
Alex Yusiuk
924330159a
chore: enable large_futures clippy lint (#1046)
> It checks for the size of a Future created by async fn or async {}.

The maximum byte size a `Future` can have before it triggers the
`clippy::large_futures` lint is 16384 by default, but we can adjust it.
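The lint's rationale can be seen by measuring futures directly: an `async` block stores whatever state it captures, so moving a large buffer into one inflates its size past the default 16384-byte threshold. An illustrative sketch, unrelated to the IronRDP codebase:

```rust
// A future created by an `async move` block owns its captured state.
fn main() {
    let buf = [0u8; 20_000];
    let big = async move { buf.len() };
    let small = async move { 0usize };

    // `big` owns `buf`, so it is at least 20 000 bytes — large enough to
    // trip clippy::large_futures at its default 16384-byte threshold.
    assert!(std::mem::size_of_val(&big) >= 20_000);
    assert!(std::mem::size_of_val(&small) < 16_384);
    // `Box::pin`-ning such a future moves its state to the heap, which is
    // the usual fix when the lint fires.
}
```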
2025-12-03 08:07:33 -05:00
dependabot[bot]
cf978321d3
build(deps): bump the patch group across 1 directory with 6 updates (#1044)
2025-12-03 07:52:21 +00:00
dependabot[bot]
b303ae3a90
build(deps): bump criterion from 0.7.0 to 0.8.0 (#1045)
Bumps [criterion](https://github.com/criterion-rs/criterion.rs) from
0.7.0 to 0.8.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/criterion-rs/criterion.rs/releases">criterion's
releases</a>.</em></p>
<blockquote>
<h2>criterion-plot-v0.8.0</h2>
<p>No release notes provided.</p>
<h2>criterion-v0.8.0</h2>
<h3>BREAKING</h3>
<ul>
<li>Drop async-std support</li>
</ul>
<h3>Changed</h3>
<ul>
<li>Bump MSRV to 1.86, stable to 1.91.1</li>
</ul>
<h3>Added</h3>
<ul>
<li>Add ability to plot throughput on summary page.</li>
<li>Add support for reporting throughput in elements and bytes -
<code>Throughput::ElementsAndBytes</code> allows the text summary to
report throughput in both units simultaneously.</li>
<li>Add alloca-based memory layout randomisation to mitigate memory
effects on measurements.</li>
<li>Add doc comment to benchmark runner in criterion_group macro
(removes linter warnings)</li>
</ul>
<h3>Fixed</h3>
<ul>
<li>Fix plotting NaN bug</li>
</ul>
<h3>Other</h3>
<ul>
<li>Remove Master API Docs links temporarily while we restore the docs
publishing.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="b49ade728c"><code>b49ade7</code></a>
chore: release v0.8.0</li>
<li>See full diff in <a
href="https://github.com/criterion-rs/criterion.rs/compare/criterion-plot-v0.7.0...criterion-v0.8.0">compare
view</a></li>
</ul>
</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-03 02:36:06 -05:00
Benoît Cortier
bca6d190a8
fix(ironrdp-async)!: use static dispatch for NetworkClient trait (#1043)
- Rename `AsyncNetworkClient` to `NetworkClient`
- Replace dynamic dispatch (`Option<&mut dyn ...>`) with static dispatch
using generics (`&mut N where N: NetworkClient`)
- Reorder `connect_finalize` parameters for consistency across crates
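The dispatch change above can be sketched with a simplified trait (not the real `ironrdp-async` signatures):

```rust
// Simplified stand-in for the NetworkClient trait.
trait NetworkClient {
    fn send(&mut self, data: &[u8]) -> usize;
}

struct TcpClient {
    sent: usize,
}

impl NetworkClient for TcpClient {
    fn send(&mut self, data: &[u8]) -> usize {
        self.sent += data.len();
        self.sent
    }
}

// Before: dynamic dispatch through an optional trait object.
fn connect_dyn(client: Option<&mut dyn NetworkClient>) -> usize {
    client.map(|c| c.send(b"hello")).unwrap_or(0)
}

// After: static dispatch via a generic parameter, monomorphized per
// concrete type, so calls can be inlined with no vtable indirection.
fn connect_static<N: NetworkClient>(client: &mut N) -> usize {
    client.send(b"hello")
}

fn main() {
    let mut c = TcpClient { sent: 0 };
    assert_eq!(connect_dyn(Some(&mut c)), 5);
    assert_eq!(connect_static(&mut c), 10);
}
```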
2025-12-03 02:31:12 -05:00
Raphaël Larivière
cca323adab
ci: migrate iron crates to trusted publishing (#1042)
All crates migrated:

- ironrdp
- ironrdp-acceptor
- ironrdp-ainput
- ironrdp-async
- ironrdp-blocking
- ironrdp-cliprdr
- ironrdp-cliprdr-format
- ironrdp-cliprdr-native
- ironrdp-connector
- ironrdp-core
- ironrdp-displaycontrol
- ironrdp-dvc
- ironrdp-dvc-pipe-proxy
- ironrdp-error
- ironrdp-futures
- ironrdp-graphics
- ironrdp-input
- ironrdp-pdu
- ironrdp-rdcleanpath
- ironrdp-rdpdr
- ironrdp-rdpdr-native
- ironrdp-rdpsnd
- ironrdp-rdpsnd-native
- ironrdp-server
- ironrdp-session
- ironrdp-svc
- ironrdp-tls
- ironrdp-tokio
- iron-remote-desktop
2025-11-26 15:01:04 -05:00
dependabot[bot]
742607240c
build(deps): bump clap from 4.5.52 to 4.5.53 in the patch group across 1 directory (#1039)
2025-11-25 04:13:45 -05:00
dependabot[bot]
866a6d8674
build(deps): bump bytesize from 2.2.0 to 2.3.0 (#1040) 2025-11-25 04:13:24 -05:00
Benoît Cortier
430f70b43f
ci(release): set publish = false in ironrdp-client release-plz config (#1038)
2025-11-20 10:01:39 -05:00
Allan Zhang
5bd319126d
build(deps): bump picky and sspi (#1028)
This fixes build issues with some dependencies.
2025-11-19 14:08:42 +00:00
Dion Gionet Mallet
bfb0cae2f8
ci(nuget): use Trusted Publishing auth (#1035)
Issue: DEVOPS-3949
2025-11-18 01:24:12 -05:00
Yuval Marcus
a70e01d9c5
fix(server): send TLS close_notify during graceful RDP disconnect (#1032)
Add support for sending a proper TLS close_notify message when the RDP
client sends a graceful disconnect PDU.
2025-11-17 07:36:31 -05:00
Yuval Marcus
f2326ef046
fix(cliprdr)!: receiving a TemporaryDirectory PDU should not fail the svc (#1031)
- Fixes the Cliprdr `SvcProcessor` impl. to support handling a
`TemporaryDirectory` Clipboard PDU.
- Removes `ClipboardError::UnimplementedPdu` since it is no longer used
2025-11-17 10:13:47 +00:00
Yuval Marcus
966ba8a53e
feat(ironrdp-tokio): add MovableTokioFramed for Send+!Sync context (#1033)
The `ironrdp-tokio` crate currently provides the following two
`Framed<S>` implementations using the standard `tokio::io` traits:
- `type TokioFramed<S> = Framed<TokioStream<S>>` where `S: Send + Sync +
Unpin`
- `type LocalTokioFramed<S> = Framed<LocalTokioStream<S>>` where `S:
Unpin`

The former is meant for multi-threaded runtimes and the latter is meant
for single-threaded runtimes.

This PR adds a third `Framed<S>` implementation:

`pub type MovableTokioFramed<S> = Framed<MovableTokioStream<S>>` where
`S: Send + Unpin`

This is a valid use case, as some implementations of the `tokio::io`
traits are `Send` but `!Sync`. Without this third type, consumers of
`Framed<S>` whose stream type `S` is `Send + !Sync` are forced to
downgrade to `LocalTokioFramed` and resort to hacky workarounds with
`tokio::task::spawn_blocking`, since its defined associated futures,
`ReadFut` and `WriteAllFut`, are neither `Send` nor `Sync`.
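A `Send + !Sync` type is easy to exhibit with the standard library: `Cell` allows its owner to move across threads but forbids shared references across threads. The types below are stand-ins, not the real `ironrdp-tokio` definitions:

```rust
use std::cell::Cell;

// A stream-like type that is Send (ownership may move to another thread)
// but !Sync (no &-sharing across threads), because Cell<usize> is !Sync.
struct MovableStream {
    bytes_read: Cell<usize>,
}

fn requires_send<T: Send>(_t: &T) {}
// fn requires_sync<T: Sync>(_t: &T) {} // would NOT compile for MovableStream

fn main() {
    let stream = MovableStream { bytes_read: Cell::new(0) };
    requires_send(&stream); // compiles: a `S: Send + Unpin` bound accepts it,
                            // while `S: Send + Sync + Unpin` would reject it
    stream.bytes_read.set(42);
    assert_eq!(stream.bytes_read.get(), 42);
}
```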
2025-11-17 10:11:16 +00:00
dependabot[bot]
79e71c4f90
build(deps): bump windows from 0.61.3 to 0.62.1 (#1010)
Co-authored-by: Vladyslav Nikonov <mail@pacmancoder.xyz>
2025-11-10 14:12:28 -05:00
Alex Yusiuk
9622619e8c
refactor: add as_conversions clippy correctness lint (#1021)
Co-authored-by: Benoît CORTIER <git.divisible626@passmail.com>
2025-11-05 22:23:22 +00:00
irvingouj@Devolutions
d3e0cb17e1
feat(ffi): expose RDCleanPath (#1014)
Add RDCleanPath support for the Devolutions.IronRDP .NET package
2025-10-30 16:38:41 +00:00
Raphaël Larivière
2cedc05722
ci(npm): migrate publishing to OIDC authentication (#1026)
Issue: DEVOPS-3952
2025-10-30 08:40:10 -04:00
Alex Yusiuk
abc391c134
refactor: add partial_pub_fields clippy style and readability lint (#976)
2025-10-23 10:58:41 -04:00
dependabot[bot]
bbdecc2aa9
build(deps): bump clap from 4.5.49 to 4.5.50 in the patch group across 1 directory (#1023)
2025-10-21 05:04:36 -04:00
Alex Yusiuk
e87048c19b
refactor: get rid of lazy_static (#1022)
2025-10-20 15:07:19 -04:00
dependabot[bot]
52225cca3e
build(deps): bump the patch group across 1 directory with 3 updates (#1017)
2025-10-14 05:25:35 -04:00
Alex Yusiuk
6214c95c6f
test(extra): move the mod.rs to the correct tests directory (#1019) 2025-10-14 05:11:58 -04:00
rhammonds-teleport
a0a3e750c9
fix(rdpdr): fix incorrect padding when parsing NDR strings (#1015)
When parsing Network Data Representation (NDR) messages, we're supposed
to account for padding at the end of strings to remain aligned on a
4-byte boundary. The existing code doesn't seem to cover all cases, and
the resulting misalignment causes misleading errors when processing the
rest of the message.
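The alignment rule described above reduces to simple arithmetic: after reading a string of `len` bytes starting at `offset`, skip enough padding to land back on a 4-byte boundary. An illustrative helper, not the actual ironrdp-rdpdr code:

```rust
/// Number of padding bytes to skip after a string of `len` bytes that
/// started at `offset`, so the cursor returns to a 4-byte boundary.
fn ndr_padding(offset: usize, len: usize) -> usize {
    let end = offset + len;
    (4 - end % 4) % 4
}

fn main() {
    assert_eq!(ndr_padding(0, 4), 0);  // already aligned
    assert_eq!(ndr_padding(0, 5), 3);  // ends at 5, pad 3 to reach 8
    assert_eq!(ndr_padding(2, 7), 3);  // ends at 9, pad 3 to reach 12
    assert_eq!(ndr_padding(0, 10), 2); // ends at 10, pad 2 to reach 12
}
```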
2025-10-09 13:25:08 -04:00
dependabot[bot]
d24dbb1e2c
build(deps): bump tokio-tungstenite from 0.27.0 to 0.28.0 (#1009)
2025-10-08 04:19:40 -04:00
dependabot[bot]
a24a1fa9e8
build(deps): bump bytemuck from 1.23.2 to 1.24.0 (#1008)
2025-10-07 09:43:36 -04:00
Alex Yusiuk
cca53fd79b
chore(web): bump iron-remote-desktop to 0.10.1 (#1011)
2025-10-07 06:47:11 -04:00
Alex Yusiuk
da38fa20a3
chore(web): bump iron-remote-desktop-rdp to 0.6.1 (#1012) 2025-10-07 06:46:35 -04:00
Alex Yusiuk
82dbb6460f
refactor(ironrdp-pdu)!: fix as_conversions clippy lint warnings (#967)
Co-authored-by: Benoît CORTIER <git.divisible626@passmail.com>
2025-10-07 09:36:58 +00:00
Alex Yusiuk
af8ebdcfa2
refactor: enable missing_panics_doc clippy lint (#1006)
> Checks the doc comments of publicly visible functions that may panic
and warns if there is no # Panics section.

Co-authored-by: Benoît Cortier <3809077+CBenoit@users.noreply.github.com>
2025-10-06 09:19:22 +00:00
Alex Yusiuk
ce298d1c19
refactor: enable self_named_module_files clippy lint (#1007)
> Checks that module layout uses only mod.rs files.
2025-10-03 05:37:14 -04:00
Alex Yusiuk
a8289bf63f
refactor: enable unnecessary_self_imports clippy lint (#1005)
2025-10-02 05:10:55 -04:00
Alex Yusiuk
bbc38db750
chore: enable try_err clippy lint (#1004) 2025-10-02 08:34:02 +00:00
Alex Yusiuk
49a0a9e6d2
chore: enable rest_pat_in_fully_bound_structs clippy lint (#1003) 2025-10-02 04:05:34 -04:00
devolutionsbot
c6b5487559
chore(release): prepare for publishing (#1002) 2025-10-02 03:34:02 +00:00
Gabriel Bauman
18c81ed5d8
feat(web): human-readable descriptions for RDCleanPath errors (#999)
Additional mapping to produce human-readable webclient-side errors for
RDCleanPath general/negotiation failures, including message strings for WSA,
TLS, and HTTP error conditions.
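The kind of mapping this commit describes can be sketched as a match from error kinds to user-facing strings. This is a minimal illustration with hypothetical variant names, not the actual RDCleanPath error type:

```rust
// Hypothetical negotiation error kinds; the real RDCleanPath variants differ.
enum NegotiationErrorKind {
    TlsHandshake,
    HttpStatus(u16),
    Wsa(i32),
}

// Turn a protocol-level error into a message a web client user can read.
fn user_message(kind: &NegotiationErrorKind) -> String {
    match kind {
        NegotiationErrorKind::TlsHandshake => "TLS handshake with the server failed".to_owned(),
        NegotiationErrorKind::HttpStatus(code) => format!("gateway returned HTTP status {code}"),
        NegotiationErrorKind::Wsa(code) => format!("Windows socket (WSA) error {code}"),
    }
}

fn main() {
    println!("{}", user_message(&NegotiationErrorKind::HttpStatus(502)));
}
```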
2025-10-01 22:54:37 -04:00
Alex Yusiuk
b91a4eeb01
refactor: enable redundant_type_annotations clippy lint (#1001)
2025-09-30 09:45:22 -04:00
devolutionsbot
209108dc2c
chore(release): prepare for publishing (#997)
2025-09-29 11:07:48 +00:00
Vladyslav Nikonov
a660d7f960
chore(release): remove leading zero from nuget version (#1000)
leading zero in Nuget version caused CI to fail on ffi crate build:

```
 error: invalid leading zero in minor version number
 --> ffi/Cargo.toml:3:11
  |
3 | version = "2025.09.24"
  |           ^^^^^^^^^^^^
  |
error: failed to load manifest for workspace member `/Users/runner/work/IronRDP/IronRDP/ffi`
referenced by workspace at `/Users/runner/work/IronRDP/IronRDP/Cargo.toml`
```
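Cargo's manifest parser follows SemVer, which forbids leading zeros in numeric version components; that is why `2025.09.24` is rejected while `2025.9.24` is fine. A rough stdlib-only sketch of that rule (not cargo's actual implementation):

```rust
// Returns true if every dot-separated component is a valid SemVer numeric
// identifier: digits only, and no leading zero unless the component is "0".
fn valid_semver_numbers(version: &str) -> bool {
    version.split('.').all(|part| {
        !part.is_empty()
            && part.chars().all(|c| c.is_ascii_digit())
            && (part == "0" || !part.starts_with('0'))
    })
}

fn main() {
    assert!(!valid_semver_numbers("2025.09.24")); // rejected: leading zero in "09"
    assert!(valid_semver_numbers("2025.9.24")); // accepted after this fix
    println!("ok");
}
```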
2025-09-29 06:39:28 -04:00
Vladyslav Nikonov
3198abae2f
chore(release): prepare for Devolutions.IronRdp v2025.09.24.0 (#998)
2025-09-25 08:52:17 -04:00
Alex Yusiuk
6127e13c83
fix(web): fix this.lastSentClipboardData being nulled (#992)
```js
await this.remoteDesktopService.onClipboardChanged(...)
```
This call consumes the clipboard data we pass in, so we need to clone the data
first to keep `this.lastSentClipboardData` from being set to null.
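The same hazard is familiar from Rust when an API takes ownership of its argument; a minimal sketch (the type and function names here are illustrative, not the crate's API) of why the caller clones to keep a local copy:

```rust
#[derive(Clone, Debug, PartialEq)]
struct ClipboardData(String);

// Takes ownership: after the call, the caller no longer has the value,
// analogous to the consuming JS call above.
fn on_clipboard_changed(data: ClipboardData) {
    let _ = data; // pretend it is sent to the remote side
}

fn main() {
    let last_sent = ClipboardData("hello".to_owned());
    // Clone before the call so `last_sent` stays usable afterwards.
    on_clipboard_changed(last_sent.clone());
    assert_eq!(last_sent, ClipboardData("hello".to_owned()));
}
```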
2025-09-24 08:49:32 +00:00
286 changed files with 7364 additions and 3900 deletions

View file

@ -5,7 +5,7 @@ on:
branches:
- master
pull_request:
types: [ opened, synchronize, reopened ]
types: [opened, synchronize, reopened]
workflow_dispatch:
env:
@ -56,12 +56,12 @@ jobs:
checks:
name: Checks [${{ matrix.os }}]
needs: [formatting]
runs-on: ${{ matrix.runner }}
needs: formatting
strategy:
fail-fast: false
matrix:
os: [ windows, linux, macos ]
os: [windows, linux, macos]
include:
- os: windows
runner: windows-latest
@ -75,17 +75,17 @@ jobs:
uses: actions/checkout@v4
- name: Install devel packages
if: runner.os == 'Linux'
if: ${{ runner.os == 'Linux' }}
run: |
sudo apt-get -y install libasound2-dev
- name: Install NASM
if: runner.os == 'Windows'
shell: pwsh
if: ${{ runner.os == 'Windows' }}
run: |
choco install nasm
$Env:PATH += ";$Env:ProgramFiles\NASM"
echo "PATH=$Env:PATH" >> $Env:GITHUB_ENV
shell: pwsh
- name: Rust cache
uses: Swatinem/rust-cache@v2.7.3
@ -118,8 +118,8 @@ jobs:
fuzz:
name: Fuzzing
needs: [formatting]
runs-on: ubuntu-latest
needs: formatting
steps:
- uses: actions/checkout@v4
@ -147,8 +147,8 @@ jobs:
web:
name: Web Client
needs: [formatting]
runs-on: ubuntu-latest
needs: formatting
steps:
- uses: actions/checkout@v4
@ -173,8 +173,8 @@ jobs:
ffi:
name: FFI
needs: [formatting]
runs-on: ubuntu-latest
needs: formatting
steps:
- uses: actions/checkout@v4
@ -202,20 +202,14 @@ jobs:
success:
name: Success
runs-on: ubuntu-latest
if: ${{ always() }}
needs:
- formatting
- typos
- checks
- fuzz
- web
- ffi
needs: [formatting, typos, checks, fuzz, web, ffi]
runs-on: ubuntu-latest
steps:
- name: Check success
shell: pwsh
run: |
$results = '${{ toJSON(needs.*.result) }}' | ConvertFrom-Json
$succeeded = $($results | Where { $_ -Ne "success" }).Count -Eq 0
exit $(if ($succeeded) { 0 } else { 1 })
shell: pwsh

View file

@ -5,7 +5,7 @@ on:
branches:
- master
pull_request:
types: [ opened, synchronize, reopened ]
types: [opened, synchronize, reopened]
workflow_dispatch:
env:
@ -32,19 +32,19 @@ jobs:
run: cargo xtask cov install -v
- name: Generate PR report
if: github.event.number != ''
if: ${{ github.event.number != '' }}
run: cargo xtask cov report-gh --repo "${{ github.repository }}" --pr "${{ github.event.number }}" -v
env:
GH_TOKEN: ${{ github.token }}
run: cargo xtask cov report-gh --repo "${{ github.repository }}" --pr "${{ github.event.number }}" -v
- name: Configure Git Identity
if: github.ref == 'refs/heads/master'
if: ${{ github.ref == 'refs/heads/master' }}
run: |
git config --local user.name "github-actions[bot]"
git config --local user.email "github-actions[bot]@users.noreply.github.com"
- name: Update coverage data
if: github.ref == 'refs/heads/master'
if: ${{ github.ref == 'refs/heads/master' }}
run: cargo xtask cov update -v
env:
GH_TOKEN: ${{ secrets.DEVOLUTIONSBOT_TOKEN }}
run: cargo xtask cov update -v

View file

@ -36,12 +36,12 @@ jobs:
fuzz:
name: Fuzzing ${{ matrix.target }}
needs: [corpus-download]
runs-on: ubuntu-latest
needs: corpus-download
strategy:
fail-fast: false
matrix:
target: [ pdu_decoding, rle_decompression, bitmap_stream, cliprdr_format, channel_processing ]
target: [pdu_decoding, rle_decompression, bitmap_stream, cliprdr_format, channel_processing]
steps:
- uses: actions/checkout@v4
@ -108,9 +108,9 @@ jobs:
corpus-merge:
name: Corpus merge artifacts
runs-on: ubuntu-latest
needs: fuzz
if: ${{ always() && !cancelled() }}
needs: [fuzz]
runs-on: ubuntu-latest
steps:
- name: Merge Artifacts
@ -122,9 +122,9 @@ jobs:
corpus-upload:
name: Upload corpus
runs-on: ubuntu-latest
needs: corpus-merge
if: ${{ always() && !cancelled() }}
needs: [corpus-merge]
runs-on: ubuntu-latest
env:
AZURE_STORAGE_KEY: ${{ secrets.CORPUS_AZURE_STORAGE_KEY }}
@ -156,13 +156,13 @@ jobs:
notify:
name: Notify failure
runs-on: ubuntu-latest
if: ${{ always() && contains(needs.*.result, 'failure') && github.event_name == 'schedule' }}
needs:
- fuzz
needs: [fuzz]
runs-on: ubuntu-latest
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_ARCHITECTURE }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
steps:
- name: Send slack notification
id: slack

View file

@ -21,7 +21,6 @@ jobs:
steps:
- name: Get dry run
id: get-dry-run
shell: pwsh
run: |
$IsDryRun = '${{ github.event.inputs.dry-run }}' -Eq 'true' -Or '${{ github.event_name }}' -Eq 'schedule'
@ -30,13 +29,12 @@ jobs:
} else {
echo "dry-run=false" >> $Env:GITHUB_OUTPUT
}
shell: pwsh
build:
name: Build package [${{matrix.library}}]
needs: [preflight]
runs-on: ubuntu-latest
needs:
- preflight
strategy:
fail-fast: false
matrix:
@ -49,18 +47,17 @@ jobs:
uses: actions/checkout@v4
- name: Setup wasm-pack
shell: bash
run: |
curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
shell: bash
- name: Install dependencies
shell: pwsh
run: |
Set-Location -Path "./web-client/${{matrix.library}}/"
npm install
shell: pwsh
- name: Build package
shell: pwsh
run: |
Set-PSDebug -Trace 1
@ -68,14 +65,15 @@ jobs:
npm run build
Set-Location -Path ./dist
npm pack
shell: pwsh
- name: Harvest package
shell: pwsh
run: |
Set-PSDebug -Trace 1
New-Item -ItemType "directory" -Path . -Name "npm-packages"
Get-ChildItem -Path ./web-client/ -Recurse *.tgz | ForEach { Copy-Item $_ "./npm-packages" }
shell: pwsh
- name: Upload package artifact
uses: actions/upload-artifact@v4
@ -85,8 +83,8 @@ jobs:
npm-merge:
name: Merge artifacts
needs: [build]
runs-on: ubuntu-latest
needs: build
steps:
- name: Merge Artifacts
@ -98,12 +96,13 @@ jobs:
publish:
name: Publish package
runs-on: ubuntu-latest
if: github.event_name == 'workflow_dispatch'
environment: publish
needs:
- preflight
- npm-merge
if: ${{ github.event_name == 'workflow_dispatch' }}
needs: [preflight, npm-merge]
runs-on: ubuntu-latest
permissions:
contents: write
id-token: write
steps:
- name: Checkout repository
@ -117,12 +116,7 @@ jobs:
name: npm
path: npm-packages
- name: Prepare npm
shell: pwsh
run: npm config set "//registry.npmjs.org/:_authToken=${{ secrets.NPM_TOKEN }}"
- name: Publish
shell: pwsh
run: |
Set-PSDebug -Trace 1
@ -168,15 +162,10 @@ jobs:
$publishCmd = $publishCmd -Join ' '
Invoke-Expression $publishCmd
}
shell: pwsh
- name: Create version tags
if: ${{ needs.preflight.outputs.dry-run == 'false' }}
shell: bash
env:
GIT_AUTHOR_NAME: github-actions
GIT_AUTHOR_EMAIL: github-actions@github.com
GIT_COMMITTER_NAME: github-actions
GIT_COMMITTER_EMAIL: github-actions@github.com
run: |
set -e
@ -202,6 +191,12 @@ jobs:
git tag "$tag" "$GITHUB_SHA"
git push origin "$tag"
done
shell: bash
env:
GIT_AUTHOR_NAME: github-actions
GIT_AUTHOR_EMAIL: github-actions@github.com
GIT_COMMITTER_NAME: github-actions
GIT_COMMITTER_EMAIL: github-actions@github.com
- name: Update Artifactory Cache
if: ${{ needs.preflight.outputs.dry-run == 'false' }}
@ -213,14 +208,13 @@ jobs:
notify:
name: Notify failure
runs-on: ubuntu-latest
if: ${{ always() && contains(needs.*.result, 'failure') && github.event_name == 'schedule' }}
needs:
- preflight
- build
needs: [preflight, build]
runs-on: ubuntu-latest
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_ARCHITECTURE }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
steps:
- name: Send slack notification
id: slack

View file

@ -26,7 +26,6 @@ jobs:
- name: Get dry run
id: get-dry-run
shell: pwsh
run: |
$IsDryRun = '${{ github.event.inputs.dry-run }}' -Eq 'true' -Or '${{ github.event_name }}' -Eq 'schedule'
@ -35,26 +34,27 @@ jobs:
} else {
echo "dry-run=false" >> $Env:GITHUB_OUTPUT
}
shell: pwsh
- name: Get version
id: get-version
shell: pwsh
run: |
$CsprojXml = [Xml] (Get-Content .\ffi\dotnet\Devolutions.IronRdp\Devolutions.IronRdp.csproj)
$ProjectVersion = $CsprojXml.Project.PropertyGroup.Version | Select-Object -First 1
$PackageVersion = $ProjectVersion -Replace "^(\d+)\.(\d+)\.(\d+).(\d+)$", "`$1.`$2.`$3"
echo "project-version=$ProjectVersion" >> $Env:GITHUB_OUTPUT
echo "package-version=$PackageVersion" >> $Env:GITHUB_OUTPUT
shell: pwsh
build-native:
name: Native build
needs: [preflight]
runs-on: ${{matrix.runner}}
needs: preflight
strategy:
fail-fast: false
matrix:
os: [ win, osx, linux, ios, android ]
arch: [ x86, x64, arm, arm64 ]
os: [win, osx, linux, ios, android]
arch: [x86, x64, arm, arm64]
include:
- os: win
runner: windows-2022
@ -89,20 +89,20 @@ jobs:
uses: actions/checkout@v4
- name: Configure Android NDK
if: ${{ matrix.os == 'android' }}
uses: Devolutions/actions-public/cargo-android-ndk@v1
if: matrix.os == 'android'
with:
android_api_level: "21"
- name: Configure macOS deployement target
if: ${{ matrix.os == 'osx' }}
shell: pwsh
run: Write-Output "MACOSX_DEPLOYMENT_TARGET=10.10" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
shell: pwsh
- name: Configure iOS deployement target
if: ${{ matrix.os == 'ios' }}
shell: pwsh
run: Write-Output "IPHONEOS_DEPLOYMENT_TARGET=12.1" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
shell: pwsh
- name: Update runner
if: ${{ matrix.os == 'linux' }}
@ -110,12 +110,11 @@ jobs:
- name: Install dependencies for rustls
if: ${{ runner.os == 'Windows' }}
shell: pwsh
run: |
choco install ninja nasm
# We need to add the NASM binary folder to the PATH manually.
# We dont need to do that for ninja.
# We don't need to do that for ninja.
Write-Output "PATH=$Env:PATH;$Env:ProgramFiles\NASM" >> $Env:GITHUB_ENV
# libclang / LLVM is a requirement for AWS LC.
@ -125,6 +124,7 @@ jobs:
# Install Visual Studio Developer PowerShell Module for cmdlets such as Enter-VsDevShell
Install-Module VsDevShell -Force
shell: pwsh
# No pre-generated bindings for Android and iOS.
# https://aws.github.io/aws-lc-rs/platform_support.html#pre-generated-bindings
@ -141,19 +141,18 @@ jobs:
sudo apt-get install gcc-multilib
- name: Setup LLVM
if: ${{ matrix.os == 'linux' }}
uses: Devolutions/actions-public/setup-llvm@v1
if: matrix.os == 'linux'
with:
version: "18.1.8"
- name: Setup CBake
if: ${{ matrix.os == 'linux' }}
uses: Devolutions/actions-public/setup-cbake@v1
if: matrix.os == 'linux'
with:
cargo_env_scripts: true
- name: Build native lib (${{matrix.os}}-${{matrix.arch}})
shell: pwsh
run: |
$DotNetOs = '${{matrix.os}}'
$DotNetArch = '${{matrix.arch}}'
@ -210,6 +209,7 @@ jobs:
$OutputPath = Join-Path "dependencies" "runtimes" $DotNetRid "native"
New-Item -ItemType Directory -Path $OutputPath | Out-Null
Copy-Item $OutputLibrary $(Join-Path $OutputPath $RenamedLibraryName)
shell: pwsh
- name: Upload native components
uses: actions/upload-artifact@v4
@ -219,8 +219,8 @@ jobs:
build-universal:
name: Universal build
needs: [preflight, build-native]
runs-on: ubuntu-22.04
needs: [ preflight, build-native ]
strategy:
fail-fast: false
matrix:
@ -239,7 +239,6 @@ jobs:
path: dependencies/runtimes
- name: Lipo native components
shell: pwsh
run: |
Set-Location "dependencies/runtimes"
# No RID for universal binaries, see: https://github.com/dotnet/runtime/issues/53156
@ -249,9 +248,9 @@ jobs:
$LipoCmd = $(@('lipo', '-create', '-output', (Join-Path -Path $OutputPath -ChildPath "libDevolutionsIronRdp.dylib")) + $Libraries) -Join ' '
Write-Host $LipoCmd
Invoke-Expression $LipoCmd
shell: pwsh
- name: Framework
shell: pwsh
if: ${{ matrix.os == 'ios' }}
run: |
$Version = '${{ needs.preflight.outputs.project-version }}'
@ -269,19 +268,19 @@ jobs:
[xml] $InfoPlistXml = Get-Content (Join-Path "ffi" "dotnet" "Devolutions.IronRdp" "Info.plist")
Select-Xml -xml $InfoPlistXml -XPath "/plist/dict/key[. = 'CFBundleIdentifier']/following-sibling::string[1]" |
%{
%{
$_.Node.InnerXml = "com.devolutions.ironrdp"
}
Select-Xml -xml $InfoPlistXml -XPath "/plist/dict/key[. = 'CFBundleExecutable']/following-sibling::string[1]" |
%{
%{
$_.Node.InnerXml = $BundleName
}
Select-Xml -xml $InfoPlistXml -XPath "/plist/dict/key[. = 'CFBundleVersion']/following-sibling::string[1]" |
%{
%{
$_.Node.InnerXml = $Version
}
Select-Xml -xml $InfoPlistXml -XPath "/plist/dict/key[. = 'CFBundleShortVersionString']/following-sibling::string[1]" |
%{
%{
$_.Node.InnerXml = $ShortVersion
}
@ -294,6 +293,7 @@ jobs:
# .NET XML document inserts two square brackets at the end of the DOCTYPE tag
# It's perfectly valid XML, but we're dealing with plists here and dyld will not be able to read the file
((Get-Content -Path (Join-Path $FrameworkDir "Info.plist") -Raw) -Replace 'PropertyList-1.0.dtd"\[\]', 'PropertyList-1.0.dtd"') | Set-Content -Path (Join-Path $FrameworkDir "Info.plist")
shell: pwsh
- name: Upload native components
uses: actions/upload-artifact@v4
@ -303,8 +303,8 @@ jobs:
build-managed:
name: Managed build
needs: [build-universal]
runs-on: windows-2022
needs: build-universal
steps:
- name: Check out ${{ github.repository }}
@ -317,9 +317,9 @@ jobs:
run: dotnet workload install ios
- name: Prepare dependencies
shell: pwsh
run: |
New-Item -ItemType Directory -Path "dependencies/runtimes" | Out-Null
shell: pwsh
- name: Download native components
uses: actions/download-artifact@v4
@ -327,19 +327,19 @@ jobs:
path: dependencies/runtimes
- name: Rename dependencies
shell: pwsh
run: |
Set-Location "dependencies/runtimes"
$(Get-Item ".\ironrdp-*") | ForEach-Object { Rename-Item $_ $_.Name.Replace("ironrdp-", "") }
Get-ChildItem * -Recurse
shell: pwsh
- name: Build Devolutions.IronRdp (managed)
shell: pwsh
run: |
# net8.0 target packaged as Devolutions.IronRdp
dotnet build .\ffi\dotnet\Devolutions.IronRdp\Devolutions.IronRdp.csproj -c Release
# net8.0-ios target packaged as Devolutions.IronRdp.iOS
# net9.0-ios target packaged as Devolutions.IronRdp.iOS
dotnet build .\ffi\dotnet\Devolutions.IronRdp\Devolutions.IronRdp.csproj -c Release /p:PackageId=Devolutions.IronRdp.iOS
shell: pwsh
- name: Upload managed components
uses: actions/upload-artifact@v4
@ -349,12 +349,12 @@ jobs:
publish:
name: Publish NuGet package
runs-on: ubuntu-latest
environment: nuget-publish
if: needs.preflight.outputs.dry-run == 'false'
needs:
- preflight
- build-managed
if: ${{ needs.preflight.outputs.dry-run == 'false' }}
needs: [preflight, build-managed]
runs-on: ubuntu-latest
permissions:
id-token: write
steps:
- name: Download NuGet package artifact
@ -363,19 +363,24 @@ jobs:
name: ironrdp-nupkg
path: package
- name: NuGet login (OIDC)
uses: NuGet/login@v1
id: nuget-login
with:
user: ${{ secrets.NUGET_BOT_USERNAME }}
- name: Publish to nuget.org
shell: pwsh
run: |
$Files = Get-ChildItem -Recurse package/*.nupkg
foreach ($File in $Files) {
$PushCmd = @(
'dotnet',
'nuget',
'push',
'dotnet',
'nuget',
'push',
"$File",
'--api-key',
'${{ secrets.NUGET_API_KEY }}',
'${{ steps.nuget-login.outputs.NUGET_API_KEY }}',
'--source',
'https://api.nuget.org/v3/index.json',
'--skip-duplicate'
@ -385,19 +390,17 @@ jobs:
$PushCmd = $PushCmd -Join ' '
Invoke-Expression $PushCmd
}
shell: pwsh
notify:
name: Notify failure
runs-on: ubuntu-latest
if: ${{ always() && contains(needs.*.result, 'failure') && github.event_name == 'schedule' }}
needs:
- preflight
- build-native
- build-universal
- build-managed
needs: [preflight, build-native, build-universal, build-managed]
runs-on: ubuntu-latest
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_ARCHITECTURE }}
SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
steps:
- name: Send slack notification
id: slack

View file

@ -14,9 +14,8 @@ jobs:
# Create a PR with the new versions and changelog, preparing the next release.
open-pr:
name: Open release PR
runs-on: ubuntu-latest
environment: cratesio-publish
runs-on: ubuntu-latest
concurrency:
group: release-plz-${{ github.ref }}
cancel-in-progress: false
@ -37,7 +36,6 @@ jobs:
github-token: ${{ secrets.DEVOLUTIONSBOT_WRITE_TOKEN }}
- name: Update fuzz/Cargo.lock
shell: pwsh
if: ${{ steps.release-plz.outputs.did-open-pr == 'true' }}
run: |
$prRaw = '${{ steps.release-plz.outputs.pr }}'
@ -61,12 +59,15 @@ jobs:
Write-Host "Update the release pull request"
git push --force
shell: pwsh
# Release unpublished packages.
release:
name: Release crates
runs-on: ubuntu-latest
environment: cratesio-publish
runs-on: ubuntu-latest
permissions:
id-token: write
steps:
- name: Checkout repository
@ -74,8 +75,12 @@ jobs:
with:
fetch-depth: 512
- name: Authenticate with crates.io
id: auth
uses: rust-lang/crates-io-auth-action@v1
- name: Run release-plz
uses: Devolutions/actions-public/release-plz@v1
with:
command: release
registry-token: ${{ secrets.CRATES_IO_TOKEN }}
registry-token: ${{ steps.auth.outputs.token }}

Cargo.lock (generated, 2088 changes)

File diff suppressed because it is too large

View file

@ -34,12 +34,11 @@ categories = ["network-programming"]
# even for private dependencies.
expect-test = "1"
proptest = "1.4"
rstest = "0.25"
rstest = "0.26"
# Note: we are trying to move away from using these crates.
# They are being kept around for now for legacy compatibility,
# but new usage should be avoided.
lazy_static = "1.4" # Legacy crate; prefer std::sync::LazyLock or LazyCell
num-derive = "0.4"
num-traits = "0.2"
@ -93,6 +92,7 @@ fn_to_numeric_cast_any = "warn"
ptr_cast_constness = "warn"
# == Correctness == #
as_conversions = "warn"
cast_lossless = "warn"
cast_possible_truncation = "warn"
cast_possible_wrap = "warn"
@ -115,6 +115,7 @@ same_name_method = "warn"
string_slice = "warn"
suspicious_xor_used_as_pow = "warn"
unused_result_ok = "warn"
missing_panics_doc = "warn"
# == Style, readability == #
semicolon_outside_block = "warn" # With semicolon-outside-block-ignore-multiline = true
@ -126,6 +127,7 @@ empty_enum_variants_with_brackets = "warn"
deref_by_slicing = "warn"
multiple_inherent_impl = "warn"
map_with_unused_argument_over_ranges = "warn"
partial_pub_fields = "warn"
trait_duplication_in_bounds = "warn"
type_repetition_in_bounds = "warn"
checked_conversions = "warn"
@ -139,8 +141,12 @@ unused_self = "warn"
useless_let_if_seq = "warn"
string_add = "warn"
range_plus_one = "warn"
# TODO: self_named_module_files = "warn"
self_named_module_files = "warn"
# TODO: partial_pub_fields = "warn" (should we enable only in pdu crates?)
redundant_type_annotations = "warn"
unnecessary_self_imports = "warn"
try_err = "warn"
rest_pat_in_fully_bound_structs = "warn"
# == Compile-time / optimization == #
doc_include_without_cfg = "warn"
@ -150,6 +156,7 @@ or_fun_call = "warn"
rc_buffer = "warn"
string_lit_chars_any = "warn"
unnecessary_box_returns = "warn"
large_futures = "warn"
# == Extra-pedantic clippy == #
allow_attributes = "warn"

View file

@ -17,7 +17,7 @@ qoiz = ["ironrdp/qoiz"]
[dependencies]
anyhow = "1.0.99"
async-trait = "0.1.89"
bytesize = "2.1.0"
bytesize = "2.3"
ironrdp = { path = "../crates/ironrdp", features = [
"server",
"pdu",

View file

@ -68,7 +68,7 @@ async fn main() -> Result<(), anyhow::Error> {
let mut updates = DisplayUpdates::new(file, DesktopSize { width, height }, fps);
while let Some(up) = updates.next_update().await? {
if let DisplayUpdate::Bitmap(ref up) = up {
total_raw += up.data.len() as u64;
total_raw += u64::try_from(up.data.len())?;
} else {
eprintln!("Invalid update");
break;
@ -78,7 +78,7 @@ async fn main() -> Result<(), anyhow::Error> {
let Some(frag) = iter.next().await else {
break;
};
let len = frag?.data.len() as u64;
let len = u64::try_from(frag?.data.len())?;
total_enc += len;
}
n_updates += 1;
@ -87,6 +87,7 @@ async fn main() -> Result<(), anyhow::Error> {
}
println!();
#[expect(clippy::as_conversions, reason = "casting u64 to f64")]
let ratio = total_enc as f64 / total_raw as f64;
let percent = 100.0 - ratio * 100.0;
println!("Encoder: {encoder:?}");

View file

@ -1,4 +1,4 @@
msrv = "1.84"
msrv = "1.87"
semicolon-outside-block-ignore-multiline = true
accept-comment-above-statement = true
accept-comment-above-attributes = true

View file

@ -6,6 +6,12 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [[0.7.0](https://github.com/Devolutions/IronRDP/compare/iron-remote-desktop-v0.6.0...iron-remote-desktop-v0.7.0)] - 2025-09-29
### <!-- 4 -->Bug Fixes
- [**breaking**] Changed onClipboardChanged to not consume the input (#992) ([6127e13c83](https://github.com/Devolutions/IronRDP/commit/6127e13c836d06764d483b6b55188fd23a4314a2))
## [[0.6.0](https://github.com/Devolutions/IronRDP/compare/iron-remote-desktop-v0.5.0...iron-remote-desktop-v0.6.0)] - 2025-08-29
### <!-- 1 -->Features

View file

@ -1,6 +1,6 @@
[package]
name = "iron-remote-desktop"
version = "0.6.0"
version = "0.7.0"
readme = "README.md"
description = "Helper crate for building WASM modules compatible with iron-remote-desktop WebComponent"
edition.workspace = true

View file

@ -159,8 +159,8 @@ macro_rules! make_bridge {
}
#[wasm_bindgen(js_name = onClipboardPaste)]
pub async fn on_clipboard_paste(&self, content: ClipboardData) -> Result<(), IronError> {
$crate::Session::on_clipboard_paste(&self.0, content.0)
pub async fn on_clipboard_paste(&self, content: &ClipboardData) -> Result<(), IronError> {
$crate::Session::on_clipboard_paste(&self.0, &content.0)
.await
.map_err(IronError)
}

View file

@ -84,7 +84,7 @@ pub trait Session {
fn on_clipboard_paste(
&self,
content: Self::ClipboardData,
content: &Self::ClipboardData,
) -> impl core::future::Future<Output = Result<(), Self::Error>>;
fn resize(

View file

@ -6,6 +6,17 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [[0.8.0](https://github.com/Devolutions/IronRDP/compare/ironrdp-acceptor-v0.7.0...ironrdp-acceptor-v0.8.0)] - 2025-12-18
### <!-- 4 -->Bug Fixes
- [**breaking**] Use static dispatch for NetworkClient trait ([#1043](https://github.com/Devolutions/IronRDP/issues/1043)) ([bca6d190a8](https://github.com/Devolutions/IronRDP/commit/bca6d190a870708468534d224ff225a658767a9a))
- Rename `AsyncNetworkClient` to `NetworkClient`
- Replace dynamic dispatch (`Option<&mut dyn ...>`) with static dispatch
using generics (`&mut N where N: NetworkClient`)
- Reorder `connect_finalize` parameters for consistency across crates
## [[0.6.0](https://github.com/Devolutions/IronRDP/compare/ironrdp-acceptor-v0.5.0...ironrdp-acceptor-v0.6.0)] - 2025-07-08
### <!-- 1 -->Features

View file

@ -1,6 +1,6 @@
[package]
name = "ironrdp-acceptor"
version = "0.7.0"
version = "0.8.0"
readme = "README.md"
description = "State machines to drive an RDP connection acceptance sequence"
edition.workspace = true
@ -19,8 +19,8 @@ test = false
ironrdp-core = { path = "../ironrdp-core", version = "0.1", features = ["alloc"] } # public
ironrdp-pdu = { path = "../ironrdp-pdu", version = "0.6" } # public
ironrdp-svc = { path = "../ironrdp-svc", version = "0.5" } # public
ironrdp-connector = { path = "../ironrdp-connector", version = "0.7" } # public
ironrdp-async = { path = "../ironrdp-async", version = "0.7" } # public
ironrdp-connector = { path = "../ironrdp-connector", version = "0.8" } # public
ironrdp-async = { path = "../ironrdp-async", version = "0.8" } # public
tracing = { version = "0.1", features = ["log"] }
[lints]

View file

@ -4,9 +4,8 @@ use ironrdp_connector::{
reason_err, ConnectorError, ConnectorErrorExt as _, ConnectorResult, Sequence, State, Written,
};
use ironrdp_core::WriteBuf;
use ironrdp_pdu::mcs;
use ironrdp_pdu::x224::X224;
use ironrdp_pdu::{self as pdu};
use pdu::mcs;
use tracing::debug;
#[derive(Debug)]
@ -57,13 +56,13 @@ impl State for ChannelConnectionState {
}
impl Sequence for ChannelConnectionSequence {
fn next_pdu_hint(&self) -> Option<&dyn pdu::PduHint> {
fn next_pdu_hint(&self) -> Option<&dyn ironrdp_pdu::PduHint> {
match &self.state {
ChannelConnectionState::Consumed => None,
ChannelConnectionState::WaitErectDomainRequest => Some(&pdu::X224_HINT),
ChannelConnectionState::WaitAttachUserRequest => Some(&pdu::X224_HINT),
ChannelConnectionState::WaitErectDomainRequest => Some(&ironrdp_pdu::X224_HINT),
ChannelConnectionState::WaitAttachUserRequest => Some(&ironrdp_pdu::X224_HINT),
ChannelConnectionState::SendAttachUserConfirm => None,
ChannelConnectionState::WaitChannelJoinRequest { .. } => Some(&pdu::X224_HINT),
ChannelConnectionState::WaitChannelJoinRequest { .. } => Some(&ironrdp_pdu::X224_HINT),
ChannelConnectionState::SendChannelJoinConfirm { .. } => None,
ChannelConnectionState::AllJoined => None,
}

View file

@ -123,6 +123,9 @@ impl Acceptor {
}
}
/// # Panics
///
/// Panics if state is not [AcceptorState::SecurityUpgrade].
pub fn mark_security_upgrade_as_done(&mut self) {
assert!(self.reached_security_upgrade().is_some());
self.step(&[], &mut WriteBuf::new()).expect("transition to next state");
@ -133,6 +136,9 @@ impl Acceptor {
matches!(self.state, AcceptorState::Credssp { .. })
}
/// # Panics
///
/// Panics if state is not [AcceptorState::Credssp].
pub fn mark_credssp_as_done(&mut self) {
assert!(self.should_perform_credssp());
let res = self.step(&[], &mut WriteBuf::new()).expect("transition to next state");
@ -707,7 +713,7 @@ impl Sequence for Acceptor {
AcceptorState::Accepted {
channels,
client_capabilities,
input_events: finalization.input_events,
input_events: finalization.into_input_events(),
}
} else {
AcceptorState::ConnectionFinalization {

View file

@ -1,4 +1,4 @@
use ironrdp_async::AsyncNetworkClient;
use ironrdp_async::NetworkClient;
use ironrdp_connector::sspi::credssp::{
CredSspServer, CredentialsProxy, ServerError, ServerMode, ServerState, TsRequest,
};
@ -71,7 +71,7 @@ impl CredentialsProxy for CredentialsProxyImpl<'_> {
pub(crate) async fn resolve_generator(
generator: &mut CredsspProcessGenerator<'_>,
network_client: &mut dyn AsyncNetworkClient,
network_client: &mut impl NetworkClient,
) -> Result<ServerState, ServerError> {
let mut state = generator.start();

View file

@ -1,8 +1,7 @@
use ironrdp_connector::{ConnectorError, ConnectorErrorExt as _, ConnectorResult, Sequence, State, Written};
use ironrdp_core::WriteBuf;
use ironrdp_pdu::rdp;
use ironrdp_pdu::x224::X224;
use ironrdp_pdu::{self as pdu};
use pdu::rdp;
use tracing::debug;
use crate::util::{self, wrap_share_data};
@ -13,7 +12,7 @@ pub struct FinalizationSequence {
user_channel_id: u16,
io_channel_id: u16,
pub input_events: Vec<Vec<u8>>,
input_events: Vec<Vec<u8>>,
}
#[derive(Default, Debug)]
@ -60,13 +59,13 @@ impl State for FinalizationState {
}
impl Sequence for FinalizationSequence {
fn next_pdu_hint(&self) -> Option<&dyn pdu::PduHint> {
fn next_pdu_hint(&self) -> Option<&dyn ironrdp_pdu::PduHint> {
match &self.state {
FinalizationState::Consumed => None,
FinalizationState::WaitSynchronize => Some(&pdu::X224Hint),
FinalizationState::WaitControlCooperate => Some(&pdu::X224Hint),
FinalizationState::WaitRequestControl => Some(&pdu::X224Hint),
FinalizationState::WaitFontList => Some(&pdu::RdpHint),
FinalizationState::WaitSynchronize => Some(&ironrdp_pdu::X224Hint),
FinalizationState::WaitControlCooperate => Some(&ironrdp_pdu::X224Hint),
FinalizationState::WaitRequestControl => Some(&ironrdp_pdu::X224Hint),
FinalizationState::WaitFontList => Some(&ironrdp_pdu::RdpHint),
FinalizationState::SendSynchronizeConfirm => None,
FinalizationState::SendControlCooperateConfirm => None,
FinalizationState::SendGrantedControlConfirm => None,
@ -191,6 +190,10 @@ impl FinalizationSequence {
}
}
pub fn into_input_events(self) -> Vec<Vec<u8>> {
self.input_events
}
pub fn is_done(&self) -> bool {
self.state.is_terminal()
}
@ -221,7 +224,7 @@ fn create_font_map() -> rdp::headers::ShareDataPdu {
}
fn decode_share_control(input: &[u8]) -> ConnectorResult<rdp::headers::ShareControlHeader> {
let data_request = ironrdp_core::decode::<X224<pdu::mcs::SendDataRequest<'_>>>(input)
let data_request = ironrdp_core::decode::<X224<ironrdp_pdu::mcs::SendDataRequest<'_>>>(input)
.map_err(ConnectorError::decode)
.map(|p| p.0)?;
let share_control = ironrdp_core::decode::<rdp::headers::ShareControlHeader>(data_request.user_data.as_ref())
@ -230,7 +233,7 @@ fn decode_share_control(input: &[u8]) -> ConnectorResult<rdp::headers::ShareCont
}
fn decode_font_list(input: &[u8]) -> Result<rdp::finalization_messages::FontPdu, ()> {
use pdu::rdp::headers::{ShareControlPdu, ShareDataPdu};
use ironrdp_pdu::rdp::headers::{ShareControlPdu, ShareDataPdu};
let share_control = decode_share_control(input).map_err(|_| ())?;


@ -1,7 +1,7 @@
#![cfg_attr(doc, doc = include_str!("../README.md"))]
#![doc(html_logo_url = "https://cdnweb.devolutions.net/images/projects/devolutions/logos/devolutions-icon-shadow.svg")]
use ironrdp_async::{single_sequence_step, AsyncNetworkClient, Framed, FramedRead, FramedWrite, StreamWrapper};
use ironrdp_async::{single_sequence_step, Framed, FramedRead, FramedWrite, NetworkClient, StreamWrapper};
use ironrdp_connector::sspi::credssp::EarlyUserAuthResult;
use ironrdp_connector::sspi::{AuthIdentity, KerberosServerConfig, Username};
use ironrdp_connector::{custom_err, general_err, ConnectorResult, ServerName};
@ -51,16 +51,17 @@ where
}
}
pub async fn accept_credssp<S>(
pub async fn accept_credssp<S, N>(
framed: &mut Framed<S>,
acceptor: &mut Acceptor,
network_client: &mut N,
client_computer_name: ServerName,
public_key: Vec<u8>,
kerberos_config: Option<KerberosServerConfig>,
network_client: Option<&mut dyn AsyncNetworkClient>,
) -> ConnectorResult<()>
where
S: FramedRead + FramedWrite,
N: NetworkClient,
{
let mut buf = WriteBuf::new();
@ -68,11 +69,11 @@ where
perform_credssp_step(
framed,
acceptor,
network_client,
&mut buf,
client_computer_name,
public_key,
kerberos_config,
network_client,
)
.await
} else {
@ -98,34 +99,73 @@ where
}
#[instrument(level = "trace", skip_all, ret)]
async fn perform_credssp_step<S>(
async fn perform_credssp_step<S, N>(
framed: &mut Framed<S>,
acceptor: &mut Acceptor,
network_client: &mut N,
buf: &mut WriteBuf,
client_computer_name: ServerName,
public_key: Vec<u8>,
kerberos_config: Option<KerberosServerConfig>,
network_client: Option<&mut dyn AsyncNetworkClient>,
) -> ConnectorResult<()>
where
S: FramedRead + FramedWrite,
N: NetworkClient,
{
assert!(acceptor.should_perform_credssp());
let AcceptorState::Credssp { protocol, .. } = acceptor.state else {
unreachable!()
};
async fn credssp_loop<S>(
let result = credssp_loop(
framed,
acceptor,
network_client,
buf,
client_computer_name,
public_key,
kerberos_config,
)
.await;
if protocol.intersects(nego::SecurityProtocol::HYBRID_EX) {
trace!(?result, "HYBRID_EX");
let result = if result.is_ok() {
EarlyUserAuthResult::Success
} else {
EarlyUserAuthResult::AccessDenied
};
buf.clear();
result
.to_buffer(&mut *buf)
.map_err(|e| ironrdp_connector::custom_err!("to_buffer", e))?;
let response = &buf[..result.buffer_len()];
framed
.write_all(response)
.await
.map_err(|e| ironrdp_connector::custom_err!("write all", e))?;
}
result?;
acceptor.mark_credssp_as_done();
return Ok(());
async fn credssp_loop<S, N>(
framed: &mut Framed<S>,
acceptor: &mut Acceptor,
network_client: &mut N,
buf: &mut WriteBuf,
client_computer_name: ServerName,
public_key: Vec<u8>,
kerberos_config: Option<KerberosServerConfig>,
mut network_client: Option<&mut dyn AsyncNetworkClient>,
) -> ConnectorResult<()>
where
S: FramedRead + FramedWrite,
N: NetworkClient,
{
let creds = acceptor
.creds
@ -164,12 +204,7 @@ where
let result = {
let mut generator = sequence.process_ts_request(ts_request);
if let Some(network_client_ref) = network_client.as_deref_mut() {
resolve_generator(&mut generator, network_client_ref).await
} else {
generator.resolve_to_result()
}
resolve_generator(&mut generator, network_client).await
}; // drop generator
buf.clear();
@ -184,43 +219,7 @@ where
.map_err(|e| ironrdp_connector::custom_err!("write all", e))?;
}
}
Ok(())
}
let result = credssp_loop(
framed,
acceptor,
buf,
client_computer_name,
public_key,
kerberos_config,
network_client,
)
.await;
if protocol.intersects(nego::SecurityProtocol::HYBRID_EX) {
trace!(?result, "HYBRID_EX");
let result = if result.is_ok() {
EarlyUserAuthResult::Success
} else {
EarlyUserAuthResult::AccessDenied
};
buf.clear();
result
.to_buffer(&mut *buf)
.map_err(|e| ironrdp_connector::custom_err!("to_buffer", e))?;
let response = &buf[..result.buffer_len()];
framed
.write_all(response)
.await
.map_err(|e| ironrdp_connector::custom_err!("write all", e))?;
}
result?;
acceptor.mark_credssp_as_done();
Ok(())
}


@ -6,14 +6,23 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [[0.8.0](https://github.com/Devolutions/IronRDP/compare/ironrdp-async-v0.7.0...ironrdp-async-v0.8.0)] - 2025-12-18
### <!-- 4 -->Bug Fixes
- [**breaking**] Use static dispatch for NetworkClient trait ([#1043](https://github.com/Devolutions/IronRDP/issues/1043)) ([bca6d190a8](https://github.com/Devolutions/IronRDP/commit/bca6d190a870708468534d224ff225a658767a9a))
- Rename `AsyncNetworkClient` to `NetworkClient`
- Replace dynamic dispatch (`Option<&mut dyn ...>`) with static dispatch
using generics (`&mut N where N: NetworkClient`)
- Reorder `connect_finalize` parameters for consistency across crates
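The dispatch change described in this entry can be sketched as follows. This is a minimal illustration, not the real `ironrdp` API: `NetworkRequest`, `ConnectorResult`, and `EchoClient` are simplified stand-ins, and `block_on` is a toy executor for futures that never suspend.

```rust
use core::future::Future;
use core::pin::pin;
use core::task::{Context, Poll, Waker};

// Simplified stand-ins for the real ironrdp types (illustration only).
struct NetworkRequest {
    data: Vec<u8>,
}
type ConnectorResult<T> = Result<T, String>;

// New style: static dispatch. The old `AsyncNetworkClient` returned
// `Pin<Box<dyn Future<...>>>` and was passed as `Option<&mut dyn ...>`;
// the renamed `NetworkClient` returns `impl Future` (RPITIT, Rust 1.75+).
trait NetworkClient {
    fn send(&mut self, req: &NetworkRequest) -> impl Future<Output = ConnectorResult<Vec<u8>>>;
}

struct EchoClient;

impl NetworkClient for EchoClient {
    fn send(&mut self, req: &NetworkRequest) -> impl Future<Output = ConnectorResult<Vec<u8>>> {
        let echoed = req.data.clone();
        async move { Ok(echoed) }
    }
}

// Callers are now generic (`&mut impl NetworkClient`) instead of holding
// an optional trait object.
async fn roundtrip(client: &mut impl NetworkClient) -> ConnectorResult<Vec<u8>> {
    client.send(&NetworkRequest { data: vec![1, 2, 3] }).await
}

// Toy executor: polls with a no-op waker; fine for futures that are
// immediately ready, like the one above (requires Rust 1.85+ for
// `Waker::noop`).
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut cx = Context::from_waker(Waker::noop());
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}
```

Static dispatch lets the compiler monomorphize `send` per client type and drops the per-call boxing the old signature required.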
## [[0.3.2](https://github.com/Devolutions/IronRDP/compare/ironrdp-async-v0.3.1...ironrdp-async-v0.3.2)] - 2025-03-12
### <!-- 7 -->Build
- Bump ironrdp-pdu
## [[0.3.1](https://github.com/Devolutions/IronRDP/compare/ironrdp-async-v0.3.0...ironrdp-async-v0.3.1)] - 2025-03-12
### <!-- 7 -->Build


@ -1,6 +1,6 @@
[package]
name = "ironrdp-async"
version = "0.7.0"
version = "0.8.0"
readme = "README.md"
description = "Provides `Future`s wrapping the IronRDP state machines conveniently"
edition.workspace = true
@ -16,7 +16,7 @@ doctest = false
test = false
[dependencies]
ironrdp-connector = { path = "../ironrdp-connector", version = "0.7" } # public
ironrdp-connector = { path = "../ironrdp-connector", version = "0.8" } # public
ironrdp-core = { path = "../ironrdp-core", version = "0.1", features = ["alloc"] } # public
ironrdp-pdu = { path = "../ironrdp-pdu", version = "0.6" } # public
tracing = { version = "0.1", features = ["log"] }


@ -2,14 +2,14 @@ use ironrdp_connector::credssp::{CredsspProcessGenerator, CredsspSequence, Kerbe
use ironrdp_connector::sspi::credssp::ClientState;
use ironrdp_connector::sspi::generator::GeneratorState;
use ironrdp_connector::{
custom_err, general_err, ClientConnector, ClientConnectorState, ConnectionResult, ConnectorError, ConnectorResult,
ServerName, State as _,
general_err, ClientConnector, ClientConnectorState, ConnectionResult, ConnectorError, ConnectorResult, ServerName,
State as _,
};
use ironrdp_core::WriteBuf;
use tracing::{debug, info, instrument, trace};
use crate::framed::{Framed, FramedRead, FramedWrite};
use crate::{single_sequence_step, AsyncNetworkClient};
use crate::{single_sequence_step, NetworkClient};
#[non_exhaustive]
pub struct ShouldUpgrade;
@ -30,6 +30,9 @@ where
Ok(ShouldUpgrade)
}
/// # Panics
///
/// Panics if connector state is not [ClientConnectorState::EnhancedSecurityUpgrade].
pub fn skip_connect_begin(connector: &mut ClientConnector) -> ShouldUpgrade {
assert!(connector.should_perform_security_upgrade());
ShouldUpgrade
@ -46,28 +49,29 @@ pub fn mark_as_upgraded(_: ShouldUpgrade, connector: &mut ClientConnector) -> Up
}
#[instrument(skip_all)]
pub async fn connect_finalize<S>(
pub async fn connect_finalize<S, N>(
_: Upgraded,
framed: &mut Framed<S>,
mut connector: ClientConnector,
framed: &mut Framed<S>,
network_client: &mut N,
server_name: ServerName,
server_public_key: Vec<u8>,
network_client: Option<&mut dyn AsyncNetworkClient>,
kerberos_config: Option<KerberosConfig>,
) -> ConnectorResult<ConnectionResult>
where
S: FramedRead + FramedWrite,
N: NetworkClient,
{
let mut buf = WriteBuf::new();
if connector.should_perform_credssp() {
perform_credssp_step(
framed,
&mut connector,
framed,
network_client,
&mut buf,
server_name,
server_public_key,
network_client,
kerberos_config,
)
.await?;
@ -88,7 +92,7 @@ where
async fn resolve_generator(
generator: &mut CredsspProcessGenerator<'_>,
network_client: &mut dyn AsyncNetworkClient,
network_client: &mut impl NetworkClient,
) -> ConnectorResult<ClientState> {
let mut state = generator.start();
@ -107,17 +111,18 @@ async fn resolve_generator(
}
#[instrument(level = "trace", skip_all)]
async fn perform_credssp_step<S>(
framed: &mut Framed<S>,
async fn perform_credssp_step<S, N>(
connector: &mut ClientConnector,
framed: &mut Framed<S>,
network_client: &mut N,
buf: &mut WriteBuf,
server_name: ServerName,
server_public_key: Vec<u8>,
mut network_client: Option<&mut dyn AsyncNetworkClient>,
kerberos_config: Option<KerberosConfig>,
) -> ConnectorResult<()>
where
S: FramedRead + FramedWrite,
N: NetworkClient,
{
assert!(connector.should_perform_credssp());
@ -138,15 +143,8 @@ where
loop {
let client_state = {
let mut generator = sequence.process_ts_request(ts_request);
if let Some(network_client_ref) = network_client.as_deref_mut() {
trace!("resolving network");
resolve_generator(&mut generator, network_client_ref).await?
} else {
generator
.resolve_to_result()
.map_err(|e| custom_err!("resolve without network client", e))?
}
trace!("resolving network");
resolve_generator(&mut generator, network_client).await?
}; // drop generator
buf.clear();


@ -115,6 +115,7 @@ where
if self.buf.len() >= length {
return Ok(self.buf.split_to(length));
} else {
#[expect(clippy::missing_panics_doc, reason = "unreachable panic (checked integer underflow)")]
self.buf
.reserve(length.checked_sub(self.buf.len()).expect("length > self.buf.len()"));
}
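The `checked_sub(...).expect(...)` above replaces a raw subtraction when growing the read buffer; the panic is unreachable because the branch only runs when the buffer is shorter than the requested length. A standalone sketch of the same pattern (hypothetical helper, not the IronRDP API):

```rust
// Grow `buf`'s capacity so it can hold at least `length` bytes in total.
// The subtraction cannot underflow: it only happens when buf.len() < length.
fn ensure_capacity(buf: &mut Vec<u8>, length: usize) {
    if buf.len() < length {
        buf.reserve(length.checked_sub(buf.len()).expect("length > buf.len()"));
    }
}
```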


@ -1,15 +1,14 @@
#![cfg_attr(doc, doc = include_str!("../README.md"))]
#![doc(html_logo_url = "https://cdnweb.devolutions.net/images/projects/devolutions/logos/devolutions-icon-shadow.svg")]
use core::future::Future;
pub use bytes;
mod connector;
mod framed;
mod session;
use core::future::Future;
use core::pin::Pin;
use ironrdp_connector::sspi::generator::NetworkRequest;
use ironrdp_connector::ConnectorResult;
@ -17,9 +16,6 @@ pub use self::connector::*;
pub use self::framed::*;
// pub use self::session::*;
pub trait AsyncNetworkClient {
fn send<'a>(
&'a mut self,
network_request: &'a NetworkRequest,
) -> Pin<Box<dyn Future<Output = ConnectorResult<Vec<u8>>> + 'a>>;
pub trait NetworkClient {
fn send(&mut self, network_request: &NetworkRequest) -> impl Future<Output = ConnectorResult<Vec<u8>>>;
}


@ -6,7 +6,7 @@ edition.workspace = true
publish = false
[dev-dependencies]
criterion = "0.7"
criterion = "0.8"
ironrdp-graphics.path = "../ironrdp-graphics"
ironrdp-pdu.path = "../ironrdp-pdu"
ironrdp-server = { path = "../ironrdp-server", features = ["__bench"] }


@ -1,3 +1,5 @@
#![expect(clippy::missing_panics_doc, reason = "panics in benches are allowed")]
use core::num::{NonZeroU16, NonZeroUsize};
use criterion::{criterion_group, criterion_main, Criterion};


@ -6,6 +6,17 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [[0.8.0](https://github.com/Devolutions/IronRDP/compare/ironrdp-blocking-v0.7.0...ironrdp-blocking-v0.8.0)] - 2025-12-18
### <!-- 4 -->Bug Fixes
- [**breaking**] Use static dispatch for NetworkClient trait ([#1043](https://github.com/Devolutions/IronRDP/issues/1043)) ([bca6d190a8](https://github.com/Devolutions/IronRDP/commit/bca6d190a870708468534d224ff225a658767a9a))
- Rename `AsyncNetworkClient` to `NetworkClient`
- Replace dynamic dispatch (`Option<&mut dyn ...>`) with static dispatch
using generics (`&mut N where N: NetworkClient`)
- Reorder `connect_finalize` parameters for consistency across crates
## [[0.4.0](https://github.com/Devolutions/IronRDP/compare/ironrdp-blocking-v0.3.1...ironrdp-blocking-v0.4.0)] - 2025-03-12
### <!-- 7 -->Build
@ -13,7 +24,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Bump ironrdp-pdu
## [[0.3.1](https://github.com/Devolutions/IronRDP/compare/ironrdp-blocking-v0.3.0...ironrdp-blocking-v0.3.1)] - 2025-03-12
### <!-- 7 -->Build
@ -31,7 +41,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Use CDN URLs instead of the blob storage URLs for Devolutions logo (#631) ([dd249909a8](https://github.com/Devolutions/IronRDP/commit/dd249909a894004d4f728d30b3a4aa77a0f8193b))
## [[0.2.1](https://github.com/Devolutions/IronRDP/compare/ironrdp-blocking-v0.2.0...ironrdp-blocking-v0.2.1)] - 2024-12-14
### Other


@ -1,6 +1,6 @@
[package]
name = "ironrdp-blocking"
version = "0.7.0"
version = "0.8.0"
readme = "README.md"
description = "Blocking I/O abstraction wrapping the IronRDP state machines conveniently"
edition.workspace = true
@ -16,7 +16,7 @@ doctest = false
test = false
[dependencies]
ironrdp-connector = { path = "../ironrdp-connector", version = "0.7" } # public
ironrdp-connector = { path = "../ironrdp-connector", version = "0.8" } # public
ironrdp-core = { path = "../ironrdp-core", version = "0.1", features = ["alloc"] } # public
ironrdp-pdu = { path = "../ironrdp-pdu", version = "0.6" } # public
tracing = { version = "0.1", features = ["log"] }


@ -32,6 +32,9 @@ where
Ok(ShouldUpgrade)
}
/// # Panics
///
/// Panics if connector state is not [ClientConnectorState::EnhancedSecurityUpgrade].
pub fn skip_connect_begin(connector: &mut ClientConnector) -> ShouldUpgrade {
assert!(connector.should_perform_security_upgrade());
ShouldUpgrade
@ -50,11 +53,11 @@ pub fn mark_as_upgraded(_: ShouldUpgrade, connector: &mut ClientConnector) -> Up
#[instrument(skip_all)]
pub fn connect_finalize<S>(
_: Upgraded,
framed: &mut Framed<S>,
mut connector: ClientConnector,
framed: &mut Framed<S>,
network_client: &mut impl NetworkClient,
server_name: ServerName,
server_public_key: Vec<u8>,
network_client: &mut impl NetworkClient,
kerberos_config: Option<KerberosConfig>,
) -> ConnectorResult<ConnectionResult>
where
@ -66,12 +69,12 @@ where
if connector.should_perform_credssp() {
perform_credssp_step(
framed,
&mut connector,
framed,
network_client,
&mut buf,
server_name,
server_public_key,
network_client,
kerberos_config,
)?;
}
@ -115,12 +118,12 @@ fn resolve_generator(
#[instrument(level = "trace", skip_all)]
fn perform_credssp_step<S>(
framed: &mut Framed<S>,
connector: &mut ClientConnector,
framed: &mut Framed<S>,
network_client: &mut impl NetworkClient,
buf: &mut WriteBuf,
server_name: ServerName,
server_public_key: Vec<u8>,
network_client: &mut impl NetworkClient,
kerberos_config: Option<KerberosConfig>,
) -> ConnectorResult<()>
where


@ -51,6 +51,7 @@ where
if self.buf.len() >= length {
return Ok(self.buf.split_to(length));
} else {
#[expect(clippy::missing_panics_doc, reason = "unreachable panic (checked underflow)")]
self.buf
.reserve(length.checked_sub(self.buf.len()).expect("length > self.buf.len()"));
}


@ -32,7 +32,7 @@ qoiz = ["ironrdp/qoiz"]
[dependencies]
# Protocols
ironrdp = { path = "../ironrdp", version = "0.13", features = [
ironrdp = { path = "../ironrdp", version = "0.14", features = [
"session",
"input",
"graphics",
@ -45,11 +45,11 @@ ironrdp = { path = "../ironrdp", version = "0.13", features = [
"connector",
] }
ironrdp-core = { path = "../ironrdp-core", version = "0.1", features = ["alloc"] }
ironrdp-cliprdr-native = { path = "../ironrdp-cliprdr-native", version = "0.4" }
ironrdp-cliprdr-native = { path = "../ironrdp-cliprdr-native", version = "0.5" }
ironrdp-rdpsnd-native = { path = "../ironrdp-rdpsnd-native", version = "0.4" }
ironrdp-tls = { path = "../ironrdp-tls", version = "0.1" }
ironrdp-tls = { path = "../ironrdp-tls", version = "0.2" }
ironrdp-mstsgu = { path = "../ironrdp-mstsgu" }
ironrdp-tokio = { path = "../ironrdp-tokio", version = "0.7", features = ["reqwest"] }
ironrdp-tokio = { path = "../ironrdp-tokio", version = "0.8", features = ["reqwest"] }
ironrdp-rdcleanpath.path = "../ironrdp-rdcleanpath"
ironrdp-dvc-pipe-proxy.path = "../ironrdp-dvc-pipe-proxy"
ironrdp-propertyset.path = "../ironrdp-propertyset"
@ -72,7 +72,7 @@ tracing-subscriber = { version = "0.3", features = ["env-filter"] }
# Async, futures
tokio = { version = "1", features = ["full"] }
tokio-util = { version = "0.7" }
tokio-tungstenite = "0.27"
tokio-tungstenite = "0.28"
transport = { git = "https://github.com/Devolutions/devolutions-gateway", rev = "06e91dfe82751a6502eaf74b6a99663f06f0236d" }
futures-util = { version = "0.3", features = ["sink"] }
@ -83,12 +83,12 @@ smallvec = "1.15"
tap = "1"
semver = "1"
raw-window-handle = "0.6"
uuid = { version = "1.18" }
uuid = { version = "1.19" }
x509-cert = { version = "0.2", default-features = false, features = ["std"] }
url = "2"
[target.'cfg(windows)'.dependencies]
windows = { version = "0.61", features = ["Win32_Foundation"] }
windows = { version = "0.62", features = ["Win32_Foundation"] }
[lints]
workspace = true


@ -66,6 +66,7 @@ impl App {
let Some((window, _)) = self.window.as_mut() else {
return;
};
#[expect(clippy::as_conversions, reason = "casting f64 to u32")]
let scale_factor = (window.scale_factor() * 100.0) as u32;
let width = u16::try_from(size.width).expect("reasonable width");
@ -222,8 +223,10 @@ impl ApplicationHandler<RdpOutputEvent> for App {
}
WindowEvent::CursorMoved { position, .. } => {
let win_size = window.inner_size();
let x = (position.x / win_size.width as f64 * self.buffer_size.0 as f64) as u16;
let y = (position.y / win_size.height as f64 * self.buffer_size.1 as f64) as u16;
#[expect(clippy::as_conversions, reason = "casting f64 to u16")]
let x = (position.x / f64::from(win_size.width) * f64::from(self.buffer_size.0)) as u16;
#[expect(clippy::as_conversions, reason = "casting f64 to u16")]
let y = (position.y / f64::from(win_size.height) * f64::from(self.buffer_size.1)) as u16;
let operation = ironrdp::input::Operation::MouseMove(ironrdp::input::MousePosition { x, y });
let input_events = self.input_database.apply(core::iter::once(operation));
@ -239,6 +242,7 @@ impl ApplicationHandler<RdpOutputEvent> for App {
operations.push(ironrdp::input::Operation::WheelRotations(
ironrdp::input::WheelRotations {
is_vertical: false,
#[expect(clippy::as_conversions, reason = "casting f32 to i16")]
rotation_units: (delta_x * 100.) as i16,
},
));
@ -248,6 +252,7 @@ impl ApplicationHandler<RdpOutputEvent> for App {
operations.push(ironrdp::input::Operation::WheelRotations(
ironrdp::input::WheelRotations {
is_vertical: true,
#[expect(clippy::as_conversions, reason = "casting f32 to i16")]
rotation_units: (delta_y * 100.) as i16,
},
));
@ -258,6 +263,7 @@ impl ApplicationHandler<RdpOutputEvent> for App {
operations.push(ironrdp::input::Operation::WheelRotations(
ironrdp::input::WheelRotations {
is_vertical: false,
#[expect(clippy::as_conversions, reason = "casting f64 to i16")]
rotation_units: delta.x as i16,
},
));
@ -267,6 +273,7 @@ impl ApplicationHandler<RdpOutputEvent> for App {
operations.push(ironrdp::input::Operation::WheelRotations(
ironrdp::input::WheelRotations {
is_vertical: true,
#[expect(clippy::as_conversions, reason = "casting f64 to i16")]
rotation_units: delta.y as i16,
},
));


@ -229,22 +229,24 @@ async fn connect(
// Ensure there is no leftover
let (initial_stream, leftover_bytes) = framed.into_inner();
let (upgraded_stream, server_public_key) = ironrdp_tls::upgrade(initial_stream, config.destination.name())
let (upgraded_stream, tls_cert) = ironrdp_tls::upgrade(initial_stream, config.destination.name())
.await
.map_err(|e| connector::custom_err!("TLS upgrade", e))?;
let upgraded = ironrdp_tokio::mark_as_upgraded(should_upgrade, &mut connector);
let erased_stream = Box::new(upgraded_stream) as Box<dyn AsyncReadWrite + Unpin + Send + Sync>;
let erased_stream: Box<dyn AsyncReadWrite + Unpin + Send + Sync> = Box::new(upgraded_stream);
let mut upgraded_framed = ironrdp_tokio::TokioFramed::new_with_leftover(erased_stream, leftover_bytes);
let server_public_key = ironrdp_tls::extract_tls_server_public_key(&tls_cert)
.ok_or_else(|| connector::general_err!("unable to extract tls server public key"))?;
let connection_result = ironrdp_tokio::connect_finalize(
upgraded,
&mut upgraded_framed,
connector,
&mut upgraded_framed,
&mut ReqwestNetworkClient::new(),
(&config.destination).into(),
server_public_key,
Some(&mut ReqwestNetworkClient::new()),
server_public_key.to_owned(),
None,
)
.await?;
@ -326,17 +328,17 @@ async fn connect_ws(
let connection_result = ironrdp_tokio::connect_finalize(
upgraded,
&mut framed,
connector,
&mut framed,
&mut ReqwestNetworkClient::new(),
(&config.destination).into(),
server_public_key,
Some(&mut ReqwestNetworkClient::new()),
None,
)
.await?;
let (ws, leftover_bytes) = framed.into_inner();
let erased_stream = Box::new(ws) as Box<dyn AsyncReadWrite + Unpin + Send + Sync>;
let erased_stream: Box<dyn AsyncReadWrite + Unpin + Send + Sync> = Box::new(ws);
let upgraded_framed = ironrdp_tokio::TokioFramed::new_with_leftover(erased_stream, leftover_bytes);
Ok((connection_result, upgraded_framed))
@ -660,7 +662,7 @@ async fn active_session(
desktop_size,
enable_server_pointer,
pointer_software_rendering,
} = connection_activation.state
} = connection_activation.connection_activation_state()
{
debug!(?desktop_size, "Deactivation-Reactivation Sequence completed");
// Update image size with the new desktop size.


@ -6,6 +6,22 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [[0.5.0](https://github.com/Devolutions/IronRDP/compare/ironrdp-cliprdr-native-v0.4.0...ironrdp-cliprdr-native-v0.5.0)] - 2025-12-18
### <!-- 4 -->Bug Fixes
- Prevent window class registration error on multiple sessions ([#1047](https://github.com/Devolutions/IronRDP/issues/1047)) ([a2af587e60](https://github.com/Devolutions/IronRDP/commit/a2af587e60e869f0235703e21772d1fc6a7dadcd))
When starting a second clipboard session, `RegisterClassA` would fail
with `ERROR_CLASS_ALREADY_EXISTS` because window classes are global to
the process. Now checks if the class is already registered before
attempting registration, allowing multiple WinClipboard instances to
coexist.
### <!-- 7 -->Build
- Bump windows from 0.61.3 to 0.62.1 ([#1010](https://github.com/Devolutions/IronRDP/issues/1010)) ([79e71c4f90](https://github.com/Devolutions/IronRDP/commit/79e71c4f90ea68b14fe45241c1cf3953027b22a2))
## [[0.4.0](https://github.com/Devolutions/IronRDP/compare/ironrdp-cliprdr-native-v0.3.0...ironrdp-cliprdr-native-v0.4.0)] - 2025-08-29
### <!-- 4 -->Bug Fixes


@ -1,6 +1,6 @@
[package]
name = "ironrdp-cliprdr-native"
version = "0.4.0"
version = "0.5.0"
readme = "README.md"
description = "Native CLIPRDR static channel backend implementations for IronRDP"
edition.workspace = true
@ -16,12 +16,12 @@ doctest = false
test = false
[dependencies]
ironrdp-cliprdr = { path = "../ironrdp-cliprdr", version = "0.4" } # public
ironrdp-cliprdr = { path = "../ironrdp-cliprdr", version = "0.5" } # public
ironrdp-core = { path = "../ironrdp-core", version = "0.1" }
tracing = { version = "0.1", features = ["log"] }
[target.'cfg(windows)'.dependencies]
windows = { version = "0.61", features = [
windows = { version = "0.62", features = [
"Win32_Foundation",
"Win32_Graphics_Gdi",
"Win32_System_DataExchange",


@ -27,7 +27,7 @@ impl<'a> ClipboardDataRef<'a> {
};
// SAFETY: It is safe to call `GlobalLock` on the valid handle.
let data = unsafe { GlobalLock(handle) } as *const u8;
let data = unsafe { GlobalLock(handle) }.cast::<u8>().cast_const();
if data.is_null() {
// Can't lock data handle, handle is not valid anymore (e.g. clipboard has changed)


@ -1,3 +1,4 @@
use core::ptr::with_exposed_provenance_mut;
use core::time::Duration;
use std::collections::HashSet;
use std::sync::mpsc;
@ -320,17 +321,19 @@ pub(crate) unsafe extern "system" fn clipboard_subproc(
// SAFETY: `data` is a valid pointer, returned by `Box::into_raw`, transferred to OS earlier
// via `SetWindowSubclass` call.
let _ = unsafe { Box::from_raw(data as *mut WinClipboardImpl) };
let _ = unsafe { Box::from_raw(with_exposed_provenance_mut::<WinClipboardImpl>(data)) };
return LRESULT(0);
}
// SAFETY: `data` is a valid pointer, returned by `Box::into_raw`, transferred to OS earlier
// via `SetWindowSubclass` call.
let ctx = unsafe { &mut *(data as *mut WinClipboardImpl) };
let ctx = unsafe { &mut *(with_exposed_provenance_mut::<WinClipboardImpl>(data)) };
match msg {
// We need to keep track of window state to distinguish between local and remote copy
WM_ACTIVATE | WM_ACTIVATEAPP => ctx.window_is_active = wparam.0 != WA_INACTIVE as usize, // `as` conversion is fine for constants
WM_ACTIVATE | WM_ACTIVATEAPP => {
ctx.window_is_active = wparam.0 != usize::try_from(WA_INACTIVE).expect("WA_INACTIVE fits into usize")
}
// Sent by the OS when OS clipboard content is changed
WM_CLIPBOARDUPDATE => {
// SAFETY: `GetClipboardOwner` is always safe to call.
@ -347,8 +350,9 @@ pub(crate) unsafe extern "system" fn clipboard_subproc(
}
// Sent by the OS when delay-rendered data is requested for rendering.
WM_RENDERFORMAT => {
#[expect(clippy::cast_possible_truncation)] // should never truncate in practice
ctx.handle_event(BackendEvent::RenderFormat(ClipboardFormatId::new(wparam.0 as u32)));
ctx.handle_event(BackendEvent::RenderFormat(ClipboardFormatId::new(
u32::try_from(wparam.0).expect("should never truncate in practice"),
)));
}
// Sent by the OS when all delay-rendered data is requested for rendering.
WM_RENDERALLFORMATS => {
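The switch to `with_exposed_provenance_mut` in this file is Rust's strict-provenance way (stable since Rust 1.84) to round-trip an owning pointer through the `usize` userdata slot that `SetWindowSubclass` hands back to the callback. A minimal platform-independent sketch, with a hypothetical `Ctx` standing in for `WinClipboardImpl`:

```rust
use core::ptr::with_exposed_provenance_mut;

// Hypothetical context type standing in for `WinClipboardImpl`.
struct Ctx {
    counter: u32,
}

// Hand ownership to the OS as an integer: `expose_provenance` marks the
// address as one that may legitimately be turned back into a pointer later
// (unlike a plain `as usize` cast under strict provenance).
fn to_userdata(ctx: Box<Ctx>) -> usize {
    Box::into_raw(ctx).expose_provenance()
}

/// Recover ownership on the callback side.
///
/// # Safety
///
/// `data` must be a value previously returned by [`to_userdata`] and must
/// be consumed at most once.
unsafe fn from_userdata(data: usize) -> Box<Ctx> {
    // SAFETY: upheld by this function's contract.
    unsafe { Box::from_raw(with_exposed_provenance_mut::<Ctx>(data)) }
}
```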


@ -19,7 +19,8 @@ use windows::Win32::System::DataExchange::{AddClipboardFormatListener, RemoveCli
use windows::Win32::System::LibraryLoader::GetModuleHandleA;
use windows::Win32::UI::Shell::{RemoveWindowSubclass, SetWindowSubclass};
use windows::Win32::UI::WindowsAndMessaging::{
CreateWindowExA, DefWindowProcA, RegisterClassA, CW_USEDEFAULT, WINDOW_EX_STYLE, WM_USER, WNDCLASSA, WS_POPUP,
CreateWindowExA, DefWindowProcA, GetClassInfoA, RegisterClassA, CW_USEDEFAULT, WINDOW_EX_STYLE, WM_USER, WNDCLASSA,
WS_POPUP,
};
use self::clipboard_impl::{clipboard_subproc, WinClipboardImpl};
@ -152,17 +153,25 @@ impl WinClipboard {
// SAFETY: low-level WinAPI call
let instance = unsafe { GetModuleHandleA(None)? };
let window_class = s!("IronRDPClipboardMonitor");
let wc = WNDCLASSA {
hInstance: instance.into(),
lpszClassName: window_class,
lpfnWndProc: Some(wndproc),
..Default::default()
};
// SAFETY: low-level WinAPI call
let atom = unsafe { RegisterClassA(&wc) };
if atom == 0 {
return Err(Error::from_win32())?;
let mut existing_wc = WNDCLASSA::default();
// SAFETY: `instance` is a valid module handle, `window_class` is a valid null-terminated string,
// and `existing_wc` is a valid mutable reference to a WNDCLASSA structure.
let class_exists = unsafe { GetClassInfoA(Some(instance.into()), window_class, &mut existing_wc).is_ok() };
if !class_exists {
let wc = WNDCLASSA {
hInstance: instance.into(),
lpszClassName: window_class,
lpfnWndProc: Some(wndproc),
..Default::default()
};
// SAFETY: low-level WinAPI call
let atom = unsafe { RegisterClassA(&wc) };
if atom == 0 {
return Err(WinCliprdrError::from(Error::from_thread()));
}
}
// SAFETY: low-level WinAPI call
@ -184,7 +193,7 @@ impl WinClipboard {
};
if window.is_invalid() {
return Err(Error::from_win32())?;
return Err(WinCliprdrError::from(Error::from_thread()));
}
// Init clipboard processing for WinAPI event loop
//
@ -200,8 +209,14 @@ impl WinClipboard {
//
// SAFETY: `window` is a valid window handle, `clipboard_subproc` is in the static memory,
// `ctx` is valid and its ownership is transferred to the subclass via `into_raw`.
let winapi_result =
unsafe { SetWindowSubclass(window, Some(clipboard_subproc), 0, Box::into_raw(ctx) as usize) };
let winapi_result = unsafe {
SetWindowSubclass(
window,
Some(clipboard_subproc),
0,
Box::into_raw(ctx).expose_provenance(),
)
};
if winapi_result == FALSE {
return Err(WinCliprdrError::WindowSubclass);


@ -26,7 +26,7 @@ impl GlobalMemoryBuffer {
// - `dst` is valid for writes of `data.len()` bytes, we allocated enough above.
// - Both `data` and `dst` are properly aligned: u8 alignment is 1
// - Memory regions are not overlapping, `dst` was allocated by us just above.
unsafe { core::ptr::copy_nonoverlapping(data.as_ptr(), dst as *mut u8, data.len()) };
unsafe { core::ptr::copy_nonoverlapping(data.as_ptr(), dst.cast::<u8>(), data.len()) };
// SAFETY: We called `GlobalLock` on this handle just above.
if let Err(error) = unsafe { GlobalUnlock(handle) } {


@ -6,6 +6,30 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [[0.5.0](https://github.com/Devolutions/IronRDP/compare/ironrdp-cliprdr-v0.4.0...ironrdp-cliprdr-v0.5.0)] - 2025-12-18
### <!-- 4 -->Bug Fixes
- Fixes the Cliprdr `SvcProcessor` impl to support handling a `TemporaryDirectory` Clipboard PDU ([#1031](https://github.com/Devolutions/IronRDP/issues/1031)) ([f2326ef046](https://github.com/Devolutions/IronRDP/commit/f2326ef046cc81fb0e8985f03382859085882e86))
- Allow servers to announce clipboard ownership ([#1053](https://github.com/Devolutions/IronRDP/issues/1053)) ([d587b0c4c1](https://github.com/Devolutions/IronRDP/commit/d587b0c4c114c49d30f52859f43b22f829456a01))
Servers can now send Format List PDU via initiate_copy() regardless of
internal state. The existing state machine was designed for clients
where clipboard initialization must complete before announcing
ownership.
MS-RDPECLIP Section 2.2.3.1 specifies that Format List PDU is sent by
either client or server when the local clipboard is updated. Servers
should be able to announce clipboard changes immediately after channel
negotiation.
This change enables RDP servers to properly announce clipboard ownership
by bypassing the Initialization/Ready state check when R::is_server() is
true. Client behavior remains unchanged.
- [**breaking**] Removed the `PackedMetafile::data()` method in favor of making the `PackedMetafile::data` field public.
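The new gating described in this entry can be summarized as: servers always announce ownership with a Format List, while clients still depend on the channel state. A simplified model (PDU names as strings; `Failed` is a hypothetical stand-in for any other state; not the real `Cliprdr` API):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum CliprdrState {
    Initialization,
    Ready,
    Failed, // hypothetical stand-in for any other state
}

// Which PDUs `initiate_copy` would queue, per the change described above:
// servers bypass the state check; client behavior is unchanged.
fn pdus_for_copy(state: CliprdrState, is_server: bool) -> Option<Vec<&'static str>> {
    if is_server {
        // MS-RDPECLIP 2.2.3.1: either side may send a Format List when its
        // local clipboard is updated; servers need no init handshake first.
        return Some(vec!["FormatList"]);
    }
    match state {
        CliprdrState::Ready => Some(vec!["FormatList"]),
        // First client copy is synthetic: sent along with capabilities and
        // temporary-directory PDUs during initialization.
        CliprdrState::Initialization => {
            Some(vec!["Capabilities", "TemporaryDirectory", "FormatList"])
        }
        CliprdrState::Failed => None,
    }
}
```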
## [[0.4.0](https://github.com/Devolutions/IronRDP/compare/ironrdp-cliprdr-v0.3.0...ironrdp-cliprdr-v0.4.0)] - 2025-08-29
### <!-- 4 -->Bug Fixes


@ -1,6 +1,6 @@
[package]
name = "ironrdp-cliprdr"
version = "0.4.0"
version = "0.5.0"
readme = "README.md"
description = "CLIPRDR static channel for clipboard implemented as described in MS-RDPECLIP"
edition.workspace = true


@ -1,10 +1,6 @@
#![cfg_attr(doc, doc = include_str!("../README.md"))]
#![doc(html_logo_url = "https://cdnweb.devolutions.net/images/projects/devolutions/logos/devolutions-icon-shadow.svg")]
#![allow(clippy::arithmetic_side_effects)] // FIXME: remove
#![allow(clippy::cast_lossless)] // FIXME: remove
#![allow(clippy::cast_possible_truncation)] // FIXME: remove
#![allow(clippy::cast_possible_wrap)] // FIXME: remove
#![allow(clippy::cast_sign_loss)] // FIXME: remove
pub mod backend;
pub mod pdu;
@ -32,16 +28,12 @@ pub type CliprdrSvcMessages<R> = SvcProcessorMessages<Cliprdr<R>>;
#[derive(Debug)]
enum ClipboardError {
UnimplementedPdu { pdu: &'static str },
FormatListRejected,
}
impl core::fmt::Display for ClipboardError {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
match self {
ClipboardError::UnimplementedPdu { pdu } => {
write!(f, "received clipboard PDU `{pdu}` is not implemented")
}
ClipboardError::FormatListRejected => write!(f, "sent format list was rejected"),
}
}
@ -238,26 +230,32 @@ impl<R: Role> Cliprdr<R> {
pub fn initiate_copy(&self, available_formats: &[ClipboardFormat]) -> PduResult<CliprdrSvcMessages<R>> {
let mut pdus = Vec::new();
match (self.state, R::is_server()) {
// When user initiates copy, we should send format list to server.
(CliprdrState::Ready, _) => {
pdus.push(ClipboardPdu::FormatList(
self.build_format_list(available_formats).map_err(|e| encode_err!(e))?,
));
}
(CliprdrState::Initialization, false) => {
// During initialization state, first copy action is synthetic and should be sent along with
// capabilities and temporary directory PDUs.
pdus.push(ClipboardPdu::Capabilities(self.capabilities.clone()));
pdus.push(ClipboardPdu::TemporaryDirectory(
ClientTemporaryDirectory::new(self.backend.temporary_directory()).map_err(|e| encode_err!(e))?,
));
pdus.push(ClipboardPdu::FormatList(
self.build_format_list(available_formats).map_err(|e| encode_err!(e))?,
));
}
_ => {
error!(?self.state, "Attempted to initiate copy in incorrect state");
if R::is_server() {
pdus.push(ClipboardPdu::FormatList(
self.build_format_list(available_formats).map_err(|e| encode_err!(e))?,
));
} else {
match self.state {
CliprdrState::Ready => {
pdus.push(ClipboardPdu::FormatList(
self.build_format_list(available_formats).map_err(|e| encode_err!(e))?,
));
}
CliprdrState::Initialization => {
// During initialization state, first copy action is synthetic and should be sent along with
// capabilities and temporary directory PDUs.
pdus.push(ClipboardPdu::Capabilities(self.capabilities.clone()));
pdus.push(ClipboardPdu::TemporaryDirectory(
ClientTemporaryDirectory::new(self.backend.temporary_directory())
.map_err(|e| encode_err!(e))?,
));
pdus.push(ClipboardPdu::FormatList(
self.build_format_list(available_formats).map_err(|e| encode_err!(e))?,
));
}
_ => {
error!(?self.state, "Attempted to initiate copy in incorrect state");
}
}
}
@ -337,9 +335,10 @@ impl<R: Role> SvcProcessor for Cliprdr<R> {
self.backend.on_file_contents_response(response);
Ok(Vec::new())
}
_ => self.handle_error_transition(ClipboardError::UnimplementedPdu {
pdu: pdu.message_name(),
}),
ClipboardPdu::TemporaryDirectory(_) => {
// do nothing
Ok(Vec::new())
}
}
}

View file

@ -94,19 +94,13 @@ impl<'a> FileContentsResponse<'a> {
/// Read data as u64 size value
pub fn data_as_size(&self) -> DecodeResult<u64> {
if self.data.len() != 8 {
return Err(invalid_field_err!(
"requestedFileContentsData",
"Invalid data size for u64 size"
));
}
let chunk = self
.data
.as_ref()
.try_into()
.map_err(|_| invalid_field_err!("requestedFileContentsData", "not enough bytes for u64 size"))?;
Ok(u64::from_le_bytes(
self.data
.as_ref()
.try_into()
.expect("data contains exactly eight u8 elements"),
))
Ok(u64::from_le_bytes(chunk))
}
}

View file

@ -42,7 +42,7 @@ pub struct PackedMetafile<'a> {
pub x_ext: u32,
pub y_ext: u32,
/// The variable sized contents of the metafile as specified in [MS-WMF] section 2
data: Cow<'a, [u8]>,
pub data: Cow<'a, [u8]>,
}
impl PackedMetafile<'_> {
@ -62,10 +62,6 @@ impl PackedMetafile<'_> {
data: data.into(),
}
}
pub fn data(&self) -> &[u8] {
&self.data
}
}
impl Encode for PackedMetafile<'_> {

View file

@ -6,6 +6,14 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [[0.8.0](https://github.com/Devolutions/IronRDP/compare/ironrdp-connector-v0.7.1...ironrdp-connector-v0.8.0)] - 2025-12-18
### <!-- 7 -->Build
- Bump picky and sspi ([#1028](https://github.com/Devolutions/IronRDP/issues/1028)) ([5bd319126d](https://github.com/Devolutions/IronRDP/commit/5bd319126d32fbd8e505508e27ab2b1a18a83d04))
This fixes build issues with some dependencies.
## [[0.7.1](https://github.com/Devolutions/IronRDP/compare/ironrdp-connector-v0.7.0...ironrdp-connector-v0.7.1)] - 2025-09-04
### <!-- 1 -->Features

View file

@ -1,6 +1,6 @@
[package]
name = "ironrdp-connector"
version = "0.7.1"
version = "0.8.0"
readme = "README.md"
description = "State machines to drive an RDP connection sequence"
edition.workspace = true
@ -27,13 +27,13 @@ ironrdp-core = { path = "../ironrdp-core", version = "0.1" } # public
ironrdp-error = { path = "../ironrdp-error", version = "0.1" } # public
ironrdp-pdu = { path = "../ironrdp-pdu", version = "0.6", features = ["std"] } # public
arbitrary = { version = "1", features = ["derive"], optional = true } # public
sspi = "0.16" # public
sspi = { version = "0.18", features = ["scard"] }
url = "2.5" # public
rand = { version = "0.9", features = ["std"] } # TODO: dependency injection?
tracing = { version = "0.1", features = ["log"] }
picky-asn1-der = "0.5"
picky-asn1-x509 = "0.14"
picky = "7.0.0-rc.17"
picky-asn1-x509 = "0.15"
picky = "=7.0.0-rc.20" # FIXME: We are pinning with = because the candidate version number counts as the minor number by Cargo, and will be automatically bumped in the Cargo.lock.
[lints]
workspace = true

View file

@ -176,6 +176,9 @@ impl ClientConnector {
matches!(self.state, ClientConnectorState::EnhancedSecurityUpgrade { .. })
}
/// # Panics
///
/// Panics if state is not [ClientConnectorState::EnhancedSecurityUpgrade].
pub fn mark_security_upgrade_as_done(&mut self) {
assert!(self.should_perform_security_upgrade());
self.step(&[], &mut WriteBuf::new()).expect("transition to next state");
@ -186,6 +189,9 @@ impl ClientConnector {
matches!(self.state, ClientConnectorState::Credssp { .. })
}
/// # Panics
///
/// Panics if state is not [ClientConnectorState::Credssp].
pub fn mark_credssp_as_done(&mut self) {
assert!(self.should_perform_credssp());
let res = self.step(&[], &mut WriteBuf::new()).expect("transition to next state");
@ -547,7 +553,7 @@ impl Sequence for ClientConnector {
mut connection_activation,
} => {
let written = connection_activation.step(input, output)?;
match connection_activation.state {
match connection_activation.connection_activation_state() {
ConnectionActivationState::ConnectionFinalization { .. } => (
written,
ClientConnectorState::ConnectionFinalization { connection_activation },
@ -564,10 +570,10 @@ impl Sequence for ClientConnector {
} => {
let written = connection_activation.step(input, output)?;
let next_state = if !connection_activation.state.is_terminal() {
let next_state = if !connection_activation.connection_activation_state().is_terminal() {
ClientConnectorState::ConnectionFinalization { connection_activation }
} else {
match connection_activation.state {
match connection_activation.connection_activation_state() {
ConnectionActivationState::Finalized {
io_channel_id,
user_channel_id,
@ -693,9 +699,9 @@ fn create_gcc_blocks<'a>(
desktop_physical_width: Some(0), // 0 per FreeRDP
desktop_physical_height: Some(0), // 0 per FreeRDP
desktop_orientation: if config.desktop_size.width > config.desktop_size.height {
Some(MonitorOrientation::Landscape as u16)
Some(MonitorOrientation::Landscape.as_u16())
} else {
Some(MonitorOrientation::Portrait as u16)
Some(MonitorOrientation::Portrait.as_u16())
},
desktop_scale_factor: Some(config.desktop_scale_factor),
device_scale_factor: if config.desktop_scale_factor >= 100 && config.desktop_scale_factor <= 500 {

View file

@ -1,7 +1,7 @@
use core::mem;
use ironrdp_pdu::rdp;
use ironrdp_pdu::rdp::capability_sets::CapabilitySet;
use ironrdp_pdu::rdp::{self};
use tracing::{debug, warn};
use crate::{
@ -22,7 +22,7 @@ use crate::{
/// [Server Deactivate All PDU]: https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-rdpbcgr/8a29971a-df3c-48da-add2-8ed9a05edc89
#[derive(Debug, Clone)]
pub struct ConnectionActivationSequence {
pub state: ConnectionActivationState,
state: ConnectionActivationState,
config: Config,
}
@ -37,6 +37,11 @@ impl ConnectionActivationSequence {
}
}
/// Returns the current state as a distinct type, rather than the `&dyn State` provided by [`Self::state`].
pub fn connection_activation_state(&self) -> ConnectionActivationState {
self.state
}
#[must_use]
pub fn reset_clone(&self) -> Self {
self.clone().reset()
@ -215,7 +220,7 @@ impl Sequence for ConnectionActivationSequence {
}
}
#[derive(Default, Debug, Clone)]
#[derive(Default, Debug, Copy, Clone)]
pub enum ConnectionActivationState {
#[default]
Consumed,

View file

@ -9,7 +9,7 @@ use tracing::{debug, warn};
use crate::{general_err, legacy, reason_err, ConnectorResult, Sequence, State, Written};
#[derive(Default, Debug, Clone)]
#[derive(Default, Debug, Copy, Clone)]
#[non_exhaustive]
#[cfg_attr(feature = "arbitrary", derive(arbitrary::Arbitrary))]
pub enum ConnectionFinalizationState {
@ -48,7 +48,7 @@ impl State for ConnectionFinalizationState {
}
}
#[derive(Debug, Clone)]
#[derive(Debug, Copy, Clone)]
#[cfg_attr(feature = "arbitrary", derive(arbitrary::Arbitrary))]
pub struct ConnectionFinalizationSequence {
pub state: ConnectionFinalizationState,

View file

@ -5,6 +5,7 @@ use picky_asn1_x509::{oids, Certificate, ExtensionView, GeneralName};
use sspi::credssp::{self, ClientState, CredSspClient};
use sspi::generator::{Generator, NetworkRequest};
use sspi::negotiate::ProtocolConfig;
use sspi::Secret;
use sspi::Username;
use tracing::debug;
@ -123,11 +124,13 @@ impl CredsspSequence {
certificate: cert,
reader_name: config.reader_name.clone(),
card_name: None,
container_name: config.container_name.clone(),
container_name: Some(config.container_name.clone()),
csp_name: config.csp_name.clone(),
pin: pin.as_bytes().to_vec().into(),
private_key_file_index: None,
private_key: Some(key.into()),
scard_type: sspi::SmartCardType::Emulated {
scard_pin: Secret::new(pin.as_bytes().to_vec()),
},
};
sspi::Credentials::SmartCard(Box::new(identity))
}

View file

@ -406,7 +406,7 @@ pub trait ConnectorResultExt {
impl<T> ConnectorResultExt for ConnectorResult<T> {
fn with_context(self, context: &'static str) -> Self {
self.map_err(|mut e| {
e.context = context;
e.set_context(context);
e
})
}

View file

@ -116,8 +116,8 @@ impl SvcProcessor for DrdynvcClient {
}
DrdynvcServerPdu::Create(create_request) => {
debug!("Got DVC Create Request PDU: {create_request:?}");
let channel_name = create_request.channel_name;
let channel_id = create_request.channel_id;
let channel_id = create_request.channel_id();
let channel_name = create_request.into_channel_name();
if !self.cap_handshake_done {
debug!(
@ -156,9 +156,9 @@ impl SvcProcessor for DrdynvcClient {
}
DrdynvcServerPdu::Close(close_request) => {
debug!("Got DVC Close Request PDU: {close_request:?}");
self.dynamic_channels.remove_by_channel_id(close_request.channel_id);
self.dynamic_channels.remove_by_channel_id(close_request.channel_id());
let close_response = DrdynvcClientPdu::Close(ClosePdu::new(close_request.channel_id));
let close_response = DrdynvcClientPdu::Close(ClosePdu::new(close_request.channel_id()));
debug!("Send DVC Close Response PDU: {close_response:?}");
responses.push(SvcMessage::from(close_response));

View file

@ -28,7 +28,7 @@ impl CompleteData {
}
fn process_data_first_pdu(&mut self, data_first: DataFirstPdu) -> DecodeResult<Option<Vec<u8>>> {
let total_data_size: DecodeResult<_> = cast_length!("DataFirstPdu::length", data_first.length);
let total_data_size: DecodeResult<_> = cast_length!("DataFirstPdu::length", data_first.length());
let total_data_size = total_data_size?;
if self.total_size != 0 || !self.data.is_empty() {
error!("Incomplete DVC message, it will be skipped");
@ -36,11 +36,11 @@ impl CompleteData {
self.data.clear();
}
if total_data_size == data_first.data.len() {
Ok(Some(data_first.data))
if total_data_size == data_first.data().len() {
Ok(Some(data_first.into_data()))
} else {
self.total_size = total_data_size;
self.data = data_first.data;
self.data = data_first.into_data();
Ok(None)
}
@ -49,22 +49,22 @@ impl CompleteData {
fn process_data_pdu(&mut self, mut data: DataPdu) -> DecodeResult<Option<Vec<u8>>> {
if self.total_size == 0 && self.data.is_empty() {
// message is not fragmented
return Ok(Some(data.data));
return Ok(Some(data.into_data()));
}
// The message is fragmented and needs to be reassembled.
match self.data.len().checked_add(data.data.len()) {
match self.data.len().checked_add(data.data().len()) {
Some(actual_data_length) => {
match actual_data_length.cmp(&(self.total_size)) {
cmp::Ordering::Less => {
// this is one of the fragmented messages, just append it
self.data.append(&mut data.data);
self.data.append(data.data_mut());
Ok(None)
}
cmp::Ordering::Equal => {
// this is the last fragmented message, need to return the whole reassembled message
self.total_size = 0;
self.data.append(&mut data.data);
self.data.append(data.data_mut());
Ok(Some(self.data.drain(..).collect()))
}
cmp::Ordering::Greater => {

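The reassembly logic in `CompleteData` above boils down to: a DATA_FIRST PDU announces the total length, and subsequent DATA PDUs are appended until that length is reached. A minimal standalone sketch (a hypothetical `Reassembler` type; the overflow and error paths of the real code are omitted):

```rust
/// Reassembles a fragmented DVC message from DATA_FIRST / DATA chunks.
#[derive(Default)]
struct Reassembler {
    total_size: usize,
    data: Vec<u8>,
}

impl Reassembler {
    /// Handle a DATA_FIRST PDU carrying the total message length.
    fn data_first(&mut self, total: usize, chunk: Vec<u8>) -> Option<Vec<u8>> {
        if total == chunk.len() {
            return Some(chunk); // not fragmented after all
        }
        self.total_size = total;
        self.data = chunk;
        None
    }

    /// Handle a DATA PDU; returns the whole message once complete.
    fn data(&mut self, mut chunk: Vec<u8>) -> Option<Vec<u8>> {
        if self.total_size == 0 && self.data.is_empty() {
            return Some(chunk); // unfragmented message
        }
        self.data.append(&mut chunk);
        if self.data.len() == self.total_size {
            self.total_size = 0;
            Some(self.data.drain(..).collect())
        } else {
            None // still waiting for more fragments
        }
    }
}

fn main() {
    let mut r = Reassembler::default();
    assert_eq!(r.data_first(4, vec![1, 2]), None);
    assert_eq!(r.data(vec![3, 4]), Some(vec![1, 2, 3, 4]));
}
```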
View file

@ -73,6 +73,8 @@ pub fn encode_dvc_messages(
while off < total_length {
let first = off == 0;
#[expect(clippy::missing_panics_doc, reason = "unreachable panic (checked underflow)")]
let remaining_length = total_length.checked_sub(off).expect("never overflow");
let size = core::cmp::min(remaining_length, DrdynvcDataPdu::MAX_DATA_SIZE);
let end = off

View file

@ -200,7 +200,7 @@ impl Header {
fn encode(&self, dst: &mut WriteCursor<'_>) -> EncodeResult<()> {
ensure_fixed_part_size!(in: dst);
dst.write_u8(((self.cmd as u8) << 4) | (Into::<u8>::into(self.sp) << 2) | Into::<u8>::into(self.cb_id));
dst.write_u8(((self.cmd.as_u8()) << 4) | (Into::<u8>::into(self.sp) << 2) | Into::<u8>::into(self.cb_id));
Ok(())
}
@ -235,6 +235,16 @@ enum Cmd {
SoftSyncResponse = 0x09,
}
impl Cmd {
#[expect(
clippy::as_conversions,
reason = "guarantees discriminant layout, and as is the only way to cast enum -> primitive"
)]
fn as_u8(self) -> u8 {
self as u8
}
}
impl TryFrom<u8> for Cmd {
type Error = DecodeError;
@ -282,12 +292,12 @@ impl From<Cmd> for String {
#[derive(Debug, PartialEq)]
pub struct DataFirstPdu {
header: Header,
pub channel_id: DynamicChannelId,
channel_id: DynamicChannelId,
/// Length is the *total* length of the data to be sent, including the length
/// of the data that will be sent by subsequent DVC_DATA PDUs.
pub length: u32,
length: u32,
/// Data is just the data to be sent in this PDU.
pub data: Vec<u8>,
data: Vec<u8>,
}
impl DataFirstPdu {
@ -322,6 +332,18 @@ impl DataFirstPdu {
}
}
pub fn length(&self) -> u32 {
self.length
}
pub fn data(&self) -> &[u8] {
&self.data
}
pub fn into_data(self) -> Vec<u8> {
self.data
}
fn decode(header: Header, src: &mut ReadCursor<'_>) -> DecodeResult<Self> {
let fixed_part_size = checked_sum(&[header.cb_id.size_of_val(), header.sp.size_of_val()])?;
ensure_size!(in: src, size: fixed_part_size);
@ -434,8 +456,8 @@ impl From<FieldType> for u8 {
#[derive(Debug, PartialEq)]
pub struct DataPdu {
header: Header,
pub channel_id: DynamicChannelId,
pub data: Vec<u8>,
channel_id: DynamicChannelId,
data: Vec<u8>,
}
impl DataPdu {
@ -447,6 +469,18 @@ impl DataPdu {
}
}
pub fn data(&self) -> &[u8] {
&self.data
}
pub fn into_data(self) -> Vec<u8> {
self.data
}
pub fn data_mut(&mut self) -> &mut Vec<u8> {
&mut self.data
}
fn decode(header: Header, src: &mut ReadCursor<'_>) -> DecodeResult<Self> {
ensure_size!(in: src, size: header.cb_id.size_of_val());
let channel_id = header.cb_id.decode_val(src)?;
@ -485,8 +519,8 @@ impl DataPdu {
#[derive(Debug, PartialEq)]
pub struct CreateResponsePdu {
header: Header,
pub channel_id: DynamicChannelId,
pub creation_status: CreationStatus,
channel_id: DynamicChannelId,
creation_status: CreationStatus,
}
impl CreateResponsePdu {
@ -498,6 +532,14 @@ impl CreateResponsePdu {
}
}
pub fn channel_id(&self) -> DynamicChannelId {
self.channel_id
}
pub fn creation_status(&self) -> CreationStatus {
self.creation_status
}
fn name() -> &'static str {
"DYNVC_CREATE_RSP"
}
@ -564,7 +606,7 @@ impl From<CreationStatus> for u32 {
#[derive(Debug, PartialEq)]
pub struct ClosePdu {
header: Header,
pub channel_id: DynamicChannelId,
channel_id: DynamicChannelId,
}
impl ClosePdu {
@ -583,6 +625,10 @@ impl ClosePdu {
}
}
pub fn channel_id(&self) -> DynamicChannelId {
self.channel_id
}
fn decode(header: Header, src: &mut ReadCursor<'_>) -> DecodeResult<Self> {
ensure_size!(in: src, size: Self::headerless_size(&header));
let channel_id = header.cb_id.decode_val(src)?;
@ -666,7 +712,7 @@ impl CapsVersion {
fn encode(&self, dst: &mut WriteCursor<'_>) -> EncodeResult<()> {
ensure_size!(in: dst, size: Self::size());
dst.write_u16(*self as u16);
dst.write_u16(u16::from(*self));
Ok(())
}
@ -689,6 +735,10 @@ impl TryFrom<u16> for CapsVersion {
}
impl From<CapsVersion> for u16 {
#[expect(
clippy::as_conversions,
reason = "guarantees discriminant layout, and as is the only way to cast enum -> primitive"
)]
fn from(version: CapsVersion) -> Self {
version as u16
}
@ -798,8 +848,8 @@ impl CapabilitiesRequestPdu {
#[derive(Debug, PartialEq)]
pub struct CreateRequestPdu {
header: Header,
pub channel_id: DynamicChannelId,
pub channel_name: String,
channel_id: DynamicChannelId,
channel_name: String,
}
impl CreateRequestPdu {
@ -811,6 +861,18 @@ impl CreateRequestPdu {
}
}
pub fn channel_id(&self) -> DynamicChannelId {
self.channel_id
}
pub fn channel_name(&self) -> &str {
&self.channel_name
}
pub fn into_channel_name(self) -> String {
self.channel_name
}
fn decode(header: Header, src: &mut ReadCursor<'_>) -> DecodeResult<Self> {
ensure_size!(in: src, size: Self::headerless_fixed_part_size(&header));
let channel_id = header.cb_id.decode_val(src)?;

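The header byte written by `Header::encode` above packs three fields into one octet: `cmd` in the high nibble, `Sp` in bits 3-2, and `cbId` in bits 1-0. A standalone sketch of the packing (the `Cmd` value 0x03 for DYNVC_DATA follows MS-RDPEDYC; treat the concrete values here as assumptions):

```rust
/// Packs a DYNVC header octet: cmd in bits 7-4, sp in bits 3-2, cb_id in bits 1-0.
fn pack_header(cmd: u8, sp: u8, cb_id: u8) -> u8 {
    debug_assert!(cmd <= 0x0F && sp <= 0x03 && cb_id <= 0x03);
    (cmd << 4) | (sp << 2) | cb_id
}

/// Splits a DYNVC header octet back into (cmd, sp, cb_id).
fn unpack_header(byte: u8) -> (u8, u8, u8) {
    (byte >> 4, (byte >> 2) & 0x03, byte & 0x03)
}

fn main() {
    // DYNVC_DATA (cmd = 0x03), Sp = 0, two-byte channel id encoding (cbId = 1).
    let b = pack_header(0x03, 0, 1);
    assert_eq!(b, 0x31);
    assert_eq!(unpack_header(b), (0x03, 0, 1));
}
```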
View file

@ -139,22 +139,24 @@ impl SvcProcessor for DrdynvcServer {
}
DrdynvcClientPdu::Create(create_resp) => {
debug!("Got DVC Create Response PDU: {create_resp:?}");
let id = create_resp.channel_id;
let id = create_resp.channel_id();
let c = self.channel_by_id(id).map_err(|e| decode_err!(e))?;
if c.state != ChannelState::Creation {
return Err(pdu_other_err!("invalid channel state"));
}
if create_resp.creation_status != CreationStatus::OK {
c.state = ChannelState::CreationFailed(create_resp.creation_status.into());
if create_resp.creation_status() != CreationStatus::OK {
c.state = ChannelState::CreationFailed(create_resp.creation_status().into());
return Ok(resp);
}
c.state = ChannelState::Opened;
let msg = c.processor.start(create_resp.channel_id)?;
let msg = c.processor.start(create_resp.channel_id())?;
resp.extend(encode_dvc_messages(id, msg, ChannelFlags::SHOW_PROTOCOL).map_err(|e| encode_err!(e))?);
}
DrdynvcClientPdu::Close(close_resp) => {
debug!("Got DVC Close Response PDU: {close_resp:?}");
let c = self.channel_by_id(close_resp.channel_id).map_err(|e| decode_err!(e))?;
let c = self
.channel_by_id(close_resp.channel_id())
.map_err(|e| decode_err!(e))?;
if c.state != ChannelState::Opened {
return Err(pdu_other_err!("invalid channel state"));
}

View file

@ -23,8 +23,8 @@ impl<T> Source for T where T: fmt::Display + fmt::Debug + Send + Sync + 'static
#[derive(Debug)]
pub struct Error<Kind> {
pub context: &'static str,
pub kind: Kind,
context: &'static str,
kind: Kind,
#[cfg(feature = "std")]
source: Option<Box<dyn core::error::Error + Sync + Send>>,
#[cfg(all(not(feature = "std"), feature = "alloc"))]
@ -80,6 +80,10 @@ impl<Kind> Error<Kind> {
&self.kind
}
pub fn set_context(&mut self, context: &'static str) {
self.context = context;
}
pub fn report(&self) -> ErrorReport<'_, Kind> {
ErrorReport(self)
}

View file

@ -6,13 +6,15 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [[0.6.0](https://github.com/Devolutions/IronRDP/compare/ironrdp-futures-v0.5.0...ironrdp-futures-v0.6.0)] - 2025-12-18
## [[0.1.3](https://github.com/Devolutions/IronRDP/compare/ironrdp-futures-v0.1.2...ironrdp-futures-v0.1.3)] - 2025-03-12
### <!-- 7 -->Build
- Update dependencies (#695) ([c21fa44fd6](https://github.com/Devolutions/IronRDP/commit/c21fa44fd6f3c6a6b74788ff68e83133c1314caa))
## [[0.1.2](https://github.com/Devolutions/IronRDP/compare/ironrdp-futures-v0.1.1...ironrdp-futures-v0.1.2)] - 2025-01-28
### <!-- 6 -->Documentation

View file

@ -1,6 +1,6 @@
[package]
name = "ironrdp-futures"
version = "0.5.0"
version = "0.6.0"
readme = "README.md"
description = "`Framed*` traits implementation above futures traits"
edition.workspace = true
@ -17,7 +17,7 @@ test = false
[dependencies]
futures-util = { version = "0.3", features = ["io"] } # public
ironrdp-async = { path = "../ironrdp-async", version = "0.7" } # public
ironrdp-async = { path = "../ironrdp-async", version = "0.8" } # public
bytes = "1" # public
[lints]

View file

@ -114,8 +114,8 @@ pub fn rdp6_decode_bitmap_stream_to_rgb24(input: &BitmapInput<'_>) {
let _ = BitmapStreamDecoder::default().decode_bitmap_stream_to_rgb24(
input.src,
&mut out,
input.width as usize,
input.height as usize,
usize::from(input.width),
usize::from(input.height),
);
}

View file

@ -6,6 +6,16 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [[0.7.0](https://github.com/Devolutions/IronRDP/compare/ironrdp-graphics-v0.6.0...ironrdp-graphics-v0.7.0)] - 2025-12-18
### Added
- [**breaking**] `InvalidIntegralConversion` variant in `RlgrError` and `ZgfxError`
### <!-- 7 -->Build
- Bump bytemuck from 1.23.2 to 1.24.0 ([#1008](https://github.com/Devolutions/IronRDP/issues/1008)) ([a24a1fa9e8](https://github.com/Devolutions/IronRDP/commit/a24a1fa9e8f1898b2fcdd41d87660ab9e38f89ed))
## [[0.6.0](https://github.com/Devolutions/IronRDP/compare/ironrdp-graphics-v0.5.0...ironrdp-graphics-v0.6.0)] - 2025-06-27
### <!-- 4 -->Bug Fixes

View file

@ -1,6 +1,6 @@
[package]
name = "ironrdp-graphics"
version = "0.6.0"
version = "0.7.0"
readme = "README.md"
description = "RDP image processing primitives"
edition.workspace = true
@ -22,14 +22,13 @@ bitvec = "1.0"
ironrdp-core = { path = "../ironrdp-core", version = "0.1" } # public
ironrdp-pdu = { path = "../ironrdp-pdu", version = "0.6", features = ["std"] } # public
byteorder = "1.5" # TODO: remove
lazy_static.workspace = true # Legacy crate; prefer std::sync::LazyLock or LazyCell
num-derive.workspace = true # TODO: remove
num-traits.workspace = true # TODO: remove
yuv = { version = "0.8", features = ["rdp"] }
[dev-dependencies]
bmp = "0.5"
bytemuck = "1.23"
bytemuck = "1.24"
expect-test.workspace = true
[lints]

View file

@ -40,6 +40,10 @@ pub fn ycbcr_to_rgba(input: YCbCrBuffer<'_>, output: &mut [u8]) -> io::Result<()
rdp_yuv444_to_rgba(&planar, output, len).map_err(io::Error::other)
}
/// # Panics
///
/// - Panics if `width` > 64.
/// - Panics if `height` > 64.
#[expect(clippy::too_many_arguments)]
pub fn to_64x64_ycbcr_tile(
input: &[u8],
@ -79,10 +83,15 @@ pub fn to_64x64_ycbcr_tile(
/// Convert a 16-bit RDP color to RGB representation. Input value should be represented in
/// little-endian format.
pub fn rdp_16bit_to_rgb(color: u16) -> [u8; 3] {
let r = (((((color >> 11) & 0x1f) * 527) + 23) >> 6) as u8;
let g = (((((color >> 5) & 0x3f) * 259) + 33) >> 6) as u8;
let b = ((((color & 0x1f) * 527) + 23) >> 6) as u8;
[r, g, b]
#[expect(clippy::missing_panics_doc, reason = "unreachable panic (checked integer underflow)")]
let out = {
let r = u8::try_from(((((color >> 11) & 0x1f) * 527) + 23) >> 6).expect("max possible value is 255");
let g = u8::try_from(((((color >> 5) & 0x3f) * 259) + 33) >> 6).expect("max possible value is 255");
let b = u8::try_from((((color & 0x1f) * 527) + 23) >> 6).expect("max possible value is 255");
[r, g, b]
};
out
}
#[derive(Debug)]

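The constants in `rdp_16bit_to_rgb` are a divisionless rounding trick: `((x * 527) + 23) >> 6` reproduces `round(x * 255 / 31)` for the 5-bit red and blue channels, and `((x * 259) + 33) >> 6` reproduces `round(x * 255 / 63)` for the 6-bit green channel. A quick sketch that checks this over the full input range:

```rust
// Expand a 5-bit channel value (0..=31) to 8 bits without division.
fn expand5(x: u32) -> u32 {
    ((x * 527) + 23) >> 6
}

// Expand a 6-bit channel value (0..=63) to 8 bits without division.
fn expand6(x: u32) -> u32 {
    ((x * 259) + 33) >> 6
}

fn main() {
    // The multiply-add-shift form matches exact rounding for every input.
    for x in 0..32 {
        let exact = ((x * 255) as f64 / 31.0).round() as u32;
        assert_eq!(expand5(x), exact);
    }
    for x in 0..64 {
        let exact = ((x * 255) as f64 / 63.0).round() as u32;
        assert_eq!(expand6(x), exact);
    }
}
```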
View file

@ -22,17 +22,21 @@ fn dwt_vertical<const SUBBAND_WIDTH: usize>(buffer: &[i16], dwt: &mut [i16]) {
let h_index = l_index + SUBBAND_WIDTH * total_width;
let src_index = y * total_width + x;
dwt[h_index] = ((i32::from(buffer[src_index + total_width])
- ((i32::from(buffer[src_index])
+ i32::from(buffer[src_index + if n < SUBBAND_WIDTH - 1 { 2 * total_width } else { 0 }]))
>> 1))
>> 1) as i16;
dwt[l_index] = (i32::from(buffer[src_index])
+ if n == 0 {
i32::from(dwt[h_index])
} else {
(i32::from(dwt[h_index - total_width]) + i32::from(dwt[h_index])) >> 1
}) as i16;
dwt[h_index] = i32_to_i16_possible_truncation(
(i32::from(buffer[src_index + total_width])
- ((i32::from(buffer[src_index])
+ i32::from(buffer[src_index + if n < SUBBAND_WIDTH - 1 { 2 * total_width } else { 0 }]))
>> 1))
>> 1,
);
dwt[l_index] = i32_to_i16_possible_truncation(
i32::from(buffer[src_index])
+ if n == 0 {
i32::from(dwt[h_index])
} else {
(i32::from(dwt[h_index - total_width]) + i32::from(dwt[h_index])) >> 1
},
);
}
}
}
@ -57,16 +61,20 @@ fn dwt_horizontal<const SUBBAND_WIDTH: usize>(mut buffer: &mut [i16], dwt: &[i16
let x = n * 2;
// HL
hl[n] = ((i32::from(l_src[x + 1])
- ((i32::from(l_src[x]) + i32::from(l_src[if n < SUBBAND_WIDTH - 1 { x + 2 } else { x }])) >> 1))
>> 1) as i16;
hl[n] = i32_to_i16_possible_truncation(
(i32::from(l_src[x + 1])
- ((i32::from(l_src[x]) + i32::from(l_src[if n < SUBBAND_WIDTH - 1 { x + 2 } else { x }])) >> 1))
>> 1,
);
// LL
ll[n] = (i32::from(l_src[x])
+ if n == 0 {
i32::from(hl[n])
} else {
(i32::from(hl[n - 1]) + i32::from(hl[n])) >> 1
}) as i16;
ll[n] = i32_to_i16_possible_truncation(
i32::from(l_src[x])
+ if n == 0 {
i32::from(hl[n])
} else {
(i32::from(hl[n - 1]) + i32::from(hl[n])) >> 1
},
);
}
// H
@ -74,16 +82,20 @@ fn dwt_horizontal<const SUBBAND_WIDTH: usize>(mut buffer: &mut [i16], dwt: &[i16
let x = n * 2;
// HH
hh[n] = ((i32::from(h_src[x + 1])
- ((i32::from(h_src[x]) + i32::from(h_src[if n < SUBBAND_WIDTH - 1 { x + 2 } else { x }])) >> 1))
>> 1) as i16;
hh[n] = i32_to_i16_possible_truncation(
(i32::from(h_src[x + 1])
- ((i32::from(h_src[x]) + i32::from(h_src[if n < SUBBAND_WIDTH - 1 { x + 2 } else { x }])) >> 1))
>> 1,
);
// LH
lh[n] = (i32::from(h_src[x])
+ if n == 0 {
i32::from(hh[n])
} else {
(i32::from(hh[n - 1]) + i32::from(hh[n])) >> 1
}) as i16;
lh[n] = i32_to_i16_possible_truncation(
i32::from(h_src[x])
+ if n == 0 {
i32::from(hh[n])
} else {
(i32::from(hh[n - 1]) + i32::from(hh[n])) >> 1
},
);
}
hl = &mut hl[SUBBAND_WIDTH..];
@ -124,24 +136,30 @@ fn inverse_horizontal(mut buffer: &[i16], temp_buffer: &mut [i16], subband_width
for _ in 0..subband_width {
// Even coefficients
l_dst[0] = (i32::from(ll[0]) - ((i32::from(hl[0]) + i32::from(hl[0]) + 1) >> 1)) as i16;
h_dst[0] = (i32::from(lh[0]) - ((i32::from(hh[0]) + i32::from(hh[0]) + 1) >> 1)) as i16;
l_dst[0] = i32_to_i16_possible_truncation(i32::from(ll[0]) - ((i32::from(hl[0]) + i32::from(hl[0]) + 1) >> 1));
h_dst[0] = i32_to_i16_possible_truncation(i32::from(lh[0]) - ((i32::from(hh[0]) + i32::from(hh[0]) + 1) >> 1));
for n in 1..subband_width {
let x = n * 2;
l_dst[x] = (i32::from(ll[n]) - ((i32::from(hl[n - 1]) + i32::from(hl[n]) + 1) >> 1)) as i16;
h_dst[x] = (i32::from(lh[n]) - ((i32::from(hh[n - 1]) + i32::from(hh[n]) + 1) >> 1)) as i16;
l_dst[x] =
i32_to_i16_possible_truncation(i32::from(ll[n]) - ((i32::from(hl[n - 1]) + i32::from(hl[n]) + 1) >> 1));
h_dst[x] =
i32_to_i16_possible_truncation(i32::from(lh[n]) - ((i32::from(hh[n - 1]) + i32::from(hh[n]) + 1) >> 1));
}
// Odd coefficients
for n in 0..subband_width - 1 {
let x = n * 2;
l_dst[x + 1] = (i32::from(hl[n] << 1) + ((i32::from(l_dst[x]) + i32::from(l_dst[x + 2])) >> 1)) as i16;
h_dst[x + 1] = (i32::from(hh[n] << 1) + ((i32::from(h_dst[x]) + i32::from(h_dst[x + 2])) >> 1)) as i16;
l_dst[x + 1] = i32_to_i16_possible_truncation(
i32::from(hl[n] << 1) + ((i32::from(l_dst[x]) + i32::from(l_dst[x + 2])) >> 1),
);
h_dst[x + 1] = i32_to_i16_possible_truncation(
i32::from(hh[n] << 1) + ((i32::from(h_dst[x]) + i32::from(h_dst[x + 2])) >> 1),
);
}
let n = subband_width - 1;
let x = n * 2;
l_dst[x + 1] = (i32::from(hl[n] << 1) + i32::from(l_dst[x])) as i16;
h_dst[x + 1] = (i32::from(hh[n] << 1) + i32::from(h_dst[x])) as i16;
l_dst[x + 1] = i32_to_i16_possible_truncation(i32::from(hl[n] << 1) + i32::from(l_dst[x]));
h_dst[x + 1] = i32_to_i16_possible_truncation(i32::from(hh[n] << 1) + i32::from(h_dst[x]));
hl = &hl[subband_width..];
lh = &lh[subband_width..];
@ -157,8 +175,9 @@ fn inverse_vertical(mut buffer: &mut [i16], mut temp_buffer: &[i16], subband_wid
let total_width = subband_width * 2;
for _ in 0..total_width {
buffer[0] =
(i32::from(temp_buffer[0]) - ((i32::from(temp_buffer[subband_width * total_width]) * 2 + 1) >> 1)) as i16;
buffer[0] = i32_to_i16_possible_truncation(
i32::from(temp_buffer[0]) - ((i32::from(temp_buffer[subband_width * total_width]) * 2 + 1) >> 1),
);
let mut l = temp_buffer;
let mut lh = &temp_buffer[(subband_width - 1) * total_width..];
@ -171,18 +190,28 @@ fn inverse_vertical(mut buffer: &mut [i16], mut temp_buffer: &[i16], subband_wid
h = &h[total_width..];
// Even coefficients
dst[2 * total_width] = (i32::from(l[0]) - ((i32::from(lh[0]) + i32::from(h[0]) + 1) >> 1)) as i16;
dst[2 * total_width] =
i32_to_i16_possible_truncation(i32::from(l[0]) - ((i32::from(lh[0]) + i32::from(h[0]) + 1) >> 1));
// Odd coefficients
dst[total_width] =
(i32::from(lh[0] << 1) + ((i32::from(dst[0]) + i32::from(dst[2 * total_width])) >> 1)) as i16;
dst[total_width] = i32_to_i16_possible_truncation(
i32::from(lh[0] << 1) + ((i32::from(dst[0]) + i32::from(dst[2 * total_width])) >> 1),
);
dst = &mut dst[2 * total_width..];
}
dst[total_width] = (i32::from(lh[total_width] << 1) + ((i32::from(dst[0]) + i32::from(dst[0])) >> 1)) as i16;
dst[total_width] = i32_to_i16_possible_truncation(
i32::from(lh[total_width] << 1) + ((i32::from(dst[0]) + i32::from(dst[0])) >> 1),
);
temp_buffer = &temp_buffer[1..];
buffer = &mut buffer[1..];
}
}
#[expect(clippy::as_conversions)]
#[expect(clippy::cast_possible_truncation)]
fn i32_to_i16_possible_truncation(value: i32) -> i16 {
value as i16
}

View file

@ -1,10 +1,6 @@
#![cfg_attr(doc, doc = include_str!("../README.md"))]
#![doc(html_logo_url = "https://cdnweb.devolutions.net/images/projects/devolutions/logos/devolutions-icon-shadow.svg")]
#![allow(clippy::arithmetic_side_effects)] // FIXME: remove
#![allow(clippy::cast_lossless)] // FIXME: remove
#![allow(clippy::cast_possible_truncation)] // FIXME: remove
#![allow(clippy::cast_possible_wrap)] // FIXME: remove
#![allow(clippy::cast_sign_loss)] // FIXME: remove
pub mod color_conversion;
pub mod diff;
@ -21,6 +17,9 @@ pub mod zgfx;
mod utils;
/// # Panics
///
/// Panics if `input.len()` is not 4096 (64 * 64).
pub fn rfx_encode_component(
input: &mut [i16],
output: &mut [u8],

View file

@ -260,9 +260,12 @@ impl DecodedPointer {
} else if target.should_premultiply_alpha() {
// Calculate premultiplied alpha via integer arithmetic
let with_premultiplied_alpha = [
((color[0] as u16 * color[0] as u16) >> 8) as u8,
((color[1] as u16 * color[1] as u16) >> 8) as u8,
((color[2] as u16 * color[2] as u16) >> 8) as u8,
u8::try_from((u16::from(color[0]) * u16::from(color[0])) >> 8)
.expect("(u16 >> 8) fits into u8"),
u8::try_from((u16::from(color[1]) * u16::from(color[1])) >> 8)
.expect("(u16 >> 8) fits into u8"),
u8::try_from((u16::from(color[2]) * u16::from(color[2])) >> 8)
.expect("(u16 >> 8) fits into u8"),
color[3],
];
bitmap_data.extend_from_slice(&with_premultiplied_alpha);

View file

@ -11,7 +11,7 @@ pub fn decode(buffer: &mut [i16], quant: &Quant) {
let (first_level, buffer) = buffer.split_at_mut(FIRST_LEVEL_SUBBANDS_COUNT * FIRST_LEVEL_SIZE);
let (second_level, third_level) = buffer.split_at_mut(SECOND_LEVEL_SUBBANDS_COUNT * SECOND_LEVEL_SIZE);
let decode_chunk = |a: (&mut [i16], u8)| decode_block(a.0, a.1 as i16 - 1);
let decode_chunk = |a: (&mut [i16], u8)| decode_block(a.0, i16::from(a.1) - 1);
first_level
.chunks_mut(FIRST_LEVEL_SIZE)
@ -49,7 +49,7 @@ pub fn encode(buffer: &mut [i16], quant: &Quant) {
let (first_level, buffer) = buffer.split_at_mut(FIRST_LEVEL_SUBBANDS_COUNT * FIRST_LEVEL_SIZE);
let (second_level, third_level) = buffer.split_at_mut(SECOND_LEVEL_SUBBANDS_COUNT * SECOND_LEVEL_SIZE);
let encode_chunk = |a: (&mut [i16], u8)| encode_block(a.0, a.1 as i16 - 1);
let encode_chunk = |a: (&mut [i16], u8)| encode_block(a.0, i16::from(a.1) - 1);
first_level
.chunks_mut(FIRST_LEVEL_SIZE)


@ -195,8 +195,9 @@ impl<'a> BitmapStreamDecoderImpl<'a> {
}
fn write_aycocg_planes_to_rgb24(&self, params: AYCoCgParams, planes: &[u8], dst: &mut Vec<u8>) {
#![allow(clippy::similar_names)] // Its hard to find better names for co, cg, etc.
let sample_shift = params.chroma_subsampling as usize;
#![allow(clippy::similar_names, reason = "its hard to find better names for co, cg, etc")]
let sample_shift = usize::from(params.chroma_subsampling);
let (y_offset, co_offset, cg_offset) = (
self.color_plane_offsets[0],
@ -265,12 +266,13 @@ fn ycocg_with_cll_to_rgb(cll: u8, y: u8, co: u8, cg: u8) -> Rgb {
// |R| |1 1/2 -1/2| |Y |
// |G| = |1 0 1/2| * |Co|
// |B| |1 -1/2 -1/2| |Cg|
let chroma_shift = (cll - 1) as usize;
let chroma_shift = cll - 1;
let clip_i16 = |v: i16| v.clamp(0, 255) as u8;
let clip_i16 =
|v: i16| u8::try_from(v.clamp(0, 255)).expect("fits into u8 because the value is clamped to [0..256]");
let co_signed = (co << chroma_shift) as i8;
let cg_signed = (cg << chroma_shift) as i8;
let co_signed = (co << chroma_shift).cast_signed();
let cg_signed = (cg << chroma_shift).cast_signed();
let y = i16::from(y);
let co = i16::from(co_signed);


@ -93,9 +93,9 @@ impl RlePlaneDecoder {
let raw_bytes_field = (control_byte >> 4) & 0x0F;
let (run_length, raw_bytes_count) = match rle_bytes_field {
1 => (16 + raw_bytes_field as usize, 0),
2 => (32 + raw_bytes_field as usize, 0),
rle_control => (rle_control as usize, raw_bytes_field as usize),
1 => (16 + usize::from(raw_bytes_field), 0),
2 => (32 + usize::from(raw_bytes_field), 0),
rle_control => (usize::from(rle_control), usize::from(raw_bytes_field)),
};
self.decoded_data_len = raw_bytes_count + run_length;
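The control-byte decoding above splits one byte into two nibbles; a low nibble of 1 or 2 extends the run length instead of signalling a raw-byte count. A hedged standalone sketch (the nibble layout follows the hunk above; the surrounding decoder state is omitted):

```rust
/// Decode an RLE control byte into (run_length, raw_bytes_count).
/// A low nibble of 1 or 2 means an extended run of 16 + hi or 32 + hi;
/// any other value is the run length itself, with hi raw bytes.
fn decode_control_byte(control_byte: u8) -> (usize, usize) {
    let raw_bytes_field = (control_byte >> 4) & 0x0F;
    let rle_bytes_field = control_byte & 0x0F;
    match rle_bytes_field {
        1 => (16 + usize::from(raw_bytes_field), 0),
        2 => (32 + usize::from(raw_bytes_field), 0),
        rle_control => (usize::from(rle_control), usize::from(raw_bytes_field)),
    }
}
```

Note that the largest decodable run is `32 + 15 = 47`, which matches the `MAX_DECODED_SEGMENT_SIZE (47)` mentioned in a later hunk.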
@ -207,7 +207,8 @@ impl<I: Iterator> RleEncoderScanlineIterator<I> {
}
fn delta_value(prev: u8, next: u8) -> u8 {
let mut result = (next as i16 - prev as i16) as u8;
let mut result = u8::try_from((i16::from(next) - i16::from(prev)) & 0xFF)
.expect("masking with 0xFF ensures that the value fits into u8");
// bit magic from 3.1.9.2.1 of [MS-RDPEGDI].
if result < 128 {
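The masked subtraction introduced above is equivalent to `u8` wrapping subtraction, which is why the `try_from` can never fail. A sketch of that equivalence (helper names are hypothetical):

```rust
// (next - prev) computed in i16 and masked to 8 bits is the same as
// wrapping subtraction on u8, so the try_from conversion always succeeds.
fn delta_masked(prev: u8, next: u8) -> u8 {
    u8::try_from((i16::from(next) - i16::from(prev)) & 0xFF).expect("masking with 0xFF ensures the value fits into u8")
}

fn delta_wrapping(prev: u8, next: u8) -> u8 {
    next.wrapping_sub(prev)
}
```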
@ -326,7 +327,10 @@ impl RlePlaneEncoder {
raw = &raw[15..];
}
let control = ((raw.len() as u8) << 4) + cmp::min(run, 15) as u8;
let raw_len = u8::try_from(raw.len()).expect("max value is guaranteed to be 15 due to the prior while loop");
let run_capped = u8::try_from(cmp::min(run, 15)).expect("max value is guaranteed to be 15");
let control = (raw_len << 4) + run_capped;
ensure_size!(dst: dst, size: raw.len() + 1);
@ -352,7 +356,8 @@ impl RlePlaneEncoder {
while run >= 16 {
ensure_size!(dst: dst, size: 1);
let current = cmp::min(run, MAX_DECODED_SEGMENT_SIZE) as u8;
let current = u8::try_from(cmp::min(run, MAX_DECODED_SEGMENT_SIZE))
.expect("max value is guaranteed to be MAX_DECODED_SEGMENT_SIZE (47)");
let c_raw_bytes = cmp::min(current / 16, 2);
let n_run_length = current - c_raw_bytes * 16;
@ -361,7 +366,7 @@ impl RlePlaneEncoder {
dst.write_u8(control);
written += 1;
run -= current as usize;
run -= usize::from(current);
}
if run > 0 {


@ -400,88 +400,86 @@ fn bands_internals_equal(first_band: &[InclusiveRectangle], second_band: &[Inclu
#[cfg(test)]
mod tests {
use lazy_static::lazy_static;
use std::sync::LazyLock;
use super::*;
lazy_static! {
static ref REGION_FOR_RECTANGLES_INTERSECTION: Region = Region {
extents: InclusiveRectangle {
static REGION_FOR_RECTANGLES_INTERSECTION: LazyLock<Region> = LazyLock::new(|| Region {
extents: InclusiveRectangle {
left: 1,
top: 1,
right: 11,
bottom: 9,
},
rectangles: vec![
InclusiveRectangle {
left: 1,
top: 1,
right: 5,
bottom: 3,
},
InclusiveRectangle {
left: 7,
top: 1,
right: 8,
bottom: 3,
},
InclusiveRectangle {
left: 9,
top: 1,
right: 11,
bottom: 3,
},
InclusiveRectangle {
left: 7,
top: 3,
right: 11,
bottom: 4,
},
InclusiveRectangle {
left: 3,
top: 4,
right: 6,
bottom: 6,
},
InclusiveRectangle {
left: 7,
top: 4,
right: 11,
bottom: 6,
},
InclusiveRectangle {
left: 1,
top: 6,
right: 3,
bottom: 8,
},
InclusiveRectangle {
left: 4,
top: 6,
right: 5,
bottom: 8,
},
InclusiveRectangle {
left: 6,
top: 6,
right: 10,
bottom: 8,
},
InclusiveRectangle {
left: 4,
top: 8,
right: 5,
bottom: 9,
},
rectangles: vec![
InclusiveRectangle {
left: 1,
top: 1,
right: 5,
bottom: 3,
},
InclusiveRectangle {
left: 7,
top: 1,
right: 8,
bottom: 3,
},
InclusiveRectangle {
left: 9,
top: 1,
right: 11,
bottom: 3,
},
InclusiveRectangle {
left: 7,
top: 3,
right: 11,
bottom: 4,
},
InclusiveRectangle {
left: 3,
top: 4,
right: 6,
bottom: 6,
},
InclusiveRectangle {
left: 7,
top: 4,
right: 11,
bottom: 6,
},
InclusiveRectangle {
left: 1,
top: 6,
right: 3,
bottom: 8,
},
InclusiveRectangle {
left: 4,
top: 6,
right: 5,
bottom: 8,
},
InclusiveRectangle {
left: 6,
top: 6,
right: 10,
bottom: 8,
},
InclusiveRectangle {
left: 4,
top: 8,
right: 5,
bottom: 9,
},
InclusiveRectangle {
left: 6,
top: 8,
right: 10,
bottom: 9,
},
],
};
}
InclusiveRectangle {
left: 6,
top: 8,
right: 10,
bottom: 9,
},
],
});
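Several hunks in this set migrate test fixtures from `lazy_static!` to `std::sync::LazyLock` (stable since Rust 1.80). The pattern in miniature, with an illustrative static:

```rust
use std::sync::LazyLock;

// Before: lazy_static! { static ref TABLE: Vec<u32> = build_table(); }
// After: a plain static whose initializer runs on first access.
static TABLE: LazyLock<Vec<u32>> = LazyLock::new(|| (0..4).map(|i| i * i).collect());

fn table_sum() -> u32 {
    TABLE.iter().sum()
}
```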
#[test]
fn union_rectangle_sets_extents_and_single_rectangle_for_empty_region() {


@ -63,17 +63,23 @@ impl<'a> BitStream<'a> {
}
pub fn encode(mode: EntropyAlgorithm, input: &[i16], tile: &mut [u8]) -> Result<usize, RlgrError> {
let mut k: u32 = 1;
let kr: u32 = 1;
let mut kp: u32 = k << LS_GR;
let mut krp: u32 = kr << LS_GR;
#![expect(
clippy::as_conversions,
reason = "u32-to-usize and usize-to-u32 conversions, mostly fine, and hot loop"
)]
if input.is_empty() {
return Err(RlgrError::EmptyTile);
}
let mut k: u32 = 1;
let kr: u32 = 1;
let mut kp: u32 = k << LS_GR;
let mut krp: u32 = kr << LS_GR;
let mut bits = BitStream::new(tile);
let mut input = input.iter().peekable();
while input.peek().is_some() {
match CompressionMode::from(k) {
CompressionMode::RunLength => {
@ -98,7 +104,7 @@ pub fn encode(mode: EntropyAlgorithm, input: &[i16], tile: &mut [u8]) -> Result<
bits.output_bits(k as usize, nz);
if let Some(val) = input.next() {
let mag = val.unsigned_abs() as u32;
let mag = u32::from(val.unsigned_abs());
bits.output_bit(1, *val < 0);
code_gr(&mut bits, &mut krp, mag - 1);
}
@ -106,9 +112,11 @@ pub fn encode(mode: EntropyAlgorithm, input: &[i16], tile: &mut [u8]) -> Result<
k = kp >> LS_GR;
}
CompressionMode::GolombRice => {
#[expect(clippy::missing_panics_doc, reason = "unreachable panic (prior check)")]
let input_first = *input
.next()
.expect("value is guaranteed to be `Some` due to the prior check");
match mode {
EntropyAlgorithm::Rlgr1 => {
let two_ms = get_2magsign(input_first);
@ -150,37 +158,53 @@ pub fn encode(mode: EntropyAlgorithm, input: &[i16], tile: &mut [u8]) -> Result<
fn get_2magsign(val: i16) -> u32 {
let sign = if val < 0 { 1 } else { 0 };
(val.unsigned_abs() as u32) * 2 - sign
(u32::from(val.unsigned_abs())) * 2 - sign
}
fn code_gr(bits: &mut BitStream<'_>, krp: &mut u32, val: u32) {
let kr = (*krp >> LS_GR) as usize;
let vk = (val >> kr) as usize;
#![expect(
clippy::as_conversions,
reason = "u32-to-usize and usize-to-u32 conversions, mostly fine, and hot loop"
)]
bits.output_bit(vk, true);
let kr = (*krp >> LS_GR) as usize;
let vk = val >> kr;
let vk_usize = vk as usize;
bits.output_bit(vk_usize, true);
bits.output_bit(1, false);
if kr != 0 {
let remainder = val & ((1 << kr) - 1);
bits.output_bits(kr, remainder);
}
if vk == 0 {
*krp = krp.saturating_sub(2);
} else if vk > 1 {
*krp = min(*krp + vk as u32, KP_MAX);
*krp = min(*krp + vk, KP_MAX);
}
}
pub fn decode(mode: EntropyAlgorithm, tile: &[u8], mut output: &mut [i16]) -> Result<(), RlgrError> {
let mut k: u32 = 1;
let mut kr: u32 = 1;
let mut kp: u32 = k << LS_GR;
let mut krp: u32 = kr << LS_GR;
#![expect(
clippy::as_conversions,
clippy::cast_possible_truncation,
reason = "u32-to-usize and usize-to-u32 conversions, mostly fine, and hot loop"
)]
if tile.is_empty() {
return Err(RlgrError::EmptyTile);
}
let mut k: u32 = 1;
let mut kr: u32 = 1;
let mut kp: u32 = k << LS_GR;
let mut krp: u32 = kr << LS_GR;
let mut bits = Bits::new(BitSlice::from_slice(tile));
while !bits.is_empty() && !output.is_empty() {
match CompressionMode::from(k) {
CompressionMode::RunLength => {
@ -199,7 +223,7 @@ pub fn decode(mode: EntropyAlgorithm, tile: &[u8], mut output: &mut [i16]) -> Re
kp = kp.saturating_sub(DN_GR);
k = kp >> LS_GR;
let magnitude = compute_rl_magnitude(sign_bit, code_remainder);
let magnitude = compute_rl_magnitude(sign_bit, code_remainder)?;
let size = min(run as usize, output.len());
fill(&mut output[..size], 0);
@ -216,7 +240,7 @@ pub fn decode(mode: EntropyAlgorithm, tile: &[u8], mut output: &mut [i16]) -> Re
match mode {
EntropyAlgorithm::Rlgr1 => {
let magnitude = compute_rlgr1_magnitude(code_remainder, &mut k, &mut kp);
let magnitude = compute_rlgr1_magnitude(code_remainder, &mut k, &mut kp)?;
write_byte!(output, magnitude);
}
EntropyAlgorithm::Rlgr3 => {
@ -232,10 +256,10 @@ pub fn decode(mode: EntropyAlgorithm, tile: &[u8], mut output: &mut [i16]) -> Re
k = kp >> LS_GR;
}
let magnitude = compute_rlgr3_magnitude(val1);
let magnitude = compute_rlgr3_magnitude(val1)?;
write_byte!(output, magnitude);
let magnitude = compute_rlgr3_magnitude(val2);
let magnitude = compute_rlgr3_magnitude(val2)?;
write_byte!(output, magnitude);
}
}
@ -243,7 +267,7 @@ pub fn decode(mode: EntropyAlgorithm, tile: &[u8], mut output: &mut [i16]) -> Re
}
}
// fill remaining buffer with zeros
// Fill remaining buffer with zeros.
fill(output, 0);
Ok(())
@ -286,37 +310,41 @@ fn count_run(number_of_zeros: usize, k: &mut u32, kp: &mut u32) -> u32 {
.sum()
}
fn compute_rl_magnitude(sign_bit: u8, code_remainder: u32) -> i16 {
fn compute_rl_magnitude(sign_bit: u8, code_remainder: u32) -> Result<i16, RlgrError> {
let rl_magnitude =
i16::try_from(code_remainder + 1).map_err(|_| RlgrError::InvalidIntegralConversion("code remainder + 1"))?;
if sign_bit != 0 {
-((code_remainder + 1) as i16)
Ok(-rl_magnitude)
} else {
(code_remainder + 1) as i16
Ok(rl_magnitude)
}
}
fn compute_rlgr1_magnitude(code_remainder: u32, k: &mut u32, kp: &mut u32) -> i16 {
fn compute_rlgr1_magnitude(code_remainder: u32, k: &mut u32, kp: &mut u32) -> Result<i16, RlgrError> {
if code_remainder == 0 {
*kp = min(*kp + UQ_GR, KP_MAX);
*k = *kp >> LS_GR;
0
Ok(0)
} else {
*kp = kp.saturating_sub(DQ_GR);
*k = *kp >> LS_GR;
if code_remainder % 2 != 0 {
-(((code_remainder + 1) >> 1) as i16)
Ok(-i16::try_from((code_remainder + 1) >> 1)
.map_err(|_| RlgrError::InvalidIntegralConversion("(code remainder + 1) >> 1"))?)
} else {
(code_remainder >> 1) as i16
i16::try_from(code_remainder >> 1).map_err(|_| RlgrError::InvalidIntegralConversion("code remainder >> 1"))
}
}
}
fn compute_rlgr3_magnitude(val: u32) -> i16 {
fn compute_rlgr3_magnitude(val: u32) -> Result<i16, RlgrError> {
if val % 2 != 0 {
-(((val + 1) >> 1) as i16)
Ok(-i16::try_from((val + 1) >> 1).map_err(|_| RlgrError::InvalidIntegralConversion("(val + 1) >> 1"))?)
} else {
(val >> 1) as i16
i16::try_from(val >> 1).map_err(|_| RlgrError::InvalidIntegralConversion("val >> 1"))
}
}
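The magnitude helpers above undo a sign/magnitude interleaving: even codes decode to non-negative values and odd codes to negative ones, and the new `Result` return surfaces out-of-range conversions instead of truncating. A checked sketch mirroring `compute_rlgr3_magnitude` (the error type here is illustrative):

```rust
#[derive(Debug, PartialEq)]
enum ConvError {
    OutOfRange(&'static str),
}

// Codes 0, 1, 2, 3, 4, ... decode to 0, -1, 1, -2, 2, ...
fn magnitude(val: u32) -> Result<i16, ConvError> {
    if val % 2 != 0 {
        Ok(-i16::try_from((val + 1) >> 1).map_err(|_| ConvError::OutOfRange("(val + 1) >> 1"))?)
    } else {
        i16::try_from(val >> 1).map_err(|_| ConvError::OutOfRange("val >> 1"))
    }
}
```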
@ -333,11 +361,17 @@ fn compute_n_index(code_remainder: u32) -> usize {
}
fn update_parameters_according_to_number_of_ones(number_of_ones: usize, kr: &mut u32, krp: &mut u32) {
#![expect(
clippy::as_conversions,
clippy::cast_possible_truncation,
reason = "usize-to-u32 conversions, hot loop"
)]
if number_of_ones == 0 {
*krp = (*krp).saturating_sub(2);
*kr = *krp >> LS_GR;
} else if number_of_ones > 1 {
*krp = min(*krp + number_of_ones as u32, KP_MAX);
*krp = min(*krp + (number_of_ones as u32), KP_MAX);
*kr = *krp >> LS_GR;
}
}
@ -363,6 +397,7 @@ pub enum RlgrError {
Io(io::Error),
Yuv(YuvError),
EmptyTile,
InvalidIntegralConversion(&'static str),
}
impl core::fmt::Display for RlgrError {
@ -371,6 +406,7 @@ impl core::fmt::Display for RlgrError {
Self::Io(_) => write!(f, "IO error"),
Self::Yuv(_) => write!(f, "YUV error"),
Self::EmptyTile => write!(f, "the input tile is empty"),
Self::InvalidIntegralConversion(s) => write!(f, "invalid `{s}`: out of range integral type conversion"),
}
}
}
@ -381,6 +417,7 @@ impl core::error::Error for RlgrError {
Self::Io(error) => Some(error),
Self::Yuv(error) => Some(error),
Self::EmptyTile => None,
Self::InvalidIntegralConversion(_) => None,
}
}
}


@ -23,12 +23,14 @@ impl<'a> SegmentedDataPdu<'a> {
match descriptor {
SegmentedDescriptor::Single => Ok(SegmentedDataPdu::Single(BulkEncodedData::from_buffer(buffer)?)),
SegmentedDescriptor::Multipart => {
let segment_count = buffer.read_u16::<LittleEndian>()? as usize;
let uncompressed_size = buffer.read_u32::<LittleEndian>()? as usize;
let segment_count = usize::from(buffer.read_u16::<LittleEndian>()?);
let uncompressed_size = usize::try_from(buffer.read_u32::<LittleEndian>()?)
.map_err(|_| ZgfxError::InvalidIntegralConversion("segments uncompressed size"))?;
let mut segments = Vec::with_capacity(segment_count);
for _ in 0..segment_count {
let size = buffer.read_u32::<LittleEndian>()? as usize;
let size = usize::try_from(buffer.read_u32::<LittleEndian>()?)
.map_err(|_| ZgfxError::InvalidIntegralConversion("segment data size"))?;
let (segment_data, new_buffer) = buffer.split_at(size);
buffer = new_buffer;
@ -84,7 +86,7 @@ bitflags! {
#[cfg(test)]
mod test {
use lazy_static::lazy_static;
use std::sync::LazyLock;
use super::*;
@ -111,29 +113,30 @@ mod test {
0x02, // the third segment: data
];
lazy_static! {
static ref SINGLE_SEGMENTED_DATA_PDU: SegmentedDataPdu<'static> = SegmentedDataPdu::Single(BulkEncodedData {
static SINGLE_SEGMENTED_DATA_PDU: LazyLock<SegmentedDataPdu<'static>> = LazyLock::new(|| {
SegmentedDataPdu::Single(BulkEncodedData {
compression_flags: CompressionFlags::COMPRESSED,
data: &SINGLE_SEGMENTED_DATA_PDU_BUFFER[2..],
});
static ref MULTIPART_SEGMENTED_DATA_PDU: SegmentedDataPdu<'static> = SegmentedDataPdu::Multipart {
})
});
static MULTIPART_SEGMENTED_DATA_PDU: LazyLock<SegmentedDataPdu<'static>> =
LazyLock::new(|| SegmentedDataPdu::Multipart {
uncompressed_size: 0x2B,
segments: vec![
BulkEncodedData {
compression_flags: CompressionFlags::empty(),
data: &MULTIPART_SEGMENTED_DATA_PDU_BUFFER[12..12 + 16]
data: &MULTIPART_SEGMENTED_DATA_PDU_BUFFER[12..12 + 16],
},
BulkEncodedData {
compression_flags: CompressionFlags::empty(),
data: &MULTIPART_SEGMENTED_DATA_PDU_BUFFER[33..33 + 13]
data: &MULTIPART_SEGMENTED_DATA_PDU_BUFFER[33..33 + 13],
},
BulkEncodedData {
compression_flags: CompressionFlags::COMPRESSED,
data: &MULTIPART_SEGMENTED_DATA_PDU_BUFFER[51..]
data: &MULTIPART_SEGMENTED_DATA_PDU_BUFFER[51..],
},
],
};
}
});
#[test]
fn from_buffer_correctly_parses_zgfx_single_segmented_data_pdu() {


@ -4,6 +4,7 @@ mod circular_buffer;
mod control_messages;
use std::io::{self, Write as _};
use std::sync::LazyLock;
use bitvec::bits;
use bitvec::field::BitField as _;
@ -78,8 +79,8 @@ impl Decompressor {
let mut bits = BitSlice::from_slice(encoded_data);
// The value of the last byte indicates the number of unused bits in the final byte
bits =
&bits[..8 * (encoded_data.len() - 1) - *encoded_data.last().expect("encoded_data is not empty") as usize];
bits = &bits
[..8 * (encoded_data.len() - 1) - usize::from(*encoded_data.last().expect("encoded_data is not empty"))];
let mut bits = Bits::new(bits);
let mut bytes_written = 0;
@ -134,14 +135,15 @@ fn handle_match(
distance_base: u32,
history: &mut FixedCircularBuffer,
output: &mut Vec<u8>,
) -> io::Result<usize> {
) -> Result<usize, ZgfxError> {
// Each token has been assigned a different base distance
// and number of additional value bits to be added to compute the full distance.
let distance = (distance_base + bits.split_to(distance_value_size).load_be::<u32>()) as usize;
let distance = usize::try_from(distance_base + bits.split_to(distance_value_size).load_be::<u32>())
.map_err(|_| ZgfxError::InvalidIntegralConversion("token's full distance"))?;
if distance == 0 {
read_unencoded_bytes(bits, history, output)
read_unencoded_bytes(bits, history, output).map_err(ZgfxError::from)
} else {
read_encoded_bytes(bits, distance, history, output)
}
@ -155,7 +157,7 @@ fn read_unencoded_bytes(
// A match distance of zero is a special case,
// which indicates that an unencoded run of bytes follows.
// The count of bytes is encoded as a 15-bit value
let length = bits.split_to(15).load_be::<u32>() as usize;
let length = bits.split_to(15).load_be::<usize>();
if bits.remaining_bits_of_last_byte() > 0 {
let pad_to_byte_boundary = 8 - bits.remaining_bits_of_last_byte();
@ -178,7 +180,7 @@ fn read_encoded_bytes(
distance: usize,
history: &mut FixedCircularBuffer,
output: &mut Vec<u8>,
) -> io::Result<usize> {
) -> Result<usize, ZgfxError> {
// A match length prefix follows the token and indicates
// how many additional bits will be needed to get the full length
// (the number of bytes to be copied).
@ -191,9 +193,12 @@ fn read_encoded_bytes(
3
} else {
let length = bits.split_to(length_token_size + 1).load_be::<u32>() as usize;
let length = bits.split_to(length_token_size + 1).load_be::<usize>();
let base = 2u32.pow(length_token_size as u32 + 1) as usize;
let length_token_size = u32::try_from(length_token_size)
.map_err(|_| ZgfxError::InvalidIntegralConversion("length of the token size"))?;
let base = 2usize.pow(length_token_size + 1);
base + length
};
@ -223,8 +228,8 @@ enum TokenType {
},
}
lazy_static::lazy_static! {
static ref TOKEN_TABLE: [Token; 40] = [
static TOKEN_TABLE: LazyLock<[Token; 40]> = LazyLock::new(|| {
[
Token {
prefix: bits![static u8, Msb0; 0],
ty: TokenType::NullLiteral,
@ -427,8 +432,8 @@ lazy_static::lazy_static! {
distance_base: 17_094_304,
},
},
];
}
]
});
#[derive(Debug)]
pub enum ZgfxError {
@ -440,6 +445,7 @@ pub enum ZgfxError {
uncompressed_size: usize,
},
TokenBitsNotFound,
InvalidIntegralConversion(&'static str),
}
impl core::fmt::Display for ZgfxError {
@ -456,6 +462,7 @@ impl core::fmt::Display for ZgfxError {
"decompressed size of segments ({decompressed_size}) does not equal to uncompressed size ({uncompressed_size})",
),
Self::TokenBitsNotFound => write!(f, "token bits not found"),
Self::InvalidIntegralConversion(type_name) => write!(f, "invalid `{type_name}`: out of range integral type conversion"),
}
}
}
@ -468,6 +475,7 @@ impl core::error::Error for ZgfxError {
Self::InvalidSegmentedDescriptor => None,
Self::InvalidDecompressedSize { .. } => None,
Self::TokenBitsNotFound => None,
Self::InvalidIntegralConversion(_) => None,
}
}
}


@ -24,6 +24,10 @@ pub enum MouseButton {
}
impl MouseButton {
#[expect(
clippy::as_conversions,
reason = "guarantees discriminant layout, and as is the only way to cast enum -> primitive"
)]
pub fn as_idx(self) -> usize {
self as usize
}
@ -78,7 +82,11 @@ impl Scancode {
pub const fn from_u16(scancode: u16) -> Self {
let extended = scancode & 0xE000 == 0xE000;
#[expect(clippy::cast_possible_truncation)] // truncating on purpose
#[expect(
clippy::as_conversions,
clippy::cast_possible_truncation,
reason = "truncating on purpose"
)]
let code = scancode as u8;
Self { code, extended }
@ -86,6 +94,7 @@ impl Scancode {
pub fn as_idx(self) -> usize {
if self.extended {
#[expect(clippy::missing_panics_doc, reason = "unreachable panic (integer upcast)")]
usize::from(self.code).checked_add(256).expect("never overflow")
} else {
usize::from(self.code)
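The `as_idx` logic above flattens `(code, extended)` into a 512-entry index: non-extended scancodes occupy 0..256 and extended ones 256..512, so the checked add can never overflow. A sketch with the struct shape copied from the hunk:

```rust
struct Scancode {
    code: u8,
    extended: bool,
}

impl Scancode {
    // Non-extended codes map to 0..256, extended codes to 256..512.
    fn as_idx(self) -> usize {
        if self.extended {
            usize::from(self.code).checked_add(256).expect("never overflow")
        } else {
            usize::from(self.code)
        }
    }
}
```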
@ -343,6 +352,7 @@ impl Database {
let mut events = SmallVec::new();
for idx in self.mouse_buttons.iter_ones() {
#[expect(clippy::missing_panics_doc, reason = "unreachable panic (checked integer downcast)")]
let button = MouseButton::from_idx(idx).expect("in-range index");
let event = match MouseButtonFlags::from(button) {
@ -365,9 +375,12 @@ impl Database {
// The keyboard bit array size is 512.
for idx in self.keyboard.iter_ones() {
let (scancode, extended) = if idx >= 256 {
#[expect(clippy::missing_panics_doc, reason = "unreachable panic (checked integer underflow)")]
let extended_code = idx.checked_sub(256).expect("never underflow");
#[expect(clippy::missing_panics_doc, reason = "unreachable panic (checked integer downcast)")]
(u8::try_from(extended_code).expect("always in the range"), true)
} else {
#[expect(clippy::missing_panics_doc, reason = "unreachable panic (checked integer downcast)")]
(u8::try_from(idx).expect("always in the range"), false)
};


@ -30,12 +30,12 @@ hyper-util = { version = "0.1", features = ["tokio"] }
hyper = { version = "1.7", features = ["client", "http1"] }
ironrdp-core = { path = "../ironrdp-core", version = "0.1", features = ["std"] }
ironrdp-error = { path = "../ironrdp-error", version = "0.1" }
ironrdp-tls = { path = "../ironrdp-tls", version = "0.1" }
ironrdp-tls = { path = "../ironrdp-tls", version = "0.2" }
log = "0.4"
tokio-tungstenite = { version = "0.27" }
tokio-tungstenite = { version = "0.28" }
tokio-util = { version = "0.7" }
tokio = { version = "1.43", features = ["macros", "rt"] }
uuid = { version = "1.16", features = ["v4"] }
uuid = { version = "1.19", features = ["v4"] }
[lints]
workspace = true


@ -144,7 +144,7 @@ impl GwClient {
.header(hyper::header::SEC_WEBSOCKET_VERSION, "13")
.header(hyper::header::SEC_WEBSOCKET_KEY, generate_key())
.body(http_body_util::Empty::<Bytes>::new())
.expect("Failed to build request");
.map_err(|e| custom_err!("failed to build request", e))?;
let stream = hyper_util::rt::tokio::TokioIo::new(stream);
let (mut sender, mut conn) = hyper::client::conn::http1::handshake(stream)
@ -200,8 +200,7 @@ impl GwClient {
let work = tokio::spawn(async move {
let iv = Duration::from_secs(15 * 60);
let mut keepalive_interval: tokio::time::Interval =
tokio::time::interval_at(tokio::time::Instant::now() + iv, iv);
let mut keepalive_interval = tokio::time::interval_at(tokio::time::Instant::now() + iv, iv);
loop {
let mut wsbuf = [0u8; 8192];
@ -222,7 +221,8 @@ impl GwClient {
let mut cur = ReadCursor::new(&msg);
let hdr = PktHdr::decode(&mut cur).map_err(|e| custom_err!("Header Decode", e))?;
assert!(cur.len() >= hdr.length as usize - hdr.size());
let header_length = usize::try_from(hdr.length).map_err(|_| Error::new("PktHdr too big", GwErrorKind::Decode))?;
assert!(cur.len() >= header_length - hdr.size());
match hdr.ty {
PktTy::Keepalive => {
continue;
@ -288,7 +288,10 @@ impl GwConn {
let mut cur = ReadCursor::new(&msg);
let hdr = PktHdr::decode(&mut cur).map_err(|_| Error::new("PktHdr", GwErrorKind::Decode))?;
if cur.len() != hdr.length as usize - hdr.size() {
let header_length =
usize::try_from(hdr.length).map_err(|_| Error::new("PktHdr too big", GwErrorKind::Decode))?;
if cur.len() != header_length - hdr.size() {
return Err(Error::new("read_packet", GwErrorKind::PacketEof));
}
@ -316,7 +319,7 @@ impl GwConn {
async fn tunnel(&mut self) -> Result<(), Error> {
let req = TunnelReqPkt {
// Havent seen any server working without this.
caps: HttpCapsTy::MessagingConsentSign as u32,
caps: HttpCapsTy::MessagingConsentSign.as_u32(),
fields_present: 0,
..TunnelReqPkt::default()
};
@ -351,7 +354,7 @@ impl GwConn {
let resp: TunnelAuthRespPkt =
TunnelAuthRespPkt::decode(&mut cur).map_err(|_| Error::new("TunnelAuth", GwErrorKind::Decode))?;
if resp.error_code != 0 {
if resp.error_code() != 0 {
return Err(Error::new("TunnelAuth", GwErrorKind::Connect));
}
Ok(())
@ -370,7 +373,7 @@ impl GwConn {
let mut cur: ReadCursor<'_> = ReadCursor::new(&bytes);
let resp: ChannelResp =
ChannelResp::decode(&mut cur).map_err(|_| Error::new("ChannelResp", GwErrorKind::Decode))?;
if resp.error_code != 0 {
if resp.error_code() != 0 {
return Err(Error::new("ChannelCreate", GwErrorKind::Connect));
}
assert!(cur.eof());


@ -37,6 +37,16 @@ pub(crate) enum PktTy {
Keepalive = 0x0D,
}
impl PktTy {
#[expect(
clippy::as_conversions,
reason = "guarantees discriminant layout, and as is the only way to cast enum -> primitive"
)]
fn as_u16(self) -> u16 {
self as u16
}
}
impl TryFrom<u16> for PktTy {
type Error = ();
@ -66,7 +76,7 @@ impl TryFrom<u16> for PktTy {
#[derive(Default, Debug)]
pub(crate) struct PktHdr {
pub ty: PktTy,
_reserved: u16,
pub _reserved: u16,
pub length: u32,
}
@ -78,7 +88,7 @@ impl Encode for PktHdr {
fn encode(&self, dst: &mut WriteCursor<'_>) -> ironrdp_core::EncodeResult<()> {
ensure_size!(in: dst, size: self.size());
dst.write_u16(self.ty as u16);
dst.write_u16(self.ty.as_u16());
dst.write_u16(self._reserved);
dst.write_u32(self.length);
@ -183,7 +193,7 @@ impl Decode<'_> for HandshakeRespPkt {
pub(crate) struct TunnelReqPkt {
pub caps: u32,
pub fields_present: u16,
pub(crate) _reserved: u16,
pub _reserved: u16,
}
impl Encode for TunnelReqPkt {
@ -215,6 +225,7 @@ impl Encode for TunnelReqPkt {
/// 2.2.5.3.9 HTTP_CAPABILITY_TYPE Enumeration
#[repr(u32)]
#[expect(dead_code)]
#[derive(Copy, Clone)]
pub(crate) enum HttpCapsTy {
QuarSOH = 1,
IdleTimeout = 2,
@ -224,8 +235,19 @@ pub(crate) enum HttpCapsTy {
UdpTransport = 0x20,
}
impl HttpCapsTy {
#[expect(
clippy::as_conversions,
reason = "guarantees discriminant layout, and as is the only way to cast enum -> primitive"
)]
pub(crate) fn as_u32(self) -> u32 {
self as u32
}
}
/// 2.2.5.3.8 HTTP_TUNNEL_RESPONSE_FIELDS_PRESENT_FLAGS
#[repr(u16)]
#[derive(Copy, Clone)]
enum HttpTunnelResponseFields {
TunnelID = 1,
Caps = 2,
@ -234,6 +256,16 @@ enum HttpTunnelResponseFields {
Consent = 0x10,
}
impl HttpTunnelResponseFields {
#[expect(
clippy::as_conversions,
reason = "guarantees discriminant layout, and as is the only way to cast enum -> primitive"
)]
fn as_u16(self) -> u16 {
self as u16
}
}
/// 2.2.10.20 HTTP_TUNNEL_RESPONSE Structure
#[derive(Debug, Default)]
pub(crate) struct TunnelRespPkt {
@ -266,26 +298,26 @@ impl Decode<'_> for TunnelRespPkt {
..TunnelRespPkt::default()
};
if pkt.fields_present & (HttpTunnelResponseFields::TunnelID as u16) != 0 {
if pkt.fields_present & (HttpTunnelResponseFields::TunnelID.as_u16()) != 0 {
ensure_size!(in: src, size: 4);
pkt.tunnel_id = Some(src.read_u32());
}
if pkt.fields_present & (HttpTunnelResponseFields::Caps as u16) != 0 {
if pkt.fields_present & (HttpTunnelResponseFields::Caps.as_u16()) != 0 {
ensure_size!(in: src, size: 4);
pkt.caps_flags = Some(src.read_u32());
}
if pkt.fields_present & (HttpTunnelResponseFields::Soh as u16) != 0 {
if pkt.fields_present & (HttpTunnelResponseFields::Soh.as_u16()) != 0 {
ensure_size!(in: src, size: 2 + 2);
pkt.nonce = Some(src.read_u16());
let len = src.read_u16();
ensure_size!(in: src, size: len as usize);
pkt.server_cert = src.read_slice(len as usize).to_vec();
let len = usize::from(src.read_u16());
ensure_size!(in: src, size: len);
pkt.server_cert = src.read_slice(len).to_vec();
}
if pkt.fields_present & (HttpTunnelResponseFields::Consent as u16) != 0 {
if pkt.fields_present & (HttpTunnelResponseFields::Consent.as_u16()) != 0 {
ensure_size!(in: src, size: 2);
let len = src.read_u16();
ensure_size!(in: src, size: len as usize);
pkt.consent_msg = src.read_slice(len as usize).to_vec();
let len = usize::from(src.read_u16());
ensure_size!(in: src, size: len);
pkt.consent_msg = src.read_slice(len).to_vec();
}
Ok(pkt)
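The length-prefixed reads above all follow one pattern: take a `u16` length, widen it with `usize::from`, check the remaining buffer, then slice. A minimal sketch of that pattern without the `ensure_size!`/`ReadCursor` machinery (the function and error strings are illustrative):

```rust
// Parse a little-endian u16 length prefix followed by that many bytes,
// returning (payload, remaining input) or an error on truncation.
fn read_length_prefixed(src: &[u8]) -> Result<(&[u8], &[u8]), &'static str> {
    let (len_bytes, rest) = src.split_at_checked(2).ok_or("truncated length prefix")?;
    let len = usize::from(u16::from_le_bytes([len_bytes[0], len_bytes[1]]));
    rest.split_at_checked(len).ok_or("truncated payload")
}
```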
@ -330,12 +362,12 @@ impl Decode<'_> for ExtendedAuthPkt {
fn decode(src: &mut ReadCursor<'_>) -> ironrdp_core::DecodeResult<Self> {
ensure_size!(in: src, size: 4 + 2);
let error_code = src.read_u32();
let len = src.read_u16();
ensure_size!(in: src, size: len as usize);
let len = usize::from(src.read_u16());
ensure_size!(in: src, size: len);
Ok(ExtendedAuthPkt {
error_code,
blob: src.read_slice(len as usize).to_vec(),
blob: src.read_slice(len).to_vec(),
})
}
}
@ -384,13 +416,17 @@ impl Encode for TunnelAuthPkt {
/// 2.2.10.16 HTTP_TUNNEL_AUTH_RESPONSE Structure
#[derive(Debug)]
pub(crate) struct TunnelAuthRespPkt {
pub error_code: u32,
error_code: u32,
_fields_present: u16,
_reserved: u16,
}
impl TunnelAuthRespPkt {
const FIXED_PART_SIZE: usize = 4 /* error_code */ + 2 /* fields_present */ + 2 /* _reserved */;
pub(crate) fn error_code(&self) -> u32 {
self.error_code
}
}
impl Decode<'_> for TunnelAuthRespPkt {
@ -455,7 +491,7 @@ impl Encode for ChannelPkt {
/// 2.2.10.4 HTTP_CHANNEL_RESPONSE
#[derive(Default, Debug)]
pub(crate) struct ChannelResp {
pub error_code: u32,
error_code: u32,
fields_present: u16,
_reserved: u16,
@ -467,6 +503,10 @@ pub(crate) struct ChannelResp {
impl ChannelResp {
const FIXED_PART_SIZE: usize = 4 /* error_code */ + 2 /* fields_present */ + 2 /* _reserved */;
pub(crate) fn error_code(&self) -> u32 {
self.error_code
}
}
impl Decode<'_> for ChannelResp {
@ -489,9 +529,9 @@ impl Decode<'_> for ChannelResp {
}
if resp.fields_present & 4 != 0 {
ensure_size!(in: src, size: 2);
let len = src.read_u16();
ensure_size!(in: src, size: len as usize);
resp.authn_cookie = src.read_slice(len as usize).to_vec();
let len = usize::from(src.read_u16());
ensure_size!(in: src, size: len);
resp.authn_cookie = src.read_slice(len).to_vec();
}
Ok(resp)
}
@ -530,10 +570,10 @@ impl Encode for DataPkt<'_> {
impl<'a> Decode<'a> for DataPkt<'a> {
fn decode(src: &mut ReadCursor<'a>) -> ironrdp_core::DecodeResult<Self> {
ensure_size!(in: src, size: 2);
let len = src.read_u16();
ensure_size!(in: src, size: len as usize);
let len = usize::from(src.read_u16());
ensure_size!(in: src, size: len);
Ok(DataPkt {
data: src.read_slice(len as usize),
data: src.read_slice(len),
})
}
}


@ -44,7 +44,6 @@ pkcs1 = "0.7"
[dev-dependencies]
expect-test.workspace = true
lazy_static.workspace = true # TODO: remove in favor of https://doc.rust-lang.org/std/sync/struct.OnceLock.html
[lints]
workspace = true


@ -7,8 +7,8 @@ use core::fmt::{self, Debug};
use bitflags::bitflags;
use ironrdp_core::{
ensure_fixed_part_size, ensure_size, invalid_field_err, Decode, DecodeResult, Encode, EncodeResult, ReadCursor,
WriteCursor,
cast_length, ensure_fixed_part_size, ensure_size, invalid_field_err, Decode, DecodeResult, Encode, EncodeResult,
ReadCursor, WriteCursor,
};
use crate::geometry::InclusiveRectangle;
@ -41,11 +41,9 @@ impl Encode for BitmapUpdateData<'_> {
fn encode(&self, dst: &mut WriteCursor<'_>) -> EncodeResult<()> {
ensure_size!(in: dst, size: self.size());
if self.rectangles.len() > u16::MAX as usize {
return Err(invalid_field_err!("numberRectangles", "rectangle count is too big"));
}
let rectangle_count = cast_length!("number of rectangles", self.rectangles.len())?;
Self::encode_header(self.rectangles.len() as u16, dst)?;
Self::encode_header(rectangle_count, dst)?;
for bitmap_data in self.rectangles.iter() {
bitmap_data.encode(dst)?;
@ -74,10 +72,10 @@ impl<'de> Decode<'de> for BitmapUpdateData<'de> {
return Err(invalid_field_err!("updateType", "invalid update type"));
}
let rectangles_number = src.read_u16() as usize;
let mut rectangles = Vec::with_capacity(rectangles_number);
let rectangle_count = usize::from(src.read_u16());
let mut rectangles = Vec::with_capacity(rectangle_count);
for _ in 0..rectangles_number {
for _ in 0..rectangle_count {
rectangles.push(BitmapData::decode(src)?);
}
@ -111,16 +109,14 @@ impl Encode for BitmapData<'_> {
ensure_size!(in: dst, size: self.size());
let encoded_bitmap_data_length = self.encoded_bitmap_data_length();
if encoded_bitmap_data_length > u16::MAX as usize {
return Err(invalid_field_err!("bitmapLength", "bitmap data length is too big"));
}
let encoded_bitmap_data_length = cast_length!("bitmap data length", encoded_bitmap_data_length)?;
self.rectangle.encode(dst)?;
dst.write_u16(self.width);
dst.write_u16(self.height);
dst.write_u16(self.bits_per_pixel);
dst.write_u16(self.compression_flags.bits());
dst.write_u16(encoded_bitmap_data_length as u16);
dst.write_u16(encoded_bitmap_data_length);
if let Some(compressed_data_header) = &self.compressed_data_header {
compressed_data_header.encode(dst)?;
};
@ -150,25 +146,25 @@ impl<'de> Decode<'de> for BitmapData<'de> {
// A 16-bit, unsigned integer. The size in bytes of the data in the bitmapComprHdr
// and bitmapDataStream fields.
let encoded_bitmap_data_length = src.read_u16();
let encoded_bitmap_data_length = usize::from(src.read_u16());
ensure_size!(in: src, size: encoded_bitmap_data_length as usize);
ensure_size!(in: src, size: encoded_bitmap_data_length);
let (compressed_data_header, buffer_length) = if compression_flags.contains(Compression::BITMAP_COMPRESSION)
&& !compression_flags.contains(Compression::NO_BITMAP_COMPRESSION_HDR)
{
// Check if encoded_bitmap_data_length is at least CompressedDataHeader::ENCODED_SIZE
if encoded_bitmap_data_length < CompressedDataHeader::ENCODED_SIZE as u16 {
if encoded_bitmap_data_length < CompressedDataHeader::ENCODED_SIZE {
return Err(invalid_field_err!(
"cbCompEncodedBitmapDataLength",
"length is less than CompressedDataHeader::ENCODED_SIZE"
));
}
let buffer_length = encoded_bitmap_data_length as usize - CompressedDataHeader::ENCODED_SIZE;
let buffer_length = encoded_bitmap_data_length - CompressedDataHeader::ENCODED_SIZE;
(Some(CompressedDataHeader::decode(src)?), buffer_length)
} else {
(None, encoded_bitmap_data_length as usize)
(None, encoded_bitmap_data_length)
};
let bitmap_data = src.read_slice(buffer_length);

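The bitmap hunks above swap hand-rolled `u16::MAX` comparisons for the fallible `cast_length!` conversion. A minimal sketch of the same idea in plain Rust, with a hypothetical `encode_count` standing in for the macro:

```rust
// Sketch: a fallible length cast replacing a manual bounds check.
// `encode_count` is an illustrative stand-in for the cast_length! pattern.
fn encode_count(count: usize) -> Result<u16, String> {
    u16::try_from(count).map_err(|_| format!("rectangle count {count} is too big"))
}

fn main() {
    assert_eq!(encode_count(3).unwrap(), 3);
    assert!(encode_count(70_000).is_err());
}
```

The error path and the cast collapse into one expression, which is why the explicit `if … > u16::MAX` blocks disappear from the diff.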

@@ -56,7 +56,7 @@ impl Encode for BitmapStreamHeader {
fn encode(&self, dst: &mut WriteCursor<'_>) -> EncodeResult<()> {
ensure_size!(in: dst, size: self.size());
let mut header = ((self.enable_rle_compression as u8) << 4) | ((!self.use_alpha as u8) << 5);
let mut header = (u8::from(self.enable_rle_compression) << 4) | (u8::from(!self.use_alpha) << 5);
match self.color_plane_definition {
ColorPlaneDefinition::Argb => {
@@ -68,7 +68,7 @@ impl Encode for BitmapStreamHeader {
..
} => {
// Add cll and cs flags to header
header |= (color_loss_level & 0x07) | ((use_chroma_subsampling as u8) << 3);
header |= (color_loss_level & 0x07) | (u8::from(use_chroma_subsampling) << 3);
}
}

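The header change above prefers `u8::from(bool)` over `as u8` casts when packing flag bits. A small sketch of that packing (bit layout follows the diff; the free function is illustrative):

```rust
// Sketch: packing boolean flags into a header byte with u8::from,
// mirroring the BitmapStreamHeader change above.
fn pack_header(enable_rle: bool, use_alpha: bool) -> u8 {
    (u8::from(enable_rle) << 4) | (u8::from(!use_alpha) << 5)
}

fn main() {
    assert_eq!(pack_header(true, true), 0x10);
    assert_eq!(pack_header(false, false), 0x20);
}
```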

@@ -1,5 +1,6 @@
use std::sync::LazyLock;
use ironrdp_core::{decode, encode};
use lazy_static::lazy_static;
use super::*;
@@ -31,31 +32,29 @@ const BITMAP_BUFFER: [u8; 114] = [
0x55, 0xad, 0x10, 0x10, 0xa8, 0xd8, 0x60, 0x12,
];
lazy_static! {
static ref BITMAP: BitmapUpdateData<'static> = BitmapUpdateData {
rectangles: {
let vec = vec![BitmapData {
rectangle: InclusiveRectangle {
left: 1792,
top: 1024,
right: 1855,
bottom: 1079,
},
width: 64,
height: 56,
bits_per_pixel: 16,
compression_flags: Compression::BITMAP_COMPRESSION,
compressed_data_header: Some(CompressedDataHeader {
main_body_size: 80,
scan_width: 28,
uncompressed_size: 4,
}),
bitmap_data: &BITMAP_BUFFER[30..],
}];
vec
}
};
}
static BITMAP: LazyLock<BitmapUpdateData<'static>> = LazyLock::new(|| BitmapUpdateData {
rectangles: {
let vec = vec![BitmapData {
rectangle: InclusiveRectangle {
left: 1792,
top: 1024,
right: 1855,
bottom: 1079,
},
width: 64,
height: 56,
bits_per_pixel: 16,
compression_flags: Compression::BITMAP_COMPRESSION,
compressed_data_header: Some(CompressedDataHeader {
main_body_size: 80,
scan_width: 28,
uncompressed_size: 4,
}),
bitmap_data: &BITMAP_BUFFER[30..],
}];
vec
},
});
#[test]
fn from_buffer_bitmap_data_parsses_correctly() {

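The test module above migrates from the `lazy_static!` macro to `std::sync::LazyLock` (stable since Rust 1.80), dropping the external dependency. A minimal standalone sketch of the pattern:

```rust
use std::sync::LazyLock;

// Sketch: the std::sync::LazyLock pattern that replaces lazy_static!.
// The static is initialized on first dereference, thread-safely.
static GREETING: LazyLock<String> = LazyLock::new(|| format!("hello, {}", "world"));

fn main() {
    assert_eq!(GREETING.as_str(), "hello, world");
}
```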

@@ -4,8 +4,8 @@ mod tests;
use bit_field::BitField as _;
use bitflags::bitflags;
use ironrdp_core::{
decode_cursor, ensure_fixed_part_size, ensure_size, invalid_field_err, Decode, DecodeError, DecodeResult, Encode,
EncodeResult, InvalidFieldErr as _, ReadCursor, WriteCursor,
cast_length, decode_cursor, ensure_fixed_part_size, ensure_size, invalid_field_err, Decode, DecodeError,
DecodeResult, Encode, EncodeResult, InvalidFieldErr as _, ReadCursor, WriteCursor,
};
use num_derive::FromPrimitive;
use num_traits::FromPrimitive as _;
@@ -19,6 +19,10 @@ use crate::rdp::headers::{CompressionFlags, SHARE_DATA_HEADER_COMPRESSION_MASK};
/// Implements the Fast-Path RDP message header PDU.
/// TS_FP_UPDATE_PDU
#[expect(
clippy::partial_pub_fields,
reason = "this structure is used in the match expression in the integration tests"
)]
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct FastPathHeader {
pub flags: EncryptionFlags,
@@ -42,7 +46,7 @@ impl FastPathHeader {
// it may then be +2 if > 0x7f
let len = self.data_length + Self::FIXED_PART_SIZE + 1;
Self::FIXED_PART_SIZE + per::sizeof_length(len as u16)
Self::FIXED_PART_SIZE + per::sizeof_length(len)
}
}
@@ -56,15 +60,13 @@ impl Encode for FastPathHeader {
dst.write_u8(header);
let length = self.data_length + self.size();
if length > u16::MAX as usize {
return Err(invalid_field_err!("length", "fastpath PDU length is too big"));
}
let length = cast_length!("length", length)?;
if self.forced_long_length {
// Preserve same layout for header as received
per::write_long_length(dst, length as u16);
per::write_long_length(dst, length);
} else {
per::write_length(dst, length as u16);
per::write_length(dst, length);
}
Ok(())
@@ -93,14 +95,15 @@ impl<'de> Decode<'de> for FastPathHeader {
let (length, sizeof_length) = per::read_length(src).map_err(|e| {
DecodeError::invalid_field("", "length", "Invalid encoded fast path PDU length").with_source(e)
})?;
if (length as usize) < sizeof_length + Self::FIXED_PART_SIZE {
let length = usize::from(length);
if length < sizeof_length + Self::FIXED_PART_SIZE {
return Err(invalid_field_err!(
"length",
"received fastpath PDU length is smaller than header size"
));
}
let data_length = length as usize - sizeof_length - Self::FIXED_PART_SIZE;
// Detect case, when received packet has non-optimal packet length packing
let data_length = length - sizeof_length - Self::FIXED_PART_SIZE;
// Detect case, when received packet has non-optimal packet length packing.
let forced_long_length = per::sizeof_length(length) != sizeof_length;
Ok(FastPathHeader {
@@ -131,9 +134,7 @@ impl Encode for FastPathUpdatePdu<'_> {
fn encode(&self, dst: &mut WriteCursor<'_>) -> EncodeResult<()> {
ensure_size!(in: dst, size: self.size());
if self.data.len() > u16::MAX as usize {
return Err(invalid_field_err!("data", "fastpath PDU data is too big"));
}
let data_len = cast_length!("data length", self.data.len())?;
let mut header = 0u8;
header.set_bits(0..4, self.update_code.as_u8());
@@ -148,7 +149,7 @@ impl Encode for FastPathUpdatePdu<'_> {
dst.write_u8(compression_flags_with_type);
}
dst.write_u16(self.data.len() as u16);
dst.write_u16(data_len);
dst.write_slice(self.data);
Ok(())
@@ -200,7 +201,7 @@ impl<'de> Decode<'de> for FastPathUpdatePdu<'de> {
(None, None)
};
let data_length = src.read_u16() as usize;
let data_length = usize::from(src.read_u16());
ensure_size!(in: src, size: data_length);
let data = src.read_slice(data_length);

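The `FastPathHeader::decode` change widens the `u16` length with `usize::from` before the sanity check, so the later subtraction cannot underflow. A sketch of that guard (function name hypothetical):

```rust
// Sketch: lossless widening via usize::from, then subtracting only after the
// check guarantees the length covers the header. None = malformed PDU.
fn data_length(length: u16, sizeof_length: usize, fixed_part: usize) -> Option<usize> {
    let length = usize::from(length);
    if length < sizeof_length + fixed_part {
        return None; // PDU length smaller than header size
    }
    Some(length - sizeof_length - fixed_part)
}

fn main() {
    assert_eq!(data_length(10, 2, 1), Some(7));
    assert_eq!(data_length(2, 2, 1), None);
}
```

Unlike `as usize`, `usize::from(u16)` is guaranteed lossless on every platform, which is the point of the churn in these hunks.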

@@ -1,5 +1,6 @@
use std::sync::LazyLock;
use ironrdp_core::{decode, encode};
use lazy_static::lazy_static;
use super::*;
@@ -29,15 +30,13 @@ const FAST_PATH_HEADER_WITH_FORCED_LONG_LEN_PDU: FastPathHeader = FastPathHeader
forced_long_length: true,
};
lazy_static! {
static ref FAST_PATH_UPDATE_PDU: FastPathUpdatePdu<'static> = FastPathUpdatePdu {
fragmentation: Fragmentation::Single,
update_code: UpdateCode::SurfaceCommands,
compression_flags: None,
compression_type: None,
data: &FAST_PATH_UPDATE_PDU_BUFFER[3..],
};
}
static FAST_PATH_UPDATE_PDU: LazyLock<FastPathUpdatePdu<'static>> = LazyLock::new(|| FastPathUpdatePdu {
fragmentation: Fragmentation::Single,
update_code: UpdateCode::SurfaceCommands,
compression_flags: None,
compression_type: None,
data: &FAST_PATH_UPDATE_PDU_BUFFER[3..],
});
#[test]
fn from_buffer_correctly_parses_fast_path_header_with_short_length() {


@@ -1,6 +1,6 @@
use ironrdp_core::{
ensure_fixed_part_size, ensure_size, invalid_field_err, Decode, DecodeResult, Encode, EncodeResult, ReadCursor,
WriteCursor,
cast_int, cast_length, ensure_fixed_part_size, ensure_size, invalid_field_err, Decode, DecodeResult, Encode,
EncodeResult, ReadCursor, WriteCursor,
};
// Represents `TS_POINT16` described in [MS-RDPBCGR] 2.2.9.1.1.4.1
@@ -69,21 +69,24 @@ macro_rules! check_masks_alignment {
($and_mask:expr, $xor_mask:expr, $pointer_height:expr, $large_ptr:expr) => {{
const AND_MASK_SIZE_FIELD: &str = "lengthAndMask";
const XOR_MASK_SIZE_FIELD: &str = "lengthXorMask";
const U32_MAX: usize = 0xFFFFFFFF;
let pointer_height: usize = cast_int!("pointer height", $pointer_height)?;
let check_mask = |mask: &[u8], field: &'static str| {
if $pointer_height == 0 {
return Err(invalid_field_err!(field, "pointer height cannot be zero"));
}
if $large_ptr && (mask.len() > u32::MAX as usize) {
if $large_ptr && (mask.len() > U32_MAX) {
return Err(invalid_field_err!(field, "pointer mask is too big for u32 size"));
}
if !$large_ptr && (mask.len() > u16::MAX as usize) {
if !$large_ptr && (mask.len() > usize::from(u16::MAX)) {
return Err(invalid_field_err!(field, "pointer mask is too big for u16 size"));
}
if (mask.len() % $pointer_height as usize) != 0 {
if (mask.len() % pointer_height) != 0 {
return Err(invalid_field_err!(field, "pointer mask have incomplete scanlines"));
}
if (mask.len() / $pointer_height as usize) % 2 != 0 {
if (mask.len() / pointer_height) % 2 != 0 {
return Err(invalid_field_err!(
field,
"pointer mask scanlines should be aligned to 16 bits"
@@ -108,8 +111,8 @@ impl Encode for ColorPointerAttribute<'_> {
dst.write_u16(self.width);
dst.write_u16(self.height);
dst.write_u16(self.and_mask.len() as u16);
dst.write_u16(self.xor_mask.len() as u16);
dst.write_u16(cast_length!("and mask length", self.and_mask.len())?);
dst.write_u16(cast_length!("xor mask length", self.xor_mask.len())?);
// Note that masks are written in reverse order. It is not a mistake, that is how the
// message is defined in [MS-RDPBCGR]
dst.write_slice(self.xor_mask);
@@ -135,15 +138,15 @@ impl<'a> Decode<'a> for ColorPointerAttribute<'a> {
let hot_spot = Point16::decode(src)?;
let width = src.read_u16();
let height = src.read_u16();
let length_and_mask = src.read_u16();
let length_xor_mask = src.read_u16();
// Convert to usize during the addition to prevent overflow and match expected type
let expected_masks_size = (length_and_mask as usize) + (length_xor_mask as usize);
let length_and_mask = usize::from(src.read_u16());
let length_xor_mask = usize::from(src.read_u16());
let expected_masks_size = length_and_mask + length_xor_mask;
ensure_size!(in: src, size: expected_masks_size);
let xor_mask = src.read_slice(length_xor_mask as usize);
let and_mask = src.read_slice(length_and_mask as usize);
let xor_mask = src.read_slice(length_xor_mask);
let and_mask = src.read_slice(length_and_mask);
check_masks_alignment!(and_mask, xor_mask, height, false)?;
@@ -270,8 +273,8 @@ impl Encode for LargePointerAttribute<'_> {
dst.write_u16(self.width);
dst.write_u16(self.height);
dst.write_u32(self.and_mask.len() as u32);
dst.write_u32(self.xor_mask.len() as u32);
dst.write_u32(cast_length!("and mask length", self.and_mask.len())?);
dst.write_u32(cast_length!("xor mask length", self.xor_mask.len())?);
// See comment in `ColorPointerAttribute::encode` about encoding order
dst.write_slice(self.xor_mask);
dst.write_slice(self.and_mask);
@@ -298,8 +301,8 @@ impl<'a> Decode<'a> for LargePointerAttribute<'a> {
let width = src.read_u16();
let height = src.read_u16();
// Convert to usize to prevent overflow during addition
let length_and_mask = src.read_u32() as usize;
let length_xor_mask = src.read_u32() as usize;
let length_and_mask = cast_length!("and mask length", src.read_u32())?;
let length_xor_mask = cast_length!("xor mask length", src.read_u32())?;
let expected_masks_size = length_and_mask + length_xor_mask;
ensure_size!(in: src, size: expected_masks_size);


@@ -3,10 +3,10 @@ mod tests;
use bitflags::bitflags;
use ironrdp_core::{
ensure_fixed_part_size, ensure_size, invalid_field_err, Decode, DecodeResult, Encode, EncodeResult, ReadCursor,
WriteCursor,
cast_length, ensure_fixed_part_size, ensure_size, invalid_field_err, Decode, DecodeResult, Encode, EncodeResult,
ReadCursor, WriteCursor,
};
use num_derive::{FromPrimitive, ToPrimitive};
use num_derive::FromPrimitive;
use num_traits::FromPrimitive as _;
use crate::geometry::ExclusiveRectangle;
@@ -126,7 +126,7 @@ impl Encode for FrameMarkerPdu {
fn encode(&self, dst: &mut WriteCursor<'_>) -> EncodeResult<()> {
ensure_fixed_part_size!(in: dst);
dst.write_u16(self.frame_action as u16);
dst.write_u16(self.frame_action.as_u16());
dst.write_u32(self.frame_id.unwrap_or(0));
Ok(())
@@ -197,9 +197,7 @@ impl Encode for ExtendedBitmapDataPdu<'_> {
fn encode(&self, dst: &mut WriteCursor<'_>) -> EncodeResult<()> {
ensure_size!(in: dst, size: self.size());
if self.data.len() > u32::MAX as usize {
return Err(invalid_field_err!("bitmapDataLength", "bitmap data is too big"));
}
let data_len = cast_length!("bitmap data length", self.data.len())?;
dst.write_u8(self.bpp);
let flags = if self.header.is_some() {
@@ -212,7 +210,7 @@ impl Encode for ExtendedBitmapDataPdu<'_> {
dst.write_u8(self.codec_id);
dst.write_u16(self.width);
dst.write_u16(self.height);
dst.write_u32(self.data.len() as u32);
dst.write_u32(data_len);
if let Some(header) = &self.header {
header.encode(dst)?;
}
@@ -240,7 +238,7 @@ impl<'de> Decode<'de> for ExtendedBitmapDataPdu<'de> {
let codec_id = src.read_u8();
let width = src.read_u16();
let height = src.read_u16();
let data_length = src.read_u32() as usize;
let data_length = cast_length!("bitmap data length", src.read_u32())?;
let expected_remaining_size = if flags.contains(BitmapDataFlags::COMPRESSED_BITMAP_HEADER_PRESENT) {
data_length + BitmapDataHeader::ENCODED_SIZE
@@ -352,13 +350,23 @@ impl From<&SurfaceCommand<'_>> for SurfaceCommandType {
}
}
#[derive(Debug, Copy, Clone, PartialEq, Eq, FromPrimitive, ToPrimitive)]
#[derive(Debug, Copy, Clone, PartialEq, Eq, FromPrimitive)]
#[repr(u16)]
pub enum FrameAction {
Begin = 0x00,
End = 0x01,
}
impl FrameAction {
#[expect(
clippy::as_conversions,
reason = "guarantees discriminant layout, and as is the only way to cast enum -> primitive"
)]
pub fn as_u16(self) -> u16 {
self as u16
}
}
bitflags! {
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
struct BitmapDataFlags: u8 {


@@ -1,5 +1,6 @@
use std::sync::LazyLock;
use ironrdp_core::{decode, encode};
use lazy_static::lazy_static;
use super::*;
@@ -75,8 +76,8 @@ const FRAME_MARKER_PDU: SurfaceCommand<'_> = SurfaceCommand::FrameMarker(FrameMa
frame_id: Some(5),
});
lazy_static! {
static ref SURFACE_BITS_PDU: SurfaceCommand<'static> = SurfaceCommand::StreamSurfaceBits(SurfaceBitsPdu {
static SURFACE_BITS_PDU: LazyLock<SurfaceCommand<'static>> = LazyLock::new(|| {
SurfaceCommand::StreamSurfaceBits(SurfaceBitsPdu {
destination: ExclusiveRectangle {
left: 0,
top: 0,
@@ -91,8 +92,8 @@ lazy_static! {
header: None,
data: &SURFACE_BITS_BUFFER[22..],
},
});
}
})
});
#[test]
fn from_buffer_correctly_parses_surface_command_frame_marker() {


@@ -3,13 +3,25 @@ use ironrdp_core::{cast_length, ensure_size, invalid_field_err, ReadCursor, Writ
use crate::{DecodeResult, EncodeResult};
#[repr(u8)]
#[derive(Copy, Clone)]
pub(crate) enum Pc {
Primitive = 0x00,
Construct = 0x20,
}
impl Pc {
#[expect(
clippy::as_conversions,
reason = "guarantees discriminant layout, and as is the only way to cast enum -> primitive"
)]
fn as_u8(self) -> u8 {
self as u8
}
}
#[repr(u8)]
#[expect(unused)]
#[derive(Copy, Clone)]
enum Class {
Universal = 0x00,
Application = 0x40,
@@ -17,8 +29,19 @@ enum Class {
Private = 0xC0,
}
impl Class {
#[expect(
clippy::as_conversions,
reason = "guarantees discriminant layout, and as is the only way to cast enum -> primitive"
)]
fn as_u8(self) -> u8 {
self as u8
}
}
#[repr(u8)]
#[expect(unused)]
#[derive(Copy, Clone)]
enum Tag {
Mask = 0x1F,
Boolean = 0x01,
@@ -30,6 +53,16 @@ enum Tag {
Sequence = 0x10,
}
impl Tag {
#[expect(
clippy::as_conversions,
reason = "guarantees discriminant layout, and as is the only way to cast enum -> primitive"
)]
fn as_u8(self) -> u8 {
self as u8
}
}
pub(crate) const SIZEOF_ENUMERATED: usize = 3;
pub(crate) const SIZEOF_BOOL: usize = 3;
@@ -46,7 +79,7 @@ pub(crate) fn sizeof_sequence_tag(length: u16) -> usize {
}
pub(crate) fn sizeof_octet_string(length: u16) -> usize {
1 + sizeof_length(length) + length as usize
1 + sizeof_length(length) + usize::from(length)
}
pub(crate) fn sizeof_integer(value: u32) -> usize {
@@ -71,7 +104,7 @@ pub(crate) fn read_sequence_tag(stream: &mut ReadCursor<'_>) -> DecodeResult<u16
ensure_size!(in: stream, size: 1);
let identifier = stream.read_u8();
if identifier != Class::Universal as u8 | Pc::Construct as u8 | (TAG_MASK & Tag::Sequence as u8) {
if identifier != Class::Universal.as_u8() | Pc::Construct.as_u8() | (TAG_MASK & Tag::Sequence.as_u8()) {
Err(invalid_field_err!("identifier", "invalid sequence tag identifier"))
} else {
read_length(stream)
@@ -82,11 +115,11 @@ pub(crate) fn write_application_tag(stream: &mut WriteCursor<'_>, tagnum: u8, le
ensure_size!(in: stream, size: sizeof_application_tag(tagnum, length));
let taglen = if tagnum > 0x1E {
stream.write_u8(Class::Application as u8 | Pc::Construct as u8 | TAG_MASK);
stream.write_u8(Class::Application.as_u8() | Pc::Construct.as_u8() | TAG_MASK);
stream.write_u8(tagnum);
2
} else {
stream.write_u8(Class::Application as u8 | Pc::Construct as u8 | (TAG_MASK & tagnum));
stream.write_u8(Class::Application.as_u8() | Pc::Construct.as_u8() | (TAG_MASK & tagnum));
1
};
@@ -98,14 +131,14 @@ pub(crate) fn read_application_tag(stream: &mut ReadCursor<'_>, tagnum: u8) -> D
let identifier = stream.read_u8();
if tagnum > 0x1E {
if identifier != Class::Application as u8 | Pc::Construct as u8 | TAG_MASK {
if identifier != Class::Application.as_u8() | Pc::Construct.as_u8() | TAG_MASK {
return Err(invalid_field_err!("identifier", "invalid application tag identifier"));
}
ensure_size!(in: stream, size: 1);
if stream.read_u8() != tagnum {
return Err(invalid_field_err!("tagnum", "invalid application tag identifier"));
}
} else if identifier != Class::Application as u8 | Pc::Construct as u8 | (TAG_MASK & tagnum) {
} else if identifier != Class::Application.as_u8() | Pc::Construct.as_u8() | (TAG_MASK & tagnum) {
return Err(invalid_field_err!("identifier", "invalid application tag identifier"));
}
@@ -146,20 +179,22 @@ pub(crate) fn write_integer(stream: &mut WriteCursor<'_>, value: u32) -> EncodeR
if value < 0x0000_0080 {
write_length(stream, 1)?;
ensure_size!(in: stream, size: 1);
stream.write_u8(value as u8);
stream.write_u8(u8::try_from(value).expect("value is guaranteed to fit into u8 due to the prior check"));
Ok(3)
} else if value < 0x0000_8000 {
write_length(stream, 2)?;
ensure_size!(in: stream, size: 2);
stream.write_u16_be(value as u16);
stream.write_u16_be(u16::try_from(value).expect("value is guaranteed to fit into u16 due to the prior check"));
Ok(4)
} else if value < 0x0080_0000 {
write_length(stream, 3)?;
ensure_size!(in: stream, size: 3);
stream.write_u8((value >> 16) as u8);
stream.write_u16_be((value & 0xFFFF) as u16);
stream.write_u8(u8::try_from(value >> 16).expect("value is guaranteed to fit into u8 due to the prior check"));
stream.write_u16_be(
u16::try_from(value & 0xFFFF).expect("masking with 0xFFFF ensures that the value fits into u16"),
);
Ok(5)
} else {
@@ -251,7 +286,7 @@ pub(crate) fn read_octet_string_tag(stream: &mut ReadCursor<'_>) -> DecodeResult
fn write_universal_tag(stream: &mut WriteCursor<'_>, tag: Tag, pc: Pc) -> EncodeResult<usize> {
ensure_size!(in: stream, size: 1);
let identifier = Class::Universal as u8 | pc as u8 | (TAG_MASK & tag as u8);
let identifier = Class::Universal.as_u8() | pc.as_u8() | (TAG_MASK & tag.as_u8());
stream.write_u8(identifier);
Ok(1)
@@ -262,7 +297,7 @@ fn read_universal_tag(stream: &mut ReadCursor<'_>, tag: Tag, pc: Pc) -> DecodeRe
let identifier = stream.read_u8();
if identifier != Class::Universal as u8 | pc as u8 | (TAG_MASK & tag as u8) {
if identifier != Class::Universal.as_u8() | pc.as_u8() | (TAG_MASK & tag.as_u8()) {
Err(invalid_field_err!("identifier", "invalid universal tag identifier"))
} else {
Ok(())
@@ -279,11 +314,11 @@ fn write_length(stream: &mut WriteCursor<'_>, length: u16) -> EncodeResult<usize
Ok(3)
} else if length > 0x7F {
stream.write_u8(0x80 ^ 0x1);
stream.write_u8(length as u8);
stream.write_u8(u8::try_from(length).expect("length is guaranteed to fit into u8 due to the prior check"));
Ok(2)
} else {
stream.write_u8(length as u8);
stream.write_u8(u8::try_from(length).expect("length is guaranteed to fit into u8 due to the prior check"));
Ok(1)
}

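The BER module above now composes identifier octets through `as_u8()` accessors instead of bare `as` casts. A sketch of the same composition with plain constants (values copied from the `Class`, `Pc`, and `Tag` enums in the diff):

```rust
// Sketch: composing a BER identifier octet from class, primitive/constructed
// bit, and tag number, as the as_u8() accessors above do.
const TAG_MASK: u8 = 0x1F;
const UNIVERSAL: u8 = 0x00;
const CONSTRUCT: u8 = 0x20;
const SEQUENCE: u8 = 0x10;

fn sequence_identifier() -> u8 {
    UNIVERSAL | CONSTRUCT | (TAG_MASK & SEQUENCE)
}

fn main() {
    // 0x30 is the familiar BER/DER SEQUENCE identifier.
    assert_eq!(sequence_identifier(), 0x30);
}
```

The `#[expect(clippy::as_conversions)]` accessors confine the one unavoidable `enum -> primitive` cast to a single audited spot per enum.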

@@ -304,7 +304,7 @@ impl Encode for TileSetPdu<'_> {
dst.write_u16(properties);
dst.write_u8(cast_length!("numQuant", self.quants.len())?);
dst.write_u8(TILE_SIZE as u8);
dst.write_u8(u8::try_from(TILE_SIZE).expect("TILE_SIZE value fits into u8"));
dst.write_u16(cast_length!("numTiles", self.tiles.len())?);
let tiles_data_size = self.tiles.iter().map(|t| Block::Tile(t.clone()).size()).sum::<usize>();
@@ -384,7 +384,7 @@ impl<'de> Decode<'de> for TileSetPdu<'de> {
}
let number_of_tiles = usize::from(src.read_u16());
let _tiles_data_size = src.read_u32() as usize;
let _tiles_data_size = src.read_u32();
let quants = iter::repeat_with(|| Quant::decode(src))
.take(number_of_quants)
@@ -544,24 +544,32 @@ impl Encode for Quant {
impl<'de> Decode<'de> for Quant {
fn decode(src: &mut ReadCursor<'de>) -> DecodeResult<Self> {
#![allow(clippy::similar_names)] // Its hard to do better than ll3, lh3, etc without going overly verbose.
#![allow(
clippy::similar_names,
reason = "its hard to do better than ll3, lh3, etc without going overly verbose"
)]
ensure_fixed_part_size!(in: src);
let level3 = src.read_u16();
let ll3 = level3.get_bits(0..4) as u8;
let lh3 = level3.get_bits(4..8) as u8;
let hl3 = level3.get_bits(8..12) as u8;
let hh3 = level3.get_bits(12..16) as u8;
let ll3lh3 = src.read_u8();
let ll3 = ll3lh3.get_bits(0..4);
let lh3 = ll3lh3.get_bits(4..8);
let level2_with_lh1 = src.read_u16();
let lh2 = level2_with_lh1.get_bits(0..4) as u8;
let hl2 = level2_with_lh1.get_bits(4..8) as u8;
let hh2 = level2_with_lh1.get_bits(8..12) as u8;
let lh1 = level2_with_lh1.get_bits(12..16) as u8;
let hl3hh3 = src.read_u8();
let hl3 = hl3hh3.get_bits(0..4);
let hh3 = hl3hh3.get_bits(4..8);
let level1 = src.read_u8();
let hl1 = level1.get_bits(0..4);
let hh1 = level1.get_bits(4..8);
let lh2hl2 = src.read_u8();
let lh2 = lh2hl2.get_bits(0..4);
let hl2 = lh2hl2.get_bits(4..8);
let hh2lh1 = src.read_u8();
let hh2 = hh2lh1.get_bits(0..4);
let lh1 = hh2lh1.get_bits(4..8);
let hl1hh1 = src.read_u8();
let hl1 = hl1hh1.get_bits(0..4);
let hh1 = hl1hh1.get_bits(4..8);
Ok(Self {
ll3,

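The `Quant::decode` rewrite reads one byte per pair of 4-bit quantization fields instead of bit-slicing `u16` reads. The nibble split done by `get_bits(0..4)`/`get_bits(4..8)` is equivalent to plain masking and shifting:

```rust
// Sketch: splitting one byte into two 4-bit fields, equivalent to
// bit_field's get_bits(0..4) (low nibble) and get_bits(4..8) (high nibble).
fn split_nibbles(byte: u8) -> (u8, u8) {
    (byte & 0x0F, byte >> 4)
}

fn main() {
    let (lo, hi) = split_nibbles(0xA5);
    assert_eq!((lo, hi), (0x05, 0x0A));
}
```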

@@ -208,7 +208,7 @@ impl<'de> Decode<'de> for BlockHeader {
let ty = src.read_u16();
let ty = BlockType::from_u16(ty).ok_or_else(|| invalid_field_err!("blockType", "Invalid block type"))?;
let data_length = src.read_u32() as usize;
let data_length: usize = cast_length!("block length", src.read_u32())?;
data_length
.checked_sub(Self::FIXED_PART_SIZE)
.ok_or_else(|| invalid_field_err!("blockLen", "Invalid block length"))?;


@@ -11,12 +11,12 @@ impl Rc4 {
pub(crate) fn new(key: &[u8]) -> Self {
// key scheduling
let mut state = State::default();
for (i, item) in state.iter_mut().enumerate().take(256) {
*item = i as u8;
for (i, item) in (0..=255).zip(state.iter_mut()) {
*item = i;
}
let mut j = 0usize;
for i in 0..256 {
j = (j + state[i] as usize + key[i % key.len()] as usize) % 256;
j = (j + usize::from(state[i]) + usize::from(key[i % key.len()])) % 256;
state.swap(i, j);
}
@@ -28,9 +28,9 @@ impl Rc4 {
let mut output = Vec::with_capacity(message.len());
while output.capacity() > output.len() {
self.i = (self.i + 1) % 256;
self.j = (self.j + self.state[self.i] as usize) % 256;
self.j = (self.j + usize::from(self.state[self.i])) % 256;
self.state.swap(self.i, self.j);
let idx_k = (self.state[self.i] as usize + self.state[self.j] as usize) % 256;
let idx_k = (usize::from(self.state[self.i]) + usize::from(self.state[self.j])) % 256;
let k = self.state[idx_k];
let idx_msg = output.len();
output.push(k ^ message[idx_msg]);

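The `Rc4` hunks replace `as usize` widening with `usize::from` in both the key schedule and the keystream loop. For context, a self-contained sketch of the same RC4 structure, checked against the well-known "Key"/"Plaintext" test vector (RC4 is cryptographically broken; it survives here only for legacy RDP security):

```rust
// Sketch: minimal RC4 (KSA + PRGA) mirroring the structure of the diff above.
struct Rc4 { state: [u8; 256], i: usize, j: usize }

impl Rc4 {
    fn new(key: &[u8]) -> Self {
        // Key-scheduling algorithm: identity permutation, then key-driven swaps.
        let mut state = [0u8; 256];
        for (i, item) in (0..=255u8).zip(state.iter_mut()) {
            *item = i;
        }
        let mut j = 0usize;
        for i in 0..256 {
            j = (j + usize::from(state[i]) + usize::from(key[i % key.len()])) % 256;
            state.swap(i, j);
        }
        Self { state, i: 0, j: 0 }
    }

    // PRGA: XOR each message byte with the next keystream byte.
    fn process(&mut self, message: &[u8]) -> Vec<u8> {
        message
            .iter()
            .map(|&m| {
                self.i = (self.i + 1) % 256;
                self.j = (self.j + usize::from(self.state[self.i])) % 256;
                self.state.swap(self.i, self.j);
                let idx = (usize::from(self.state[self.i]) + usize::from(self.state[self.j])) % 256;
                self.state[idx] ^ m
            })
            .collect()
    }
}

fn main() {
    // Classic vector: RC4("Key", "Plaintext") = BB F3 16 E8 D9 40 AF 0A D3.
    let ct = Rc4::new(b"Key").process(b"Plaintext");
    assert_eq!(ct, [0xBB, 0xF3, 0x16, 0xE8, 0xD9, 0x40, 0xAF, 0x0A, 0xD3]);
}
```

`usize::from(u8)` is always lossless, so the behavior is identical to the old `as usize` casts; the change only removes lint noise and silent-truncation risk elsewhere.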

@@ -56,9 +56,8 @@ impl<'de> Decode<'de> for ClientClusterData {
let flags = RedirectionFlags::from_bits(flags_with_version & !REDIRECTION_VERSION_MASK)
.ok_or_else(|| invalid_field_err!("flags", "invalid redirection flags"))?;
let redirection_version =
RedirectionVersion::from_u8(((flags_with_version & REDIRECTION_VERSION_MASK) >> 2) as u8)
.ok_or_else(|| invalid_field_err!("redirVersion", "invalid redirection version"))?;
let redirection_version = RedirectionVersion::from_u32((flags_with_version & REDIRECTION_VERSION_MASK) >> 2)
.ok_or_else(|| invalid_field_err!("redirVersion", "invalid redirection version"))?;
Ok(Self {
flags,


@@ -114,9 +114,9 @@ impl Encode for ConferenceCreateRequest {
per::CHOICE_SIZE
+ CONFERENCE_REQUEST_OBJECT_ID.len()
+ per::sizeof_length(req_length)
+ per::sizeof_length(usize::from(req_length))
+ CONFERENCE_REQUEST_CONNECT_PDU_SIZE
+ per::sizeof_length(length)
+ per::sizeof_length(usize::from(length))
+ gcc_blocks_buffer_length
}
}
@@ -286,9 +286,9 @@ impl Encode for ConferenceCreateResponse {
per::CHOICE_SIZE
+ CONFERENCE_REQUEST_OBJECT_ID.len()
+ per::sizeof_length(req_length)
+ per::sizeof_length(usize::from(req_length))
+ CONFERENCE_RESPONSE_CONNECT_PDU_SIZE
+ per::sizeof_length(length)
+ per::sizeof_length(usize::from(length))
+ gcc_blocks_buffer_length
}
}

Some files were not shown because too many files have changed in this diff.