This change configures V8 isolates to respect memory limits imposed by
cgroups on Linux.
It adds support for detecting both cgroups v1 and v2 memory limits,
enabling Deno to properly adapt to containerized environments with
memory constraints. When cgroups information is unavailable or not
applicable, it falls back to using the system's total memory as before.
Closes #29077
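For reference, the detection can be sketched along these lines (a minimal sketch with simplified paths and parsing; the actual implementation resolves real mount points and handles more edge cases):
```rust
use std::fs;

/// Minimal sketch: return the cgroup memory limit in bytes, if one is set.
/// The paths below assume the common default layout.
fn cgroup_memory_limit() -> Option<u64> {
    let cgroups = fs::read_to_string("/proc/self/cgroup").ok()?;
    for line in cgroups.lines() {
        // Each line is "<hierarchy-id>:<controllers>:<path>".
        let mut parts = line.splitn(3, ':');
        let _id = parts.next()?;
        let controllers = parts.next()?;
        let path = parts.next()?;
        if controllers.is_empty() {
            // cgroup v2 (unified hierarchy): the limit lives in memory.max.
            let file = format!("/sys/fs/cgroup{path}/memory.max");
            if let Ok(s) = fs::read_to_string(file) {
                // "max" means unlimited; it fails to parse, yielding None.
                return s.trim().parse().ok();
            }
        } else if controllers.split(',').any(|c| c == "memory") {
            // cgroup v1: dedicated memory controller hierarchy.
            let file = format!("/sys/fs/cgroup/memory{path}/memory.limit_in_bytes");
            if let Ok(s) = fs::read_to_string(file) {
                return s.trim().parse().ok();
            }
        }
    }
    None // caller falls back to the system's total memory
}
```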
## Test
For testing, I created an Ubuntu VM with 1Gi of memory. Within this VM, I set
up a cgroup with a 512Mi memory limit, then ran the following script to
see what heap size limit the V8 isolate was given.
```js
import * as v8 from "node:v8";
console.log(v8.getHeapStatistics());
```
### Ubuntu 20.04
In this version of Ubuntu, cgroup hybrid mode (v1 controllers alongside a v2 unified hierarchy) is enabled by default.
```
$ cat /proc/self/cgroup
12:rdma:/
11:blkio:/user.slice
10:devices:/user.slice
9:cpu,cpuacct:/user.slice
8:pids:/user.slice/user-1000.slice/session-3.scope
7:memory:/user.slice/user-1000.slice/session-3.scope
6:perf_event:/
5:freezer:/
4:net_cls,net_prio:/
3:hugetlb:/
2:cpuset:/
1:name=systemd:/user.slice/user-1000.slice/session-3.scope
0::/user.slice/user-1000.slice/session-3.scope
```
Create a new cgroup with a 512Mi memory limit and run the script above in
this cgroup:
```
$ sudo cgcreate -g memory:/mygroup
$ sudo cgset -r memory.limit_in_bytes=$((512 * 1024 * 1024)) mygroup
$ sudo cgexec -g memory:mygroup ./deno run main.mjs
{
  total_heap_size: 7745536,
  total_heap_size_executable: 0,
  total_physical_size: 7090176,
  total_available_size: 266348216,
  used_heap_size: 6276752,
  heap_size_limit: 271581184,
  malloced_memory: 303200,
  peak_malloced_memory: 140456,
  does_zap_garbage: 0,
  number_of_native_contexts: 1,
  number_of_detached_contexts: 0,
  total_global_handles_size: 24576,
  used_global_handles_size: 22432,
  external_memory: 3232012
}
```
This indicates that the isolate was informed of the cgroup-constrained
memory limit (512Mi) and hence got a ~270MB heap limit (271581184 bytes,
roughly half of the 512Mi limit).
### Ubuntu 22.04
In this version of Ubuntu, cgroup v2 is used by default.
```
$ cat /proc/self/cgroup
0::/user.slice/user-1000.slice/session-3.scope
```
Run the above script using `systemd-run`:
```
$ sudo systemd-run --property=MemoryMax=512M --pty bash -c '/home/ubuntu/deno run /home/ubuntu/main.mjs'
{
  total_heap_size: 7745536,
  total_heap_size_executable: 0,
  total_physical_size: 7090176,
  total_available_size: 266348184,
  used_heap_size: 6276784,
  heap_size_limit: 271581184,
  malloced_memory: 303200,
  peak_malloced_memory: 140456,
  does_zap_garbage: 0,
  number_of_native_contexts: 1,
  number_of_detached_contexts: 0,
  total_global_handles_size: 24576,
  used_global_handles_size: 22432,
  external_memory: 3232012
}
```
Again, the isolate was properly given a ~270MB heap limit.
Note that it would have had a bigger heap limit if the entire system
memory, i.e. 1Gi, had been passed to V8. In fact, if we run the same
script outside the cgroup, it does display a larger `heap_size_limit`:
```
$ ./deno run main.mjs
{
  total_heap_size: 7745536,
  total_heap_size_executable: 0,
  total_physical_size: 7090176,
  total_available_size: 546580152,
  used_heap_size: 6276752,
  heap_size_limit: 551813120,
  malloced_memory: 303200,
  peak_malloced_memory: 140456,
  does_zap_garbage: 0,
  number_of_native_contexts: 1,
  number_of_detached_contexts: 0,
  total_global_handles_size: 24576,
  used_global_handles_size: 22432,
  external_memory: 3232012
}
```
---------
Signed-off-by: Yusuke Tanaka <wing0920@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Implement a lazy mode: an unconfigured `JsRuntime` is created if
`DENO_UNSTABLE_CONTROL_SOCK` is present, and later passed into `deno_runtime` to be
configured and used.
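A rough sketch of that flow, assuming deno_core's `JsRuntime::new` API (the surrounding plumbing here is hypothetical, not the actual code):
```rust
use deno_core::{JsRuntime, RuntimeOptions};

// Hypothetical sketch: create a bare, unconfigured runtime up front only
// when the control socket is requested; deno_runtime configures it later.
fn maybe_create_unconfigured_runtime() -> Option<JsRuntime> {
    std::env::var("DENO_UNSTABLE_CONTROL_SOCK").ok()?;
    Some(JsRuntime::new(RuntimeOptions::default()))
}
```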
This commit adds a `deno_features` crate that contains definitions of all
unstable features in Deno.
Based on these definitions, both Rust and JS code is generated, ensuring
that the two are always in sync.
In addition, some of the flag handling was rewritten to use the generated
definitions instead of hand-rolling them.
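The definitions might plausibly take a shape like this (hypothetical; the crate's actual types may differ):
```rust
/// Hypothetical shape of a single-source-of-truth feature definition.
/// Codegen can walk a static slice like this to emit both the Rust flag
/// definitions and the JS-side feature list, keeping them in sync.
pub struct UnstableFeatureDefinition {
    pub name: &'static str,
    pub flag_name: &'static str,
    pub help_text: &'static str,
}

pub const UNSTABLE_FEATURES: &[UnstableFeatureDefinition] = &[
    UnstableFeatureDefinition {
        name: "kv",
        flag_name: "unstable-kv",
        help_text: "Enable unstable Deno.Kv API",
    },
    // ...one entry per unstable feature; codegen iterates this slice.
];
```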
---------
Co-authored-by: snek <snek@deno.com>
Aligns the version across the `deno`, `denort` and `deno_lib` crates.
Removes `cli/lib/version.txt` and some build-script shenanigans used to
override the crate version.
We upgraded `env_logger`, which uses a new color detection backend that
is backwards incompatible with the old backend and does not detect
GitHub Actions as a terminal that supports color. We now force
`env_logger` to respect the color decision that `deno_terminal` has
already made.
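The fix amounts to something like this sketch, where the boolean stands in for the decision `deno_terminal` already made:
```rust
// Force env_logger to follow a color decision made elsewhere instead of
// letting its own backend probe the terminal.
fn init_logging(use_color: bool) {
    let style = if use_color {
        env_logger::WriteStyle::Always
    } else {
        env_logger::WriteStyle::Never
    };
    env_logger::Builder::from_default_env()
        .write_style(style)
        .init();
}
```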
This is the release commit being forwarded back to main for 2.2.11
---------
Co-authored-by: bartlomieju <bartlomieju@users.noreply.github.com>
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This extracts the shared libraries and `.node` native modules to temp
files and opens them from there. **This means that this
implementation will not work in every scenario.** For example, a library
could require other files that only exist in the in-memory file system.
To solve that, we'll introduce
https://github.com/denoland/deno/issues/28918 later or adapt this
solution to cover more cases.
Additionally, this will not work when run on read-only file systems.
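Conceptually, the extraction step looks like this sketch (names and error handling are illustrative, not the actual code):
```rust
use std::io::Write;
use std::path::PathBuf;

// Sketch: materialize a native library from the embedded in-memory file
// system to a real file so the OS loader (dlopen/LoadLibrary) can open it.
// Note that the File::create call is what fails on a read-only file system.
fn extract_native_lib(file_name: &str, bytes: &[u8]) -> std::io::Result<PathBuf> {
    let path = std::env::temp_dir().join(file_name);
    let mut file = std::fs::File::create(&path)?;
    file.write_all(bytes)?;
    Ok(path) // load from this path instead of the VFS entry
}
```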
This is the release commit being forwarded back to main for 2.2.10
Co-authored-by: nathanwhit <nathanwhit@users.noreply.github.com>
Co-authored-by: Nathan Whitaker <nathan@deno.com>
Fixes #27264. Fixes https://github.com/denoland/deno/issues/28161.
Currently the new lockfile version is gated behind an unstable flag
(`--unstable-lockfile-v5`) until the next minor release, where it will
become the default.
The main motivation here is that it improves startup performance when
using the global cache or `--node-modules-dir=auto`.
In a create-next-app project, running an empty file:
```
❯ hyperfine --warmup 25 -N --setup "rm -f deno.lock" "deno run --node-modules-dir=auto -A empty.js" "deno-this-pr run --node-modules-dir=auto -A empty.js" "deno-this-pr run --node-modules-dir=auto --unstable-lockfile-v5 empty.js" "deno run --node-modules-dir=manual -A empty.js" "deno-this-pr run --node-modules-dir=manual -A empty.js"
Benchmark 1: deno run --node-modules-dir=auto -A empty.js
  Time (mean ± σ):     247.6 ms ±   1.7 ms    [User: 228.7 ms, System: 19.0 ms]
  Range (min … max):   245.5 ms … 251.5 ms    12 runs

Benchmark 2: deno-this-pr run --node-modules-dir=auto -A empty.js
  Time (mean ± σ):     169.8 ms ±   1.0 ms    [User: 152.9 ms, System: 17.9 ms]
  Range (min … max):   168.9 ms … 172.5 ms    17 runs

Benchmark 3: deno-this-pr run --node-modules-dir=auto --unstable-lockfile-v5 empty.js
  Time (mean ± σ):      16.2 ms ±   0.7 ms    [User: 12.3 ms, System: 5.7 ms]
  Range (min … max):    15.2 ms …  19.2 ms    185 runs

Benchmark 4: deno run --node-modules-dir=manual -A empty.js
  Time (mean ± σ):      16.2 ms ±   0.8 ms    [User: 11.6 ms, System: 5.5 ms]
  Range (min … max):    14.9 ms …  19.7 ms    187 runs

Benchmark 5: deno-this-pr run --node-modules-dir=manual -A empty.js
  Time (mean ± σ):      16.0 ms ±   0.9 ms    [User: 12.0 ms, System: 5.5 ms]
  Range (min … max):    14.8 ms …  22.3 ms    190 runs

  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Summary
  deno-this-pr run --node-modules-dir=manual -A empty.js ran
    1.01 ± 0.08 times faster than deno run --node-modules-dir=manual -A empty.js
    1.01 ± 0.07 times faster than deno-this-pr run --node-modules-dir=auto --unstable-lockfile-v5 empty.js
   10.64 ± 0.60 times faster than deno-this-pr run --node-modules-dir=auto -A empty.js
   15.51 ± 0.88 times faster than deno run --node-modules-dir=auto -A empty.js
```
When using the new lockfile version, this leads to a 15.5x faster
startup time compared to the current Deno version.
Install times benefit as well, though to a lesser degree.
`deno install` on a create-next-app project, with everything cached
(just setting up node_modules from scratch):
```
❯ hyperfine --warmup 5 -N --prepare "rm -rf node_modules" --setup "rm -rf deno.lock" "deno i" "deno-this-pr i" "deno-this-pr i --unstable-lockfile-v5"
Benchmark 1: deno i
  Time (mean ± σ):     464.4 ms ±   8.8 ms    [User: 227.7 ms, System: 217.3 ms]
  Range (min … max):   452.6 ms … 478.3 ms    10 runs

Benchmark 2: deno-this-pr i
  Time (mean ± σ):     368.8 ms ±  22.0 ms    [User: 150.8 ms, System: 198.1 ms]
  Range (min … max):   344.8 ms … 397.6 ms    10 runs

Benchmark 3: deno-this-pr i --unstable-lockfile-v5
  Time (mean ± σ):     211.9 ms ±  17.1 ms    [User: 7.1 ms, System: 177.2 ms]
  Range (min … max):   191.3 ms … 233.4 ms    10 runs

Summary
  deno-this-pr i --unstable-lockfile-v5 ran
    1.74 ± 0.17 times faster than deno-this-pr i
    2.19 ± 0.18 times faster than deno i
```
With lockfile v5, install time is 2.19x faster compared to the current
Deno.
This is the release commit being forwarded back to main for 2.2.7
Signed-off-by: Divy Srivastava <dj.srivastava23@gmail.com>
Co-authored-by: Divy Srivastava <dj.srivastava23@gmail.com>
`Deno.serve`'s `Request` abort signals are currently aborted even when
the request finishes successfully. This PR gates this behavior behind
"legacy_abort", which is the default right now.
Turning the `no_legacy_abort` runtime option on is a **breaking change**:
request signals will then only be aborted when there is a failure, and
therefore cannot be used to determine whether the request finished. This
aligns with the `fetch` API.
Ref https://github.com/denoland/deno/issues/27005
NOTE: Commit 27363d389 was incorrectly landed in main before the release
completed and is not included in v2.2.5. The official v2.2.5 release was made
from the v2.2 branch.
Adds support for re-exporting an ES module from a CJS one and then
importing the CJS module from ESM. Also fixes a bug where `require()` of
ESM wasn't working in `deno compile`.
Now that ArrayBuffer/ArrayBufferView is a generic Value type, we have to
handle it being passed any value. To do this, we thread
`FastApiCallbackOptions` through the function and add error-raising
logic.
If the conversion runs and the value is not valid, we return `isize::MAX`;
in Cranelift, this sentinel value tells the generated code to branch to
the error logic.
An example compilation looks like this:
```rust
extern "C" {
    fn print_buffer(ptr: *const u8, len: usize);
}
```
```clif
function %print_buffer_wrapper(i64, i64, i64, i64) system_v {
    sig0 = (i64, i64) system_v
    sig1 = (i64) -> i64 system_v
    sig2 = (i64) system_v

block0(v0: i64, v1: i64, v2: i64, v3: i64):
    v4 = iconst.i64 0x6525_9198_2d00 ; turbocall_ab_contents
    v5 = call_indirect sig1, v4(v1)
    v6 = iconst.i64 0x7fff_ffff_ffff_ffff
    v7 = icmp eq v5, v6
    brif v7, block1, block2

block2:
    v8 = iconst.i64 0x7558_4c0c_0700 ; sym.ptr
    call_indirect sig0, v8(v5, v2)
    return

block1 cold:
    v9 = iconst.i64 0x6525_9198_2d70 ; turbocall_raise
    call_indirect sig2, v9(v3)
    return
}
```
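For intuition, the contract of the helper called at the top of the wrapper (`turbocall_ab_contents` in the CLIF above) is roughly the following sketch; the real signature and v8 extraction logic differ:
```rust
// Sentinel contract sketch: return the buffer's data pointer, or
// isize::MAX when the argument is not an ArrayBuffer/ArrayBufferView,
// so the generated code branches to the cold error block and calls
// turbocall_raise with the FastApiCallbackOptions pointer.
const INVALID_BUFFER: isize = isize::MAX;

unsafe extern "C" fn turbocall_ab_contents(value: usize) -> isize {
    // Real code inspects the v8::Value behind `value`; this sketch only
    // shows the sentinel convention checked by the icmp in the CLIF.
    match try_get_buffer_data(value) {
        Some(p) => p as isize,
        None => INVALID_BUFFER,
    }
}

// Hypothetical stand-in for the actual v8 extraction logic.
fn try_get_buffer_data(_value: usize) -> Option<*const u8> {
    None
}
```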
Also cleaned up all the `unwrap`s and added some logging.