Since `rust 1.87.0` reported `undefined symbol:
ring::pbkdf2::PBKDF2_HMAC_SHA1::*` in CI and the failure was difficult to
debug locally, use `rust 1.86.0` in CI tests to troubleshoot the errors.
This commit adds a signal handler for SIGUSR2 that helps reduce the
memory usage of both the main worker and web workers by:
1. Triggering `malloc_trim(0)` to release memory back to the system
2. Invoking V8 isolate's `low_memory_notification` function
This is only available on Linux and is enabled when the
`DENO_USR2_MEMORY_TRIM` env var is set.
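For illustration, a minimal script to observe the effect (the allocation size and interval are arbitrary): run it with `DENO_USR2_MEMORY_TRIM=1` on Linux, send `kill -USR2 <pid>` to the printed pid, and watch the reported RSS drop.
```js
// Illustrative only: allocate and drop a large buffer, then report RSS so
// the effect of SIGUSR2 (malloc_trim + low_memory_notification) is visible.
console.log("pid:", Deno.pid);

let buf = new Uint8Array(256 * 1024 * 1024);
buf.fill(1); // touch the pages so they are actually committed
buf = null; // drop the reference; the allocator may keep the pages

setInterval(() => {
  console.log("rss (MB):", Math.round(Deno.memoryUsage().rss / 1024 / 1024));
}, 1000);
```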
---------
Co-authored-by: Yusuke Tanaka <yusuktan@maguro.dev>
This commit changes the `deno jupyter` subcommand:
- `deno jupyter` now accepts an additional `--name` argument to
allow installing and maintaining multiple kernelspecs - useful when
one wants to install a stable kernel and a debug/canary kernel
- `deno jupyter --install` now accepts an additional `--display`
argument to allow customizing the display name of the kernel - the
default one is "Deno"
- `deno jupyter --install` no longer blindly installs the kernelspec;
instead it first checks if a kernelspec already exists and, if so,
returns an error suggesting the `--force` flag
- `deno jupyter --help` no longer shows `--unstable` flag
Closes https://github.com/denoland/deno/issues/29219
Closes https://github.com/denoland/deno/issues/29220
This change configures V8 isolates to respect memory limits imposed by
cgroups on Linux.
It adds support for detecting both cgroups v1 and v2 memory limits,
enabling Deno to properly adapt to containerized environments with
memory constraints. When cgroups information is unavailable or not
applicable, it falls back to using the system's total memory as before.
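Conceptually, the detection works like the sketch below (a simplification that assumes the standard mount points and ignores the nested cgroup path from `/proc/self/cgroup`; the function name is illustrative, not Deno's actual code):
```js
// Simplified sketch of cgroup memory-limit detection.
function cgroupMemoryLimit() {
  // cgroup v2: the unified hierarchy exposes a single memory.max file.
  try {
    const raw = Deno.readTextFileSync("/sys/fs/cgroup/memory.max").trim();
    if (raw !== "max") return Number(raw);
  } catch {
    // not cgroup v2, or the file is not accessible
  }
  // cgroup v1 (and hybrid mode): the memory controller has its own hierarchy.
  try {
    const raw = Deno.readTextFileSync(
      "/sys/fs/cgroup/memory/memory.limit_in_bytes",
    ).trim();
    const limit = Number(raw);
    // v1 reports a huge sentinel value (near 2^63) when no limit is set.
    if (limit < 2 ** 60) return limit;
  } catch {
    // no v1 memory controller mounted
  }
  return null; // caller falls back to the system's total memory
}
```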
Closes #29077
## Test
For testing, I created an Ubuntu VM with 1Gi of memory. Within this VM, I
set up a cgroup with a 512Mi memory limit, then ran the following script to
see what heap size limit the V8 isolate had.
```js
import * as v8 from "node:v8";
console.log(v8.getHeapStatistics());
```
### Ubuntu 20.04
In this version of Ubuntu, cgroup hybrid mode is enabled by default.
```
$ cat /proc/self/cgroup
12:rdma:/
11:blkio:/user.slice
10:devices:/user.slice
9:cpu,cpuacct:/user.slice
8:pids:/user.slice/user-1000.slice/session-3.scope
7:memory:/user.slice/user-1000.slice/session-3.scope
6:perf_event:/
5:freezer:/
4:net_cls,net_prio:/
3:hugetlb:/
2:cpuset:/
1:name=systemd:/user.slice/user-1000.slice/session-3.scope
0::/user.slice/user-1000.slice/session-3.scope
```
Create a new cgroup with a 512Mi memory limit and run the above script in
this cgroup:
```
$ sudo cgcreate -g memory:/mygroup
$ sudo cgset -r memory.limit_in_bytes=$((512 * 1024 * 1024)) mygroup
$ sudo cgexec -g memory:mygroup ./deno run main.mjs
{
  total_heap_size: 7745536,
  total_heap_size_executable: 0,
  total_physical_size: 7090176,
  total_available_size: 266348216,
  used_heap_size: 6276752,
  heap_size_limit: 271581184,
  malloced_memory: 303200,
  peak_malloced_memory: 140456,
  does_zap_garbage: 0,
  number_of_native_contexts: 1,
  number_of_detached_contexts: 0,
  total_global_handles_size: 24576,
  used_global_handles_size: 22432,
  external_memory: 3232012
}
```
This indicates that the isolate was informed of the cgroup-constrained
memory limit (512Mi) and hence got a ~270M heap limit.
### Ubuntu 22.04
In this version of Ubuntu, cgroup v2 is used.
```
$ cat /proc/self/cgroup
0::/user.slice/user-1000.slice/session-3.scope
```
Run the above script using `systemd-run`:
```
$ sudo systemd-run --property=MemoryMax=512M --pty bash -c '/home/ubuntu/deno run /home/ubuntu/main.mjs'
{
  total_heap_size: 7745536,
  total_heap_size_executable: 0,
  total_physical_size: 7090176,
  total_available_size: 266348184,
  used_heap_size: 6276784,
  heap_size_limit: 271581184,
  malloced_memory: 303200,
  peak_malloced_memory: 140456,
  does_zap_garbage: 0,
  number_of_native_contexts: 1,
  number_of_detached_contexts: 0,
  total_global_handles_size: 24576,
  used_global_handles_size: 22432,
  external_memory: 3232012
}
```
Again, the isolate properly got a ~270M heap limit.
Note that it would have had a bigger heap limit if the entire system
memory, i.e. 1Gi, had been passed to V8. In fact, if we run the same
script outside the cgroup, it does display a larger `heap_size_limit`, as
below:
```
$ ./deno run main.mjs
{
  total_heap_size: 7745536,
  total_heap_size_executable: 0,
  total_physical_size: 7090176,
  total_available_size: 546580152,
  used_heap_size: 6276752,
  heap_size_limit: 551813120,
  malloced_memory: 303200,
  peak_malloced_memory: 140456,
  does_zap_garbage: 0,
  number_of_native_contexts: 1,
  number_of_detached_contexts: 0,
  total_global_handles_size: 24576,
  used_global_handles_size: 22432,
  external_memory: 3232012
}
```
---------
Signed-off-by: Yusuke Tanaka <wing0920@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
This removes the `moduleGraphX` data from the `<version>_meta.json`
files for jsr packages when copying from the global cache to the local
one. This property is not really useful to vendor because it's just a
performance optimization when downloading the files, and someone may
change the data in the files, leaving the `moduleGraph` data out of
date.
Closes https://github.com/denoland/deno/issues/27229.
TODO:
- [x] Tests
- [x] Make some changes to `deno_cache_dir` so we can get the paths for
the local http cache
- [x] Right now this leaves the node modules setup cache in an incorrect
state (removes the symlinks, but doesn't update the setup cache)
- [ ] ~~Handle code cache and other sqlite caches?~~
This PR adds detection of `tsconfig.json` at the root of a workspace
when either a `deno.json` or `package.json` exists. If a project
already has a `deno.json` with a `compilerOptions` value, the
`tsconfig.json` is ignored.
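For illustration, in a workspace laid out like this (the contents are example values), the root `tsconfig.json`'s options are picked up because the `deno.json` defines no `compilerOptions`:
```js
// deno.json (no "compilerOptions", so the sibling tsconfig.json is used)
{
  "imports": {}
}

// tsconfig.json (now detected at the workspace root)
{
  "compilerOptions": {
    "strict": true,
    "noImplicitOverride": true
  }
}
```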
Change:
Support the `--open` flag with `deno serve` (`deno serve --open
somescript.ts`).
When used, it opens the browser at the address that the server is
running on.
Signed-off-by: HasanAlrimawi <141642411+HasanAlrimawi@users.noreply.github.com>
This commit adds two env vars:
- "DENO_CACHE_DB_MODE"
- "DENO_KV_DB_MODE"
Both of these env vars accept either "disk" or "memory" values and
control the modes of the backing databases for the Web Cache API and
the `Deno.openKv()` API.
By default both APIs use disk-backed DBs, but they can be changed to use
in-memory DBs, making them effectively ephemeral.
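As a usage sketch, run the following with `DENO_KV_DB_MODE=memory` set in the environment (depending on your Deno version, `Deno.openKv()` may also require the `--unstable-kv` flag):
```js
// With DENO_KV_DB_MODE=memory, this KV store is ephemeral: the value is
// visible within this run, but gone on the next one.
const kv = await Deno.openKv();
await kv.set(["greeting"], "hello");
console.log(await kv.get(["greeting"]));
kv.close();
```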
This commit adds "deno_features" crate that contains definitions of all
unstable features in Deno.
Based on these definitions, both Rust and JS code is generated ensuring
that the two are always in sync.
In addition some of flag handling was rewritten to use the generated
definitions, instead of hand rolling these flag definitions.
---------
Co-authored-by: snek <snek@deno.com>
Basically just update deno_lockfile, deno_npm, and eszip, and then adapt
to those changes. The main changes were the removal of the lockfile v4
resolution snapshot loading, and terser formatting for the `os` and
`cpu` fields in the lockfile.
Fixes two issues:
- If a cached packument was out of date and missing a version from the
lockfile, we would fail. Instead we should try again with a forced
re-fetch
- We weren't threading through the workspace patch packages correctly
This PR updates the behavior of the `deno test --coverage` option. Now if
the `--coverage` option is specified, the `deno test` command automatically
shows a summary report in the terminal, and generates the lcov report in
`$coverage_dir/lcov.info` and the HTML report in `$coverage_dir/html/`.
This change also adds a `--coverage-raw-data-only` flag, which prevents
the above reports from being generated and instead only generates the raw
JSON coverage data (which is the same as the current behavior).
Apparently things like the `bin` field can appear in the version info
from the registry, but not in the package's `package.json`. I'm still not
sure how you actually achieve this, but it's the case for
`esbuild-wasm`. This fixes the following panic:
```
❯ deno i --node-modules-dir npm:esbuild-wasm
Add npm:esbuild-wasm@0.25.2
Initialize ⣯ [00:00]
- esbuild-wasm@0.25.2
============================================================
Deno has panicked. This is a bug in Deno. Please report this
at https://github.com/denoland/deno/issues/new.
If you can reliably reproduce this panic, include the
reproduction steps and re-run with the RUST_BACKTRACE=1 env
var set and include the backtrace in your report.
Platform: macos aarch64
Version: 2.2.8+58c6c0b
Args: ["deno", "i", "--node-modules-dir", "npm:esbuild-wasm"]
View stack trace at:
https://panic.deno.com/v2.2.8+58c6c0bc9c1b4ee08645be936ff9268f17028f0f/aarch64-apple-darwin/g4h6Jo393pB4k4kqBo-3kqBg6klqBogtyLg13yLw_t0Lw549Hgj8-Hgw__H428-F4yv_HgjkpKww7gIon4gIw54rKwi5MorzMw5y7G42g7Iw---I40s-I4vu4Jw2rEw8z7Dwnr6J4tp7Bo_vvK
thread 'main' panicked at cli/npm/installer/common/bin_entries.rs:108:30:
called `Option::unwrap()` on a `None` value
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
Fixes #27264. Fixes https://github.com/denoland/deno/issues/28161.
Currently the new lockfile version is gated behind an unstable flag
(`--unstable-lockfile-v5`) until the next minor release, where it will
become the default.
The main motivation here is that it improves startup performance when
using the global cache or `--node-modules-dir=auto`.
In a create-next-app project, running an empty file:
```
❯ hyperfine --warmup 25 -N --setup "rm -f deno.lock" "deno run --node-modules-dir=auto -A empty.js" "deno-this-pr run --node-modules-dir=auto -A empty.js" "deno-this-pr run --node-modules-dir=auto --unstable-lockfile-v5 empty.js" "deno run --node-modules-dir=manual -A empty.js" "deno-this-pr run --node-modules-dir=manual -A empty.js"
Benchmark 1: deno run --node-modules-dir=auto -A empty.js
  Time (mean ± σ):     247.6 ms ±   1.7 ms    [User: 228.7 ms, System: 19.0 ms]
  Range (min … max):   245.5 ms … 251.5 ms    12 runs

Benchmark 2: deno-this-pr run --node-modules-dir=auto -A empty.js
  Time (mean ± σ):     169.8 ms ±   1.0 ms    [User: 152.9 ms, System: 17.9 ms]
  Range (min … max):   168.9 ms … 172.5 ms    17 runs

Benchmark 3: deno-this-pr run --node-modules-dir=auto --unstable-lockfile-v5 empty.js
  Time (mean ± σ):      16.2 ms ±   0.7 ms    [User: 12.3 ms, System: 5.7 ms]
  Range (min … max):    15.2 ms …  19.2 ms    185 runs

Benchmark 4: deno run --node-modules-dir=manual -A empty.js
  Time (mean ± σ):      16.2 ms ±   0.8 ms    [User: 11.6 ms, System: 5.5 ms]
  Range (min … max):    14.9 ms …  19.7 ms    187 runs

Benchmark 5: deno-this-pr run --node-modules-dir=manual -A empty.js
  Time (mean ± σ):      16.0 ms ±   0.9 ms    [User: 12.0 ms, System: 5.5 ms]
  Range (min … max):    14.8 ms …  22.3 ms    190 runs

Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Summary
  deno-this-pr run --node-modules-dir=manual -A empty.js ran
    1.01 ± 0.08 times faster than deno run --node-modules-dir=manual -A empty.js
    1.01 ± 0.07 times faster than deno-this-pr run --node-modules-dir=auto --unstable-lockfile-v5 empty.js
   10.64 ± 0.60 times faster than deno-this-pr run --node-modules-dir=auto -A empty.js
   15.51 ± 0.88 times faster than deno run --node-modules-dir=auto -A empty.js
```
When using the new lockfile version, this leads to a 15.5x faster
startup time compared to the current deno version.
Install times benefit as well, though to a lesser degree.
`deno install` on a create-next-app project, with everything cached
(just setting up node_modules from scratch):
```
❯ hyperfine --warmup 5 -N --prepare "rm -rf node_modules" --setup "rm -rf deno.lock" "deno i" "deno-this-pr i" "deno-this-pr i --unstable-lockfile-v5"
Benchmark 1: deno i
  Time (mean ± σ):     464.4 ms ±   8.8 ms    [User: 227.7 ms, System: 217.3 ms]
  Range (min … max):   452.6 ms … 478.3 ms    10 runs

Benchmark 2: deno-this-pr i
  Time (mean ± σ):     368.8 ms ±  22.0 ms    [User: 150.8 ms, System: 198.1 ms]
  Range (min … max):   344.8 ms … 397.6 ms    10 runs

Benchmark 3: deno-this-pr i --unstable-lockfile-v5
  Time (mean ± σ):     211.9 ms ±  17.1 ms    [User: 7.1 ms, System: 177.2 ms]
  Range (min … max):   191.3 ms … 233.4 ms    10 runs

Summary
  deno-this-pr i --unstable-lockfile-v5 ran
    1.74 ± 0.17 times faster than deno-this-pr i
    2.19 ± 0.18 times faster than deno i
```
With lockfile v5, install time is 2.19x faster than with the current
deno.
Deno.serve `Request` abort signals are aborted by default even when the
request finishes successfully. This PR gates this behavior behind
"legacy_abort", which is the default right now.
Turning the `no_legacy_abort` runtime option on is a **breaking change**:
request signals will only be aborted when there is a failure, so they can
no longer be used to determine whether the request finished. This aligns
with the `fetch` API.
Ref https://github.com/denoland/deno/issues/27005
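As an illustration of the difference:
```js
// With the default (legacy) behavior, this listener fires even after the
// response completes successfully; with no_legacy_abort enabled, it fires
// only when the request actually fails or the client disconnects.
Deno.serve((req) => {
  req.signal.addEventListener("abort", () => {
    console.log("request signal aborted");
  });
  return new Response("ok");
});
```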
This adds support for using a local copy of an npm package.
```js
// deno.json
{
"patch": [
"../path/to/local_npm_package"
],
// required until Deno 2.3, but it will still be considered unstable
"unstable": ["npm-patch"]
}
```
1. Requires using a node_modules folder.
2. When using `"nodeModulesDir": "auto"`, it recreates the folder in the
node_modules directory on each run, which slightly increases startup
time.
3. When using the default with a package.json (`"nodeModulesDir":
"manual"`), updating the package requires running `deno install`. This
is to get the package into the node_modules directory of the current
workspace. This is necessary instead of linking because packages can
have multiple "copy packages" due to peer dep resolution.
Caveat: Specifying a local copy of an npm package or making changes to
its dependencies will purge npm packages from the lockfile. This might
cause npm resolution to resolve differently and it may end up not using
the local copy of the npm package. It's very difficult to only
invalidate resolution midway through the graph and then only rebuild
that part of the resolution, so this is just a first pass that can be
improved in the future. In practice, this probably won't be an issue for
most people.
Another limitation is that, at the moment, this also requires the npm
package name to exist in the registry.