feat: benchmarking (#999)

* feat: add benchmarking dashboard, CI hook on PR, and store lifetime results

* refactor: change python env to 3.13 in benchmarks

* refactor: add verbosity, use 3.11 for benchmarking

* fix: OSError: [Errno 7] Argument list too long

* refactor: add debug statements

* refactor: remove extraneous -e

* refactor: fix tests and linter errors

* fix: track main package in coverage

* refactor: fix test coverage testing

* refactor: fix repo owner name in benchmark on pushing comment

* refactor: add asv monkeypatch to docs workflow

* refactor: temporarily allow building docs in forks

* refactor: use py 3.13 for benchmarking

* refactor: run only a single benchmark for PRs to speed them up

* refactor: install asv in the docs build workflow

* refactor: use hatch docs env to generate benchmarks in docs CI

* refactor: more trying

* refactor: move tests

* Add benchmark results for 0.137

* Trigger Build

* Add benchmark results for 0.138

* refactor: set constant machine name when benchmarking

* Add benchmark results for 0.139

* refactor: fix issue with paths too long

* Add benchmark results for 0.140

* docs: update comment

* refactor: remove test benchmarking data

* refactor: fix comment

* refactor: allow the benchmark workflow to write to PRs

* refactor: use personal access token to set up the PR benchmark bot

* refactor: split the benchmark PR flow into two to make it work with PRs from forks

* refactor: update deprecated actions/upload-artifact@v3 to v4

* refactor: fix missing directory in benchmarking workflow

* refactor: fix triggering of second workflow

* refactor: fix workflow finally?

* docs: add comments to cut-offs and direct people to benchmarks PR

---------

Co-authored-by: github-actions <github-actions@github.com>
Juro Oravec, 2025-02-23 16:18:57 +01:00, committed by GitHub
Parent: dcd4203eea
Commit: f36581ed86
90 changed files with 40817 additions and 443 deletions

benchmarks/README.md (new file, 195 lines)

@@ -0,0 +1,195 @@
# Benchmarks
## Overview
[`asv`](https://github.com/airspeed-velocity/) (Airspeed Velocity) is used for benchmarking performance.
`asv` covers the entire benchmarking workflow. We can:
1. Define benchmark tests similarly to writing pytest tests (supports both timing and memory benchmarks)
2. Run the benchmarks and generate results for individual git commits, tags, or entire branches
3. View results as an HTML report (dashboard with charts)
4. Compare performance between two commits / tags / branches for CI integration
![asv dashboard](./assets/asv_dashboard.png)
django-components uses `asv` for these use cases:
- Benchmarking across releases:
1. When a git tag is created and pushed, this triggers a Github Action workflow (see `docs.yml`).
2. The workflow runs the benchmarks with the latest release, and commits the results to the repository.
Thus, we can see how performance changes across releases.
- Displaying performance results on the website:
1. When a git tag is created and pushed, we also update the documentation website (see `docs.yml`).
2. Before we publish the docs website, we generate the HTML report for the benchmark results.
3. The generated report is placed in the `docs/benchmarks/` directory, and is thus
published with the rest of the docs website and available under [`/benchmarks/`](https://django-components.github.io/django-components/benchmarks).
- NOTE: The location where the report is placed is defined in `asv.conf.json`.
- Compare performance between commits on pull requests:
1. When a pull request is made, this triggers a Github Action workflow (see `benchmark.yml`).
2. The workflow compares performance between commits.
3. The report is added to the PR as a comment made by a bot.
## Interpreting benchmarks
The results CANNOT be taken as ABSOLUTE values, e.g.:
"This example took 200ms to render, so my page will also take 200ms to render."
Each UI may consist of a different number of Django templates, template tags, and components, and all of these may influence the rendering time differently.
Instead, the results MUST be understood as RELATIVE values.
- If a commit is 10% slower than the master branch, that's valid.
- If Django components are 10% slower than vanilla Django templates, that's valid.
- If "isolated" mode is 10% slower than "django" mode, that's valid.
## Development
Let's say we want to generate results for the last 5 commits.
1. Install `asv`
```bash
pip install asv
```
2. Run benchmarks and generate results
```bash
asv run HEAD --steps 5 -e
```
- `HEAD` means that we want to run benchmarks against the [current branch](https://stackoverflow.com/a/2304106/9788634).
- `--steps 5` means that we want to run benchmarks for the last 5 commits.
- `-e` to print out any errors.
The results will be stored in `.asv/results/`, as configured in `asv.conf.json`.
3. Generate HTML report
```bash
asv publish
asv preview
```
- `publish` generates the HTML report and stores it in `docs/benchmarks/`, as configured in `asv.conf.json`.
- `preview` starts a local server and opens the report in the browser.
NOTE: Since the HTML report is placed in `docs/benchmarks/`, you can also view it by running
`mkdocs serve` and navigating to `http://localhost:9000/django-components/benchmarks/`.
NOTE 2: Running `publish` will overwrite the existing contents of `docs/benchmarks/`.
## Writing benchmarks
`asv` supports writing different [types of benchmarks](https://asv.readthedocs.io/en/latest/writing_benchmarks.html#benchmark-types). What's relevant for us is:
- [Raw timing benchmarks](https://asv.readthedocs.io/en/latest/writing_benchmarks.html#raw-timing-benchmarks)
- [Peak memory benchmarks](https://asv.readthedocs.io/en/latest/writing_benchmarks.html#peak-memory)
Notes:
- The difference between "raw timing" and "timing" tests is that "raw timing" is run in a separate process.
Instead of running the logic within the test function itself, we return a script (a string)
that will be executed in that separate process.
- The difference between "peak memory" and "memory" tests is that "memory" calculates the memory
of the object returned from the test function. On the other hand, "peak memory" detects the
peak memory usage during the execution of the test function (including the setup function).
You can write the test file anywhere in the `benchmarks/` directory; `asv` will automatically find it.
Inside the file, write a test function. Depending on the type of the benchmark,
prefix the test function name with `timeraw_` or `peakmem_`. See [`benchmarks/benchmark_templating.py`](benchmark_templating.py) for examples.
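For orientation, here is a minimal, hypothetical sketch of both benchmark types (names and logic are illustrative only, not taken from this repo):

```py
# benchmarks/benchmark_example.py (hypothetical)

def timeraw_join_strings():
    # "Raw timing": return the benchmarked code (and optionally setup code) as strings.
    # asv executes them in a separate Python process.
    setup = "parts = ['x'] * 10_000"
    code = "''.join(parts)"
    return code, setup


def peakmem_build_list():
    # "Peak memory": asv records the peak memory used while this body runs
    # (including any setup function).
    [object() for _ in range(100_000)]
```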
### Ensuring that the benchmarked logic is correct
The approach I (Juro) took with benchmarking the overall template rendering is that
I've defined the actual logic in `tests/test_benchmark_*.py` files. So those files
are part of the normal pytest testing, and even contain a section with pytest tests.
This ensures that the benchmarked logic remains functional and error-free.
However, there are some caveats:
1. I wasn't able to import files from `tests/`.
2. When running benchmarks, we don't want to run the pytest tests.
To work around that, the approach I used for loading the files from the `tests/` directory is to:
1. Get the file's source code as a string.
2. Cut out unwanted sections (like the pytest tests).
3. Append the benchmark-specific code to the file (e.g. to actually render the templates).
4. In case of "timeraw" benchmarks, we can simply return the remaining code as a string
to be run in a separate process.
5. In case of "peakmem" benchmarks, we need to access this modified source code as Python objects.
So the code is made available as a "virtual" module, which makes it possible to import Python objects like so:
```py
from my_virtual_module import run_my_benchmark
```
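In practice, steps 1-4 above boil down to something like this simplified sketch (the actual implementation lives in [`benchmarks/benchmark_templating.py`](benchmark_templating.py); `gen_render_data()` and `render()` are defined in the benchmarked `tests/test_benchmark_*.py` files):

```py
from pathlib import Path


def load_benchmark_script(path: Path) -> str:
    # 1. Get the file's source code as a string.
    source = path.read_text()
    # 2. Cut out the pytest section that follows this marker.
    source = source.split("# ----------- TESTS START ------------ #")[0]
    # 3. Append the benchmark-specific code (here: generate data and render once).
    source += "\n\nrender_data = gen_render_data()\nrender(render_data)\n"
    # 4. For "timeraw" benchmarks, this string is returned and executed in a separate process.
    return source
```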
## Using `asv`
### Compare latest commit against master
Note: Before comparing, you must run the benchmarks first to generate the results. The `continuous` command does not generate the results by itself.
```bash
asv continuous master^! HEAD^! --factor 1.1
```
- A factor of `1.1` means the new commit is allowed to be up to 10% slower (or faster) than the master commit.
- `^` means that we refer to the COMMIT at the tip of the branch, not the BRANCH itself.
Without it, we would run benchmarks for the whole branch history.
With it, the range starts FROM the latest commit (inclusive).
- `!` means that we select a range spanning a single commit.
Without it, we would run benchmarks for all commits FROM the latest commit
TO the start of the branch history.
With it, we run benchmarks ONLY FOR the latest commit (see the git example below).
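For intuition, you can inspect the same range with plain git, assuming a local checkout — `master^!` resolves to just the tip commit of `master`:

```bash
# Prints only the single commit that `asv run master^!` would benchmark.
git rev-list master^!
```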
### More Examples
Notes:
- Use `~1` to select the second-latest commit, `~2` for the third-latest, etc.
Generate benchmarks for the latest commit in `master` branch.
```bash
asv run master^!
```
Generate benchmarks for the second-latest commit in `master` branch.
```bash
asv run master~1^!
```
Generate benchmarks for all commits in `master` branch.
```bash
asv run master
```
Generate benchmarks for all commits in `master` branch, but exclude the latest commit.
```bash
asv run master~1
```
Generate benchmarks for the LAST 5 commits in `master` branch, but exclude the latest commit.
```bash
asv run master~1 --steps 5
```

benchmarks/__init__.py (new, empty file)

Binary file not shown (image, 321 KiB).

@@ -0,0 +1,447 @@
# Write the benchmarking functions here
# See "Writing benchmarks" in the asv docs for more information.
import re
from pathlib import Path
from types import ModuleType
from typing import Literal
# Fix for https://github.com/airspeed-velocity/asv_runner/pull/44
import benchmarks.monkeypatch_asv # noqa: F401
from benchmarks.utils import benchmark, create_virtual_module
DJC_VS_DJ_GROUP = "Components vs Django"
DJC_ISOLATED_VS_NON_GROUP = "isolated vs django modes"
OTHER_GROUP = "Other"
DjcContextMode = Literal["isolated", "django"]
TemplatingRenderer = Literal["django", "django-components", "none"]
TemplatingTestSize = Literal["lg", "sm"]
TemplatingTestType = Literal[
"first", # Testing performance of the first time the template is rendered
"subsequent", # Testing performance of the subsequent times the template is rendered
"startup", # Testing performance of the startup time (e.g. defining classes and templates)
]
def _get_templating_filepath(renderer: TemplatingRenderer, size: TemplatingTestSize) -> Path:
if renderer == "none":
raise ValueError("Cannot get filepath for renderer 'none'")
elif renderer not in ["django", "django-components"]:
raise ValueError(f"Invalid renderer: {renderer}")
if size not in ("lg", "sm"):
raise ValueError(f"Invalid size: {size}, must be one of ('lg', 'sm')")
# At this point, we know the renderer is either "django" or "django-components"
root = file_path = Path(__file__).parent.parent
if renderer == "django":
if size == "lg":
file_path = root / "tests" / "test_benchmark_django.py"
else:
file_path = root / "tests" / "test_benchmark_django_small.py"
else:
if size == "lg":
file_path = root / "tests" / "test_benchmark_djc.py"
else:
file_path = root / "tests" / "test_benchmark_djc_small.py"
return file_path
def _get_templating_script(
renderer: TemplatingRenderer,
size: TemplatingTestSize,
context_mode: DjcContextMode,
imports_only: bool,
) -> str:
if renderer == "none":
return ""
elif renderer not in ["django", "django-components"]:
raise ValueError(f"Invalid renderer: {renderer}")
# At this point, we know the renderer is either "django" or "django-components"
file_path = _get_templating_filepath(renderer, size)
contents = file_path.read_text()
# The files with benchmarked code also have a section for testing them with pytest.
# We remove that pytest section, so the script is only the benchmark code.
contents = contents.split("# ----------- TESTS START ------------ #")[0]
if imports_only:
# There is a benchmark test for measuring the time it takes to import the module.
# For that, we exclude from the code everything AFTER this line
contents = contents.split("# ----------- IMPORTS END ------------ #")[0]
else:
# Set the context mode by replacing variable in the script
contents = re.sub(r"CONTEXT_MODE.*?\n", f"CONTEXT_MODE = '{context_mode}'\n", contents, count=1)
return contents
def _get_templating_module(
renderer: TemplatingRenderer,
size: TemplatingTestSize,
context_mode: DjcContextMode,
imports_only: bool,
) -> ModuleType:
if renderer not in ("django", "django-components"):
raise ValueError(f"Invalid renderer: {renderer}")
file_path = _get_templating_filepath(renderer, size)
script = _get_templating_script(renderer, size, context_mode, imports_only)
# This makes it possible to import the module in the benchmark function
# as `import test_templating`
module = create_virtual_module("test_templating", script, str(file_path))
return module
# The `timeraw_` tests run in separate processes. But when running memory benchmarks,
# the tested logic runs in the same process as the one where we run the benchmark functions
# (e.g. `peakmem_render_lg_first()`). Thus, the `peakmem_` functions have access to this file
# when the tested logic runs.
#
# Secondly, `asv` doesn't offer any way to pass data from `setup` to the actual test.
#
# And so we define this global, which the `setup` function populates when running memory
# benchmarks. We then trigger the actual render from within the test body.
do_render = lambda: None # noqa: E731
def setup_templating_memory_benchmark(
renderer: TemplatingRenderer,
size: TemplatingTestSize,
test_type: TemplatingTestType,
context_mode: DjcContextMode,
imports_only: bool = False,
):
global do_render
module = _get_templating_module(renderer, size, context_mode, imports_only)
data = module.gen_render_data()
render = module.render
do_render = lambda: render(data) # noqa: E731
# Do the first render as part of setup if we're testing the subsequent renders
if test_type == "subsequent":
do_render()
# The timing benchmarks run the actual code in a separate process, by using the `timeraw_` prefix.
# As such, we don't actually load the code in this file. Instead, we only prepare a script (raw string)
# that will be run in the new process.
def prepare_templating_benchmark(
renderer: TemplatingRenderer,
size: TemplatingTestSize,
test_type: TemplatingTestType,
context_mode: DjcContextMode,
imports_only: bool = False,
):
global do_render
setup_script = _get_templating_script(renderer, size, context_mode, imports_only)
# If we're testing the startup time, then the setup is actually the tested code
if test_type == "startup":
return setup_script
else:
# Otherwise include also data generation as part of setup
setup_script += "\n\n" "render_data = gen_render_data()\n"
# Do the first render as part of setup if we're testing the subsequent renders
if test_type == "subsequent":
setup_script += "render(render_data)\n"
benchmark_script = "render(render_data)\n"
return benchmark_script, setup_script
# - Group: django-components vs django
# - time: djc vs django (startup lg)
# - time: djc vs django (lg - FIRST)
# - time: djc vs django (sm - FIRST)
# - time: djc vs django (lg - SUBSEQUENT)
# - time: djc vs django (sm - SUBSEQUENT)
# - mem: djc vs django (lg - FIRST)
# - mem: djc vs django (sm - FIRST)
# - mem: djc vs django (lg - SUBSEQUENT)
# - mem: djc vs django (sm - SUBSEQUENT)
#
# NOTE: While the name suggests we're comparing Django and Django-components, be aware that
# in our "Django" tests, we still install and import django-components. We also use
# django-components's `{% html_attrs %}` tag in the Django scenario. `{% html_attrs %}`
# was used because the original sample code was from django-components.
#
# As such, these tests should be seen not as "Using Django vs Using Components", but rather
# as "What is the relative cost of using Components?".
#
# As an example, the benchmarking for the startup time and memory usage is not comparing
# two independent approaches. Rather, the test is checking if defining Components classes
# is more expensive than vanilla Django templates.
class DjangoComponentsVsDjangoTests:
# Testing startup time (e.g. defining classes and templates)
@benchmark(
pretty_name="startup - large",
group_name=DJC_VS_DJ_GROUP,
number=1,
rounds=5,
params={
"renderer": ["django", "django-components"],
},
)
def timeraw_startup_lg(self, renderer: TemplatingRenderer):
return prepare_templating_benchmark(renderer, "lg", "startup", "isolated")
@benchmark(
pretty_name="render - small - first render",
group_name=DJC_VS_DJ_GROUP,
number=1,
rounds=5,
params={
"renderer": ["django", "django-components"],
},
)
def timeraw_render_sm_first(self, renderer: TemplatingRenderer):
return prepare_templating_benchmark(renderer, "sm", "first", "isolated")
@benchmark(
pretty_name="render - small - second render",
group_name=DJC_VS_DJ_GROUP,
number=1,
rounds=5,
params={
"renderer": ["django", "django-components"],
},
)
def timeraw_render_sm_subsequent(self, renderer: TemplatingRenderer):
return prepare_templating_benchmark(renderer, "sm", "subsequent", "isolated")
@benchmark(
pretty_name="render - large - first render",
group_name=DJC_VS_DJ_GROUP,
number=1,
rounds=5,
params={
"renderer": ["django", "django-components"],
},
include_in_quick_benchmark=True,
)
def timeraw_render_lg_first(self, renderer: TemplatingRenderer):
return prepare_templating_benchmark(renderer, "lg", "first", "isolated")
@benchmark(
pretty_name="render - large - second render",
group_name=DJC_VS_DJ_GROUP,
number=1,
rounds=5,
params={
"renderer": ["django", "django-components"],
},
)
def timeraw_render_lg_subsequent(self, renderer: TemplatingRenderer):
return prepare_templating_benchmark(renderer, "lg", "subsequent", "isolated")
@benchmark(
pretty_name="render - small - first render (mem)",
group_name=DJC_VS_DJ_GROUP,
number=1,
rounds=5,
params={
"renderer": ["django", "django-components"],
},
setup=lambda renderer: setup_templating_memory_benchmark(renderer, "sm", "first", "isolated"),
)
def peakmem_render_sm_first(self, renderer: TemplatingRenderer):
do_render()
@benchmark(
pretty_name="render - small - second render (mem)",
group_name=DJC_VS_DJ_GROUP,
number=1,
rounds=5,
params={
"renderer": ["django", "django-components"],
},
setup=lambda renderer: setup_templating_memory_benchmark(renderer, "sm", "subsequent", "isolated"),
)
def peakmem_render_sm_subsequent(self, renderer: TemplatingRenderer):
do_render()
@benchmark(
pretty_name="render - large - first render (mem)",
group_name=DJC_VS_DJ_GROUP,
number=1,
rounds=5,
params={
"renderer": ["django", "django-components"],
},
setup=lambda renderer: setup_templating_memory_benchmark(renderer, "lg", "first", "isolated"),
)
def peakmem_render_lg_first(self, renderer: TemplatingRenderer):
do_render()
@benchmark(
pretty_name="render - large - second render (mem)",
group_name=DJC_VS_DJ_GROUP,
number=1,
rounds=5,
params={
"renderer": ["django", "django-components"],
},
setup=lambda renderer: setup_templating_memory_benchmark(renderer, "lg", "subsequent", "isolated"),
)
def peakmem_render_lg_subsequent(self, renderer: TemplatingRenderer):
do_render()
# - Group: Django-components "isolated" vs "django" modes
# - time: Isolated vs django djc (startup lg)
# - time: Isolated vs django djc (lg - FIRST)
# - time: Isolated vs django djc (sm - FIRST)
# - time: Isolated vs django djc (lg - SUBSEQUENT)
# - time: Isolated vs django djc (sm - SUBSEQUENT)
# - mem: Isolated vs django djc (lg - FIRST)
# - mem: Isolated vs django djc (sm - FIRST)
# - mem: Isolated vs django djc (lg - SUBSEQUENT)
# - mem: Isolated vs django djc (sm - SUBSEQUENT)
class IsolatedVsDjangoContextModesTests:
# Testing startup time (e.g. defining classes and templates)
@benchmark(
pretty_name="startup - large",
group_name=DJC_ISOLATED_VS_NON_GROUP,
number=1,
rounds=5,
params={
"context_mode": ["isolated", "django"],
},
)
def timeraw_startup_lg(self, context_mode: DjcContextMode):
return prepare_templating_benchmark("django-components", "lg", "startup", context_mode)
@benchmark(
pretty_name="render - small - first render",
group_name=DJC_ISOLATED_VS_NON_GROUP,
number=1,
rounds=5,
params={
"context_mode": ["isolated", "django"],
},
)
def timeraw_render_sm_first(self, context_mode: DjcContextMode):
return prepare_templating_benchmark("django-components", "sm", "first", context_mode)
@benchmark(
pretty_name="render - small - second render",
group_name=DJC_ISOLATED_VS_NON_GROUP,
number=1,
rounds=5,
params={
"context_mode": ["isolated", "django"],
},
)
def timeraw_render_sm_subsequent(self, context_mode: DjcContextMode):
return prepare_templating_benchmark("django-components", "sm", "subsequent", context_mode)
@benchmark(
pretty_name="render - large - first render",
group_name=DJC_ISOLATED_VS_NON_GROUP,
number=1,
rounds=5,
params={
"context_mode": ["isolated", "django"],
},
)
def timeraw_render_lg_first(self, context_mode: DjcContextMode):
return prepare_templating_benchmark("django-components", "lg", "first", context_mode)
@benchmark(
pretty_name="render - large - second render",
group_name=DJC_ISOLATED_VS_NON_GROUP,
number=1,
rounds=5,
params={
"context_mode": ["isolated", "django"],
},
)
def timeraw_render_lg_subsequent(self, context_mode: DjcContextMode):
return prepare_templating_benchmark("django-components", "lg", "subsequent", context_mode)
@benchmark(
pretty_name="render - small - first render (mem)",
group_name=DJC_ISOLATED_VS_NON_GROUP,
number=1,
rounds=5,
params={
"context_mode": ["isolated", "django"],
},
setup=lambda context_mode: setup_templating_memory_benchmark("django-components", "sm", "first", context_mode),
)
def peakmem_render_sm_first(self, context_mode: DjcContextMode):
do_render()
@benchmark(
pretty_name="render - small - second render (mem)",
group_name=DJC_ISOLATED_VS_NON_GROUP,
number=1,
rounds=5,
params={
"context_mode": ["isolated", "django"],
},
setup=lambda context_mode: setup_templating_memory_benchmark(
"django-components",
"sm",
"subsequent",
context_mode,
),
)
def peakmem_render_sm_subsequent(self, context_mode: DjcContextMode):
do_render()
@benchmark(
pretty_name="render - large - first render (mem)",
group_name=DJC_ISOLATED_VS_NON_GROUP,
number=1,
rounds=5,
params={
"context_mode": ["isolated", "django"],
},
setup=lambda context_mode: setup_templating_memory_benchmark(
"django-components",
"lg",
"first",
context_mode,
),
)
def peakmem_render_lg_first(self, context_mode: DjcContextMode):
do_render()
@benchmark(
pretty_name="render - large - second render (mem)",
group_name=DJC_ISOLATED_VS_NON_GROUP,
number=1,
rounds=5,
params={
"context_mode": ["isolated", "django"],
},
setup=lambda context_mode: setup_templating_memory_benchmark(
"django-components",
"lg",
"subsequent",
context_mode,
),
)
def peakmem_render_lg_subsequent(self, context_mode: DjcContextMode):
do_render()
class OtherTests:
@benchmark(
pretty_name="import time",
group_name=OTHER_GROUP,
number=1,
rounds=5,
)
def timeraw_import_time(self):
return prepare_templating_benchmark("django-components", "lg", "startup", "isolated", imports_only=True)


@@ -1,174 +0,0 @@
from time import perf_counter
from django.template import Context, Template
from django_components import Component, registry, types
from django_components.dependencies import CSS_DEPENDENCY_PLACEHOLDER, JS_DEPENDENCY_PLACEHOLDER
from tests.django_test_setup import * # NOQA
from tests.testutils import BaseTestCase, create_and_process_template_response
class SlottedComponent(Component):
template: types.django_html = """
{% load component_tags %}
<custom-template>
<header>{% slot "header" %}Default header{% endslot %}</header>
<main>{% slot "main" %}Default main{% endslot %}</main>
<footer>{% slot "footer" %}Default footer{% endslot %}</footer>
</custom-template>
"""
class SimpleComponent(Component):
template: types.django_html = """
Variable: <strong>{{ variable }}</strong>
"""
css_file = "style.css"
js_file = "script.js"
def get_context_data(self, variable, variable2="default"):
return {
"variable": variable,
"variable2": variable2,
}
class BreadcrumbComponent(Component):
template: types.django_html = """
<div class="breadcrumb-container">
<nav class="breadcrumbs">
<ol typeof="BreadcrumbList" vocab="https://schema.org/" aria-label="breadcrumbs">
{% for label, url in links %}
<li property="itemListElement" typeof="ListItem">
<a class="breadcrumb-current-page" property="item" typeof="WebPage" href="{{ url }}">
<span property="name">{{ label }}</span>
</a>
<meta property="position" content="4">
</li>
{% endfor %}
</ol>
</nav>
</div>
"""
css_file = "test.css"
js_file = "test.js"
LINKS = [
(
"https://developer.mozilla.org/en-US/docs/Learn",
"Learn web development",
),
(
"https://developer.mozilla.org/en-US/docs/Learn/HTML",
"Structuring the web with HTML",
),
(
"https://developer.mozilla.org/en-US/docs/Learn/HTML/Introduction_to_HTML",
"Introduction to HTML",
),
(
"https://developer.mozilla.org/en-US/docs/Learn/HTML/Introduction_to_HTML/Document_and_website_structure",
"Document and website structure",
),
]
def get_context_data(self, items):
if items > 4:
items = 4
elif items < 0:
items = 0
return {"links": self.LINKS[: items - 1]}
EXPECTED_CSS = """<link href="test.css" media="all" rel="stylesheet">"""
EXPECTED_JS = """<script src="test.js"></script>"""
class RenderBenchmarks(BaseTestCase):
def setUp(self):
registry.clear()
registry.register("test_component", SlottedComponent)
registry.register("inner_component", SimpleComponent)
registry.register("breadcrumb_component", BreadcrumbComponent)
@staticmethod
def timed_loop(func, iterations=1000):
"""Run func iterations times, and return the time in ms per iteration."""
start_time = perf_counter()
for _ in range(iterations):
func()
end_time = perf_counter()
total_elapsed = end_time - start_time # NOQA
return total_elapsed * 1000 / iterations
def test_render_time_for_small_component(self):
template_str: types.django_html = """
{% load component_tags %}
{% component 'test_component' %}
{% slot "header" %}
{% component 'inner_component' variable='foo' %}{% endcomponent %}
{% endslot %}
{% endcomponent %}
"""
template = Template(template_str)
print(f"{self.timed_loop(lambda: template.render(Context({})))} ms per iteration")
def test_middleware_time_with_dependency_for_small_page(self):
template_str: types.django_html = """
{% load component_tags %}
{% component_js_dependencies %}
{% component_css_dependencies %}
{% component 'test_component' %}
{% slot "header" %}
{% component 'inner_component' variable='foo' %}{% endcomponent %}
{% endslot %}
{% endcomponent %}
"""
template = Template(template_str)
# Sanity tests
response_content = create_and_process_template_response(template)
self.assertNotIn(CSS_DEPENDENCY_PLACEHOLDER, response_content)
self.assertNotIn(JS_DEPENDENCY_PLACEHOLDER, response_content)
self.assertIn("style.css", response_content)
self.assertIn("script.js", response_content)
without_middleware = self.timed_loop(
lambda: create_and_process_template_response(template, use_middleware=False)
)
with_middleware = self.timed_loop(lambda: create_and_process_template_response(template, use_middleware=True))
print("Small page middleware test")
self.report_results(with_middleware, without_middleware)
def test_render_time_with_dependency_for_large_page(self):
from django.template.loader import get_template
template = get_template("mdn_complete_page.html")
response_content = create_and_process_template_response(template, {})
self.assertNotIn(CSS_DEPENDENCY_PLACEHOLDER, response_content)
self.assertNotIn(JS_DEPENDENCY_PLACEHOLDER, response_content)
self.assertIn("test.css", response_content)
self.assertIn("test.js", response_content)
without_middleware = self.timed_loop(
lambda: create_and_process_template_response(template, {}, use_middleware=False)
)
with_middleware = self.timed_loop(
lambda: create_and_process_template_response(template, {}, use_middleware=True)
)
print("Large page middleware test")
self.report_results(with_middleware, without_middleware)
@staticmethod
def report_results(with_middleware, without_middleware):
print(f"Middleware active\t\t{with_middleware:.3f} ms per iteration")
print(f"Middleware inactive\t{without_middleware:.3f} ms per iteration")
time_difference = with_middleware - without_middleware
if without_middleware > with_middleware:
print(f"Decrease of {-100 * time_difference / with_middleware:.2f}%")
else:
print(f"Increase of {100 * time_difference / without_middleware:.2f}%")


@@ -0,0 +1,29 @@
from asv_runner.benchmarks.timeraw import TimerawBenchmark, _SeparateProcessTimer
# Fix for https://github.com/airspeed-velocity/asv_runner/pull/44
def _get_timer(self, *param):
"""
Returns a timer that runs the benchmark function in a separate process.
#### Parameters
**param** (`tuple`)
: The parameters to pass to the benchmark function.
#### Returns
**timer** (`_SeparateProcessTimer`)
: A timer that runs the function in a separate process.
"""
if param:
def func():
# ---------- OUR CHANGES: ADDED RETURN STATEMENT ----------
return self.func(*param)
# ---------- OUR CHANGES END ----------
else:
func = self.func
return _SeparateProcessTimer(func)
TimerawBenchmark._get_timer = _get_timer


@@ -0,0 +1,66 @@
# ------------ FIX FOR #45 ------------
# See https://github.com/airspeed-velocity/asv_runner/issues/45
# This fix is applied in CI in the `benchmark.yml` file.
# This file is intentionally named `monkeypatch_asv_ci.txt` to avoid being
# loaded as a python file by `asv`.
# -------------------------------------
def timeit(self, number):
"""
Run the function's code `number` times in a separate Python process, and
return the execution time.
#### Parameters
**number** (`int`)
: The number of times to execute the function's code.
#### Returns
**time** (`float`)
: The time it took to execute the function's code `number` times.
#### Notes
The function's code is executed in a separate Python process to avoid
interference from the parent process. The function can return either a
single string of code to be executed, or a tuple of two strings: the
code to be executed and the setup code to be run before timing.
"""
stmt = self.func()
if isinstance(stmt, tuple):
stmt, setup = stmt
else:
setup = ""
stmt = textwrap.dedent(stmt)
setup = textwrap.dedent(setup)
stmt = stmt.replace(r'"""', r"\"\"\"")
setup = setup.replace(r'"""', r"\"\"\"")
# TODO
# -----------ORIGINAL CODE-----------
# code = self.subprocess_tmpl.format(stmt=stmt, setup=setup, number=number)
# res = subprocess.check_output([sys.executable, "-c", code])
# return float(res.strip())
# -----------NEW CODE-----------
code = self.subprocess_tmpl.format(stmt=stmt, setup=setup, number=number)
evaler = textwrap.dedent(
"""
import sys
code = sys.stdin.read()
exec(code)
"""
)
proc = subprocess.Popen([sys.executable, "-c", evaler],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdout, stderr = proc.communicate(input=code.encode("utf-8"))
if proc.returncode != 0:
raise RuntimeError(f"Subprocess failed: {stderr.decode()}")
return float(stdout.decode("utf-8").strip())
_SeparateProcessTimer.timeit = timeit
# ------------ END FIX #45 ------------


@@ -1,195 +0,0 @@
# NOTE: This file is more of a playground than a proper test
import timeit
from typing import List, Tuple
from django.template.base import DebugLexer, Lexer, Token
from django_components.util.template_parser import parse_template
def django_lexer(template: str) -> List[Token]:
"""Use Django's built-in lexer to tokenize a template."""
lexer = Lexer(template)
return list(lexer.tokenize())
def django_debug_lexer(template: str) -> List[Token]:
"""Use Django's built-in lexer to tokenize a template."""
lexer = DebugLexer(template)
return list(lexer.tokenize())
def run_benchmark(template: str, num_iterations: int = 5000) -> Tuple[float, float]:
"""Run performance comparison between Django and custom lexer."""
# django_time = timeit.timeit(lambda: django_lexer(template), number=num_iterations)
django_debug_time = timeit.timeit(lambda: django_debug_lexer(template), number=num_iterations)
custom_time = timeit.timeit(lambda: parse_template(template), number=num_iterations)
# return django_time, django_debug_time
return django_debug_time, custom_time
def print_benchmark_results(template: str, django_time: float, custom_time: float, num_iterations: int) -> None:
"""Print formatted benchmark results."""
print(f"\nTemplate: {template}")
print(f"Iterations: {num_iterations}")
print(f"Django Lexer: {django_time:.6f} seconds")
print(f"Custom Lexer: {custom_time:.6f} seconds")
print(f"Difference: {abs(django_time - custom_time):.6f} seconds")
print(f"Custom lexer is {(django_time / custom_time):.2f}x {'faster' if custom_time < django_time else 'slower'}")
if __name__ == "__main__":
test_cases = [
# Simple text
"Hello World",
# Simple variable
"Hello {{ name }}",
# Simple block
"{% if condition %}Hello{% endif %}",
# Complex nested template
"""
{% extends "base.html" %}
{% block content %}
<h1>{{ title }}</h1>
{% for item in items %}
<div class="{{ item.class }}">
{{ item.name }}
{% if item.description %}
<p>{{ item.description }}</p>
{% endif %}
</div>
{% endfor %}
{% endblock %}
""",
# Component with nested tags
"""
{% component 'table'
headers=headers
rows=rows
footer="{% slot 'footer' %}Total: {{ total }}{% endslot %}"
title="{% trans 'Data Table' %}"
%}
""",
# Real world example
"""
<div class="prose flex flex-col gap-8">
{# Info section #}
<div class="border-b border-neutral-300">
<div class="flex justify-between items-start">
<h3 class="mt-0">Project Info</h3>
{% if editable %}
{% component "Button"
href=project_edit_url
attrs:class="not-prose"
footer="{% slot 'footer' %}Total: {{ total }}{% endslot %}"
title="{% trans 'Data Table' %}"
%}
Edit Project
{% endcomponent %}
{% endif %}
</div>
<table>
{% for key, value in project_info %}
<tr>
<td class="font-bold pr-4">
{{ key }}:
</td>
<td>
{{ value }}
</td>
</tr>
{% endfor %}
</table>
</div>
{# Status Updates section #}
{% component "ProjectStatusUpdates"
project_id=project.pk
status_updates=status_updates
editable=editable
footer="{% slot 'footer' %}Total: {{ total }}{% endslot %}"
title="{% trans 'Data Table' %}"
/ %}
<div class="xl:grid xl:grid-cols-2 gap-10">
{# Team section #}
<div class="border-b border-neutral-300">
<div class="flex justify-between items-start">
<h3 class="mt-0">Dcode Team</h3>
{% if editable %}
{% component "Button"
href=edit_project_roles_url
attrs:class="not-prose"
footer="{% slot 'footer' %}Total: {{ total }}{% endslot %}"
title="{% trans 'Data Table' %}"
%}
Edit Team
{% endcomponent %}
{% endif %}
</div>
{% component "ProjectUsers"
project_id=project.pk
roles_with_users=roles_with_users
editable=False
footer="{% slot 'footer' %}Total: {{ total }}{% endslot %}"
title="{% trans 'Data Table' %}"
/ %}
</div>
{# POCs section #}
<div>
<div class="flex justify-between items-start max-xl:mt-6">
<h3 class="mt-0">Client POCs</h3>
{% if editable %}
{% component "Button"
href=edit_pocs_url
attrs:class="not-prose"
footer="{% slot 'footer' %}Total: {{ total }}{% endslot %}"
title="{% trans 'Data Table' %}"
%}
Edit POCs
{% endcomponent %}
{% endif %}
</div>
{% if poc_data %}
<table>
<tr>
<th>Name</th>
<th>Job Title</th>
<th>Hubspot Profile</th>
</tr>
{% for data in poc_data %}
<tr>
<td>{{ data.poc.contact.first_name }} {{ data.poc.contact.last_name }}</td>
<td>{{ data.poc.contact.job_title }}</td>
<td>
{% component "Icon"
href=data.hubspot_url
name="arrow-top-right-on-square"
variant="outline"
color="text-gray-400 hover:text-gray-500"
footer="{% slot 'footer' %}Total: {{ total }}{% endslot %}"
title="{% trans 'Data Table' %}"
/ %}
</td>
</tr>
{% endfor %}
</table>
{% else %}
<p class="text-sm italic">No entries</p>
{% endif %}
</div>
</div>
</div>
""",
]
for template in test_cases:
django_time, custom_time = run_benchmark(template)
print_benchmark_results(template, django_time, custom_time, 200)

benchmarks/utils.py (new file, 99 lines)

@@ -0,0 +1,99 @@
import os
import sys
from importlib.abc import Loader
from importlib.util import spec_from_loader, module_from_spec
from types import ModuleType
from typing import Any, Dict, List, Optional
# NOTE: benchmark_name constraints:
# - MUST BE UNIQUE
# - MUST NOT CONTAIN `-`
# - MUST START WITH `time_`, `mem_`, `peakmem_`
# See https://github.com/airspeed-velocity/asv/pull/1470
def benchmark(
*,
pretty_name: Optional[str] = None,
timeout: Optional[int] = None,
group_name: Optional[str] = None,
params: Optional[Dict[str, List[Any]]] = None,
number: Optional[int] = None,
min_run_count: Optional[int] = None,
include_in_quick_benchmark: bool = False,
**kwargs,
):
def decorator(func):
# For pull requests, we want to run benchmarks only for a subset of tests,
# because the full set of tests takes about 10 minutes to run (5 min per commit).
# This is done by setting DJC_BENCHMARK_QUICK=1 in the environment.
if os.getenv("DJC_BENCHMARK_QUICK") and not include_in_quick_benchmark:
# By setting the benchmark name to something that does NOT start with
# valid prefixes like `time_`, `mem_`, or `peakmem_`, this function will be ignored by asv.
func.benchmark_name = "noop"
return func
# "group_name" is our custom field, which we actually convert to asv's "benchmark_name"
if group_name is not None:
benchmark_name = f"{group_name}.{func.__name__}"
func.benchmark_name = benchmark_name
# Also "params" is custom, so we normalize it to "params" and "param_names"
if params is not None:
func.params, func.param_names = list(params.values()), list(params.keys())
if pretty_name is not None:
func.pretty_name = pretty_name
if timeout is not None:
func.timeout = timeout
if number is not None:
func.number = number
if min_run_count is not None:
func.min_run_count = min_run_count
# Additional, untyped kwargs
for k, v in kwargs.items():
setattr(func, k, v)
return func
return decorator
class VirtualModuleLoader(Loader):
def __init__(self, code_string):
self.code_string = code_string
def exec_module(self, module):
exec(self.code_string, module.__dict__)
def create_virtual_module(name: str, code_string: str, file_path: str) -> ModuleType:
"""
To avoid the headaches of importing the tested code from another directory,
we create a "virtual" module that we can import from anywhere.
E.g.
```py
from benchmarks.utils import create_virtual_module
create_virtual_module("my_module", "print('Hello, world!')", __file__)
# Now you can import my_module from anywhere
import my_module
```
"""
# Create the module specification
spec = spec_from_loader(name, VirtualModuleLoader(code_string))
# Create the module
module = module_from_spec(spec) # type: ignore[arg-type]
module.__file__ = file_path
module.__name__ = name
# Add it to sys.modules
sys.modules[name] = module
# Execute the module
spec.loader.exec_module(module) # type: ignore[union-attr]
return module