Compare commits

...

381 commits

Author SHA1 Message Date
Paul Gauthier
1354e0bfa4 copy
2025-12-18 10:37:28 -08:00
Paul Gauthier
2d54feaa09 chore: Update dependencies in requirements files 2025-12-18 10:37:02 -08:00
Paul Gauthier
7ef1dcd64a fix: Add ImageFetchError to aider's exceptions list
Co-authored-by: aider (gemini/gemini-3-flash-preview) <aider@aider.chat>
2025-12-18 10:34:52 -08:00
Paul Gauthier
656301cc87 fix: Add ErrorEventError to aider's exceptions list
Co-authored-by: aider (gemini/gemini-3-flash-preview) <aider@aider.chat>
2025-12-18 10:34:30 -08:00
Paul Gauthier
2c39cb68c7 fix: Add BadGatewayError to exceptions list
Co-authored-by: aider (gemini/gemini-3-flash-preview) <aider@aider.chat>
2025-12-18 10:33:52 -08:00
Paul Gauthier
7e292a2efc feat: Add gemini-3-flash-preview model entries
Co-authored-by: aider (gemini/gemini-3-flash-preview) <aider@aider.chat>
2025-12-18 10:29:33 -08:00
paul-gauthier
7301d4522d
Merge pull request #4698 from codeofdusk/gpt-5.2
2025-12-11 11:28:27 -08:00
Bill Dengler
4ff959f55b Add gpt-5.2 models 2025-12-11 11:14:33 -08:00
Paul Gauthier
5683f1c089 bump ts
2025-11-22 11:06:46 -08:00
Paul Gauthier
3bdd49c8ab bump deps
2025-11-21 15:22:54 -08:00
Paul Gauthier
dbc7ee0d0b copy 2025-11-21 15:22:43 -08:00
Paul Gauthier
a65e0892b9 feat: Add use_temperature: false to openrouter/openai/gpt-5-pro
Co-authored-by: aider (gemini/gemini-3-pro-preview) <aider@aider.chat>
2025-11-21 15:22:14 -08:00
Paul Gauthier
7360bb5064 feat: Add use_temperature: false to gpt-5 models 2025-11-21 15:22:14 -08:00
Paul Gauthier
f626e44a0d copy
2025-11-20 20:34:03 -08:00
Paul Gauthier
90ac33cb88 copy 2025-11-20 20:33:46 -08:00
Paul Gauthier
5b0f6ce9e9 Merge branch 'main' of github.com:Aider-AI/aider
2025-11-20 11:13:35 -08:00
paul-gauthier
58eae2f94a
Merge pull request #4656 from codeofdusk/new-models-20251118
2025-11-19 09:08:49 -08:00
Bill Dengler
ab29b99518 Add Gemini 3 2025-11-18 13:54:03 -08:00
Bill Dengler
a719c2848b Add gpt-5.1 2025-11-18 13:54:03 -08:00
Bill Dengler
749dee8f30 Add support for gpt-5-pro 2025-11-18 13:54:03 -08:00
paul-gauthier
c74f5efb2f
Merge pull request #4621 from TimPut/repomap-haskell
2025-11-02 09:43:18 -08:00
paul-gauthier
9fbfa36c27
Merge pull request #4620 from TimPut/repomap-zig 2025-11-02 09:42:56 -08:00
timput
be8da40b1f add initial zig-tags.scm for repomap 2025-11-02 08:03:26 -07:00
timput
93f20a6d23 add initial haskell-tags.scm for repomap 2025-11-02 07:57:29 -07:00
Paul Gauthier
92de49a50d Merge branch 'main' of github.com:Aider-AI/aider 2025-10-08 06:53:50 -07:00
paul-gauthier
11516d6d6b
Merge pull request #4557 from muravvv/remove_duplicated_language
2025-10-05 12:11:17 -07:00
muravvv
bfed819c19 Remove duplicate instruction in what language model should respond
the same instruction included in {final_reminders}
2025-10-05 20:54:04 +03:00
paul-gauthier
1a6d035653
Merge pull request #4552 from gcp/deepseek32-pricing
2025-10-04 05:19:09 -07:00
Gian-Carlo Pascutto
cb6a152e5e chore: update deepseek model names and metadata 2025-10-04 11:14:08 +02:00
paul-gauthier
d47d689d18
Merge pull request #4551 from gcp/deepseek32-pricing
2025-10-03 05:54:06 -07:00
Gian-Carlo Pascutto
484e47d029 chore: add deepseek model test results to leaderboard 2025-10-03 13:11:00 +02:00
Gian-Carlo Pascutto
cbb5376197 feat: update deepseek model metadata and add deepseek-reasoner 2025-10-03 11:18:37 +02:00
paul-gauthier
6963e65887
Merge pull request #4547 from mlang/gpt-5-codex
2025-09-30 06:23:28 -07:00
Mario Lang
a3bbb5ec22 Support for gpt-5-codex 2025-09-30 07:17:50 +02:00
paul-gauthier
73409a52c3
Merge pull request #4544 from zlemisie/main
Feature request: support for Bedrock/Claude 4.5
2025-09-29 12:30:06 -07:00
michal.sliwa
82a31cc7cc Feature request: support for Bedrock/Claude 4.5 #4543 2025-09-29 21:21:58 +02:00
Paul Gauthier
249e389765 copy 2025-09-29 10:51:58 -07:00
Paul Gauthier
a1214101c6 chore: Remove ethicalads.io scripts and divs 2025-09-29 10:30:01 -07:00
paul-gauthier
b2379d585f
Merge pull request #4541 from khulnasoft-bot/patch-1
2025-09-29 06:15:16 -07:00
KhulnaSoft bot
39b0c25ae3
added tags for FORTRAN
Reference:
Merge commit for PR #4534
Commit: 5777ab9703
2025-09-29 11:46:56 +06:00
paul-gauthier
f8aa80396d
Merge pull request #4534 from varchasgopalaswamy/main
Added tree-sitter tags for FORTRAN
2025-09-26 10:36:51 -07:00
Varchas Gopalaswamy
5777ab9703 added tags for FORTRAN 2025-09-25 19:04:44 -04:00
paul-gauthier
e4fc2f515d
Merge pull request #4493 from mubashir1osmani/main
2025-09-05 07:09:23 -07:00
mubashir1osmani
60c578e2a1 added source + license 2025-09-05 02:20:09 -04:00
Paul Gauthier
b3d339a583 copy
2025-09-02 11:00:09 -07:00
Paul Gauthier
c4b06c0870 copy
2025-09-02 08:53:17 -07:00
Paul Gauthier
bd3c5df505 Merge branch 'main' of github.com:Aider-AI/aider 2025-09-02 08:52:52 -07:00
paul-gauthier
37ab5e4b56
Merge pull request #4475 from lreeves/patch-1
Add GPT-5 with all reasoning settings to polyglot leaderboard
2025-09-02 08:52:13 -07:00
mubashir1osmani
f6ad53ea8c added julia tree sitter 2025-08-31 20:05:44 -04:00
Luke Reeves
54b266f289
Update polyglot_leaderboard.yml with medium and low reasoning 2025-08-25 10:52:12 -04:00
Luke Reeves
bfef1906bb
Update polyglot_leaderboard.yml
add GPT-5 with high reasoning
2025-08-23 13:11:56 -04:00
Paul Gauthier
ad19c7b5aa bump deps 2025-08-16 14:06:32 -07:00
Paul Gauthier
32faf82b31 chore: Update base image to python:3.10-slim-bookworm
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-08-13 10:17:40 -07:00
Paul Gauthier
59250e070e set version to 0.86.2.dev 2025-08-13 08:46:02 -07:00
Paul Gauthier
b8b521f143 version bump to 0.86.1 2025-08-13 08:46:00 -07:00
Paul Gauthier
07e25599e9 copy 2025-08-13 08:44:21 -07:00
Paul Gauthier
2b98a9ecf0 copy 2025-08-13 08:43:58 -07:00
Paul Gauthier
450a535548 feat: Add reasoning_effort setting to gpt-5 models
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-08-12 18:23:38 -07:00
Paul Gauthier
5761b087b5 refactor: Remove unused ad styles from head_custom.html
2025-08-12 06:36:43 -07:00
Paul Gauthier
da45632087 fix: Adjust ad placement for narrow screens
Co-authored-by: aider (gpt-5) <aider@aider.chat>
2025-08-11 12:37:40 -07:00
Paul Gauthier
0a88f7ce34 feat: Add EthicalAds script and ad placement
Co-authored-by: aider (gpt-5) <aider@aider.chat>
2025-08-11 12:14:42 -07:00
Paul Gauthier
9fda5c93cc chore: Update polyglot leaderboard model name and command 2025-08-11 09:45:20 -07:00
paul-gauthier
2f5bb772b5
Merge pull request #4412 from Oct4Pie/bench/gpt-oss-120b-high
Add test results for gpt-oss-120b (high) to polyglot leaderboard
2025-08-11 12:44:38 -04:00
Paul Gauthier
5a3b2f34b6 feat: Add flash-lite model alias
2025-08-09 13:44:27 -04:00
Paul Gauthier
a7d3fdc23b copy 2025-08-09 12:59:45 -03:00
Paul Gauthier
0862128d36 copy 2025-08-09 12:56:55 -03:00
Paul Gauthier
01a9b88df1 set version to 0.86.1.dev 2025-08-09 12:55:11 -03:00
Paul Gauthier
a4be6ccd87 version bump to 0.86.0 2025-08-09 12:54:51 -03:00
Paul Gauthier
4cd71acebe copy 2025-08-09 12:35:59 -03:00
Paul Gauthier
3d8290cdef copy 2025-08-09 12:29:10 -03:00
Paul Gauthier
b782437918 copy 2025-08-09 12:22:31 -03:00
Paul Gauthier
f3d5f20ad7 blame 2025-08-09 11:23:25 -03:00
Paul Gauthier
f57c0f624a feat: blame: Detect aider commits using co-authored-by
Co-authored-by: aider (gpt-5) <aider@aider.chat>
2025-08-09 09:55:52 -03:00
Paul Gauthier
071d177309 feat: Add OpenAI and OpenRouter GPT-5 model settings
Co-authored-by: aider (gpt-5) <aider@aider.chat>
2025-08-09 09:53:34 -03:00
Paul Gauthier
4e7c9f2fcd fix: Remove editor settings from models using gpt-5 nano weak model
Co-authored-by: aider (gpt-5) <aider@aider.chat>
2025-08-08 10:24:33 -03:00
Paul Gauthier
a14cb222c0 feat: Add GPT-5 model family settings
Co-authored-by: aider (gpt-5) <aider@aider.chat>
2025-08-08 09:37:44 -03:00
Paul Gauthier
3b919646a5 set version to 0.85.6.dev
2025-08-07 17:54:56 -03:00
Paul Gauthier
9702b1c199 version bump to 0.85.5 2025-08-07 17:54:54 -03:00
Paul Gauthier
7440a01015 copy
2025-08-07 17:10:47 -03:00
Paul Gauthier
3a6f217dcd feat: Add reasoning_effort setting support for GPT-5 models 2025-08-07 17:04:44 -03:00
Paul Gauthier
ceb81369ea feat: Enforce diff edit format for GPT-5 models 2025-08-07 17:03:10 -03:00
Paul Gauthier
ad49e56b24 fix: Accurately match gpt-5 and gpt-5-2025-08-07 models
Co-authored-by: aider (gpt-5) <aider@aider.chat>
2025-08-07 16:58:47 -03:00
Paul Gauthier
53c14329bf set version to 0.85.5.dev 2025-08-07 15:33:44 -03:00
Paul Gauthier
0b13b27b51 version bump to 0.85.4 2025-08-07 15:33:43 -03:00
Paul Gauthier
d9d13f23b3 copy 2025-08-07 15:31:28 -03:00
Paul Gauthier
8c982f83ce feat: Disable temperature for GPT-5 models
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-08-07 15:26:20 -03:00
Paul Gauthier
ac7e274fe0 fix: Adapt to new PostHog SDK capture method signature
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-08-07 15:26:02 -03:00
Paul Gauthier
c23ebfe688 set version to 0.85.4.dev 2025-08-07 10:42:13 -03:00
Paul Gauthier
9d778bfdac version bump to 0.85.3 2025-08-07 10:42:12 -03:00
Paul Gauthier
70f2bbb796 copy 2025-08-07 10:17:58 -03:00
Paul Gauthier
6c7870dbcf copy 2025-08-07 10:16:46 -03:00
Paul Gauthier
ece9803fdc bump deps without llama-index-core==0.12.26 2025-08-07 09:18:12 -03:00
Paul Gauthier
ad39fdb2d1 bump deps
2025-08-07 09:00:03 -03:00
Paul Gauthier
8904e2966d Merge branch 'main' of github.com:Aider-AI/aider 2025-08-07 08:50:45 -03:00
Paul Gauthier
3c9e180b54 bump deps 2025-08-07 08:50:39 -03:00
oct4pie
ac40a4c5cb
add test results for gpt-oss-120b (high) to polyglot leaderboard 2025-08-05 23:33:06 -07:00
paul-gauthier
1af0e59149
Merge pull request #4410 from liam61/main
chore: prettier scripting usage for faq
2025-08-05 08:29:08 -03:00
liam.liu
3402b151f7 chore: prettier scripting usage for faq 2025-08-05 19:18:16 +08:00
Paul Gauthier
f38200c511 copy
2025-07-18 11:05:51 +00:00
Paul Gauthier
89ad2ba2cb copy
2025-07-17 20:26:38 +00:00
Paul Gauthier
9d6ddcd0fc chore: Remove Kimi K2 model metadata and add test results 2025-07-17 18:03:14 +00:00
paul-gauthier
915ebffc8e
Update polyglot_leaderboard.yml
2025-07-17 09:34:34 +00:00
Paul Gauthier
b336dee9b0 set version to 0.85.3.dev
2025-07-15 23:22:15 +00:00
Paul Gauthier
853532c48c version bump to 0.85.2 2025-07-15 23:22:12 +00:00
Paul Gauthier
1a0ef64011 copy 2025-07-15 23:20:34 +00:00
Paul Gauthier
fe3f77176e copy 2025-07-15 23:18:53 +00:00
Paul Gauthier
2a18a186b4 copy 2025-07-15 23:17:07 +00:00
Paul Gauthier
102f6ef284 feat: Add Kimi K2 model data to polyglot leaderboard 2025-07-15 19:44:40 +00:00
paul-gauthier
90dffa9eae
Merge pull request #4342 from sentienthouseplant/bau-add-kimi-k2
2025-07-12 13:26:26 -03:00
Jack Harrington
63d3dbcc9b Add source for openrouter kimi-k2 information. Remove reminder: sys. 2025-07-12 17:10:35 +01:00
Jack Harrington
c3f0bdd391 Add kimi-k2 to model resources. 2025-07-12 16:51:17 +01:00
Jack Harrington
c24c2c862d
Update model-metadata.json 2025-07-12 16:11:31 +01:00
Paul Gauthier
7bc2e4e911 feat: Add Grok-4 and Gemini Flash Lite, enhance CLI, fix model settings
2025-07-11 19:36:06 -03:00
Paul Gauthier
f7870b6d03 Merge branch 'main' of github.com:Aider-AI/aider
2025-07-10 12:10:01 -03:00
Paul Gauthier
bd78b9fe9d feat: Add openrouter/x-ai/grok-4 model setting
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-07-10 11:35:25 -03:00
Paul Gauthier
eab51242bb feat: Add xai/grok-4 model settings 2025-07-10 11:35:24 -03:00
paul-gauthier
6a28864c22
Merge pull request #4324 from tamirzb/main
Add gemini 2.5 flash lite preview 06-17
2025-07-09 12:18:56 -03:00
Tamir Zahavi-Brunner
ef59ecbcd8 Add gemini 2.5 flash lite preview 06-17 2025-07-09 23:08:55 +08:00
Paul Gauthier
6c16498de1 fix: Display first line of commit messages in /undo output
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-07-08 11:29:59 -03:00
paul-gauthier
f22fbf9b3a
Merge pull request #4321 from yzx9/missing-cmd-output
fix: add missing output for clear command
2025-07-08 08:05:07 -03:00
Zexin Yuan
1a57730884
fix: add missing output for clear command 2025-07-08 11:28:34 +08:00
paul-gauthier
0967024304
Merge pull request #4307 from o-nix/patch-1
2025-07-04 16:47:06 -03:00
Kirill Vergun
456db697f0
Remove extra duplicated line in default commit instructions 2025-07-04 21:03:54 +02:00
paul-gauthier
3db4d378eb
Merge pull request #4300 from ei-grad/robust-model-settings-override
fix: Remove existing model settings before adding new ones
2025-07-01 11:12:13 -03:00
Andrew Grigorev
02c27732af fix: Remove existing model settings before adding new ones
Fix #4298

Co-authored-by: aider (vertex_ai/gemini-2.5-pro) <aider@aider.chat>
2025-07-01 11:59:14 +03:00
Paul Gauthier
966cf2b9fb copy
2025-06-30 09:52:19 -03:00
Paul Gauthier
ac46e14ce4 set version to 0.85.2.dev 2025-06-30 09:51:05 -03:00
Paul Gauthier
9c9c6fe0b8 version bump to 0.85.1 2025-06-30 09:51:03 -03:00
Paul Gauthier
59a5190267 copy 2025-06-30 09:49:49 -03:00
Paul Gauthier
302b0cb5f9 chore: Add latest polyglot leaderboard results and adjust display cap 2025-06-30 09:49:21 -03:00
Paul Gauthier
f4605b2a86 feat: Display model announcements with no-arg /model command
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-06-29 09:01:10 -07:00
Paul Gauthier
3fec90340b chore: Remove commented out extra_params from model settings 2025-06-28 06:46:35 -07:00
Paul Gauthier
531838096b chore: Update polyglot leaderboard data
2025-06-27 17:33:58 -07:00
Paul Gauthier
540b27b577 blame 2025-06-27 17:03:14 -07:00
Paul Gauthier
4f4f00f37c set version to 0.85.1.dev 2025-06-27 16:39:50 -07:00
Paul Gauthier
a544112ff3 version bump to 0.85.0 2025-06-27 16:39:48 -07:00
Paul Gauthier
e0e2cb109a chore: Ignore Docker bash history file 2025-06-27 16:37:31 -07:00
Paul Gauthier
66cdfdefd5 copy 2025-06-27 16:37:02 -07:00
Paul Gauthier
d5785b57a4 copy 2025-06-27 16:36:27 -07:00
Paul Gauthier
ae539fb1f5 copy
2025-06-25 15:30:36 -07:00
Paul Gauthier
f5a512ba65 refactor: Remove -n short flag from benchmark --new option 2025-06-25 14:04:48 -07:00
Paul Gauthier
c48fea64a1 copy 2025-06-25 13:20:51 -07:00
Paul Gauthier
68f05f5b4f bump deps 2025-06-25 13:08:34 -07:00
Paul Gauthier
323910be11 test: Improve summary test with list-aware token counting mock 2025-06-25 12:57:59 -07:00
Paul Gauthier
19a7864168 refactor: Remove unused head variable in ChatSummary 2025-06-25 11:53:24 -07:00
Paul Gauthier
320ee06cc3 style: Add trailing commas to improve formatting 2025-06-25 11:52:31 -07:00
Paul Gauthier
8fe52d7b0d Merge branch 'main' of github.com:Aider-AI/aider 2025-06-25 11:51:28 -07:00
Paul Gauthier
338cfb46e4 Revert "bump deps"
This reverts commit 63a7d261ae.
2025-06-25 11:51:22 -07:00
paul-gauthier
63c92771f2
Merge pull request #3764 from jayeshthk/main
Optimize head‑truncation loop in summarize_real()
2025-06-25 11:51:07 -07:00
paul-gauthier
a0ffc5761c
Merge pull request #3870 from susliko/custom-posthog-instance
feat: add configurable PostHog host and API key parameters
2025-06-25 11:47:18 -07:00
Paul Gauthier
d47cb40518 test: Add Clojure language repomap test fixture
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-06-25 11:40:26 -07:00
Paul Gauthier
d9e3ede000 Merge branch 'main' of github.com:Aider-AI/aider 2025-06-25 11:37:02 -07:00
paul-gauthier
ba97c83be6
Merge pull request #4028 from garrett-hopper/clojure-repomap
Add Clojure repomap queries
2025-06-25 11:36:53 -07:00
Paul Gauthier
22cdacc8e2 copy 2025-06-25 11:36:38 -07:00
Paul Gauthier
15806aa6ab feat: Enable co-authored-by by default
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-06-25 11:33:40 -07:00
Paul Gauthier
74ee340101 chore: Increase Deepseek v3 max tokens to 65536
Co-authored-by: aider (gemini/gemini-2.5-pro) <aider@aider.chat>
2025-06-25 11:28:24 -07:00
Paul Gauthier
63a7d261ae bump deps 2025-06-25 11:26:17 -07:00
Paul Gauthier
e2d3fc4594 fix: Update Co-authored-by email to aider@aider.chat 2025-06-25 11:17:07 -07:00
Paul Gauthier (aider)
14af218ea2 fix: Create parent directories for history files and improve error handling 2025-06-25 11:12:24 -07:00
Paul Gauthier
5b317e5ec0 copy 2025-06-25 11:10:21 -07:00
Paul Gauthier
75f1a33292 Merge branch 'main' of github.com:Aider-AI/aider 2025-06-25 11:08:15 -07:00
Paul Gauthier
32cdb7cfad copy 2025-06-25 11:07:59 -07:00
paul-gauthier
d022f4ac63
Merge pull request #4264 from FeepingCreature/fix/4277-accept-taggy-diff
refactor: update HEAD regex to accept optional closing tag in search blocks
2025-06-25 10:57:35 -07:00
Paul Gauthier
b787e17924 Revert "feat: better place to create history file dirs (InputOutput ctr)"
This reverts commit f695e71398.
2025-06-25 10:53:55 -07:00
Paul Gauthier
5e9daa3c56 Revert "fix: check for input_history_file none"
This reverts commit fb4d2f90c1.
2025-06-25 10:53:49 -07:00
Paul Gauthier
d5ae9eff88 Merge branch 'main' of github.com:Aider-AI/aider 2025-06-25 10:51:42 -07:00
paul-gauthier
856d94c1dc
Merge pull request #4271 from contributor/chat-history-files-auto-create-dir
fix: Auto-create parent directories for chat history files
2025-06-25 06:29:10 -07:00
contributor
fb4d2f90c1 fix: check for input_history_file none 2025-06-25 16:09:40 +03:00
paul-gauthier
d85078a610
Merge pull request #4257 from contributor/chat-history-files-auto-create-dir 2025-06-25 04:57:04 -07:00
contributor
f695e71398 feat: better place to create history file dirs (InputOutput ctr)
this decreases number of IO operations
2025-06-25 13:48:38 +03:00
Paul Gauthier
d90936662b copy 2025-06-24 16:00:13 -07:00
paul-gauthier
6a00d8ff5f
Merge pull request #4269 from iamFIREcracker/fix-for-literal-read-only-files
2025-06-24 12:34:14 -07:00
Matteo Landi
c4b9f14b90 fix: Resolve literal paths correctly in /read-only command 2025-06-24 20:58:43 +02:00
paul-gauthier
a785b0f463
Merge pull request #4268 from therealmarv/add-gemini-2.5-vertex-general-availability 2025-06-24 09:09:45 -07:00
Paul Gauthier
90ecde4da9 copy 2025-06-24 08:23:08 -07:00
Paul Gauthier
37b7a7b44f Merge branch 'main' of github.com:Aider-AI/aider 2025-06-24 08:22:33 -07:00
Paul Gauthier
89356e897e copy 2025-06-24 08:22:30 -07:00
therealmarv
05ca9e5c24 add Gemini 2.5 non-preview Vertex models 2025-06-24 17:07:01 +02:00
Mathis Beer (aider)
7c9cff2f6e refactor: update HEAD regex to accept optional closing tag in search blocks
When working with HTML text, the network has a strong bias to go "well, this is '<' followed by text, it's a tag! I should close it with '>'." Then the edit would be ignored.
2025-06-24 10:23:25 +02:00
paul-gauthier
f9fc2c6a44
Merge pull request #4260 from mtofano/add-matlab-repomap-support
2025-06-23 14:10:55 -07:00
Matthew Tofano
20429b6852 add MATLAB tags to enable repo map support 2025-06-23 21:24:52 +01:00
contributor
52d04430db feat: Auto-create parent directories for chat history files
This prevents aider startup errors like:
```
Warning: Unable to write to chat history file .chat/.aider.chat.history.md.
[Errno 2] No such file or directory: '.chat/.aider.chat.history.md'
```
2025-06-23 14:22:10 +03:00
Paul Gauthier (aider)
f16110717b fix: Ensure pip is available before installation 2025-06-20 14:23:04 -07:00
Paul Gauthier
a2d345fe3d docs: Condense docstring for think_tokens command 2025-06-20 14:08:41 -07:00
Paul Gauthier (aider)
1bdd4f0269 style: Format code 2025-06-20 14:08:09 -07:00
Paul Gauthier (aider)
9188cedc72 style: Fix line length in cmd_think_tokens docstring 2025-06-20 14:08:05 -07:00
Paul Gauthier (aider)
fdb49e18cd docs: Add Hacker News quote to Kind Words section
2025-06-20 14:01:47 -07:00
Paul Gauthier (aider)
ae927d85f0 style: Use triple quotes for docstring 2025-06-20 13:45:31 -07:00
Paul Gauthier
a1c2eeb88a style: Line wrap long string literals 2025-06-20 13:45:12 -07:00
Paul Gauthier
caf212c8d1 bump deps 2025-06-20 13:43:37 -07:00
Paul Gauthier
a29ae3ef37 Merge branch 'main' of github.com:Aider-AI/aider 2025-06-20 13:43:04 -07:00
paul-gauthier
90f9c813a0
Update analytics.md
2025-06-20 05:39:56 -07:00
paul-gauthier
9fdc6d4a44
Merge pull request #4254 from KennyDizi/main 2025-06-20 05:34:18 -07:00
Trung Dinh
bb1b9e8e2d Add meta data for openrouter/google/gemini-2.5-pro 2025-06-20 17:04:30 +07:00
Trung Dinh
0c480b7ea4 Support model openrouter/google/gemini-2.5-pro official 2025-06-20 17:04:13 +07:00
paul-gauthier
3cb120e0a9
Merge pull request #4242 from nims11/update-gemini
Update gemini models in model-settings.yml
2025-06-19 14:56:39 -07:00
Nimesh Ghelani
1677db3ca7 Add gemini model metadata 2025-06-19 12:42:26 +00:00
Nimesh Ghelani
ae94521242 Update gemini models in model-settings.yml 2025-06-18 12:46:34 +00:00
paul-gauthier
f8855ebc58
Merge pull request #4236 from daniel-sc/patch-1
2025-06-17 03:31:09 -07:00
Daniel Schreiber
262117d124
doc: add correct path for github copilot token for windows users 2025-06-17 11:39:18 +02:00
paul-gauthier
72c23800a9
Merge pull request #4228 from maliayas/reset-thinking-tokens
2025-06-15 16:52:38 -07:00
Ali Ayas (claude-sonnet-4-20250514)
e91efda8fe feat: add support for disabling thinking tokens with value 0 2025-06-16 01:13:47 +03:00
Paul Gauthier
1daeb01ff0 Merge branch 'main' of github.com:Aider-AI/aider 2025-06-13 06:15:28 -07:00
paul-gauthier
2df4beb6e9
Merge pull request #4022 from ei-grad/fix-vertex-ai
2025-06-12 09:27:20 -07:00
Andrew Grigorev
e54ac087cb fix: Vertex AI model names use vertex_ai/ prefix
Don't be confused with `llm_provider` which is `vertex_ai-language-models`
340a0453d3/litellm/__init__.py (L509)
2025-06-12 18:54:21 +03:00
paul-gauthier
17d40a62c9
Merge pull request #4210 from solatis/main
2025-06-10 19:28:54 -07:00
Leon Mergen
67e190c8d1
Adds support for openai/o3-pro 2025-06-11 08:53:20 +07:00
Paul Gauthier
5562caae0c copy
2025-06-09 06:41:04 -07:00
Paul Gauthier
d55beb5f24 fix: Adjust analytics repo file count condition 2025-06-09 06:39:11 -07:00
Paul Gauthier
df5b780d6c Merge branch 'main' of github.com:Aider-AI/aider 2025-06-09 06:38:00 -07:00
paul-gauthier
3e07d068ad
Merge pull request #4199 from holoskii/large-repo-speedup
Skip expensive `get_tracked_files` call if `skip_sanity_check_repo` is true
2025-06-09 06:37:53 -07:00
Paul Gauthier
47ddce3e1b chore: Add DeepSeek R1 benchmark results 2025-06-09 06:34:26 -07:00
Paul Gauthier
e256ffd2c6 chore: Add Gemini 2.5 Pro results to leaderboard 2025-06-09 06:28:12 -07:00
Makar Ivashko
7a83f038d8 Skip expensive get_tracked_files call if skip_sanity_check_repo is true 2025-06-09 10:09:06 +03:00
Paul Gauthier (aider)
990a0566bb fix: Remove unused mock_stdout in tests
2025-06-08 10:19:47 -07:00
Paul Gauthier (aider)
0ac6068e1d fix: Remove unused mock_stdout in test_main 2025-06-08 10:19:01 -07:00
Paul Gauthier (aider)
1953c9815a fix: Remove unused mock_stdout in tests 2025-06-08 10:17:45 -07:00
Paul Gauthier
cc8be1453f style: Add whitespace in tests 2025-06-08 10:17:28 -07:00
Paul Gauthier
789aab8417 Merge branch 'main' of github.com:Aider-AI/aider 2025-06-08 10:17:09 -07:00
Paul Gauthier
226e23f06d chore: update max tokens for deepseek-coder 2025-06-08 10:17:02 -07:00
paul-gauthier
150711d7e0
Merge pull request #4182 from wietsevenema/main
fix: Correct Vertex AI model name prefixes in settings
2025-06-08 10:16:28 -07:00
paul-gauthier
8d48def24d
Merge pull request #4193 from tanavamsikrishna/main
2025-06-08 07:11:31 -07:00
Vamsi Talupula
4c50fc6d69 Let 'rich' use code_theme as inline_code_theme 2025-06-08 17:30:12 +05:30
Wietse Venema
e67c9327fd
fix: Correct Vertex AI model name prefixes in settings 2025-06-07 14:31:00 +02:00
paul-gauthier
837b8a93e9
Merge pull request #3609 from omarcinkonis/main
2025-06-07 05:16:45 -07:00
Paul Gauthier
4c161f9e12 build: Pin networkx to <3.5 for py3.10 compatibility
2025-06-06 09:43:25 -07:00
Paul Gauthier
f827f22f7a cleanup 2025-06-06 09:31:34 -07:00
Paul Gauthier
3064477cb8 bump deps 2025-06-06 09:30:56 -07:00
Paul Gauthier (aider)
a7ccdaf279 feat: Add thinking_tokens setting to Gemini 2.5 Pro 06-05 models 2025-06-06 09:29:13 -07:00
Paul Gauthier
2bc71cfd71 copy 2025-06-06 09:28:24 -07:00
Paul Gauthier
6b7a0565e9 Merge branch 'main' of github.com:Aider-AI/aider 2025-06-06 09:28:12 -07:00
Paul Gauthier (aider)
8c1ae95f87 chore: Add metadata for gemini-2.5-pro-preview-06-05 2025-06-06 09:27:54 -07:00
Paul Gauthier (aider)
c0509add21 chore: Update gemini alias to use 06-05 model 2025-06-06 06:54:19 -07:00
Paul Gauthier (aider)
77472e5913 feat: Add gemini-2.5-pro-preview-06-05 model settings 2025-06-06 06:53:57 -07:00
paul-gauthier
836aaece4f
Merge pull request #4172 from jesstelford/patch-1
2025-06-05 07:51:45 -07:00
Jess Telford
c4fcc5ad70
VSCode Copilot no longer writes out tokens 2025-06-05 16:55:21 +10:00
Paul Gauthier
b259226770 lint
2025-06-03 09:27:57 -07:00
Paul Gauthier
db0f7d158d Merge branch 'main' of github.com:Aider-AI/aider 2025-06-03 09:27:42 -07:00
paul-gauthier
b9a9b4cf61
Merge pull request #4163 from vinnymac/vt/copilot-token-debug 2025-06-03 09:22:51 -07:00
Vincent Taverna
29874f1222 feat: validation and errors for copilot requests 2025-06-03 11:50:37 -04:00
paul-gauthier
7897d027d4
Merge pull request #4161 from lreeves/main
Use system prompt prefix for commit messages
2025-06-03 08:13:57 -07:00
Luke Reeves
09b2d49f11 Use system prompt prefix for commit messages
I've been using Qwen3 with reasoning disabled via a /no_think in the
system prompt prefix. I found that the commit message generation was
ignoring this setting. This change updates the commit message generation
loop to incorporate that setting if defined.
2025-06-03 10:30:42 -04:00
paul-gauthier
295122fc97
Merge pull request #4057 from emmanuel-ferdman/main
2025-06-02 14:03:17 -07:00
paul-gauthier
fa0aa9459b
Merge pull request #4156 from stackbuilders/always_pass_extra_headers_to_copilot_models
2025-06-02 07:06:10 -07:00
Sebastian Estrella
c67f6905a5 fix: Always pass extra_headers to Copilot models 2025-06-02 08:39:48 -05:00
paul-gauthier
3266eaca91
Merge pull request #4150 from muravvv/fix_encoding
Fix issues on repositories with non-Unicode encodings
2025-06-01 12:46:17 -07:00
muravvv
bfaad12cac add missing encoding conversion for diff contents 2025-06-01 22:31:43 +03:00
muravvv
395188043b set fixed utf-8 encoding for llm history log 2025-06-01 20:15:51 +03:00
paul-gauthier
484b8a3603
Merge pull request #4146 from ktakayama/issues/4049-commit-language
2025-06-01 05:20:22 -07:00
Kyosuke Takayama
6eaf75f760
test: add test for commit-language option 2025-06-01 19:29:28 +09:00
Kyosuke Takayama
91f34e37f7
docs: add commit-language option to config and documentation files 2025-06-01 19:09:39 +09:00
Kyosuke Takayama
7ffd9c1859
feat: add commit language option for commit message localization 2025-06-01 18:52:38 +09:00
Paul Gauthier (aider)
0bb0f169d2 docs: Add link to release notes
2025-05-30 17:11:21 -07:00
Paul Gauthier
45ad3cdf47 copy 2025-05-30 16:32:21 -07:00
Paul Gauthier
fc30409f74 blame 2025-05-30 16:31:26 -07:00
Paul Gauthier
6d872b6dc0 copy 2025-05-30 16:30:10 -07:00
Paul Gauthier
6fdc956b9e set version to 0.84.1.dev 2025-05-30 16:27:25 -07:00
Paul Gauthier
196721d27d version bump to 0.84.0 2025-05-30 16:27:24 -07:00
Paul Gauthier (aider)
e331a967a6 fix: Update expected OpenRouter costs in tests 2025-05-30 15:57:45 -07:00
Paul Gauthier (aider)
48376e59c2 style: Apply formatting 2025-05-30 15:07:32 -07:00
Paul Gauthier (aider)
52510c7da5 test: Update OpenRouter default model expectations 2025-05-30 15:07:29 -07:00
Paul Gauthier
c24798c44f Merge branch 'main' of github.com:Aider-AI/aider 2025-05-30 14:46:36 -07:00
Paul Gauthier
6085be5883 copy 2025-05-30 14:46:28 -07:00
Paul Gauthier (aider)
05c56fe904 fix: Fix OpenRouter token cost calculation 2025-05-30 14:33:34 -07:00
Paul Gauthier
a7afbd0708 feat: Add claude-opus-4 and update default OpenRouter models 2025-05-30 14:30:10 -07:00
Paul Gauthier (aider)
3f2c403cf0 feat: Add openrouter/anthropic/claude-opus-4 model config 2025-05-30 14:26:50 -07:00
Paul Gauthier
d7504bed21 copy 2025-05-30 14:25:35 -07:00
paul-gauthier
119a44debe
Merge pull request #4114 from therealmarv/increase-deepseek-v3-openrouter-context
increase context window of Deepseek V3 to new OpenRouter limits
2025-05-27 13:33:42 -07:00
therealmarv
87dee0a5f2 reduce output tokens again 2025-05-27 23:18:45 +03:00
therealmarv
1d0e463d83 increase context window of Deepseek V3 to new OpenRouter limits 2025-05-27 23:02:11 +03:00
Paul Gauthier
8304029b92 lint 2025-05-26 16:29:28 -07:00
Paul Gauthier
ef2986a231 Merge branch 'main' of github.com:Aider-AI/aider 2025-05-26 16:29:14 -07:00
Paul Gauthier
b79a777936 copy 2025-05-26 16:29:10 -07:00
Paul Gauthier
9c9eedd9c5 chore: Update polyglot leaderboard data for gemini-2.5-flash 2025-05-26 12:18:22 -07:00
paul-gauthier
ebaad9d865
Merge pull request #4092 from noitcudni/main 2025-05-26 10:43:53 -07:00
paul-gauthier
d922023815
Merge pull request #4096 from KennyDizi/main 2025-05-26 10:41:33 -07:00
Paul Gauthier
acebc11237 chore: Update model names in polyglot leaderboard 2025-05-26 08:56:35 -07:00
Paul Gauthier
214b811ef9 chore: Add new polyglot benchmark results 2025-05-26 08:56:01 -07:00
Trung Dinh
de9df51b47 Change max_tokens to 32000 for claude-sonnet-4-20250514 family model 2025-05-26 11:17:50 +07:00
Paul Gauthier
3194a35230 feat: Add Gemini 2.5 Flash model and leaderboard data 2025-05-25 16:00:07 -07:00
Paul Gauthier
a8568c3c4f build: Update website source path in Dockerfile 2025-05-25 15:30:30 -07:00
Paul Gauthier (aider)
114ec42563 feat: Add Claude Opus 4 provider variants to model-settings.yml 2025-05-25 15:03:41 -07:00
Paul Gauthier (aider)
f7df96d224 feat: Add missing Sonnet 4 models to settings 2025-05-25 15:01:43 -07:00
Paul Gauthier
79edb0e1e0 added opus polyglot 2025-05-25 14:57:49 -07:00
Lih Chen
5a0951caaf Refresh Github's open api key automatically 2025-05-25 14:54:03 -07:00
Paul Gauthier (aider)
6b2bcf651e fix: Update expected model name for 'opus' alias test 2025-05-25 13:15:15 -07:00
Paul Gauthier (aider)
fea0ff189f test: Update sonnet model alias test 2025-05-25 13:14:47 -07:00
Paul Gauthier
803a8db60c noop 2025-05-25 12:52:47 -07:00
Paul Gauthier
414b4e3882 feat: Add new Claude models and update aliases 2025-05-25 12:51:45 -07:00
Paul Gauthier (aider)
a17599152f feat: Add Claude Sonnet 4 settings for multiple providers 2025-05-25 12:49:51 -07:00
Paul Gauthier
7b9d8e6ba7 feat: add claude-opus-4-20250514 model settings 2025-05-25 12:49:47 -07:00
Paul Gauthier
9ef3211365 proper pin for configargparse 2025-05-24 15:44:31 -07:00
Paul Gauthier
d9bf69041c Revert "bump deps"
This reverts commit ef3f8bb301.
2025-05-24 15:43:50 -07:00
Paul Gauthier
e3cb907767 claude-sonnet-4-20250514 ex-user 2025-05-24 15:06:57 -07:00
Paul Gauthier
ef3f8bb301 bump deps 2025-05-24 13:33:19 -07:00
Paul Gauthier
03a489ea35 set version to 0.83.3.dev 2025-05-23 16:02:30 -07:00
Paul Gauthier
81389b87d7 version bump to 0.83.2 2025-05-23 16:02:26 -07:00
Paul Gauthier
0d8ff295d6 copy 2025-05-23 15:48:36 -07:00
Paul Gauthier
6176a8dee3 Patch yanked configargparse 1.7 #4072 2025-05-23 15:24:03 -07:00
Paul Gauthier
299e6ae7a2 Revert "bump deps"
This reverts commit cb88b7e62a.
2025-05-23 15:20:40 -07:00
Paul Gauthier
0b1d49d630 Revert "drop pin of torch"
This reverts commit 2b9e669930.
2025-05-23 15:20:27 -07:00
Paul Gauthier
037a36edba Revert "unpin llama-index-core"
This reverts commit 66bc9cf292.
2025-05-23 15:20:13 -07:00
Paul Gauthier
66bc9cf292 unpin llama-index-core 2025-05-23 14:59:09 -07:00
Paul Gauthier
2b9e669930 drop pin of torch 2025-05-23 14:42:09 -07:00
Paul Gauthier
cb88b7e62a bump deps 2025-05-23 14:29:57 -07:00
Paul Gauthier
4e9943f2aa copy 2025-05-23 14:29:42 -07:00
Emmanuel Ferdman
999c292482
fix: display marker on error
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2025-05-21 17:15:45 -07:00
Paul Gauthier
9f5018e89e YAML -> yaml
2025-05-21 09:58:54 -07:00
Garrett Hopper
59dbce8575 Add Clojure repomap queries 2025-05-15 12:39:23 -05:00
Paul Gauthier (aider)
3caab85931 fix: Mark unused variable in test
2025-05-13 13:44:59 -07:00
Paul Gauthier
756372809e test: Update tests for git integration 2025-05-13 13:44:23 -07:00
Paul Gauthier (aider)
6aa05ab11c style: Format test file 2025-05-13 13:42:04 -07:00
Paul Gauthier (aider)
9cf373039e test: Add test for skipping gitignored files on init 2025-05-13 13:41:58 -07:00
Paul Gauthier (aider)
bc1272f029 fix: base coder not ignoring gitignore if --file is used. 2025-05-13 13:38:18 -07:00
Paul Gauthier
0049e78250 Merge branch 'main' of github.com:Aider-AI/aider 2025-05-12 13:20:59 -07:00
paul-gauthier
56b45ce1d3
Merge pull request #4008 from facelezzzz/main
fix: Fix #3987 Pass the coder object to repo.commit
2025-05-12 09:44:39 -07:00
wangboxue
bdd67eb229 fix: Fix #3987 Pass the coder object to repo.commit 2025-05-12 11:56:39 +08:00
Paul Gauthier (aider)
57020a2d5e test: Assert specific stderr messages for invalid edit format
2025-05-11 08:16:08 -07:00
Paul Gauthier (aider)
6b9045a2a2 fix: Fix F841 unused variable in test 2025-05-11 08:15:19 -07:00
Paul Gauthier (aider)
5f24a0013a test: Fix invalid edit format test assertion 2025-05-11 08:15:03 -07:00
Paul Gauthier (aider)
b79052501d style: Shorten comment to fix E501 2025-05-11 08:13:27 -07:00
Paul Gauthier (aider)
9e0d7d9c46 style: Fix code style in test 2025-05-11 08:13:18 -07:00
Paul Gauthier (aider)
a53ab7d937 fix: Correct test for invalid --edit-format argument 2025-05-11 08:13:11 -07:00
Paul Gauthier
c055602c6f Merge branch 'main' into completions 2025-05-11 08:01:15 -07:00
Paul Gauthier
170e8fc9a1 Merge branch 'main' of github.com:Aider-AI/aider 2025-05-11 08:01:03 -07:00
paul-gauthier
ee177054b8
Merge pull request #3993 from savioursho/file-path-completion
feat: Enable file completion for all file-related arguments
2025-05-11 08:00:49 -07:00
Paul Gauthier
f018b5fab5 include pip in uv installs 2025-05-11 07:56:55 -07:00
savioursho
5a29ba03dc
Merge branch 'Aider-AI:main' into file-path-completion 2025-05-11 20:06:30 +09:00
Paul Gauthier (aider)
035d99d3d3 fix: Remove unused import Coder 2025-05-10 19:45:04 -07:00
Paul Gauthier (aider)
702eff1033 refactor: Get edit format choices from coder classes 2025-05-10 19:44:44 -07:00
Paul Gauthier (aider)
97f3885357 chore: Apply linter rules 2025-05-10 19:42:28 -07:00
Paul Gauthier (aider)
f8653613bc feat: Add shell completion for edit format options 2025-05-10 19:42:22 -07:00
Paul Gauthier (aider)
b1d47c47d9 chore: Add shtab file completions to args 2025-05-10 17:56:29 -07:00
Paul Gauthier
2c4a126093 copy
2025-05-10 17:13:36 -07:00
Paul Gauthier (aider)
cdd1546243 docs: Improve GitHub Copilot connection docs tone and structure 2025-05-10 17:12:55 -07:00
Paul Gauthier
6a3bb0f4ec docs: Remove closing line and clarify Copilot billing 2025-05-10 17:12:52 -07:00
Paul Gauthier (aider)
24c0fbd326 docs: Add doc page for GitHub Copilot model access 2025-05-10 17:10:25 -07:00
Paul Gauthier
7b9eae117f docs: Add GitHub LLM docs 2025-05-10 17:10:23 -07:00
Paul Gauthier (aider)
512b4d891b chore: Remove unused imports in test_openrouter.py
2025-05-10 12:57:14 -07:00
Paul Gauthier (aider)
a6b0f43dce chore: Run linter 2025-05-10 12:56:13 -07:00
Paul Gauthier (aider)
e8d9ae9a1f test: add tests for OpenRouter model info handling 2025-05-10 12:56:08 -07:00
Paul Gauthier
2ab0074915 test: Add OpenRouter tests 2025-05-10 12:56:06 -07:00
Paul Gauthier (aider)
225e01717c feat: Show active model metadata in /settings command 2025-05-10 12:54:11 -07:00
Paul Gauthier (aider)
4d39b88110 style: Reorder imports in models.py 2025-05-10 12:50:41 -07:00
Paul Gauthier (aider)
5052150e2e feat: Add local cache for OpenRouter models 2025-05-10 12:50:34 -07:00
Paul Gauthier
d8fbd9cbd3 feat: Add OpenRouter API support 2025-05-10 12:50:32 -07:00
Paul Gauthier
53cda2cc10 set version to 0.83.2.dev 2025-05-10 12:36:47 -07:00
Paul Gauthier
543e5570ae version bump to 0.83.1 2025-05-10 12:36:45 -07:00
Paul Gauthier
62c7e15a36 copy 2025-05-10 12:33:40 -07:00
Paul Gauthier
17a2773a22 refactor: Validate locale language result 2025-05-10 12:28:51 -07:00
Paul Gauthier (aider)
b8758ca791 test: Fix mock for babel.Locale in test_normalize_language 2025-05-10 11:55:03 -07:00
Paul Gauthier (aider)
bf9522a2fb style: Format test file 2025-05-10 11:53:08 -07:00
Paul Gauthier (aider)
ddc8621d6e fix: Correctly normalize hyphenated language codes 2025-05-10 11:53:00 -07:00
Paul Gauthier (aider)
7875de078a fix: Remove unused import/var and fix line length in test 2025-05-10 11:46:49 -07:00
Paul Gauthier (aider)
ea1189b8ec style: Format test_coder.py 2025-05-10 11:46:27 -07:00
Paul Gauthier (aider)
1127b8b559 test: Add tests for user language detection and normalization 2025-05-10 11:46:17 -07:00
Paul Gauthier
64f218a06e ask prompt 2025-05-10 11:43:37 -07:00
Paul Gauthier (aider)
efde8e867e fix: Prevent "Reply in C." instruction for C/POSIX locales 2025-05-10 11:43:35 -07:00
Paul Gauthier
f815f0377e copy
2025-05-10 07:52:39 -07:00
Paul Gauthier (aider)
883aa9e03d docs: Include platform in testimonial link text 2025-05-10 07:45:43 -07:00
Paul Gauthier (aider)
2a410fab81 docs: Add platform/community to kind words in README 2025-05-10 07:24:33 -07:00
Paul Gauthier
34409311a3 chore: Adjust spinner text and spinner timing 2025-05-10 07:21:21 -07:00
Paul Gauthier (aider)
97379aa02f fix: Update Spinner import path 2025-05-10 07:11:08 -07:00
Paul Gauthier (aider)
ee4e9c9711 refactor: Remove unused time import 2025-05-10 07:10:28 -07:00
Paul Gauthier (aider)
7d3c817664 style: apply code formatting 2025-05-10 07:10:19 -07:00
Paul Gauthier (aider)
8c755bf032 refactor: move Spinner class to waiting module 2025-05-10 07:10:12 -07:00
saviour
0b112e948f feat: Enable file completion for all file-related arguments 2025-05-10 21:38:08 +09:00
Paul Gauthier (aider)
c11d21a230 style: apply linter fixes 2025-05-09 18:08:04 -07:00
Paul Gauthier (aider)
a9cb1a9d61 chore: Move commit message spinner into model loop and show model name 2025-05-09 18:07:58 -07:00
Paul Gauthier (aider)
43cd0164e0 style: Apply linter formatting 2025-05-09 18:07:05 -07:00
Paul Gauthier (aider)
49b3f85cc5 feat: Add spinner when generating commit message 2025-05-09 18:07:00 -07:00
Paul Gauthier
3daf7d4df3 copy
2025-05-09 18:04:04 -07:00
Paul Gauthier
3dcb23c193 qwen3 whole official api 2025-05-09 18:03:17 -07:00
Paul Gauthier
cad31b638b copy 2025-05-09 15:57:04 -07:00
Paul Gauthier
7fbe0d25f5 Merge branch 'main' into qwen3 2025-05-09 15:51:55 -07:00
Paul Gauthier
637a31e083 blame
2025-05-09 15:51:13 -07:00
Paul Gauthier
09880ee8f4 qwen3 official 2025-05-09 15:15:41 -07:00
Vasil Markoukin (aider)
98ee78edf0 feat: add configurable PostHog host and API key parameters 2025-04-30 14:46:54 +03:00
jayesh thakare
23c9d9c34d
Merge branch 'Aider-AI:main' into main 2025-04-10 02:35:41 +05:30
jayeshthk
188e9e1114 Optimize head‑truncation loop in summarize_real 2025-04-10 02:29:40 +05:30
omarcinkonis
3447f06208 feat: Add --add-gitignore-files flag 2025-03-23 01:15:16 +02:00
99 changed files with 7786 additions and 2919 deletions

.gitignore

@@ -15,4 +15,5 @@ aider/_version.py
.venv/
.#*
.gitattributes
-tmp.benchmarks/
+tmp.benchmarks/
+.docker_bash_history


@@ -1,6 +1,96 @@
# Release history
### main branch
### Aider v0.86.0
- Expanded GPT-5 model support across family variants and providers (OpenAI, Azure, OpenRouter), including dated and chat/mini/nano variants.
- Aider wrote 88% of the code in this release.
### Aider v0.85.5
- Enforced diff edit format for GPT-5 models.
- Added support for the reasoning_effort setting for GPT-5 models.
- Fixed model detection to correctly apply GPT-5 settings to versioned names (gpt-5 and gpt-5-2025-08-07).
### Aider v0.85.4
- Added support for openai/gpt-5
- Fixed analytics to support the latest PostHog SDK event-capture API.
- Disabled temperature when using GPT-5 models for more deterministic outputs.
### Aider v0.85.3
- Bumped dependencies to pick up latest litellm==1.75.0.
### Aider v0.85.2
- Added support for Grok-4 via `xai/grok-4` and `openrouter/x-ai/grok-4` model names.
- Added support for `gemini/gemini-2.5-flash-lite-preview-06-17` model, by Tamir Zahavi-Brunner.
- `/clear` now prints “All chat history cleared.” so you know it worked, by Zexin Yuan.
- `/undo` output now shows only the first line of each commit message, making it easier to read.
- Fixed an issue where new settings for an existing model didn't replace the old ones, by Andrew Grigorev.
- Added support for `openrouter/moonshotai/kimi-k2` model, by Jack Harrington.
### Aider v0.85.1
- Display model announcements with no-arg `/model` command.
### Aider v0.85.0
- Support for Responses API models like o1-pro, o3-pro.
- Updated pricing for o3.
- Added support for new Gemini models including `gemini-2.5-pro`, `gemini-2.5-flash`, and `gemini-2.5-pro-preview-06-05` with thinking tokens support.
- Updated model aliases: `flash` now points to `gemini-2.5-flash` and `gemini` now points to `gemini-2.5-pro`.
- Added `--add-gitignore-files` flag to enable adding files listed in .gitignore to Aider's editing scope, by omarcinkonis.
- Added `--commit-language` option to specify the language for commit messages, by Kyosuke Takayama.
- Enhanced thinking tokens support: can now be disabled by setting to 0, and improved help text with examples.
- Added MATLAB language support for repository maps, by Matthew Tofano.
- Added support for OpenAI o3-pro model across multiple providers.
- Improved GitHub Copilot token handling with better validation and error messages, by Vincent Taverna and Sebastian Estrella.
- Fixed encoding issues in git diff output and LLM history logging.
- Enhanced commit message generation to use system prompt prefixes, by Luke Reeves.
- Improved inline code rendering in Rich markdown output, by Vamsi Talupula.
- Fixed Vertex AI model name prefixes in settings, by Wietse Venema.
- Improved `/read-only` command to resolve literal paths correctly, by Matteo Landi.
- Skip expensive file tracking operations when `--skip-sanity-check-repo` is enabled for better performance, by Makar Ivashko.
- Ensure pip is available before package installation.
- Auto-create parent directories for chat history files to prevent startup errors, by contributor.
- Fixed search block regex to accept optional closing tags when working with HTML content, by Mathis Beer.
- Co-authored-by attribution is now enabled by default for commit messages.
- Added Clojure language support for repository maps, by Garrett Hopper.
- Added custom PostHog analytics configuration options with `--analytics-posthog-host` and `--analytics-posthog-project-api-key` flags, by Vasil Markoukin.
- Optimized chat history summarization performance, by jayeshthk.
- Improved kebab-case identifier recognition in repository maps for better code analysis.
- Increased max tokens for Deepseek models to 65536 for better performance.
- Aider wrote 21% of the code in this release.
### Aider v0.84.0
- Added support for new Claude models including the Sonnet 4 and Opus 4 series (e.g., `claude-sonnet-4-20250514`,
`claude-opus-4-20250514`) across various providers. The default `sonnet` and `opus` aliases were updated to these newer
versions.
- Added support for the `vertex_ai/gemini-2.5-flash-preview-05-20` model.
- Fixed OpenRouter token cost calculation for improved accuracy.
- Updated default OpenRouter models during onboarding to `deepseek/deepseek-r1:free` for the free tier and
`anthropic/claude-sonnet-4` for paid tiers.
- Automatically refresh GitHub Copilot tokens when used as OpenAI API keys, by Lih Chen.
- Aider wrote 79% of the code in this release.
### Aider v0.83.2
- Bumped configargparse to 1.7.1 as 1.7 was pulled.
- Added shell tab completion for file path arguments (by saviour) and for `--edit-format`/`--editor-edit-format` options.
- Improved OpenRouter model metadata handling by introducing a local cache, increasing reliability and performance.
- The `/settings` command now displays detailed metadata for active main, editor, and weak models.
- Fixed an issue where files explicitly added via the command line were not correctly ignored if listed in `.gitignore`.
- Improved automatic commit messages by providing more context during their generation, by wangboxue.
### Aider v0.83.1
- Improved user language detection by correctly normalizing hyphenated language codes (e.g., `en-US` to `en`) and enhancing the validation of locale results.
- Prevented Aider from instructing the LLM to reply in 'C' or 'POSIX' when these are detected as the system locale.
- Displayed a spinner with the model name when generating commit messages.
### Aider v0.83.0
- Added support for `gemini-2.5-pro-preview-05-06` models.
- Added support for `qwen3-235b` models.
@@ -404,7 +494,7 @@
- [Aider works with LLM web chat UIs](https://aider.chat/docs/usage/copypaste.html).
- New `--copy-paste` mode.
- New `/copy-context` command.
-- [Set API keys and other environment variables for all providers from command line or yaml conf file](https://aider.chat/docs/config/aider_conf.html#storing-llm-keys).
+- [Set API keys and other environment variables for all providers from command line or YAML conf file](https://aider.chat/docs/config/aider_conf.html#storing-llm-keys).
- New `--api-key provider=key` setting.
- New `--set-env VAR=value` setting.
- Added bash and zsh support to `--watch-files`.
@@ -572,7 +662,7 @@
### Aider v0.59.1
-- Check for obsolete `yes: true` in yaml config, show helpful error.
+- Check for obsolete `yes: true` in YAML config, show helpful error.
- Model settings for openrouter/anthropic/claude-3.5-sonnet:beta
### Aider v0.59.0
@@ -582,7 +672,7 @@
- Still auto-completes the full paths of the repo files like `/add`.
- Now supports globs like `src/**/*.py`
- Renamed `--yes` to `--yes-always`.
-- Now uses `AIDER_YES_ALWAYS` env var and `yes-always:` yaml key.
+- Now uses `AIDER_YES_ALWAYS` env var and `yes-always:` YAML key.
- Existing YAML and .env files will need to be updated.
- Can still abbreviate to `--yes` on the command line.
- Config file now uses standard YAML list syntax with ` - list entries`, one per line.
@@ -789,7 +879,7 @@
- Use `--map-refresh <always|files|manual|auto>` to configure.
- Improved cost estimate logic for caching.
- Improved editing performance on Jupyter Notebook `.ipynb` files.
-- Show which config yaml file is loaded with `--verbose`.
+- Show which config YAML file is loaded with `--verbose`.
- Bumped dependency versions.
- Bugfix: properly load `.aider.models.metadata.json` data.
- Bugfix: Using `--msg /ask ...` caused an exception.


@@ -27,13 +27,13 @@ cog.out(text)
<a href="https://github.com/Aider-AI/aider/stargazers"><img alt="GitHub Stars" title="Total number of GitHub stars the Aider project has received"
src="https://img.shields.io/github/stars/Aider-AI/aider?style=flat-square&logo=github&color=f1c40f&labelColor=555555"/></a>
<a href="https://pypi.org/project/aider-chat/"><img alt="PyPI Downloads" title="Total number of installations via pip from PyPI"
-src="https://img.shields.io/badge/📦%20Installs-2.2M-2ecc71?style=flat-square&labelColor=555555"/></a>
+src="https://img.shields.io/badge/📦%20Installs-4.1M-2ecc71?style=flat-square&labelColor=555555"/></a>
<img alt="Tokens per week" title="Number of tokens processed weekly by Aider users"
src="https://img.shields.io/badge/📈%20Tokens%2Fweek-15B-3498db?style=flat-square&labelColor=555555"/>
<a href="https://openrouter.ai/#options-menu"><img alt="OpenRouter Ranking" title="Aider's ranking among applications on the OpenRouter platform"
src="https://img.shields.io/badge/🏆%20OpenRouter-Top%2020-9b59b6?style=flat-square&labelColor=555555"/></a>
<a href="https://aider.chat/HISTORY.html"><img alt="Singularity" title="Percentage of the new code in Aider's last release written by Aider itself"
-src="https://img.shields.io/badge/🔄%20Singularity-92%25-e74c3c?style=flat-square&labelColor=555555"/></a>
+src="https://img.shields.io/badge/🔄%20Singularity-88%25-e74c3c?style=flat-square&labelColor=555555"/></a>
<!--[[[end]]]-->
</p>
@@ -136,43 +136,45 @@ See the [installation instructions](https://aider.chat/docs/install.html) and [u
- [LLM Leaderboards](https://aider.chat/docs/leaderboards/)
- [GitHub Repository](https://github.com/Aider-AI/aider)
- [Discord Community](https://discord.gg/Y7X7bhMQFV)
- [Release notes](https://aider.chat/HISTORY.html)
- [Blog](https://aider.chat/blog/)
## Kind Words From Users
-- *"My life has changed... There's finally an AI coding tool that's good enough to keep up with me... Aider... It's going to rock your world."* — [Eric S. Raymond](https://x.com/esrtweet/status/1910809356381413593)
-- *"The best free open source AI coding assistant."* — [IndyDevDan](https://youtu.be/YALpX8oOn78)
-- *"The best AI coding assistant so far."* — [Matthew Berman](https://www.youtube.com/watch?v=df8afeb1FY8)
-- *"Aider ... has easily quadrupled my coding productivity."* — [SOLAR_FIELDS](https://news.ycombinator.com/item?id=36212100)
-- *"It's a cool workflow... Aider's ergonomics are perfect for me."* — [qup](https://news.ycombinator.com/item?id=38185326)
-- *"It's really like having your senior developer live right in your Git repo - truly amazing!"* — [rappster](https://github.com/Aider-AI/aider/issues/124)
-- *"What an amazing tool. It's incredible."* — [valyagolev](https://github.com/Aider-AI/aider/issues/6#issue-1722897858)
-- *"Aider is such an astounding thing!"* — [cgrothaus](https://github.com/Aider-AI/aider/issues/82#issuecomment-1631876700)
-- *"It was WAY faster than I would be getting off the ground and making the first few working versions."* — [Daniel Feldman](https://twitter.com/d_feldman/status/1662295077387923456)
-- *"THANK YOU for Aider! It really feels like a glimpse into the future of coding."* — [derwiki](https://news.ycombinator.com/item?id=38205643)
-- *"It's just amazing. It is freeing me to do things I felt were out my comfort zone before."* — [Dougie](https://discord.com/channels/1131200896827654144/1174002618058678323/1174084556257775656)
-- *"This project is stellar."* — [funkytaco](https://github.com/Aider-AI/aider/issues/112#issuecomment-1637429008)
-- *"Amazing project, definitely the best AI coding assistant I've used."* — [joshuavial](https://github.com/Aider-AI/aider/issues/84)
-- *"I absolutely love using Aider ... It makes software development feel so much lighter as an experience."* — [principalideal0](https://discord.com/channels/1131200896827654144/1133421607499595858/1229689636012691468)
-- *"I have been recovering from multiple shoulder surgeries ... and have used aider extensively. It has allowed me to continue productivity."* — [codeninja](https://www.reddit.com/r/OpenAI/s/nmNwkHy1zG)
-- *"I am an aider addict. I'm getting so much more work done, but in less time."* — [dandandan](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470)
-- *"After wasting $100 on tokens trying to find something better, I'm back to Aider. It blows everything else out of the water hands down, there's no competition whatsoever."* — [SystemSculpt](https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548)
-- *"Aider is amazing, coupled with Sonnet 3.5 it's quite mind blowing."* — [Josh Dingus](https://discord.com/channels/1131200896827654144/1133060684540813372/1262374225298198548)
-- *"Hands down, this is the best AI coding assistant tool so far."* — [IndyDevDan](https://www.youtube.com/watch?v=MPYFPvxfGZs)
-- *"[Aider] changed my daily coding workflows. It's mind-blowing how a single Python application can change your life."* — [maledorak](https://discord.com/channels/1131200896827654144/1131200896827654149/1258453375620747264)
-- *"Best agent for actual dev work in existing codebases."* — [Nick Dobos](https://twitter.com/NickADobos/status/1690408967963652097?s=20)
-- *"One of my favorite pieces of software. Blazing trails on new paradigms!"* — [Chris Wall](https://x.com/chris65536/status/1905053299251798432)
-- *"Aider has been revolutionary for me and my work."* — [Starry Hope](https://x.com/starryhopeblog/status/1904985812137132056)
-- *"Try aider! One of the best ways to vibe code."* — [Chris Wall](https://x.com/Chris65536/status/1905053418961391929)
-- *"Aider is hands down the best. And it's free and opensource."* — [AriyaSavakaLurker](https://www.reddit.com/r/ChatGPTCoding/comments/1ik16y6/whats_your_take_on_aider/mbip39n/)
-- *"Aider is also my best friend."* — [jzn21](https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27dcnb/)
-- *"Try Aider, it's worth it."* — [jorgejhms](https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27cp99/)
-- *"I like aider :)"* — [Chenwei Cui](https://x.com/ccui42/status/1904965344999145698)
-- *"Aider is the precision tool of LLM code gen... Minimal, thoughtful and capable of surgical changes to your codebase all while keeping the developer in control."* — [Reilly Sweetland](https://x.com/rsweetland/status/1904963807237259586)
-- *"Cannot believe aider vibe coded a 650 LOC feature across service and cli today in 1 shot."* - [autopoietist](https://discord.com/channels/1131200896827654144/1131200896827654149/1355675042259796101)
-- *"Oh no the secret is out! Yes, Aider is the best coding tool around. I highly, highly recommend it to anyone."* — [Joshua D Vander Hook](https://x.com/jodavaho/status/1911154899057795218)
-- *"thanks to aider, i have started and finished three personal projects within the last two days"* — [joseph stalzyn](https://x.com/anitaheeder/status/1908338609645904160)
-- *"Been using aider as my daily driver for over a year ... I absolutely love the tool, like beyond words."* — [koleok](https://discord.com/channels/1131200896827654144/1273248471394291754/1356727448372252783)
-- *"Aider ... is the tool to benchmark against."* — [BeetleB](https://news.ycombinator.com/item?id=43930201)
-- *"aider is really cool"* — [kache (@yacineMTB)](https://x.com/yacineMTB/status/1911224442430124387)
- *"My life has changed... Aider... It's going to rock your world."* — [Eric S. Raymond on X](https://x.com/esrtweet/status/1910809356381413593)
- *"The best free open source AI coding assistant."* — [IndyDevDan on YouTube](https://youtu.be/YALpX8oOn78)
- *"The best AI coding assistant so far."* — [Matthew Berman on YouTube](https://www.youtube.com/watch?v=df8afeb1FY8)
- *"Aider ... has easily quadrupled my coding productivity."* — [SOLAR_FIELDS on Hacker News](https://news.ycombinator.com/item?id=36212100)
- *"It's a cool workflow... Aider's ergonomics are perfect for me."* — [qup on Hacker News](https://news.ycombinator.com/item?id=38185326)
- *"It's really like having your senior developer live right in your Git repo - truly amazing!"* — [rappster on GitHub](https://github.com/Aider-AI/aider/issues/124)
- *"What an amazing tool. It's incredible."* — [valyagolev on GitHub](https://github.com/Aider-AI/aider/issues/6#issue-1722897858)
- *"Aider is such an astounding thing!"* — [cgrothaus on GitHub](https://github.com/Aider-AI/aider/issues/82#issuecomment-1631876700)
- *"It was WAY faster than I would be getting off the ground and making the first few working versions."* — [Daniel Feldman on X](https://twitter.com/d_feldman/status/1662295077387923456)
- *"THANK YOU for Aider! It really feels like a glimpse into the future of coding."* — [derwiki on Hacker News](https://news.ycombinator.com/item?id=38205643)
- *"It's just amazing. It is freeing me to do things I felt were out my comfort zone before."* — [Dougie on Discord](https://discord.com/channels/1131200896827654144/1174002618058678323/1174084556257775656)
- *"This project is stellar."* — [funkytaco on GitHub](https://github.com/Aider-AI/aider/issues/112#issuecomment-1637429008)
- *"Amazing project, definitely the best AI coding assistant I've used."* — [joshuavial on GitHub](https://github.com/Aider-AI/aider/issues/84)
- *"I absolutely love using Aider ... It makes software development feel so much lighter as an experience."* — [principalideal0 on Discord](https://discord.com/channels/1131200896827654144/1133421607499595858/1229689636012691468)
- *"I have been recovering from ... surgeries ... aider ... has allowed me to continue productivity."* — [codeninja on Reddit](https://www.reddit.com/r/OpenAI/s/nmNwkHy1zG)
- *"I am an aider addict. I'm getting so much more work done, but in less time."* — [dandandan on Discord](https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470)
- *"Aider... blows everything else out of the water hands down, there's no competition whatsoever."* — [SystemSculpt on Discord](https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548)
- *"Aider is amazing, coupled with Sonnet 3.5 it's quite mind blowing."* — [Josh Dingus on Discord](https://discord.com/channels/1131200896827654144/1133060684540813372/1262374225298198548)
- *"Hands down, this is the best AI coding assistant tool so far."* — [IndyDevDan on YouTube](https://www.youtube.com/watch?v=MPYFPvxfGZs)
- *"[Aider] changed my daily coding workflows. It's mind-blowing how ...(it)... can change your life."* — [maledorak on Discord](https://discord.com/channels/1131200896827654144/1131200896827654149/1258453375620747264)
- *"Best agent for actual dev work in existing codebases."* — [Nick Dobos on X](https://twitter.com/NickADobos/status/1690408967963652097?s=20)
- *"One of my favorite pieces of software. Blazing trails on new paradigms!"* — [Chris Wall on X](https://x.com/chris65536/status/1905053299251798432)
- *"Aider has been revolutionary for me and my work."* — [Starry Hope on X](https://x.com/starryhopeblog/status/1904985812137132056)
- *"Try aider! One of the best ways to vibe code."* — [Chris Wall on X](https://x.com/Chris65536/status/1905053418961391929)
- *"Freaking love Aider."* — [hztar on Hacker News](https://news.ycombinator.com/item?id=44035015)
- *"Aider is hands down the best. And it's free and opensource."* — [AriyaSavakaLurker on Reddit](https://www.reddit.com/r/ChatGPTCoding/comments/1ik16y6/whats_your_take_on_aider/mbip39n/)
- *"Aider is also my best friend."* — [jzn21 on Reddit](https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27dcnb/)
- *"Try Aider, it's worth it."* — [jorgejhms on Reddit](https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27cp99/)
- *"I like aider :)"* — [Chenwei Cui on X](https://x.com/ccui42/status/1904965344999145698)
- *"Aider is the precision tool of LLM code gen... Minimal, thoughtful and capable of surgical changes ... while keeping the developer in control."* — [Reilly Sweetland on X](https://x.com/rsweetland/status/1904963807237259586)
- *"Cannot believe aider vibe coded a 650 LOC feature across service and cli today in 1 shot."* - [autopoietist on Discord](https://discord.com/channels/1131200896827654144/1131200896827654149/1355675042259796101)
- *"Oh no the secret is out! Yes, Aider is the best coding tool around. I highly, highly recommend it to anyone."* — [Joshua D Vander Hook on X](https://x.com/jodavaho/status/1911154899057795218)
- *"thanks to aider, i have started and finished three personal projects within the last two days"* — [joseph stalzyn on X](https://x.com/anitaheeder/status/1908338609645904160)
- *"Been using aider as my daily driver for over a year ... I absolutely love the tool, like beyond words."* — [koleok on Discord](https://discord.com/channels/1131200896827654144/1273248471394291754/1356727448372252783)
- *"Aider ... is the tool to benchmark against."* — [BeetleB on Hacker News](https://news.ycombinator.com/item?id=43930201)
- *"aider is really cool"* — [kache on X](https://x.com/yacineMTB/status/1911224442430124387)

View file

@ -1,6 +1,6 @@
from packaging import version
__version__ = "0.83.1.dev"
__version__ = "0.86.2.dev"
safe_version = __version__
try:

View file

@ -70,9 +70,17 @@ class Analytics:
# ephemeral
logfile = None
def __init__(self, logfile=None, permanently_disable=False):
def __init__(
self,
logfile=None,
permanently_disable=False,
posthog_host=None,
posthog_project_api_key=None,
):
self.logfile = logfile
self.get_or_create_uuid()
self.custom_posthog_host = posthog_host
self.custom_posthog_project_api_key = posthog_project_api_key
if self.permanently_disable or permanently_disable or not self.asked_opt_in:
self.disable(permanently_disable)
@ -92,8 +100,8 @@ class Analytics:
# self.mp = Mixpanel(mixpanel_project_token)
self.ph = Posthog(
project_api_key=posthog_project_api_key,
host=posthog_host,
project_api_key=self.custom_posthog_project_api_key or posthog_project_api_key,
host=self.custom_posthog_host or posthog_host,
on_error=self.posthog_error,
enable_exception_autocapture=True,
super_properties=self.get_system_info(), # Add system info to all events
@ -229,7 +237,7 @@ class Analytics:
self.mp = None # Disable mixpanel on connection errors
if self.ph:
self.ph.capture(self.user_id, event_name, dict(properties))
self.ph.capture(event_name, distinct_id=self.user_id, properties=dict(properties))
if self.logfile:
log_entry = {

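The Analytics hunks above let `--analytics-posthog-host` and `--analytics-posthog-project-api-key` override the built-in PostHog destination via an or-fallback on the stored custom values. A minimal sketch of that precedence (the default host below is an assumed placeholder, not aider's actual value):

```python
# Assumed stand-in for the module-level default destination.
DEFAULT_POSTHOG_HOST = "https://us.i.posthog.com"

class AnalyticsConfig:
    def __init__(self, posthog_host=None, posthog_project_api_key=None):
        # Constructor arguments are stored as custom overrides, as in the diff.
        self.custom_posthog_host = posthog_host
        self.custom_posthog_project_api_key = posthog_project_api_key

    def effective_host(self):
        # `or` falls through to the default when no override was passed.
        return self.custom_posthog_host or DEFAULT_POSTHOG_HOST
```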
View file

@ -40,10 +40,22 @@ def get_parser(default_config_files, git_root):
config_file_parser_class=configargparse.YAMLConfigFileParser,
auto_env_var_prefix="AIDER_",
)
# List of valid edit formats for argparse validation & shtab completion.
# Dynamically gather them from the registered coder classes so the list
# stays in sync if new formats are added.
from aider import coders as _aider_coders
edit_format_choices = sorted(
{
c.edit_format
for c in _aider_coders.__all__
if hasattr(c, "edit_format") and c.edit_format is not None
}
)
group = parser.add_argument_group("Main model")
group.add_argument(
"files", metavar="FILE", nargs="*", help="files to edit with an LLM (optional)"
)
).complete = shtab.FILE
group.add_argument(
"--model",
metavar="MODEL",
@ -110,13 +122,13 @@ def get_parser(default_config_files, git_root):
metavar="MODEL_SETTINGS_FILE",
default=".aider.model.settings.yml",
help="Specify a file with aider model settings for unknown models",
)
).complete = shtab.FILE
group.add_argument(
"--model-metadata-file",
metavar="MODEL_METADATA_FILE",
default=".aider.model.metadata.json",
help="Specify a file with context window and costs for unknown models",
)
).complete = shtab.FILE
group.add_argument(
"--alias",
action="append",
@ -131,7 +143,10 @@ def get_parser(default_config_files, git_root):
group.add_argument(
"--thinking-tokens",
type=str,
help="Set the thinking token budget for models that support it (default: not set)",
help=(
"Set the thinking token budget for models that support it. Use 0 to disable. (default:"
" not set)"
),
)
group.add_argument(
"--verify-ssl",
@ -149,6 +164,7 @@ def get_parser(default_config_files, git_root):
"--edit-format",
"--chat-mode",
metavar="EDIT_FORMAT",
choices=edit_format_choices,
default=None,
help="Specify what edit format the LLM should use (default depends on model)",
)
@ -183,6 +199,7 @@ def get_parser(default_config_files, git_root):
group.add_argument(
"--editor-edit-format",
metavar="EDITOR_EDIT_FORMAT",
choices=edit_format_choices,
default=None,
help="Specify the edit format for the editor model (default: depends on editor model)",
)
@ -262,13 +279,13 @@ def get_parser(default_config_files, git_root):
metavar="INPUT_HISTORY_FILE",
default=default_input_history_file,
help=f"Specify the chat input history file (default: {default_input_history_file})",
)
).complete = shtab.FILE
group.add_argument(
"--chat-history-file",
metavar="CHAT_HISTORY_FILE",
default=default_chat_history_file,
help=f"Specify the chat history file (default: {default_chat_history_file})",
)
).complete = shtab.FILE
group.add_argument(
"--restore-chat-history",
action=argparse.BooleanOptionalAction,
@ -280,7 +297,7 @@ def get_parser(default_config_files, git_root):
metavar="LLM_HISTORY_FILE",
default=None,
help="Log the conversation with the LLM to this file (for example, .aider.llm.history)",
)
).complete = shtab.FILE
##########
group = parser.add_argument_group("Output settings")
@ -396,6 +413,12 @@ def get_parser(default_config_files, git_root):
default=True,
help="Enable/disable adding .aider* to .gitignore (default: True)",
)
group.add_argument(
"--add-gitignore-files",
action=argparse.BooleanOptionalAction,
default=False,
help="Enable/disable the addition of files listed in .gitignore to Aider's editing scope.",
)
default_aiderignore_file = (
os.path.join(git_root, ".aiderignore") if git_root else ".aiderignore"
)
@ -406,7 +429,7 @@ def get_parser(default_config_files, git_root):
type=lambda path_str: resolve_aiderignore_path(path_str, git_root),
default=default_aiderignore_file,
help="Specify the aider ignore file (default: .aiderignore in git root)",
)
).complete = shtab.FILE
group.add_argument(
"--subtree-only",
action="store_true",
@ -458,10 +481,10 @@ def get_parser(default_config_files, git_root):
group.add_argument(
"--attribute-co-authored-by",
action=argparse.BooleanOptionalAction,
default=False,
default=True,
help=(
"Attribute aider edits using the Co-authored-by trailer in the commit message"
" (default: False). If True, this takes precedence over default --attribute-author and"
" (default: True). If True, this takes precedence over default --attribute-author and"
" --attribute-committer behavior unless they are explicitly set to True."
),
)
@ -552,13 +575,23 @@ def get_parser(default_config_files, git_root):
"--analytics-log",
metavar="ANALYTICS_LOG_FILE",
help="Specify a file to log analytics events",
)
).complete = shtab.FILE
group.add_argument(
"--analytics-disable",
action="store_true",
help="Permanently disable analytics",
default=False,
)
group.add_argument(
"--analytics-posthog-host",
metavar="ANALYTICS_POSTHOG_HOST",
help="Send analytics to custom PostHog instance",
)
group.add_argument(
"--analytics-posthog-project-api-key",
metavar="ANALYTICS_POSTHOG_PROJECT_API_KEY",
help="Send analytics to custom PostHog project",
)
#########
group = parser.add_argument_group("Upgrading")
@ -619,7 +652,7 @@ def get_parser(default_config_files, git_root):
"Specify a file containing the message to send the LLM, process reply, then exit"
" (disables chat mode)"
),
)
).complete = shtab.FILE
group.add_argument(
"--gui",
"--browser",
@ -637,7 +670,7 @@ def get_parser(default_config_files, git_root):
"--apply",
metavar="FILE",
help="Apply the changes from the given file instead of running the chat (debug)",
)
).complete = shtab.FILE
group.add_argument(
"--apply-clipboard-edits",
action="store_true",
@ -698,13 +731,13 @@ def get_parser(default_config_files, git_root):
action="append",
metavar="FILE",
help="specify a file to edit (can be used multiple times)",
)
).complete = shtab.FILE
group.add_argument(
"--read",
action="append",
metavar="FILE",
help="specify a read-only file (can be used multiple times)",
)
).complete = shtab.FILE
group.add_argument(
"--vim",
action="store_true",
@ -717,6 +750,12 @@ def get_parser(default_config_files, git_root):
default=None,
help="Specify the language to use in the chat (default: None, uses system settings)",
)
group.add_argument(
"--commit-language",
metavar="COMMIT_LANGUAGE",
default=None,
help="Specify the language to use in the commit message (default: None, user language)",
)
group.add_argument(
"--yes-always",
action="store_true",
@ -734,7 +773,7 @@ def get_parser(default_config_files, git_root):
"--load",
metavar="LOAD_FILE",
help="Load and execute /commands from a file on launch",
)
).complete = shtab.FILE
group.add_argument(
"--encoding",
default="utf-8",
@ -755,7 +794,7 @@ def get_parser(default_config_files, git_root):
"Specify the config file (default: search for .aider.conf.yml in git root, cwd"
" or home directory)"
),
)
).complete = shtab.FILE
# This is a duplicate of the argument in the preparser and is a no-op by this time of
# argument parsing, but it's here so that the help is displayed as expected.
group.add_argument(
@ -763,7 +802,7 @@ def get_parser(default_config_files, git_root):
metavar="ENV_FILE",
default=default_env_file(git_root),
help="Specify the .env file to load (default: .env in git root)",
)
).complete = shtab.FILE
group.add_argument(
"--suggest-shell-commands",
action=argparse.BooleanOptionalAction,

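The repeated `).complete = shtab.FILE` additions in these args.py hunks work because `add_argument` returns an `argparse.Action` object, and shtab reads a `complete` attribute on it to decide what the shell should offer. A dependency-free sketch of the pattern, with `FILE_MARKER` standing in for `shtab.FILE`:

```python
import argparse

# Placeholder for shtab.FILE so this sketch needs no third-party package.
FILE_MARKER = "file"

parser = argparse.ArgumentParser()
# add_argument returns the Action object, so the completion hint is just
# an attribute set on that return value.
action = parser.add_argument("--analytics-log", metavar="ANALYTICS_LOG_FILE")
action.complete = FILE_MARKER  # shtab would offer filename completion here
```

Setting the attribute has no effect on normal parsing; it is only consulted when shtab generates completion scripts.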
View file

@ -96,7 +96,7 @@ class YamlHelpFormatter(argparse.HelpFormatter):
# Place in your home dir, or at the root of your git repo.
##########################################################
# Note: You can only put OpenAI and Anthropic API keys in the yaml
# Note: You can only put OpenAI and Anthropic API keys in the YAML
# config file. Keys for all APIs can be stored in a .env file
# https://aider.chat/docs/config/dotenv.html

View file

@ -8,8 +8,7 @@ class AskPrompts(CoderPrompts):
Answer questions about the supplied code.
Always reply to the user in {language}.
Describe code changes however you like, but elide unchanging code.
Don't use SEARCH/REPLACE blocks or return huge swaths of unchanging code.
If you need to describe code changes, do so *briefly*.
"""
example_messages = []

View file

@ -118,6 +118,7 @@ class Coder:
detect_urls = True
ignore_mentions = None
chat_language = None
commit_language = None
file_watcher = None
@classmethod
@ -301,6 +302,7 @@ class Coder:
io,
repo=None,
fnames=None,
add_gitignore_files=False,
read_only_fnames=None,
show_diffs=False,
auto_commits=True,
@ -328,6 +330,7 @@ class Coder:
num_cache_warming_pings=0,
suggest_shell_commands=True,
chat_language=None,
commit_language=None,
detect_urls=True,
ignore_mentions=None,
total_tokens_sent=0,
@ -341,6 +344,7 @@ class Coder:
self.event = self.analytics.event
self.chat_language = chat_language
self.commit_language = commit_language
self.commit_before_message = []
self.aider_commit_hashes = set()
self.rejected_urls = set()
@ -386,6 +390,7 @@ class Coder:
self.verbose = verbose
self.abs_fnames = set()
self.abs_read_only_fnames = set()
self.add_gitignore_files = add_gitignore_files
if cur_messages:
self.cur_messages = cur_messages
@ -443,8 +448,9 @@ class Coder:
for fname in fnames:
fname = Path(fname)
if self.repo and self.repo.git_ignored_file(fname):
if self.repo and self.repo.git_ignored_file(fname) and not self.add_gitignore_files:
self.io.tool_warning(f"Skipping {fname} that matches gitignore spec.")
continue
if self.repo and self.repo.ignored_file(fname):
self.io.tool_warning(f"Skipping {fname} that matches aiderignore spec.")
@ -1049,6 +1055,9 @@ class Coder:
if not lang_code:
return None
if lang_code.upper() in ("C", "POSIX"):
return None
# Probably already a language name
if (
len(lang_code) > 3
@ -1079,7 +1088,8 @@ class Coder:
"ko": "Korean",
"ru": "Russian",
}
return fallback.get(lang_code.split("_")[0].lower(), lang_code)
primary_lang_code = lang_code.replace("-", "_").split("_")[0].lower()
return fallback.get(primary_lang_code, lang_code)
def get_user_language(self):
"""
@ -1090,6 +1100,7 @@ class Coder:
2. ``locale.getlocale()``
3. ``LANG`` / ``LANGUAGE`` / ``LC_ALL`` / ``LC_MESSAGES`` environment variables
"""
# Explicit override
if self.chat_language:
return self.normalize_language(self.chat_language)
@ -1098,9 +1109,11 @@ class Coder:
try:
lang = locale.getlocale()[0]
if lang:
return self.normalize_language(lang)
lang = self.normalize_language(lang)
if lang:
return lang
except Exception:
pass # pragma: no cover
pass
# Environment variables
for env_var in ("LANG", "LANGUAGE", "LC_ALL", "LC_MESSAGES"):
@ -1182,10 +1195,10 @@ class Coder:
)
rename_with_shell = ""
if self.chat_language:
language = self.chat_language
if user_lang: # user_lang is the result of self.get_user_language()
language = user_lang
else:
language = "the same language they are using"
language = "the same language they are using" # Default if no specific lang detected
if self.fence[0] == "`" * 4:
quad_backtick_reminder = (

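The `normalize_language` hunks above map "C"/"POSIX" locales to None and reduce codes like "pt-BR" or "en_US.UTF-8" to their primary subtag before the fallback lookup. A simplified sketch (FALLBACK is an excerpt of the table in the diff; the real method also has a length heuristic not shown here):

```python
# Excerpt of the fallback table from the diff.
FALLBACK = {"en": "English", "fr": "French", "pt": "Portuguese", "zh": "Chinese"}

def normalize_language(lang_code):
    # "C" and "POSIX" locales carry no user-language information.
    if not lang_code or lang_code.upper() in ("C", "POSIX"):
        return None
    # "pt-BR" -> "pt", "en_US.UTF-8" -> "en"
    primary = lang_code.replace("-", "_").split("_")[0].lower()
    return FALLBACK.get(primary, lang_code)
```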
View file

@ -383,7 +383,7 @@ def do_replace(fname, content, before_text, after_text, fence=None):
return new_content
HEAD = r"^<{5,9} SEARCH\s*$"
HEAD = r"^<{5,9} SEARCH>?\s*$"
DIVIDER = r"^={5,9}\s*$"
UPDATED = r"^>{5,9} REPLACE\s*$"

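The widened `HEAD` pattern above adds an optional `>` so a marker like `<<<<<<< SEARCH>` (which some models emit) still matches. The behavior can be checked directly:

```python
import re

# From the diff: r"^<{5,9} SEARCH\s*$" widened with an optional ">".
HEAD = r"^<{5,9} SEARCH>?\s*$"

def is_search_marker(line):
    # Matches 5-9 "<" characters, the SEARCH keyword, and an optional ">".
    return re.match(HEAD, line) is not None
```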
View file

@ -12,8 +12,6 @@ Respect and use existing conventions, libraries, etc that are already present in
Take requests for changes to the supplied code.
If the request is ambiguous, ask questions.
Always reply to the user in {language}.
Once you understand the request you MUST:
1. Decide if you need to propose *SEARCH/REPLACE* edits to any files that haven't been added to the chat. You can create new files without asking!

View file

@ -15,8 +15,6 @@ Respect and use existing conventions, libraries, etc that are already present in
Take requests for changes to the supplied code.
If the request is ambiguous, ask questions.
Always reply to the user in {language}.
Once you understand the request you MUST:
1. Decide if you need to propose edits to any files that haven't been added to the chat. You can create new files without asking!

View file

@ -109,7 +109,7 @@ class RelativeIndenter:
"""
if self.marker in text:
raise ValueError("Text already contains the outdent marker: {self.marker}")
raise ValueError(f"Text already contains the outdent marker: {self.marker}")
lines = text.splitlines(keepends=True)

View file

@ -13,8 +13,6 @@ Respect and use existing conventions, libraries, etc that are already present in
Take requests for changes to the supplied code.
If the request is ambiguous, ask questions.
Always reply to the user in {language}.
For each file that needs to be changed, write out the changes similar to a unified diff like `diff -U0` would produce.
"""

View file

@ -7,9 +7,6 @@ class WholeFilePrompts(CoderPrompts):
main_system = """Act as an expert software developer.
Take requests for changes to the supplied code.
If the request is ambiguous, ask questions.
Always reply to the user in {language}.
{final_reminders}
Once you understand the request you MUST:
1. Determine if any code changes are needed.

View file

@ -88,6 +88,11 @@ class Commands:
"Switch the Main Model to a new LLM"
model_name = args.strip()
if not model_name:
announcements = "\n".join(self.coder.get_announcements())
self.io.tool_output(announcements)
return
model = models.Model(
model_name,
editor_model=self.coder.main_model.editor_model.name,
@ -346,7 +351,7 @@ class Commands:
return
commit_message = args.strip() if args else None
self.coder.repo.commit(message=commit_message)
self.coder.repo.commit(message=commit_message, coder=self.coder)
def cmd_lint(self, args="", fnames=None):
"Lint and fix in-chat files or all dirty files if none in chat"
@ -407,6 +412,7 @@ class Commands:
"Clear the chat history"
self._clear_chat_history()
self.io.tool_output("All chat history cleared.")
def _drop_all_files(self):
self.coder.abs_fnames = set()
@ -563,6 +569,7 @@ class Commands:
last_commit_hash = self.coder.repo.get_head_commit_sha(short=True)
last_commit_message = self.coder.repo.get_head_commit_message("(unknown)").strip()
last_commit_message = (last_commit_message.splitlines() or [""])[0]
if last_commit_hash not in self.coder.aider_commit_hashes:
self.io.tool_error("The last commit was not made by aider in this chat session.")
self.io.tool_output(
@ -641,6 +648,7 @@ class Commands:
# Get the current HEAD after undo
current_head_hash = self.coder.repo.get_head_commit_sha(short=True)
current_head_message = self.coder.repo.get_head_commit_message("(unknown)").strip()
current_head_message = (current_head_message.splitlines() or [""])[0]
self.io.tool_output(f"Now at: {current_head_hash} {current_head_message}")
if self.coder.main_model.send_undo_reply:
@ -844,7 +852,11 @@ class Commands:
)
continue
if self.coder.repo and self.coder.repo.git_ignored_file(matched_file):
if (
self.coder.repo
and self.coder.repo.git_ignored_file(matched_file)
and not self.coder.add_gitignore_files
):
self.io.tool_error(f"Can't add {matched_file} which is in gitignore")
continue
@ -1312,12 +1324,23 @@ class Commands:
# First collect all expanded paths
for pattern in filenames:
expanded_pattern = expanduser(pattern)
if os.path.isabs(expanded_pattern):
# For absolute paths, glob it
matches = list(glob.glob(expanded_pattern))
path_obj = Path(expanded_pattern)
is_abs = path_obj.is_absolute()
if not is_abs:
path_obj = Path(self.coder.root) / path_obj
matches = []
# Check for literal path existence first
if path_obj.exists():
matches = [path_obj]
else:
# For relative paths and globs, use glob from the root directory
matches = list(Path(self.coder.root).glob(expanded_pattern))
# If literal path doesn't exist, try globbing
if is_abs:
# For absolute paths, glob it
matches = [Path(p) for p in glob.glob(expanded_pattern)]
else:
# For relative paths and globs, use glob from the root directory
matches = list(Path(self.coder.root).glob(expanded_pattern))
if not matches:
self.io.tool_error(f"No matches found for: {pattern}")
@ -1392,7 +1415,30 @@ class Commands:
"Print out the current settings"
settings = format_settings(self.parser, self.args)
announcements = "\n".join(self.coder.get_announcements())
# Build metadata for the active models (main, editor, weak)
model_sections = []
active_models = [
("Main model", self.coder.main_model),
("Editor model", getattr(self.coder.main_model, "editor_model", None)),
("Weak model", getattr(self.coder.main_model, "weak_model", None)),
]
for label, model in active_models:
if not model:
continue
info = getattr(model, "info", {}) or {}
if not info:
continue
model_sections.append(f"{label} ({model.name}):")
for k, v in sorted(info.items()):
model_sections.append(f" {k}: {v}")
model_sections.append("") # blank line between models
model_metadata = "\n".join(model_sections)
output = f"{announcements}\n{settings}"
if model_metadata:
output += "\n" + model_metadata
self.io.tool_output(output)
def completions_raw_load(self, document, complete_event):
@ -1514,7 +1560,7 @@ class Commands:
return self.cmd_editor(args)
def cmd_think_tokens(self, args):
"Set the thinking token budget (supports formats like 8096, 8k, 10.5k, 0.5M)"
"""Set the thinking token budget, eg: 8096, 8k, 10.5k, 0.5M, or 0 to disable."""
model = self.coder.main_model
if not args.strip():
@ -1532,10 +1578,16 @@ class Commands:
value = args.strip()
model.set_thinking_tokens(value)
formatted_budget = model.get_thinking_tokens()
budget = model.get_raw_thinking_tokens()
# Handle the special case of 0 to disable thinking tokens
if value == "0":
self.io.tool_output("Thinking tokens disabled.")
else:
formatted_budget = model.get_thinking_tokens()
budget = model.get_raw_thinking_tokens()
self.io.tool_output(
f"Set thinking token budget to {budget:,} tokens ({formatted_budget})."
)
self.io.tool_output(f"Set thinking token budget to {budget:,} tokens ({formatted_budget}).")
self.io.tool_output()
# Output announcements

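The reworked pattern matching in this commands.py section prefers a literal path when it exists and only then falls back to globbing, with relative patterns anchored at the repo root. A self-contained sketch of that algorithm, under the assumption that the hunk shows the whole decision flow:

```python
import glob
import os
import tempfile
from pathlib import Path

def expand_pattern(root, pattern):
    path_obj = Path(pattern)
    is_abs = path_obj.is_absolute()
    if not is_abs:
        path_obj = Path(root) / path_obj
    # Literal path wins over glob interpretation.
    if path_obj.exists():
        return [path_obj]
    if is_abs:
        # Absolute patterns are globbed as-is.
        return [Path(p) for p in glob.glob(pattern)]
    # Relative patterns glob from the repo root.
    return list(Path(root).glob(pattern))

# Demo repo root with a single file.
root = tempfile.mkdtemp()
open(os.path.join(root, "a.py"), "w").close()
```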
View file

@ -20,6 +20,7 @@ EXCEPTIONS = [
"The API provider is not able to authenticate you. Check your API key.",
),
ExInfo("AzureOpenAIError", True, None),
ExInfo("BadGatewayError", True, "The API provider's servers are down or overloaded."),
ExInfo("BadRequestError", False, None),
ExInfo("BudgetExceededError", True, None),
ExInfo(
@ -28,6 +29,8 @@ EXCEPTIONS = [
"The API provider has refused the request due to a safety policy about the content.",
),
ExInfo("ContextWindowExceededError", False, None), # special case handled in base_coder
ExInfo("ErrorEventError", True, None),
ExInfo("ImageFetchError", False, "The API provider was unable to fetch one or more images."),
ExInfo("InternalServerError", True, "The API provider's servers are down or overloaded."),
ExInfo("InvalidRequestError", True, None),
ExInfo("JSONSchemaValidationError", True, None),

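The new entries above follow the `ExInfo(name, retry, description)` shape, where the second field marks whether the error is worth retrying. A sketch of how a caller could consume the table (`should_retry` is a hypothetical helper, not aider's API; the field names are inferred from the diff):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExInfo:
    name: str
    retry: bool
    description: Optional[str]

# Excerpt of the table, using entries shown in the diff.
EXCEPTIONS = [
    ExInfo("BadGatewayError", True, "The API provider's servers are down or overloaded."),
    ExInfo("ImageFetchError", False, "The API provider was unable to fetch one or more images."),
]

def should_retry(exc_name):
    # Retry only errors the table marks as transient.
    return any(e.name == exc_name and e.retry for e in EXCEPTIONS)
```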
View file

@ -63,37 +63,37 @@ class ChatSummary:
if split_index <= min_split:
return self.summarize_all(messages)
head = messages[:split_index]
# Split head and tail
tail = messages[split_index:]
sized = sized[:split_index]
head.reverse()
sized.reverse()
# Only size the head once
sized_head = sized[:split_index]
# Precompute token limit (fallback to 4096 if undefined)
model_max_input_tokens = self.models[0].info.get("max_input_tokens") or 4096
model_max_input_tokens -= 512 # reserve buffer for safety
keep = []
total = 0
# These sometimes come set with value = None
model_max_input_tokens = self.models[0].info.get("max_input_tokens") or 4096
model_max_input_tokens -= 512
for i in range(split_index):
total += sized[i][0]
# Iterate in original order, summing tokens until limit
for tokens, msg in sized_head:
total += tokens
if total > model_max_input_tokens:
break
keep.append(head[i])
keep.reverse()
keep.append(msg)
# No need to reverse lists back and forth
summary = self.summarize_all(keep)
tail_tokens = sum(tokens for tokens, msg in sized[split_index:])
# If the combined summary and tail still fits, return directly
summary_tokens = self.token_count(summary)
result = summary + tail
tail_tokens = sum(tokens for tokens, _ in sized[split_index:])
if summary_tokens + tail_tokens < self.max_tokens:
return result
return summary + tail
return self.summarize_real(result, depth + 1)
# Otherwise recurse with increased depth
return self.summarize_real(summary + tail, depth + 1)
def summarize_all(self, messages):
content = ""

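The ChatSummary rework above sizes the head messages once and walks them in their original order, accumulating token counts until the model's input budget (minus a 512-token safety reserve) would be exceeded. The core loop can be sketched as:

```python
def select_head(sized_head, max_input_tokens, reserve=512):
    """Keep head messages, in order, until the token budget is hit.

    sized_head is a list of (token_count, message) pairs, mirroring the
    `sized` structure in the diff.
    """
    budget = max_input_tokens - reserve
    keep, total = [], 0
    for tokens, msg in sized_head:
        total += tokens
        if total > budget:
            break
        keep.append(msg)
    return keep
```

Iterating forward removes the reverse/re-reverse dance the old code needed.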
View file

@ -308,6 +308,12 @@ class InputOutput:
self.yes = yes
self.input_history_file = input_history_file
if self.input_history_file:
try:
Path(self.input_history_file).parent.mkdir(parents=True, exist_ok=True)
except (PermissionError, OSError) as e:
self.tool_warning(f"Could not create directory for input history: {e}")
self.input_history_file = None
self.llm_history_file = llm_history_file
if chat_history_file is not None:
self.chat_history_file = Path(chat_history_file)
@ -749,9 +755,14 @@ class InputOutput:
if not self.llm_history_file:
return
timestamp = datetime.now().isoformat(timespec="seconds")
with open(self.llm_history_file, "a", encoding=self.encoding) as log_file:
log_file.write(f"{role.upper()} {timestamp}\n")
log_file.write(content + "\n")
try:
Path(self.llm_history_file).parent.mkdir(parents=True, exist_ok=True)
with open(self.llm_history_file, "a", encoding="utf-8") as log_file:
log_file.write(f"{role.upper()} {timestamp}\n")
log_file.write(content + "\n")
except (PermissionError, OSError) as err:
self.tool_warning(f"Unable to write to llm history file {self.llm_history_file}: {err}")
self.llm_history_file = None
def display_user_input(self, inp):
if self.pretty and self.user_input_color:
@ -1001,7 +1012,11 @@ class InputOutput:
self.console.print(*messages, style=style)
def get_assistant_mdstream(self):
mdargs = dict(style=self.assistant_output_color, code_theme=self.code_theme)
mdargs = dict(
style=self.assistant_output_color,
code_theme=self.code_theme,
inline_code_lexer="text",
)
mdStream = MarkdownStream(mdargs=mdargs)
return mdStream
@ -1112,6 +1127,7 @@ class InputOutput:
text += "\n"
if self.chat_history_file is not None:
try:
self.chat_history_file.parent.mkdir(parents=True, exist_ok=True)
with self.chat_history_file.open("a", encoding=self.encoding, errors="ignore") as f:
f.write(text)
except (PermissionError, OSError) as err:

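The InputOutput hunks above harden every history write the same way: create the parent directory on demand and, on `PermissionError`/`OSError`, warn and disable the log instead of crashing. The pattern as a standalone sketch (`append_history` is a hypothetical helper; the real code sets `self.llm_history_file = None` where this returns None):

```python
import os
import tempfile
from pathlib import Path

def append_history(path, text):
    try:
        # Create missing parent directories before appending.
        Path(path).parent.mkdir(parents=True, exist_ok=True)
        with open(path, "a", encoding="utf-8") as f:
            f.write(text)
        return path
    except (PermissionError, OSError):
        # Caller disables the log rather than retrying every message.
        return None

# Demo: the "logs" subdirectory does not exist yet and is created on demand.
tmp = tempfile.mkdtemp()
log_path = append_history(os.path.join(tmp, "logs", "llm.history"), "USER hi\n")
```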
View file

@ -633,7 +633,12 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
)
os.environ["OPENAI_ORGANIZATION"] = args.openai_organization_id
analytics = Analytics(logfile=args.analytics_log, permanently_disable=args.analytics_disable)
analytics = Analytics(
logfile=args.analytics_log,
permanently_disable=args.analytics_disable,
posthog_host=args.analytics_posthog_host,
posthog_project_api_key=args.analytics_posthog_project_api_key,
)
if args.analytics is not False:
if analytics.need_to_ask(args.analytics):
io.tool_output(
@ -921,8 +926,9 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
analytics.event("exit", reason="Repository sanity check failed")
return 1
if repo:
analytics.event("repo", num_files=len(repo.get_tracked_files()))
if repo and not args.skip_sanity_check_repo:
num_files = len(repo.get_tracked_files())
analytics.event("repo", num_files=num_files)
else:
analytics.event("no-repo")
@ -993,9 +999,11 @@ def main(argv=None, input=None, output=None, force_git_root=None, return_coder=F
num_cache_warming_pings=args.cache_keepalive_pings,
suggest_shell_commands=args.suggest_shell_commands,
chat_language=args.chat_language,
commit_language=args.commit_language,
detect_urls=args.detect_urls,
auto_copy_context=args.copy_paste,
auto_accept_architect=args.auto_accept_architect,
add_gitignore_files=args.add_gitignore_files,
)
except UnknownEditFormat as err:
io.tool_error(str(err))

View file

@ -8,6 +8,7 @@ import platform
import sys
import time
from dataclasses import dataclass, fields
from datetime import datetime
from pathlib import Path
from typing import Optional, Union
@ -15,8 +16,10 @@ import json5
import yaml
from PIL import Image
from aider import __version__
from aider.dump import dump # noqa: F401
from aider.llm import litellm
from aider.openrouter import OpenRouterModelManager
from aider.sendchat import ensure_alternating_roles, sanity_check_messages
from aider.utils import check_pip_install_extra
@ -69,6 +72,8 @@ claude-3-opus-20240229
claude-3-sonnet-20240229
claude-3-5-sonnet-20240620
claude-3-5-sonnet-20241022
claude-sonnet-4-20250514
claude-opus-4-20250514
"""
ANTHROPIC_MODELS = [ln.strip() for ln in ANTHROPIC_MODELS.splitlines() if ln.strip()]
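The alias updates in the next hunk (sonnet → claude-sonnet-4, gemini → gemini-3-pro-preview, etc.) feed a plain lookup table; resolution is a dict lookup that falls through to the given name. A sketch using an excerpt of the updated entries (`resolve_model` is a sketch helper, not aider's API):

```python
# Excerpt of the updated MODEL_ALIASES table from the diff.
MODEL_ALIASES = {
    "sonnet": "anthropic/claude-sonnet-4-20250514",
    "opus": "claude-opus-4-20250514",
    "flash": "gemini/gemini-2.5-flash",
    "gemini": "gemini/gemini-3-pro-preview",
}

def resolve_model(name):
    # Unknown names pass through unchanged.
    return MODEL_ALIASES.get(name, name)
```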
@ -76,9 +81,9 @@ ANTHROPIC_MODELS = [ln.strip() for ln in ANTHROPIC_MODELS.splitlines() if ln.str
# Mapping of model aliases to their canonical names
MODEL_ALIASES = {
# Claude models
"sonnet": "anthropic/claude-3-7-sonnet-20250219",
"sonnet": "anthropic/claude-sonnet-4-20250514",
"haiku": "claude-3-5-haiku-20241022",
"opus": "claude-3-opus-20240229",
"opus": "claude-opus-4-20250514",
# GPT models
"4": "gpt-4-0613",
"4o": "gpt-4o",
@ -88,11 +93,13 @@ MODEL_ALIASES = {
"3": "gpt-3.5-turbo",
# Other models
"deepseek": "deepseek/deepseek-chat",
"flash": "gemini/gemini-2.5-flash-preview-04-17",
"flash": "gemini/gemini-2.5-flash",
"flash-lite": "gemini/gemini-2.5-flash-lite",
"quasar": "openrouter/openrouter/quasar-alpha",
"r1": "deepseek/deepseek-reasoner",
"gemini-2.5-pro": "gemini/gemini-2.5-pro-preview-05-06",
"gemini": "gemini/gemini-2.5-pro-preview-05-06",
"gemini-2.5-pro": "gemini/gemini-2.5-pro",
"gemini-3-pro-preview": "gemini/gemini-3-pro-preview",
"gemini": "gemini/gemini-3-pro-preview",
"gemini-exp": "gemini/gemini-2.5-pro-exp-03-25",
"grok3": "xai/grok-3-beta",
"optimus": "openrouter/openrouter/optimus-alpha",
@ -149,8 +156,13 @@ class ModelInfoManager:
self.verify_ssl = True
self._cache_loaded = False
# Manager for the cached OpenRouter model database
self.openrouter_manager = OpenRouterModelManager()
def set_verify_ssl(self, verify_ssl):
self.verify_ssl = verify_ssl
if hasattr(self, "openrouter_manager"):
self.openrouter_manager.set_verify_ssl(verify_ssl)
def _load_cache(self):
if self._cache_loaded:
@ -232,6 +244,12 @@ class ModelInfoManager:
return litellm_info
if not cached_info and model.startswith("openrouter/"):
# First try using the locally cached OpenRouter model database
openrouter_info = self.openrouter_manager.get_model_info(model)
if openrouter_info:
return openrouter_info
# Fallback to legacy web-scraping if the API cache does not contain the model
openrouter_info = self.fetch_openrouter_model_info(model)
if openrouter_info:
return openrouter_info
@ -420,6 +438,14 @@ class Model(ModelSettings):
self.examples_as_sys_msg = False
return # <--
last_segment = model.split("/")[-1]
if last_segment in ("gpt-5", "gpt-5-2025-08-07"):
self.use_temperature = False
self.edit_format = "diff"
if "reasoning_effort" not in self.accepts_settings:
self.accepts_settings.append("reasoning_effort")
return # <--
if "/o1-mini" in model:
self.use_repo_map = True
self.use_temperature = False
@ -778,6 +804,7 @@ class Model(ModelSettings):
"""
Set the thinking token budget for models that support it.
Accepts formats: 8096, "8k", "10.5k", "0.5M", "10K", etc.
Pass "0" to disable thinking tokens.
"""
if value is not None:
num_tokens = self.parse_token_value(value)
@ -789,9 +816,17 @@ class Model(ModelSettings):
if self.name.startswith("openrouter/"):
if "extra_body" not in self.extra_params:
self.extra_params["extra_body"] = {}
self.extra_params["extra_body"]["reasoning"] = {"max_tokens": num_tokens}
if num_tokens > 0:
self.extra_params["extra_body"]["reasoning"] = {"max_tokens": num_tokens}
else:
if "reasoning" in self.extra_params["extra_body"]:
del self.extra_params["extra_body"]["reasoning"]
else:
self.extra_params["thinking"] = {"type": "enabled", "budget_tokens": num_tokens}
if num_tokens > 0:
self.extra_params["thinking"] = {"type": "enabled", "budget_tokens": num_tokens}
else:
if "thinking" in self.extra_params:
del self.extra_params["thinking"]
def get_raw_thinking_tokens(self):
"""Get formatted thinking token budget if available"""
@ -861,6 +896,57 @@ class Model(ModelSettings):
def is_ollama(self):
return self.name.startswith("ollama/") or self.name.startswith("ollama_chat/")
def github_copilot_token_to_open_ai_key(self, extra_headers):
# check to see if there's an openai api key
# If so, check to see if it's expired
openai_api_key = "OPENAI_API_KEY"
if openai_api_key not in os.environ or (
int(dict(x.split("=") for x in os.environ[openai_api_key].split(";"))["exp"])
< int(datetime.now().timestamp())
):
import requests
class GitHubCopilotTokenError(Exception):
"""Custom exception for GitHub Copilot token-related errors."""
pass
# Validate GitHub Copilot token exists
if "GITHUB_COPILOT_TOKEN" not in os.environ:
raise KeyError("GITHUB_COPILOT_TOKEN environment variable not found")
github_token = os.environ["GITHUB_COPILOT_TOKEN"]
if not github_token.strip():
raise KeyError("GITHUB_COPILOT_TOKEN environment variable is empty")
headers = {
"Authorization": f"Bearer {os.environ['GITHUB_COPILOT_TOKEN']}",
"Editor-Version": extra_headers["Editor-Version"],
"Copilot-Integration-Id": extra_headers["Copilot-Integration-Id"],
"Content-Type": "application/json",
}
url = "https://api.github.com/copilot_internal/v2/token"
res = requests.get(url, headers=headers)
if res.status_code != 200:
safe_headers = {k: v for k, v in headers.items() if k != "Authorization"}
token_preview = github_token[:5] + "..." if len(github_token) >= 5 else github_token
safe_headers["Authorization"] = f"Bearer {token_preview}"
raise GitHubCopilotTokenError(
f"GitHub Copilot API request failed (Status: {res.status_code})\n"
f"URL: {url}\n"
f"Headers: {json.dumps(safe_headers, indent=2)}\n"
f"JSON: {res.text}"
)
response_data = res.json()
token = response_data.get("token")
if not token:
raise GitHubCopilotTokenError("Response missing 'token' field")
os.environ[openai_api_key] = token
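The expiry check above treats the Copilot-issued `OPENAI_API_KEY` as semicolon-separated `key=value` pairs containing an `exp` Unix timestamp. That check can be isolated into a small helper (a sketch of the same logic, not aider's code):

```python
from datetime import datetime


def copilot_key_expired(key_value: str) -> bool:
    # The Copilot-issued key embeds fields like "tid=...;exp=1700000000;...".
    fields = dict(part.split("=", 1) for part in key_value.split(";"))
    return int(fields["exp"]) < int(datetime.now().timestamp())
```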
def send_completion(self, messages, functions, stream, temperature=None):
if os.environ.get("AIDER_SANITY_CHECK_TURNS"):
sanity_check_messages(messages)
@ -902,6 +988,16 @@ class Model(ModelSettings):
dump(kwargs)
kwargs["messages"] = messages
# Are we using github copilot?
if "GITHUB_COPILOT_TOKEN" in os.environ:
if "extra_headers" not in kwargs:
kwargs["extra_headers"] = {
"Editor-Version": f"aider/{__version__}",
"Copilot-Integration-Id": "vscode-chat",
}
self.github_copilot_token_to_open_ai_key(kwargs["extra_headers"])
res = litellm.completion(**kwargs)
return hash_object, res
@ -966,12 +1062,10 @@ def register_models(model_settings_fnames):
for model_settings_dict in model_settings_list:
model_settings = ModelSettings(**model_settings_dict)
existing_model_settings = next(
(ms for ms in MODEL_SETTINGS if ms.name == model_settings.name), None
)
if existing_model_settings:
MODEL_SETTINGS.remove(existing_model_settings)
# Remove all existing settings for this model name
MODEL_SETTINGS[:] = [ms for ms in MODEL_SETTINGS if ms.name != model_settings.name]
# Add the new settings
MODEL_SETTINGS.append(model_settings)
except Exception as e:
raise Exception(f"Error loading model settings from {model_settings_fname}: {e}")


@ -55,9 +55,9 @@ def try_to_select_default_model():
# Check if the user is on a free tier
is_free_tier = check_openrouter_tier(openrouter_key)
if is_free_tier:
return "openrouter/google/gemini-2.5-pro-exp-03-25:free"
return "openrouter/deepseek/deepseek-r1:free"
else:
return "openrouter/anthropic/claude-3.7-sonnet"
return "openrouter/anthropic/claude-sonnet-4"
# Select model based on other available API keys
model_key_pairs = [

aider/openrouter.py (new file)

@ -0,0 +1,128 @@
"""
OpenRouter model metadata caching and lookup.
This module keeps a local cached copy of the OpenRouter model list
(downloaded from ``https://openrouter.ai/api/v1/models``) and exposes a
helper class that returns metadata for a given model in a format compatible
with litellm's ``get_model_info``.
"""
from __future__ import annotations
import json
import time
from pathlib import Path
from typing import Dict
import requests
def _cost_per_token(val: str | None) -> float | None:
"""Convert a price string (USD per token) to a float."""
if val in (None, "", "0"):
return 0.0 if val == "0" else None
try:
return float(val)
except Exception: # noqa: BLE001
return None
class OpenRouterModelManager:
MODELS_URL = "https://openrouter.ai/api/v1/models"
CACHE_TTL = 60 * 60 * 24 # 24 h
def __init__(self) -> None:
self.cache_dir = Path.home() / ".aider" / "caches"
self.cache_file = self.cache_dir / "openrouter_models.json"
self.content: Dict | None = None
self.verify_ssl: bool = True
self._cache_loaded = False
# ------------------------------------------------------------------ #
# Public API #
# ------------------------------------------------------------------ #
def set_verify_ssl(self, verify_ssl: bool) -> None:
"""Enable/disable SSL verification for API requests."""
self.verify_ssl = verify_ssl
def get_model_info(self, model: str) -> Dict:
"""
Return metadata for *model* or an empty ``dict`` when unknown.
``model`` should use the aider naming convention, e.g.
``openrouter/nousresearch/deephermes-3-mistral-24b-preview:free``.
"""
self._ensure_content()
if not self.content or "data" not in self.content:
return {}
route = self._strip_prefix(model)
# Consider both the exact id and id without any “:suffix”.
candidates = {route}
if ":" in route:
candidates.add(route.split(":", 1)[0])
record = next((item for item in self.content["data"] if item.get("id") in candidates), None)
if not record:
return {}
context_len = (
record.get("top_provider", {}).get("context_length")
or record.get("context_length")
or None
)
pricing = record.get("pricing", {})
return {
"max_input_tokens": context_len,
"max_tokens": context_len,
"max_output_tokens": context_len,
"input_cost_per_token": _cost_per_token(pricing.get("prompt")),
"output_cost_per_token": _cost_per_token(pricing.get("completion")),
"litellm_provider": "openrouter",
}
# ------------------------------------------------------------------ #
# Internal helpers #
# ------------------------------------------------------------------ #
def _strip_prefix(self, model: str) -> str:
return model[len("openrouter/") :] if model.startswith("openrouter/") else model
def _ensure_content(self) -> None:
self._load_cache()
if not self.content:
self._update_cache()
def _load_cache(self) -> None:
if self._cache_loaded:
return
try:
self.cache_dir.mkdir(parents=True, exist_ok=True)
if self.cache_file.exists():
cache_age = time.time() - self.cache_file.stat().st_mtime
if cache_age < self.CACHE_TTL:
try:
self.content = json.loads(self.cache_file.read_text())
except json.JSONDecodeError:
self.content = None
except OSError:
# Cache directory might be unwritable; ignore.
pass
self._cache_loaded = True
def _update_cache(self) -> None:
try:
response = requests.get(self.MODELS_URL, timeout=10, verify=self.verify_ssl)
if response.status_code == 200:
self.content = response.json()
try:
self.cache_file.write_text(json.dumps(self.content, indent=2))
except OSError:
pass  # Non-fatal if we can't write the cache
except Exception as ex: # noqa: BLE001
print(f"Failed to fetch OpenRouter model list: {ex}")
try:
self.cache_file.write_text("{}")
except OSError:
pass
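The lookup in ``get_model_info`` above strips the ``openrouter/`` prefix and then matches both the exact route and the route without a ``:suffix`` variant tag such as ``:free``. That candidate logic can be sketched on its own:

```python
def candidate_ids(model: str) -> set:
    # Mirrors the lookup in get_model_info: strip the "openrouter/" prefix,
    # then consider both the exact id and the id without a ":suffix".
    route = model[len("openrouter/"):] if model.startswith("openrouter/") else model
    candidates = {route}
    if ":" in route:
        candidates.add(route.split(":", 1)[0])
    return candidates
```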


@ -19,9 +19,6 @@ Ensure the commit message:{language_instruction}
- Does not exceed 72 characters.
Reply only with the one-line commit message, without any additional text, explanations, or line breaks.
Reply only with the one-line commit message, without any additional text, explanations, \
or line breaks.
"""
# COMMANDS


@ -0,0 +1,7 @@
(list_lit
meta: _*
. (sym_lit name: (sym_name) @ignore)
. (sym_lit name: (sym_name) @name.definition.method)
(#match? @ignore "^def.*"))
(sym_lit name: (sym_name) @name.reference.call)


@ -0,0 +1,10 @@
(class_definition
name: (identifier) @name.definition.class) @definition.class
(function_definition
name: (identifier) @name.definition.function) @definition.function
(function_call
name: (identifier) @name.reference.call) @reference.call
(command (command_name) @name.reference.call) @reference.call


@ -21,3 +21,4 @@ tree-sitter language implementations:
* [https://github.com/tree-sitter/tree-sitter-ruby](https://github.com/tree-sitter/tree-sitter-ruby) — licensed under the MIT License.
* [https://github.com/tree-sitter/tree-sitter-rust](https://github.com/tree-sitter/tree-sitter-rust) — licensed under the MIT License.
* [https://github.com/tree-sitter/tree-sitter-typescript](https://github.com/tree-sitter/tree-sitter-typescript) — licensed under the MIT License.
* [https://github.com/stadelmanma/tree-sitter-fortran](https://github.com/stadelmanma/tree-sitter-fortran) — licensed under the MIT License.


@ -0,0 +1,15 @@
;; derived from: https://github.com/stadelmanma/tree-sitter-fortran
;; License: MIT
(module_statement
(name) @name.definition.class) @definition.class
(function_statement
name: (name) @name.definition.function) @definition.function
(subroutine_statement
name: (name) @name.definition.function) @definition.function
(module_procedure_statement
name: (name) @name.definition.function) @definition.function


@ -0,0 +1,3 @@
(function (variable) @name.definition.function)
(bind (variable) @name.definition.function)
(signature (variable) @name.definition.type)


@ -0,0 +1,60 @@
;; derived from: https://github.com/tree-sitter/tree-sitter-julia
;; License: MIT
(module
name: (identifier) @name.definition.module) @definition.module
(module
name: (scoped_identifier) @name.definition.module) @definition.module
(struct_definition
name: (type_identifier) @name.definition.class) @definition.class
(mutable_struct_definition
name: (type_identifier) @name.definition.class) @definition.class
(abstract_type_declaration
name: (type_identifier) @name.definition.class) @definition.class
(constant_assignment
left: (identifier) @name.definition.class) @definition.class
(function_definition
name: (identifier) @name.definition.function) @definition.function
(function_definition
name: (scoped_identifier) @name.definition.function) @definition.function
(assignment
left: (call_expression
function: (identifier) @name.definition.function)) @definition.function
(method_definition
name: (identifier) @name.definition.method) @definition.method
(macro_definition
name: (identifier) @name.definition.macro) @definition.macro
(macro_call
name: (identifier) @name.reference.call) @reference.call
(call_expression
function: (identifier) @name.reference.call) @reference.call
(call_expression
function: (scoped_identifier) @name.reference.call) @reference.call
(type_expression
name: (type_identifier) @name.reference.type) @reference.type
(constant_assignment
left: (identifier) @name.definition.constant) @definition.constant
(export_statement
(identifier) @name.reference.export) @reference.export
(using_statement
(identifier) @name.reference.module) @reference.module
(import_statement
(identifier) @name.reference.module) @reference.module


@ -0,0 +1,10 @@
(class_definition
name: (identifier) @name.definition.class) @definition.class
(function_definition
name: (identifier) @name.definition.function) @definition.function
(function_call
name: (identifier) @name.reference.call) @reference.call
(command (command_name) @name.reference.call) @reference.call


@ -0,0 +1,3 @@
(FnProto) @name.definition.function
(VarDecl "const" @name.definition.constant)
(VarDecl "var" @name.definition.variable)


@ -21,6 +21,7 @@ import pathspec
from aider import prompts, utils
from .dump import dump # noqa: F401
from .waiting import WaitingSpinner
ANY_GIT_ERROR += [
OSError,
@ -164,7 +165,7 @@ class GitRepo:
- --attribute-author: Modify Author name to "User Name (aider)".
- --attribute-committer: Modify Committer name to "User Name (aider)".
- --attribute-co-authored-by: Add
"Co-authored-by: aider (<model>) <noreply@aider.chat>" trailer to commit message.
"Co-authored-by: aider (<model>) <aider@aider.chat>" trailer to commit message.
Behavior Summary:
@ -209,7 +210,9 @@ class GitRepo:
else:
user_language = None
if coder:
user_language = coder.get_user_language()
user_language = coder.commit_language
if not user_language:
user_language = coder.get_user_language()
commit_message = self.get_commit_message(diffs, context, user_language)
# Retrieve attribute settings, prioritizing coder.args if available
@ -246,9 +249,7 @@ class GitRepo:
model_name = "unknown-model"
if coder and hasattr(coder, "main_model") and coder.main_model.name:
model_name = coder.main_model.name
commit_message_trailer = (
f"\n\nCo-authored-by: aider ({model_name}) <noreply@aider.chat>"
)
commit_message_trailer = f"\n\nCo-authored-by: aider ({model_name}) <aider@aider.chat>"
# Determine if author/committer names should be modified
# Author modification applies only to aider edits.
@ -331,25 +332,35 @@ class GitRepo:
content += diffs
system_content = self.commit_prompt or prompts.commit_system
language_instruction = ""
if user_language:
language_instruction = f"\n- Is written in {user_language}."
system_content = system_content.format(language_instruction=language_instruction)
messages = [
dict(role="system", content=system_content),
dict(role="user", content=content),
]
commit_message = None
for model in self.models:
num_tokens = model.token_count(messages)
max_tokens = model.info.get("max_input_tokens") or 0
if max_tokens and num_tokens > max_tokens:
continue
commit_message = model.simple_send_with_retries(messages)
if commit_message:
break
spinner_text = f"Generating commit message with {model.name}"
with WaitingSpinner(spinner_text):
if model.system_prompt_prefix:
current_system_content = model.system_prompt_prefix + "\n" + system_content
else:
current_system_content = system_content
messages = [
dict(role="system", content=current_system_content),
dict(role="user", content=content),
]
num_tokens = model.token_count(messages)
max_tokens = model.info.get("max_input_tokens") or 0
if max_tokens and num_tokens > max_tokens:
continue
commit_message = model.simple_send_with_retries(messages)
if commit_message:
break # Found a model that could generate the message
if not commit_message:
self.io.tool_error("Failed to generate commit message!")
@ -386,14 +397,20 @@ class GitRepo:
try:
if current_branch_has_commits:
args = ["HEAD", "--"] + list(fnames)
diffs += self.repo.git.diff(*args)
diffs += self.repo.git.diff(*args, stdout_as_string=False).decode(
self.io.encoding, "replace"
)
return diffs
wd_args = ["--"] + list(fnames)
index_args = ["--cached"] + wd_args
diffs += self.repo.git.diff(*index_args)
diffs += self.repo.git.diff(*wd_args)
diffs += self.repo.git.diff(*index_args, stdout_as_string=False).decode(
self.io.encoding, "replace"
)
diffs += self.repo.git.diff(*wd_args, stdout_as_string=False).decode(
self.io.encoding, "replace"
)
return diffs
except ANY_GIT_ERROR as err:
@ -407,7 +424,9 @@ class GitRepo:
args += ["--color=never"]
args += [from_commit, to_commit]
diffs = self.repo.git.diff(*args)
diffs = self.repo.git.diff(*args, stdout_as_string=False).decode(
self.io.encoding, "replace"
)
return diffs
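The `stdout_as_string=False` changes above fetch the raw diff bytes and decode them with `errors="replace"`, so a diff containing bytes invalid in the configured encoding no longer raises. A minimal demonstration of why the `"replace"` handler matters:

```python
# A diff may contain bytes that are not valid UTF-8 (e.g. latin-1 content);
# the "replace" error handler substitutes U+FFFD instead of raising
# UnicodeDecodeError.
raw = b"caf\xe9 diff"  # 0xE9 is latin-1 "é", invalid here as UTF-8
text = raw.decode("utf-8", "replace")
```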


@ -19,7 +19,7 @@ from tqdm import tqdm
from aider.dump import dump
from aider.special import filter_important_files
from aider.utils import Spinner
from aider.waiting import Spinner
# tree_sitter is throwing a FutureWarning
warnings.simplefilter("ignore", category=FutureWarning)
@ -468,10 +468,11 @@ class RepoMap:
mul = 1.0
is_snake = ("_" in ident) and any(c.isalpha() for c in ident)
is_kebab = ("-" in ident) and any(c.isalpha() for c in ident)
is_camel = any(c.isupper() for c in ident) and any(c.islower() for c in ident)
if ident in mentioned_idents:
mul *= 10
if (is_snake or is_camel) and len(ident) >= 8:
if (is_snake or is_kebab or is_camel) and len(ident) >= 8:
mul *= 10
if ident.startswith("_"):
mul *= 0.1
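The change above extends the repo-map ranking boost to kebab-case identifiers alongside snake_case and camelCase. The multiplier logic, pulled out as a standalone sketch of the rules shown in the hunk:

```python
def ident_multiplier(ident: str, mentioned_idents=()) -> float:
    # Sketch of the ranking multiplier above, including the kebab-case boost.
    mul = 1.0
    is_snake = ("_" in ident) and any(c.isalpha() for c in ident)
    is_kebab = ("-" in ident) and any(c.isalpha() for c in ident)
    is_camel = any(c.isupper() for c in ident) and any(c.islower() for c in ident)
    if ident in mentioned_idents:
        mul *= 10
    if (is_snake or is_kebab or is_camel) and len(ident) >= 8:
        mul *= 10
    if ident.startswith("_"):
        mul *= 0.1  # private names are down-weighted
    return mul
```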


@ -1,16 +1,32 @@
{
"deepseek-reasoner": {
"max_tokens": 8192,
"max_input_tokens": 64000,
"max_output_tokens": 8192,
"input_cost_per_token": 0.00000055,
"input_cost_per_token_cache_hit": 0.00000014,
"cache_read_input_token_cost": 0.00000014,
"deepseek/deepseek-reasoner": {
"max_tokens": 64000,
"max_input_tokens": 128000,
"max_output_tokens": 64000,
"input_cost_per_token": 0.00000028,
"input_cost_per_token_cache_hit": 0.000000028,
"cache_read_input_token_cost": 0.000000028,
"cache_creation_input_token_cost": 0.0,
"output_cost_per_token": 0.00000219,
"output_cost_per_token": 0.00000042,
"litellm_provider": "deepseek",
"mode": "chat",
//"supports_function_calling": true,
//"supports_function_calling": true,
"supports_assistant_prefill": true,
"supports_tool_choice": false,
"supports_prompt_caching": true
},
"deepseek/deepseek-chat": {
"max_tokens": 8192,
"max_input_tokens": 128000,
"max_output_tokens": 8192,
"input_cost_per_token": 0.00000028,
"input_cost_per_token_cache_hit": 0.000000028,
"cache_read_input_token_cost": 0.000000028,
"cache_creation_input_token_cost": 0.0,
"output_cost_per_token": 0.00000042,
"litellm_provider": "deepseek",
"mode": "chat",
//"supports_function_calling": true,
"supports_assistant_prefill": true,
//"supports_tool_choice": true,
"supports_prompt_caching": true
@ -49,7 +65,7 @@
},
"openrouter/deepseek/deepseek-chat-v3-0324": {
"max_tokens": 8192,
"max_input_tokens": 64000,
"max_input_tokens": 131072,
"max_output_tokens": 8192,
"input_cost_per_token": 0.00000055,
"input_cost_per_token_cache_hit": 0.00000014,
@ -276,6 +292,61 @@
"supports_tool_choice": true,
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"vertex_ai/gemini-2.5-pro": {
"max_tokens": 65536,
"max_input_tokens": 1048576,
"max_output_tokens": 65536,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 20,
"input_cost_per_token": 0.00000125,
"input_cost_per_token_above_200k_tokens": 0.0000025,
"output_cost_per_token": 0.00001,
"output_cost_per_token_above_200k_tokens": 0.000015,
"litellm_provider": "vertex_ai-language-models",
"mode": "chat",
"rpm": 2000,
"tpm": 8000000,
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_response_schema": true,
"supports_audio_output": false,
"supports_tool_choice": true,
"supported_modalities": ["text", "image", "audio", "video"],
"supported_output_modalities": ["text"],
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"vertex_ai/gemini-2.5-flash": {
"max_tokens": 65536,
"max_input_tokens": 1048576,
"max_output_tokens": 65536,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 20,
"input_cost_per_token": 0.0000003,
"input_cost_per_audio_token": 0.000001,
"output_cost_per_token": 0.0000025,
"litellm_provider": "vertex_ai-language-models",
"mode": "chat",
"rpm": 10000,
"tpm": 8000000,
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_response_schema": true,
"supports_audio_output": false,
"supports_tool_choice": true,
"supported_modalities": ["text", "image", "audio", "video"],
"supported_output_modalities": ["text"],
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"openrouter/google/gemini-2.5-pro-preview-03-25": {
"max_tokens": 8192,
"max_input_tokens": 1048576,
@ -348,6 +419,42 @@
"supports_tool_choice": true,
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"openrouter/google/gemini-2.5": {
"max_tokens": 8192,
"max_input_tokens": 1048576,
"max_output_tokens": 64000,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 30,
"input_cost_per_image": 0,
"input_cost_per_video_per_second": 0,
"input_cost_per_audio_per_second": 0,
"input_cost_per_token": 0,
"input_cost_per_character": 0,
"input_cost_per_token_above_128k_tokens": 0,
"input_cost_per_character_above_128k_tokens": 0,
"input_cost_per_image_above_128k_tokens": 0,
"input_cost_per_video_per_second_above_128k_tokens": 0,
"input_cost_per_audio_per_second_above_128k_tokens": 0,
"output_cost_per_token": 0,
"output_cost_per_character": 0,
"output_cost_per_token_above_128k_tokens": 0,
"output_cost_per_character_above_128k_tokens": 0,
"litellm_provider": "openrouter",
"mode": "chat",
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_audio_input": true,
"supports_video_input": true,
"supports_pdf_input": true,
"supports_response_schema": true,
"supports_tool_choice": true,
"source": "https://cloud.google.com/vertex-ai/generative-ai/pricing"
},
"openrouter/x-ai/grok-3-beta": {
"max_tokens": 131072,
"max_input_tokens": 131072,
@ -432,6 +539,35 @@
"supported_output_modalities": ["text"],
"source": "https://ai.google.dev/gemini-api/docs/models#gemini-2.5-flash-preview"
},
"gemini-2.5-pro-preview-06-05": {
"max_tokens": 65536,
"max_input_tokens": 1048576,
"max_output_tokens": 65536,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 30,
"input_cost_per_audio_token": 0.00000125,
"input_cost_per_token": 0.00000125,
"input_cost_per_token_above_200k_tokens": 0.0000025,
"output_cost_per_token": 0.00001,
"output_cost_per_token_above_200k_tokens": 0.000015,
"litellm_provider": "vertex_ai-language-models",
"mode": "chat",
"supports_reasoning": true,
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_response_schema": true,
"supports_audio_output": false,
"supports_tool_choice": true,
"supported_endpoints": ["/v1/chat/completions", "/v1/completions", "/v1/batch"],
"supported_modalities": ["text", "image", "audio", "video"],
"supported_output_modalities": ["text"],
"source": "https://ai.google.dev/gemini-api/docs/models#gemini-2.5-flash-preview"
},
"gemini/gemini-2.5-pro-preview-05-06": {
"max_tokens": 65536,
"max_input_tokens": 1048576,
@ -461,6 +597,117 @@
"supported_output_modalities": ["text"],
"source": "https://ai.google.dev/gemini-api/docs/pricing#gemini-2.5-pro-preview"
},
"gemini/gemini-2.5-pro-preview-06-05": {
"max_tokens": 65536,
"max_input_tokens": 1048576,
"max_output_tokens": 65536,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 30,
"input_cost_per_audio_token": 0.0000007,
"input_cost_per_token": 0.00000125,
"input_cost_per_token_above_200k_tokens": 0.0000025,
"output_cost_per_token": 0.00001,
"output_cost_per_token_above_200k_tokens": 0.000015,
"litellm_provider": "gemini",
"mode": "chat",
"rpm": 10000,
"tpm": 10000000,
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_response_schema": true,
"supports_audio_output": false,
"supports_tool_choice": true,
"supported_modalities": ["text", "image", "audio", "video"],
"supported_output_modalities": ["text"],
"source": "https://ai.google.dev/gemini-api/docs/pricing#gemini-2.5-pro-preview"
},
"gemini/gemini-2.5-pro": {
"max_tokens": 65536,
"max_input_tokens": 1048576,
"max_output_tokens": 65536,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 20,
"input_cost_per_token": 0.00000125,
"input_cost_per_token_above_200k_tokens": 0.0000025,
"output_cost_per_token": 0.00001,
"output_cost_per_token_above_200k_tokens": 0.000015,
"litellm_provider": "gemini",
"mode": "chat",
"rpm": 2000,
"tpm": 8000000,
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_response_schema": true,
"supports_audio_output": false,
"supports_tool_choice": true,
"supported_modalities": ["text", "image", "audio", "video"],
"supported_output_modalities": ["text"],
"source": "https://ai.google.dev/gemini-api/docs/pricing#gemini-2.5-pro"
},
"gemini/gemini-2.5-flash": {
"max_tokens": 65536,
"max_input_tokens": 1048576,
"max_output_tokens": 65536,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 20,
"input_cost_per_token": 0.00000035,
"input_cost_per_audio_token": 0.000001,
"output_cost_per_token": 0.0000025,
"litellm_provider": "gemini",
"mode": "chat",
"rpm": 10000,
"tpm": 8000000,
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_response_schema": true,
"supports_audio_output": false,
"supports_tool_choice": true,
"supported_modalities": ["text", "image", "audio", "video"],
"supported_output_modalities": ["text"],
"source": "https://ai.google.dev/gemini-api/docs/pricing#gemini-2.5-flash"
},
"gemini/gemini-2.5-flash-lite-preview-06-17": {
"max_tokens": 64000,
"max_input_tokens": 1000000,
"max_output_tokens": 64000,
"max_images_per_prompt": 3000,
"max_videos_per_prompt": 10,
"max_video_length": 1,
"max_audio_length_hours": 8.4,
"max_audio_per_prompt": 1,
"max_pdf_size_mb": 20,
"input_cost_per_token": 0.00000001,
"input_cost_per_audio_token": 0.0000005,
"output_cost_per_token": 0.0000004,
"litellm_provider": "gemini",
"mode": "chat",
"rpm": 30000,
"tpm": 30000000,
"supports_system_messages": true,
"supports_function_calling": true,
"supports_vision": true,
"supports_response_schema": true,
"supports_audio_output": false,
"supports_tool_choice": true,
"supported_modalities": ["text", "image", "audio", "video"],
"supported_output_modalities": ["text"],
"source": "https://ai.google.dev/gemini-api/docs/pricing#gemini-2.5-flash-lite"
},
"together_ai/Qwen/Qwen3-235B-A22B-fp8-tput": {
"input_cost_per_token": 0.0000002,
"output_cost_per_token": 0.0000006,

File diff suppressed because it is too large.


@ -3,13 +3,12 @@ import platform
import subprocess
import sys
import tempfile
import time
from pathlib import Path
import oslex
from rich.console import Console
from aider.dump import dump # noqa: F401
from aider.waiting import Spinner
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".tiff", ".webp", ".pdf"}
@ -212,6 +211,13 @@ def run_install(cmd):
print()
print("Installing:", printable_shell_command(cmd))
# First ensure pip is available
ensurepip_cmd = [sys.executable, "-m", "ensurepip", "--upgrade"]
try:
subprocess.run(ensurepip_cmd, capture_output=True, check=False)
except Exception:
pass # Continue even if ensurepip fails
try:
output = []
process = subprocess.Popen(
@ -251,154 +257,6 @@ def run_install(cmd):
return False, output
class Spinner:
"""
Minimal spinner that scans a single marker back and forth across a line.
The animation is pre-rendered into a list of frames. If the terminal
cannot display unicode the frames are converted to plain ASCII.
"""
last_frame_idx = 0 # Class variable to store the last frame index
def __init__(self, text: str, width: int = 7):
self.text = text
self.start_time = time.time()
self.last_update = 0.0
self.visible = False
self.is_tty = sys.stdout.isatty()
self.console = Console()
# Pre-render the animation frames using pure ASCII so they will
# always display, even on very limited terminals.
ascii_frames = [
"#= ", # C1 C2 space(8)
"=# ", # C2 C1 space(8)
" =# ", # space(1) C2 C1 space(7)
" =# ", # space(2) C2 C1 space(6)
" =# ", # space(3) C2 C1 space(5)
" =# ", # space(4) C2 C1 space(4)
" =# ", # space(5) C2 C1 space(3)
" =# ", # space(6) C2 C1 space(2)
" =# ", # space(7) C2 C1 space(1)
" =#", # space(8) C2 C1
" #=", # space(8) C1 C2
" #= ", # space(7) C1 C2 space(1)
" #= ", # space(6) C1 C2 space(2)
" #= ", # space(5) C1 C2 space(3)
" #= ", # space(4) C1 C2 space(4)
" #= ", # space(3) C1 C2 space(5)
" #= ", # space(2) C1 C2 space(6)
" #= ", # space(1) C1 C2 space(7)
]
self.unicode_palette = "░█"
xlate_from, xlate_to = ("=#", self.unicode_palette)
# If unicode is supported, swap the ASCII chars for nicer glyphs.
if self._supports_unicode():
translation_table = str.maketrans(xlate_from, xlate_to)
frames = [f.translate(translation_table) for f in ascii_frames]
self.scan_char = xlate_to[xlate_from.find("#")]
else:
frames = ascii_frames
self.scan_char = "#"
# Bounce the scanner back and forth.
self.frames = frames
self.frame_idx = Spinner.last_frame_idx # Initialize from class variable
self.width = len(frames[0]) - 2 # number of chars between the brackets
self.animation_len = len(frames[0])
self.last_display_len = 0 # Length of the last spinner line (frame + text)
def _supports_unicode(self) -> bool:
if not self.is_tty:
return False
try:
out = self.unicode_palette
out += "\b" * len(self.unicode_palette)
out += " " * len(self.unicode_palette)
out += "\b" * len(self.unicode_palette)
sys.stdout.write(out)
sys.stdout.flush()
return True
except UnicodeEncodeError:
return False
except Exception:
return False
def _next_frame(self) -> str:
frame = self.frames[self.frame_idx]
self.frame_idx = (self.frame_idx + 1) % len(self.frames)
Spinner.last_frame_idx = self.frame_idx # Update class variable
return frame
def step(self, text: str = None) -> None:
if text is not None:
self.text = text
if not self.is_tty:
return
now = time.time()
if not self.visible and now - self.start_time >= 0.5:
self.visible = True
self.last_update = 0.0
if self.is_tty:
self.console.show_cursor(False)
if not self.visible or now - self.last_update < 0.1:
return
self.last_update = now
frame_str = self._next_frame()
# Determine the maximum width for the spinner line
# Subtract 2 as requested, to leave a margin or prevent cursor wrapping issues
max_spinner_width = self.console.width - 2
if max_spinner_width < 0: # Handle extremely narrow terminals
max_spinner_width = 0
current_text_payload = f" {self.text}"
line_to_display = f"{frame_str}{current_text_payload}"
# Truncate the line if it's too long for the console width
if len(line_to_display) > max_spinner_width:
line_to_display = line_to_display[:max_spinner_width]
len_line_to_display = len(line_to_display)
# Calculate padding to clear any remnants from a longer previous line
padding_to_clear = " " * max(0, self.last_display_len - len_line_to_display)
# Write the spinner frame, text, and any necessary clearing spaces
sys.stdout.write(f"\r{line_to_display}{padding_to_clear}")
self.last_display_len = len_line_to_display
# Calculate number of backspaces to position cursor at the scanner character
scan_char_abs_pos = frame_str.find(self.scan_char)
# Total characters written to the line (frame + text + padding)
total_chars_written_on_line = len_line_to_display + len(padding_to_clear)
# num_backspaces will be non-positive if scan_char_abs_pos is beyond
# total_chars_written_on_line (e.g., if the scan char itself was truncated).
# In such cases, (effectively) 0 backspaces are written,
# and the cursor stays at the end of the line.
num_backspaces = total_chars_written_on_line - scan_char_abs_pos
sys.stdout.write("\b" * num_backspaces)
sys.stdout.flush()
def end(self) -> None:
if self.visible and self.is_tty:
clear_len = self.last_display_len # Use the length of the last displayed content
sys.stdout.write("\r" + " " * clear_len + "\r")
sys.stdout.flush()
self.console.show_cursor(True)
self.visible = False
def find_common_root(abs_fnames):
try:
if len(abs_fnames) == 1:
@ -485,20 +343,3 @@ def printable_shell_command(cmd_list):
str: Shell-escaped command string.
"""
return oslex.join(cmd_list)
def main():
spinner = Spinner("Running spinner...")
try:
for _ in range(100):
time.sleep(0.15)
spinner.step()
print("Success!")
except KeyboardInterrupt:
print("\nInterrupted by user.")
finally:
spinner.end()
if __name__ == "__main__":
main()
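The ASCII-to-unicode swap in `Spinner.__init__` above is just a `str.maketrans` table applied to pre-rendered frames; a minimal standalone sketch of that step:

```python
# Standalone sketch of the frame-translation step in Spinner.__init__:
# frames are pre-rendered in ASCII, then swapped for block glyphs only
# when the terminal is known to handle unicode.
ascii_frames = ["#=        ", "=#        ", " =#       "]
table = str.maketrans("=#", "░█")  # '=' -> light shade, '#' -> full block
unicode_frames = [f.translate(table) for f in ascii_frames]
scan_char = "░█"["=#".find("#")]  # the glyph the cursor parks on

print(unicode_frames[0])  # → "█░        " (trailing spaces preserved)
print(scan_char)          # → "█"
```

Pre-rendering in ASCII and translating once at construction time means the per-frame hot path never touches encoding logic.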


@ -13,10 +13,159 @@ Use it like:
spinner.stop()
"""
import sys
import threading
import time
from rich.console import Console
class Spinner:
"""
Minimal spinner that scans a single marker back and forth across a line.
The animation is pre-rendered into a list of frames. If the terminal
cannot display unicode the frames are converted to plain ASCII.
"""
last_frame_idx = 0 # Class variable to store the last frame index
def __init__(self, text: str, width: int = 7):
self.text = text
self.start_time = time.time()
self.last_update = 0.0
self.visible = False
self.is_tty = sys.stdout.isatty()
self.console = Console()
# Pre-render the animation frames using pure ASCII so they will
# always display, even on very limited terminals.
ascii_frames = [
"#= ", # C1 C2 space(8)
"=# ", # C2 C1 space(8)
" =# ", # space(1) C2 C1 space(7)
" =# ", # space(2) C2 C1 space(6)
" =# ", # space(3) C2 C1 space(5)
" =# ", # space(4) C2 C1 space(4)
" =# ", # space(5) C2 C1 space(3)
" =# ", # space(6) C2 C1 space(2)
" =# ", # space(7) C2 C1 space(1)
" =#", # space(8) C2 C1
" #=", # space(8) C1 C2
" #= ", # space(7) C1 C2 space(1)
" #= ", # space(6) C1 C2 space(2)
" #= ", # space(5) C1 C2 space(3)
" #= ", # space(4) C1 C2 space(4)
" #= ", # space(3) C1 C2 space(5)
" #= ", # space(2) C1 C2 space(6)
" #= ", # space(1) C1 C2 space(7)
]
self.unicode_palette = "░█"
xlate_from, xlate_to = ("=#", self.unicode_palette)
# If unicode is supported, swap the ASCII chars for nicer glyphs.
if self._supports_unicode():
translation_table = str.maketrans(xlate_from, xlate_to)
frames = [f.translate(translation_table) for f in ascii_frames]
self.scan_char = xlate_to[xlate_from.find("#")]
else:
frames = ascii_frames
self.scan_char = "#"
# Bounce the scanner back and forth.
self.frames = frames
self.frame_idx = Spinner.last_frame_idx # Initialize from class variable
self.width = len(frames[0]) - 2 # number of chars between the brackets
self.animation_len = len(frames[0])
self.last_display_len = 0 # Length of the last spinner line (frame + text)
def _supports_unicode(self) -> bool:
if not self.is_tty:
return False
try:
out = self.unicode_palette
out += "\b" * len(self.unicode_palette)
out += " " * len(self.unicode_palette)
out += "\b" * len(self.unicode_palette)
sys.stdout.write(out)
sys.stdout.flush()
return True
except UnicodeEncodeError:
return False
except Exception:
return False
def _next_frame(self) -> str:
frame = self.frames[self.frame_idx]
self.frame_idx = (self.frame_idx + 1) % len(self.frames)
Spinner.last_frame_idx = self.frame_idx # Update class variable
return frame
def step(self, text: str = None) -> None:
if text is not None:
self.text = text
if not self.is_tty:
return
now = time.time()
if not self.visible and now - self.start_time >= 0.5:
self.visible = True
self.last_update = 0.0
if self.is_tty:
self.console.show_cursor(False)
if not self.visible or now - self.last_update < 0.1:
return
self.last_update = now
frame_str = self._next_frame()
# Determine the maximum width for the spinner line
# Subtract 2 as requested, to leave a margin or prevent cursor wrapping issues
max_spinner_width = self.console.width - 2
if max_spinner_width < 0: # Handle extremely narrow terminals
max_spinner_width = 0
current_text_payload = f" {self.text}"
line_to_display = f"{frame_str}{current_text_payload}"
# Truncate the line if it's too long for the console width
if len(line_to_display) > max_spinner_width:
line_to_display = line_to_display[:max_spinner_width]
len_line_to_display = len(line_to_display)
# Calculate padding to clear any remnants from a longer previous line
padding_to_clear = " " * max(0, self.last_display_len - len_line_to_display)
# Write the spinner frame, text, and any necessary clearing spaces
sys.stdout.write(f"\r{line_to_display}{padding_to_clear}")
self.last_display_len = len_line_to_display
# Calculate number of backspaces to position cursor at the scanner character
scan_char_abs_pos = frame_str.find(self.scan_char)
# Total characters written to the line (frame + text + padding)
total_chars_written_on_line = len_line_to_display + len(padding_to_clear)
# num_backspaces will be non-positive if scan_char_abs_pos is beyond
# total_chars_written_on_line (e.g., if the scan char itself was truncated).
# In such cases, (effectively) 0 backspaces are written,
# and the cursor stays at the end of the line.
num_backspaces = total_chars_written_on_line - scan_char_abs_pos
sys.stdout.write("\b" * num_backspaces)
sys.stdout.flush()
def end(self) -> None:
if self.visible and self.is_tty:
clear_len = self.last_display_len # Use the length of the last displayed content
sys.stdout.write("\r" + " " * clear_len + "\r")
sys.stdout.flush()
self.console.show_cursor(True)
self.visible = False
class WaitingSpinner:
@ -30,8 +179,8 @@ class WaitingSpinner:
def _spin(self):
while not self._stop_event.is_set():
self.spinner.step()
time.sleep(self.delay)
self.spinner.end()
def start(self):
@ -43,7 +192,7 @@ class WaitingSpinner:
"""Request the spinner to stop and wait briefly for the thread to exit."""
self._stop_event.set()
if self._thread.is_alive():
self._thread.join(timeout=self.delay)
self.spinner.end()
# Allow use as a context-manager
@ -53,3 +202,20 @@ class WaitingSpinner:
def __exit__(self, exc_type, exc_val, exc_tb):
self.stop()
def main():
spinner = Spinner("Running spinner...")
try:
for _ in range(100):
time.sleep(0.15)
spinner.step()
print("Success!")
except KeyboardInterrupt:
print("\nInterrupted by user.")
finally:
spinner.end()
if __name__ == "__main__":
main()
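The `_spin` loop and its `stop()` handshake above follow the standard `threading.Event` shutdown pattern; a self-contained sketch (the names below are illustrative, not aider's API):

```python
import threading
import time

# Minimal sketch of the stop handshake WaitingSpinner uses: a daemon
# thread loops until an Event is set, then the owner joins briefly.
stop_event = threading.Event()
ticks = []

def spin(delay: float = 0.01) -> None:
    while not stop_event.is_set():
        ticks.append(time.monotonic())  # stand-in for spinner.step()
        time.sleep(delay)

thread = threading.Thread(target=spin, daemon=True)
thread.start()
time.sleep(0.05)   # simulate the slow work the spinner covers
stop_event.set()
thread.join(timeout=1.0)
```

Joining with a timeout tied to the loop's sleep interval keeps shutdown bounded even when the worker is caught mid-sleep, which is the motivation for the `timeout=self.delay` change in this diff.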


@ -24,7 +24,97 @@ cog.out(text)
]]]-->
### main branch
### Aider v0.86.0
- Expanded GPT-5 model support across family variants and providers (OpenAI, Azure, OpenRouter), including dated and chat/mini/nano variants.
- Aider wrote 88% of the code in this release.
### Aider v0.85.5
- Enforced diff edit format for GPT-5 models.
- Added support for the reasoning_effort setting for GPT-5 models.
- Fixed model detection to correctly apply GPT-5 settings to versioned names (gpt-5 and gpt-5-2025-08-07).
### Aider v0.85.4
- Added support for openai/gpt-5.
- Fixed analytics to support the latest PostHog SDK event-capture API.
- Disabled temperature when using GPT-5 models for more deterministic outputs.
### Aider v0.85.3
- Bumped dependencies to pick up latest litellm==1.75.0.
### Aider v0.85.2
- Added support for Grok-4 via `xai/grok-4` and `openrouter/x-ai/grok-4` model names.
- Added support for `gemini/gemini-2.5-flash-lite-preview-06-17` model, by Tamir Zahavi-Brunner.
- `/clear` now prints “All chat history cleared.” so you know it worked, by Zexin Yuan.
- `/undo` output now shows only the first line of each commit message, making it easier to read.
- Fixed an issue where new settings for an existing model didn't replace the old ones, by Andrew Grigorev.
- Added support for `openrouter/moonshotai/kimi-k2` model, by Jack Harrington.
### Aider v0.85.1
- Display model announcements with no-arg `/model` command.
### Aider v0.85.0
- Support for Responses API models like o1-pro, o3-pro.
- Updated pricing for o3.
- Added support for new Gemini models including `gemini-2.5-pro`, `gemini-2.5-flash`, and `gemini-2.5-pro-preview-06-05` with thinking tokens support.
- Updated model aliases: `flash` now points to `gemini-2.5-flash` and `gemini` now points to `gemini-2.5-pro`.
- Added `--add-gitignore-files` flag to enable adding files listed in .gitignore to Aider's editing scope, by omarcinkonis.
- Added `--commit-language` option to specify the language for commit messages, by Kyosuke Takayama.
- Enhanced thinking tokens support: can now be disabled by setting to 0, and improved help text with examples.
- Added MATLAB language support for repository maps, by Matthew Tofano.
- Added support for OpenAI o3-pro model across multiple providers.
- Improved GitHub Copilot token handling with better validation and error messages, by Vincent Taverna and Sebastian Estrella.
- Fixed encoding issues in git diff output and LLM history logging.
- Enhanced commit message generation to use system prompt prefixes, by Luke Reeves.
- Improved inline code rendering in Rich markdown output, by Vamsi Talupula.
- Fixed Vertex AI model name prefixes in settings, by Wietse Venema.
- Improved `/read-only` command to resolve literal paths correctly, by Matteo Landi.
- Skip expensive file tracking operations when `--skip-sanity-check-repo` is enabled for better performance, by Makar Ivashko.
- Ensure pip is available before package installation.
- Auto-create parent directories for chat history files to prevent startup errors, by contributor.
- Fixed search block regex to accept optional closing tags when working with HTML content, by Mathis Beer.
- Co-authored-by attribution is now enabled by default for commit messages.
- Added Clojure language support for repository maps, by Garrett Hopper.
- Added custom PostHog analytics configuration options with `--analytics-posthog-host` and `--analytics-posthog-project-api-key` flags, by Vasil Markoukin.
- Optimized chat history summarization performance, by jayeshthk.
- Improved kebab-case identifier recognition in repository maps for better code analysis.
- Increased max tokens for Deepseek models to 65536 for better performance.
- Aider wrote 21% of the code in this release.
### Aider v0.84.0
- Added support for new Claude models including the Sonnet 4 and Opus 4 series (e.g., `claude-sonnet-4-20250514`,
`claude-opus-4-20250514`) across various providers. The default `sonnet` and `opus` aliases were updated to these newer
versions.
- Added support for the `vertex_ai/gemini-2.5-flash-preview-05-20` model.
- Fixed OpenRouter token cost calculation for improved accuracy.
- Updated default OpenRouter models during onboarding to `deepseek/deepseek-r1:free` for the free tier and
`anthropic/claude-sonnet-4` for paid tiers.
- Automatically refresh GitHub Copilot tokens when used as OpenAI API keys, by Lih Chen.
- Aider wrote 79% of the code in this release.
### Aider v0.83.2
- Bumped configargparse to 1.7.1 as 1.7 was pulled.
- Added shell tab completion for file path arguments (by saviour) and for `--edit-format`/`--editor-edit-format` options.
- Improved OpenRouter model metadata handling by introducing a local cache, increasing reliability and performance.
- The `/settings` command now displays detailed metadata for active main, editor, and weak models.
- Fixed an issue where files explicitly added via the command line were not correctly ignored if listed in `.gitignore`.
- Improved automatic commit messages by providing more context during their generation, by wangboxue.
### Aider v0.83.1
- Improved user language detection by correctly normalizing hyphenated language codes (e.g., `en-US` to `en`) and enhancing the validation of locale results.
- Prevented Aider from instructing the LLM to reply in 'C' or 'POSIX' when these are detected as the system locale.
- Displayed a spinner with the model name when generating commit messages.
### Aider v0.83.0
- Added support for `gemini-2.5-pro-preview-05-06` models.
- Added support for `qwen3-235b` models.
@ -428,7 +518,7 @@ cog.out(text)
- [Aider works with LLM web chat UIs](https://aider.chat/docs/usage/copypaste.html).
- New `--copy-paste` mode.
- New `/copy-context` command.
- [Set API keys and other environment variables for all providers from command line or YAML conf file](https://aider.chat/docs/config/aider_conf.html#storing-llm-keys).
- New `--api-key provider=key` setting.
- New `--set-env VAR=value` setting.
- Added bash and zsh support to `--watch-files`.
@ -596,7 +686,7 @@ cog.out(text)
### Aider v0.59.1
- Check for obsolete `yes: true` in YAML config, show helpful error.
- Model settings for openrouter/anthropic/claude-3.5-sonnet:beta
### Aider v0.59.0
@ -606,7 +696,7 @@ cog.out(text)
- Still auto-completes the full paths of the repo files like `/add`.
- Now supports globs like `src/**/*.py`
- Renamed `--yes` to `--yes-always`.
- Now uses `AIDER_YES_ALWAYS` env var and `yes-always:` YAML key.
- Existing YAML and .env files will need to be updated.
- Can still abbreviate to `--yes` on the command line.
- Config file now uses standard YAML list syntax with ` - list entries`, one per line.
@ -813,7 +903,7 @@ cog.out(text)
- Use `--map-refresh <always|files|manual|auto>` to configure.
- Improved cost estimate logic for caching.
- Improved editing performance on Jupyter Notebook `.ipynb` files.
- Show which config YAML file is loaded with `--verbose`.
- Bumped dependency versions.
- Bugfix: properly load `.aider.models.metadata.json` data.
- Bugfix: Using `--msg /ask ...` caused an exception.

File diff suppressed because it is too large


@ -1093,32 +1093,6 @@
seconds_per_case: 12.0
total_cost: 0.4281
- dirname: 2025-04-16-21-20-55--o3-high-diff-temp0-exsys
test_cases: 225
model: o3 (high)
edit_format: diff
commit_hash: 24805ff-dirty
pass_rate_1: 36.9
pass_rate_2: 79.6
pass_num_1: 83
pass_num_2: 179
percent_cases_well_formed: 95.1
error_outputs: 11
num_malformed_responses: 11
num_with_malformed_responses: 11
user_asks: 110
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 2
total_tests: 225
command: aider --model o3
date: 2025-04-16
versions: 0.82.1.dev
seconds_per_case: 113.8
total_cost: 111.0325
- dirname: 2025-04-16-22-01-58--o4-mini-high-diff-exsys
test_cases: 225
model: o4-mini (high)
@ -1145,34 +1119,6 @@
seconds_per_case: 176.5
total_cost: 19.6399
- dirname: 2025-04-17-01-20-35--o3-mini-high-diff-arch
test_cases: 225
model: o3 (high) + gpt-4.1
edit_format: architect
commit_hash: 80909e1-dirty
editor_model: gpt-4.1
editor_edit_format: editor-diff
pass_rate_1: 36.0
pass_rate_2: 82.7
pass_num_1: 81
pass_num_2: 186
percent_cases_well_formed: 100.0
error_outputs: 9
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 166
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
test_timeouts: 0
total_tests: 225
command: aider --model o3 --architect
date: 2025-04-17
versions: 0.82.2.dev
seconds_per_case: 110.0
total_cost: 69.2921
- dirname: 2025-04-19-14-43-04--o4-mini-patch
test_cases: 225
model: openhands-lm-32b-v0.1
@ -1279,30 +1225,632 @@
seconds_per_case: 372.2
total_cost: 0.7603
- dirname: 2025-05-09-17-02-02--qwen3-235b-a22b.unthink_16k_diff
test_cases: 225
model: Qwen3 235B A22B diff, no think, Alibaba API
edit_format: diff
commit_hash: 91d7fbd-dirty
pass_rate_1: 28.9
pass_rate_2: 59.6
pass_num_1: 65
pass_num_2: 134
percent_cases_well_formed: 92.9
error_outputs: 22
num_malformed_responses: 22
num_with_malformed_responses: 16
user_asks: 111
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2816192
completion_tokens: 342062
test_timeouts: 1
total_tests: 225
command: aider --model openai/qwen3-235b-a22b
date: 2025-05-09
versions: 0.82.4.dev
seconds_per_case: 45.4
total_cost: 0.0000
- dirname: 2025-05-24-21-17-54--sonnet4-diff-exuser
test_cases: 225
model: claude-sonnet-4-20250514 (no thinking)
edit_format: diff
commit_hash: ef3f8bb-dirty
pass_rate_1: 20.4
pass_rate_2: 56.4
pass_num_1: 46
pass_num_2: 127
percent_cases_well_formed: 98.2
error_outputs: 6
num_malformed_responses: 4
num_with_malformed_responses: 4
user_asks: 129
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 1
prompt_tokens: 3460663
completion_tokens: 433373
test_timeouts: 7
total_tests: 225
command: aider --model claude-sonnet-4-20250514
date: 2025-05-24
versions: 0.83.3.dev
seconds_per_case: 29.8
total_cost: 15.8155
- dirname: 2025-05-24-22-10-36--sonnet4-diff-exuser-think32k
test_cases: 225
model: claude-sonnet-4-20250514 (32k thinking)
edit_format: diff
commit_hash: e3cb907
thinking_tokens: 32000
pass_rate_1: 25.8
pass_rate_2: 61.3
pass_num_1: 58
pass_num_2: 138
percent_cases_well_formed: 97.3
error_outputs: 10
num_malformed_responses: 10
num_with_malformed_responses: 6
user_asks: 111
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2863068
completion_tokens: 1271074
test_timeouts: 6
total_tests: 225
command: aider --model claude-sonnet-4-20250514
date: 2025-05-24
versions: 0.83.3.dev
seconds_per_case: 79.9
total_cost: 26.5755
- dirname: 2025-05-25-19-57-20--opus4-diff-exuser
test_cases: 225
model: claude-opus-4-20250514 (no think)
edit_format: diff
commit_hash: 9ef3211
pass_rate_1: 32.9
pass_rate_2: 70.7
pass_num_1: 74
pass_num_2: 159
percent_cases_well_formed: 98.7
error_outputs: 3
num_malformed_responses: 3
num_with_malformed_responses: 3
user_asks: 105
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2671437
completion_tokens: 380717
test_timeouts: 3
total_tests: 225
command: aider --model claude-opus-4-20250514
date: 2025-05-25
versions: 0.83.3.dev
seconds_per_case: 42.5
total_cost: 68.6253
- dirname: 2025-05-25-20-40-51--opus4-diff-exuser
test_cases: 225
model: claude-opus-4-20250514 (32k thinking)
edit_format: diff
commit_hash: 9ef3211
thinking_tokens: 32000
pass_rate_1: 37.3
pass_rate_2: 72.0
pass_num_1: 84
pass_num_2: 162
percent_cases_well_formed: 97.3
error_outputs: 10
num_malformed_responses: 6
num_with_malformed_responses: 6
user_asks: 97
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2567514
completion_tokens: 363142
test_timeouts: 4
total_tests: 225
command: aider --model claude-opus-4-20250514
date: 2025-05-25
versions: 0.83.3.dev
seconds_per_case: 44.1
total_cost: 65.7484
- dirname: 2025-05-26-15-56-31--flash25-05-20-24k-think # dirname is misleading
test_cases: 225
model: gemini-2.5-flash-preview-05-20 (no think)
edit_format: diff
commit_hash: 214b811-dirty
thinking_tokens: 0 # <-- no thinking
pass_rate_1: 20.9
pass_rate_2: 44.0
pass_num_1: 47
pass_num_2: 99
percent_cases_well_formed: 93.8
error_outputs: 16
num_malformed_responses: 16
num_with_malformed_responses: 14
user_asks: 79
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 5512458
completion_tokens: 514145
test_timeouts: 4
total_tests: 225
command: aider --model gemini/gemini-2.5-flash-preview-05-20
date: 2025-05-26
versions: 0.83.3.dev
seconds_per_case: 12.2
total_cost: 1.1354
- dirname: 2025-05-25-22-58-44--flash25-05-20-24k-think
test_cases: 225
model: gemini-2.5-flash-preview-05-20 (24k think)
edit_format: diff
commit_hash: a8568c3-dirty
thinking_tokens: 24576
pass_rate_1: 26.2
pass_rate_2: 55.1
pass_num_1: 59
pass_num_2: 124
percent_cases_well_formed: 95.6
error_outputs: 15
num_malformed_responses: 15
num_with_malformed_responses: 10
user_asks: 101
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 3666792
completion_tokens: 2703162
test_timeouts: 4
total_tests: 225
command: aider --model gemini/gemini-2.5-flash-preview-05-20
date: 2025-05-25
versions: 0.83.3.dev
seconds_per_case: 53.9
total_cost: 8.5625
- dirname: 2025-06-06-18-38-56--gemini0605-diff-fenced
test_cases: 225
model: gemini-2.5-pro-preview-06-05 (default think)
edit_format: diff-fenced
commit_hash: 4c161f9-dirty
pass_rate_1: 44.9
pass_rate_2: 79.1
pass_num_1: 101
pass_num_2: 178
percent_cases_well_formed: 100.0
error_outputs: 4
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 105
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 4
prompt_tokens: 2751296
completion_tokens: 4142197
test_timeouts: 1
total_tests: 225
command: aider --model gemini/gemini-2.5-pro-preview-06-05
date: 2025-06-06
versions: 0.84.1.dev
seconds_per_case: 175.2
total_cost: 45.5961
- dirname: 2025-06-06-16-36-21--gemini0605-32k-think-diff-fenced
test_cases: 225
model: gemini-2.5-pro-preview-06-05 (32k think)
edit_format: diff-fenced
commit_hash: f827f22
thinking_tokens: 32768
pass_rate_1: 46.2
pass_rate_2: 83.1
pass_num_1: 104
pass_num_2: 187
percent_cases_well_formed: 99.6
error_outputs: 1
num_malformed_responses: 1
num_with_malformed_responses: 1
user_asks: 112
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2719961
completion_tokens: 4648227
test_timeouts: 0
total_tests: 225
command: aider --model gemini/gemini-2.5-pro-preview-06-05 --thinking-tokens 32k
date: 2025-06-06
versions: 0.84.1.dev
seconds_per_case: 200.3
total_cost: 49.8822
- dirname: 2025-06-06-16-47-07--r1-diff
test_cases: 224
model: DeepSeek R1 (0528)
edit_format: diff
commit_hash: 4c161f9-dirty
pass_rate_1: 34.4
pass_rate_2: 71.4
pass_num_1: 77
pass_num_2: 160
percent_cases_well_formed: 94.6
error_outputs: 28
num_malformed_responses: 15
num_with_malformed_responses: 12
user_asks: 105
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2644169
completion_tokens: 1842168
test_timeouts: 2
total_tests: 225
command: aider --model deepseek/deepseek-reasoner
date: 2025-06-06
versions: 0.84.1.dev
seconds_per_case: 716.6
total_cost: 4.8016
- dirname: 2025-06-25-21-04-24--o3-price-reduction-high
test_cases: 225
model: o3 (high)
edit_format: diff
commit_hash: c48fea6
reasoning_effort: high
pass_rate_1: 40.0
pass_rate_2: 81.3
pass_num_1: 90
pass_num_2: 183
percent_cases_well_formed: 94.7
error_outputs: 25
num_malformed_responses: 23
num_with_malformed_responses: 12
user_asks: 116
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 1
prompt_tokens: 3148932
completion_tokens: 2047615
test_timeouts: 2
total_tests: 225
command: aider --model o3 --reasoning-effort high
date: 2025-06-25
versions: 0.84.1.dev
seconds_per_case: 197.3
total_cost: 21.2259
- dirname: 2025-06-25-20-30-16--o3-price-reduction
test_cases: 225
model: o3
edit_format: diff
commit_hash: c48fea6
pass_rate_1: 40.9
pass_rate_2: 76.9
pass_num_1: 92
pass_num_2: 173
percent_cases_well_formed: 93.8
error_outputs: 22
num_malformed_responses: 22
num_with_malformed_responses: 14
user_asks: 108
lazy_comments: 2
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2893189
completion_tokens: 1154767
test_timeouts: 1
total_tests: 225
command: aider --model o3
date: 2025-06-25
versions: 0.84.1.dev
seconds_per_case: 101.7
total_cost: 13.7517
- dirname: 2025-06-27-23-53-57--o3-mini-high-diff-arch
test_cases: 224
model: o3 (high) + gpt-4.1
edit_format: architect
commit_hash: 4f4f00f-dirty
editor_model: gpt-4.1
editor_edit_format: editor-diff
reasoning_effort: high
pass_rate_1: 34.8
pass_rate_2: 78.2
pass_num_1: 78
pass_num_2: 176
percent_cases_well_formed: 100.0
error_outputs: 18
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 172
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 1
prompt_tokens: 1306877
completion_tokens: 1327154
test_timeouts: 1
total_tests: 225
command: aider --model o3
date: 2025-06-27
versions: 0.85.1.dev
seconds_per_case: 121.8
total_cost: 17.5518
- dirname: 2025-06-28-00-38-18--o3-pro-high
test_cases: 225
model: o3-pro (high)
edit_format: diff
commit_hash: 5318380
reasoning_effort: high
pass_rate_1: 43.6
pass_rate_2: 84.9
pass_num_1: 98
pass_num_2: 191
percent_cases_well_formed: 97.8
error_outputs: 20
num_malformed_responses: 8
num_with_malformed_responses: 5
user_asks: 100
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2372636
completion_tokens: 1235902
test_timeouts: 1
total_tests: 225
command: aider --model o3-pro
date: 2025-06-28
versions: 0.85.1.dev
seconds_per_case: 449.0
total_cost: 146.3249
- dirname: 2025-07-11-19-37-40--xai-or-grok4-high
test_cases: 225
model: grok-4 (high)
edit_format: diff
commit_hash: f7870b6-dirty
reasoning_effort: high
pass_rate_1: 40.9
pass_rate_2: 79.6
pass_num_1: 92
pass_num_2: 179
percent_cases_well_formed: 97.3
error_outputs: 11
num_malformed_responses: 8
num_with_malformed_responses: 6
user_asks: 133
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2815347
completion_tokens: 3411480
test_timeouts: 0
total_tests: 225
command: aider --model openrouter/x-ai/grok-4
date: 2025-07-11
versions: 0.85.2.dev
seconds_per_case: 403.2
total_cost: 59.6182
- dirname: 2025-07-17-17-41-54--kimi-k2-diff-or-pricing
test_cases: 225
model: Kimi K2
edit_format: diff
commit_hash: 915ebff-dirty
pass_rate_1: 20.4
pass_rate_2: 59.1
pass_num_1: 46
pass_num_2: 133
percent_cases_well_formed: 92.9
error_outputs: 19
num_malformed_responses: 19
num_with_malformed_responses: 16
user_asks: 61
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2355141
completion_tokens: 363846
test_timeouts: 4
total_tests: 225
command: aider --model openrouter/moonshotai/kimi-k2
date: 2025-07-17
versions: 0.85.3.dev
seconds_per_case: 67.6
total_cost: 1.2357
- dirname: 2025-08-06-04-54-48--gpt-oss-120b-high-polyglot
test_cases: 225
model: gpt-oss-120b (high)
edit_format: diff
commit_hash: 1af0e59
pass_rate_1: 13.8
pass_rate_2: 41.8
pass_num_1: 31
pass_num_2: 94
percent_cases_well_formed: 79.1
error_outputs: 95
num_malformed_responses: 77
num_with_malformed_responses: 47
user_asks: 142
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 3123768
completion_tokens: 856495
test_timeouts: 4
total_tests: 225
command: aider --model openrouter/openai/gpt-oss-120b --reasoning-effort high
date: 2025-08-06
versions: 0.85.3.dev
seconds_per_case: 35.5
total_cost: 0.7406
- dirname: 2025-08-23-15-47-21--gpt-5-high
test_cases: 225
model: gpt-5 (high)
edit_format: diff
commit_hash: 32faf82
reasoning_effort: high
pass_rate_1: 52.0
pass_rate_2: 88.0
pass_num_1: 117
pass_num_2: 198
percent_cases_well_formed: 91.6
error_outputs: 23
num_malformed_responses: 22
num_with_malformed_responses: 19
user_asks: 96
lazy_comments: 3
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2675561
completion_tokens: 2623429
test_timeouts: 3
total_tests: 225
command: aider --model openai/gpt-5
date: 2025-08-23
versions: 0.86.2.dev
seconds_per_case: 194.0
total_cost: 29.0829
- dirname: 2025-08-25-13-23-27--gpt-5-medium
test_cases: 225
model: gpt-5 (medium)
edit_format: diff
commit_hash: 32faf82
reasoning_effort: medium
pass_rate_1: 49.8
pass_rate_2: 86.7
pass_num_1: 112
pass_num_2: 195
percent_cases_well_formed: 88.4
error_outputs: 40
num_malformed_responses: 40
num_with_malformed_responses: 26
user_asks: 102
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2827261
completion_tokens: 1468799
test_timeouts: 0
total_tests: 225
command: aider --model openai/gpt-5
date: 2025-08-25
versions: 0.86.2.dev
seconds_per_case: 118.7
total_cost: 17.6930
- dirname: 2025-08-25-14-16-37--gpt-5-low
test_cases: 225
model: gpt-5 (low)
edit_format: diff
commit_hash: 32faf82
reasoning_effort: low
pass_rate_1: 43.1
pass_rate_2: 81.3
pass_num_1: 97
pass_num_2: 183
percent_cases_well_formed: 86.7
error_outputs: 46
num_malformed_responses: 46
num_with_malformed_responses: 30
user_asks: 113
lazy_comments: 1
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2534059
completion_tokens: 779568
test_timeouts: 1
total_tests: 225
command: aider --model openai/gpt-5
date: 2025-08-25
versions: 0.86.2.dev
seconds_per_case: 62.4
total_cost: 10.3713
- dirname: 2025-10-03-09-45-34--deepseek-v3.2-reasoner
test_cases: 225
model: DeepSeek-V3.2-Exp (Reasoner)
edit_format: diff
commit_hash: cbb5376
pass_rate_1: 39.6
pass_rate_2: 74.2
pass_num_1: 89
pass_num_2: 167
percent_cases_well_formed: 97.3
error_outputs: 8
num_malformed_responses: 6
num_with_malformed_responses: 6
user_asks: 67
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 1
prompt_tokens: 2191446
completion_tokens: 1645129
test_timeouts: 1
total_tests: 225
command: aider --model deepseek/deepseek-reasoner
date: 2025-10-03
versions: 0.86.2.dev
seconds_per_case: 291.2
total_cost: 1.3045
- dirname: 2025-10-03-09-21-36--deepseek-v3.2-chat
test_cases: 225
model: DeepSeek-V3.2-Exp (Chat)
edit_format: diff
commit_hash: cbb5376
pass_rate_1: 38.7
pass_rate_2: 70.2
pass_num_1: 87
pass_num_2: 158
percent_cases_well_formed: 98.2
error_outputs: 6
num_malformed_responses: 4
num_with_malformed_responses: 4
user_asks: 60
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 1
prompt_tokens: 2266868
completion_tokens: 573477
test_timeouts: 4
total_tests: 225
command: aider --model deepseek/deepseek-chat
date: 2025-10-03
versions: 0.86.2.dev
seconds_per_case: 104.0
total_cost: 0.8756


@ -213,3 +213,60 @@
versions: 0.82.4.dev
seconds_per_case: 635.2
total_cost: 0.0000
- dirname: 2025-05-09-17-02-02--qwen3-235b-a22b.unthink_16k_diff
test_cases: 225
model: Qwen3 235B A22B diff, no think, via official Alibaba API
edit_format: diff
commit_hash: 91d7fbd-dirty
pass_rate_1: 28.9
pass_rate_2: 59.6
pass_num_1: 65
pass_num_2: 134
percent_cases_well_formed: 92.9
error_outputs: 22
num_malformed_responses: 22
num_with_malformed_responses: 16
user_asks: 111
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2816192
completion_tokens: 342062
test_timeouts: 1
total_tests: 225
command: aider --model openai/qwen3-235b-a22b
date: 2025-05-09
versions: 0.82.4.dev
seconds_per_case: 45.4
total_cost: 0.0000
- dirname: 2025-05-09-23-01-22--qwen3-235b-a22b.unthink_16k_whole
test_cases: 225
model: Qwen3 235B A22B whole, no think, via official Alibaba API
edit_format: whole
commit_hash: 425fb6d
pass_rate_1: 26.7
pass_rate_2: 61.8
pass_num_1: 60
pass_num_2: 139
percent_cases_well_formed: 100.0
error_outputs: 0
num_malformed_responses: 0
num_with_malformed_responses: 0
user_asks: 175
lazy_comments: 0
syntax_errors: 0
indentation_errors: 0
exhausted_context_windows: 0
prompt_tokens: 2768173
completion_tokens: 384000
test_timeouts: 1
total_tests: 225
command: aider --model openai/qwen3-235b-a22b
date: 2025-05-09
versions: 0.82.4.dev
seconds_per_case: 50.8
total_cost: 0.0000


@ -32,6 +32,11 @@
.side-bar {
background: linear-gradient(135deg, #ffffff 0%, rgba(20, 176, 20, 0.01) 25%, rgba(20, 176, 20, 0.04) 40%, rgba(220, 230, 255, 0.4) 60%, rgba(205, 218, 255, 0.4) 80%, #F5F6FA 100%);
}
@media (max-width: 50em) {
.ea-ad--sidebar { display: none; }
.ea-ad--mobile { display: block; }
}
</style>
<link rel="alternate" type="application/rss+xml" title="RSS Feed" href="{{ site.url }}/feed.xml">
<link rel="preconnect" href="https://fonts.gstatic.com">


@ -1,7 +1,7 @@
document.addEventListener('DOMContentLoaded', function() {
let currentMode = 'view'; // 'view', 'select', 'detail'
let selectedRows = new Set(); // Store indices of selected rows
const MAX_DISPLAY_COST_CAP = 75; // Define the constant here
const MAX_DISPLAY_COST_CAP = 200; // Define the constant here
const allMainRows = document.querySelectorAll('tr[id^="main-row-"]');
const allDetailsRows = document.querySelectorAll('tr[id^="details-"]');


@ -15,12 +15,12 @@ nav_exclude: true
I recently wanted to draw a graph showing how LLM code editing skill has been
changing over time as new models have been released by OpenAI, Anthropic and others.
I have all the
[data in a yaml file](https://github.com/Aider-AI/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
[data in a YAML file](https://github.com/Aider-AI/aider/blob/main/website/_data/edit_leaderboard.yml) that is used to render
[aider's LLM leaderboards](https://aider.chat/docs/leaderboards/).
Below is the aider chat transcript, which shows:
- I launch aider with the yaml file, a file with other plots I've done recently (so GPT can crib the style) and an empty file called `over_time.py`.
- I launch aider with the YAML file, a file with other plots I've done recently (so GPT can crib the style) and an empty file called `over_time.py`.
- Then I ask GPT to draw the scatterplot I want.
- I run the resulting script and share the error output with GPT so it can fix a small bug.
- I ask it to color the points for GPT-4 and GPT-3.5 family models differently, to better see trends within those model families.
@ -28,7 +28,7 @@ Below is the aider chat transcript, which shows:
- I work through a series of other small style changes, like changing fonts and the graph border.
In the end I have the graph, but I also have the python code in my repo.
So I can update this graph easily whenever I add new entries to the yaml data file.
So I can update this graph easily whenever I add new entries to the YAML data file.
## Aider chat transcript


@ -277,6 +277,31 @@ const LEADERBOARD_CUSTOM_TITLE = "Qwen3 results on the aider polyglot benchmark"
</script>
## No think, via official Alibaba API
These results were obtained running against `https://dashscope.aliyuncs.com/compatible-mode/v1`
with no thinking.
```bash
export OPENAI_API_BASE=https://dashscope.aliyuncs.com/compatible-mode/v1
export OPENAI_API_KEY=<key>
```
```yaml
- name: openai/qwen3-235b-a22b
use_temperature: 0.7
streaming: false
extra_params:
stream: false
max_tokens: 16384
top_p: 0.8
top_k: 20
temperature: 0.7
enable_thinking: false
extra_body:
enable_thinking: false
```
## OpenRouter only TogetherAI, recommended /no_think settings
These results were obtained with the

File diff suppressed because it is too large.


@ -4,7 +4,7 @@
# Place in your home dir, or at the root of your git repo.
##########################################################
# Note: You can only put OpenAI and Anthropic API keys in the yaml
# Note: You can only put OpenAI and Anthropic API keys in the YAML
# config file. Keys for all APIs can be stored in a .env file
# https://aider.chat/docs/config/dotenv.html
@ -83,7 +83,7 @@
## Set the reasoning_effort API parameter (default: not set)
#reasoning-effort: xxx
## Set the thinking token budget for models that support it (default: not set)
## Set the thinking token budget for models that support it. Use 0 to disable. (default: not set)
#thinking-tokens: xxx
## Verify the SSL cert when connecting to models (default: True)
@ -212,6 +212,9 @@
## Enable/disable adding .aider* to .gitignore (default: True)
#gitignore: true
## Enable/disable the addition of files listed in .gitignore to Aider's editing scope.
#add-gitignore-files: false
## Specify the aider ignore file (default: .aiderignore in git root)
#aiderignore: .aiderignore
@ -236,8 +239,8 @@
## Prefix all commit messages with 'aider: ' (default: False)
#attribute-commit-message-committer: false
## Attribute aider edits using the Co-authored-by trailer in the commit message (default: False). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
#attribute-co-authored-by: false
## Attribute aider edits using the Co-authored-by trailer in the commit message (default: True). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
#attribute-co-authored-by: true
## Enable/disable git pre-commit hooks with --no-verify (default: False)
#git-commit-verify: false
@ -295,6 +298,12 @@
## Permanently disable analytics
#analytics-disable: false
## Send analytics to custom PostHog instance
#analytics-posthog-host: xxx
## Send analytics to custom PostHog project
#analytics-posthog-project-api-key: xxx
############
# Upgrading:
@ -386,6 +395,9 @@
## Specify the language to use in the chat (default: None, uses system settings)
#chat-language: xxx
## Specify the language to use in the commit message (default: None, user language)
#commit-language: xxx
## Always say yes to every confirmation
#yes-always: false


@ -72,7 +72,7 @@
## Set the reasoning_effort API parameter (default: not set)
#AIDER_REASONING_EFFORT=
## Set the thinking token budget for models that support it (default: not set)
## Set the thinking token budget for models that support it. Use 0 to disable. (default: not set)
#AIDER_THINKING_TOKENS=
## Verify the SSL cert when connecting to models (default: True)
@ -201,6 +201,9 @@
## Enable/disable adding .aider* to .gitignore (default: True)
#AIDER_GITIGNORE=true
## Enable/disable the addition of files listed in .gitignore to Aider's editing scope.
#AIDER_ADD_GITIGNORE_FILES=false
## Specify the aider ignore file (default: .aiderignore in git root)
#AIDER_AIDERIGNORE=.aiderignore
@ -225,8 +228,8 @@
## Prefix all commit messages with 'aider: ' (default: False)
#AIDER_ATTRIBUTE_COMMIT_MESSAGE_COMMITTER=false
## Attribute aider edits using the Co-authored-by trailer in the commit message (default: False). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
#AIDER_ATTRIBUTE_CO_AUTHORED_BY=false
## Attribute aider edits using the Co-authored-by trailer in the commit message (default: True). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
#AIDER_ATTRIBUTE_CO_AUTHORED_BY=true
## Enable/disable git pre-commit hooks with --no-verify (default: False)
#AIDER_GIT_COMMIT_VERIFY=false
@ -279,6 +282,12 @@
## Permanently disable analytics
#AIDER_ANALYTICS_DISABLE=false
## Send analytics to custom PostHog instance
#AIDER_ANALYTICS_POSTHOG_HOST=
## Send analytics to custom PostHog project
#AIDER_ANALYTICS_POSTHOG_PROJECT_API_KEY=
############
# Upgrading:
@ -357,6 +366,9 @@
## Specify the language to use in the chat (default: None, uses system settings)
#AIDER_CHAT_LANGUAGE=
## Specify the language to use in the commit message (default: None, user language)
#AIDER_COMMIT_LANGUAGE=
## Always say yes to every confirmation
#AIDER_YES_ALWAYS=

File diff suppressed because it is too large.


@ -1,7 +1,7 @@
---
parent: Configuration
nav_order: 15
description: How to configure aider with a yaml config file.
description: How to configure aider with a YAML config file.
---
# YAML config file
@ -58,7 +58,7 @@ cog.outl("```")
# Place in your home dir, or at the root of your git repo.
##########################################################
# Note: You can only put OpenAI and Anthropic API keys in the yaml
# Note: You can only put OpenAI and Anthropic API keys in the YAML
# config file. Keys for all APIs can be stored in a .env file
# https://aider.chat/docs/config/dotenv.html
@ -137,7 +137,7 @@ cog.outl("```")
## Set the reasoning_effort API parameter (default: not set)
#reasoning-effort: xxx
## Set the thinking token budget for models that support it (default: not set)
## Set the thinking token budget for models that support it. Use 0 to disable. (default: not set)
#thinking-tokens: xxx
## Verify the SSL cert when connecting to models (default: True)
@ -266,6 +266,9 @@ cog.outl("```")
## Enable/disable adding .aider* to .gitignore (default: True)
#gitignore: true
## Enable/disable the addition of files listed in .gitignore to Aider's editing scope.
#add-gitignore-files: false
## Specify the aider ignore file (default: .aiderignore in git root)
#aiderignore: .aiderignore
@ -290,8 +293,8 @@ cog.outl("```")
## Prefix all commit messages with 'aider: ' (default: False)
#attribute-commit-message-committer: false
## Attribute aider edits using the Co-authored-by trailer in the commit message (default: False). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
#attribute-co-authored-by: false
## Attribute aider edits using the Co-authored-by trailer in the commit message (default: True). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
#attribute-co-authored-by: true
## Enable/disable git pre-commit hooks with --no-verify (default: False)
#git-commit-verify: false
@ -349,6 +352,12 @@ cog.outl("```")
## Permanently disable analytics
#analytics-disable: false
## Send analytics to custom PostHog instance
#analytics-posthog-host: xxx
## Send analytics to custom PostHog project
#analytics-posthog-project-api-key: xxx
############
# Upgrading:
@ -440,6 +449,9 @@ cog.outl("```")
## Specify the language to use in the chat (default: None, uses system settings)
#chat-language: xxx
## Specify the language to use in the commit message (default: None, user language)
#commit-language: xxx
## Always say yes to every confirmation
#yes-always: false


@ -40,9 +40,9 @@ OPENAI_API_KEY=<key>
ANTHROPIC_API_KEY=<key>
```
#### Yaml config file
#### YAML config file
You can also set those API keys via special entries in the
[yaml config file](/docs/config/aider_conf.html), like this:
[YAML config file](/docs/config/aider_conf.html), like this:
```yaml
openai-api-key: <key>
@ -74,7 +74,7 @@ OPENROUTER_API_KEY=bar
DEEPSEEK_API_KEY=baz
```
#### Yaml config file
#### YAML config file
You can also set API keys in the


@ -112,7 +112,7 @@ cog.outl("```")
## Set the reasoning_effort API parameter (default: not set)
#AIDER_REASONING_EFFORT=
## Set the thinking token budget for models that support it (default: not set)
## Set the thinking token budget for models that support it. Use 0 to disable. (default: not set)
#AIDER_THINKING_TOKENS=
## Verify the SSL cert when connecting to models (default: True)
@ -241,6 +241,9 @@ cog.outl("```")
## Enable/disable adding .aider* to .gitignore (default: True)
#AIDER_GITIGNORE=true
## Enable/disable the addition of files listed in .gitignore to Aider's editing scope.
#AIDER_ADD_GITIGNORE_FILES=false
## Specify the aider ignore file (default: .aiderignore in git root)
#AIDER_AIDERIGNORE=.aiderignore
@ -265,8 +268,8 @@ cog.outl("```")
## Prefix all commit messages with 'aider: ' (default: False)
#AIDER_ATTRIBUTE_COMMIT_MESSAGE_COMMITTER=false
## Attribute aider edits using the Co-authored-by trailer in the commit message (default: False). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
#AIDER_ATTRIBUTE_CO_AUTHORED_BY=false
## Attribute aider edits using the Co-authored-by trailer in the commit message (default: True). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
#AIDER_ATTRIBUTE_CO_AUTHORED_BY=true
## Enable/disable git pre-commit hooks with --no-verify (default: False)
#AIDER_GIT_COMMIT_VERIFY=false
@ -319,6 +322,12 @@ cog.outl("```")
## Permanently disable analytics
#AIDER_ANALYTICS_DISABLE=false
## Send analytics to custom PostHog instance
#AIDER_ANALYTICS_POSTHOG_HOST=
## Send analytics to custom PostHog project
#AIDER_ANALYTICS_POSTHOG_PROJECT_API_KEY=
############
# Upgrading:
@ -397,6 +406,9 @@ cog.outl("```")
## Specify the language to use in the chat (default: None, uses system settings)
#AIDER_CHAT_LANGUAGE=
## Specify the language to use in the commit message (default: None, user language)
#AIDER_COMMIT_LANGUAGE=
## Always say yes to every confirmation
#AIDER_YES_ALWAYS=


@ -12,7 +12,7 @@ Aider allows you to configure your preferred text editor for use with the `/edit
You can specify the text editor with the `--editor` switch or using
`editor:` in aider's
[yaml config file](https://aider.chat/docs/config/aider_conf.html).
[YAML config file](https://aider.chat/docs/config/aider_conf.html).
## Environment variables


@ -79,17 +79,19 @@ for alias, model in sorted(MODEL_ALIASES.items()):
- `4-turbo`: gpt-4-1106-preview
- `4o`: gpt-4o
- `deepseek`: deepseek/deepseek-chat
- `flash`: gemini/gemini-2.5-flash-preview-04-17
- `gemini`: gemini/gemini-2.5-pro-preview-05-06
- `gemini-2.5-pro`: gemini/gemini-2.5-pro-preview-05-06
- `flash`: gemini/gemini-2.5-flash
- `flash-lite`: gemini/gemini-2.5-flash-lite
- `gemini`: gemini/gemini-3-pro-preview
- `gemini-2.5-pro`: gemini/gemini-2.5-pro
- `gemini-3-pro-preview`: gemini/gemini-3-pro-preview
- `gemini-exp`: gemini/gemini-2.5-pro-exp-03-25
- `grok3`: xai/grok-3-beta
- `haiku`: claude-3-5-haiku-20241022
- `optimus`: openrouter/openrouter/optimus-alpha
- `opus`: claude-3-opus-20240229
- `opus`: claude-opus-4-20250514
- `quasar`: openrouter/openrouter/quasar-alpha
- `r1`: deepseek/deepseek-reasoner
- `sonnet`: anthropic/claude-3-7-sonnet-20250219
- `sonnet`: anthropic/claude-sonnet-4-20250514
<!--[[[end]]]-->
## Priority


@ -49,8 +49,10 @@ usage: aider [-h] [--model] [--openai-api-key] [--anthropic-api-key]
[--completion-menu-current-color]
[--completion-menu-current-bg-color] [--code-theme]
[--show-diffs] [--git | --no-git]
[--gitignore | --no-gitignore] [--aiderignore]
[--subtree-only] [--auto-commits | --no-auto-commits]
[--gitignore | --no-gitignore]
[--add-gitignore-files | --no-add-gitignore-files]
[--aiderignore] [--subtree-only]
[--auto-commits | --no-auto-commits]
[--dirty-commits | --no-dirty-commits]
[--attribute-author | --no-attribute-author]
[--attribute-committer | --no-attribute-committer]
@ -64,7 +66,9 @@ usage: aider [-h] [--model] [--openai-api-key] [--anthropic-api-key]
[--lint-cmd] [--auto-lint | --no-auto-lint]
[--test-cmd] [--auto-test | --no-auto-test] [--test]
[--analytics | --no-analytics] [--analytics-log]
[--analytics-disable] [--just-check-update]
[--analytics-disable] [--analytics-posthog-host]
[--analytics-posthog-project-api-key]
[--just-check-update]
[--check-update | --no-check-update]
[--show-release-notes | --no-show-release-notes]
[--install-main-branch] [--upgrade] [--version]
@ -74,9 +78,9 @@ usage: aider [-h] [--model] [--openai-api-key] [--anthropic-api-key]
[--apply-clipboard-edits] [--exit] [--show-repo-map]
[--show-prompts] [--voice-format] [--voice-language]
[--voice-input-device] [--disable-playwright] [--file]
[--read] [--vim] [--chat-language] [--yes-always] [-v]
[--load] [--encoding] [--line-endings] [-c]
[--env-file]
[--read] [--vim] [--chat-language] [--commit-language]
[--yes-always] [-v] [--load] [--encoding]
[--line-endings] [-c] [--env-file]
[--suggest-shell-commands | --no-suggest-shell-commands]
[--fancy-input | --no-fancy-input]
[--multiline | --no-multiline]
@ -171,7 +175,7 @@ Set the reasoning_effort API parameter (default: not set)
Environment variable: `AIDER_REASONING_EFFORT`
### `--thinking-tokens VALUE`
Set the thinking token budget for models that support it (default: not set)
Set the thinking token budget for models that support it. Use 0 to disable. (default: not set)
Environment variable: `AIDER_THINKING_TOKENS`
### `--verify-ssl`
@ -388,6 +392,14 @@ Aliases:
- `--gitignore`
- `--no-gitignore`
### `--add-gitignore-files`
Enable/disable the addition of files listed in .gitignore to Aider's editing scope.
Default: False
Environment variable: `AIDER_ADD_GITIGNORE_FILES`
Aliases:
- `--add-gitignore-files`
- `--no-add-gitignore-files`
### `--aiderignore AIDERIGNORE`
Specify the aider ignore file (default: .aiderignore in git root)
Default: .aiderignore
@ -445,8 +457,8 @@ Aliases:
- `--no-attribute-commit-message-committer`
### `--attribute-co-authored-by`
Attribute aider edits using the Co-authored-by trailer in the commit message (default: False). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
Default: False
Attribute aider edits using the Co-authored-by trailer in the commit message (default: True). If True, this takes precedence over default --attribute-author and --attribute-committer behavior unless they are explicitly set to True.
Default: True
Environment variable: `AIDER_ATTRIBUTE_CO_AUTHORED_BY`
Aliases:
- `--attribute-co-authored-by`
@ -546,6 +558,14 @@ Permanently disable analytics
Default: False
Environment variable: `AIDER_ANALYTICS_DISABLE`
### `--analytics-posthog-host ANALYTICS_POSTHOG_HOST`
Send analytics to custom PostHog instance
Environment variable: `AIDER_ANALYTICS_POSTHOG_HOST`
### `--analytics-posthog-project-api-key ANALYTICS_POSTHOG_PROJECT_API_KEY`
Send analytics to custom PostHog project
Environment variable: `AIDER_ANALYTICS_POSTHOG_PROJECT_API_KEY`
## Upgrading:
### `--just-check-update`
@ -683,6 +703,10 @@ Environment variable: `AIDER_VIM`
Specify the language to use in the chat (default: None, uses system settings)
Environment variable: `AIDER_CHAT_LANGUAGE`
### `--commit-language COMMIT_LANGUAGE`
Specify the language to use in the commit message (default: None, user language)
Environment variable: `AIDER_COMMIT_LANGUAGE`
### `--yes-always`
Always say yes to every confirmation
Environment variable: `AIDER_YES_ALWAYS`


@ -25,7 +25,7 @@ aider --model r1
```
Inside the aider chat, you can use `/thinking-tokens 4k` or `/reasoning-effort low` to change
the amount of reasoning.
the amount of reasoning. Use `/thinking-tokens 0` to disable thinking tokens.
The rest of this document describes more advanced details which are mainly needed
if you're configuring aider to work with a lesser known reasoning model or one served
@ -47,6 +47,7 @@ You can use the `--thinking-tokens` switch to request
the model use a certain number of thinking tokens.
This switch is useful for Sonnet 3.7.
You can specify the token budget like "1024", "1k", "8k" or "0.01M".
Use "0" to disable thinking tokens.
### Model compatibility and settings


@ -264,14 +264,15 @@ tr:hover { background-color: #f5f5f5; }
</style>
<table>
<tr><th>Model Name</th><th class='right'>Total Tokens</th><th class='right'>Percent</th></tr>
<tr><td>gemini/gemini-2.5-pro-exp-03-25</td><td class='right'>890,057</td><td class='right'>69.9%</td></tr>
<tr><td>o3</td><td class='right'>373,753</td><td class='right'>29.4%</td></tr>
<tr><td>openrouter/REDACTED</td><td class='right'>8,745</td><td class='right'>0.7%</td></tr>
<tr><td>gemini/gemini-2.5-pro</td><td class='right'>222,047</td><td class='right'>23.7%</td></tr>
<tr><td>gpt-5</td><td class='right'>211,072</td><td class='right'>22.6%</td></tr>
<tr><td>gemini/gemini-3-flash-preview</td><td class='right'>187,836</td><td class='right'>20.1%</td></tr>
<tr><td>None</td><td class='right'>168,988</td><td class='right'>18.1%</td></tr>
<tr><td>gemini/gemini-3-pro-preview</td><td class='right'>81,851</td><td class='right'>8.8%</td></tr>
<tr><td>o3-pro</td><td class='right'>36,620</td><td class='right'>3.9%</td></tr>
<tr><td>gemini/gemini-2.5-flash-lite</td><td class='right'>15,470</td><td class='right'>1.7%</td></tr>
<tr><td>gemini/gemini-2.5-flash-lite-preview-06-17</td><td class='right'>11,371</td><td class='right'>1.2%</td></tr>
</table>
{: .note :}
Some models show as REDACTED, because they are new or unpopular models.
Aider's analytics only records the names of "well known" LLMs.
<!--[[[end]]]-->
## How are the "aider wrote xx% of code" stats computed?
@ -370,6 +371,10 @@ Aider is
under an
[Apache 2.0 license](https://github.com/Aider-AI/aider/blob/main/LICENSE.txt).
## Can I Script Aider?
Yes. You can script aider via the command line or Python. See [Scripting aider](https://aider.chat/docs/scripting.html) for details.
<div style="height:80vh"></div>


@ -28,12 +28,6 @@ These one-liners will install aider, along with python 3.12 if needed.
They are based on the
[uv installers](https://docs.astral.sh/uv/getting-started/installation/).
#### Windows
```powershell
powershell -ExecutionPolicy ByPass -c "irm https://aider.chat/install.ps1 | iex"
```
#### Mac & Linux
Use curl to download the script and execute it with sh:
@ -48,6 +42,12 @@ If your system doesn't have curl, you can use wget:
wget -qO- https://aider.chat/install.sh | sh
```
#### Windows
```powershell
powershell -ExecutionPolicy ByPass -c "irm https://aider.chat/install.ps1 | iex"
```
## Install with uv
@ -55,7 +55,7 @@ You can install aider with uv:
```bash
python -m pip install uv # If you need to install uv
uv tool install --force --python python3.12 aider-chat@latest
uv tool install --force --python python3.12 --with pip aider-chat@latest
```
This will install uv using your existing python version 3.8-3.13,


@ -77,10 +77,10 @@ cog.out(get_supported_languages_md())
| capnp | .capnp | | ✓ |
| chatito | .chatito | ✓ | ✓ |
| clarity | .clar | | ✓ |
| clojure | .clj | | ✓ |
| clojure | .cljc | | ✓ |
| clojure | .cljs | | ✓ |
| clojure | .edn | | ✓ |
| clojure | .clj | | ✓ |
| clojure | .cljc | | ✓ |
| clojure | .cljs | | ✓ |
| clojure | .edn | | ✓ |
| cmake | .cmake | | ✓ |
| cmake | CMakeLists.txt | | ✓ |
| commonlisp | .cl | ✓ | ✓ |
@ -110,11 +110,11 @@ cog.out(get_supported_languages_md())
| fennel | .fnl | | ✓ |
| firrtl | .fir | | ✓ |
| fish | .fish | | ✓ |
| fortran | .f | | ✓ |
| fortran | .f03 | | ✓ |
| fortran | .f08 | | ✓ |
| fortran | .f90 | | ✓ |
| fortran | .f95 | | ✓ |
| fortran | .f | | ✓ |
| fortran | .f03 | | ✓ |
| fortran | .f08 | | ✓ |
| fortran | .f90 | | ✓ |
| fortran | .f95 | | ✓ |
| func | .fc | | ✓ |
| gdscript | .gd | | ✓ |
| gitattributes | .gitattributes | | ✓ |
@ -133,7 +133,7 @@ cog.out(get_supported_languages_md())
| gstlaunch | .launch | | ✓ |
| hack | .hack | | ✓ |
| hare | .ha | | ✓ |
| haskell | .hs | | ✓ |
| haskell | .hs | | ✓ |
| haxe | .hx | | ✓ |
| hcl | .hcl | ✓ | ✓ |
| hcl | .tf | ✓ | ✓ |
@ -153,7 +153,7 @@ cog.out(get_supported_languages_md())
| json | .json | | ✓ |
| jsonnet | .jsonnet | | ✓ |
| jsonnet | .libsonnet | | ✓ |
| julia | .jl | | ✓ |
| julia | .jl | | ✓ |
| kconfig | Kconfig | | ✓ |
| kdl | .kdl | | ✓ |
| kotlin | .kt | ✓ | ✓ |
@ -172,8 +172,8 @@ cog.out(get_supported_languages_md())
| make | Makefile | | ✓ |
| markdown | .markdown | | ✓ |
| markdown | .md | | ✓ |
| matlab | .m | | ✓ |
| matlab | .mat | | ✓ |
| matlab | .m | | ✓ |
| matlab | .mat | | ✓ |
| mermaid | .mermaid | | ✓ |
| meson | meson.build | | ✓ |
| ninja | .ninja | | ✓ |
@ -257,7 +257,7 @@ cog.out(get_supported_languages_md())
| xml | .xml | | ✓ |
| xml | .xsl | | ✓ |
| yuck | .yuck | | ✓ |
| zig | .zig | | ✓ |
| zig | .zig | | ✓ |
<!--[[[end]]]-->


@ -285,6 +285,6 @@ mod_dates = [get_last_modified_date(file) for file in files]
latest_mod_date = max(mod_dates)
cog.out(f"{latest_mod_date.strftime('%B %d, %Y.')}")
]]]-->
May 08, 2025.
November 20, 2025.
<!--[[[end]]]-->
</p>


@ -0,0 +1,111 @@
---
parent: Connecting to LLMs
nav_order: 510
---
# GitHub Copilot
Aider can connect to GitHub Copilot's LLMs because Copilot exposes a standard **OpenAI-style**
endpoint at:
```
https://api.githubcopilot.com
```
First, install aider:
{% include install.md %}
---
## Configure your environment
```bash
# macOS/Linux
export OPENAI_API_BASE=https://api.githubcopilot.com
export OPENAI_API_KEY=<oauth_token>
# Windows (PowerShell)
setx OPENAI_API_BASE https://api.githubcopilot.com
setx OPENAI_API_KEY <oauth_token>
# …restart the shell after setx commands
```
---
### Where do I get the token?
The easiest path is to sign in to Copilot from any JetBrains IDE (PyCharm, GoLand, etc.).
After you authenticate, a file appears:
```
~/.config/github-copilot/apps.json
```
On Windows the config can be found in:
```
~\AppData\Local\github-copilot\apps.json
```
Copy the `oauth_token` value; that string is your `OPENAI_API_KEY`.
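If you have `jq` installed (it is also used below for model discovery), you can pull the token straight out of `apps.json`. This is a sketch which assumes the file is a JSON object whose values each carry an `oauth_token` field; the sample file and token below are made up for illustration:

```shell
# Create a stand-in apps.json just for this demo; the real file lives at
# ~/.config/github-copilot/apps.json (see above).
cat > /tmp/apps.json <<'EOF'
{"github.com:Iv1.example": {"user": "you", "oauth_token": "gho_exampletoken"}}
EOF

# Iterate the object's values and take the first oauth_token.
TOKEN=$(jq -r '.[].oauth_token' /tmp/apps.json | head -n 1)
echo "$TOKEN"
export OPENAI_API_KEY="$TOKEN"
```

Point the extraction at the real path once you have authenticated.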
*Note:* tokens created by the Neovim **copilot.lua** plugin (old `hosts.json`) sometimes lack the
needed scopes. If you see “access to this endpoint is forbidden”, regenerate the token with a
JetBrains IDE.
---
## Discover available models
Copilot hosts many models (OpenAI, Anthropic, Google, etc).
List the models your subscription allows with:
```bash
curl -s https://api.githubcopilot.com/models \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "Copilot-Integration-Id: vscode-chat" | jq -r '.data[].id'
```
Each returned ID can be used with aider by **prefixing it with `openai/`**:
```bash
aider --model openai/gpt-4o
# or
aider --model openai/claude-3.7-sonnet-thought
```
---
## Quick start
```bash
# change into your project
cd /to/your/project
# talk to Copilot
aider --model openai/gpt-4o
```
---
## Optional config file (`~/.aider.conf.yml`)
```yaml
openai-api-base: https://api.githubcopilot.com
openai-api-key: "<oauth_token>"
model: openai/gpt-4o
weak-model: openai/gpt-4o-mini
show-model-warnings: false
```
---
## FAQ
* Calls made through aider are billed through your Copilot subscription
(aider will still print *estimated* costs).
* The Copilot docs explicitly allow third-party “agents” that hit this API, so aider is playing by the rules.
* Aider talks directly to the REST endpoint—no web-UI scraping or browser automation.


@ -59,43 +59,57 @@ cog.out(''.join(lines))
- ALEPHALPHA_API_KEY
- ANTHROPIC_API_KEY
- ANYSCALE_API_KEY
- ARK_API_KEY
- AZURE_AI_API_KEY
- AZURE_API_KEY
- AZURE_OPENAI_API_KEY
- BASETEN_API_KEY
- BYTEZ_API_KEY
- CEREBRAS_API_KEY
- CLARIFAI_API_KEY
- CLOUDFLARE_API_KEY
- CO_API_KEY
- CODESTRAL_API_KEY
- COHERE_API_KEY
- COMPACTIFAI_API_KEY
- DASHSCOPE_API_KEY
- DATABRICKS_API_KEY
- DEEPINFRA_API_KEY
- DEEPSEEK_API_KEY
- FEATHERLESS_AI_API_KEY
- FIREWORKS_AI_API_KEY
- FIREWORKS_API_KEY
- FIREWORKSAI_API_KEY
- GEMINI_API_KEY
- GOOGLE_API_KEY
- GROQ_API_KEY
- HUGGINGFACE_API_KEY
- INFINITY_API_KEY
- MARITALK_API_KEY
- MISTRAL_API_KEY
- MOONSHOT_API_KEY
- NEBIUS_API_KEY
- NLP_CLOUD_API_KEY
- NOVITA_API_KEY
- NVIDIA_NIM_API_KEY
- OLLAMA_API_KEY
- OPENAI_API_KEY
- OPENAI_LIKE_API_KEY
- OPENROUTER_API_KEY
- OR_API_KEY
- OVHCLOUD_API_KEY
- PALM_API_KEY
- PERPLEXITYAI_API_KEY
- PREDIBASE_API_KEY
- PROVIDER_API_KEY
- REPLICATE_API_KEY
- SAMBANOVA_API_KEY
- TOGETHERAI_API_KEY
- USER_API_KEY
- VERCEL_AI_GATEWAY_API_KEY
- VOLCENGINE_API_KEY
- VOYAGE_API_KEY
- WANDB_API_KEY
- WATSONX_API_KEY
- WX_API_KEY
- XAI_API_KEY


@ -40,7 +40,7 @@ cd /to/your/project
aider --model vertex_ai/claude-3-5-sonnet@20240620
```
Or you can use the [yaml config](/docs/config/aider_conf.html) to set the model to any of the
Or you can use the [YAML config](/docs/config/aider_conf.html) to set the model to any of the
models supported by Vertex AI.
Example `.aider.conf.yml` file:


@ -52,10 +52,9 @@ will confirm you wish to opt-in to analytics.
- `--no-analytics` will turn off analytics for the current session.
- By default, if you don't provide `--analytics` or `--no-analytics`,
aider will enable analytics for a random subset of users.
Such randomly selected users will be asked if they wish to opt-in to analytics.
This will never happen if you have permanently disabled analytics
with `--analytics-disable`.
Randomly selected users will be asked if they wish to opt-in to analytics.
## Opting in
@ -106,6 +105,12 @@ If you want to just log analytics without reporting them, you can do:
aider --analytics-log filename.jsonl --no-analytics
```
### Sending analytics to custom PostHog project or installation
Aider uses PostHog for analytics collection. You can configure aider to send analytics to your own PostHog project or a custom PostHog installation using these parameters:
- `--analytics-posthog-project-api-key KEY` - Set a custom PostHog project API key
- `--analytics-posthog-host HOST` - Set a custom PostHog host (default is app.posthog.com)
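For example, to route analytics to a self-hosted or EU PostHog instance, you could add entries like these to your [YAML config file](/docs/config/aider_conf.html); the host and key below are placeholders:

```yaml
analytics: true
analytics-posthog-host: https://eu.posthog.com
analytics-posthog-project-api-key: phc_your_project_key
```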
## Reporting issues


@ -57,7 +57,30 @@ cog.out(model_list)
]]]-->
- anthropic.claude-3-5-haiku-20241022-v1:0
- anthropic.claude-3-5-sonnet-20241022-v2:0
- anthropic.claude-3-7-sonnet-20240620-v1:0
- anthropic.claude-3-7-sonnet-20250219-v1:0
- anthropic.claude-haiku-4-5-20251001-v1:0
- anthropic.claude-haiku-4-5@20251001
- anthropic.claude-opus-4-1-20250805-v1:0
- anthropic.claude-opus-4-20250514-v1:0
- anthropic.claude-opus-4-5-20251101-v1:0
- anthropic.claude-sonnet-4-20250514-v1:0
- anthropic.claude-sonnet-4-5-20250929-v1:0
- apac.anthropic.claude-3-5-sonnet-20241022-v2:0
- apac.anthropic.claude-haiku-4-5-20251001-v1:0
- apac.anthropic.claude-sonnet-4-20250514-v1:0
- au.anthropic.claude-haiku-4-5-20251001-v1:0
- au.anthropic.claude-sonnet-4-5-20250929-v1:0
- azure_ai/claude-haiku-4-5
- azure_ai/claude-opus-4-1
- azure_ai/claude-sonnet-4-5
- azure_ai/deepseek-v3.2
- azure_ai/deepseek-v3.2-speciale
- azure_ai/mistral-medium-2505
- bedrock/us-gov-east-1/claude-sonnet-4-5-20250929-v1:0
- bedrock/us-gov-west-1/anthropic.claude-3-7-sonnet-20250219-v1:0
- bedrock/us-gov-west-1/claude-sonnet-4-5-20250929-v1:0
- bedrock/us.anthropic.claude-3-5-haiku-20241022-v1:0
- claude-3-5-haiku-20241022
- claude-3-5-haiku-latest
- claude-3-5-sonnet-20240620
@ -68,24 +91,72 @@ cog.out(model_list)
- claude-3-haiku-20240307
- claude-3-opus-20240229
- claude-3-opus-latest
- claude-3-sonnet-20240229
- claude-4-opus-20250514
- claude-4-sonnet-20250514
- claude-haiku-4-5
- claude-haiku-4-5-20251001
- claude-opus-4-1
- claude-opus-4-1-20250805
- claude-opus-4-20250514
- claude-opus-4-5
- claude-opus-4-5-20251101
- claude-sonnet-4-20250514
- claude-sonnet-4-5
- claude-sonnet-4-5-20250929
- claude-sonnet-4-5-20250929-v1:0
- codestral/codestral-2405
- codestral/codestral-latest
- databricks/databricks-claude-3-7-sonnet
- databricks/databricks-claude-haiku-4-5
- databricks/databricks-claude-opus-4
- databricks/databricks-claude-opus-4-1
- databricks/databricks-claude-opus-4-5
- databricks/databricks-claude-sonnet-4
- databricks/databricks-claude-sonnet-4-1
- databricks/databricks-claude-sonnet-4-5
- deepseek/deepseek-chat
- deepseek/deepseek-coder
- deepseek/deepseek-r1
- deepseek/deepseek-reasoner
- deepseek/deepseek-v3
- deepseek/deepseek-v3.2
- eu.anthropic.claude-3-5-haiku-20241022-v1:0
- eu.anthropic.claude-3-5-sonnet-20241022-v2:0
- eu.anthropic.claude-3-7-sonnet-20250219-v1:0
- eu.anthropic.claude-haiku-4-5-20251001-v1:0
- eu.anthropic.claude-opus-4-1-20250805-v1:0
- eu.anthropic.claude-opus-4-20250514-v1:0
- eu.anthropic.claude-opus-4-5-20251101-v1:0
- eu.anthropic.claude-sonnet-4-20250514-v1:0
- eu.anthropic.claude-sonnet-4-5-20250929-v1:0
- global.anthropic.claude-haiku-4-5-20251001-v1:0
- global.anthropic.claude-opus-4-5-20251101-v1:0
- global.anthropic.claude-sonnet-4-20250514-v1:0
- global.anthropic.claude-sonnet-4-5-20250929-v1:0
- jp.anthropic.claude-haiku-4-5-20251001-v1:0
- jp.anthropic.claude-sonnet-4-5-20250929-v1:0
- mistral/codestral-2405
- mistral/codestral-2508
- mistral/codestral-latest
- mistral/codestral-mamba-latest
- mistral/devstral-2512
- mistral/devstral-medium-2507
- mistral/devstral-small-2505
- mistral/devstral-small-2507
- mistral/labs-devstral-small-2512
- mistral/magistral-medium-2506
- mistral/magistral-medium-2509
- mistral/magistral-medium-latest
- mistral/magistral-small-2506
- mistral/magistral-small-latest
- mistral/mistral-large-2402
- mistral/mistral-large-2407
- mistral/mistral-large-2411
- mistral/mistral-large-3
- mistral/mistral-large-latest
- mistral/mistral-medium
- mistral/mistral-medium-2312
- mistral/mistral-medium-2505
- mistral/mistral-medium-latest
- mistral/mistral-small
- mistral/mistral-small-latest
@ -101,10 +172,26 @@ cog.out(model_list)
- mistral/pixtral-large-latest
- openrouter/anthropic/claude-3.5-sonnet
- openrouter/anthropic/claude-3.7-sonnet
- openrouter/anthropic/claude-haiku-4.5
- openrouter/anthropic/claude-opus-4
- openrouter/anthropic/claude-opus-4.1
- openrouter/anthropic/claude-opus-4.5
- openrouter/anthropic/claude-sonnet-4
- openrouter/anthropic/claude-sonnet-4.5
- openrouter/deepseek/deepseek-chat-v3.1
- openrouter/deepseek/deepseek-r1
- openrouter/deepseek/deepseek-r1-0528
- openrouter/deepseek/deepseek-v3.2
- openrouter/deepseek/deepseek-v3.2-exp
- us.anthropic.claude-3-5-haiku-20241022-v1:0
- us.anthropic.claude-3-5-sonnet-20241022-v2:0
- us.anthropic.claude-3-7-sonnet-20250219-v1:0
- us.anthropic.claude-haiku-4-5-20251001-v1:0
- us.anthropic.claude-opus-4-1-20250805-v1:0
- us.anthropic.claude-opus-4-20250514-v1:0
- us.anthropic.claude-opus-4-5-20251101-v1:0
- us.anthropic.claude-sonnet-4-20250514-v1:0
- us.anthropic.claude-sonnet-4-5-20250929-v1:0
- vertex_ai/claude-3-5-haiku
- vertex_ai/claude-3-5-haiku@20241022
- vertex_ai/claude-3-5-sonnet
@ -118,6 +205,20 @@ cog.out(model_list)
- vertex_ai/claude-3-opus@20240229
- vertex_ai/claude-3-sonnet
- vertex_ai/claude-3-sonnet@20240229
- vertex_ai/claude-haiku-4-5@20251001
- vertex_ai/claude-opus-4
- vertex_ai/claude-opus-4-1
- vertex_ai/claude-opus-4-1@20250805
- vertex_ai/claude-opus-4-5
- vertex_ai/claude-opus-4-5@20251101
- vertex_ai/claude-opus-4@20250514
- vertex_ai/claude-sonnet-4
- vertex_ai/claude-sonnet-4-5
- vertex_ai/claude-sonnet-4-5@20250929
- vertex_ai/claude-sonnet-4@20250514
- vertex_ai/deepseek-ai/deepseek-r1-0528-maas
- vertex_ai/deepseek-ai/deepseek-v3.1-maas
- vertex_ai/deepseek-ai/deepseek-v3.2-maas
<!--[[[end]]]-->


@ -88,7 +88,7 @@ for all the supported arguments.
It can also be helpful to set the equivalent of `--yes` by doing this:
```
```python
from aider.io import InputOutput
io = InputOutput(yes=True)
# ...


@ -57,7 +57,7 @@ cog.out(get_help_md())
| **/save** | Save commands to a file that can reconstruct the current chat session's files |
| **/settings** | Print out the current settings |
| **/test** | Run a shell command and add the output to the chat on non-zero exit code |
| **/think-tokens** | Set the thinking token budget (supports formats like 8096, 8k, 10.5k, 0.5M) |
| **/think-tokens** | Set the thinking token budget, e.g. 8096, 8k, 10.5k, 0.5M, or 0 to disable. |
| **/tokens** | Report on the number of tokens used by the current chat context |
| **/undo** | Undo the last git commit if it was done by aider |
| **/voice** | Record and transcribe voice input |
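The `/think-tokens` budget accepts plain integers or suffixed values like `8k` and `0.5M`. A minimal parser sketch of that suffix handling is below; it is not aider's actual implementation, and treating `k` as 1024 (rather than 1000) is an assumption made here.

```python
def parse_token_budget(value):
    """Parse a token budget like '8096', '8k', '10.5k', or '0.5M'.

    A sketch of the suffix handling described for /think-tokens;
    aider's real parser may differ. Returns an int; 0 disables.
    Assumes k = 1024 and M = 1024 * 1024.
    """
    value = value.strip()
    multipliers = {"k": 1024, "m": 1024 * 1024}
    suffix = value[-1].lower()
    if suffix in multipliers:
        return int(float(value[:-1]) * multipliers[suffix])
    return int(value)
```

For example, `parse_token_budget("10.5k")` yields 10752 under these assumptions.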


@ -69,11 +69,11 @@ cog.out(text)
]]]-->
<a href="https://github.com/Aider-AI/aider" class="github-badge badge-stars" title="Total number of GitHub stars the Aider project has received">
<span class="badge-label">⭐ GitHub Stars</span>
<span class="badge-value">33K</span>
<span class="badge-value">39K</span>
</a>
<a href="https://pypi.org/project/aider-chat/" class="github-badge badge-installs" title="Total number of installations via pip from PyPI">
<span class="badge-label">📦 Installs</span>
<span class="badge-value">2.2M</span>
<span class="badge-value">4.1M</span>
</a>
<div class="github-badge badge-tokens" title="Number of tokens processed weekly by Aider users">
<span class="badge-label">📈 Tokens/week</span>
@ -85,7 +85,7 @@ cog.out(text)
</a>
<a href="/HISTORY.html" class="github-badge badge-coded" title="Percentage of the new code in Aider's last release written by Aider itself">
<span class="badge-label">🔄 Singularity</span>
<span class="badge-value">92%</span>
<span class="badge-value">88%</span>
</a>
<!--[[[end]]]-->
</div>
@ -269,178 +269,183 @@ cog.out(text)
<script>
const testimonials = [
{
text: "My life has changed... There's finally an AI coding tool that's good enough to keep up with me... Aider... It's going to rock your world.",
author: "Eric S. Raymond",
text: "My life has changed... Aider... It's going to rock your world.",
author: "Eric S. Raymond on X",
link: "https://x.com/esrtweet/status/1910809356381413593"
},
{
text: "The best free open source AI coding assistant.",
author: "IndyDevDan",
author: "IndyDevDan on YouTube",
link: "https://youtu.be/YALpX8oOn78"
},
{
text: "The best AI coding assistant so far.",
author: "Matthew Berman",
author: "Matthew Berman on YouTube",
link: "https://www.youtube.com/watch?v=df8afeb1FY8"
},
{
text: "Aider ... has easily quadrupled my coding productivity.",
author: "SOLAR_FIELDS",
author: "SOLAR_FIELDS on Hacker News",
link: "https://news.ycombinator.com/item?id=36212100"
},
{
text: "It's a cool workflow... Aider's ergonomics are perfect for me.",
author: "qup",
author: "qup on Hacker News",
link: "https://news.ycombinator.com/item?id=38185326"
},
{
text: "It's really like having your senior developer live right in your Git repo - truly amazing!",
author: "rappster",
author: "rappster on GitHub",
link: "https://github.com/Aider-AI/aider/issues/124"
},
{
text: "What an amazing tool. It's incredible.",
author: "valyagolev",
author: "valyagolev on GitHub",
link: "https://github.com/Aider-AI/aider/issues/6#issue-1722897858"
},
{
text: "Aider is such an astounding thing!",
author: "cgrothaus",
author: "cgrothaus on GitHub",
link: "https://github.com/Aider-AI/aider/issues/82#issuecomment-1631876700"
},
{
text: "It was WAY faster than I would be getting off the ground and making the first few working versions.",
author: "Daniel Feldman",
author: "Daniel Feldman on X",
link: "https://twitter.com/d_feldman/status/1662295077387923456"
},
{
text: "THANK YOU for Aider! It really feels like a glimpse into the future of coding.",
author: "derwiki",
author: "derwiki on Hacker News",
link: "https://news.ycombinator.com/item?id=38205643"
},
{
text: "It's just amazing. It is freeing me to do things I felt were out my comfort zone before.",
author: "Dougie",
author: "Dougie on Discord",
link: "https://discord.com/channels/1131200896827654144/1174002618058678323/1174084556257775656"
},
{
text: "This project is stellar.",
author: "funkytaco",
author: "funkytaco on GitHub",
link: "https://github.com/Aider-AI/aider/issues/112#issuecomment-1637429008"
},
{
text: "Amazing project, definitely the best AI coding assistant I've used.",
author: "joshuavial",
author: "joshuavial on GitHub",
link: "https://github.com/Aider-AI/aider/issues/84"
},
{
text: "I absolutely love using Aider ... It makes software development feel so much lighter as an experience.",
author: "principalideal0",
author: "principalideal0 on Discord",
link: "https://discord.com/channels/1131200896827654144/1133421607499595858/1229689636012691468"
},
{
text: "I have been recovering from multiple shoulder surgeries ... and have used aider extensively. It has allowed me to continue productivity.",
author: "codeninja",
text: "I have been recovering from ... surgeries ... aider ... has allowed me to continue productivity.",
author: "codeninja on Reddit",
link: "https://www.reddit.com/r/OpenAI/s/nmNwkHy1zG"
},
{
text: "I am an aider addict. I'm getting so much more work done, but in less time.",
author: "dandandan",
author: "dandandan on Discord",
link: "https://discord.com/channels/1131200896827654144/1131200896827654149/1135913253483069470"
},
{
text: "After wasting $100 on tokens trying to find something better, I'm back to Aider. It blows everything else out of the water hands down, there's no competition whatsoever.",
author: "SystemSculpt",
text: "Aider... blows everything else out of the water hands down, there's no competition whatsoever.",
author: "SystemSculpt on Discord",
link: "https://discord.com/channels/1131200896827654144/1131200896827654149/1178736602797846548"
},
{
text: "Aider is amazing, coupled with Sonnet 3.5 it's quite mind blowing.",
author: "Josh Dingus",
author: "Josh Dingus on Discord",
link: "https://discord.com/channels/1131200896827654144/1133060684540813372/1262374225298198548"
},
{
text: "Hands down, this is the best AI coding assistant tool so far.",
author: "IndyDevDan",
author: "IndyDevDan on YouTube",
link: "https://www.youtube.com/watch?v=MPYFPvxfGZs"
},
{
text: "[Aider] changed my daily coding workflows. It's mind-blowing how a single Python application can change your life.",
author: "maledorak",
text: "[Aider] changed my daily coding workflows. It's mind-blowing how ...(it)... can change your life.",
author: "maledorak on Discord",
link: "https://discord.com/channels/1131200896827654144/1131200896827654149/1258453375620747264"
},
{
text: "Best agent for actual dev work in existing codebases.",
author: "Nick Dobos",
author: "Nick Dobos on X",
link: "https://twitter.com/NickADobos/status/1690408967963652097?s=20"
},
{
text: "One of my favorite pieces of software. Blazing trails on new paradigms!",
author: "Chris Wall",
author: "Chris Wall on X",
link: "https://x.com/chris65536/status/1905053299251798432"
},
{
text: "Aider has been revolutionary for me and my work.",
author: "Starry Hope",
author: "Starry Hope on X",
link: "https://x.com/starryhopeblog/status/1904985812137132056"
},
{
text: "Try aider! One of the best ways to vibe code.",
author: "Chris Wall",
author: "Chris Wall on X",
link: "https://x.com/Chris65536/status/1905053418961391929"
},
{
text: "Freaking love Aider.",
author: "hztar on Hacker News",
link: "https://news.ycombinator.com/item?id=44035015"
},
{
text: "Aider is hands down the best. And it's free and opensource.",
author: "AriyaSavakaLurker",
author: "AriyaSavakaLurker on Reddit",
link: "https://www.reddit.com/r/ChatGPTCoding/comments/1ik16y6/whats_your_take_on_aider/mbip39n/"
},
{
text: "Aider is also my best friend.",
author: "jzn21",
author: "jzn21 on Reddit",
link: "https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27dcnb/"
},
{
text: "Try Aider, it's worth it.",
author: "jorgejhms",
author: "jorgejhms on Reddit",
link: "https://www.reddit.com/r/ChatGPTCoding/comments/1heuvuo/aider_vs_cline_vs_windsurf_vs_cursor/m27cp99/"
},
{
text: "I like aider :)",
author: "Chenwei Cui",
author: "Chenwei Cui on X",
link: "https://x.com/ccui42/status/1904965344999145698"
},
{
text: "Aider is the precision tool of LLM code gen... Minimal, thoughtful and capable of surgical changes to your codebase all while keeping the developer in control.",
author: "Reilly Sweetland",
text: "Aider is the precision tool of LLM code gen... Minimal, thoughtful and capable of surgical changes ... while keeping the developer in control.",
author: "Reilly Sweetland on X",
link: "https://x.com/rsweetland/status/1904963807237259586"
},
{
text: "Cannot believe aider vibe coded a 650 LOC feature across service and cli today in 1 shot.",
author: "autopoietist",
author: "autopoietist on Discord",
link: "https://discord.com/channels/1131200896827654144/1131200896827654149/1355675042259796101"
},
{
text: "Oh no the secret is out! Yes, Aider is the best coding tool around. I highly, highly recommend it to anyone.",
author: "Joshua D Vander Hook",
author: "Joshua D Vander Hook on X",
link: "https://x.com/jodavaho/status/1911154899057795218"
},
{
text: "thanks to aider, i have started and finished three personal projects within the last two days",
author: "joseph stalzyn",
author: "joseph stalzyn on X",
link: "https://x.com/anitaheeder/status/1908338609645904160"
},
{
text: "Been using aider as my daily driver for over a year ... I absolutely love the tool, like beyond words.",
author: "koleok",
author: "koleok on Discord",
link: "https://discord.com/channels/1131200896827654144/1273248471394291754/1356727448372252783"
},
{
text: "Aider ... is the tool to benchmark against.",
author: "BeetleB",
author: "BeetleB on Hacker News",
link: "https://news.ycombinator.com/item?id=43930201"
},
{
text: "aider is really cool",
author: "kache (@yacineMTB)",
author: "kache on X",
link: "https://x.com/yacineMTB/status/1911224442430124387"
}
];
@ -642,6 +647,7 @@ const testimonials = [
<li><a href="/docs/leaderboards/">LLM Leaderboards</a></li>
<li><a href="https://github.com/Aider-AI/aider">GitHub Repository</a></li>
<li><a href="https://discord.gg/Y7X7bhMQFV">Discord Community</a></li>
<li><a href="https://aider.chat/HISTORY.html">Release notes</a></li>
<li><a href="/blog/">Blog</a></li>
</ul>
</div>


@ -425,7 +425,7 @@ function Invoke-Installer($artifacts, $platforms) {
Write-Information ""
Write-Information "Installing aider-chat..."
& "$dest_dir\uv.exe" tool install --force --python python3.12 aider-chat@latest
& "$dest_dir\uv.exe" tool install --force --python python3.12 --with pip aider-chat@latest
if (-not $NoModifyPath) {
Add-Ci-Path $dest_dir


@ -1178,7 +1178,7 @@ install() {
say "Installing aider..."
say ""
# Install aider-chat using the newly installed uv
ensure "${_install_dir}/uv" tool install --force --python python3.12 aider-chat@latest
ensure "${_install_dir}/uv" tool install --force --python python3.12 --with pip aider-chat@latest
# Avoid modifying the users PATH if they are managing their PATH manually
case :$PATH:


@ -184,7 +184,7 @@ def main(
False, "--clean", "-c", help="Discard the existing testdir and make a clean copy"
),
cont: bool = typer.Option(False, "--cont", help="Continue the (single) matching testdir"),
make_new: bool = typer.Option(False, "--new", "-n", help="Make a new dated testdir"),
make_new: bool = typer.Option(False, "--new", help="Make a new dated testdir"),
no_unit_tests: bool = typer.Option(False, "--no-unit-tests", help="Do not run unit tests"),
no_aider: bool = typer.Option(False, "--no-aider", help="Do not run aider"),
verbose: bool = typer.Option(False, "--verbose", "-v", help="Verbose output"),


@ -1,4 +1,4 @@
FROM python:3.10-slim AS base
FROM python:3.10-slim-bookworm AS base
# Install system dependencies
RUN apt-get update && \


@ -4,11 +4,11 @@ aiohappyeyeballs==2.6.1
# via
# -c requirements/common-constraints.txt
# aiohttp
aiohttp==3.11.18
aiohttp==3.13.2
# via
# -c requirements/common-constraints.txt
# litellm
aiosignal==1.3.2
aiosignal==1.4.0
# via
# -c requirements/common-constraints.txt
# aiohttp
@ -16,13 +16,17 @@ annotated-types==0.7.0
# via
# -c requirements/common-constraints.txt
# pydantic
anyio==4.9.0
anyio==4.12.0
# via
# -c requirements/common-constraints.txt
# httpx
# openai
# watchfiles
attrs==25.3.0
asgiref==3.11.0
# via
# -c requirements/common-constraints.txt
# mixpanel
attrs==25.4.0
# via
# -c requirements/common-constraints.txt
# aiohttp
@ -33,34 +37,34 @@ backoff==2.2.1
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
# posthog
beautifulsoup4==4.13.4
beautifulsoup4==4.14.3
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
cachetools==5.5.2
cachetools==6.2.4
# via
# -c requirements/common-constraints.txt
# google-auth
certifi==2025.4.26
certifi==2025.11.12
# via
# -c requirements/common-constraints.txt
# httpcore
# httpx
# requests
cffi==1.17.1
cffi==2.0.0
# via
# -c requirements/common-constraints.txt
# sounddevice
# soundfile
charset-normalizer==3.4.2
charset-normalizer==3.4.4
# via
# -c requirements/common-constraints.txt
# requests
click==8.1.8
click==8.3.1
# via
# -c requirements/common-constraints.txt
# litellm
configargparse==1.7
configargparse==1.7.1
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
@ -77,20 +81,24 @@ distro==1.9.0
# -c requirements/common-constraints.txt
# openai
# posthog
filelock==3.18.0
fastuuid==0.14.0
# via
# -c requirements/common-constraints.txt
# litellm
filelock==3.20.1
# via
# -c requirements/common-constraints.txt
# huggingface-hub
flake8==7.2.0
flake8==7.3.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
frozenlist==1.6.0
frozenlist==1.8.0
# via
# -c requirements/common-constraints.txt
# aiohttp
# aiosignal
fsspec==2025.3.2
fsspec==2025.12.0
# via
# -c requirements/common-constraints.txt
# huggingface-hub
@ -98,7 +106,7 @@ gitdb==4.0.12
# via
# -c requirements/common-constraints.txt
# gitpython
gitpython==3.1.44
gitpython==3.1.45
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
@ -106,17 +114,17 @@ google-ai-generativelanguage==0.6.15
# via
# -c requirements/common-constraints.txt
# google-generativeai
google-api-core[grpc]==2.24.2
google-api-core[grpc]==2.28.1
# via
# -c requirements/common-constraints.txt
# google-ai-generativelanguage
# google-api-python-client
# google-generativeai
google-api-python-client==2.169.0
google-api-python-client==2.187.0
# via
# -c requirements/common-constraints.txt
# google-generativeai
google-auth==2.40.1
google-auth==2.45.0
# via
# -c requirements/common-constraints.txt
# google-ai-generativelanguage
@ -124,15 +132,15 @@ google-auth==2.40.1
# google-api-python-client
# google-auth-httplib2
# google-generativeai
google-auth-httplib2==0.2.0
google-auth-httplib2==0.3.0
# via
# -c requirements/common-constraints.txt
# google-api-python-client
google-generativeai==0.8.5
google-generativeai==0.8.6
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
googleapis-common-protos==1.70.0
googleapis-common-protos==1.72.0
# via
# -c requirements/common-constraints.txt
# google-api-core
@ -141,12 +149,13 @@ grep-ast==0.9.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
grpcio==1.71.0
grpcio==1.67.1
# via
# -c requirements/common-constraints.txt
# google-api-core
# grpcio-status
grpcio-status==1.71.0
# litellm
grpcio-status==1.67.1
# via
# -c requirements/common-constraints.txt
# google-api-core
@ -154,7 +163,7 @@ h11==0.16.0
# via
# -c requirements/common-constraints.txt
# httpcore
hf-xet==1.1.0
hf-xet==1.2.0
# via
# -c requirements/common-constraints.txt
# huggingface-hub
@ -162,7 +171,7 @@ httpcore==1.0.9
# via
# -c requirements/common-constraints.txt
# httpx
httplib2==0.22.0
httplib2==0.31.0
# via
# -c requirements/common-constraints.txt
# google-api-python-client
@ -171,12 +180,13 @@ httpx==0.28.1
# via
# -c requirements/common-constraints.txt
# litellm
# mixpanel
# openai
huggingface-hub==0.31.1
huggingface-hub==0.36.0
# via
# -c requirements/common-constraints.txt
# tokenizers
idna==3.10
idna==3.11
# via
# -c requirements/common-constraints.txt
# anyio
@ -196,32 +206,32 @@ jinja2==3.1.6
# via
# -c requirements/common-constraints.txt
# litellm
jiter==0.9.0
jiter==0.12.0
# via
# -c requirements/common-constraints.txt
# openai
json5==0.12.0
json5==0.12.1
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
jsonschema==4.23.0
jsonschema==4.25.1
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
# litellm
jsonschema-specifications==2025.4.1
jsonschema-specifications==2025.9.1
# via
# -c requirements/common-constraints.txt
# jsonschema
litellm==1.68.1
litellm==1.80.10
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
markdown-it-py==3.0.0
markdown-it-py==4.0.0
# via
# -c requirements/common-constraints.txt
# rich
markupsafe==3.0.2
markupsafe==3.0.3
# via
# -c requirements/common-constraints.txt
# jinja2
@ -233,7 +243,7 @@ mdurl==0.1.2
# via
# -c requirements/common-constraints.txt
# markdown-it-py
mixpanel==4.10.1
mixpanel==5.0.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
@ -241,7 +251,7 @@ mslex==1.3.0
# via
# -c requirements/common-constraints.txt
# oslex
multidict==6.4.3
multidict==6.7.0
# via
# -c requirements/common-constraints.txt
# aiohttp
@ -255,7 +265,7 @@ numpy==1.26.4
# -c requirements/common-constraints.txt
# scipy
# soundfile
openai==1.75.0
openai==2.13.0
# via
# -c requirements/common-constraints.txt
# litellm
@ -263,7 +273,7 @@ oslex==0.1.3
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
packaging==24.2
packaging==25.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
@ -277,29 +287,29 @@ pexpect==4.9.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
pillow==11.2.1
pillow==12.0.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
posthog==4.0.1
posthog==7.4.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
prompt-toolkit==3.0.51
prompt-toolkit==3.0.52
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
propcache==0.3.1
propcache==0.4.1
# via
# -c requirements/common-constraints.txt
# aiohttp
# yarl
proto-plus==1.26.1
proto-plus==1.27.0
# via
# -c requirements/common-constraints.txt
# google-ai-generativelanguage
# google-api-core
protobuf==5.29.4
protobuf==5.29.5
# via
# -c requirements/common-constraints.txt
# google-ai-generativelanguage
@ -308,7 +318,7 @@ protobuf==5.29.4
# googleapis-common-protos
# grpcio-status
# proto-plus
psutil==7.0.0
psutil==7.1.3
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
@ -325,21 +335,22 @@ pyasn1-modules==0.4.2
# via
# -c requirements/common-constraints.txt
# google-auth
pycodestyle==2.13.0
pycodestyle==2.14.0
# via
# -c requirements/common-constraints.txt
# flake8
pycparser==2.22
pycparser==2.23
# via
# -c requirements/common-constraints.txt
# cffi
pydantic==2.11.4
pydantic==2.12.5
# via
# -c requirements/common-constraints.txt
# google-generativeai
# litellm
# mixpanel
# openai
pydantic-core==2.33.2
pydantic-core==2.41.5
# via
# -c requirements/common-constraints.txt
# pydantic
@ -347,23 +358,23 @@ pydub==0.25.1
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
pyflakes==3.3.2
pyflakes==3.4.0
# via
# -c requirements/common-constraints.txt
# flake8
pygments==2.19.1
pygments==2.19.2
# via
# -c requirements/common-constraints.txt
# rich
pypandoc==1.15
pypandoc==1.16.2
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
pyparsing==3.2.3
pyparsing==3.2.5
# via
# -c requirements/common-constraints.txt
# httplib2
pyperclip==1.9.0
pyperclip==1.11.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
@ -371,25 +382,25 @@ python-dateutil==2.9.0.post0
# via
# -c requirements/common-constraints.txt
# posthog
python-dotenv==1.1.0
python-dotenv==1.2.1
# via
# -c requirements/common-constraints.txt
# litellm
pyyaml==6.0.2
pyyaml==6.0.3
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
# huggingface-hub
referencing==0.36.2
referencing==0.37.0
# via
# -c requirements/common-constraints.txt
# jsonschema
# jsonschema-specifications
regex==2024.11.6
regex==2025.11.3
# via
# -c requirements/common-constraints.txt
# tiktoken
requests==2.32.3
requests==2.32.5
# via
# -c requirements/common-constraints.txt
# google-api-core
@ -397,11 +408,11 @@ requests==2.32.3
# mixpanel
# posthog
# tiktoken
rich==14.0.0
rich==14.2.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
rpds-py==0.24.0
rpds-py==0.30.0
# via
# -c requirements/common-constraints.txt
# jsonschema
@ -414,14 +425,13 @@ scipy==1.15.3
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
shtab==1.7.2
shtab==1.8.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
six==1.17.0
# via
# -c requirements/common-constraints.txt
# mixpanel
# posthog
# python-dateutil
smmap==5.0.2
@ -431,13 +441,12 @@ smmap==5.0.2
sniffio==1.3.1
# via
# -c requirements/common-constraints.txt
# anyio
# openai
socksio==1.0.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
sounddevice==0.5.1
sounddevice==0.5.3
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
@ -445,15 +454,15 @@ soundfile==0.13.1
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
soupsieve==2.7
soupsieve==2.8.1
# via
# -c requirements/common-constraints.txt
# beautifulsoup4
tiktoken==0.9.0
tiktoken==0.12.0
# via
# -c requirements/common-constraints.txt
# litellm
tokenizers==0.21.1
tokenizers==0.22.1
# via
# -c requirements/common-constraints.txt
# litellm
@ -470,59 +479,60 @@ tree-sitter-c-sharp==0.23.1
# via
# -c requirements/common-constraints.txt
# tree-sitter-language-pack
tree-sitter-embedded-template==0.23.2
tree-sitter-embedded-template==0.25.0
# via
# -c requirements/common-constraints.txt
# tree-sitter-language-pack
tree-sitter-language-pack==0.7.3
tree-sitter-language-pack==0.13.0
# via
# -c requirements/common-constraints.txt
# grep-ast
tree-sitter-yaml==0.7.0
tree-sitter-yaml==0.7.2
# via
# -c requirements/common-constraints.txt
# tree-sitter-language-pack
typing-extensions==4.13.2
typing-extensions==4.15.0
# via
# -c requirements/common-constraints.txt
# aiosignal
# anyio
# beautifulsoup4
# google-generativeai
# huggingface-hub
# openai
# posthog
# pydantic
# pydantic-core
# referencing
# typing-inspection
typing-inspection==0.4.0
typing-inspection==0.4.2
# via
# -c requirements/common-constraints.txt
# pydantic
uritemplate==4.1.1
uritemplate==4.2.0
# via
# -c requirements/common-constraints.txt
# google-api-python-client
urllib3==2.4.0
urllib3==2.6.2
# via
# -c requirements/common-constraints.txt
# mixpanel
# requests
watchfiles==1.0.5
watchfiles==1.1.1
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements.in
wcwidth==0.2.13
wcwidth==0.2.14
# via
# -c requirements/common-constraints.txt
# prompt-toolkit
yarl==1.20.0
yarl==1.22.0
# via
# -c requirements/common-constraints.txt
# aiohttp
zipp==3.21.0
zipp==3.23.0
# via
# -c requirements/common-constraints.txt
# importlib-metadata
tree-sitter==0.23.2; python_version < "3.10"
tree-sitter==0.24.0; python_version >= "3.10"
tree-sitter==0.25.2; python_version >= "3.10"


@ -2,23 +2,27 @@
# uv pip compile --no-strip-extras --output-file=requirements/common-constraints.txt requirements/requirements.in requirements/requirements-browser.in requirements/requirements-dev.in requirements/requirements-help.in requirements/requirements-playwright.in
aiohappyeyeballs==2.6.1
# via aiohttp
aiohttp==3.11.18
aiohttp==3.13.2
# via
# huggingface-hub
# litellm
# llama-index-core
aiosignal==1.3.2
aiosignal==1.4.0
# via aiohttp
altair==5.5.0
aiosqlite==0.22.0
# via llama-index-core
altair==6.0.0
# via streamlit
annotated-types==0.7.0
# via pydantic
anyio==4.9.0
anyio==4.12.0
# via
# httpx
# openai
# watchfiles
attrs==25.3.0
asgiref==3.11.0
# via mixpanel
attrs==25.4.0
# via
# aiohttp
# jsonschema
@ -27,32 +31,32 @@ backoff==2.2.1
# via
# -r requirements/requirements.in
# posthog
banks==2.1.2
banks==2.2.0
# via llama-index-core
beautifulsoup4==4.13.4
beautifulsoup4==4.14.3
# via -r requirements/requirements.in
blinker==1.9.0
# via streamlit
build==1.2.2.post1
build==1.3.0
# via pip-tools
cachetools==5.5.2
cachetools==6.2.4
# via
# google-auth
# streamlit
certifi==2025.4.26
certifi==2025.11.12
# via
# httpcore
# httpx
# requests
cffi==1.17.1
cffi==2.0.0
# via
# sounddevice
# soundfile
cfgv==3.4.0
cfgv==3.5.0
# via pre-commit
charset-normalizer==3.4.2
charset-normalizer==3.4.4
# via requests
click==8.1.8
click==8.3.1
# via
# litellm
# nltk
@ -61,39 +65,38 @@ click==8.1.8
# typer
codespell==2.4.1
# via -r requirements/requirements-dev.in
cogapp==3.4.1
cogapp==3.6.0
# via -r requirements/requirements-dev.in
colorama==0.4.6
# via griffe
configargparse==1.7
configargparse==1.7.1
# via -r requirements/requirements.in
contourpy==1.3.2
contourpy==1.3.3
# via matplotlib
cycler==0.12.1
# via matplotlib
dataclasses-json==0.6.7
# via llama-index-core
deprecated==1.2.18
deprecated==1.3.1
# via
# banks
# llama-index-core
# llama-index-instrumentation
diff-match-patch==20241021
# via -r requirements/requirements.in
dill==0.4.0
# via
# multiprocess
# pathos
dirtyjson==1.0.8
# via llama-index-core
diskcache==5.6.3
# via -r requirements/requirements.in
distlib==0.3.9
distlib==0.4.0
# via virtualenv
distro==1.9.0
# via
# openai
# posthog
filelock==3.18.0
fastuuid==0.14.0
# via litellm
filelock==3.20.1
# via
# huggingface-hub
# torch
@ -101,37 +104,37 @@ filelock==3.18.0
# virtualenv
filetype==1.2.0
# via llama-index-core
flake8==7.2.0
flake8==7.3.0
# via -r requirements/requirements.in
fonttools==4.57.0
fonttools==4.61.1
# via matplotlib
frozenlist==1.6.0
frozenlist==1.8.0
# via
# aiohttp
# aiosignal
fsspec==2025.3.2
fsspec==2025.12.0
# via
# huggingface-hub
# llama-index-core
# torch
gitdb==4.0.12
# via gitpython
gitpython==3.1.44
gitpython==3.1.45
# via
# -r requirements/requirements.in
# streamlit
google-ai-generativelanguage==0.6.15
# via google-generativeai
google-api-core[grpc]==2.24.2
google-api-core[grpc]==2.28.1
# via
# google-ai-generativelanguage
# google-api-python-client
# google-cloud-bigquery
# google-cloud-core
# google-generativeai
google-api-python-client==2.169.0
google-api-python-client==2.187.0
# via google-generativeai
google-auth==2.40.1
google-auth==2.45.0
# via
# google-ai-generativelanguage
# google-api-core
@ -140,43 +143,44 @@ google-auth==2.40.1
# google-cloud-bigquery
# google-cloud-core
# google-generativeai
google-auth-httplib2==0.2.0
google-auth-httplib2==0.3.0
# via google-api-python-client
google-cloud-bigquery==3.31.0
google-cloud-bigquery==3.39.0
# via -r requirements/requirements-dev.in
google-cloud-core==2.4.3
google-cloud-core==2.5.0
# via google-cloud-bigquery
google-crc32c==1.7.1
google-crc32c==1.8.0
# via google-resumable-media
google-generativeai==0.8.5
google-generativeai==0.8.6
# via -r requirements/requirements.in
google-resumable-media==2.7.2
google-resumable-media==2.8.0
# via google-cloud-bigquery
googleapis-common-protos==1.70.0
googleapis-common-protos==1.72.0
# via
# google-api-core
# grpcio-status
greenlet==3.2.2
greenlet==3.3.0
# via
# playwright
# sqlalchemy
grep-ast==0.9.0
# via -r requirements/requirements.in
griffe==1.7.3
griffe==1.15.0
# via banks
grpcio==1.71.0
grpcio==1.67.1
# via
# google-api-core
# grpcio-status
grpcio-status==1.71.0
# litellm
grpcio-status==1.67.1
# via google-api-core
h11==0.16.0
# via httpcore
hf-xet==1.1.0
hf-xet==1.2.0
# via huggingface-hub
httpcore==1.0.9
# via httpx
httplib2==0.22.0
httplib2==0.31.0
# via
# google-api-python-client
# google-auth-httplib2
@@ -184,16 +188,17 @@ httpx==0.28.1
# via
# litellm
# llama-index-core
# mixpanel
# openai
huggingface-hub[inference]==0.31.1
huggingface-hub[inference]==0.36.0
# via
# llama-index-embeddings-huggingface
# sentence-transformers
# tokenizers
# transformers
identify==2.6.10
identify==2.6.15
# via pre-commit
idna==3.10
idna==3.11
# via
# anyio
# httpx
@@ -207,7 +212,7 @@ importlib-metadata==7.2.1
# litellm
importlib-resources==6.5.2
# via -r requirements/requirements.in
iniconfig==2.1.0
iniconfig==2.3.0
# via pytest
jinja2==3.1.6
# via
@@ -216,60 +221,60 @@ jinja2==3.1.6
# litellm
# pydeck
# torch
jiter==0.9.0
jiter==0.12.0
# via openai
joblib==1.5.0
joblib==1.5.3
# via
# nltk
# scikit-learn
json5==0.12.0
json5==0.12.1
# via -r requirements/requirements.in
jsonschema==4.23.0
jsonschema==4.25.1
# via
# -r requirements/requirements.in
# altair
# litellm
jsonschema-specifications==2025.4.1
jsonschema-specifications==2025.9.1
# via jsonschema
kiwisolver==1.4.8
kiwisolver==1.4.9
# via matplotlib
litellm==1.68.1
litellm==1.80.10
# via -r requirements/requirements.in
llama-index-core==0.12.26
# via
# -r requirements/requirements-help.in
# llama-index-embeddings-huggingface
llama-index-embeddings-huggingface==0.5.3
llama-index-core==0.14.10
# via llama-index-embeddings-huggingface
llama-index-embeddings-huggingface==0.6.1
# via -r requirements/requirements-help.in
lox==0.13.0
llama-index-instrumentation==0.4.2
# via llama-index-workflows
llama-index-workflows==2.11.5
# via llama-index-core
lox==1.0.0
# via -r requirements/requirements-dev.in
markdown-it-py==3.0.0
markdown-it-py==4.0.0
# via rich
markupsafe==3.0.2
markupsafe==3.0.3
# via jinja2
marshmallow==3.26.1
# via dataclasses-json
matplotlib==3.10.3
matplotlib==3.10.8
# via -r requirements/requirements-dev.in
mccabe==0.7.0
# via flake8
mdurl==0.1.2
# via markdown-it-py
mixpanel==4.10.1
mixpanel==5.0.0
# via -r requirements/requirements.in
mpmath==1.3.0
# via sympy
mslex==1.3.0
# via oslex
multidict==6.4.3
multidict==6.7.0
# via
# aiohttp
# yarl
multiprocess==0.70.18
# via pathos
mypy-extensions==1.1.0
# via typing-inspect
narwhals==1.38.2
narwhals==2.14.0
# via altair
nest-asyncio==1.6.0
# via llama-index-core
@@ -278,7 +283,7 @@ networkx==3.4.2
# -r requirements/requirements.in
# llama-index-core
# torch
nltk==3.9.1
nltk==3.9.2
# via llama-index-core
nodeenv==1.9.1
# via pre-commit
@@ -295,11 +300,11 @@ numpy==1.26.4
# soundfile
# streamlit
# transformers
openai==1.75.0
openai==2.13.0
# via litellm
oslex==0.1.3
# via -r requirements/requirements.in
packaging==24.2
packaging==25.0
# via
# -r requirements/requirements.in
# altair
@@ -311,56 +316,50 @@ packaging==24.2
# pytest
# streamlit
# transformers
pandas==2.2.3
pandas==2.3.3
# via
# -r requirements/requirements-dev.in
# streamlit
pathos==0.3.4
# via lox
pathspec==0.12.1
# via
# -r requirements/requirements.in
# grep-ast
pexpect==4.9.0
# via -r requirements/requirements.in
pillow==11.2.1
pillow==12.0.0
# via
# -r requirements/requirements.in
# llama-index-core
# matplotlib
# sentence-transformers
# streamlit
pip==25.1.1
pip==25.3
# via pip-tools
pip-tools==7.4.1
pip-tools==7.5.2
# via -r requirements/requirements-dev.in
platformdirs==4.3.8
platformdirs==4.5.1
# via
# banks
# llama-index-core
# virtualenv
playwright==1.52.0
playwright==1.57.0
# via -r requirements/requirements-playwright.in
pluggy==1.5.0
pluggy==1.6.0
# via pytest
posthog==4.0.1
posthog==7.4.0
# via -r requirements/requirements.in
pox==0.3.6
# via pathos
ppft==1.7.7
# via pathos
pre-commit==4.2.0
pre-commit==4.5.1
# via -r requirements/requirements-dev.in
prompt-toolkit==3.0.51
prompt-toolkit==3.0.52
# via -r requirements/requirements.in
propcache==0.3.1
propcache==0.4.1
# via
# aiohttp
# yarl
proto-plus==1.26.1
proto-plus==1.27.0
# via
# google-ai-generativelanguage
# google-api-core
protobuf==5.29.4
protobuf==5.29.5
# via
# google-ai-generativelanguage
# google-api-core
@@ -369,11 +368,11 @@ protobuf==5.29.4
# grpcio-status
# proto-plus
# streamlit
psutil==7.0.0
psutil==7.1.3
# via -r requirements/requirements.in
ptyprocess==0.7.0
# via pexpect
pyarrow==20.0.0
pyarrow==22.0.0
# via streamlit
pyasn1==0.6.1
# via
@@ -381,18 +380,21 @@ pyasn1==0.6.1
# rsa
pyasn1-modules==0.4.2
# via google-auth
pycodestyle==2.13.0
pycodestyle==2.14.0
# via flake8
pycparser==2.22
pycparser==2.23
# via cffi
pydantic==2.11.4
pydantic==2.12.5
# via
# banks
# google-generativeai
# litellm
# llama-index-core
# llama-index-instrumentation
# llama-index-workflows
# mixpanel
# openai
pydantic-core==2.33.2
pydantic-core==2.41.5
# via pydantic
pydeck==0.9.1
# via streamlit
@@ -400,27 +402,29 @@ pydub==0.25.1
# via -r requirements/requirements.in
pyee==13.0.0
# via playwright
pyflakes==3.3.2
pyflakes==3.4.0
# via flake8
pygments==2.19.1
# via rich
pypandoc==1.15
pygments==2.19.2
# via
# pytest
# rich
pypandoc==1.16.2
# via -r requirements/requirements.in
pyparsing==3.2.3
pyparsing==3.2.5
# via
# httplib2
# matplotlib
pyperclip==1.9.0
pyperclip==1.11.0
# via -r requirements/requirements.in
pyproject-hooks==1.2.0
# via
# build
# pip-tools
pytest==8.3.5
pytest==9.0.2
# via
# -r requirements/requirements-dev.in
# pytest-env
pytest-env==1.1.5
pytest-env==1.2.0
# via -r requirements/requirements-dev.in
python-dateutil==2.9.0.post0
# via
@@ -428,27 +432,27 @@ python-dateutil==2.9.0.post0
# matplotlib
# pandas
# posthog
python-dotenv==1.1.0
python-dotenv==1.2.1
# via litellm
pytz==2025.2
# via pandas
pyyaml==6.0.2
pyyaml==6.0.3
# via
# -r requirements/requirements.in
# huggingface-hub
# llama-index-core
# pre-commit
# transformers
referencing==0.36.2
referencing==0.37.0
# via
# jsonschema
# jsonschema-specifications
regex==2024.11.6
regex==2025.11.3
# via
# nltk
# tiktoken
# transformers
requests==2.32.3
requests==2.32.5
# via
# google-api-core
# google-cloud-bigquery
@@ -459,19 +463,19 @@ requests==2.32.3
# streamlit
# tiktoken
# transformers
rich==14.0.0
rich==14.2.0
# via
# -r requirements/requirements.in
# typer
rpds-py==0.24.0
rpds-py==0.30.0
# via
# jsonschema
# referencing
rsa==4.9.1
# via google-auth
safetensors==0.5.3
safetensors==0.7.0
# via transformers
scikit-learn==1.6.1
scikit-learn==1.8.0
# via sentence-transformers
scipy==1.15.3
# via
@@ -480,36 +484,36 @@ scipy==1.15.3
# sentence-transformers
semver==3.0.4
# via -r requirements/requirements-dev.in
sentence-transformers==4.1.0
sentence-transformers==5.2.0
# via llama-index-embeddings-huggingface
setuptools==80.3.1
# via pip-tools
setuptools==80.9.0
# via
# llama-index-core
# pip-tools
# torch
shellingham==1.5.4
# via typer
shtab==1.7.2
shtab==1.8.0
# via -r requirements/requirements.in
six==1.17.0
# via
# mixpanel
# posthog
# python-dateutil
smmap==5.0.2
# via gitdb
sniffio==1.3.1
# via
# anyio
# openai
# via openai
socksio==1.0.0
# via -r requirements/requirements.in
sounddevice==0.5.1
sounddevice==0.5.3
# via -r requirements/requirements.in
soundfile==0.13.1
# via -r requirements/requirements.in
soupsieve==2.7
soupsieve==2.8.1
# via beautifulsoup4
sqlalchemy[asyncio]==2.0.40
sqlalchemy[asyncio]==2.0.45
# via llama-index-core
streamlit==1.45.0
streamlit==1.52.2
# via -r requirements/requirements-browser.in
sympy==1.14.0
# via torch
@@ -519,21 +523,19 @@ tenacity==9.1.2
# streamlit
threadpoolctl==3.6.0
# via scikit-learn
tiktoken==0.9.0
tiktoken==0.12.0
# via
# litellm
# llama-index-core
tokenizers==0.21.1
tokenizers==0.22.1
# via
# litellm
# transformers
toml==0.10.2
# via streamlit
torch==2.2.2
# via
# -r requirements/requirements-help.in
# sentence-transformers
tornado==6.4.2
torch==2.9.1
# via sentence-transformers
tornado==6.5.4
# via streamlit
tqdm==4.67.1
# via
@@ -544,29 +546,32 @@ tqdm==4.67.1
# openai
# sentence-transformers
# transformers
transformers==4.51.3
transformers==4.57.3
# via sentence-transformers
tree-sitter==0.24.0
tree-sitter==0.25.2
# via tree-sitter-language-pack
tree-sitter-c-sharp==0.23.1
# via tree-sitter-language-pack
tree-sitter-embedded-template==0.23.2
tree-sitter-embedded-template==0.25.0
# via tree-sitter-language-pack
tree-sitter-language-pack==0.7.3
tree-sitter-language-pack==0.13.0
# via grep-ast
tree-sitter-yaml==0.7.0
tree-sitter-yaml==0.7.2
# via tree-sitter-language-pack
typer==0.15.3
typer==0.20.0
# via -r requirements/requirements-dev.in
typing-extensions==4.13.2
typing-extensions==4.15.0
# via
# aiosignal
# altair
# anyio
# beautifulsoup4
# google-generativeai
# huggingface-hub
# llama-index-core
# llama-index-workflows
# openai
# posthog
# pydantic
# pydantic-core
# pyee
@@ -582,31 +587,29 @@ typing-inspect==0.9.0
# via
# dataclasses-json
# llama-index-core
typing-inspection==0.4.0
typing-inspection==0.4.2
# via pydantic
tzdata==2025.2
tzdata==2025.3
# via pandas
uritemplate==4.1.1
uritemplate==4.2.0
# via google-api-python-client
urllib3==2.4.0
# via
# mixpanel
# requests
uv==0.7.3
urllib3==2.6.2
# via requests
uv==0.9.18
# via -r requirements/requirements-dev.in
virtualenv==20.31.2
virtualenv==20.35.4
# via pre-commit
watchfiles==1.0.5
watchfiles==1.1.1
# via -r requirements/requirements.in
wcwidth==0.2.13
wcwidth==0.2.14
# via prompt-toolkit
wheel==0.45.1
# via pip-tools
wrapt==1.17.2
wrapt==2.0.1
# via
# deprecated
# llama-index-core
yarl==1.20.0
yarl==1.22.0
# via aiohttp
zipp==3.21.0
zipp==3.23.0
# via importlib-metadata


@@ -1,10 +1,10 @@
# This file was autogenerated by uv via the following command:
# uv pip compile --no-strip-extras --constraint=requirements/common-constraints.txt --output-file=requirements/requirements-browser.txt requirements/requirements-browser.in
altair==5.5.0
altair==6.0.0
# via
# -c requirements/common-constraints.txt
# streamlit
attrs==25.3.0
attrs==25.4.0
# via
# -c requirements/common-constraints.txt
# jsonschema
@@ -13,19 +13,19 @@ blinker==1.9.0
# via
# -c requirements/common-constraints.txt
# streamlit
cachetools==5.5.2
cachetools==6.2.4
# via
# -c requirements/common-constraints.txt
# streamlit
certifi==2025.4.26
certifi==2025.11.12
# via
# -c requirements/common-constraints.txt
# requests
charset-normalizer==3.4.2
charset-normalizer==3.4.4
# via
# -c requirements/common-constraints.txt
# requests
click==8.1.8
click==8.3.1
# via
# -c requirements/common-constraints.txt
# streamlit
@@ -33,11 +33,11 @@ gitdb==4.0.12
# via
# -c requirements/common-constraints.txt
# gitpython
gitpython==3.1.44
gitpython==3.1.45
# via
# -c requirements/common-constraints.txt
# streamlit
idna==3.10
idna==3.11
# via
# -c requirements/common-constraints.txt
# requests
@@ -46,19 +46,19 @@ jinja2==3.1.6
# -c requirements/common-constraints.txt
# altair
# pydeck
jsonschema==4.23.0
jsonschema==4.25.1
# via
# -c requirements/common-constraints.txt
# altair
jsonschema-specifications==2025.4.1
jsonschema-specifications==2025.9.1
# via
# -c requirements/common-constraints.txt
# jsonschema
markupsafe==3.0.2
markupsafe==3.0.3
# via
# -c requirements/common-constraints.txt
# jinja2
narwhals==1.38.2
narwhals==2.14.0
# via
# -c requirements/common-constraints.txt
# altair
@@ -68,24 +68,24 @@ numpy==1.26.4
# pandas
# pydeck
# streamlit
packaging==24.2
packaging==25.0
# via
# -c requirements/common-constraints.txt
# altair
# streamlit
pandas==2.2.3
pandas==2.3.3
# via
# -c requirements/common-constraints.txt
# streamlit
pillow==11.2.1
pillow==12.0.0
# via
# -c requirements/common-constraints.txt
# streamlit
protobuf==5.29.4
protobuf==5.29.5
# via
# -c requirements/common-constraints.txt
# streamlit
pyarrow==20.0.0
pyarrow==22.0.0
# via
# -c requirements/common-constraints.txt
# streamlit
@@ -101,16 +101,16 @@ pytz==2025.2
# via
# -c requirements/common-constraints.txt
# pandas
referencing==0.36.2
referencing==0.37.0
# via
# -c requirements/common-constraints.txt
# jsonschema
# jsonschema-specifications
requests==2.32.3
requests==2.32.5
# via
# -c requirements/common-constraints.txt
# streamlit
rpds-py==0.24.0
rpds-py==0.30.0
# via
# -c requirements/common-constraints.txt
# jsonschema
@@ -123,7 +123,7 @@ smmap==5.0.2
# via
# -c requirements/common-constraints.txt
# gitdb
streamlit==1.45.0
streamlit==1.52.2
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-browser.in
@@ -135,21 +135,21 @@ toml==0.10.2
# via
# -c requirements/common-constraints.txt
# streamlit
tornado==6.4.2
tornado==6.5.4
# via
# -c requirements/common-constraints.txt
# streamlit
typing-extensions==4.13.2
typing-extensions==4.15.0
# via
# -c requirements/common-constraints.txt
# altair
# referencing
# streamlit
tzdata==2025.2
tzdata==2025.3
# via
# -c requirements/common-constraints.txt
# pandas
urllib3==2.4.0
urllib3==2.6.2
# via
# -c requirements/common-constraints.txt
# requests


@@ -1,26 +1,26 @@
# This file was autogenerated by uv via the following command:
# uv pip compile --no-strip-extras --constraint=requirements/common-constraints.txt --output-file=requirements/requirements-dev.txt requirements/requirements-dev.in
build==1.2.2.post1
build==1.3.0
# via
# -c requirements/common-constraints.txt
# pip-tools
cachetools==5.5.2
cachetools==6.2.4
# via
# -c requirements/common-constraints.txt
# google-auth
certifi==2025.4.26
certifi==2025.11.12
# via
# -c requirements/common-constraints.txt
# requests
cfgv==3.4.0
cfgv==3.5.0
# via
# -c requirements/common-constraints.txt
# pre-commit
charset-normalizer==3.4.2
charset-normalizer==3.4.4
# via
# -c requirements/common-constraints.txt
# requests
click==8.1.8
click==8.3.1
# via
# -c requirements/common-constraints.txt
# pip-tools
@@ -29,11 +29,11 @@ codespell==2.4.1
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
cogapp==3.4.1
cogapp==3.6.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
contourpy==1.3.2
contourpy==1.3.3
# via
# -c requirements/common-constraints.txt
# matplotlib
@@ -41,69 +41,64 @@ cycler==0.12.1
# via
# -c requirements/common-constraints.txt
# matplotlib
dill==0.4.0
# via
# -c requirements/common-constraints.txt
# multiprocess
# pathos
distlib==0.3.9
distlib==0.4.0
# via
# -c requirements/common-constraints.txt
# virtualenv
filelock==3.18.0
filelock==3.20.1
# via
# -c requirements/common-constraints.txt
# virtualenv
fonttools==4.57.0
fonttools==4.61.1
# via
# -c requirements/common-constraints.txt
# matplotlib
google-api-core[grpc]==2.24.2
google-api-core[grpc]==2.28.1
# via
# -c requirements/common-constraints.txt
# google-cloud-bigquery
# google-cloud-core
google-auth==2.40.1
google-auth==2.45.0
# via
# -c requirements/common-constraints.txt
# google-api-core
# google-cloud-bigquery
# google-cloud-core
google-cloud-bigquery==3.31.0
google-cloud-bigquery==3.39.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
google-cloud-core==2.4.3
google-cloud-core==2.5.0
# via
# -c requirements/common-constraints.txt
# google-cloud-bigquery
google-crc32c==1.7.1
google-crc32c==1.8.0
# via
# -c requirements/common-constraints.txt
# google-resumable-media
google-resumable-media==2.7.2
google-resumable-media==2.8.0
# via
# -c requirements/common-constraints.txt
# google-cloud-bigquery
googleapis-common-protos==1.70.0
googleapis-common-protos==1.72.0
# via
# -c requirements/common-constraints.txt
# google-api-core
# grpcio-status
grpcio==1.71.0
grpcio==1.67.1
# via
# -c requirements/common-constraints.txt
# google-api-core
# grpcio-status
grpcio-status==1.71.0
grpcio-status==1.67.1
# via
# -c requirements/common-constraints.txt
# google-api-core
identify==2.6.10
identify==2.6.15
# via
# -c requirements/common-constraints.txt
# pre-commit
idna==3.10
idna==3.11
# via
# -c requirements/common-constraints.txt
# requests
@@ -111,23 +106,23 @@ imgcat==0.6.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
iniconfig==2.1.0
iniconfig==2.3.0
# via
# -c requirements/common-constraints.txt
# pytest
kiwisolver==1.4.8
kiwisolver==1.4.9
# via
# -c requirements/common-constraints.txt
# matplotlib
lox==0.13.0
lox==1.0.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
markdown-it-py==3.0.0
markdown-it-py==4.0.0
# via
# -c requirements/common-constraints.txt
# rich
matplotlib==3.10.3
matplotlib==3.10.8
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
@@ -135,10 +130,6 @@ mdurl==0.1.2
# via
# -c requirements/common-constraints.txt
# markdown-it-py
multiprocess==0.70.18
# via
# -c requirements/common-constraints.txt
# pathos
nodeenv==1.9.1
# via
# -c requirements/common-constraints.txt
@@ -149,58 +140,46 @@ numpy==1.26.4
# contourpy
# matplotlib
# pandas
packaging==24.2
packaging==25.0
# via
# -c requirements/common-constraints.txt
# build
# google-cloud-bigquery
# matplotlib
# pytest
pandas==2.2.3
pandas==2.3.3
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
pathos==0.3.4
# via
# -c requirements/common-constraints.txt
# lox
pillow==11.2.1
pillow==12.0.0
# via
# -c requirements/common-constraints.txt
# matplotlib
pip==25.1.1
pip==25.3
# via
# -c requirements/common-constraints.txt
# pip-tools
pip-tools==7.4.1
pip-tools==7.5.2
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
platformdirs==4.3.8
platformdirs==4.5.1
# via
# -c requirements/common-constraints.txt
# virtualenv
pluggy==1.5.0
pluggy==1.6.0
# via
# -c requirements/common-constraints.txt
# pytest
pox==0.3.6
# via
# -c requirements/common-constraints.txt
# pathos
ppft==1.7.7
# via
# -c requirements/common-constraints.txt
# pathos
pre-commit==4.2.0
pre-commit==4.5.1
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
proto-plus==1.26.1
proto-plus==1.27.0
# via
# -c requirements/common-constraints.txt
# google-api-core
protobuf==5.29.4
protobuf==5.29.5
# via
# -c requirements/common-constraints.txt
# google-api-core
@@ -216,11 +195,12 @@ pyasn1-modules==0.4.2
# via
# -c requirements/common-constraints.txt
# google-auth
pygments==2.19.1
pygments==2.19.2
# via
# -c requirements/common-constraints.txt
# pytest
# rich
pyparsing==3.2.3
pyparsing==3.2.5
# via
# -c requirements/common-constraints.txt
# matplotlib
@@ -229,12 +209,12 @@ pyproject-hooks==1.2.0
# -c requirements/common-constraints.txt
# build
# pip-tools
pytest==8.3.5
pytest==9.0.2
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
# pytest-env
pytest-env==1.1.5
pytest-env==1.2.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
@@ -248,16 +228,16 @@ pytz==2025.2
# via
# -c requirements/common-constraints.txt
# pandas
pyyaml==6.0.2
pyyaml==6.0.3
# via
# -c requirements/common-constraints.txt
# pre-commit
requests==2.32.3
requests==2.32.5
# via
# -c requirements/common-constraints.txt
# google-api-core
# google-cloud-bigquery
rich==14.0.0
rich==14.2.0
# via
# -c requirements/common-constraints.txt
# typer
@@ -269,7 +249,7 @@ semver==3.0.4
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
setuptools==80.3.1
setuptools==80.9.0
# via
# -c requirements/common-constraints.txt
# pip-tools
@@ -281,27 +261,27 @@ six==1.17.0
# via
# -c requirements/common-constraints.txt
# python-dateutil
typer==0.15.3
typer==0.20.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
typing-extensions==4.13.2
typing-extensions==4.15.0
# via
# -c requirements/common-constraints.txt
# typer
tzdata==2025.2
tzdata==2025.3
# via
# -c requirements/common-constraints.txt
# pandas
urllib3==2.4.0
urllib3==2.6.2
# via
# -c requirements/common-constraints.txt
# requests
uv==0.7.3
uv==0.9.18
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-dev.in
virtualenv==20.31.2
virtualenv==20.35.4
# via
# -c requirements/common-constraints.txt
# pre-commit


@@ -5,7 +5,7 @@ numpy<2
# Mac x86 only supports 2.2.2
# https://discuss.pytorch.org/t/why-no-macosx-x86-64-build-after-torch-2-2-2-cp39-none-macosx-10-9-x86-64-whl/204546/2
torch==2.2.2
# torch==2.2.2
# Later versions break test_help in GitHub Actions on Windows and Ubuntu
llama-index-core==0.12.26
# llama-index-core==0.12.26


@@ -4,42 +4,46 @@ aiohappyeyeballs==2.6.1
# via
# -c requirements/common-constraints.txt
# aiohttp
aiohttp==3.11.18
aiohttp==3.13.2
# via
# -c requirements/common-constraints.txt
# huggingface-hub
# llama-index-core
aiosignal==1.3.2
aiosignal==1.4.0
# via
# -c requirements/common-constraints.txt
# aiohttp
aiosqlite==0.22.0
# via
# -c requirements/common-constraints.txt
# llama-index-core
annotated-types==0.7.0
# via
# -c requirements/common-constraints.txt
# pydantic
anyio==4.9.0
anyio==4.12.0
# via
# -c requirements/common-constraints.txt
# httpx
attrs==25.3.0
attrs==25.4.0
# via
# -c requirements/common-constraints.txt
# aiohttp
banks==2.1.2
banks==2.2.0
# via
# -c requirements/common-constraints.txt
# llama-index-core
certifi==2025.4.26
certifi==2025.11.12
# via
# -c requirements/common-constraints.txt
# httpcore
# httpx
# requests
charset-normalizer==3.4.2
charset-normalizer==3.4.4
# via
# -c requirements/common-constraints.txt
# requests
click==8.1.8
click==8.3.1
# via
# -c requirements/common-constraints.txt
# nltk
@@ -51,16 +55,17 @@ dataclasses-json==0.6.7
# via
# -c requirements/common-constraints.txt
# llama-index-core
deprecated==1.2.18
deprecated==1.3.1
# via
# -c requirements/common-constraints.txt
# banks
# llama-index-core
# llama-index-instrumentation
dirtyjson==1.0.8
# via
# -c requirements/common-constraints.txt
# llama-index-core
filelock==3.18.0
filelock==3.20.1
# via
# -c requirements/common-constraints.txt
# huggingface-hub
@@ -70,22 +75,22 @@ filetype==1.2.0
# via
# -c requirements/common-constraints.txt
# llama-index-core
frozenlist==1.6.0
frozenlist==1.8.0
# via
# -c requirements/common-constraints.txt
# aiohttp
# aiosignal
fsspec==2025.3.2
fsspec==2025.12.0
# via
# -c requirements/common-constraints.txt
# huggingface-hub
# llama-index-core
# torch
greenlet==3.2.2
greenlet==3.3.0
# via
# -c requirements/common-constraints.txt
# sqlalchemy
griffe==1.7.3
griffe==1.15.0
# via
# -c requirements/common-constraints.txt
# banks
@@ -93,7 +98,7 @@ h11==0.16.0
# via
# -c requirements/common-constraints.txt
# httpcore
hf-xet==1.1.0
hf-xet==1.2.0
# via
# -c requirements/common-constraints.txt
# huggingface-hub
@@ -105,14 +110,14 @@ httpx==0.28.1
# via
# -c requirements/common-constraints.txt
# llama-index-core
huggingface-hub[inference]==0.31.1
huggingface-hub[inference]==0.36.0
# via
# -c requirements/common-constraints.txt
# llama-index-embeddings-huggingface
# sentence-transformers
# tokenizers
# transformers
idna==3.10
idna==3.11
# via
# -c requirements/common-constraints.txt
# anyio
@@ -124,21 +129,28 @@ jinja2==3.1.6
# -c requirements/common-constraints.txt
# banks
# torch
joblib==1.5.0
joblib==1.5.3
# via
# -c requirements/common-constraints.txt
# nltk
# scikit-learn
llama-index-core==0.12.26
llama-index-core==0.14.10
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-help.in
# llama-index-embeddings-huggingface
llama-index-embeddings-huggingface==0.5.3
llama-index-embeddings-huggingface==0.6.1
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-help.in
markupsafe==3.0.2
llama-index-instrumentation==0.4.2
# via
# -c requirements/common-constraints.txt
# llama-index-workflows
llama-index-workflows==2.11.5
# via
# -c requirements/common-constraints.txt
# llama-index-core
markupsafe==3.0.3
# via
# -c requirements/common-constraints.txt
# jinja2
@@ -150,7 +162,7 @@ mpmath==1.3.0
# via
# -c requirements/common-constraints.txt
# sympy
multidict==6.4.3
multidict==6.7.0
# via
# -c requirements/common-constraints.txt
# aiohttp
@@ -168,7 +180,7 @@ networkx==3.4.2
# -c requirements/common-constraints.txt
# llama-index-core
# torch
nltk==3.9.1
nltk==3.9.2
# via
# -c requirements/common-constraints.txt
# llama-index-core
@@ -180,59 +192,61 @@ numpy==1.26.4
# scikit-learn
# scipy
# transformers
packaging==24.2
packaging==25.0
# via
# -c requirements/common-constraints.txt
# huggingface-hub
# marshmallow
# transformers
pillow==11.2.1
pillow==12.0.0
# via
# -c requirements/common-constraints.txt
# llama-index-core
# sentence-transformers
platformdirs==4.3.8
platformdirs==4.5.1
# via
# -c requirements/common-constraints.txt
# banks
propcache==0.3.1
# llama-index-core
propcache==0.4.1
# via
# -c requirements/common-constraints.txt
# aiohttp
# yarl
pydantic==2.11.4
pydantic==2.12.5
# via
# -c requirements/common-constraints.txt
# banks
# llama-index-core
pydantic-core==2.33.2
# llama-index-instrumentation
# llama-index-workflows
pydantic-core==2.41.5
# via
# -c requirements/common-constraints.txt
# pydantic
pyyaml==6.0.2
pyyaml==6.0.3
# via
# -c requirements/common-constraints.txt
# huggingface-hub
# llama-index-core
# transformers
regex==2024.11.6
regex==2025.11.3
# via
# -c requirements/common-constraints.txt
# nltk
# tiktoken
# transformers
requests==2.32.3
requests==2.32.5
# via
# -c requirements/common-constraints.txt
# huggingface-hub
# llama-index-core
# tiktoken
# transformers
safetensors==0.5.3
safetensors==0.7.0
# via
# -c requirements/common-constraints.txt
# transformers
scikit-learn==1.6.1
scikit-learn==1.8.0
# via
# -c requirements/common-constraints.txt
# sentence-transformers
@@ -241,15 +255,16 @@ scipy==1.15.3
# -c requirements/common-constraints.txt
# scikit-learn
# sentence-transformers
sentence-transformers==4.1.0
sentence-transformers==5.2.0
# via
# -c requirements/common-constraints.txt
# llama-index-embeddings-huggingface
sniffio==1.3.1
setuptools==80.9.0
# via
# -c requirements/common-constraints.txt
# anyio
sqlalchemy[asyncio]==2.0.40
# llama-index-core
# torch
sqlalchemy[asyncio]==2.0.45
# via
# -c requirements/common-constraints.txt
# llama-index-core
@@ -265,18 +280,17 @@ threadpoolctl==3.6.0
# via
# -c requirements/common-constraints.txt
# scikit-learn
tiktoken==0.9.0
tiktoken==0.12.0
# via
# -c requirements/common-constraints.txt
# llama-index-core
tokenizers==0.21.1
tokenizers==0.22.1
# via
# -c requirements/common-constraints.txt
# transformers
torch==2.2.2
torch==2.9.1
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-help.in
# sentence-transformers
tqdm==4.67.1
# via
@@ -286,16 +300,18 @@ tqdm==4.67.1
# nltk
# sentence-transformers
# transformers
transformers==4.51.3
transformers==4.57.3
# via
# -c requirements/common-constraints.txt
# sentence-transformers
typing-extensions==4.13.2
typing-extensions==4.15.0
# via
# -c requirements/common-constraints.txt
# aiosignal
# anyio
# huggingface-hub
# llama-index-core
# llama-index-workflows
# pydantic
# pydantic-core
# sentence-transformers
@@ -308,20 +324,20 @@ typing-inspect==0.9.0
# -c requirements/common-constraints.txt
# dataclasses-json
# llama-index-core
typing-inspection==0.4.0
typing-inspection==0.4.2
# via
# -c requirements/common-constraints.txt
# pydantic
urllib3==2.4.0
urllib3==2.6.2
# via
# -c requirements/common-constraints.txt
# requests
wrapt==1.17.2
wrapt==2.0.1
# via
# -c requirements/common-constraints.txt
# deprecated
# llama-index-core
yarl==1.20.0
yarl==1.22.0
# via
# -c requirements/common-constraints.txt
# aiohttp


@@ -1,10 +1,10 @@
# This file was autogenerated by uv via the following command:
# uv pip compile --no-strip-extras --constraint=requirements/common-constraints.txt --output-file=requirements/requirements-playwright.txt requirements/requirements-playwright.in
greenlet==3.2.2
greenlet==3.3.0
# via
# -c requirements/common-constraints.txt
# playwright
playwright==1.52.0
playwright==1.57.0
# via
# -c requirements/common-constraints.txt
# -r requirements/requirements-playwright.in
@@ -12,7 +12,7 @@ pyee==13.0.0
# via
# -c requirements/common-constraints.txt
# playwright
typing-extensions==4.13.2
typing-extensions==4.15.0
# via
# -c requirements/common-constraints.txt
# pyee


@@ -35,12 +35,16 @@ google-generativeai
# in matplotlib and a bunch of other deps
# https://github.com/networkx/networkx/blob/d7132daa8588f653eacac7a5bae1ee85a183fa43/pyproject.toml#L57
# We really only need networkx itself and scipy for the repomap.
networkx
#
# >=3.5 seems not to be available for py3.10
networkx<3.5
# This is the one networkx dependency that we need.
# Including it here explicitly because we
# didn't specify networkx[default] above.
scipy
#
# 1.16 onwards only supports python3.11+
scipy<1.16
# GitHub Release action failing on "KeyError: 'home-page'"
# https://github.com/pypa/twine/blob/6fbf880ee60915cf1666348c4bdd78a10415f2ac/twine/__init__.py#L40


@@ -1,3 +1,3 @@
tree-sitter==0.23.2; python_version < "3.10"
tree-sitter==0.24.0; python_version >= "3.10"
tree-sitter==0.25.2; python_version >= "3.10"
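The pins above use a PEP 508 environment marker (`python_version`) so the same requirements file selects a different tree-sitter release per interpreter. A minimal stdlib sketch of the same selection logic; `pick_tree_sitter_pin` is a hypothetical helper for illustration, not part of the repo:

```python
import sys


def pick_tree_sitter_pin(version_info=sys.version_info):
    # Mirrors the markers: python_version < "3.10" gets the old pin,
    # python_version >= "3.10" gets the new one.
    if version_info[:2] < (3, 10):
        return 'tree-sitter==0.23.2'
    return 'tree-sitter==0.25.2'


print(pick_tree_sitter_pin((3, 9)))   # tree-sitter==0.23.2
print(pick_tree_sitter_pin((3, 12)))  # tree-sitter==0.25.2
```

In the real file, pip (or uv) evaluates the marker itself at install time; no such helper is needed.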


@@ -5,7 +5,7 @@ FROM bretfisher/jekyll-serve
WORKDIR /site
# Copy the current directory contents into the container at /srv/jekyll
COPY website /site
COPY aider/website /site
RUN apt-get update && apt-get install libcurl4


@@ -89,8 +89,13 @@ def get_commit_authors(commits):
    commit_to_author = dict()
    for commit in commits:
        author = run(["git", "show", "-s", "--format=%an", commit]).strip()
        commit_message = run(["git", "show", "-s", "--format=%s", commit]).strip()
        if commit_message.lower().startswith("aider:"):
        subject = run(["git", "show", "-s", "--format=%s", commit]).strip()
        full_message = run(["git", "show", "-s", "--format=%B", commit]).strip()
        lower_subject = subject.lower()
        lower_full = full_message.lower()
        if lower_subject.startswith("aider:") or "co-authored-by: aider" in lower_full:
            author += " (aider)"
        commit_to_author[commit] = author
    return commit_to_author

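The change above widens the "aider-authored" test from subject prefixes alone to also match `Co-authored-by: aider` trailers in the full commit message. That predicate can be sketched standalone; `is_aider_commit` is a hypothetical helper name, not a function in the script:

```python
def is_aider_commit(subject, full_message):
    # A commit counts as aider-authored when its subject starts with
    # "aider:" or its body carries an aider Co-authored-by trailer.
    lower_subject = subject.lower()
    lower_full = full_message.lower()
    return lower_subject.startswith("aider:") or "co-authored-by: aider" in lower_full


print(is_aider_commit("aider: fix bug", "aider: fix bug"))  # True
print(is_aider_commit(
    "feat: Add gemini-3-flash-preview model entries",
    "feat: Add gemini-3-flash-preview model entries\n\n"
    "Co-authored-by: aider (gemini/gemini-3-flash-preview) <aider@aider.chat>",
))  # True
print(is_aider_commit("bump deps", "bump deps"))  # False
```

Checking the full `%B` message rather than only the `%s` subject is what lets commits like the ones in this range, which carry the trailer but not the prefix, get credited to aider.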

@@ -112,6 +112,8 @@ def main():
    cmd = [
        "aider",
        "--model",
        "gpt-5",
        hist_path,
        "--read",
        log_path,


@@ -834,6 +834,36 @@ two
        self.assertNotIn(fname2, str(coder.abs_fnames))
        self.assertNotIn(fname3, str(coder.abs_fnames))

    def test_skip_gitignored_files_on_init(self):
        with GitTemporaryDirectory() as _:
            repo_path = Path(".")
            repo = git.Repo.init(repo_path)

            ignored_file = repo_path / "ignored_by_git.txt"
            ignored_file.write_text("This file should be ignored by git.")

            regular_file = repo_path / "regular_file.txt"
            regular_file.write_text("This is a regular file.")

            gitignore_content = "ignored_by_git.txt\n"
            (repo_path / ".gitignore").write_text(gitignore_content)

            repo.index.add([str(regular_file), ".gitignore"])
            repo.index.commit("Initial commit with gitignore and regular file")

            mock_io = MagicMock()
            mock_io.tool_warning = MagicMock()

            fnames_to_add = [str(ignored_file), str(regular_file)]

            coder = Coder.create(self.GPT35, None, mock_io, fnames=fnames_to_add)

            self.assertNotIn(str(ignored_file.resolve()), coder.abs_fnames)
            self.assertIn(str(regular_file.resolve()), coder.abs_fnames)
            mock_io.tool_warning.assert_any_call(
                f"Skipping {ignored_file.name} that matches gitignore spec."
            )

    def test_check_for_urls(self):
        io = InputOutput(yes=True)
        coder = Coder.create(self.GPT35, None, io=io)
@@ -1181,6 +1211,122 @@ This command will print 'Hello, World!' to the console."""
sanity_check_messages(coder.cur_messages)
self.assertEqual(coder.cur_messages[-1]["role"], "assistant")
def test_normalize_language(self):
coder = Coder.create(self.GPT35, None, io=InputOutput())
# Test None and empty
self.assertIsNone(coder.normalize_language(None))
self.assertIsNone(coder.normalize_language(""))
# Test "C" and "POSIX"
self.assertIsNone(coder.normalize_language("C"))
self.assertIsNone(coder.normalize_language("POSIX"))
# Test already formatted names
self.assertEqual(coder.normalize_language("English"), "English")
self.assertEqual(coder.normalize_language("French"), "French")
# Test common locale codes (fallback map, assuming babel is not installed or fails)
with patch("aider.coders.base_coder.Locale", None):
self.assertEqual(coder.normalize_language("en_US"), "English")
self.assertEqual(coder.normalize_language("fr_FR"), "French")
self.assertEqual(coder.normalize_language("es"), "Spanish")
self.assertEqual(coder.normalize_language("de_DE.UTF-8"), "German")
self.assertEqual(
coder.normalize_language("zh-CN"), "Chinese"
) # Test hyphen in fallback
self.assertEqual(coder.normalize_language("ja"), "Japanese")
self.assertEqual(
coder.normalize_language("unknown_code"), "unknown_code"
) # Fallback to original
# Test with babel.Locale mocked (available)
mock_babel_locale = MagicMock()
mock_locale_instance = MagicMock()
mock_babel_locale.parse.return_value = mock_locale_instance
with patch("aider.coders.base_coder.Locale", mock_babel_locale):
mock_locale_instance.get_display_name.return_value = "english" # For en_US
self.assertEqual(coder.normalize_language("en_US"), "English")
mock_babel_locale.parse.assert_called_with("en_US")
mock_locale_instance.get_display_name.assert_called_with("en")
mock_locale_instance.get_display_name.return_value = "french" # For fr-FR
self.assertEqual(coder.normalize_language("fr-FR"), "French") # Test with hyphen
mock_babel_locale.parse.assert_called_with("fr_FR") # Hyphen replaced
mock_locale_instance.get_display_name.assert_called_with("en")
# Test with babel.Locale raising an exception (simulating parse failure)
mock_babel_locale_error = MagicMock()
mock_babel_locale_error.parse.side_effect = Exception("Babel parse error")
with patch("aider.coders.base_coder.Locale", mock_babel_locale_error):
self.assertEqual(coder.normalize_language("en_US"), "English") # Falls back to map
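The fallback path the test exercises (no babel available) can be sketched as: strip the encoding suffix, normalize the separator, then look up the base code in a small map. The mapping below is illustrative, not aider's actual table:

```python
def normalize_language_fallback(code, mapping=None):
    # Sketch of the no-babel fallback; the mapping is a stand-in.
    if not code or code in ("C", "POSIX"):
        return None
    mapping = mapping or {
        "en": "English", "fr": "French", "es": "Spanish",
        "de": "German", "zh": "Chinese", "ja": "Japanese",
    }
    code = code.split(".")[0].replace("-", "_")  # "de_DE.UTF-8" -> "de_DE"
    base = code.split("_")[0]                    # "de_DE" -> "de"
    return mapping.get(base, code)               # fall back to the original code

print(normalize_language_fallback("de_DE.UTF-8"))   # German
print(normalize_language_fallback("zh-CN"))         # Chinese
print(normalize_language_fallback("unknown_code"))  # unknown_code
```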
def test_get_user_language(self):
io = InputOutput()
coder = Coder.create(self.GPT35, None, io=io)
# 1. Test with self.chat_language set
coder.chat_language = "fr_CA"
with patch.object(coder, "normalize_language", return_value="French Canadian") as mock_norm:
self.assertEqual(coder.get_user_language(), "French Canadian")
mock_norm.assert_called_once_with("fr_CA")
coder.chat_language = None # Reset
# 2. Test with locale.getlocale()
with patch("locale.getlocale", return_value=("en_GB", "UTF-8")) as mock_getlocale:
with patch.object(
coder, "normalize_language", return_value="British English"
) as mock_norm:
self.assertEqual(coder.get_user_language(), "British English")
mock_getlocale.assert_called_once()
mock_norm.assert_called_once_with("en_GB")
# Test with locale.getlocale() returning None or empty
with patch("locale.getlocale", return_value=(None, None)) as mock_getlocale:
with patch("os.environ.get") as mock_env_get: # Ensure env vars are not used yet
mock_env_get.return_value = None
self.assertIsNone(coder.get_user_language()) # Should be None if nothing found
# 3. Test with environment variables: LANG
with patch(
"locale.getlocale", side_effect=Exception("locale error")
): # Mock locale to fail
with patch("os.environ.get") as mock_env_get:
mock_env_get.side_effect = lambda key: "de_DE.UTF-8" if key == "LANG" else None
with patch.object(coder, "normalize_language", return_value="German") as mock_norm:
self.assertEqual(coder.get_user_language(), "German")
mock_env_get.assert_any_call("LANG")
mock_norm.assert_called_once_with("de_DE")
# Test LANGUAGE: the code checks the env vars in a fixed order, so we mock
# os.environ.get to return a value only when asked for LANGUAGE
with patch("locale.getlocale", side_effect=Exception("locale error")):
with patch("os.environ.get") as mock_env_get:
mock_env_get.side_effect = lambda key: "es_ES" if key == "LANGUAGE" else None
with patch.object(coder, "normalize_language", return_value="Spanish") as mock_norm:
self.assertEqual(coder.get_user_language(), "Spanish")
mock_env_get.assert_any_call("LANGUAGE") # LANG would be called first
mock_norm.assert_called_once_with("es_ES")
# 4. Test priority: chat_language > locale > env
coder.chat_language = "it_IT"
with patch("locale.getlocale", return_value=("en_US", "UTF-8")) as mock_getlocale:
with patch("os.environ.get", return_value="de_DE") as mock_env_get:
with patch.object(
coder, "normalize_language", side_effect=lambda x: x.upper()
) as mock_norm:
self.assertEqual(coder.get_user_language(), "IT_IT") # From chat_language
mock_norm.assert_called_once_with("it_IT")
mock_getlocale.assert_not_called()
mock_env_get.assert_not_called()
coder.chat_language = None
# 5. Test when no language is found
with patch("locale.getlocale", side_effect=Exception("locale error")):
with patch("os.environ.get", return_value=None) as mock_env_get:
self.assertIsNone(coder.get_user_language())
def test_architect_coder_auto_accept_true(self):
with GitTemporaryDirectory():
io = InputOutput(yes=True)

@@ -1621,6 +1621,33 @@ class TestCommands(TestCase):
# Clean up: remove the test file from the home directory
test_file.unlink()
# pytest tests/basic/test_commands.py -k test_cmd_read_only_with_square_brackets
def test_cmd_read_only_with_square_brackets(self):
with GitTemporaryDirectory() as repo_dir:
io = InputOutput(pretty=False, fancy_input=False, yes=False)
coder = Coder.create(self.GPT35, None, io)
commands = Commands(io, coder)
# Create test layout
test_dir = Path(repo_dir) / "[id]"
test_dir.mkdir()
test_file = Path(repo_dir) / "[id]" / "page.tsx"
test_file.write_text("Test file")
# Test the /read-only command
commands.cmd_read_only("[id]/page.tsx")
# Check if test file was added to abs_read_only_fnames
self.assertTrue(
any(os.path.samefile(str(test_file), fname) for fname in coder.abs_read_only_fnames)
)
# Test dropping all read-only files
commands.cmd_drop("[id]/page.tsx")
# Check if all files were removed from abs_read_only_fnames
self.assertEqual(len(coder.abs_read_only_fnames), 0)
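Paths like `[id]/page.tsx` are tricky because `glob` treats `[...]` as a character class; the literal path has to be escaped before globbing. A self-contained illustration (not aider's code) using `glob.escape`:

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    os.makedirs(os.path.join(d, "[id]"))
    open(os.path.join(d, "[id]", "page.tsx"), "w").close()

    raw = os.path.join(d, "[id]", "page.tsx")
    unescaped_hits = glob.glob(raw)  # "[id]" is read as a character class
    escaped = os.path.join(glob.escape(os.path.join(d, "[id]")), "page.tsx")
    escaped_hits = glob.glob(escaped)  # matches the literal directory name

print(unescaped_hits)    # []
print(len(escaped_hits)) # 1
```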
def test_cmd_diff(self):
with GitTemporaryDirectory() as repo_dir:
repo = git.Repo(repo_dir)

@@ -4,11 +4,17 @@ from aider.history import ChatSummary
from aider.models import Model
def count(msg):
if isinstance(msg, list):
return sum(count(m) for m in msg)
return len(msg["content"].split())
class TestChatSummary(TestCase):
def setUp(self):
self.mock_model = mock.Mock(spec=Model)
self.mock_model.name = "gpt-3.5-turbo"
- self.mock_model.token_count = lambda msg: len(msg["content"].split())
+ self.mock_model.token_count = count
self.mock_model.info = {"max_input_tokens": 4096}
self.mock_model.simple_send_with_retries = mock.Mock()
self.chat_summary = ChatSummary(self.mock_model, max_tokens=100)
@@ -55,8 +61,11 @@ class TestChatSummary(TestCase):
)
def test_summarize(self):
- messages = [{"role": "user", "content": f"Message {i}"} for i in range(10)]
- messages.extend([{"role": "assistant", "content": f"Response {i}"} for i in range(10)])
+ N = 100
+ messages = [None] * (2 * N)
+ for i in range(N):
+     messages[2 * i] = {"role": "user", "content": f"Message {i}"}
+     messages[2 * i + 1] = {"role": "assistant", "content": f"Response {i}"}
with mock.patch.object(
self.chat_summary,
@@ -65,9 +74,11 @@
):
result = self.chat_summary.summarize(messages)
print(result)
self.assertIsInstance(result, list)
self.assertGreater(len(result), 0)
- self.assertLessEqual(len(result), len(messages))
+ self.assertLess(len(result), len(messages))
self.assertEqual(result[0]["content"], "Summary")
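The behavior the tightened assertion pins down — a long history must come back strictly shorter, with a summary up front — can be sketched as a divide-and-summarize step. `summarizer` and `count` below are stand-ins for illustration, not ChatSummary's real helpers:

```python
def summarize(messages, summarizer, max_tokens, count):
    # If the whole history fits the budget, keep it; otherwise replace the
    # older half with one summary message and keep the recent half verbatim.
    if sum(count(m) for m in messages) <= max_tokens:
        return list(messages)
    half = len(messages) // 2
    summary = {"role": "user", "content": summarizer(messages[:half])}
    return [summary] + messages[half:]

msgs = []
for i in range(100):
    msgs.append({"role": "user", "content": f"Message {i}"})
    msgs.append({"role": "assistant", "content": f"Response {i}"})

count = lambda m: len(m["content"].split())
result = summarize(msgs, lambda older: "Summary", max_tokens=100, count=count)
print(len(result) < len(msgs))  # True
print(result[0]["content"])     # Summary
```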
def test_fallback_to_second_model(self):
mock_model1 = mock.Mock(spec=Model)

@@ -152,6 +152,114 @@ class TestMain(TestCase):
self.assertEqual("one\ntwo\n.aider*\n.env\n", gitignore.read_text())
del os.environ["GIT_CONFIG_GLOBAL"]
def test_command_line_gitignore_files_flag(self):
with GitTemporaryDirectory() as git_dir:
git_dir = Path(git_dir)
# Create a .gitignore file
gitignore_file = git_dir / ".gitignore"
gitignore_file.write_text("ignored.txt\n")
# Create an ignored file
ignored_file = git_dir / "ignored.txt"
ignored_file.write_text("This file should be ignored.")
# Get the absolute path to the ignored file
abs_ignored_file = str(ignored_file.resolve())
# Test without the --add-gitignore-files flag (default: False)
coder = main(
["--exit", "--yes", abs_ignored_file],
input=DummyInput(),
output=DummyOutput(),
return_coder=True,
force_git_root=git_dir,
)
# Verify the ignored file is not in the chat
self.assertNotIn(abs_ignored_file, coder.abs_fnames)
# Test with --add-gitignore-files set to True
coder = main(
["--add-gitignore-files", "--exit", "--yes", abs_ignored_file],
input=DummyInput(),
output=DummyOutput(),
return_coder=True,
force_git_root=git_dir,
)
# Verify the ignored file is in the chat
self.assertIn(abs_ignored_file, coder.abs_fnames)
# Test with --add-gitignore-files set to False
coder = main(
["--no-add-gitignore-files", "--exit", "--yes", abs_ignored_file],
input=DummyInput(),
output=DummyOutput(),
return_coder=True,
force_git_root=git_dir,
)
# Verify the ignored file is not in the chat
self.assertNotIn(abs_ignored_file, coder.abs_fnames)
def test_add_command_gitignore_files_flag(self):
with GitTemporaryDirectory() as git_dir:
git_dir = Path(git_dir)
# Create a .gitignore file
gitignore_file = git_dir / ".gitignore"
gitignore_file.write_text("ignored.txt\n")
# Create an ignored file
ignored_file = git_dir / "ignored.txt"
ignored_file.write_text("This file should be ignored.")
# Get the absolute path to the ignored file
abs_ignored_file = str(ignored_file.resolve())
rel_ignored_file = "ignored.txt"
# Test without the --add-gitignore-files flag (default: False)
coder = main(
["--exit", "--yes"],
input=DummyInput(),
output=DummyOutput(),
return_coder=True,
force_git_root=git_dir,
)
with patch.object(coder.io, "confirm_ask", return_value=True):
coder.commands.cmd_add(rel_ignored_file)
# Verify the ignored file is not in the chat
self.assertNotIn(abs_ignored_file, coder.abs_fnames)
# Test with --add-gitignore-files set to True
coder = main(
["--add-gitignore-files", "--exit", "--yes"],
input=DummyInput(),
output=DummyOutput(),
return_coder=True,
force_git_root=git_dir,
)
with patch.object(coder.io, "confirm_ask", return_value=True):
coder.commands.cmd_add(rel_ignored_file)
# Verify the ignored file is in the chat
self.assertIn(abs_ignored_file, coder.abs_fnames)
# Test with --add-gitignore-files set to False
coder = main(
["--no-add-gitignore-files", "--exit", "--yes"],
input=DummyInput(),
output=DummyOutput(),
return_coder=True,
force_git_root=git_dir,
)
with patch.object(coder.io, "confirm_ask", return_value=True):
coder.commands.cmd_add(rel_ignored_file)
# Verify the ignored file is not in the chat
self.assertNotIn(abs_ignored_file, coder.abs_fnames)
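The `--add-gitignore-files` / `--no-add-gitignore-files` pair tested above behaves like argparse's `BooleanOptionalAction` (Python 3.9+). A minimal sketch — aider's own option plumbing may differ:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--add-gitignore-files",
    action=argparse.BooleanOptionalAction,  # auto-generates the --no- form
    default=False,
)

default = parser.parse_args([]).add_gitignore_files
on = parser.parse_args(["--add-gitignore-files"]).add_gitignore_files
off = parser.parse_args(["--no-add-gitignore-files"]).add_gitignore_files
print(default, on, off)  # False True False
```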
def test_main_args(self):
with patch("aider.coders.Coder.create") as MockCoder:
# --yes will just ok the git repo without blocking on input
@@ -949,16 +1057,19 @@
def test_invalid_edit_format(self):
with GitTemporaryDirectory():
- with patch("aider.io.InputOutput.offer_url") as mock_offer_url:
- result = main(
- ["--edit-format", "not-a-real-format", "--exit", "--yes"],
- input=DummyInput(),
- output=DummyOutput(),
- )
- self.assertEqual(result, 1)  # main() should return 1 on error
- mock_offer_url.assert_called_once()
- args, _ = mock_offer_url.call_args
- self.assertEqual(args[0], "https://aider.chat/docs/more/edit-formats.html")
+ # Suppress stderr for this test as argparse prints an error message
+ with patch("sys.stderr", new_callable=StringIO) as mock_stderr:
+ with self.assertRaises(SystemExit) as cm:
+ _ = main(
+ ["--edit-format", "not-a-real-format", "--exit", "--yes"],
+ input=DummyInput(),
+ output=DummyOutput(),
+ )
+ # argparse.ArgumentParser.exit() is called with status 2 for invalid choice
+ self.assertEqual(cm.exception.code, 2)
+ stderr_output = mock_stderr.getvalue()
+ self.assertIn("invalid choice", stderr_output)
+ self.assertIn("not-a-real-format", stderr_output)
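The exit code 2 asserted here is standard argparse behavior: an invalid `choices` value makes the parser print to stderr and call `sys.exit(2)`. A self-contained illustration (the choices list is illustrative, not aider's):

```python
import argparse
import contextlib
import io

parser = argparse.ArgumentParser()
parser.add_argument("--edit-format", choices=["diff", "whole"])  # illustrative choices

stderr = io.StringIO()
exit_code = None
try:
    with contextlib.redirect_stderr(stderr):
        parser.parse_args(["--edit-format", "not-a-real-format"])
except SystemExit as exc:
    exit_code = exc.code  # argparse.error() exits with status 2

print(exit_code)                              # 2
print("invalid choice" in stderr.getvalue())  # True
```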
def test_default_model_selection(self):
with GitTemporaryDirectory():
@@ -1032,6 +1143,16 @@
system_info = coder.get_platform_info()
self.assertIn("Spanish", system_info)
def test_commit_language_japanese(self):
with GitTemporaryDirectory():
coder = main(
["--commit-language", "japanese", "--exit", "--yes"],
input=DummyInput(),
output=DummyOutput(),
return_coder=True,
)
self.assertIn("japanese", coder.commit_language)
@patch("git.Repo.init")
def test_main_exit_with_git_command_not_found(self, mock_git_init):
mock_git_init.side_effect = git.exc.GitCommandNotFound("git", "Command 'git' not found")
@@ -1275,6 +1396,21 @@
for call in mock_io_instance.tool_warning.call_args_list:
self.assertNotIn("Cost estimates may be inaccurate", call[0][0])
def test_argv_file_respects_git(self):
with GitTemporaryDirectory():
fname = Path("not_in_git.txt")
fname.touch()
with open(".gitignore", "w+") as f:
f.write("not_in_git.txt")
coder = main(
argv=["--file", "not_in_git.txt"],
input=DummyInput(),
output=DummyOutput(),
return_coder=True,
)
self.assertNotIn("not_in_git.txt", str(coder.abs_fnames))
self.assertFalse(coder.allowed_to_edit("not_in_git.txt"))
def test_load_dotenv_files_override(self):
with GitTemporaryDirectory() as git_dir:
git_dir = Path(git_dir)

@@ -138,13 +138,13 @@ class TestModels(unittest.TestCase):
self.assertEqual(model.name, "gpt-3.5-turbo")
model = Model("sonnet")
- self.assertEqual(model.name, "anthropic/claude-3-7-sonnet-20250219")
+ self.assertEqual(model.name, "anthropic/claude-sonnet-4-20250514")
model = Model("haiku")
self.assertEqual(model.name, "claude-3-5-haiku-20241022")
model = Model("opus")
- self.assertEqual(model.name, "claude-3-opus-20240229")
+ self.assertEqual(model.name, "claude-opus-4-20250514")
# Test non-alias passes through unchanged
model = Model("gpt-4")

@@ -93,16 +93,14 @@ class TestOnboarding(unittest.TestCase):
@patch.dict(os.environ, {"OPENROUTER_API_KEY": "or_key"}, clear=True)
def test_try_select_default_model_openrouter_free(self, mock_check_tier):
"""Test OpenRouter free model selection."""
- self.assertEqual(
-     try_to_select_default_model(), "openrouter/google/gemini-2.5-pro-exp-03-25:free"
- )
+ self.assertEqual(try_to_select_default_model(), "openrouter/deepseek/deepseek-r1:free")
mock_check_tier.assert_called_once_with("or_key")
@patch("aider.onboarding.check_openrouter_tier", return_value=False) # Assume paid tier
@patch.dict(os.environ, {"OPENROUTER_API_KEY": "or_key"}, clear=True)
def test_try_select_default_model_openrouter_paid(self, mock_check_tier):
"""Test OpenRouter paid model selection."""
- self.assertEqual(try_to_select_default_model(), "openrouter/anthropic/claude-3.7-sonnet")
+ self.assertEqual(try_to_select_default_model(), "openrouter/anthropic/claude-sonnet-4")
mock_check_tier.assert_called_once_with("or_key")
@patch("aider.onboarding.check_openrouter_tier")
@@ -146,7 +144,7 @@
)
def test_try_select_default_model_priority_openrouter(self, mock_check_tier):
"""Test OpenRouter key takes priority."""
- self.assertEqual(try_to_select_default_model(), "openrouter/anthropic/claude-3.7-sonnet")
+ self.assertEqual(try_to_select_default_model(), "openrouter/anthropic/claude-sonnet-4")
mock_check_tier.assert_called_once_with("or_key")
@patch("aider.onboarding.check_openrouter_tier")
@@ -346,7 +344,7 @@ class TestOnboarding(unittest.TestCase):
@patch(
"aider.onboarding.try_to_select_default_model",
- side_effect=[None, "openrouter/google/gemini-2.5-pro-exp-03-25:free"],
+ side_effect=[None, "openrouter/deepseek/deepseek-r1:free"],
) # Fails first, succeeds after oauth
@patch(
"aider.onboarding.offer_openrouter_oauth", return_value=True
@@ -360,7 +358,7 @@
selected_model = select_default_model(args, io_mock, analytics_mock)
- self.assertEqual(selected_model, "openrouter/google/gemini-2.5-pro-exp-03-25:free")
+ self.assertEqual(selected_model, "openrouter/deepseek/deepseek-r1:free")
self.assertEqual(mock_try_select.call_count, 2) # Called before and after oauth
mock_offer_oauth.assert_called_once_with(io_mock, analytics_mock)
# Only one warning is expected: "No LLM model..."

@@ -0,0 +1,73 @@
from pathlib import Path
from aider.models import ModelInfoManager
from aider.openrouter import OpenRouterModelManager
class DummyResponse:
"""Minimal stand-in for requests.Response used in tests."""
def __init__(self, json_data):
self.status_code = 200
self._json_data = json_data
def json(self):
return self._json_data
def test_openrouter_get_model_info_from_cache(monkeypatch, tmp_path):
"""
OpenRouterModelManager should return correct metadata taken from the
downloaded (and locally cached) models JSON payload.
"""
payload = {
"data": [
{
"id": "mistralai/mistral-medium-3",
"context_length": 32768,
"pricing": {"prompt": "100", "completion": "200"},
"top_provider": {"context_length": 32768},
}
]
}
# Fake out the network call and the HOME directory used for the cache file
monkeypatch.setattr("requests.get", lambda *a, **k: DummyResponse(payload))
monkeypatch.setattr(Path, "home", staticmethod(lambda: tmp_path))
manager = OpenRouterModelManager()
info = manager.get_model_info("openrouter/mistralai/mistral-medium-3")
assert info["max_input_tokens"] == 32768
assert info["input_cost_per_token"] == 100.0
assert info["output_cost_per_token"] == 200.0
assert info["litellm_provider"] == "openrouter"
def test_model_info_manager_uses_openrouter_manager(monkeypatch):
"""
ModelInfoManager should delegate to OpenRouterModelManager when litellm
provides no data for an OpenRouter-prefixed model.
"""
# Ensure litellm path returns no info so that fallback logic triggers
monkeypatch.setattr("aider.models.litellm.get_model_info", lambda *a, **k: {})
stub_info = {
"max_input_tokens": 512,
"max_tokens": 512,
"max_output_tokens": 512,
"input_cost_per_token": 100.0,
"output_cost_per_token": 200.0,
"litellm_provider": "openrouter",
}
# Force OpenRouterModelManager to return our stub info
monkeypatch.setattr(
"aider.models.OpenRouterModelManager.get_model_info",
lambda self, model: stub_info,
)
mim = ModelInfoManager()
info = mim.get_model_info("openrouter/fake/model")
assert info == stub_info
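The first test implies a mapping from an OpenRouter catalog entry to the info dict shape asserted on (context length, prompt/completion pricing, provider). A hypothetical sketch of that mapping — the real `OpenRouterModelManager` may normalize more fields:

```python
def openrouter_info_from_entry(entry):
    # Hypothetical mapping for illustration; field names on the left match
    # the assertions in the test above, not necessarily aider's internals.
    return {
        "max_input_tokens": entry["context_length"],
        "input_cost_per_token": float(entry["pricing"]["prompt"]),
        "output_cost_per_token": float(entry["pricing"]["completion"]),
        "litellm_provider": "openrouter",
    }

entry = {
    "id": "mistralai/mistral-medium-3",
    "context_length": 32768,
    "pricing": {"prompt": "100", "completion": "200"},
}
info = openrouter_info_from_entry(entry)
print(info["max_input_tokens"], info["input_cost_per_token"])  # 32768 100.0
```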

@@ -59,6 +59,28 @@ class TestRepo(unittest.TestCase):
self.assertIn("index", diffs)
self.assertIn("workingdir", diffs)
def test_diffs_with_single_byte_encoding(self):
with GitTemporaryDirectory():
encoding = "cp1251"
repo = git.Repo()
fname = Path("foo.txt")
fname.write_text("index\n", encoding=encoding)
repo.git.add(str(fname))
# Make a change with non-ASCII symbols in the working dir
fname.write_text("АБВ\n", encoding=encoding)
git_repo = GitRepo(InputOutput(encoding=encoding), None, ".")
diffs = git_repo.get_diffs()
# check that all diff output can be converted to utf-8 for sending to model
diffs.encode("utf-8")
self.assertIn("index", diffs)
self.assertIn("АБВ", diffs)
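The encoding round-trip this test checks can be shown in isolation: cp1251 bytes for Cyrillic text are not valid UTF-8, so the diff must be decoded with the file's own codec before it can be re-encoded for the model:

```python
raw = "АБВ\n".encode("cp1251")  # single-byte Cyrillic bytes: b"\xc0\xc1\xc2\n"
text = raw.decode("cp1251")     # decode with the file's own codec first
utf8 = text.encode("utf-8")     # now safe to send to the model

try:
    raw.decode("utf-8")         # the raw bytes are not valid UTF-8
    decodes_as_utf8 = True
except UnicodeDecodeError:
    decodes_as_utf8 = False

print(decodes_as_utf8)  # False
```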
def test_diffs_detached_head(self):
with GitTemporaryDirectory():
repo = git.Repo()
@@ -278,7 +300,7 @@
# check the commit message and author/committer
commit = raw_repo.head.commit
- self.assertIn("Co-authored-by: aider (gpt-test) <noreply@aider.chat>", commit.message)
+ self.assertIn("Co-authored-by: aider (gpt-test) <aider@aider.chat>", commit.message)
self.assertEqual(commit.message.splitlines()[0], "Aider edit")
# With default (None), co-authored-by takes precedence
self.assertEqual(
@@ -333,7 +355,7 @@
# check the commit message and author/committer
commit = raw_repo.head.commit
self.assertIn(
- "Co-authored-by: aider (gpt-test-combo) <noreply@aider.chat>", commit.message
+ "Co-authored-by: aider (gpt-test-combo) <aider@aider.chat>", commit.message
)
self.assertEqual(commit.message.splitlines()[0], "Aider combo edit")
# When co-authored-by is true BUT author/committer are explicit True,
@@ -661,3 +683,34 @@ class TestRepo(unittest.TestCase):
# Verify the commit was actually made
latest_commit_msg = raw_repo.head.commit.message
self.assertEqual(latest_commit_msg.strip(), "Should succeed")
@patch("aider.models.Model.simple_send_with_retries")
def test_get_commit_message_uses_system_prompt_prefix(self, mock_send):
"""
Verify that GitRepo.get_commit_message() prepends the model.system_prompt_prefix
to the system prompt sent to the LLM.
"""
mock_send.return_value = "good commit message"
prefix = "MY-CUSTOM-PREFIX"
model = Model("gpt-3.5-turbo")
model.system_prompt_prefix = prefix
with GitTemporaryDirectory():
repo = GitRepo(InputOutput(), None, None, models=[model])
# Call the function under test
repo.get_commit_message("dummy diff", "dummy context")
# Ensure the LLM was invoked once
mock_send.assert_called_once()
# Grab the system message sent to the model
messages = mock_send.call_args[0][0]
system_msg_content = messages[0]["content"]
# Verify the prefix is at the start of the system message
self.assertTrue(
system_msg_content.startswith(prefix),
"system_prompt_prefix should be prepended to the system prompt",
)
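The property under test — the model's `system_prompt_prefix` ends up at the start of the system message — can be sketched with a hypothetical helper. The exact joining (newline vs space) inside `GitRepo` is an assumption here:

```python
def build_commit_prompt(system_prompt, prefix=None):
    # Hypothetical helper: prepend an optional model-specific prefix,
    # as the test verifies GitRepo.get_commit_message() does.
    if prefix:
        return prefix + "\n" + system_prompt
    return system_prompt

msg = build_commit_prompt("Write a concise commit message.", "MY-CUSTOM-PREFIX")
print(msg.startswith("MY-CUSTOM-PREFIX"))  # True
```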

@@ -302,6 +302,9 @@ class TestRepoMapAllLanguages(unittest.TestCase):
def test_language_gleam(self):
self._test_language_repo_map("gleam", "gleam", "greet")
def test_language_haskell(self):
self._test_language_repo_map("haskell", "hs", "add")
def test_language_java(self):
self._test_language_repo_map("java", "java", "Greeting")
@@ -334,6 +337,9 @@ class TestRepoMapAllLanguages(unittest.TestCase):
def test_language_tsx(self):
self._test_language_repo_map("tsx", "tsx", "UserProps")
def test_language_zig(self):
self._test_language_repo_map("zig", "zig", "add")
def test_language_csharp(self):
self._test_language_repo_map("csharp", "cs", "IGreeter")
@@ -355,6 +361,9 @@ class TestRepoMapAllLanguages(unittest.TestCase):
def test_language_chatito(self):
self._test_language_repo_map("chatito", "chatito", "intent")
def test_language_clojure(self):
self._test_language_repo_map("clojure", "clj", "greet")
def test_language_commonlisp(self):
self._test_language_repo_map("commonlisp", "lisp", "greet")
@@ -388,6 +397,9 @@ class TestRepoMapAllLanguages(unittest.TestCase):
def test_language_ocaml_interface(self):
self._test_language_repo_map("ocaml_interface", "mli", "Greeter")
def test_language_matlab(self):
self._test_language_repo_map("matlab", "m", "Person")
def _test_language_repo_map(self, lang, key, symbol):
"""Helper method to test repo map generation for a specific language."""
# Get the fixture file path and name based on language

@@ -0,0 +1,6 @@
(ns greeter.core)
(defn greet
"Prints a greeting."
[name]
(println (str "Hello, " name "!")))

@@ -0,0 +1,7 @@
module Main where
add :: Int -> Int -> Int
add a b = a + b
main :: IO ()
main = print (add 2 3)

tests/fixtures/languages/matlab/test.m
@@ -0,0 +1,42 @@
classdef Person
properties
name (1,1) string
age (1,1) double
end
methods
function obj = Person(name, age)
arguments
name (1,1) string
age (1,1) double = NaN
end
% Constructor for Person class
obj.name = name;
obj.age = age;
end
function greeting = greet(obj,formal)
arguments
obj
formal (1,1) logical = false
end
if formal
prefix = "Good day";
else
prefix = "Hello";
end
greeting = sprintf("%s, %s!", prefix, obj.name);
end
end
end
function greetings = create_greeting_list(people)
arguments
people (1,:) Person
end
% Create greetings for a list of people.
greetings = strings(1, numel(people));
for i = 1:numel(people)
greetings(i) = people(i).greet();
end
end

tests/fixtures/languages/zig/test.zig
@@ -0,0 +1,10 @@
const std = @import("std");
pub fn add(a: i32, b: i32) i32 {
return a + b;
}
pub fn main() !void {
const stdout = std.io.getStdOut().writer();
try stdout.print("{}", .{add(2, 3)});
}