[ty] Add an evaluation for completions
It's still early days, but I hope the framework introduced here makes it very easy to add new truth data. Truth data should be seen as a form of regression test against non-ideal ranking of completion suggestions. I think it would help to read `crates/ty_completion_eval/README.md` first to get an idea of what you're reviewing.
parent 6b94e620fe
commit 3771f1567c
63 changed files with 1213 additions and 4 deletions
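The commit message frames truth data as regression tests against ranking regressions. As a rough illustration of what scoring a ranking can mean here, the sketch below records the 1-based rank of the expected completion among an engine's suggestions and aggregates with mean reciprocal rank. Everything in it (the function names, the choice of metric) is hypothetical, not the crate's actual Rust implementation:

```python
def rank_of_expected(suggestions: list[str], expected: str) -> int | None:
    """1-based rank of the expected completion, or None if it never appears.

    `suggestions` stands in for whatever list the completion engine
    returns at a cursor; the real harness may score differently.
    """
    try:
        return suggestions.index(expected) + 1
    except ValueError:
        return None


def mean_reciprocal_rank(ranks: list[int | None]) -> float:
    """Aggregate across all cursors; a missing completion contributes 0."""
    return sum(1.0 / r for r in ranks if r is not None) / len(ranks)


# For the numpy-array truth file below, the first cursor would currently
# yield None (the expected completion is missing entirely), dragging the
# aggregate score down until the auto-import bug is fixed.
```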
crates/ty_completion_eval/truth/numpy-array/main.py (new file, 16 lines)
@@ -0,0 +1,16 @@
+# This one is tricky because `array` is an exported
+# symbol in a whole bunch of numpy internal modules.
+#
+# At time of writing (2025-10-07), the right completion
+# doesn't actually show up at all in the suggestions
+# returned. In fact, nothing from the top-level `numpy`
+# module shows up.
+arra<CURSOR: numpy.array>
+
+import numpy as np
+# In contrast to above, this *does* include the correct
+# completion. So there is likely some kind of bug in our
+# symbol discovery code for auto-import that isn't present
+# when using ty to discover symbols (which is likely far
+# too expensive to use across all dependencies).
+np.arra<CURSOR: array>
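For reference, here is one way the `<CURSOR: expected>` markers shown in the truth file could be consumed: strip each marker, record the offset where it stood, and pair that offset with the expected completion. The marker syntax comes from the file above; the parsing code itself is a sketch of one plausible approach, not the crate's implementation:

```python
import re

# Marker syntax taken from the truth file above; parsing is an assumption.
CURSOR_RE = re.compile(r"<CURSOR: (?P<expected>[^>]+)>")


def extract_cursors(source: str) -> tuple[str, list[tuple[int, str]]]:
    """Strip `<CURSOR: ...>` markers from a truth file.

    Returns the cleaned source plus (offset, expected completion) pairs,
    where each offset is the position in the cleaned source at which
    completions should be requested.
    """
    cursors: list[tuple[int, str]] = []
    pieces: list[str] = []
    pos = 0        # scan position in the original source
    removed = 0    # characters stripped so far
    for m in CURSOR_RE.finditer(source):
        pieces.append(source[pos : m.start()])
        cursors.append((m.start() - removed, m.group("expected")))
        removed += m.end() - m.start()
        pos = m.end()
    pieces.append(source[pos:])
    return "".join(pieces), cursors


# For example, the first test case in the file above reduces to:
cleaned, cursors = extract_cursors("arra<CURSOR: numpy.array>\n")
assert cleaned == "arra\n"
assert cursors == [(4, "numpy.array")]
```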