## Summary
This PR adds support for understanding the legacy definition and PEP 695
definition for `ParamSpec`.
This is still very initial and doesn't really implement any of the
semantics.
Part of https://github.com/astral-sh/ty/issues/157
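For reference, here is a minimal sketch (not taken from the PR) of the two definition styles that are now recognized:

```py
from typing import Callable, ParamSpec

P = ParamSpec("P")  # legacy definition

def legacy_decorator(f: Callable[P, int]) -> Callable[P, int]: ...

# PEP 695 definition (Python 3.12+ syntax)
def pep695_decorator[**Q](f: Callable[Q, int]) -> Callable[Q, int]: ...
```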
## Test Plan
Add mdtest cases.
## Ecosystem analysis
Most of the diagnostics in `starlette` are due to the fact that ty now
understands `ParamSpec`, so it is no longer a `Todo` type and the
assignability check fails. The code looks something like:
```py
class _MiddlewareFactory(Protocol[P]):
    def __call__(self, app: ASGIApp, /, *args: P.args, **kwargs: P.kwargs) -> ASGIApp: ...  # pragma: no cover

class Middleware:
    def __init__(
        self,
        cls: _MiddlewareFactory[P],
        *args: P.args,
        **kwargs: P.kwargs,
    ) -> None:
        self.cls = cls
        self.args = args
        self.kwargs = kwargs

# ty complains that `ServerErrorMiddleware` is not assignable to `_MiddlewareFactory[P]`
Middleware(ServerErrorMiddleware, handler=error_handler, debug=debug)
```
There are multiple diagnostics for attribute accesses on the `_Wrapped`
object from `functools`, which Pyright also raises:
```py
from functools import wraps

def my_decorator(f):
    @wraps(f)
    def wrapper(*args, **kwds):
        return f(*args, **kwds)

    # Pyright: Cannot access attribute "__signature__" for class "_Wrapped[..., Unknown, ..., Unknown]"
    #          Attribute "__signature__" is unknown [reportAttributeAccessIssue]
    # ty: Object of type `_Wrapped[Unknown, Unknown, Unknown, Unknown]` has no attribute `__signature__` [unresolved-attribute]
    wrapper.__signature__
    return wrapper
```
There are additional diagnostics due to assignability checks failing
because ty now infers the `ParamSpec` instead of using the `Todo` type,
which would always succeed. This results in a few
`no-matching-overload` diagnostics.
There are a few diagnostics related to
https://github.com/astral-sh/ty/issues/491, where a variable is either a
bound method or is annotated with a `Callable` that doesn't include the
instance as the first parameter.
Another set of (valid) diagnostics occurs where the code hasn't provided
all the type variables. ty now raises diagnostics for these because we
include the `ParamSpec` type variable in the signature. For example,
`staticmethod[Any]`, where `staticmethod` has two type variables.
## Summary
<!-- What's the purpose of the change? What does it do, and why? -->
Closes https://github.com/astral-sh/ty/issues/989
There are various situations where users expect the Python packages
installed in the same environment as ty itself to be considered during
type checking. A minimal example would look like:
```
uv venv my-env
uv pip install --python my-env ty httpx
echo "import httpx" > foo.py
./my-env/bin/ty check foo.py
```
or
```
uv tool install ty --with httpx
echo "import httpx" > foo.py
ty check foo.py
```
While these are a bit contrived, there are real-world situations where a
user would expect similar behavior. Notably, all of the other
type checkers consider their own environment when determining search
paths (though I'll admit that I have not verified when they choose not
to do this).
One common situation where users are encountering this today is with
`uvx --with-requirements script.py ty check script.py` — which is
currently our "best" recommendation for type checking a PEP 723 script,
but it doesn't work.
Of the options discussed in
https://github.com/astral-sh/ty/issues/989#issuecomment-3307417985, I've
chosen (2) as our criteria for including ty's environment in the search
paths.
- If no virtual environment is discovered, we will always include ty's
environment.
- If a `.venv` is discovered in the working directory, we will _prepend_
ty's environment to the search paths. The dependencies in ty's
environment (e.g., from `uvx --with`) will take precedence.
- If a virtual environment is active, e.g., `VIRTUAL_ENV` is set
(including conda prefixes), we will not include ty's environment (see
the sketch after this list).
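A minimal sketch of this selection rule, written in Python purely for illustration (ty's implementation is in Rust, and the function and parameter names here are hypothetical):

```py
import os
from pathlib import Path

def add_ty_environment(search_paths: list[Path], ty_site_packages: Path) -> None:
    """Hypothetical helper: decide whether and where to add ty's own environment."""
    if os.environ.get("VIRTUAL_ENV") or os.environ.get("CONDA_PREFIX"):
        # An explicitly activated environment wins; ty's own environment is not added.
        return
    if (Path.cwd() / ".venv").is_dir():
        # A project `.venv` was discovered: prepend ty's environment so that
        # dependencies from e.g. `uvx --with` take precedence.
        search_paths.insert(0, ty_site_packages)
    else:
        # No environment was discovered at all: fall back to ty's environment.
        search_paths.append(ty_site_packages)
```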
The reason we need to special-case `.venv` is that we both:
1. Recommend `uvx ty` today as a way to check your project
2. Want to enable `uvx --with <...> ty`
I don't want (2) to break when you _happen_ to be in a project (as it
would if we only included ty's environment when _no_ environment is
found), and I don't want to remove support for (1).
I think long-term, I want to make `uvx <cmd>` layer the environment on
_top_ of the project environment (in uv), which would obviate the need
for this change when you're using uv. However, that change is breaking
and I think users will expect this behavior in contexts where they're
not using uv, so I think we should handle it in ty regardless.
I've opted not to include the environment if it's non-virtual (i.e., a
system environment) for now. It seems better to start by being more
restrictive. I left a comment in the code.
## Test Plan
I did some manual testing with the initial commit, then subsequently
added some unit tests.
```
❯ echo "import httpx" > example.py
❯ uvx --with httpx ty check example.py
Installed 8 packages in 19ms
error[unresolved-import]: Cannot resolve imported module `httpx`
--> foo/example.py:1:8
|
1 | import httpx
| ^^^^^
|
info: Searched in the following paths during module resolution:
info: 1. /Users/zb/workspace/ty/python (first-party code)
info: 2. /Users/zb/workspace/ty (first-party code)
info: 3. vendored://stdlib (stdlib typeshed stubs vendored by ty)
info: make sure your Python environment is properly configured: https://docs.astral.sh/ty/modules/#python-environment
info: rule `unresolved-import` is enabled by default
Found 1 diagnostic
❯ uvx --from . --with httpx ty check example.py
All checks passed!
```
```
❯ uv init --script foo.py
Initialized script at `foo.py`
❯ uv add --script foo.py httpx
warning: The Python request from `.python-version` resolved to Python 3.13.8, which is incompatible with the script's Python requirement: `>=3.14`
Updated `foo.py`
❯ echo "import httpx" >> foo.py
❯ uvx --with-requirements foo.py ty check foo.py
error[unresolved-import]: Cannot resolve imported module `httpx`
--> foo.py:15:8
|
13 | if __name__ == "__main__":
14 | main()
15 | import httpx
| ^^^^^
|
info: Searched in the following paths during module resolution:
info: 1. /Users/zb/workspace/ty/python (first-party code)
info: 2. /Users/zb/workspace/ty (first-party code)
info: 3. vendored://stdlib (stdlib typeshed stubs vendored by ty)
info: make sure your Python environment is properly configured: https://docs.astral.sh/ty/modules/#python-environment
info: rule `unresolved-import` is enabled by default
Found 1 diagnostic
❯ uvx --from . --with-requirements foo.py ty check foo.py
All checks passed!
```
Notice that we do not include ty's environment if `VIRTUAL_ENV` is set:
```
❯ VIRTUAL_ENV=.venv uvx --with httpx ty check foo/example.py
error[unresolved-import]: Cannot resolve imported module `httpx`
--> foo/example.py:1:8
|
1 | import httpx
| ^^^^^
|
info: Searched in the following paths during module resolution:
info: 1. /Users/zb/workspace/ty/python (first-party code)
info: 2. /Users/zb/workspace/ty (first-party code)
info: 3. vendored://stdlib (stdlib typeshed stubs vendored by ty)
info: 4. /Users/zb/workspace/ty/.venv/lib/python3.13/site-packages (site-packages)
info: make sure your Python environment is properly configured: https://docs.astral.sh/ty/modules/#python-environment
info: rule `unresolved-import` is enabled by default
Found 1 diagnostic
```
## Summary
Before this PR, we would emit diagnostics like "Invalid key access" for
a TypedDict literal with an invalid key, which doesn't make sense since
there's no "access" in that case. This PR just adjusts the wording to be
more general, and adjusts the documentation of the lint rule too.
I noticed this in the playground and thought it would be a quick fix. As
usual, it turned out to be a bit more subtle than I expected, but for
now I chose to punt on the complexity. We may ultimately want to have
different rules for invalid subscript vs invalid TypedDict literal,
because an invalid key in a TypedDict literal is low severity: it's a
typo detector, but not actually a type error. But then there's another
wrinkle there: if the TypedDict is `closed=True`, then it _is_ a type
error. So would we want to separate the open and closed cases into
separate rules, too? I decided to leave this as a question for the future.
If we wanted to use separate rules, or use specific wording for each
case instead of the generalized wording I chose here, that would also
involve a bit of extra work to distinguish the cases, since we use a
generic set of functions for reporting these errors.
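For illustration (a hedged example, not taken from the PR), these are the two situations the generalized wording now covers:

```py
from typing import TypedDict

class Person(TypedDict):
    name: str

p: Person = {"name": "Alice", "nickname": "Al"}  # invalid key in a TypedDict literal: no "access" involved
p["nickname"]                                    # invalid key access via subscripting
```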
## Test Plan
Added and updated mdtests.
## Summary
- Type checkers (and type-checker authors) think in terms of types, but
I think most Python users think in terms of values. Rather than saying
that a _type_ `X` "has no attribute `foo`" (which I think sounds strange
to many users), say that "an object of type `X` has no attribute `foo`"
- Special-case certain types so that the diagnostic messages read more
like normal English: rather than saying "Type `<class 'Foo'>` has no
attribute `bar`" or "Object of type `<class 'Foo'>` has no attribute
`bar`", just say "Class `Foo` has no attribute `bar`"
## Test Plan
Mdtests and snapshots updated
Summary
--
This PR unifies the two different ways Ruff and ty construct syntax
errors. Ruff has been storing the primary message in the diagnostic
itself, while ty has been attaching it to the primary annotation:
```
> ruff check try.py
invalid-syntax: name capture `x` makes remaining patterns unreachable
--> try.py:2:10
|
1 | match 42:
2 | case x: ...
| ^
3 | case y: ...
|
Found 1 error.
> uvx ty check try.py
WARN ty is pre-release software and not ready for production use. Expect to encounter bugs, missing features, and fatal errors.
Checking ------------------------------------------------------------ 1/1 files
error[invalid-syntax]
--> try.py:2:10
|
1 | match 42:
2 | case x: ...
| ^ name capture `x` makes remaining patterns unreachable
3 | case y: ...
|
Found 1 diagnostic
```
I think there are benefits to both approaches, and I do like ty's
version, but I feel like we should pick one (and it might help with
#20901 eventually). I slightly prefer Ruff's version, so I went with
that. Hopefully this isn't too controversial, but I'm happy to close
this if it is.
Note that this shouldn't change any other diagnostic formats in ty
because
[`Diagnostic::primary_message`](98d27c4128/crates/ruff_db/src/diagnostic/mod.rs (L177))
was already falling back to the primary annotation message if the
diagnostic message was empty. As a result, I think this change will
partially resolve the FIXME therein.
Test Plan
--
Existing tests with updated snapshots
This is the ultra-minimal implementation of
* https://github.com/astral-sh/ty/issues/296
that was previously discussed as a good starting point. In particular,
we don't actually bother trying to figure out the exact Python versions,
but we still mention "hey btw for No Reason At All... you're on Python
3.10" when you try to access something that has a definition rooted in
the stdlib that we believe only exists on some versions.
## Summary
Bumps the latest supported Python version of ty to 3.14 and updates some
references from 3.13 to 3.14.
This also fixes a bug with `dataclasses.field` on 3.14 (which adds a new
keyword-only parameter to that function, breaking our previously naive
matching on the parameter structure of that function).
## Test Plan
A `ty check` on a file with template strings (without any further
configuration) doesn't raise errors anymore.
Summary
--
Closes #19467 and also removes the warning about using Python 3.14
without preview enabled.
I also bumped `PythonVersion::default` to 3.10 because Python 3.9
reaches EOL this month, but we could also defer that for now if we
wanted.
The first three commits are related to the `latest` bump to 3.14; the
fourth commit bumps the default to 3.10.
Note that this PR also bumps the default Python version for ty to 3.10
because there was a test asserting that it stays in sync with
`ast::PythonVersion`.
Test Plan
--
Existing tests
I spot-checked the ecosystem report, and I believe these are all
expected. Inbits doesn't specify a target Python version, so I guess
we're applying the default. UP007, UP035, and UP045 all use the new
default value to emit new diagnostics.
## Summary
#19990 didn't completely fix the base vs. child conda environment
distinction, since it checked for slightly different behavior than what
I usually see in conda. E.g., I see something like the following:
```
(didn't yet activate conda, but base is active)
➜ printenv | grep CONDA
CONDA_PYTHON_EXE=/opt/anaconda3/bin/python
CONDA_PREFIX=/opt/anaconda3
CONDA_DEFAULT_ENV=base
CONDA_EXE=/opt/anaconda3/bin/conda
CONDA_SHLVL=1
CONDA_PROMPT_MODIFIER=(base)
(activating conda)
➜ conda activate test
(test is an active conda environment)
❯ printenv | grep CONDA
CONDA_PREFIX=/opt/anaconda3/envs/test
CONDA_PYTHON_EXE=/opt/anaconda3/bin/python
CONDA_SHLVL=2
CONDA_PREFIX_1=/opt/anaconda3
CONDA_DEFAULT_ENV=test
CONDA_PROMPT_MODIFIER=(test)
CONDA_EXE=/opt/anaconda3/bin/conda
```
But the current behavior expects `CONDA_DEFAULT_ENV ==
basename(CONDA_PREFIX)` for the base environment rather than the child
environment, which is where we actually see this equality.
This pull request fixes that and updates the tests correspondingly.
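To make the corrected check concrete, here is a small illustration in Python (not ty's actual Rust code; the helper name is made up):

```py
import os
from pathlib import Path

def conda_env_is_activated_child() -> bool:
    """True when a child environment (e.g. `test`) is activated, not just `base`."""
    prefix = os.environ.get("CONDA_PREFIX")
    default_env = os.environ.get("CONDA_DEFAULT_ENV")
    if not prefix or not default_env:
        return False
    # The equality holds for /opt/anaconda3/envs/test with CONDA_DEFAULT_ENV=test,
    # but not for /opt/anaconda3 with CONDA_DEFAULT_ENV=base.
    return Path(prefix).name == default_env
```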
## Test Plan
I updated the existing tests with the new behavior. Let me know if you
want more tests. Note: it shouldn't be necessary to test for the case
where we have `conda/envs/base`, since one should not be able to create
such an environment (a child environment named `base`).
---------
Co-authored-by: Aria Desires <aria.desires@gmail.com>
This PR ensures that we always put `./src` before `.` in our list of
first-party search paths. This better emulates the fact that at runtime,
the module name of a file `src/foo.py` would almost certainly be `foo`
rather than `src.foo`.
I wondered if fixing this might fix
https://github.com/astral-sh/ruff/pull/20603#issuecomment-3345317444. It
seems like that's not the case, but it also seems like it leads to
better diagnostics because we report much more intuitive module names to
the user in our error messages -- so, it's probably a good change
anyway.
## Summary
Fixes https://github.com/astral-sh/ty/issues/1242
From finding references with the LSP, `FileResolver::path` is only
called once, in `UnifiedFile::path`, so I went through those references,
and it looked safe to make this change in every case. Most of the
references are in the various output formats, where we inherited the
absolute vs relative path decision from Ruff. Two other uses are as
fallbacks if converting a relativized path to a string fails. Finally,
we use the path for sorting and in `UnifiedFile::relative_path`.
## Test Plan
Existing tests, with snapshots updated to show absolute paths (in the
`TestDb` this just added a `/` in front of the file names). I also
updated the GitLab CLI test to set the `CI_PROJECT_DIR` environment
variable and ran a test in GitLab CI:
<img width="613" height="114" alt="image"
src="https://github.com/user-attachments/assets/8ab81dba-54fd-4a24-9110-77ef89293cff"
/>
The names of the submodules returned should be *complete*. This
is the contract of `Module::name`. However, we were previously
only returning the basename of the submodule.
## Summary
This PR wires up the GitHub output format moved to `ruff_db` in #20320
to the ty CLI.
It's a bit smaller than the GitLab version (#20155) because some of the
helpers were already in place, but I did factor out a few
`DisplayDiagnosticConfig` constructor calls in Ruff. I also exposed the
`GithubRenderer` and a wrapper `DisplayGithubDiagnostics` type because
we needed a way to configure the program name displayed in the GitHub
diagnostics. This was previously hard-coded to `Ruff`:
<img width="675" height="247" alt="image"
src="https://github.com/user-attachments/assets/592da860-d2f5-4abd-bc5a-66071d742509"
/>
Another option would be to drop the program name in the output format,
but I think it can be helpful in workflows with multiple programs
emitting annotations (such as Ruff and ty!)
## Test Plan
New CLI test, and a manual test with `--config 'terminal.output-format =
"github"'`
## Summary
I felt it was safer to add the `python` folder *in addition* to a
possibly-existing `src` folder, even though the `src` folder only
contains Rust code for `maturin`-based projects. There might be
non-maturin projects where a `python` folder exists for other reasons,
next to a normal `src` layout.
closes https://github.com/astral-sh/ty/issues/1120
## Test Plan
Tested locally on the egglog-python project.
## Summary
Add backreferences to the original item declaration in TypedDict
diagnostics.
Thanks to @AlexWaygood for the suggestion.
## Test Plan
Updated snapshots
## Summary
Two minor cleanups:
- Return `Option<ClassType>` rather than `Option<ClassLiteral>` from
`TypeInferenceBuilder::class_context_of_current_method`. Now that
`ClassType::is_protocol` exists as a method as well as
`ClassLiteral::is_protocol`, this simplifies most of the call-sites of
the `class_context_of_current_method()` method.
- Make more use of the `MethodDecorator::try_from_fn_type` method in
`class.rs`. Under the hood, this method uses the new methods
`FunctionType::is_classmethod()` and `FunctionType::is_staticmethod()`
that @sharkdp recently added, so it gets the semantics more precisely
correct than the code it's replacing in `infer.rs` (by accounting for
implicit staticmethods/classmethods as well as explicit ones). By using
these methods we can delete some code elsewhere (the
`FunctionDecorators::from_decorator_types()` constructor)
## Test Plan
Existing tests
This is to facilitate recursive traversal of all modules in an
environment. This way, we can keep asking for submodules.
This also simplifies how this is used in completions, and probably makes
it faster. Namely, since we return the `Module` itself, callers don't
need to invoke the full module resolver just to get the module type.
Note that this doesn't include namespace packages. (Which were
previously not supported in `Module::all_submodules`.) Given how they
can be spread out across multiple search paths, they will likely require
special consideration here.
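As a hedged illustration of the kind of traversal this enables (Python stand-ins for ty's actual Rust types, not the real API):

```py
from dataclasses import dataclass, field

@dataclass
class Module:
    """Stand-in for ty's `Module`: submodule names are fully qualified."""
    name: str
    submodules: list["Module"] = field(default_factory=list)

def walk(module: Module) -> list[str]:
    """Collect the names of a module and, recursively, all of its submodules."""
    names = [module.name]
    for sub in module.submodules:
        names.extend(walk(sub))
    return names

# walk(Module("foo", [Module("foo.bar", [Module("foo.bar.baz")])]))
# -> ["foo", "foo.bar", "foo.bar.baz"]
```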
## Summary
This wires up the GitLab output format moved into `ruff_db` in
https://github.com/astral-sh/ruff/pull/20117 to the ty CLI.
While I was here, I made one unrelated change to the CLI docs. Clap was
rendering the escapes around the `\[default\]` brackets for the `full`
output, so I just switched those to parentheses:
```
--output-format <OUTPUT_FORMAT>
The format to use for printing diagnostic messages
Possible values:
- full: Print diagnostics verbosely, with context and helpful hints \[default\]
- concise: Print diagnostics concisely, one per line
- gitlab: Print diagnostics in the JSON format expected by GitLab Code Quality reports
```
## Test Plan
New CLI test, and a manual test with `--config 'terminal.output-format =
"gitlab"'` to make sure this works as a configuration option too. I also
tried piping the output through `jq` to make sure it's at least valid JSON.
There are some situations where we get confusing diagnostics due to
identical class names.
## Class with same name from different modules
```python
import pandas
import polars
df: pandas.DataFrame = polars.DataFrame()
```
This yields the following error:
**Actual:**
error: [invalid-assignment] "Object of type `DataFrame` is not
assignable to `DataFrame`"
**Expected**:
error: [invalid-assignment] "Object of type `polars.DataFrame` is not
assignable to `pandas.DataFrame`"
## Nested classes
```python
from enum import Enum

class A:
    class B(Enum):
        ACTIVE = "active"
        INACTIVE = "inactive"

class C:
    class B(Enum):
        ACTIVE = "active"
        INACTIVE = "inactive"
```
**Actual**:
error: [invalid-assignment] "Object of type `Literal[B.ACTIVE]` is not
assignable to `B`"
**Expected**:
error: [invalid-assignment] "Object of type
`Literal[my_module.C.B.ACTIVE]` is not assignable to `my_module.A.B`"
## Solution
In this PR we added a heuristic (sketched below) to decide when to use a
fully qualified name:
- there is an invalid assignment, and
- the two classes are different, and
- they have the same name
The fully qualified name always includes:
- the module name
- the names of any enclosing classes
- the class name itself
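A rough sketch of the heuristic in Python, for illustration only (the actual implementation is in Rust and operates on ty's own class representation, not runtime classes):

```py
def display_name(actual: type, expected: type) -> str:
    """Use the fully qualified name only when two distinct classes share a short name."""
    if actual is not expected and actual.__name__ == expected.__name__:
        return f"{actual.__module__}.{actual.__qualname__}"
    return actual.__name__
```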
There was no `QualifiedDisplay`, so I had to implement it from scratch.
I'm very new to the codebase and might have done things inefficiently,
so I'd appreciate feedback.
Should we pre-compute the fully qualified name or do it on demand?
## Not implemented
### Function-local classes
Should we approach this in a different PR?
**Example**:
```python
# t.py
from __future__ import annotations

def function() -> A:
    class A:
        pass

    return A()

class A:
    pass

a: A = function()
```
#### mypy
```console
t.py:8: error: Incompatible return value type (got "t.A@5", expected "t.A") [return-value]
```
From my testing, the 5 in `A@5` comes from the line number.
#### ty
```console
error[invalid-return-type]: Return type does not match returned value
--> t.py:4:19
|
4 | def function() -> A:
| - Expected `A` because of return type
5 | class A:
6 | pass
7 |
8 | return A()
| ^^^ expected `A`, found `A`
|
info: rule `invalid-return-type` is enabled by default
```
Fixes https://github.com/astral-sh/ty/issues/848
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
Implement validation for `TypedDict` constructor calls and dictionary
literal assignments, including support for `total=False` and proper
field management.
Also add support for `Required` and `NotRequired` type qualifiers in
`TypedDict` classes, along with proper inheritance behavior and the
`total=` parameter.
Both constructor calls and dict literal syntax are supported.
Part of https://github.com/astral-sh/ty/issues/154
### Basic Required Field Validation
```py
from typing import TypedDict

class Person(TypedDict):
    name: str
    age: int | None

# Error: Missing required field 'name' in TypedDict `Person` constructor
incomplete = Person(age=25)

# Error: Invalid argument to key "name" with declared type `str` on TypedDict `Person`
wrong_type = Person(name=123, age=25)

# Error: Invalid key access on TypedDict `Person`: Unknown key "extra"
extra_field = Person(name="Bob", age=25, extra=True)
```
<img width="773" height="191" alt="Screenshot 2025-08-07 at 17 59 22"
src="https://github.com/user-attachments/assets/79076d98-e85f-4495-93d6-a731aa72a5c9"
/>
### Support for `total=False`
```py
class OptionalPerson(TypedDict, total=False):
    name: str
    age: int | None

# All valid - all fields are optional with total=False
charlie = OptionalPerson()
david = OptionalPerson(name="David")
emily = OptionalPerson(age=30)
frank = OptionalPerson(name="Frank", age=25)

# But type validation and extra fields still apply
invalid_type = OptionalPerson(name=123)  # Error: Invalid argument type
invalid_extra = OptionalPerson(extra=True)  # Error: Invalid key access
```
### Dictionary Literal Validation
```py
# Type checking works for both constructors and dict literals
person: Person = {"name": "Alice", "age": 30}
reveal_type(person["name"]) # revealed: str
reveal_type(person["age"]) # revealed: int | None
# Error: Invalid key access on TypedDict `Person`: Unknown key "non_existing"
reveal_type(person["non_existing"]) # revealed: Unknown
```
### `Required`, `NotRequired`, `total`
```python
from typing import TypedDict
from typing_extensions import Required, NotRequired

class PartialUser(TypedDict, total=False):
    name: Required[str]      # Required despite total=False
    age: int                 # Optional due to total=False
    email: NotRequired[str]  # Explicitly optional (redundant)

class User(TypedDict):
    name: Required[str]    # Explicitly required (redundant)
    age: int               # Required due to total=True
    bio: NotRequired[str]  # Optional despite total=True

# Valid constructions
partial = PartialUser(name="Alice")  # name required, age optional
full = User(name="Bob", age=25)      # name and age required, bio optional

# Inheritance maintains original field requirements
class Employee(PartialUser):
    department: str  # Required (new field)
    # name: still Required (inherited)
    # age: still optional (inherited)

emp = Employee(name="Charlie", department="Engineering")  # ✅
Employee(department="Engineering")  # ❌
e: Employee = {"age": 1}  # ❌
```
<img width="898" height="683" alt="Screenshot 2025-08-11 at 22 02 57"
src="https://github.com/user-attachments/assets/4c1b18cd-cb2e-493a-a948-51589d121738"
/>
## Implementation
The implementation reuses existing validation logic done in
https://github.com/astral-sh/ruff/pull/19782
### ℹ️ Why I did NOT synthesize an `__init__` for `TypedDict`:
`TypedDict` inherits `dict.__init__(self, *args, **kwargs)`, which
accepts all arguments.
The type resolution system finds this inherited signature **before**
looking for synthesized members.
So `own_synthesized_member()` is never called because a signature
already exists.
To force synthesis, you'd have to override Python’s inheritance
mechanism, which would break compatibility with the existing ecosystem.
This is why I went with ad-hoc validation. IMO it's the only viable
approach that respects Python’s
inheritance semantics while providing the required validation.
### Refactor of `Field`
**Before:**
```rust
struct Field<'db> {
    declared_ty: Type<'db>,
    default_ty: Option<Type<'db>>, // NamedTuple and dataclass only
    init_only: bool,               // dataclass only
    init: bool,                    // dataclass only
    is_required: Option<bool>,     // TypedDict only
}
```
**After:**
```rust
struct Field<'db> {
    declared_ty: Type<'db>,
    kind: FieldKind<'db>,
}

enum FieldKind<'db> {
    NamedTuple { default_ty: Option<Type<'db>> },
    Dataclass { default_ty: Option<Type<'db>>, init_only: bool, init: bool },
    TypedDict { is_required: bool },
}
```
## Test Plan
Updated Markdown tests
---------
Co-authored-by: David Peter <mail@david-peter.de>
This is a port of the logic in https://github.com/astral-sh/uv/pull/7691
The basic idea is that we use `CONDA_DEFAULT_ENV` as a signal for
whether `CONDA_PREFIX` is just the ambient system conda install or
whether the user has explicitly activated a custom one. If the former,
the conda environment is treated like a system install (lowest
priority). If the latter, it is treated like an activated venv (priority
over everything but an _actual_ activated venv).
Fixes https://github.com/astral-sh/ty/issues/611
## Summary
This PR adds a new lint, `invalid-await`, for all sorts of reasons why
an object may not be `await`able, as discussed in astral-sh/ty#919.
Specifically, `__await__` is guarded against being missing, possibly
unbound, or improperly defined (expects additional arguments or doesn't
return an iterator).
Of course, diagnostics need to be fine-tuned. If `__await__` cannot be
called with no extra arguments, it indicates an error (or a quirk?) in
the method signature, not at the call site. Without any doubt, such an
object is not `Awaitable`, but I feel like talking about arguments for
an *implicit* call is a bit leaky.
I didn't reference any actual diagnostic messages in the lint
definition, because I want to hear feedback first.
Also, there's no mention of the actual required method signature for
`__await__` anywhere in the docs. The only reference I had is the
`typing` stub. I basically ended up linking `[Awaitable]` to ["must
implement
`__await__`"](https://docs.python.org/3/library/collections.abc.html#collections.abc.Awaitable),
which is insufficient on its own.
## Test Plan
The following code was tested:
```python
import asyncio
import typing

class Awaitable:
    def __await__(self) -> typing.Generator[typing.Any, None, int]:
        yield None
        return 5

class NoDunderMethod:
    pass

class InvalidAwaitArgs:
    def __await__(self, value: int) -> int:
        return value

class InvalidAwaitReturn:
    def __await__(self) -> int:
        return 5

class InvalidAwaitReturnImplicit:
    def __await__(self):
        pass

async def main() -> None:
    result = await Awaitable()  # valid
    result = await NoDunderMethod()  # `__await__` is missing
    result = await InvalidAwaitReturn()  # `__await__` returns `int`, which is not a valid iterator
    result = await InvalidAwaitArgs()  # `__await__` expects additional arguments and cannot be called implicitly
    result = await InvalidAwaitReturnImplicit()  # `__await__` returns `Unknown`, which is not a valid iterator

asyncio.run(main())
```
---------
Co-authored-by: Carl Meyer <carl@astral.sh>