A new extension module, `_hmac`, now exposes the formally verified HACL* implementation of HMAC.
The HACL* implementation is used as a fallback when the OpenSSL implementation of HMAC
is unavailable or disabled. For now, only named hash algorithms are recognized, and the SIMD
support that HACL* provides for the BLAKE2 hash functions is not yet used.
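For example, the public hmac API is unchanged; whether OpenSSL or the HACL* fallback serves the call is an internal detail of the build:

import hmac

# Works on any build; where OpenSSL's HMAC is unavailable or disabled,
# the HACL* implementation backs this call.
tag = hmac.new(b"key", b"message", digestmod="sha256").hexdigest()
print(tag)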
CPython's pthread-based thread identifier relies on pthread_t being
representable as an unsigned integer type.
This holds for most Linux libc implementations, where it is defined as an
unsigned long; musl, however, typedefs it as a struct pointer.
If that pointer has the high bit set and is cast to PyThread_ident_t, the
resulting value can be sign-extended [0]. This can cause issues when
comparing against threading._MainThread's identifier. The main thread's
identifier is retrieved via _get_main_thread_ident, which is backed
by an unsigned long that truncates the sign-extended bits:
>>> hex(threading.main_thread().ident)
'0xb6f33f3c'
>>> hex(threading.current_thread().ident)
'0xffffffffb6f33f3c'
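The same arithmetic can be sketched in plain Python, using the pointer value from the session above:

ptr = 0xb6f33f3c                     # 32-bit pthread_t pointer with the high bit set
as_signed = ptr - (1 << 32)          # reinterpreted as a signed 32-bit value
ident = as_signed & ((1 << 64) - 1)  # sign-extended into a 64-bit unsigned value
hex(ident)                           # '0xffffffffb6f33f3c'
hex(ident & ((1 << 32) - 1))         # '0xb6f33f3c' -- what a 32-bit unsigned long keeps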
Work around this by conditionally compiling code on non-glibc Linux
platforms that are at risk of sign extension: if the current thread is
the main thread, return a PyLong based on the main thread's unsigned
long thread identifier.
[0]: https://gcc.gnu.org/onlinedocs/gcc-14.2.0/gcc/Arrays-and-pointers-implementation.html
---------
Signed-off-by: Vincent Fazio <vfazio@gmail.com>
Optimize `LOAD_FAST` opcodes into faster versions that push borrowed references onto the operand stack, when we can prove that the local outlives the temporary loaded onto the stack.
* Rename 'defined' attribute to 'in_local' to more accurately reflect how it is used
* Make death of variables explicit even for array variables.
* Convert in_memory from boolean to stack offset
* Don't apply liveness analysis to optimizer generated code
* Add 'out' parameter to stack.pop
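A rough way to observe the effect from Python; the opcode name LOAD_FAST_BORROW is an assumption here, and the exact disassembly varies by version and build:

import dis

def add_one(x):
    return x + 1  # 'x' provably outlives the temporary pushed for the addition

# On builds with this optimization, the load of 'x' may disassemble as a
# borrowing variant (e.g. LOAD_FAST_BORROW) instead of plain LOAD_FAST.
dis.dis(add_one)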
Minor readability fix in PyUnstable_GC_VisitObjects
Replaces `if (visit_generation())` with `if (visit_generation() < 0)`,
since we are checking for the failure case, and it's confusing to have
that be implicitly `true`.
Also fixes a misspelt variable name.
In the free threaded build, the `_PyObject_LookupSpecial()` call can lead to
reference count contention on the returned function object because it
doesn't use stackrefs. Refactor some of the callers to use
`_PyObject_MaybeCallSpecialNoArgs`, which uses stackrefs internally.
This fixes the scaling bottleneck in the "lookup_special" microbenchmark
in `ftscalingbench.py`. However, there are still some uses of
`_PyObject_LookupSpecial()` that need to be addressed in future PRs.
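A standalone probe in the spirit of that microbenchmark (the class and thread counts here are illustrative, not the actual benchmark code):

import threading

class C:
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

def work(n):
    c = C()
    for _ in range(n):
        with c:  # 'with' performs a special-method lookup on the type
            pass

threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()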
Concurrent accesses from multiple threads to the same `cell` object did not
scale well in the free-threaded build. Use `_PyStackRef` and optimistically
avoid locking to improve scaling.
With the locks around cell reads gone, some of the free threading tests were
prone to starvation: the readers were able to run in a tight loop and the
writer threads weren't scheduled frequently enough to make timely progress.
Adjust the tests to avoid this.
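A minimal reader/writer pattern of the kind these tests exercise (names and iteration counts are illustrative):

import threading

def make_cell():
    value = 0
    def read():
        return value    # closure read: loads from the shared cell
    def write(v):
        nonlocal value  # closure write: stores into the same cell
        value = v
    return read, write

read, write = make_cell()
stop = threading.Event()

def reader():
    while not stop.is_set():
        read()          # tight read loop

readers = [threading.Thread(target=reader) for _ in range(4)]
for t in readers:
    t.start()
for i in range(10_000):
    write(i)            # the writer must still make timely progress
stop.set()
for t in readers:
    t.join()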
Co-authored-by: Donghee Na <donghee.na@python.org>
* Rename 'defined' attribute to 'in_local' to more accurately reflect how it is used
* Make death of variables explicit even for array variables.
* Convert in_memory from boolean to stack offset
* Don't apply liveness analysis to optimizer generated code
* Fix RETURN_VALUE in optimizer
This tells TSAN not to sanitize `PyUnstable_InterpreterFrame_GetLine()`.
There's a possible data race on the access to the frame's `instr_ptr`
if the frame is currently executing. We don't really care about the
race. In theory, we could use relaxed atomics for every access to
`instr_ptr`, but that would create more code churn and current compilers
are overly conservative with optimizations around relaxed atomic
accesses.
We also don't sanitize `_PyFrame_IsIncomplete()` because it accesses
`instr_ptr` and is called from assertions within `PyFrame_GetCode()`.