Issue #28485: Check for negative workers even without ProcessPoolExecutor

This matches the documentation, and it lets the test suite pass when
multithreading is disabled.
Martin Panter 2016-11-05 01:11:36 +00:00
parent da4887a88d
commit 88281ceed0
2 changed files with 7 additions and 3 deletions
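For context, a minimal sketch of the behaviour this change guarantees (the
directory path is a hypothetical placeholder; the check now runs before the
directory is even scanned, so any argument would do):

import compileall

# A negative ``workers`` value is now rejected up front, whether or not
# concurrent.futures.ProcessPoolExecutor was importable on this platform.
try:
    compileall.compile_dir('some/package/dir', workers=-1)  # hypothetical path
except ValueError as exc:
    print(exc)  # -> workers must be greater or equal to 0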

Lib/compileall.py

@@ -66,13 +66,13 @@ def compile_dir(dir, maxlevels=10, ddir=None, force=False, rx=None,
     optimize: optimization level or -1 for level of the interpreter
     workers: maximum number of parallel workers
     """
+    if workers is not None and workers < 0:
+        raise ValueError('workers must be greater or equal to 0')
     files = _walk_dir(dir, quiet=quiet, maxlevels=maxlevels,
                       ddir=ddir)
     success = 1
     if workers is not None and workers != 1 and ProcessPoolExecutor is not None:
-        if workers < 0:
-            raise ValueError('workers must be greater or equal to 0')
         workers = workers or None
         with ProcessPoolExecutor(max_workers=workers) as executor:
            results = executor.map(partial(compile_file,
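To see why hoisting the check matters, here is a test-style sketch (not the
actual test from Lib/test/test_compileall.py, and the directory argument is a
placeholder) that simulates an interpreter without multiprocessing by patching
the module-level ProcessPoolExecutor fallback to None:

import compileall
import unittest
from unittest import mock


class NegativeWorkersTest(unittest.TestCase):
    def test_value_error_without_executor(self):
        # compileall sets ProcessPoolExecutor = None when concurrent.futures
        # cannot be imported; patching it reproduces that situation.
        with mock.patch.object(compileall, 'ProcessPoolExecutor', None):
            with self.assertRaises(ValueError):
                compileall.compile_dir('any/dir', quiet=2, workers=-1)


if __name__ == '__main__':
    unittest.main()

Before this commit, the patched case silently accepted workers=-1 because the
check lived inside the ProcessPoolExecutor branch; afterwards it raises
ValueError in both configurations.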

Misc/NEWS

@@ -113,6 +113,10 @@ Core and Builtins
 Library
 -------
 
+- Issue #28485: Always raise ValueError for negative
+  compileall.compile_dir(workers=...) parameter, even when multithreading is
+  unavailable.
+
 - Issue #28387: Fixed possible crash in _io.TextIOWrapper deallocator when
   the garbage collector is invoked in other thread. Based on patch by
   Sebastian Cufre.