GH-84559: Deprecate fork being the multiprocessing default. (#100618)

This starts the process. Users who don't specify their own start method
and use the default on platforms where it is 'fork' will see a
DeprecationWarning upon multiprocessing.Pool() construction, upon
multiprocessing.Process.start(), or upon concurrent.futures.ProcessPoolExecutor use.

See the related issue and documentation within this change for details.
Author: Gregory P. Smith, 2023-02-02 15:50:35 -08:00 (committed by GitHub)
parent 618b7a8260
commit 0ca67e6313
16 changed files with 284 additions and 63 deletions
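In practice, code that today relies on the implicit platform default can silence the new warning by naming a start method explicitly. A minimal sketch ('spawn' is shown as the portable choice; any explicit method works, and `square` is just an illustrative worker):

```python
import multiprocessing

# An explicitly chosen context never triggers the implicit-'fork'
# deprecation warning introduced by this commit.
ctx = multiprocessing.get_context("spawn")

def square(n):
    return n * n

if __name__ == "__main__":
    # Pool created from the explicit context, not the global default.
    with ctx.Pool(processes=2) as pool:
        print(pool.map(square, [1, 2, 3]))
```

The same pattern applies to `ctx.Process(...)` and, via `mp_context=`, to `concurrent.futures.ProcessPoolExecutor`.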


@@ -250,9 +250,10 @@ to a :class:`ProcessPoolExecutor` will result in deadlock.

   then :exc:`ValueError` will be raised.  If *max_workers* is ``None``, then
   the default chosen will be at most ``61``, even if more processors are
   available.
   *mp_context* can be a :mod:`multiprocessing` context or ``None``. It will be
   used to launch the workers. If *mp_context* is ``None`` or not given, the
   default :mod:`multiprocessing` context is used.
   See :ref:`multiprocessing-start-methods`.

   *initializer* is an optional callable that is called at the start of
   each worker process; *initargs* is a tuple of arguments passed to the

@@ -284,6 +285,13 @@ to a :class:`ProcessPoolExecutor` will result in deadlock.
      The *max_tasks_per_child* argument was added to allow users to
      control the lifetime of workers in the pool.

   .. versionchanged:: 3.12
      The implicit use of the :mod:`multiprocessing` *fork* start method as a
      platform default (see :ref:`multiprocessing-start-methods`) now raises a
      :exc:`DeprecationWarning`.  The default will change in Python 3.14.
      Code that requires *fork* should explicitly specify that when creating
      their :class:`ProcessPoolExecutor` by passing a
      ``mp_context=multiprocessing.get_context('fork')`` parameter.

.. _processpoolexecutor-example:
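The escape hatch documented in that versionchanged note can be sketched as follows ('spawn' is used here instead of 'fork' as the safer illustration; `cube` is an arbitrary worker function):

```python
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

# Passing mp_context= opts out of the deprecated implicit default.
ctx = multiprocessing.get_context("spawn")

def cube(n):
    return n ** 3

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as ex:
        print(list(ex.map(cube, [1, 2, 3])))
```

Applications that genuinely need *fork* semantics would pass `multiprocessing.get_context('fork')` instead, which suppresses the warning because the context is no longer the implicit default.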


@@ -19,7 +19,7 @@ offers both local and remote concurrency, effectively side-stepping the

:term:`Global Interpreter Lock <global interpreter lock>` by using
subprocesses instead of threads.  Due
to this, the :mod:`multiprocessing` module allows the programmer to fully
leverage multiple processors on a given machine.  It runs on both POSIX and
Windows.

The :mod:`multiprocessing` module also introduces APIs which do not have

@@ -99,11 +99,11 @@ necessary, see :ref:`multiprocessing-programming`.

.. _multiprocessing-start-methods:

Contexts and start methods
~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on the platform, :mod:`multiprocessing` supports three ways
to start a process.  These *start methods* are

@@ -115,7 +115,7 @@ to start a process. These *start methods* are

   will not be inherited.  Starting a process using this method is
   rather slow compared to using *fork* or *forkserver*.

   Available on POSIX and Windows platforms.  The default on Windows and macOS.

*fork*
   The parent process uses :func:`os.fork` to fork the Python

@@ -124,32 +124,39 @@ to start a process. These *start methods* are

   inherited by the child process.  Note that safely forking a
   multithreaded process is problematic.

   Available on POSIX systems.  Currently the default on POSIX except macOS.

*forkserver*
   When the program starts and selects the *forkserver* start method,
   a server process is spawned.  From then on, whenever a new process
   is needed, the parent process connects to the server and requests
   that it fork a new process.  The fork server process is single threaded
   unless system libraries or preloaded imports spawn threads as a
   side-effect so it is generally safe for it to use :func:`os.fork`.
   No unnecessary resources are inherited.

   Available on POSIX platforms which support passing file descriptors
   over Unix pipes such as Linux.

.. versionchanged:: 3.12
   Implicit use of the *fork* start method as the default now raises a
   :exc:`DeprecationWarning`.  Code that requires it should explicitly
   specify *fork* via :func:`get_context` or :func:`set_start_method`.
   The default will change away from *fork* in 3.14.
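The opt-in described in that note looks like this in practice (a minimal sketch; 'fork' is shown because it is the method being deprecated as an implicit default, and it is POSIX-only):

```python
import multiprocessing

# Explicitly choose the start method instead of relying on the
# implicit platform default; this avoids the 3.12 DeprecationWarning.
# 'fork' is only available on POSIX systems.
ctx = multiprocessing.get_context("fork")

def child():
    print("running in child")

if __name__ == "__main__":
    p = ctx.Process(target=child)
    p.start()
    p.join()
```

Equivalently, `multiprocessing.set_start_method('fork')` may be called once, early in the main module, to set the process-wide default.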
.. versionchanged:: 3.8
   On macOS, the *spawn* start method is now the default.  The *fork* start
   method should be considered unsafe as it can lead to crashes of the
   subprocess as macOS system libraries may start threads.  See :issue:`33725`.

.. versionchanged:: 3.4
   *spawn* added on all POSIX platforms, and *forkserver* added for
   some POSIX platforms.
   Child processes no longer inherit all of the parents inheritable
   handles on Windows.

On POSIX using the *spawn* or *forkserver* start methods will also
start a *resource tracker* process which tracks the unlinked named
system resources (such as named semaphores or
:class:`~multiprocessing.shared_memory.SharedMemory` objects) created

@@ -211,10 +218,10 @@ library user.

.. warning::

   The ``'spawn'`` and ``'forkserver'`` start methods generally cannot
   be used with "frozen" executables (i.e., binaries produced by
   packages like **PyInstaller** and **cx_Freeze**) on POSIX systems.
   The ``'fork'`` start method may work if code does not use threads.

Exchanging objects between processes
@@ -629,14 +636,14 @@ The :mod:`multiprocessing` package mostly replicates the API of the

      calling :meth:`join()` is simpler.

      On Windows, this is an OS handle usable with the ``WaitForSingleObject``
      and ``WaitForMultipleObjects`` family of API calls.  On POSIX, this is
      a file descriptor usable with primitives from the :mod:`select` module.

      .. versionadded:: 3.3

   .. method:: terminate()

      Terminate the process.  On POSIX this is done using the ``SIGTERM`` signal;
      on Windows :c:func:`TerminateProcess` is used.  Note that exit handlers and
      finally clauses, etc., will not be executed.

@@ -653,7 +660,7 @@ The :mod:`multiprocessing` package mostly replicates the API of the

   .. method:: kill()

      Same as :meth:`terminate()` but using the ``SIGKILL`` signal on POSIX.

      .. versionadded:: 3.7

@@ -676,16 +683,17 @@ The :mod:`multiprocessing` package mostly replicates the API of the

   .. doctest::

      >>> import multiprocessing, time, signal
      >>> mp_context = multiprocessing.get_context('spawn')
      >>> p = mp_context.Process(target=time.sleep, args=(1000,))
      >>> print(p, p.is_alive())
      <...Process ... initial> False
      >>> p.start()
      >>> print(p, p.is_alive())
      <...Process ... started> True
      >>> p.terminate()
      >>> time.sleep(0.1)
      >>> print(p, p.is_alive())
      <...Process ... stopped exitcode=-SIGTERM> False
      >>> p.exitcode == -signal.SIGTERM
      True
@@ -815,7 +823,7 @@ For an example of the usage of queues for interprocess communication see

      Return the approximate size of the queue.  Because of
      multithreading/multiprocessing semantics, this number is not reliable.

      Note that this may raise :exc:`NotImplementedError` on platforms like
      macOS where ``sem_getvalue()`` is not implemented.

   .. method:: empty()

@@ -1034,9 +1042,8 @@ Miscellaneous

   Returns a list of the supported start methods, the first of which
   is the default.  The possible start methods are ``'fork'``,
   ``'spawn'`` and ``'forkserver'``.  Not all platforms support all
   methods.  See :ref:`multiprocessing-start-methods`.

   .. versionadded:: 3.4

@@ -1048,7 +1055,7 @@ Miscellaneous

   If *method* is ``None`` then the default context is returned.
   Otherwise *method* should be ``'fork'``, ``'spawn'``,
   ``'forkserver'``.  :exc:`ValueError` is raised if the specified
   start method is not available.  See :ref:`multiprocessing-start-methods`.

   .. versionadded:: 3.4

@@ -1062,8 +1069,7 @@ Miscellaneous

   is true then ``None`` is returned.

   The return value can be ``'fork'``, ``'spawn'``, ``'forkserver'``
   or ``None``.  See :ref:`multiprocessing-start-methods`.

   .. versionchanged:: 3.8

@@ -1084,11 +1090,26 @@ Miscellaneous

   before they can create child processes.

   .. versionchanged:: 3.4
      Now supported on POSIX when the ``'spawn'`` start method is used.

   .. versionchanged:: 3.11
      Accepts a :term:`path-like object`.

.. function:: set_forkserver_preload(module_names)

   Set a list of module names for the forkserver main process to attempt to
   import so that their already imported state is inherited by forked
   processes.  Any :exc:`ImportError` when doing so is silently ignored.
   This can be used as a performance enhancement to avoid repeated work
   in every process.

   For this to work, it must be called before the forkserver process has been
   launched (before creating a :class:`Pool` or starting a :class:`Process`).

   Only meaningful when using the ``'forkserver'`` start method.

   .. versionadded:: 3.4
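The preload hook documented above can be used like this (a sketch assuming a POSIX platform where 'forkserver' is available; `json` and `decimal` are arbitrary stand-ins for expensive-to-import modules):

```python
import multiprocessing

# POSIX-only: the forkserver pre-imports these modules once, and every
# worker forked from it inherits the already-imported state.
ctx = multiprocessing.get_context("forkserver")
ctx.set_forkserver_preload(["json", "decimal"])

def work():
    import json  # already imported in the forkserver parent
    print(json.dumps({"ok": True}))

if __name__ == "__main__":
    p = ctx.Process(target=work)
    p.start()
    p.join()
```

Note the call must happen before any process is started, since the forkserver is launched lazily on first use.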
.. function:: set_start_method(method, force=False)

   Set the method which should be used to start child processes.

@@ -1102,6 +1123,8 @@ Miscellaneous

   protected inside the ``if __name__ == '__main__'`` clause of the
   main module.

   See :ref:`multiprocessing-start-methods`.

   .. versionadded:: 3.4

   .. note::

@@ -1906,7 +1929,8 @@ their parent process exits. The manager classes are defined in the

.. doctest::

   >>> mp_context = multiprocessing.get_context('spawn')
   >>> manager = mp_context.Manager()
   >>> Global = manager.Namespace()
   >>> Global.x = 10
   >>> Global.y = 'hello'

@@ -2018,8 +2042,8 @@ the proxy). In this way, a proxy can be used just like its referent can:

.. doctest::

   >>> mp_context = multiprocessing.get_context('spawn')
   >>> manager = mp_context.Manager()
   >>> l = manager.list([i*i for i in range(10)])
   >>> print(l)
   [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
@@ -2520,7 +2544,7 @@ multiple connections at the same time.

   *timeout* is ``None`` then it will block for an unlimited period.
   A negative timeout is equivalent to a zero timeout.

   For both POSIX and Windows, an object can appear in *object_list* if
   it is

   * a readable :class:`~multiprocessing.connection.Connection` object;

@@ -2531,7 +2555,7 @@ multiple connections at the same time.

   A connection or socket object is ready when there is data available
   to be read from it, or the other end has been closed.

   **POSIX**: ``wait(object_list, timeout)`` is almost equivalent to
   ``select.select(object_list, [], [], timeout)``.  The difference is
   that, if :func:`select.select` is interrupted by a signal, it can
   raise :exc:`OSError` with an error number of ``EINTR``, whereas

@@ -2803,7 +2827,7 @@ Thread safety of proxies

Joining zombie processes
   On POSIX when a process finishes but has not been joined it becomes a zombie.
   There should never be very many because each time a new process starts (or
   :func:`~multiprocessing.active_children` is called) all completed processes
   which have not yet been joined will be joined.  Also calling a finished

@@ -2866,7 +2890,7 @@ Joining processes that use queues

Explicitly pass resources to child processes
   On POSIX using the *fork* start method, a child process can make
   use of a shared resource created in a parent process using a
   global resource.  However, it is better to pass the object as an
   argument to the constructor for the child process.


@@ -440,6 +440,11 @@ Deprecated

  warning at compile time.  This field will be removed in Python 3.14.
  (Contributed by Ramvikrams and Kumar Aditya in :gh:`101193`.  PEP by Ken Jin.)

* Use of the implicit default ``'fork'`` start method for
  :mod:`multiprocessing` and :class:`concurrent.futures.ProcessPoolExecutor`
  now emits a :exc:`DeprecationWarning` on Linux and other non-macOS POSIX
  systems.  Avoid this by explicitly specifying a start method.
  See :ref:`multiprocessing-start-methods`.

Pending Removal in Python 3.13
------------------------------

@@ -505,6 +510,9 @@ Pending Removal in Python 3.14

* Testing the truth value of an :class:`xml.etree.ElementTree.Element`
  is deprecated and will raise an exception in Python 3.14.

* The default :mod:`multiprocessing` start method will change to one of either
  ``'forkserver'`` or ``'spawn'`` on all platforms for which ``'fork'`` remains
  the default per :gh:`84559`.
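Code that wants identical behavior both before and after the planned 3.14 default change can pin a method up front. A sketch (the forkserver-then-spawn fallback chain here is an illustrative assumption, not something this commit prescribes):

```python
import multiprocessing

# Pick an explicit start method so behavior does not depend on the
# platform default, today or after Python 3.14 changes it.
available = multiprocessing.get_all_start_methods()
method = "forkserver" if "forkserver" in available else "spawn"
ctx = multiprocessing.get_context(method)
```

Every platform supports `'spawn'`, so the fallback always succeeds.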
Pending Removal in Future Versions
----------------------------------


@@ -97,9 +97,15 @@ def compile_dir(dir, maxlevels=None, ddir=None, force=False,

    files = _walk_dir(dir, quiet=quiet, maxlevels=maxlevels)
    success = True
    if workers != 1 and ProcessPoolExecutor is not None:
        import multiprocessing
        if multiprocessing.get_start_method() == 'fork':
            mp_context = multiprocessing.get_context('forkserver')
        else:
            mp_context = None
        # If workers == 0, let ProcessPoolExecutor choose
        workers = workers or None
        with ProcessPoolExecutor(max_workers=workers,
                                 mp_context=mp_context) as executor:
            results = executor.map(partial(compile_file,
                                           ddir=ddir, force=force,
                                           rx=rx, quiet=quiet,


@@ -57,6 +57,7 @@ from functools import partial

import itertools
import sys
from traceback import format_exception
import warnings

_threads_wakeups = weakref.WeakKeyDictionary()

@@ -616,9 +617,9 @@ class ProcessPoolExecutor(_base.Executor):

            max_workers: The maximum number of processes that can be used to
                execute the given calls. If None or not given then as many
                worker processes will be created as the machine has processors.
            mp_context: A multiprocessing context to launch the workers created
                using the multiprocessing.get_context('start method') API. This
                object should provide SimpleQueue, Queue and Process.
            initializer: A callable used to initialize worker processes.
            initargs: A tuple of arguments to pass to the initializer.
            max_tasks_per_child: The maximum number of tasks a worker process

@@ -650,6 +651,22 @@ class ProcessPoolExecutor(_base.Executor):

                mp_context = mp.get_context("spawn")
            else:
                mp_context = mp.get_context()
            if (mp_context.get_start_method() == "fork" and
                    mp_context == mp.context._default_context._default_context):
                warnings.warn(
                    "The default multiprocessing start method will change "
                    "away from 'fork' in Python >= 3.14, per GH-84559. "
                    "ProcessPoolExecutor uses multiprocessing. "
                    "If your application requires the 'fork' multiprocessing "
                    "start method, explicitly specify that by passing a "
                    "mp_context= parameter. "
                    "The safest start method is 'spawn'.",
                    category=mp.context.DefaultForkDeprecationWarning,
                    stacklevel=2,
                )
                # Avoid the equivalent warning from multiprocessing itself via
                # a non-default fork context.
                mp_context = mp.get_context("fork")
        self._mp_context = mp_context

        # https://github.com/python/cpython/issues/90622


@@ -23,6 +23,9 @@ class TimeoutError(ProcessError):

class AuthenticationError(ProcessError):
    pass

class DefaultForkDeprecationWarning(DeprecationWarning):
    pass

#
# Base type for contexts. Bound methods of an instance of this type are included in __all__ of __init__.py
#

@@ -258,6 +261,7 @@ class DefaultContext(BaseContext):

        return self._actual_context._name

    def get_all_start_methods(self):
        """Returns a list of the supported start methods, default first."""
        if sys.platform == 'win32':
            return ['spawn']
        else:

@@ -280,6 +284,23 @@ if sys.platform != 'win32':

            from .popen_fork import Popen
            return Popen(process_obj)

    _warn_package_prefixes = (os.path.dirname(__file__),)

    class _DeprecatedForkProcess(ForkProcess):
        @classmethod
        def _Popen(cls, process_obj):
            import warnings
            warnings.warn(
                "The default multiprocessing start method will change "
                "away from 'fork' in Python >= 3.14, per GH-84559. "
                "Use multiprocessing.get_context(X) or .set_start_method(X) to "
                "explicitly specify it when your application requires 'fork'. "
                "The safest start method is 'spawn'.",
                category=DefaultForkDeprecationWarning,
                skip_file_prefixes=_warn_package_prefixes,
            )
            return super()._Popen(process_obj)

    class SpawnProcess(process.BaseProcess):
        _start_method = 'spawn'
        @staticmethod

@@ -303,6 +324,9 @@ if sys.platform != 'win32':

        _name = 'fork'
        Process = ForkProcess

    class _DefaultForkContext(ForkContext):
        Process = _DeprecatedForkProcess

    class SpawnContext(BaseContext):
        _name = 'spawn'
        Process = SpawnProcess

@@ -318,13 +342,16 @@ if sys.platform != 'win32':

        'fork': ForkContext(),
        'spawn': SpawnContext(),
        'forkserver': ForkServerContext(),
        # Remove None and _DefaultForkContext() when changing the default
        # in 3.14 for https://github.com/python/cpython/issues/84559.
        None: _DefaultForkContext(),
    }
    if sys.platform == 'darwin':
        # bpo-33725: running arbitrary code after fork() is no longer reliable
        # on macOS since macOS 10.14 (Mojave). Use spawn by default instead.
        _default_context = DefaultContext(_concrete_contexts['spawn'])
    else:
        _default_context = DefaultContext(_concrete_contexts[None])
else:


@@ -4098,9 +4098,10 @@ class _TestSharedMemory(BaseTestCase):

    def test_shared_memory_SharedMemoryManager_reuses_resource_tracker(self):
        # bpo-36867: test that a SharedMemoryManager uses the
        # same resource_tracker process as its parent.
        cmd = f'''if 1:
            from multiprocessing.managers import SharedMemoryManager
            from multiprocessing import set_start_method
            set_start_method({multiprocessing.get_start_method()!r})

            smm = SharedMemoryManager()
            smm.start()

@@ -4967,11 +4968,13 @@ class TestFlags(unittest.TestCase):

            conn.send(tuple(sys.flags))

    @classmethod
    def run_in_child(cls, start_method):
        import json
        mp = multiprocessing.get_context(start_method)
        r, w = mp.Pipe(duplex=False)
        p = mp.Process(target=cls.run_in_grandchild, args=(w,))
        with warnings.catch_warnings(category=DeprecationWarning):
            p.start()
        grandchild_flags = r.recv()
        p.join()
        r.close()

@@ -4982,8 +4985,10 @@ class TestFlags(unittest.TestCase):

    def test_flags(self):
        import json
        # start child process using unusual flags
        prog = (
            'from test._test_multiprocessing import TestFlags; '
            f'TestFlags.run_in_child({multiprocessing.get_start_method()!r})'
        )
        data = subprocess.check_output(
            [sys.executable, '-E', '-S', '-O', '-c', prog])
        child_flags, grandchild_flags = json.loads(data.decode('ascii'))


@@ -30,6 +30,7 @@ def test_func():

def main():
    multiprocessing.set_start_method('spawn')
    test_pool = multiprocessing.Process(target=test_func)
    test_pool.start()
    test_pool.join()


@@ -4,6 +4,7 @@ import collections.abc

import concurrent.futures
import functools
import io
import multiprocessing
import os
import platform
import re

@@ -2762,7 +2763,13 @@ class GetEventLoopTestsMixin:

        support.skip_if_broken_multiprocessing_synchronize()

        async def main():
            if multiprocessing.get_start_method() == 'fork':
                # Avoid 'fork' DeprecationWarning.
                mp_context = multiprocessing.get_context('forkserver')
            else:
                mp_context = None
            pool = concurrent.futures.ProcessPoolExecutor(
                mp_context=mp_context)
            result = await self.loop.run_in_executor(
                pool, _test_get_event_loop_new_process__sub_proc)
            pool.shutdown()


@@ -18,6 +18,7 @@ import sys

import threading
import time
import unittest
import warnings
import weakref
from pickle import PicklingError

@@ -571,6 +572,24 @@ class ProcessPoolShutdownTest(ExecutorShutdownTest):

        assert all([r == abs(v) for r, v in zip(res, range(-5, 5))])

@unittest.skipIf(mp.get_all_start_methods()[0] != "fork", "non-fork default.")
class ProcessPoolExecutorDefaultForkWarning(unittest.TestCase):
    def test_fork_default_warns(self):
        with self.assertWarns(mp.context.DefaultForkDeprecationWarning):
            with futures.ProcessPoolExecutor(2):
                pass

    def test_explicit_fork_does_not_warn(self):
        with warnings.catch_warnings(record=True) as ws:
            warnings.simplefilter("ignore")
            warnings.filterwarnings(
                'always', category=mp.context.DefaultForkDeprecationWarning)
            ctx = mp.get_context("fork")  # Non-default fork context.
            with futures.ProcessPoolExecutor(2, mp_context=ctx):
                pass
        self.assertEqual(len(ws), 0, msg=[str(x) for x in ws])

create_executor_tests(ProcessPoolShutdownTest,
                      executor_mixins=(ProcessPoolForkMixin,
                                       ProcessPoolForkserverMixin,


@@ -1,11 +1,11 @@

"""Test program for the fcntl C module.
"""
import multiprocessing
import platform
import os
import struct
import sys
import unittest
from test.support import verbose, cpython_only
from test.support.import_helper import import_module
from test.support.os_helper import TESTFN, unlink

@@ -160,7 +160,8 @@ class TestFcntl(unittest.TestCase):

        self.f = open(TESTFN, 'wb+')
        cmd = fcntl.LOCK_EX | fcntl.LOCK_NB
        fcntl.lockf(self.f, cmd)
        mp = multiprocessing.get_context('spawn')
        p = mp.Process(target=try_lockf_on_other_process_fail, args=(TESTFN, cmd))
        p.start()
        p.join()
        fcntl.lockf(self.f, fcntl.LOCK_UN)

@@ -171,7 +172,8 @@ class TestFcntl(unittest.TestCase):

        self.f = open(TESTFN, 'wb+')
        cmd = fcntl.LOCK_SH | fcntl.LOCK_NB
        fcntl.lockf(self.f, cmd)
        mp = multiprocessing.get_context('spawn')
        p = mp.Process(target=try_lockf_on_other_process, args=(TESTFN, cmd))
        p.start()
        p.join()
        fcntl.lockf(self.f, fcntl.LOCK_UN)


@@ -4759,8 +4759,9 @@ class LogRecordTest(BaseTest):

        # In other processes, processName is correct when multiprocessing is imported,
        # but it is (incorrectly) defaulted to 'MainProcess' otherwise (bpo-38762).
        import multiprocessing
        mp = multiprocessing.get_context('spawn')
        parent_conn, child_conn = mp.Pipe()
        p = mp.Process(
            target=self._extract_logrecord_process_name,
            args=(2, LOG_MULTI_PROCESSING, child_conn,)
        )

View file

@@ -0,0 +1,82 @@
"""Test default behavior of multiprocessing."""
from inspect import currentframe, getframeinfo
import multiprocessing
from multiprocessing.context import DefaultForkDeprecationWarning
import sys
from test.support import threading_helper
import unittest
import warnings


def do_nothing():
    pass


# Process has the same API as Thread so this helper works.
join_process = threading_helper.join_thread


class DefaultWarningsTest(unittest.TestCase):
    @unittest.skipIf(sys.platform in ('win32', 'darwin'),
                     'The default is not "fork" on Windows or macOS.')
    def setUp(self):
        self.assertEqual(multiprocessing.get_start_method(), 'fork')
        self.assertIsInstance(multiprocessing.get_context(),
                              multiprocessing.context._DefaultForkContext)

    def test_default_fork_start_method_warning_process(self):
        with warnings.catch_warnings(record=True) as ws:
            warnings.simplefilter('ignore')
            warnings.filterwarnings('always', category=DefaultForkDeprecationWarning)
            process = multiprocessing.Process(target=do_nothing)
            process.start()  # warning should point here.
        join_process(process)
        self.assertIsInstance(ws[0].message, DefaultForkDeprecationWarning)
        self.assertIn(__file__, ws[0].filename)
        self.assertEqual(getframeinfo(currentframe()).lineno-4, ws[0].lineno)
        self.assertIn("'fork'", str(ws[0].message))
        self.assertIn("get_context", str(ws[0].message))
        self.assertEqual(len(ws), 1, msg=[str(x) for x in ws])

    def test_default_fork_start_method_warning_pool(self):
        with warnings.catch_warnings(record=True) as ws:
            warnings.simplefilter('ignore')
            warnings.filterwarnings('always', category=DefaultForkDeprecationWarning)
            pool = multiprocessing.Pool(1)  # warning should point here.
        pool.terminate()
        pool.join()
        self.assertIsInstance(ws[0].message, DefaultForkDeprecationWarning)
        self.assertIn(__file__, ws[0].filename)
        self.assertEqual(getframeinfo(currentframe()).lineno-5, ws[0].lineno)
        self.assertIn("'fork'", str(ws[0].message))
        self.assertIn("get_context", str(ws[0].message))
        self.assertEqual(len(ws), 1, msg=[str(x) for x in ws])

    def test_default_fork_start_method_warning_manager(self):
        with warnings.catch_warnings(record=True) as ws:
            warnings.simplefilter('ignore')
            warnings.filterwarnings('always', category=DefaultForkDeprecationWarning)
            manager = multiprocessing.Manager()  # warning should point here.
        manager.shutdown()
        self.assertIsInstance(ws[0].message, DefaultForkDeprecationWarning)
        self.assertIn(__file__, ws[0].filename)
        self.assertEqual(getframeinfo(currentframe()).lineno-4, ws[0].lineno)
        self.assertIn("'fork'", str(ws[0].message))
        self.assertIn("get_context", str(ws[0].message))
        self.assertEqual(len(ws), 1, msg=[str(x) for x in ws])

    def test_no_mp_warning_when_using_explicit_fork_context(self):
        with warnings.catch_warnings(record=True) as ws:
            warnings.simplefilter('ignore')
            warnings.filterwarnings('always', category=DefaultForkDeprecationWarning)
            fork_mp = multiprocessing.get_context('fork')
            pool = fork_mp.Pool(1)
            pool.terminate()
            pool.join()
        self.assertEqual(len(ws), 0, msg=[str(x) for x in ws])


if __name__ == '__main__':
    unittest.main()

View file

@@ -533,6 +533,8 @@ class CompatPickleTests(unittest.TestCase):
     def test_multiprocessing_exceptions(self):
         module = import_helper.import_module('multiprocessing.context')
         for name, exc in get_exceptions(module):
+            if issubclass(exc, Warning):
+                continue
             with self.subTest(name):
                 self.assertEqual(reverse_mapping('multiprocessing.context', name),
                                  ('multiprocessing', name))

View file

@@ -2431,7 +2431,8 @@ class ReTests(unittest.TestCase):
         input_js = '''a(function() {
             ///////////////////////////////////////////////////////////////////
         });'''
-        p = multiprocessing.Process(target=pattern.sub, args=('', input_js))
+        mp = multiprocessing.get_context('spawn')
+        p = mp.Process(target=pattern.sub, args=('', input_js))
         p.start()
         p.join(SHORT_TIMEOUT)
         try:

View file

@@ -0,0 +1,11 @@
The :mod:`multiprocessing` module and
:class:`concurrent.futures.ProcessPoolExecutor` will emit a
:exc:`DeprecationWarning` on Linux and other non-macOS POSIX systems when
the default multiprocessing start method of ``'fork'`` is used implicitly
rather than being explicitly specified through a
:func:`multiprocessing.get_context` context.
This is in preparation for the default start method to change in Python
3.14 to one that is safe for multithreaded applications.
Windows and macOS are unaffected, as their default start method is ``'spawn'``.
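As the NEWS entry above describes, the warning only fires when the ``'fork'`` default is used *implicitly*; requesting a start method explicitly avoids it. A minimal sketch of the opt-in pattern (the ``square`` helper is purely illustrative, not part of this change):

```python
import multiprocessing


def square(x):
    return x * x


if __name__ == '__main__':
    # Explicitly request a start method instead of relying on the
    # platform default; this never triggers the deprecation warning.
    ctx = multiprocessing.get_context('spawn')
    with ctx.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```

The same ``ctx`` object exposes ``Process``, ``Pipe``, ``Manager``, and the rest of the :mod:`multiprocessing` API, which is exactly how the test files in this commit were updated.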