sklearn.utils.parallel_backend

sklearn.utils.parallel_backend(backend, n_jobs=-1, **backend_params)

Change the default backend used by Parallel inside a with block.

If backend is a string it must match a previously registered implementation using the register_parallel_backend function.

By default the following backends are available:

  • ‘loky’: single-host, process-based parallelism (used by default),
  • ‘threading’: single-host, thread-based parallelism,
  • ‘multiprocessing’: legacy single-host, process-based parallelism.

‘loky’ is recommended for running functions that manipulate Python objects. ‘threading’ is a low-overhead alternative that is most efficient for functions that release the Global Interpreter Lock (GIL): e.g. I/O-bound code, or CPU-bound code that spends most of its time inside a few calls to native code that explicitly release the GIL.
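As a minimal sketch of that distinction (the toy functions io_like_task and cpu_task below are illustrative, not part of scikit-learn): time.sleep releases the GIL much like waiting on I/O does, so the threading backend can overlap such calls cheaply, whereas pure-Python CPU work is better left to ‘loky’:

    # Sketch: 'threading' for GIL-releasing (I/O-like) work,
    # 'loky' for pure-Python CPU work.
    import time

    from joblib import Parallel, delayed
    from sklearn.utils import parallel_backend


    def io_like_task(i):
        # time.sleep releases the GIL, so threads overlap the waits cheaply.
        time.sleep(0.1)
        return i


    def cpu_task(i):
        # Pure-Python arithmetic holds the GIL; process-based 'loky' scales better.
        return sum(j * j for j in range(10000)) + i


    with parallel_backend('threading', n_jobs=4):
        io_results = Parallel()(delayed(io_like_task)(i) for i in range(8))

    with parallel_backend('loky', n_jobs=4):
        cpu_results = Parallel()(delayed(cpu_task)(i) for i in range(8))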

In addition, if the dask and distributed Python packages are installed, it is possible to use the ‘dask’ backend for better scheduling of nested parallel calls without over-subscription, and to potentially distribute parallel calls over a networked cluster of several hosts.
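A minimal sketch, assuming the optional dask and distributed packages are installed (the in-process Client below is only for illustration; in practice you would connect to an existing cluster):

    # Sketch only: requires the optional dask and distributed packages.
    from dask.distributed import Client   # importing this registers the 'dask' backend
    from joblib import Parallel, delayed
    from sklearn.utils import parallel_backend

    client = Client(processes=False)       # small in-process cluster for illustration

    with parallel_backend('dask'):
        # Parallel calls (including nested ones made by library code) are
        # scheduled on the dask cluster attached to the active client.
        results = Parallel()(delayed(abs)(-i) for i in range(5))

    client.close()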

Alternatively the backend can be passed directly as an instance.

By default all available workers will be used (n_jobs=-1) unless the caller passes an explicit value for the n_jobs parameter.

This is an alternative to passing a backend='backend_name' argument to the Parallel class constructor. It is particularly useful when calling into library code that uses joblib internally but does not expose the backend argument in its own API.
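For instance, a minimal sketch using GridSearchCV, which exposes an n_jobs parameter but no backend argument of its own; the context manager decides which backend its internal joblib calls use:

    # Sketch: steering the backend of a library's internal joblib calls.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.utils import parallel_backend

    X, y = load_iris(return_X_y=True)
    search = GridSearchCV(LogisticRegression(max_iter=200),
                          {'C': [0.1, 1.0, 10.0]}, n_jobs=2)

    with parallel_backend('threading'):
        # The cross-validation fits inside fit() now run on the threading backend.
        search.fit(X, y)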

>>> from operator import neg
>>> from joblib import Parallel, delayed
>>> from sklearn.utils import parallel_backend
>>> with parallel_backend('threading'):
...     print(Parallel()(delayed(neg)(i + 1) for i in range(5)))
...
[-1, -2, -3, -4, -5]

Warning: this function is experimental and subject to change in a future version of joblib.

New in version 0.10.