from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
from urllib.request import urlopen
from time import perf_counter

def work(n):
    # Fetch the first 32 bytes of the page; the URL fragment just makes each request distinct
    with urlopen(f"https://www.google.com/#{n}") as f:
        contents = f.read(32)
    return contents

def run_pool(pool_type):
    with pool_type() as pool:
        start = perf_counter()
        results = pool.map(work, numbers)
        print("Time:", perf_counter() - start)
        # pool.map returns a generator; consume it to collect the results
        print([_ for _ in results])

if __name__ == '__main__':
    numbers = [x for x in range(1, 16)]
    # Run the task using a thread pool
    run_pool(ThreadPoolExecutor)
    # Run the task using a process pool
    run_pool(ProcessPoolExecutor)
How Python multiprocessing works
In the above example, the concurrent.futures module provides high-level pool objects for running work in threads (ThreadPoolExecutor) and processes (ProcessPoolExecutor). Both pool types have the same API, so you can create functions that work interchangeably with either one, as the example shows.
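Both classes derive from the concurrent.futures.Executor base class, so any helper written against that shared interface runs unchanged with either pool. Here is a minimal sketch of the idea; the square and run_jobs names are illustrations, not part of the example above:

from concurrent.futures import Executor, ThreadPoolExecutor, ProcessPoolExecutor

def square(x):
    # Trivial stand-in for real work
    return x * x

def run_jobs(pool: Executor):
    # submit() schedules one call and returns a Future;
    # result() blocks until that call has completed
    futures = [pool.submit(square, n) for n in range(5)]
    return [f.result() for f in futures]

if __name__ == '__main__':
    with ThreadPoolExecutor() as pool:
        print(run_jobs(pool))   # the same helper ...
    with ProcessPoolExecutor() as pool:
        print(run_jobs(pool))   # ... works with either pool type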
We use run_pool to submit instances of the work function to the different types of pools. By default, a process pool uses one worker process per available CPU core, and a thread pool sizes its worker count based on the number of cores. There's a certain amount of overhead associated with creating pools, so don't overdo it. If you're going to be processing lots of jobs over a long period of time, create the pool first and don't dispose of it until you're done. With the Executor objects, you can use a context manager to create and dispose of pools (with/as).
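As a sketch of the two creation styles (the do_job function and jobs list are placeholders, not part of the example above):

from concurrent.futures import ThreadPoolExecutor

def do_job(item):
    # Placeholder for real per-item work
    return item * 2

jobs = range(100)

# Preferred: the with/as context manager creates the pool and, on exit,
# waits for pending work and shuts the pool down automatically
with ThreadPoolExecutor() as pool:
    results = list(pool.map(do_job, jobs))

# Manual equivalent: reuse one pool for many batches of jobs over time,
# then shut it down explicitly once you're done with it
pool = ThreadPoolExecutor()
first_batch = list(pool.map(do_job, jobs))
second_batch = list(pool.map(do_job, jobs))
pool.shutdown(wait=True)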
pool.map() is the function we use to subdivide the work. It takes a function and an iterable of arguments to apply on each invocation of that function, splits the work into chunks (you can specify the chunk size, but the default is usually fine), and feeds each chunk to a worker thread or process.
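For example, here is a sketch of passing a chunk size explicitly (the numbers are arbitrary). Note that chunksize only affects ProcessPoolExecutor, where it controls how many arguments are sent to a worker process in one batch; ThreadPoolExecutor accepts the argument but ignores it.

from concurrent.futures import ProcessPoolExecutor

def compute(n):
    # Stand-in for a CPU-bound task
    return n * n

if __name__ == '__main__':
    numbers = range(1, 1001)
    with ProcessPoolExecutor() as pool:
        # chunksize=50 sends the arguments to workers in batches of 50,
        # reducing inter-process communication overhead; 50 is arbitrary
        # here, and the default of 1 is often fine
        results = list(pool.map(compute, numbers, chunksize=50))
    print(results[:5])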