hcn.utils.parallelize¶
Introduction¶
The @parallelize decorator is a wrapper around dask. It transforms a function that processes a single item into one that processes a list of tasks in parallel across multiple CPU cores.
Usage¶
Wrap your function and specify the number of cores. When called, pass a list of tasks as the first argument.
Examples¶
In [1]:
from hcn.preamble import *
from hcn.utils.parallelize import parallelize
import time
In [2]:
# Use 4 physical cores (defaults to all cores if cores=0)
# If tasks > cores, workers will queue and pull new tasks as they finish.
@parallelize(cores=4)
def heavy_sim(n):
    time.sleep(1)  # Simulate expensive computation
    return n
In [3]:
tasks = [3, 6, 9, 12]
# --- Sequential expectation: ~4 seconds ---
# --- Parallel execution: ~1 second of compute, plus scheduler overhead ---
start = time.time()
results = heavy_sim(tasks)
print(f"Parallel execution time: {time.time() - start:.4f}s")
print(f"Results: {results}")
Parallel execution time: 1.9540s
Results: (3, 6, 9, 12)
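For intuition, the decorator's behavior can be approximated with the standard library. The sketch below is illustrative only: it substitutes `concurrent.futures.ThreadPoolExecutor` for dask (the real @parallelize dispatches to dask workers), and `parallelize_sketch` is a hypothetical name, not part of hcn.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from functools import wraps

def parallelize_sketch(cores=0):
    """Illustrative stand-in for @parallelize (the real decorator uses dask)."""
    def decorator(func):
        @wraps(func)
        def wrapper(tasks):
            workers = cores or None  # cores=0 -> let the pool choose a default
            with ThreadPoolExecutor(max_workers=workers) as pool:
                # map() preserves task order; extra tasks queue until a worker frees up
                return tuple(pool.map(func, tasks))
        return wrapper
    return decorator

@parallelize_sketch(cores=4)
def heavy_sim(n):
    time.sleep(0.1)  # Simulate expensive computation
    return n

print(heavy_sim([3, 6, 9, 12]))  # (3, 6, 9, 12)
```

Note the call signature matches the documented usage: the wrapped function receives a list of tasks and returns one result per task, in order.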