Module: parallel.controller.scheduler

The Python scheduler for rich scheduling.

The Pure ZMQ scheduler does not allow routing schemes other than LRU, nor does it check msg_id DAG dependencies. For those, a slightly slower Python Scheduler exists.
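
For example, the routing scheme is selected via the controller configuration; a minimal sketch (the scheme names correspond to the functions documented below):

    # ipcontroller_config.py
    c = get_config()
    c.TaskScheduler.scheme_name = 'weighted'  # e.g. 'lru', 'twobin', 'leastload'
    c.TaskScheduler.hwm = 1  # high-water mark: cap on unassigned tasks per engine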

Authors:

  • Min RK

2 Classes

class IPython.parallel.controller.scheduler.Job(msg_id, raw_msg, idents, msg, header, metadata, targets, after, follow, timeout)

Bases: object

Simple container for a job.

__init__(msg_id, raw_msg, idents, msg, header, metadata, targets, after, follow, timeout)

class IPython.parallel.controller.scheduler.TaskScheduler(**kwargs)

Bases: IPython.kernel.zmq.session.SessionFactory

Python TaskScheduler object.

This is the simplest object that supports msg_id based DAG dependencies. Only task msg_ids are checked, not msg_ids of jobs submitted via the MUX queue.
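
For context, these dependencies are normally attached on the client side; a minimal sketch, assuming a running controller and engines:

    from IPython.parallel import Client

    rc = Client()
    view = rc.load_balanced_view()

    def stage_one():
        return 'data'

    def stage_two():
        return 'depends on stage_one'

    ar = view.apply_async(stage_one)
    # stage_two is held by the TaskScheduler until stage_one's msg_id
    # appears among the finished tasks
    with view.temp_flags(after=[ar]):
        ar2 = view.apply_async(stage_two)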

add_job(idx)

Called after self.targets[idx] has just been assigned a job with the given header. Override in subclasses. The default ordering is simple LRU; the default loads are the number of outstanding jobs.

available_engines()

Return a list of available engine indices, based on the HWM (high-water mark).
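
Conceptually (a sketch, not the actual implementation; the attribute names are assumptions based on this page):

    def available_engines(self):
        # hwm == 0 is assumed to mean 'no limit'
        if not self.hwm:
            return list(range(len(self.targets)))
        # engines whose outstanding load is below the high-water mark
        return [i for i, load in enumerate(self.loads) if load < self.hwm]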

dispatch_notification(msg)

Dispatch register/unregister events.

dispatch_query_reply(msg)

Handle the reply to our initial connection request.

dispatch_result(raw_msg)

Dispatch method for result replies.

dispatch_submission(raw_msg)

Dispatch job submission to appropriate handlers.

fail_unreachable(msg_id, why=<class 'IPython.parallel.error.ImpossibleDependency'>)

A task has become unreachable; send a reply with an ImpossibleDependency error.

finish_job(idx)

Called after self.targets[idx] has just finished a job. Override in subclasses.

handle_result(idents, parent, raw_msg, success=True)

Handle a real task result, either success or failure.

handle_stranded_tasks(engine)

Deal with jobs resident on an engine that died.

handle_unmet_dependency(idents, parent)

Handle an unmet dependency.

job_timeout(job, timeout_id)

Callback for a job’s timeout.

The job may or may not have been run at this point.

maybe_run(job)

Check location dependencies, and run the job if they are met.

resume_receiving()

Resume accepting jobs.

save_unmet(job)

Save a message for later submission when its dependencies are met.

stop_receiving()

Stop accepting jobs while there are no engines. Leave them in the ZMQ queue.

submit_task(job, indices=None)

Submit a task to any of a subset of our targets.

update_graph(dep_id=None, success=True)

dep_id just finished. Update our dependency graph and submit any jobs that just became runnable.

Called with dep_id=None to update the entire graph for the HWM, without finishing a task.
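
In outline (a sketch with hypothetical attribute names, not the actual implementation):

    def update_graph(self, dep_id=None, success=True):
        # `graph` is assumed to map a msg_id to the msg_ids waiting on it;
        # `pending` is assumed to hold the jobs stored by save_unmet()
        for msg_id in self.graph.pop(dep_id, []):
            job = self.pending[msg_id]
            job.after.discard(dep_id)   # this dependency is now satisfied
            if not job.after:           # all 'after' dependencies met
                self.maybe_run(job)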

7 Functions

IPython.parallel.controller.scheduler.logged(f)

IPython.parallel.controller.scheduler.plainrandom(loads)

Plain random pick.
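
Roughly equivalent to:

    import random

    def plainrandom(loads):
        # pick any engine index, ignoring load
        return random.randint(0, len(loads) - 1)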

IPython.parallel.controller.scheduler.lru(loads)

Always pick the front of the line.

The content of loads is ignored.

Assumes LRU ordering of loads, with oldest first.
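
With that ordering, the function reduces to:

    def lru(loads):
        # the least recently used engine is always at index 0
        return 0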

IPython.parallel.controller.scheduler.twobin(loads)

Pick two at random; use the LRU of the two.

The content of loads is ignored.

Assumes LRU ordering of loads, with oldest first.
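
A sketch of the idea:

    import random

    def twobin(loads):
        # two independent random picks; with LRU ordering, the smaller
        # index is the LRU of the two
        n = len(loads)
        return min(random.randint(0, n - 1), random.randint(0, n - 1))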

IPython.parallel.controller.scheduler.weighted(loads)

Pick two at random using inverse load as weight.

Return the less loaded of the two.
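
A sketch of the idea (the actual implementation may differ; random.choices requires Python 3.6+):

    import random

    def weighted(loads):
        # weight each engine by inverse load, pick two, keep the lighter
        weights = [1.0 / (1e-6 + load) for load in loads]
        a, b = random.choices(range(len(loads)), weights=weights, k=2)
        return a if loads[a] <= loads[b] else b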

IPython.parallel.controller.scheduler.leastload(loads)

Always choose the lowest load.

If the lowest load occurs more than once, the first occurrence will be used. If loads has LRU ordering, this means the LRU of those with the lowest load is chosen.
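
Equivalent to:

    def leastload(loads):
        # list.index returns the first occurrence of the minimum
        return loads.index(min(loads))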

IPython.parallel.controller.scheduler.launch_scheduler(in_addr, out_addr, mon_addr, not_addr, reg_addr, config=None, logname='root', log_url=None, loglevel=10, identity='task', in_thread=False)
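
A usage sketch; the ZMQ URLs below are placeholders (real addresses are normally allocated by the controller):

    from IPython.parallel.controller.scheduler import launch_scheduler

    launch_scheduler(
        in_addr='tcp://127.0.0.1:5555',   # client-facing task socket
        out_addr='tcp://127.0.0.1:5556',  # engine-facing task socket
        mon_addr='tcp://127.0.0.1:5557',  # monitor
        not_addr='tcp://127.0.0.1:5558',  # registration notifications
        reg_addr='tcp://127.0.0.1:5559',  # connection to the Hub
        identity='task',
        in_thread=False,  # when False, this call runs the IOLoop and blocks
    )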