A Resource-Dependent Job Scheduling Algorithm Based on Graph Theory

I recently had an idea for a scheduling algorithm for resource-dependent jobs that views the problem not as a pure scheduling problem, but as a problem in graph theory. Right now I lack the mathematics and formalisms, but I think viewing job scheduling through the lens of graph theory is a pretty ingenious way of coming up with a solution. Let me explain.

Problem

Suppose that you have a set of jobs \(J\) that may each require two different types of resources: consumables \(C\) and equipment \(E\). Consumables are limited and cannot be replenished, while equipment merely has a finite amount of utilization available at any given time. In addition, jobs may have dependencies on other jobs, so \(J_m\), for instance, may require that \(J_n\) be completed before it executes. Each job may also have a deadline.

How can all jobs \(J\) be sequenced such that they are completed in as short a timespan as possible, meet all deadlines, and achieve optimal utilization of available equipment?

Viewing the Problem as a Graph Theory Problem

This problem may be viewed as a graph theory problem, but with some added twists.

Jobs

Jobs \(J\) can be represented as nodes, with their dependencies on each other represented as directed edges. Each of these nodes has a few properties:

The job’s required equipment utilization is represented graphically as the size of the node when drawing the graph; that is, a job that requires less equipment utilization is drawn smaller. In computation, these visualizations are irrelevant.
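To make this concrete, here’s a rough sketch in Python of what a job node might carry. The field names are mine and purely illustrative; the duration, deadline, dependency, and resource-requirement properties come from the problem statement above.

```python
from dataclasses import dataclass, field

@dataclass
class JobNode:
    """A job in the dependency graph (hypothetical field names)."""
    name: str
    duration: int                       # time to complete, known in advance
    deadline: int | None = None         # tick by which the job must finish, if any
    depends_on: list[str] = field(default_factory=list)     # solid edges to other jobs
    requires: dict[str, int] = field(default_factory=dict)  # resource -> base requirement
```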

Resources

Consumables \(C\) and equipment \(E\) can be represented as nodes holding the value of the remaining consumable quantity or available equipment utilization. Graphically, the greater the value, the larger the node. These values represent the maximum sum of all outgoing directed edge weights for a particular node. The difference between consumables and equipment is that a consumable permanently loses its quantity, while equipment has its utilization returned once a job is done using it.

Resources may also depend on jobs that must be completed before the resource can be used. These are for things like fetching the resource from a warehouse.
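A similar sketch for the two resource kinds (again, the field names are just my placeholders; the important distinction is that only equipment gets its utilization back):

```python
from dataclasses import dataclass, field

@dataclass
class ConsumableNode:
    """A consumable resource: its available quantity only ever goes down."""
    name: str
    available: int                       # caps the sum of outgoing edge weights
    unlocked_by: list[str] = field(default_factory=list)  # e.g. a warehouse-fetch job

@dataclass
class EquipmentNode:
    """Equipment: utilization comes back once a job finishes using it."""
    name: str
    capacity: int                        # total utilization when fully free
    available: int                       # current reservoir; can go negative when jammed
    unlocked_by: list[str] = field(default_factory=list)
```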

Edges

A job dependency is represented as a solid directed edge. A resource dependency is represented as a dashed directed edge with a weight of 0, signifying that there will eventually be flow between those two nodes but there is none currently. A resource utilization is represented as a solid directed edge with a weight equal to the consumption or utilization of the resource. Graphically, the edges are annotated as “used/required/effective” (e.g. 0/20/10). In computation, this information is stored by the nodes individually. “Effective” will be explained in the next section.

In the algorithm, all edges start with a weight of 0 in the initial state. The base required resource amount stays constant, while the effective resource requirement starts at its initial value and changes over time, as described below.
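One way to store the used/required/effective triple on each job-to-resource edge might be something like this (the names are mine, not a settled design):

```python
from dataclasses import dataclass

@dataclass
class ResourceEdge:
    """A job -> resource edge annotated as used/required/effective, e.g. 0/20/10."""
    job: str
    resource: str
    required: int        # base requirement; stays constant
    effective: float     # recalculated from the deadline every tick
    used: int = 0        # actual flow; 0 on a dashed (not-yet-started) edge
```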

That’s dense! Color coding and rearranging the diagram a bit helps keep things organized.

Deadlines and Effective Resource Requirements

SRTF (shortest remaining time first) scheduling is a powerful thread scheduling algorithm in operating systems, but it has two main problems: job completion times must be known in advance, and large jobs get bogged down by smaller jobs and may never complete if there is a constant stream of new small jobs. The former problem is usually dealt with by estimators like exponential averaging (or even Kalman filters), or by using a multi-level scheduling algorithm instead. Luckily for us, we already know job completion times, so this is a non-problem for our algorithm. However, we are still susceptible to large jobs getting bogged down by small jobs, and that matters because jobs may have deadlines to meet.

Enter effective resource requirements. Unlike the base resource requirement, the effective resource requirement reflects the urgency of a job’s completion based on its deadline and is not constant. This allows a job to “jam” a resource or set of resources to hold its place and assert its priority while the resources are waiting to replenish to sufficient levels for the job to start. The idea is that the effective resource requirement can make a job appear smaller than it actually is in order to gain higher priority over genuinely small jobs. The factor by which the big job “shrinks” should depend on the distance from the current time to the job’s deadline. That is to say:

\[\text{Effective Resource Requirement} = \text{Base Resource Requirement} \times P(T, \text{deadline})\]

where \(P\) is a priority factor that depends on the current time \(T\) and the job’s deadline.

The implementation of \(P(T, \text{deadline})\) hasn’t been concretely defined yet and is still totally open to discussion. One thing is for sure though: if \(J\) takes \(t\) time to complete, then at \(T = T_{deadline} - t\), \(P\) should be sufficiently small, either 0 or effectively 0. Note that this is not necessarily the first moment \(P\) reaches that value! \(P\) should approach a sufficiently small quantity (or 0) earlier than \(T = T_{deadline} - t\), since \(J\) may still need to wait for whatever jobs are currently using its required resources to complete – it merely jams the resources and claims top priority once they replenish.
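In code, the per-tick recalculation might look roughly like this. I pass the job’s duration through to \(P\) as well, since the constraint above is stated in terms of it; the signature is just my assumption.

```python
from typing import Callable

# P takes (current tick, deadline, job duration) and returns a factor in [0, 1].
PriorityFn = Callable[[int, int, int], float]

def effective_requirement(base: int, now: int, deadline: int | None,
                          duration: int, priority: PriorityFn) -> float:
    """Effective Resource Requirement = Base Resource Requirement * P(T, deadline)."""
    if deadline is None:
        return base            # no deadline, no urgency: the job never "shrinks"
    return base * priority(now, deadline, duration)
```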

There are a few ways \(P\) could theoretically be implemented (one possible form is sketched below), but whatever form it takes:

\(P\) should not be \(> 1\); if \(P > 1\), then assign \(P = 1\).
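Purely as an illustration of one possible form, and not a settled design, here’s a clamped linear ramp that reaches 0 a bit before \(T = T_{deadline} - t\); the `margin` and `horizon` parameters are assumptions of mine.

```python
def linear_priority(now: int, deadline: int, duration: int,
                    margin: int = 2, horizon: int = 10) -> float:
    """A hypothetical P: 1 when the deadline is comfortably far away, ramping down
    to 0 by (deadline - duration - margin), i.e. a little *before* the job's
    latest possible start time, so it can jam resources while they replenish."""
    latest_start = deadline - duration - margin
    p = (latest_start - now) / horizon
    return max(0.0, min(1.0, p))       # clamp: P must never exceed 1
```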

Job Scheduling

The algorithm advances in constant time intervals; each point in time is called a “tick.” At each tick, recalculate all effective resource requirements. Then, for each job whose dependency jobs have completed, check whether Effective Resource Requirement \(\le\) Resources Available for every resource the job requires. If all of the job’s resource requirements are met, subtract the actual resource cost from each resource, add it to the corresponding edge’s weight, and start the job. When the job is complete, if a resource was equipment, add the weight of the edge back to the equipment resource node; then delete the job and its edges. Continue until all jobs have been completed.
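Pulling the earlier sketches together, the tick loop might look roughly like this. It leans on the hypothetical `JobNode`, `EquipmentNode`, and `effective_requirement` sketches above, and it deliberately glosses over the debt bookkeeping described in the next paragraph.

```python
def run_schedule(jobs, resources, priority, max_ticks=10_000):
    """Tick-based driver (sketch). jobs and resources are dicts keyed by name."""
    now, started, finish_at, done = 0, {}, {}, set()
    while (jobs or finish_at) and now < max_ticks:
        # Finish jobs whose time is up; equipment gets its utilization back.
        for name in [n for n, t in finish_at.items() if t <= now]:
            for res, amount in started[name].items():
                if isinstance(resources[res], EquipmentNode):
                    resources[res].available += amount
            done.add(name)
            del finish_at[name]
        # Recalculate effective requirements and start every eligible job.
        for name, job in list(jobs.items()):
            if not all(dep in done for dep in job.depends_on):
                continue                       # dependency edges not satisfied yet
            if all(effective_requirement(req, now, job.deadline, job.duration, priority)
                   <= resources[r].available for r, req in job.requires.items()):
                for r, req in job.requires.items():
                    resources[r].available -= req   # actual cost; may jam (go negative)
                started[name] = dict(job.requires)
                finish_at[name] = now + job.duration
                del jobs[name]
        now += 1
```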

Note that an equipment’s reservoir may be negative, as is the case with jobs that jam equipment to establish priority. Something special must happen here! The equipment incurs a “debt”: both its reservoir and the edge’s weight go negative, since the actual resource cost is subtracted from their current values. When utilization gets returned to the equipment by other jobs finishing, it gets added to both the equipment’s utilization reservoir and the edge’s weight. When the edge’s weight climbs from negative back to 0, the actual resource cost gets automatically added to it and the job can start.
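Here’s a sketch of my reading of that rule, using the hypothetical `EquipmentNode` and `ResourceEdge` types from earlier:

```python
def jam(equipment, edge) -> None:
    """A job claims equipment it can't fully have yet: the reservoir and the
    edge weight both go negative by the actual resource cost (the 'debt')."""
    equipment.available -= edge.required
    edge.used -= edge.required

def replenish(equipment, edge, returned: int) -> None:
    """Utilization returned by other jobs pays the debt down on both sides."""
    equipment.available += returned
    if edge.used < 0:
        edge.used += returned
        if edge.used >= 0:
            # Debt cleared: the actual resource cost is added to the edge,
            # i.e. the jamming job now properly holds the equipment.
            edge.used = edge.required
```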

A Bit of a Jam

One problem this algorithm still has arises when multiple jobs with similar deadlines and large requirements compete for the same resources. For example, suppose that \(J_1\) requires 200 units of \(R_1\), its effective resource requirement is 0 at \(T=0\), and it will complete at \(T=5\) (its deadline). Suppose that \(R_1\) has a maximum of 300 units of utilization and has all 300 available at \(T=0\). Now suppose we have \(J_2\), which requires 150 units of \(R_1\) and whose effective resource requirement is 0 at \(T=2\). It requires \(t=5\) time to complete, so the latest it can start and still meet its deadline of \(T=7\) is \(T=2\). We run into a problem here: although \(J_2\) can jam \(R_1\) to hold its place and prevent other jobs from using \(R_1\) after \(J_1\) finishes at \(T=5\), \(J_2\) won’t be able to complete by its deadline anyway!
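To see why jamming alone can’t save \(J_2\), here’s the arithmetic from the example as a tiny check (the helper is just illustrative):

```python
def can_meet_deadline(resource_free_at: int, duration: int, deadline: int) -> bool:
    """A waiting job can still make its deadline only if the resource frees up
    no later than (deadline - duration), its latest possible start time."""
    return resource_free_at + duration <= deadline

# J_1 holds 200 of R_1's 300 units until T=5, leaving only 100 for J_2 before then.
# J_2 needs 150 units for t=5 ticks and must finish by T=7, so its latest start is T=2.
print(can_meet_deadline(resource_free_at=5, duration=5, deadline=7))  # False
```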

I haven’t thought of a solution yet, but I’d imagine we’d need some new way of calculating effective resource requirements that takes every job and every deadline into account.

Conclusion

These are just really rough ramblings based on an idea I had, and I definitely can’t prove any correctness or maths right now! I know that most of this probably doesn’t make sense currently. Hopefully I can revisit this at a later time to formalize the algorithm. Until then, this is just food for thought.

Happy hacking!