Is there a perfect scheduler that is non-optimal in the Big-O sense but optimal if you’re looking at maximizing hardware utilization? In other words, a scheduler that takes a long time to determine the CPU utilization for each process, but provides optimal total CPU utilization? I realize it wouldn’t be ideal, since we’d essentially have these “sudden stops” while it recalculates the schedule. I’m just more interested in the theory.
If you have a fixed collection of processes to run on a single processor and unlimited time to schedule them in, you can always brute-force all permutations of the processes and then pick whichever permutation maximizes or minimizes whatever property you like. The problem with this approach is that it has awful time complexity: n processes means n! orderings to evaluate.
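For concreteness, here’s a minimal sketch of that brute force in Python. The objective here (minimizing the sum of completion times) is just one hypothetical choice; any metric over the schedule could be swapped in:

```python
from itertools import permutations

def brute_force_schedule(burst_times):
    """Try every ordering of the jobs on one CPU and keep whichever
    minimizes the total (sum of) completion times. There are n!
    permutations, so this is hopeless beyond a handful of jobs."""
    best_order, best_cost = None, float("inf")
    for order in permutations(range(len(burst_times))):
        t, cost = 0, 0
        for job in order:
            t += burst_times[job]   # this job finishes at time t
            cost += t               # accumulate completion times
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order, best_cost

print(brute_force_schedule([6, 3, 8, 1]))  # -> ((3, 1, 0, 2), 33)
```

(For this particular objective the brute force just rediscovers shortest-job-first, but the point is it works for any objective you can score, at factorial cost.)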
Edit: There are probably other subtle issues that can arise too, like I/O interrupts and other unpredictable events, fwiw.
How would you deal with iowaits in a system like that? I can easily burn 100% of the CPU running poll() in a loop, but that’s not useful work…
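To make that concrete: a zero-timeout poll() loop counts as 100% utilization while accomplishing nothing, whereas a blocking poll() sleeps in the kernel and frees the CPU. A quick Python sketch (POSIX-only; the socket pair is just a stand-in for any fd that never becomes readable):

```python
import select, socket, time

rd, wr = socket.socketpair()   # rd never becomes readable; we never write
poller = select.poll()
poller.register(rd, select.POLLIN)

# Busy poll: timeout 0 returns immediately, so this loop spins flat out.
start = time.process_time()
deadline = time.monotonic() + 1.0
while time.monotonic() < deadline:
    poller.poll(0)
print(f"busy poll: {time.process_time() - start:.2f}s CPU in 1s wall time")

# Blocking poll: the process sleeps in the kernel (iowait), using ~no CPU.
start = time.process_time()
poller.poll(1000)  # timeout in milliseconds
print(f"blocking poll: {time.process_time() - start:.2f}s CPU in 1s wall time")
```

A utilization-maximizing scheduler would have to distinguish the first case from useful work somehow, or it would happily reward the spin loop.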
¯\_(ツ)_/¯
I guess that’s why I asked. I’m just curious if it’s even possible.