From: Vladimir Vassilevsky on 17 Apr 2010 15:30

D Yuniskis wrote:
> Hi Vladimir,
>
> Vladimir Vassilevsky wrote:
>
>> George Neuner wrote:
>>
>>> As I said before, I know of no available RTOS that actually implements
>>> deadline scheduling. It isn't simple, it is a global minimization
>>> function on all the simultaneous time and resource constraints ... and
>>> nobody does it. Nobody ever did it except maybe as a research
>>> project. At best, RTOSes provide priority preemption and a promise of
>>> maximum scheduling delay. If the application fits the predictability
>>> model that this presents, that's fine - but it doesn't work for all
>>> applications.
>>
>> A sophisticated scheduling algorithm is not cheap to compute. While
>> optimizing the order of the tasks, it consumes CPU resources by
>> itself. So there is not much that could be gained compared to
>> simplistic solutions like rate-monotonic; why bother?
>
> Rate Monotonic doesn't guarantee the same levels of processor
> utilization. It is only really useful with static priorities.
> I.e., devices with fixed workloads that can be analyzed at
> design time, etc. If you can get by with this, then why
> use anything more complicated?
>
> I have one MTOS that deals with things "brute force" -- context
> switches are *almost* free... so, I invoke the scheduler
> VERY often (e.g., more than 1KHz). Everything can run
> at essentially the same "priority" and still be guaranteed
> timely service (though you have to keep everything "short
> and sweet" because your timeslice *will* be very short!).
> I write code in that environment very differently than I
> do in "fleshier" ones -- e.g., I may literally spin on
> a condition variable instead of having the OS add support
> for an event notification.

This is where I started. In addition to what you described, in my first MTOS the tasks could call the scheduler if they had nothing to do, so they didn't have to consume the entire time slice.
So, the scheduler loop was spinning through all tasks at a variable rate (from ~10ms to ~10us per full revolution). If none of the tasks had anything to do, I put the system to sleep until an interrupt.

This approach is very simple. It is also free from priority resource deadlock problems, as every task gets a time slice anyway.

> The trouble comes when you have to deal with more and varied
> "tasks". Or, with asynchronous activations.

I am already three major versions past that first round robin thingy :-) Round robin is supported as well as event driven priority scheduling.

> Things that
> can't easily be sorted out "on paper" ahead of time (though
> with reservations, some of this can be alleviated).
>
>> In practice, there are few realtime tasks and many tasks that can
>> wait; the OS should not add unnecessary overhead to every task. In
>> other words, if you need this particular task to be realtime, you
>> should take care of it, not the OS.
>
> <frown> The OS is there to make the developer's job *easier*
> by managing resources for him. If your OS is getting in the
> way, then get a better OS! :>

If you make a fool pray to God, he will break his forehead.

> You can always devolve to the "run everything time critical
> in an ISR and leave the rest in the background..." :-(

I thought for a while about complete unification of interrupts and tasks, i.e., having them under the same abstraction. Unfortunately, that implies a lot of OS overhead.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
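Vladimir's variable-rate round-robin loop is simple enough to sketch. The following C is an illustration only, not his actual code; the task-table layout, the function names, and the `sleep_until_interrupt()` hook are all assumptions made for the sketch. Each task does one small piece of work per call and "yields" simply by returning; a revolution in which nobody had work puts the system to sleep:

```c
#include <stdbool.h>
#include <stddef.h>

#define NUM_TASKS 3

/* Each task does one small piece of work per call and returns true
 * if it actually had something to do (false means it just yielded). */
typedef bool (*task_fn)(void);

static task_fn task_table[NUM_TASKS];

/* Platform hook: halt the CPU until the next interrupt.  Stubbed
 * here; on real hardware this would map to e.g. a WFI instruction. */
static void sleep_until_interrupt(void) { }

/* One full "revolution": give every registered task a turn and
 * report whether any of them had work. */
bool scheduler_revolution(void)
{
    bool any_work = false;
    for (size_t i = 0; i < NUM_TASKS; i++) {
        if (task_table[i] != NULL && task_table[i]())
            any_work = true;
    }
    return any_work;
}

/* The scheduler proper: spin through the tasks at whatever rate
 * their early yields allow; if a whole revolution found nothing
 * to do, sleep until an interrupt wakes something up. */
void scheduler_run(void)
{
    for (;;) {
        if (!scheduler_revolution())
            sleep_until_interrupt();
    }
}

/* Demo task for the sketch: pretends to have work for three calls. */
static int demo_budget = 3;
static bool demo_task(void)
{
    if (demo_budget > 0) {
        demo_budget--;
        return true;
    }
    return false;
}
```

The variable revolution rate falls out naturally: the fewer tasks that have real work, the faster the loop completes a revolution, until it reaches the sleep state.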
From: D Yuniskis on 17 Apr 2010 16:49

Hi Vladimir,

Vladimir Vassilevsky wrote:
> D Yuniskis wrote:
>> Vladimir Vassilevsky wrote:
>>
>>> George Neuner wrote:
>>>
>>>> As I said before, I know of no available RTOS that actually implements
>>>> deadline scheduling. It isn't simple, it is a global minimization
>>>> function on all the simultaneous time and resource constraints ... and
>>>> nobody does it. Nobody ever did it except maybe as a research
>>>> project. At best, RTOSes provide priority preemption and a promise of
>>>> maximum scheduling delay. If the application fits the predictability
>>>> model that this presents, that's fine - but it doesn't work for all
>>>> applications.
>>>
>>> A sophisticated scheduling algorithm is not cheap to compute. While
>>> optimizing the order of the tasks, it consumes CPU resources by
>>> itself. So there is not much that could be gained compared to
>>> simplistic solutions like rate-monotonic; why bother?
>>
>> Rate Monotonic doesn't guarantee the same levels of processor
>> utilization. It is only really useful with static priorities.
>> I.e., devices with fixed workloads that can be analyzed at
>> design time, etc. If you can get by with this, then why
>> use anything more complicated?
>>
>> I have one MTOS that deals with things "brute force" -- context
>> switches are *almost* free... so, I invoke the scheduler
>> VERY often (e.g., more than 1KHz). Everything can run
>> at essentially the same "priority" and still be guaranteed
>> timely service (though you have to keep everything "short
>> and sweet" because your timeslice *will* be very short!).
>> I write code in that environment very differently than I
>> do in "fleshier" ones -- e.g., I may literally spin on
>> a condition variable instead of having the OS add support
>> for an event notification.
>
> This is where I started.
> In addition to what you described, in my first MTOS the tasks could call
> the scheduler if they had nothing to do, so they didn't have to consume the
> entire time slice. So, the scheduler loop was spinning through all tasks
> at a variable rate (from ~10ms to ~10us per full revolution). If
> none of the tasks had anything to do, I put the system to sleep until an
> interrupt.

Yes. You can even craft executives that save *no* "task" state (other than the PC) -- but that puts other constraints on the way you write your application.

> This approach is very simple. It is also free from priority resource
> deadlock problems, as every task gets a time slice anyway.

Well, you can still have deadlock. You just don't have "priority inversion" as everything is "flat".

>> The trouble comes when you have to deal with more and varied
>> "tasks". Or, with asynchronous activations.
>
> I am already three major versions past that first round robin thingy :-)
> Round robin is supported as well as event driven priority scheduling.

>> Things that
>> can't easily be sorted out "on paper" ahead of time (though
>> with reservations, some of this can be alleviated).
>>
>>> In practice, there are few realtime tasks and many tasks that can
>>> wait; the OS should not add unnecessary overhead to every task.
>>> In other words, if you need this particular task to be realtime,
>>> you should take care of it, not the OS.
>>
>> <frown> The OS is there to make the developer's job *easier*
>> by managing resources for him. If your OS is getting in the
>> way, then get a better OS! :>
>
> If you make a fool pray to God, he will break his forehead.

Hmmm... I guess I don't understand the reference :<

>> You can always devolve to the "run everything time critical
>> in an ISR and leave the rest in the background..." :-(
>
> I thought for a while about complete unification of interrupts and
> tasks, i.e., having them under the same abstraction. Unfortunately, that
> implies a lot of OS overhead.
I treat the "application layer" AS IF it was just N layers of ISRs competing and coexisting. E.g., I try to design APIs so there aren't arbitrary "can't be called from an ISR" restrictions. Sometimes this affects the types of objects that can be supported (as you need to keep the interfaces "light").

The problem I always see with traditional "priority based" scheduling is that there are no hard and fast rules for assigning priorities. Instead, folks tend to discover belatedly that the structure of their system is "wrong" and try to *make* it work by tweaking priorities. And, this turns into a "tipping canoe" exercise -- you move one way and the canoe starts to tip... so you move the other way and it tips a different way, etc. I.e., there is no "engineering" involved in picking the priority levels.

It's always amusing to see folks evaluate the *number* of priority levels supported in an OS AS IF "more is better". (Then 50,000 levels has *got* to be better than *20*!)

IME, an MTOS gives its biggest bang for the buck by helping you decompose an application into independent "tasks" (using the meaning: "jobs") so you can focus on one at a time without having to juggle all of them at once. Beyond that, it's a question of how much you want to migrate application requirements into the OS's domain instead of the application's.
From: Paul Keinanen on 18 Apr 2010 01:54

On Sat, 17 Apr 2010 13:49:53 -0700, D Yuniskis <not.going.to.be(a)seen.com> wrote:

>Vladimir Vassilevsky wrote:
>> I thought for a while about complete unification of interrupts and
>> tasks. I.e. having them under the same abstraction. Unfortunately, that
>> implies a lot of OS overhead.
>
>I treat the "application layer" AS IF it was just N layers
>of ISRs competing and coexisting. E.g., I try to design
>APIs so there aren't arbitrary "can't be called from an ISR"
>restrictions. Sometimes this affects the types of
>objects that can be supported (as you need to keep the
>interfaces "light").
>
>The problem I always see with traditional "priority based"
>scheduling is there are no hard and fast rules for
>assigning priorities.

For decades I have used the following guidelines:

1. Each real-time task should spend most of its time idle, waiting for an event to process (waiting more than 99% of the time for the highest-priority tasks).

2. Do as little as possible in the highest-priority tasks, i.e., lower-priority tasks can execute longer at a time than the high-priority tasks.

3. At the bottom of the priority list, the null task (which normally consumes the unused CPU time in an infinite loop) can be replaced by e.g. some round-robin scheduler running non-RT tasks such as user interfaces etc.

>Instead, folks tend to discover
>belatedly that the structure of their system is "wrong"
>and try to *make* it work by tweaking priorities.

Inexperienced people seem to increase the priority of those tasks that do not respond fast enough.

>And,
>this turns into a "tipping canoe" exercise -- you move one
>way and the canoe starts to tip... so you move the other
>way and it tips a different way, etc. I.e., there is
>no "engineering" involved in picking the priority levels.
When trying to fix messed-up RT systems that others have made, the first thing is to look for tasks for which the priority can be _lowered_ without causing significant performance loss for that task. When the less critical tasks' priorities have been moved lower, they are no longer competing with the most time-critical tasks.

Then it is time to look at those tasks running a long time at each activation (but still needing high priority to avoid e.g. Rx buffer overrun) and move those parts to a lower-priority task (which is allowed to execute longer at a time). For instance, the received bytes are assembled into a complete message frame in a high-priority task, while any application-level message data processing is moved to a lower-priority task (or even to the null task).

>It's always amusing to see folks evaluate the *number* of
>priority levels supported in an OS AS IF "more is better".
>(Then 50,000 levels has *got* to be better than *20*!)

A large number of priorities is nice, since you can assign a separate priority to each task. Of course, in most cases tasks with similar priorities could operate as well if the internal order is changed. A moderate number of priority levels, 8-16, is usually enough _provided_ that you have full control of the priority levels.

The situation gets complicated if the OS is using some fixed priorities of its own. Assume that the OS has a task receiving messages and, at a priority level immediately below it, another OS task that consumes some of these messages addressed to it at low urgency. Assume that you want to process, at high urgency, some messages received and addressed to you by the high-priority task. Your task should run below the producer task, but above the low-urgency OS consumer task. In this example, there are no free priority levels between the two OS tasks, so you end up putting your consumer task at the same priority level as the OS consumer task, fighting unnecessarily for CPU time.
Please note, I have carefully used the words "addressed to", since otherwise, typically in such discussions, someone will sooner or later claim that I am trying to use priorities to affect which task is processing which messages :-).

>IME, an MTOS gives its biggest bang for the buck by helping
>you decompose an application into independent "tasks"
>(using the meaning: "jobs") so you can focus on one
>at a time without having to juggle all of them at once.

Absolutely, the OS should make it easy to isolate time-critical and non-time-critical parts into separate tasks.

>Beyond that, it's a question of how much you want to
>migrate application requirements into the OS's domain
>instead of the application's.

The requirements for various RT systems vary so greatly that it is very hard for the OS designer to create a one-size-fits-all solution. In the worst case, it may even harm the total RT system by forcing, or at least tempting (e.g., by providing an easy-to-use API), the use of some solution that is not optimal for a particular RT system.
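Paul's split between a high-priority frame assembler and a lower-priority message consumer is the classic single-producer/single-consumer handoff. The following C sketch is illustrative only: the names, the newline-delimited framing, and the drop-on-full policy are assumptions, not anything from a particular OS. Only the producer writes the head index and only the consumer writes the tail index, which is why no lock is needed on a single core:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define MAX_FRAME 64
#define QUEUE_LEN 8

typedef struct {
    char   data[MAX_FRAME];
    size_t len;
} frame_t;

/* Single-producer/single-consumer ring buffer of completed frames. */
static frame_t         frame_queue[QUEUE_LEN];
static volatile size_t q_head, q_tail;

/* Partially assembled frame, owned by the high-priority side. */
static char   partial[MAX_FRAME];
static size_t partial_len;

/* High-priority side (e.g. called per byte from the UART ISR or a
 * high-priority Rx task): assemble bytes into '\n'-terminated
 * frames and hand complete frames to the queue. */
void rx_byte(char c)
{
    if (c == '\n') {
        size_t next = (q_head + 1) % QUEUE_LEN;
        if (next != q_tail) {               /* drop the frame if full */
            memcpy(frame_queue[q_head].data, partial, partial_len);
            frame_queue[q_head].len = partial_len;
            q_head = next;
        }
        partial_len = 0;
    } else if (partial_len < MAX_FRAME) {
        partial[partial_len++] = c;
    }
}

/* Low-priority side: pull one complete frame, if any is ready. */
bool next_frame(frame_t *out)
{
    if (q_tail == q_head)
        return false;
    *out = frame_queue[q_tail];
    q_tail = (q_tail + 1) % QUEUE_LEN;
    return true;
}
```

Anything beyond strict one-producer/one-consumer on one core (multiple writers, SMP) needs a real queue primitive from the OS rather than bare volatile indices.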
From: George Neuner on 18 Apr 2010 01:56

On Sat, 17 Apr 2010 11:52:12 -0700, D Yuniskis <not.going.to.be(a)seen.com> wrote:

>I added memory as it was an example of a *resource* that needs
>to be "remembered" just like other parts of the "task's" state.
>Your comment:
>
> "You don't have to remember much beyond what an interrupted
> task was doing - its register context - and information
> needed for scheduling: priority, trigger events, time stamps,
> etc."
>
>enumerated the sorts of things that have to be "remembered" in
>a task's state. "Its register context", "what it was doing",
>"[scheduling information]", etc. I was contending that there
>are lots of *other* things that have to be "remembered" for
>the task (i.e., everything that fits in the [process] container
>that I alluded to).

My comment was in the context of things that affect scheduling - the subject of this "thread" - which memory typically does not. I covered the resources that affect scheduling under the term "trigger events".

>> but Ada's task scheduler isn't particularly
>> sophisticated to begin with
>
>Doesn't have to be sophisticated -- just predictable. :>

Which it isn't on a multiprocessor - which extends to threaded and multi-core cpus. Reliance on Ada's scheduler to prevent synchronization problems requires a single threaded cpu.

>> In a real deadline scheduling system, when a task pends on an event
>> the scheduler needs to know:
>>
>> - the actual deadline. This can take 2 forms: a maximum delay
>> before scheduling a run-to-completion task, or for a time sliced
>> task the time required to compute the response and the maximum
>> cutoff time for producing the response.
>> - whether the deadline is "hard" or "soft". It is an error to miss
>> a "hard" deadline. A "soft" deadline is a best effort response.
>> - how to deal with a deadline which is missed: e.g., ignore it,
>> restart the task, resume the task, etc.
>> - the priority of the task
>> - any ad hoc
>> other things the scheduler needs to know
>>
>> A deadline scheduler has to guarantee that all "hard" deadlines are
>> met (for suitable hardware) and make best effort at meeting all the
>> "soft" deadlines. In a time slicing system the scheduler has to
>> decide which ready tasks can safely be delayed another quantum in
>> order to pick one to run.
>
>The scheduler doesn't have to "guarantee" that these are met.
>That's the job of the system designer. E.g., if you try to
>schedule for 110% utilization, the scheduler *will* fail.
>
>Some systems *can't* meet their deadlines 100% of the time.
>This is especially true for systems with asynchronous
>processing needs (event driven).
>
>If you are designing a system that operates in steady state
>all the time, then scheduling is much more predictable. E.g.,
>RMA is a win, here -- though with much lower processor
>utilization guarantees.
>
>EDF is rather easy to implement and, in those systems where
>"tasks" (in my "abstract" role) are in harmonious accord
>with each other (e.g., you never exceed 100% utilization
>even on the short term), can be quite effective.
>
>> As I said before, I know of no available RTOS that actually implements
>> deadline scheduling. It isn't simple, it is a global minimization
>> function on all the simultaneous time and resource constraints ... and
>
>You're trying to describe something that "can't work" -- with
>unconstrained task requirements. E.g., I have three "tasks"
>each needing 100 time units to complete and each must be
>completed within 110 time units. "Coincidence" has caused
>them to be started (asynchronous events!) at the same instant.
>No scheduling algorithm "works" in overload.
>
>OTOH, EDF *will* schedule all three of them properly
>IN THE PRESENCE OF OTHER COMPETING TASKS *if* they
>never exceed 100% utilization. (This is a relatively simple
>exercise to do in your head -- and doesn't require lots of
>analysis.)
>It's analogous to a family getting ready
>for work/school in the morning: "Who needs to be OUT THE DOOR
>*first*? OK, *he* gets first crack at the bathroom (shared
>resource). Once he's done (his deadline is met), "Who needs
>to be out the door first?", again. I.e., if you don't have to
>be "out the door" until afternoon, then you just aren't going
>to be using the bathroom in the morning! :>
>
>As long as the work to be done *can* be done in the time
>available, EDF will make it happen. If it *can't* (i.e.,
>the system is overloaded) then you're trying to have the
>scheduler decide who *should* break and who *shouldn't*.
>
>> nobody does it. Nobody ever did it except maybe as a research
>
>Hmmm... you are aware of every RTOS in every product released?
>:> Surely you are just commenting on commercial offerings?
>
>E.g., I rely on CoW semantics in my distributed memory
>system. I don't think many (any) commercial RTOSes support
>this -- yet, clearly *someone* has "done it" (me! :> ).
>
>> project. At best, RTOSes provide priority preemption and a promise of
>> maximum scheduling delay. If the application fits the predictability
>> model that this presents, that's fine - but it doesn't work for all
>> applications.
>
>And, if the system is overloaded, how is this "better"/worse?

I'm not disagreeing with anything you've said. You just forcefully made MY point that no available system actually does deadline scheduling.

And, for the record, I'm not describing something impossible - only something which is VERY HARD. I can point you to reading on the subject if you're interested.

George
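Don's "who needs to be out the door first" description of EDF is easy to make concrete. Below is a toy single-processor EDF simulator in C; all names are illustrative, and the model is deliberately minimal (one-shot tasks all released at time zero, unit-time quanta). At each quantum it services whichever unfinished task has the earliest absolute deadline, and it counts deadline misses:

```c
#include <stddef.h>

typedef struct {
    long remaining;   /* work left, in time units */
    long deadline;    /* absolute deadline, in time units */
} edf_task_t;

/* Earliest-Deadline-First choice: among tasks with work remaining,
 * pick the one whose absolute deadline is soonest.  Returns the
 * task index, or -1 when everything is finished. */
int edf_pick(const edf_task_t *tasks, size_t n)
{
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (tasks[i].remaining > 0 &&
            (best < 0 || tasks[i].deadline < tasks[best].deadline))
            best = (int)i;
    }
    return best;
}

/* Run the EDF simulation one unit-time quantum at a time; all tasks
 * are released at t = 0.  Returns the number of deadline misses. */
int edf_simulate(edf_task_t *tasks, size_t n, long horizon)
{
    int misses = 0;
    for (long t = 0; t < horizon; t++) {
        int i = edf_pick(tasks, n);
        if (i < 0)
            break;                      /* all work done */
        tasks[i].remaining--;
        /* a task misses when its deadline passes with work left */
        for (size_t k = 0; k < n; k++)
            if (tasks[k].remaining > 0 && tasks[k].deadline == t + 1)
                misses++;
    }
    return misses;
}
```

Feeding it Don's overload example (three tasks of 100 units, all due at 110) reports misses, as no scheduler can avoid; a task set that stays under 100% utilization completes cleanly.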
From: George Neuner on 18 Apr 2010 02:35

On Sun, 18 Apr 2010 01:56:10 -0400, George Neuner <gneuner2(a)comcast.net> wrote:

Re: deadline scheduling

>And, for the record, I'm not describing something impossible - only
>something which is VERY HARD. I can point you to reading on the
>subject if you're interested.

I realized right after I sent this that some might consider this statement foolishness because, as Don said previously, there are situations where no possible schedule works. A scheduler can only work within the bounds of the resources it has... If those are inadequate for the problem specification, then that is not the fault of the scheduler.

However, a guarantee scheduler - of which "deadline" is just one form - is an active agent in the system which serves to double check the specification. If there is an irresolvable schedule, that may reveal a flaw, either in the spec or in the implementation. If, as Don said:

>>... that "can't work" -- with unconstrained task requirements.
>>E.g., I have three "tasks" each needing 100 time units to complete
>>and each must be completed within 110 time units. "Coincidence"
>>has caused them to be started (asynchronous events!) at the same
>>instant. No scheduling algorithm "works" in overload.

The phrase "unconstrained task requirements" is the key here. What Don describes above is a specification error.

What I'm getting at here is that we are not discussing scheduling in general purpose operating systems with an unbounded mix of user tasks. Rather, we are discussing scheduling in tightly crafted, coherent systems having fairly rigid design specifications. Those are the constraints that allow guarantee scheduling to work.

EDF is a very simple form of guarantee scheduling. There are more complex forms which take into account more parameters.

George
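George's point that a guarantee scheduler doubles as a check on the specification can be made concrete for the simplest case. For one-shot jobs all released at the same instant on one processor, preemptive EDF meets every deadline exactly when, taking jobs in deadline order, the cumulative work never exceeds each deadline, so an admission test is only a few lines. This sketch uses hypothetical names and covers only that restricted case:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    long work;       /* execution time needed */
    long deadline;   /* deadline relative to the common release */
} job_t;

static int by_deadline(const void *a, const void *b)
{
    const job_t *x = a, *y = b;
    return (x->deadline > y->deadline) - (x->deadline < y->deadline);
}

/* Admission test for one-shot jobs released simultaneously on one
 * processor: EDF meets every deadline iff, taking jobs in deadline
 * order, cumulative demand never exceeds the deadline.  Sorts the
 * job array in place. */
bool edf_feasible(job_t *jobs, size_t n)
{
    qsort(jobs, n, sizeof *jobs, by_deadline);
    long demand = 0;
    for (size_t i = 0; i < n; i++) {
        demand += jobs[i].work;
        if (demand > jobs[i].deadline)
            return false;    /* spec error: this job set cannot fit */
    }
    return true;
}
```

Don's three 100-unit jobs with a 110-unit deadline fail this test immediately, flagging the specification error before anything runs.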