From: David Brown on 9 Feb 2007 07:28

John Larkin wrote:
> On Thu, 08 Feb 2007 09:37:52 +0100, David Brown
> <david(a)westcontrol.removethisbit.com> wrote:
>
>> John Larkin wrote:
>>> On Mon, 05 Feb 2007 09:45:44 +0100, David Brown
>>> <david(a)westcontrol.removethisbit.com> wrote:
>>>
>>>> John Larkin wrote:
>>>>> On Fri, 02 Feb 2007 15:13:53 GMT, Vladimir Vassilevsky
>>>>> <antispam_bogus(a)hotmail.com> wrote:
>>>>>
>>>>>> Jan Panteltje wrote:
>>>>>>
>>>>>>>> It is better to stay on the earth rather than fall on somebody's head.
>>>>>>>> Masking the errors is the worst practice.
>>>>>>> Yes, that is true.
>>>>>>> But one day you have to try to fly... Errors will show you where to improve.
>>>>>> Deadlines. That's another reason for the software to be far from
>>>>>> perfect.
>>>>> So, you will actually release software that you know is buggy, or that
>>>>> you haven't tested, because of some schedule? Please tell me who you
>>>>> work for, so I can be sure never to buy their stuff.
>>>>>
>>>> John, this is comp.arch.embedded - the answer is *always* "it depends".
>>>> For some products, it is vital to hit the schedule - even if there are
>>>> known bugs or problems. Perhaps you ship the hardware now and upgrade
>>>> the software later - perhaps you ship the whole thing even with its
>>>> outstanding problems. For other products, you have to set the highest
>>>> possible standards, and quality cannot be lowered for any purpose. I
>>>> have no idea what sort of systems VV works with - they could well be of
>>>> the sort where issues such as cost and timing are more important than
>>>> quality and reliability.
>>> There are two methodologies to consider:
>>>
>>> 1. Write a lot of code fast. Once you get a clean compile, start
>>> testing it on the hardware and look for bugs. Keep fixing bugs until
>>> it's time that you have to ship. Intend to find the rest of the bugs
>>> later, which usually means when enough customers complain.
>>>
>>> 2. Write and comment the code carefully. Read through it carefully to
>>> look for bugs, interactions, optimizations. Fix or entirely rewrite
>>> anything that doesn't look right. Figure more review time than coding
>>> time. NOW fire it up on the hardware and test it.
>>>
>>> Method 2, done right, makes it close to impossible to ship a product
>>> with bugs, because most of the bugs are found before you even run the
>>> code. Nobody can walk up and say "we have to ship it now, we'll finish
>>> debugging later."
>>>
>>> Method 2 is faster, too.
>>>
>>> John
>>>
>> Method 2 is an ideal to strive for, but it is not necessarily possible -
>> it depends on the project. In some cases, you know what the program is
>> supposed to do, you know how to do it, and you can specify, design, code
>> and even debug the software before you have the hardware. There's no
>> doubt that leads to the best software - the most reliable, and the most
>> maintainable. If you are making a system where you have the time,
>> expertise (the customer's expertise - I am taking the developer's
>> expertise for granted here :-), and budget to support this, then that is
>> great.
>>
>> But in many cases, the customer does not know what they want until you
>> and they have gone through several rounds of prototyping, viewing, and
>> re-writing. As a developer, you might need a lot of trial and error to
>> get software support for your hardware working properly. Sometimes
>> you can do reasonable prototyping of the software in a quick and dirty
>> way (like a simulation on a PC) to establish what you need, just like
>> breadboarding to test your electronics ideas, but not always. A
>> "development" project is, as the name suggests, something that changes
>> with time. Now, I am not suggesting that Method 1 is a good thing -
>> just that your two methods are black and white, while reality is often
>> somewhat grey. What do you do when a customer is asking for a control
>> system for a new machine he is designing, but is not really sure how it
>> should work? Maybe the mechanics are not finished - maybe they can't be
>> finished until the software is also in place. You go through a lot of
>> cycles of rough specification, rough design, rough coding, rough testing
>> with the customer, and repeat as needed. Theoretically, you could then
>> take the finished system, see what it does, write a specification based
>> on that, and re-do the software from scratch to that specification using
>> Method 2 above. If the machine in question is a jet engine, then that's
>> a very good idea - if it is an automatic rose picker, then it's unlikely
>> that the budget will stretch.
>>
>> I think a factor that makes us appear to have different opinions here is
>> the question of who the customer is. For most of my projects, we make
>> electronics and software for a manufacturer who builds it into their
>> system and sells it on to end users. You, I believe, specify and design
>> your own products, which you then sell to end users. From our point of
>> view, you are your own customer. It is up to the customer (i.e., the
>> person who knows what the product should do) to give good
>> specifications. As a producer of high-end technical products, you might
>> be able to give such good specifications - for many developers, their
>> customers are their company's marketing droids or external customers,
>> and they don't have the required experience.
>>
>> Programming from specifications is like walking on water - it's easy
>> when it's frozen.
>
> There's no doubt that the task to be performed may change; it usually
> does. But that doesn't mean you have to use Method 1 on whatever code
> is changed. On the contrary, the easiest way to create bugs is to dash
> off quick changes without taking into account the entire context and
> interactions. A change in the spec is no excuse for abandoning careful
> coding practices.
>

Indeed - I am not advocating Method 1, just saying that a pure Method 2
is not always possible or even appropriate.

> What we do is write the manual first, and get the customer to agree
> that's what he wants, and use the manual as our design spec. Changes
> may still happen, but the mechanism is to edit the manual and get
> agreement again, then change the code. Carefully, without hacks.
>
> And, when we're done, we have a manual!
>

On some products, we can do that - and there is no doubt that it is best
for everyone when that is the case. Unfortunately, it is not always
possible - it depends highly on the customer, the type of project, and
limitations such as time and budget.

>> As a developer, you can do the best you can with the material you
>> have - but don't promise perfection!
>
> The surest way to have bugs is to assume their inevitability. We
> expect, from ourselves, perfect, bug-free code, and tell our customers
> that they can expect bug-free products from us. When people say "all
> software has bugs" it really means that *their* software has bugs.
>

I mostly agree. There is certainly no excuse for saying that all
software has bugs - and there is no reason to code in a way that makes
bugs almost inevitable. But on some sorts of systems, you *do* have to
assume that there may be bugs in the software. To a fair extent, it does
not really matter whether a failure is due to the software, the
electronics, or outside effects (unexpected temperature changes,
physical damage, cosmic rays, or whatever) - you have to assume the
possibility of the control system failing. That's all part of risk
analysis.

Bug-free code is about quality. Top-quality products cost time and
money, and are thus not always appropriate - "good enough" is, after
all, good enough. Perhaps they will save time and money in the long run,
but perhaps not - and perhaps the customer has little money at the start
and would rather deal with long-term costs even if they turn out to be
higher. For some jobs, the appropriate quality for the software is zero
bugs, but in many cases you can tolerate minor flaws as long as the job
gets done. The real danger with software bugs, unlike other flaws in a
system, is that they are often hidden, and they can have unexpectedly
wide consequences. Modularisation and isolation of software parts can be
a big win here - your critical control loops can be bug-free, while
glitches in display code might be tolerated.

"There are two ways of constructing a software design; one way is to
make it so simple that there are obviously no deficiencies, and the
other way is to make it so complicated that there are no obvious
deficiencies. The first method is far more difficult." - C. A. R. Hoare

> John
>
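The split described above - bug-free critical control loops alongside tolerated display glitches - can be sketched in a few lines. This is a hypothetical illustration, not anything from the thread; the function names and the "display bug" are invented:

```python
# Sketch of isolating non-critical code from a critical control loop:
# the control step keeps producing commands even when the display
# code throws, because display updates are fenced off behind a guard.

def update_display(reading):
    # Hypothetical non-critical code; imagine it has a latent bug.
    if reading == 3:
        raise RuntimeError("display glitch")
    return f"value={reading}"

def control_step(reading, log):
    # Critical path: simple, reviewed, independent of display code.
    command = reading * 2
    # Non-critical path: isolated so its bugs cannot stop the loop.
    try:
        log.append(update_display(reading))
    except Exception:
        log.append("<display fault ignored>")
    return command

log = []
commands = [control_step(r, log) for r in range(5)]
```

The point is the structure, not the code: the critical path has no way to fail because of the display path.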
From: Terry Given on 9 Feb 2007 08:45

Didi wrote:
>> which is, of course, event driven software. which is (AIUI) what windows
>> is all about. perhaps that explains it.
>
> You must be joking? While event and interrupt are quite different
> things, to claim that windows is "event driven" with its latencies in
> the many-seconds range is laughable at best. A wanna-be event driven,
> maybe :-).
>

I never said it was good, and the last bit ("perhaps..." et al.) is
indicative of my low opinion of windoze.

> Dimiter

[snip]

Tell me about it. A couple of years back I developed some testers that
used a PC to talk to a range of little blue I/O boxes. The PC(s) were >=
1 GHz pentiummyjigs, and our PC guru (who is good) couldn't even get a
guaranteed 1 ms interrupt out of the poxy OS.

Cheers
Terry
From: Didi on 9 Feb 2007 09:13

> I never said it was good, and the last bit ("perhaps..." et al.) is
> indicative of my low opinion of windoze.

I thought this was the case, although I now see my posting did not show it.

> Tell me about it. A couple of years back I developed some testers that
> used a PC to talk to a range of little blue I/O boxes. The PC(s) were >=
> 1 GHz pentiummyjigs, and our PC guru (who is good) couldn't even get a
> guaranteed 1 ms interrupt out of the poxy OS.

Oh, I am sure nobody can even dream of 1 ms latency with windows. Some
time ago, when they had only NT, a guy told me 22 ms was the best
achievable (he was living in a windows world, though, so I don't know if
this was possible or wishful thinking).

Dimiter
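The latency figures being traded here are easy to probe on any general-purpose OS. A rough sketch follows (the helper name is invented, and it measures sleep overshoot from user space rather than true interrupt latency, so it only illustrates the flavour of the problem):

```python
# Rough probe of timer slop on a general-purpose OS: request a 1 ms
# delay repeatedly and record the worst overshoot. On a non-real-time
# OS the worst case is unbounded - which is the point being made above.
import time

def measure_overshoot(requested_s=0.001, samples=50):
    worst = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(requested_s)
        elapsed = time.perf_counter() - start
        worst = max(worst, elapsed - requested_s)
    return worst  # seconds beyond the requested delay

worst = measure_overshoot()
```

A single run tells you little; it is the worst case over hours of load that kills a "guaranteed 1 ms" requirement.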
From: Ken Smith on 9 Feb 2007 09:48

In article <1171028058.046574.318680(a)q2g2000cwa.googlegroups.com>,
Didi <dp(a)tgi-sci.com> wrote:
>> which is, of course, event driven software. which is (AIUI) what windows
>> is all about. perhaps that explains it.
>
> You must be joking? While event and interrupt are quite different
> things, to claim that windows is "event driven" with its latencies in
> the many-seconds range is laughable at best. A wanna-be event driven,
> maybe :-).

"Event driven" is a classification of how something operates. A "Pinto"
was a car.

Windows uses the event FIFO model. This ensures that the events are
taken in turn. It doesn't ensure that they are acted on quickly. This
model actually makes it harder to react quickly to events, but it saves
having to implement event commutation.

Consider this happening:

  Disk operation complete
  Mouse moved to the right 10 mickeys
  Printer port interrupt
  Serial interrupt
  Mouse button clicked

You can safely move the mouse action down the list to after the serial
interrupt. In an interrupt-priority system, it could be. Windows,
however, can't easily do this sort of thing.

--
kensmith(a)rahul.net  forging knowledge
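The FIFO-versus-priority contrast above can be sketched directly. The example below uses invented event names and priority values (it is not a model of Windows internals): a FIFO dispatches strictly in arrival order, while a priority queue defers the low-priority mouse move until after the interrupts - exactly the reordering described:

```python
# FIFO vs priority dispatch of the event list from the post.
import heapq

arrivals = [
    ("disk complete", 1),
    ("mouse moved 10 mickeys", 3),   # low priority
    ("printer interrupt", 1),
    ("serial interrupt", 1),
    ("mouse clicked", 2),
]

# FIFO model: arrival order is dispatch order, no matter what the
# events are - fast to implement, slow to react.
fifo_order = [name for name, _ in arrivals]

# Priority model: lower number = more urgent; the arrival index breaks
# ties so equal-priority events keep their relative order.
heap = [(prio, i, name) for i, (name, prio) in enumerate(arrivals)]
heapq.heapify(heap)
prio_order = [heapq.heappop(heap)[2] for _ in range(len(arrivals))]
```

The extra bookkeeping in the second model is the "event commutation" a FIFO avoids.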
From: Ken Smith on 9 Feb 2007 09:54
In article <1171028460.691147(a)ftpsrv1>, Terry Given <my_name(a)ieee.org> wrote:
> Didi wrote:
[....]
> Tell me about it. A couple of years back I developed some testers that
> used a PC to talk to a range of little blue I/O boxes. The PC(s) were >=
> 1 GHz pentiummyjigs, and our PC guru (who is good) couldn't even get a
> guaranteed 1 ms interrupt out of the poxy OS.

There are special drivers for serial ports that get about that sort of
timing. The trend these days is to offload the work from the PC to some
external box. This way you can have the PC only set the parameters and
run the user interface. The actual work is done by a processor that is
much more capable in this respect, such as an 8051.

--
kensmith(a)rahul.net  forging knowledge
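The "PC sets the parameters, external box does the work" split implies some small command protocol between the two. A sketch of what such framing might look like follows - the frame format, command code, and function names are all invented for illustration, not taken from any real device:

```python
# Hypothetical PC-to-box command framing: the PC only ever sends
# parameter frames; the timing-critical work runs on the external
# processor, so the PC's latency stops mattering.

def encode_frame(cmd, value):
    """Pack a command byte and a 16-bit parameter with a checksum."""
    payload = bytes([cmd, (value >> 8) & 0xFF, value & 0xFF])
    checksum = sum(payload) & 0xFF
    return b"\x02" + payload + bytes([checksum])  # 0x02 = start marker

def decode_frame(frame):
    """Return (cmd, value), or raise ValueError on a corrupt frame."""
    if len(frame) != 5 or frame[0] != 0x02:
        raise ValueError("bad frame")
    payload, checksum = frame[1:4], frame[4]
    if sum(payload) & 0xFF != checksum:
        raise ValueError("checksum mismatch")
    return payload[0], (payload[1] << 8) | payload[2]

SET_SAMPLE_PERIOD = 0x10                       # invented command code
frame = encode_frame(SET_SAMPLE_PERIOD, 1000)  # e.g. 1000 us period
cmd, value = decode_frame(frame)
```

The checksum matters precisely because the box, not the PC, is the thing meeting the deadlines: a corrupted parameter must be rejected, not acted on.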