From: Leif Roar Moldskred on 12 Feb 2010 06:42

In comp.lang.java.programmer Arved Sandstrom <dcest61(a)hotmail.com> wrote:
>
> This is what I am getting at, although we need to have Brian's example
> as a baseline. In this day and age, however, I'm not convinced that a
> person could even give away a free car (it wouldn't be free in any case,
> it would still get taxed, and you'd have to transfer title) and be
> completely off the hook, although 99 times out of 100 I'd agree with
> Brian that it's not a likely scenario for lawsuits.

Where Brian's example falls down is that the previous owner of the car is, in effect, just a reseller: he isn't likely to have manufactured the car or modified it to any degree.

However, let us assume that he _has_ made modifications to the car, such as, say, replacing the fuel tank. If he messed up the repair and, without realising it, turned the car into a potential firebomb, he would be liable for this defect even if he gave the car away free of charge.

> With software the law is immature.

I don't think the law is immature when it comes to software. Ultimately, software is covered by the same laws as Ford Pintos. That said, legal practice might be lagging behind, as might the market and users' awareness of their legal rights and duties.

> To my way of thinking there are some
> implied obligations that come into effect as soon as a software program
> is published, regardless of price. Despite all the "legal" disclaimers
> to the effect that all the risk is assumed by the user of the free
> software, the fact is that the author would not make the program
> available unless he believed that it worked, and unless he believed that
> it would not cause harm. This is common sense.

Indeed, and while the exact limit varies between legal jurisdictions, there is a legal limit to how much responsibility for a product the manufacturer can cede through contracts or licenses.

> It's early days, and clearly software publishers are able to get away
> with this for now. But things may change.

Let us hope they will.

-- 
Leif Roar Moldskred
From: Martin Gregorie on 12 Feb 2010 07:23

On Fri, 12 Feb 2010 07:16:33 +0000, Richard Heathfield wrote:
> No, it was a bug that wasted a byte and threw away data. And it's still
> a bug - some of the "solutions" adopted by the industry just shifted the
> problem on a little, by using a "century window" technique. That will
> catch up with us eventually.

Let's not forget that until some time in the '90s COBOL could not read the century, which created a blind spot about four-digit years in many IT people, COBOL being the language of choice for many mainframe systems (and a lot of minicomputers too, thanks to the quality of the Micro Focus implementation).

Until CODASYL changed the language spec, some time in the mid '90s, the only way you could get the date from the OS was with "ACCEPT CURRENT-DATE FROM DATE.", where CURRENT-DATE could only be defined as a six-digit field:

    01  CURRENT-DATE.
        05  CD-YY    pic 99.
        05  CD-MM    pic 99.
        05  CD-DD    pic 99.

-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
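For readers who haven't met the "century window" fix Richard mentions: a minimal sketch in C of the idea. The expand_year helper and the pivot value of 50 are illustrative, not any particular vendor's scheme, and the last line shows exactly why the technique only postpones the problem.

    #include <stdio.h>

    /* Century-window conversion: map a two-digit year to four digits
     * by picking a pivot. Years below the pivot are taken as 20xx,
     * the rest as 19xx. The pivot (50 here) is arbitrary, which is
     * the flaw: once real dates cross it, two-digit years become
     * ambiguous all over again. */
    static int expand_year(int yy, int pivot)
    {
        return (yy < pivot) ? 2000 + yy : 1900 + yy;
    }

    int main(void)
    {
        printf("%d\n", expand_year(99, 50)); /* 1999 */
        printf("%d\n", expand_year(10, 50)); /* 2010 */
        printf("%d\n", expand_year(49, 50)); /* 2049 - or is it 1949? */
        return 0;
    }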
From: Seebs on 12 Feb 2010 08:29

On 2010-02-12, Arved Sandstrom <dcest61(a)hotmail.com> wrote:
> With software the law is immature. To my way of thinking there are some
> implied obligations that come into effect as soon as a software program
> is published, regardless of price. Despite all the "legal" disclaimers
> to the effect that all the risk is assumed by the user of the free
> software, the fact is that the author would not make the program
> available unless he believed that it worked, and unless he believed that
> it would not cause harm. This is common sense.

Common sense has the interesting attribute that it is frequently totally wrong.

I have published a fair amount of code which I was quite sure had at least some bugs, but which I believed worked well enough for recreational use or to entertain. Or which I thought might be interesting to someone with the time or resources to make it work. Or which I believed worked in the specific cases I'd had time to test.

I do believe that software will not cause harm *unless people do something stupid with it*. Such as relying on it without validating it.

> I don't know if there is a legal principle attached to this concept, but
> if not I figure one will get identified. Simply put, the act of
> publishing _is_ a statement of fitness for use by the author, and to
> attach completely contradictory legal disclaimers to the product is
> somewhat absurd.

I don't agree. I think it is a reasonable *assumption*, in the absence of evidence to the contrary, that the publication is a statement of *suspected* fitness for use. But if someone disclaims that, well, you should assume that they have a reason to do so. Such as, say, knowing damn well that it is at least somewhat buggy.

Wind River Linux 3.0 shipped with a hunk of code I wrote, which is hidden and basically invisible in the infrastructure. We are quite aware that it had, as shipped, at least a handful of bugs. We are pretty sure that these bugs have some combination of the following attributes:

1. Failure will be "loud" -- you can't fail to notice that a particular
   failure occurred, and the failure will call attention to itself in
   some way.
2. Failure will be "harmless" -- operation of the final system image
   built in the run which triggered the failure will be successful,
   because the failure won't matter to it.
3. Failure will be caught internally and corrected.

So far, out of however many users over the last year or so, plus huge amounts of internal use, we've not encountered a single counterexample. We've encountered bugs which had only one of these traits, or only two of them, but we have yet to find an example of an installed system failing to operate as expected as a result of a bug in this software. (And believe me, we are looking!)

That's not to say it's not worth fixing these bugs; I've spent much of my time for the last couple of weeks doing just that. I've found a fair number of them, some quite "serious" -- capable of resulting in hundreds or thousands of errors... all of which were caught internally and corrected.

The key here is that I wrote the entire program with the assumption that I could never count on any other part of the program working. There's a client/server model involved. The server is intended to be robust against a broad variety of misbehaviors from the clients, and indeed, it has been so. The client is intended to be robust against a broad variety of misbehavior from the server, and indeed, it has been so.
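The shape of that kind of defensiveness is easy to sketch in C. This is a hypothetical length-prefixed wire format invented for illustration (parse_message, the 4-byte header, and MAX_PAYLOAD are not the actual Wind River protocol); the point is only that every field a client supplies is range-checked before it is trusted.

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_PAYLOAD 4096  /* illustrative limit, not from any real protocol */

    /* Hypothetical wire format: 4-byte big-endian payload length,
     * then the payload. Every claim the client makes is checked
     * against what was actually received, so a malformed or hostile
     * message is rejected instead of believed. */
    int parse_message(const unsigned char *buf, size_t buflen,
                      const unsigned char **payload, size_t *payload_len)
    {
        uint32_t len;

        if (buf == NULL || buflen < 4)
            return -1;              /* too short to hold a header */

        len = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
              ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];

        if (len > MAX_PAYLOAD)
            return -1;              /* client claims an absurd size */
        if (len > buflen - 4)
            return -1;              /* claims more bytes than we received */

        *payload = buf + 4;
        *payload_len = len;
        return 0;
    }

A server built out of checks like this never has to assume the process on the other end of the socket is behaving.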
At one point in early testing, a fairly naive and obvious bug resulted in the server coredumping under fairly common circumstances. I didn't notice this for two or three weeks because the code to restart the server worked consistently. In fact, I only noticed it when I saw the segfault log messages on the console...

A lot of planning goes into figuring out how to handle bad inputs, how to fail gracefully if you can't figure out how to handle bad inputs, and so on. Do enough of that carefully enough and you have software that is at least moderately durable.

-s

p.s.: For the curious: it's something similar in concept to the "fakeroot" tool used on Debian to allow non-root users to create tarballs or disk images which contain filesystems with device nodes, root-owned files, and other stuff that lets a non-root developer do system development targeting other systems. It's under GPLv2 right now, and I'm doing a cleanup pass, after which we plan to make it available more generally under the LGPL. When it comes out, I will probably announce it here, because even though it is probably the least portable code I have EVER written, there is of course a great deal of fairly portable code gluing together the various non-portable bits, and some of it's fairly interesting.

-- 
Copyright 2010, all wrongs reversed. Peter Seebach / usenet-nospam(a)seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
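The restart behaviour Seebs describes is worth a sketch of its own, since it shows how a recovery mechanism can hide the very failures it recovers from. A minimal POSIX supervisor loop in C, assuming a hypothetical ./server binary (this is the generic pattern, not Seebs's actual code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Minimal supervisor: restart the server whenever it dies. If
     * the exit status isn't logged somewhere someone actually reads,
     * a crash loop like the segfault described above can go
     * unnoticed for weeks. */
    int main(void)
    {
        for (;;) {
            pid_t pid = fork();
            if (pid < 0) {
                perror("fork");
                return EXIT_FAILURE;
            }
            if (pid == 0) {
                execl("./server", "server", (char *)NULL); /* hypothetical binary */
                perror("execl");       /* only reached if exec fails */
                _exit(127);
            }

            int status;
            if (waitpid(pid, &status, 0) < 0) {
                perror("waitpid");
                return EXIT_FAILURE;
            }
            if (WIFSIGNALED(status))   /* e.g. SIGSEGV: the "loud" part */
                fprintf(stderr, "server killed by signal %d, restarting\n",
                        WTERMSIG(status));
            else
                fprintf(stderr, "server exited with status %d, restarting\n",
                        WEXITSTATUS(status));
            sleep(1);                  /* avoid a tight crash loop */
        }
    }

Without the fprintf lines, this loop silently converts "the server segfaults constantly" into "everything appears to work".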
From: Nick Keighley on 12 Feb 2010 09:02

On 12 Feb, 11:30, Michael Foukarakis <electricde...(a)gmail.com> wrote:
> On Feb 12, 10:08 am, Nick Keighley <nick_keighley_nos...(a)hotmail.com>
> wrote:
> > On 11 Feb, 09:58, Michael Foukarakis <electricde...(a)gmail.com> wrote:
> > > I am not an expert at law, so I cannot reason about justification or
> > > necessity. However, I do recall quite a few "mishaps" and software
> > > bugs that cost both money and lives. Let's see: a) Mariner I,
> > > b) 1982, an F-117 crashed, can't recall if the pilot made it,
> > > c) the NIST has estimated that software bugs cost the US economy
> > > $59 billion annually, d) 1997, radar software malfunction led to a
> > > Korean jet crash and 225 deaths, e) 1995, a flight-management
> > > system presents conflicting information to the pilots of an
> > > American Airlines jet, who got lost, crashed into a mountain,
> > > leading to the deaths of 159 people, f) the crash of the Mars
> > > Polar Lander, etc. Common sense tells me that certain people bear
> > > responsibility over those accidents.
> >
> > http://catless.ncl.ac.uk/risks
>
> I'm terribly sorry, but I didn't get your point, if there was one.
> Seriously, no irony at all. Care to elaborate?

Oh, sorry. You were listing "software bugs that cost both money and lives"; I thought your list was a bit light (Ariane and Therac spring to mind immediately). I thought you might not have come across the RISKS forum, which discusses many computer-related (and often software-related) bugs.
From: Martin Gregorie on 12 Feb 2010 09:42
On Fri, 12 Feb 2010 05:58:07 -0800, Nick Keighley wrote:
> On 12 Feb, 11:21, Michael Foukarakis <electricde...(a)gmail.com> wrote:
> > > Products have passed all the
> > > tests, yet still failed to meet spec in production.
>
> The testing was inadequate then. System test is supposed to test
> compliance with the requirement.

Quite. System tests should at least be written by the designers, and preferably by the commissioning users. Module tests should NOT be written by the coders.

> The System Test people do black box
> testing (no access to internals) and demonstrate that it meets the
> requirement. The customer then witnesses a System Acceptance Test
> (often a cut-down version of System Test plus some goodies of his own,
> sometimes just ad hoc "what does this do then?").

These are the only tests that really count, apart from performance testing. It's really important that the project manager keep an eye on all levels of testing, and especially on how the coders design unit tests, or it can all turn to worms.

-- 
martin@   | Martin Gregorie
gregorie. | Essex, UK
org       |
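For concreteness, "black box" here means driving the delivered program only through the interface the customer gets. A minimal sketch in C, where the ./app under test and its version-string requirement are made up purely for illustration:

    #include <stdio.h>
    #include <string.h>

    /* Black-box check: run the program under test from the command
     * line and compare its observable output against the requirement.
     * No access to internals, no source code -- the same view the
     * customer has at acceptance. */
    int main(void)
    {
        char line[256];
        FILE *p = popen("./app --version", "r");  /* hypothetical program */
        if (p == NULL) {
            perror("popen");
            return 1;
        }
        if (fgets(line, sizeof line, p) == NULL) {
            fprintf(stderr, "FAIL: no output\n");
            pclose(p);
            return 1;
        }
        pclose(p);

        /* illustrative requirement: version string starts with "app 2." */
        if (strncmp(line, "app 2.", 6) != 0) {
            fprintf(stderr, "FAIL: got \"%s\"\n", line);
            return 1;
        }
        puts("PASS");
        return 0;
    }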