From: Lew on 13 Feb 2010 18:44

Lew wrote:
>> There should be a much wider gap between the pay scale of the good
>> developer and that of the putz or newbie. Something akin to the gap
>> between top actors and those who have to wait tables, or top pro
>> athletes and those in the minor leagues.

Joe Wright wrote:
> Linus Torvalds writes free software. He is handsomely compensated. I
> understand that Richard Stallman is not destitute either. Hmmm.

How handsomely? How far from destitute? How does their compensation compare to, say, Drew Brees's? Or Sean Payton's? What is the ratio between their compensation and that of someone starting out as a developer? How does it compare to the ratio between Peyton Manning's (of the Indianapolis Colts) compensation and Brenan Jackson's (of the Pittsburgh Steelers)? I am espousing pay gaps akin to those of actors or pro athletes.

--
Lew
From: Lew on 13 Feb 2010 18:51

Lew wrote:
>>> There should be a much wider gap between the pay scale of the good
>>> developer and that of the putz or newbie. Something akin to the gap
>>> between top actors and those who have to wait tables, or top pro
>>> athletes and those in the minor leagues.

Joe Wright wrote:
>> Linus Torvalds writes free software. He is handsomely compensated. I
>> understand that Richard Stallman is not destitute either. Hmmm.

The fact that they write free software is immaterial to my point.

--
Lew
From: Öö Tiib on 13 Feb 2010 19:23

On Feb 13, 5:09 pm, Lew <l...(a)lewscanon.com> wrote:
> James Kanze wrote:
> > Logically, I think that most of the techniques necessary for
> > making really high quality software would be difficult to apply
> > in the context of a free development. And at least up to a
>
> Nonsense. Free software has a much higher rate of adoption of best
> practices for high quality than for-pay software does.
>
> You say so, too. It's the "logically" with which I take issue. That
> free software uses the best techniques and has the highest quality in
> the marketplace is entirely logical, in addition to being an observed
> fact. You just have to avoid false assumptions and fallacies in reasoning.

I'm not sure what you mean. There is no such binary logical connection, and the opposite is just as easy to observe. Just download a few C++ code bases at random from places like sourceforge.net and review them. One produced using good techniques is genuinely hard to find there. Most code there is of such low quality that it would never pass QA peer review at a professional software house. That is easy to explain logically, since most of it is the hobby work of non-professionals who find software development amusing, or of professionals in another language who are learning C++ as a hobby.

Results are somewhat better with larger, more popular open source products, but that is often thanks to a huge tester and developer base rather than to good techniques. In the best shape are open source projects that are popular and in which commercial companies actively participate, since they need them for building or supporting their commercial products. Again, it is easy to see how the companies actually enforce the techniques and quality there, and it is likely that the companies apply even higher standards in-house.

The worst I have seen is code written by the in-house software departments of some smaller non-software companies, but that again is easy to explain: the workers of those departments obfuscate their work to gain job security. So all of these things have logical explanations, and there are no silly binary connections like free = quality and commercial = lack of quality.
From: Malcolm McLean on 14 Feb 2010 02:34

On Feb 13, 4:59 pm, Arved Sandstrom <dces...(a)hotmail.com> wrote:
> Most software programs I have to work
> with do not have show stopper bugs, and realistically do not need to be
> "recalled".
>

I'm using Matlab at the moment. It seems to crash about once every two days, and a bug in a C subroutine will also take down the whole system. That's irritating but liveable with. However, what if the results of my scientific programs have small errors in them? No one will die, but false information may get into the scientific literature. If I were using it for engineering calculations, someone might die. On the other hand, the chance of a bug in my bespoke Matlab code is probably orders of magnitude greater than the chance of a bug in Matlab's own routines. So does it really matter?
From: Arved Sandstrom on 14 Feb 2010 03:25
Malcolm McLean wrote:
> On Feb 13, 4:59 pm, Arved Sandstrom <dces...(a)hotmail.com> wrote:
>> Most software programs I have to work
>> with do not have show stopper bugs, and realistically do not need to be
>> "recalled".
>>
> I'm using Matlab at the moment. It seems to crash about once every two
> days. A bug in a C subroutine will also take down the whole system.
> That's irritating but liveable with. However what if the results of my
> scientific programs have small errors in them? No one will die, but
> false information may get into the scientific literature. If I was
> using it for engineering calculations, someone might die. However the
> chance of a bug in my bespoke Matlab code is probably orders of
> magnitude greater than a bug in Matlab's routines themselves. So does
> it really matter?

This is a really good point. I've worked with programs that deal with health information, others that deal with vital statistics (birth/death/marriage etc), others that deal with other public records (like driver licensing and vehicle registration). I've also spent quite a few years (not recently) writing programs that manipulate scientific data (primarily oceanographic data).

In the case of the latter (the oceanographic data), naive acceptance of my program output by a researcher might have led to professional embarrassment, or in the worst case it could have skewed public policy related to fisheries management or climate science. Realistically, though, we had so many checks at all stages that the chances of errors were minute.

Data integrity defects, or the hint thereof, in the driver's license/vehicle registration/vital statistics applications certainly cause(d) a lot of sleepless nights, but the effects here are individual, and depending on who you are talking about tend to confine themselves to career damage and again embarrassment, and some wasted time and money, but rarely anything truly serious. Data integrity errors in health information are ungood. Period.
In the first case (oceanographic data processing) occasional crashes of programs were irrelevant. All the software was custom, and it's not like it had to be running 24/7. In the case of the motor vehicle registry systems, or the vital statistics systems, it does need to be running 24/7 (e.g. the police need to run plates at 3 AM as much as they do at 3 PM), but ultimately a crash is still only an embarrassment. In the case of health information (e.g. a system that paramedics can use to query MedicAlert data while on a call), a crash is unacceptable.

Depending on the application you can describe the impact of both data integrity errors and downtime due to crashes (or extreme sluggishness). All this stuff really needs to be part of system requirements (not just software requirements).

I've noticed too that many programmers tend to concentrate on issues like uptime, performance and business logic, but completely or substantially ignore matters like encodings, data type conversions, data value sanity checking and invariants, to mention a few. IOW, they do not have a good understanding of the data.

Ultimately what is tolerable obviously varies, both with respect to system availability and also with respect to data integrity. If we have defects then we are guaranteed to have defects in both areas. Personally I believe that with respect to the latter - data integrity - software developers at large could learn a lot from scientific programmers, but good luck with that one.

AHS
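[Editor's note: the point above about data value sanity checking and invariants can be sketched in Java. This is a hypothetical illustration, not code from the thread; the class name, field names, and the plausible-value ranges are all illustrative assumptions.]

```java
// Hypothetical sketch: enforce data invariants at the boundary where a
// record is constructed, rather than trusting raw input and letting
// impossible values propagate into downstream analysis.
final class SalinityReading {
    private final double salinityPsu;  // practical salinity units (illustrative range)
    private final double depthMetres;  // illustrative range: surface to deepest trench

    SalinityReading(double salinityPsu, double depthMetres) {
        // Sanity checks: reject physically implausible values immediately.
        if (salinityPsu < 0.0 || salinityPsu > 45.0) {
            throw new IllegalArgumentException(
                "salinity out of plausible range: " + salinityPsu);
        }
        if (depthMetres < 0.0 || depthMetres > 11_000.0) {
            throw new IllegalArgumentException(
                "depth out of plausible range: " + depthMetres);
        }
        this.salinityPsu = salinityPsu;
        this.depthMetres = depthMetres;
    }

    double salinityPsu() { return salinityPsu; }
    double depthMetres() { return depthMetres; }

    public static void main(String[] args) {
        SalinityReading ok = new SalinityReading(35.2, 1200.0);
        System.out.println("accepted: " + ok.salinityPsu() + " PSU");
        try {
            new SalinityReading(-3.0, 100.0); // impossible salinity: rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The design choice is the one the post argues for: once an object exists, its invariants hold, so every later stage of the pipeline can rely on them instead of re-checking (or silently ignoring) bad data.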