From: jmfbahciv on 1 Mar 2007 07:34

In article <es5i08$ujr$3(a)blue.rahul.net>,
   kensmith(a)green.rahul.net (Ken Smith) wrote:
>In article <8ab6a$45e5c387$4fe73b0$13095(a)DIALUPUSA.NET>,
>nonsense(a)unsettled.com <nonsense(a)unsettled.com> wrote:
>>Ken Smith wrote:
>>> In article <es3v6k$8qk_001(a)s823.apx1.sbo.ma.dialup.rcn.com>,
>>> <jmfbahciv(a)aol.com> wrote:
>>
>>>>There exists a Murphy's Law corrollary that guarantees each time
>>>>a file is opened an error will be introduced.
>>
>>> This is simply bogus BS.
>>
>>Any time you open a file in a writable mode an error may
>>be introduced.
>
>The "in a writable mode" makes this a very different statement.

Each time you copy, the file has been opened in a writable mode.

>
>>Now consider your linux system. Every time access any file,
>>changes are written. Believe it or not, an error may be
>>introduced. Knowing Murphy as intimately as I do, some
>>significant number will end up introducing an error. When
>>it is, in my case, the error will be important.
>
>That is a case where the file has been modified not merely opened for
>reading.

Has the date changed? Then some part of the overhead of the file has
changed. If you are saving this file to a backup tape, you are writing
that file to another device. If you have an OS that keeps track of
written blocks, then the list of those blocks can be changed,
especially if a bad spot forms on the device. Those are only a few of
the things that can go wrong. There is always the midnight editor.
On a network? There are lots of opportunities to get a file modified
without your knowledge.

>
>>"Reliable" systems are defined by a threshold in the number
>>of errors/some_number of operations. But you knew that, no?
>
>Yes, I knew that but it appears that BAH doesn't understand about the
>difference between making a back up, doing a restore and repairing damage.

I do. I simply posed the situation where the problem that caused the
mess is also on the backup. Doing a restore will restore the mess
maker.

>
>
>>BAH's career included a requirement that she be paranoid
>>about all things that can go wrong. There's no sense arguing
>>these issues because in the different worlds you live in
>>each of you is right.
>
>Her's must be some other planet.

Oh, definitely. We were in the mainframe world of timesharing
computing. Its requirements are very different from the small
computers you know about.

/BAH
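The "overhead" being described here is file metadata. On a Unix-style
system, merely reading a file can move its access time, and a copy is a
new file with its own inode and timestamps even when the payload bytes
are identical. A minimal sketch of that in Python, using a made-up file
name and assuming the filesystem has not suppressed access-time updates
(noatime/relatime mounts may hide the effect):

    import os
    import shutil
    import time

    path = "example.dat"              # hypothetical file name
    with open(path, "wb") as f:
        f.write(b"unchanging payload")

    before = os.stat(path)
    time.sleep(1)

    with open(path, "rb") as f:       # opened for reading only
        f.read()

    after = os.stat(path)
    print("contents changed:  ", before.st_mtime != after.st_mtime)  # expect False
    print("access time moved: ", before.st_atime != after.st_atime)  # often True

    # A copy is a brand-new file: same bytes, but its own inode,
    # timestamps, and block allocation.
    shutil.copy(path, path + ".bak")
    print("copy has a new inode:", os.stat(path + ".bak").st_ino != after.st_ino)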
From: jmfbahciv on 1 Mar 2007 07:40

In article <87y7mhb0fx.fsf(a)nonospaz.fatphil.org>,
   Phil Carmody <thefatphil_demunged(a)yahoo.co.uk> wrote:
>jmfbahciv(a)aol.com writes:
>> In article <ersjj1$ui3$9(a)blue.rahul.net>,
>> kensmith(a)green.rahul.net (Ken Smith) wrote:
>> >In article <45E1CD23.26249F55(a)hotmail.com>,
>> >Eeyore <rabbitsfriendsandrelations(a)hotmail.com> wrote:
>> >[....]
>> >>> No, not only the addressing appears larger. The total memory appears to
>> >>> be more. Merely allowing an address space that is larger is merely
>> >>> address translation. You only get into virtual memory when it appears the
>> >>> programs as though the machine has more memory than there is physical RAM.
>> >>> This is exactly what I was telling you when I directed you to how the word
>> >>> "virtual" is defined.
>> >>
>> >>To the processor itself the VM should be transparent. It should 'look' and
>> >>behave like acres of RAM. A good example of where the such a task should be
>> >>offloaded from the CPU itself.
>> >
>> >No, that isn't done. VM systems are also usually multitaskers. You could
>> >create one that isn't but the rule is that they are. Here's how it the
>> >operation breaks down in a multitask environment.
>> >
>> >- Running Task A
>> >- Task A does a page fault on the real memory
>> >- OS gets an interrupt
>> >- Perhaps some checking is done here
>> >- OS looks for the page to swap out
>>
>> Swap out from where?
>
>Main memory, obviously. That's what we're talking about.

No, Ken is not talking about main memory. He is talking about
"swapping" when the RAM's data is to be written out.

>
>> If the CPU architecture has write-through
>> cache you don't have to move the contents of the page you need
>> to remove in order to fetch the page that Task A needs from
>> memory.
>
>Wrong. If it's not moved onto the swap medium, it's lost.
>My kind of computing doesn't like losing data, yours might,
>but as we know BAH computing is BAD computing.

You do not lose "data" if you never modify the EXE. There were good
reasons to slap users' fingers if they tried to self-modify their
code. Sharable code was not writable in our scheme.

>
>> >- Complex issue of priority on swapping skipped here.
>> >- OS marks the outgoing page to be not usable
>> >- OS starts swap actions going
>> >- OS looks for a task that can run now
>> >- OS remembers some stuff about task priorities
>> >- OS switches to new context
>> >- Task B runs
>> >- Swap action completes
>> >- OS gets interrupt
>> >- OS marks the new page as ready to go
>> >- OS checks the task priority information
>> >- OS maybe switches tasks
>> >- Task A or B runs depending on what OS decided.
>> >
>> >
>> >This way, a lower priority task can do useful stuff while we wait for the
>> >pages to swap.
>>
>> Priorities are usually set based on hardware at the level you're
>> talking about.
>
>You're gibbering again.

Were you attempting to counter something he said? Are you really
interested in an answer?

/BAH
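The distinction being argued over is the usual clean-versus-dirty page
rule: a page that was never written, such as a page of shared,
write-protected executable code, does not have to be copied to the swap
device before its frame is reused, because an identical copy already
exists on disk. A minimal sketch of that eviction decision, written as
Python pseudocode rather than any particular operating system's pager;
all names here are hypothetical:

    class Page:
        def __init__(self, frame, dirty=False, writable=True, backing="swap"):
            self.frame = frame        # physical frame holding the page
            self.dirty = dirty        # set by hardware on any store into the page
            self.writable = writable  # False for shared, write-protected code
            self.backing = backing    # where a pristine copy of the page lives

    class FakeSwap:
        def __init__(self):
            self.writes = 0
        def write(self, frame):
            self.writes += 1          # stand-in for a real transfer to disk/drum

    def evict(page, swap):
        """Free a physical frame, writing the page out only if that is needed."""
        if page.writable and page.dirty:
            # The only up-to-date copy is in RAM, so it must go to swap first.
            swap.write(page.frame)
        # A clean or write-protected page is simply dropped: the next fault
        # re-reads it from its backing store (the swap area or the EXE itself).
        return page.frame

    swap = FakeSwap()
    evict(Page(frame=1, dirty=True), swap)                     # must be written out
    evict(Page(frame=2, dirty=False), swap)                    # dropped, no I/O
    evict(Page(frame=3, writable=False, backing="exe"), swap)  # pure code, no I/O
    print("pages actually written to swap:", swap.writes)      # prints 1

Real pagers make this decision from hardware dirty bits and per-page
protection, but the logic is the same.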
From: jmfbahciv on 1 Mar 2007 08:12

In article <es46st$fiu$5(a)blue.rahul.net>,
   kensmith(a)green.rahul.net (Ken Smith) wrote:
>In article <es3v6k$8qk_001(a)s823.apx1.sbo.ma.dialup.rcn.com>,
> <jmfbahciv(a)aol.com> wrote:
>>In article <es1ive$89d$6(a)blue.rahul.net>,
>> kensmith(a)green.rahul.net (Ken Smith) wrote:
>[...]
>>>>I know what I'm talking about.
>>>
>>>You don't seem to me making clear points on the subject.
>>
>>I can't help that. When you read my stuff with the initial
>>assumption that it is going to be wrong, the onus of clarity
>>is not on my shoulders.
>
>When I saw that first post I read from you, I had no opinion on the
>subject before I started reading it. It quickly became obvious that you
>don't do a good job of explaining things. When I did figure out what you
>were meaning, I discovered that much of it was simply wrong.
>
>You may not consider that you have an "onus of clarity"

I agree that I have an onus of clarity, but not in the case where
everything I write is going to be read as 100% incorrect. It's a waste
of my time and a fruitless exercise.

> but you need to be
>clear if you want to make a case. If your goal isn't to change a mind by
>making a clear argument then I have to ask why you are posting at all. It
>doesn't make sense for you to post messages that you know others will not
>understand.

Some people understand. Some, who don't understand, take the time to
figure it out when the subject matter matters. When I was working,
they couldn't have paid me a million dollars a day to write.

>>>> In the case of sources, if your
>>>>procedures don't make you use them once in a while, they can
>>>>disappear and be gone for years before anybody discovers that they're
>>>>missing.
>>>
>>>Does "sources" in this case mean source code?
>>
>>Yes.
>>
>>> Assuming yes, this
>>>statement is not actually true.
>>
>>You are wrong.
>
>No, I am right. See how when I do understand what you mean I discover
>that it is fact wrong.

I am talking about actual times when this happened. In one case, the
sources were gone for five years before a certain corporation
discovered the problem. It was one of the most important programs of
that company's business. There have been other instances where
sources disappeared that I know about. These were the ones that
became elevated to firefights. A firefight is when the customer has
such a severe problem that bit gods have to drop everything they're
doing and work on the customer's problem.

>
>
>>> You only need to have an effective check
>>>that the files are still the same as before. You don't have to attempt to
>>>compile.
>>
>>A compilation guarantees that every thing that is needed to build
>>the product is present.
>
>No this is simply wrong. Mere compilation only proves that something that
>didn't generate error messages is there. You need to then compare the
>results with what you got last time from the compile. Even this is not
>100%. You have to make sure there weren't any object files on there.

We were talking about missing files. I'm talking about the case when
files go missing and are never missed. If your app runs for years
without any problems, and suddenly the OS world changes out from
underneath it, you might have to change the code. That is when you
get the sources, diddle them, rebuild the app, and type RUN to the
EXE. It is highly likely that, if you haven't had to build that app
over the last five years, you'll not have the sources. And the backup
scheme doesn't save all backups over the last five years.
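Both of the checks being argued about amount to exercising the source
tree on a schedule so that a missing file is noticed while it can still
be recovered. A lighter-weight version than a trial build is to record
a hash of every source file once and re-verify the list periodically.
A sketch, with hypothetical "src" and "manifest.txt" names; a trial
build exercises more of the tool chain, but it also requires the whole
build environment to still exist:

    import hashlib
    import os

    def digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def make_manifest(root, manifest):
        # Run once; store the manifest with the backups.
        with open(manifest, "w") as out:
            for dirpath, _, names in os.walk(root):
                for name in sorted(names):
                    path = os.path.join(dirpath, name)
                    out.write(f"{digest(path)}  {path}\n")

    def verify_manifest(manifest):
        # Run on a schedule; an empty list means nothing vanished or changed.
        problems = []
        with open(manifest) as f:
            for line in f:
                want, path = line.rstrip("\n").split("  ", 1)
                if not os.path.exists(path):
                    problems.append(f"MISSING  {path}")
                elif digest(path) != want:
                    problems.append(f"CHANGED  {path}")
        return problems

    # make_manifest("src", "manifest.txt")
    # print(verify_manifest("manifest.txt"))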
>
>> When a system has run without any problems
>>for years, there is ususally nobody around who can build it nor
>>maintain it; the first person to find a job is the one who babysits
>>sources. When they're not actively changing, it doesn't make any
>>sense to a manager to pay somebody to watch paint, that was designed
>>never to dry, dry.
>
>For important software, the code is often treated as an important drawing
>or religious text is. If well designed systems are in place, the
>documents will be maintained.

I'm not talking about documents. Those can be "saved" long term in
hard copy. I'm talking about source code. If you don't pay a
babysitter, the files will disappear and nobody will miss them.

>
>
>>>The issue is to make sure the files never disappear or get damaged.
>>
>>The only way to do this is to make the usage of them a part of
>>daily computing life.
>
>No, you can do it once a year once the software has stopped changing.

I don't know how to point out how you are wrong in this case. I'm not
talking about sources that have been under active development. I'm
talking about sources whose functionality has not been broken for a
long time.

>
>>> This
>>>can be done with a procedure that doesn't require the very old media.
>>>Checks like the CRC are quite effective.
>>
>>Nope. It is not effective over the long term.
>
>You are wrong again.

I am thinking long-term scenarios. You are not. And that's why you
keep thinking I'm wrong. Yours is correct for very short term bit
storage; it is not for long term bit storage of the same set of bits.
I am talking about events that really happened in the past. Do you
not wish to learn from history so you can prevent a repetition?

>
>>>>The access date-time, last-written date-time, and last-read date-time
>>>>should be three separate date-time fields. There is a fourth
>>>>that is moderately useful, but I can't recall what that one is.
>>>
>>>Linux stores creation and modification dates. That is enough.
>>
>>No, it's not. Access dates are also important in backup procedures.
>
>Nope wrong again. If a file hasn't been accessed or if it has doesn't
>matter at all. At the end of the year, the new copies are made.

If you keep no record in the file that people have been accessing it
every day, then the system can reach the conclusion that the file is
no longer used and can be expunged.

>
>[.....]
>>>It does take a lot of time. The "care" is having well written software.
>>>If the system is damaged, you have to repair it. This is just life. You
>>>can do things to prevent the damage in the first place but this is not the
>>>issue we are talking about. We got here by talking about backups.
>>
>>And what if the breaking was done by something that is on those tapes?
>>Whenever you restore the tapes, the system proceed to break again.
>
>You are constantly confusing restoring with repairing.

No, I am not.

> They are two very
>different things. As long as you keep confusing the two you will not be
>able to see your errors on this subject.

I can't conjure a different way of explaining the problem so that you
can understand what I'm talking about.
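The argument for a separate last-read date is easiest to see in an
expiration or archive-migration policy: a file that is read every day
but never rewritten looks exactly like an abandoned one if the system
records only modification times. A small sketch of that policy
decision, with a made-up six-month cutoff and hypothetical paths:

    import os
    import time

    CUTOFF = 180 * 24 * 3600        # hypothetical six-month threshold, in seconds

    def looks_abandoned(path, use_atime=True):
        st = os.stat(path)
        last_touched = max(st.st_mtime, st.st_atime) if use_atime else st.st_mtime
        return (time.time() - last_touched) > CUTOFF

    # For a reference file that is read every day but last *written* years ago:
    #   looks_abandoned(p, use_atime=True)   -> False  (daily readers keep it alive)
    #   looks_abandoned(p, use_atime=False)  -> True   (a candidate for expunging)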
>
>
>
>>>>Another problem that needs to be solved is off-site storage that
>>>>doesn't degrade and still be able to read after a decade of
>>>>hard/software evolution. I don't think anybody has produced
>>>>a method yet. There is one going on but the only way to verify
>>>>that it works is to wait a decade ;-).
>>>
>>>You can transcribe the data every so often.
>>
>>You can never verify that bits were dropped over the long term.
>
>Yes, you can. You need to read up about redundant information.

In order to verify that a file hasn't changed over the long term, you
have to have something that is five years old for comparison.

>
>
>>Copying is not a good method of keeping a snapshot of something
>>in the past. The copy is a new file. It is not the old file
>>and there is no guarantee that something hasn't changed.
>
>You can only lower the odds of having it be wrong. One chance in one
>googleplex is low enough odds to be considered safe.

Your odds are off. Never underestimate Murphy's Law.

>
>
>>There exists a Murphy's Law corrollary that guarantees each time
>>a file is opened an error will be introduced.
>
>This is simply bogus BS.

Nope. It is similar to the situation where a spelling correction to a
post contains a spelling error. I don't know why this seems to happen;
it's on my list of life's mysteries to solve.

>
>
>>> Since the media has gotten
>>>denser with time, this make sense from a cost point of view. That big
>>>hole in the mountain in Utah is only a limited size.
>>
>>YOu still have a lot to learn about bit management.
>
>No, you appear to.

Nope. I know more about it than you might ever learn.

/BAH
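Rough numbers help frame this dispute, and the ones below are purely
illustrative assumptions, not measurements: an n-bit checksum lets a
random corruption slip through with probability about 2^-n, and
independent copies that are cross-checked against each other are lost
only if every copy is damaged at once. Neither figure is anywhere near
one in a googolplex, and both rest on the failures being independent:

    # Illustrative arithmetic only: how often does a random corruption slip
    # past a checksum, and what does keeping independent copies buy you?
    copies = 3                     # assumed: three independently stored copies
    p_copy_corrupted = 1e-3        # assumed chance any one copy is damaged per year
    p_crc32_miss = 2.0 ** -32      # chance a random error also matches a 32-bit CRC

    # One copy plus one CRC: loss goes unnoticed only if damage occurs AND
    # the CRC happens to still match.
    p_single = p_copy_corrupted * p_crc32_miss
    # Independent copies cross-checked against each other: all must be damaged.
    p_all_copies = p_copy_corrupted ** copies

    print(f"undetected loss, single copy + CRC32: ~{p_single:.1e} per year")
    print(f"every copy damaged at once:           ~{p_all_copies:.1e} per year")
    # Small numbers, but nothing like one in a googolplex -- and the estimate
    # assumes the failures really are independent, which Murphy loves to violate.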
From: jmfbahciv on 1 Mar 2007 08:45

In article <es1h1n$89d$2(a)blue.rahul.net>,
   kensmith(a)green.rahul.net (Ken Smith) wrote:
>In article <es14vi$8qk_001(a)s924.apx1.sbo.ma.dialup.rcn.com>,
> <jmfbahciv(a)aol.com> wrote:
>>In article <eruu77$vf3$4(a)blue.rahul.net>,
>> kensmith(a)green.rahul.net (Ken Smith) wrote:
>>>In article <eruk81$8qk_003(a)s965.apx1.sbo.ma.dialup.rcn.com>,
>>> <jmfbahciv(a)aol.com> wrote:
>>>[.....]
>>>>The wrinkle to the new process is that the checks have stopped
>>>>traveling. Instead you are trusting the payee to destroy the
>>>>piece of paper you sent to him;
>>>
>>>No, the bank at the other end defaces the checks it processes by marking
>>>them. The payee no longer has a check that is legally defect free so he
>>>can't cash it again.
>>
>>There are banking services that will accept the scanned image of
>>a personal check for deposits.
>
>This is likely a very different matter than the money leaving the account
>it was in. When I put money in my bank accound at the "electronic
>teller", I punch in the amount in the checks. The bank shows my balance
>increased by the amount I entered. They actually give two numbers. The
>first is the new balance the second is how much is "available". When I
>first started with the bank the available amount would only increase the
>next day after they've looked at the contents. These days the numbers are
>the same.

Do krw's suggestion and google "check 21". That's what is being
advertised here, and it will allow anybody to deposit via a scanner.
Retail and commercial stores are already doing this.

>>>> in addition, the bills
>>>>you pay now have fine print that says writing check to them
>>>>gives them permission to access your account.
>>>
>>>This is not true of any of the bills I checked the back of.
>>
>>Wait a while, then. All of my monthly bills now say this.
>
>I will take some sort of action if they start any nonsense like that with
>me.

I've been trying to tell you that there is no sort of action to take
other than the strategies I've been talking about in this thread
drift. I fired my American Express credit card because this is the
only way they process receipts. Since their check-handling section
was not doing things well, I fired them. I got another credit card
whose databases have never seen my financial key information. Until
another method of payment is developed, that's the way it is going to
stay.

Now this approach is not a viable one for people who work, because it
takes too much wallclock time to pay the credit card bill. I thought
about having two banks but that won't work in this area. Banks change
their names and merge and split, honoring the rules of musical chairs.

>
>[....]
>>>The Fed is attempting to make the process all electronic. I trust humans
>>>about as little as I trust computers so I don't see much of a change in
>>>security in this. Back when everything was on paper, someone could empty
>>>your account with a fraud. All that has happened is that the tools have
>>>changed a bit.
>>
>>Not only have the tools changed, but the speed of the transactions
>>are now in picoseconds and the number of transactions made has
>>increased enormously/minute.
>
>Those are issues of quantity not quality.

Exactly. Quality is out the window. [Blame my fingers for that one;
I didn't do it.]

>
>> In addition, no human is in the middle
>>of the process so there is nobody to notice if something goes wrong
>>and push the stop button.
>
>That person in the middle was more likely to make an error than prevent
>one,

The sole purpose of having that person in the middle was to slow the
process down. This was a good thing. Eliminating it has opened all
flavors of worm cans.

>
>
>>a lot of this identify theft in the news is possible because
>>no human needs to OK transactions. Banking is no longer local
>>and most of it now is impersonal.
>
>The identity theft crime has been going on from before when there were
>computers. The problem is that people allow important information about
>themselves to be stolen from obvious places.

Not any more. Eliminating the requirement of human interaction has
caused the rate of incidents to increase astronomically.

/BAH
From: nonsense on 1 Mar 2007 08:57
jmfbahciv(a)aol.com wrote:
> In article <epccu25dvaomn9ak8i5fmq0lks6prbbtuh(a)4ax.com>,
>    MassiveProng <MassiveProng(a)thebarattheendoftheuniverse.org> wrote:
>
> Aren't you out of vital bodily fluids yet?

This is what happens when you free the serfs.