From: Ken Smith on 23 Feb 2007 09:22

In article <ermofh$8qk_003(a)s774.apx1.sbo.ma.dialup.rcn.com>, <jmfbahciv(a)aol.com> wrote:
>In article <er4i05$1ln$7(a)blue.rahul.net>,
> kensmith(a)green.rahul.net (Ken Smith) wrote:
>>In article <er47qv$8qk_001(a)s897.apx1.sbo.ma.dialup.rcn.com>,
>> <jmfbahciv(a)aol.com> wrote:
>>[.....]
>>>>NT was written in the first place for a processor that didn't do
>>>>interrupts well.
>>>
>>>Nuts. If the hardware doesn't do it, then you can make the software
>>>do it. As TW used to say, "A small matter of programming".
>>
>>On the N10 there was no way to code around it. The hardware was designed
>>so that it had to walk to the breakroom and back before it picked up the
>>phone. Nothing you could say over the phone would help.
>>
>>>>The N10, AKA the 860 processor, had to spill its entire
>>>>pipeline when interrupted. This slowed things down a lot when the code
>>>>involved interrupts. When the project was moved back to the x86 world,
>>>>it was marketed as secure ... well, sort of ... well, kind of ... it's
>>>>better than 98. I don't think a lot of time was spent on improving the
>>>>interrupt performance.
>>>
>>>You are confusing delivery of computing services by software with
>>>delivery of computing services by hardware.
>>
>>No, hardware sets the upper limit on what software can do.
>
>That all depends on who is doing the coding.

If a CPU chip needs an hour to do an add instruction, you can't make it
go faster by anything you code. Like I said, it sets the upper limit on
the performance.

><snip>
>
>/BAH

--
kensmith(a)rahul.net  forging knowledge
From: Ken Smith on 23 Feb 2007 09:34

In article <ermmhd$8qk_001(a)s774.apx1.sbo.ma.dialup.rcn.com>, <jmfbahciv(a)aol.com> wrote:
>In article <erhn0i$em5$4(a)blue.rahul.net>,
> kensmith(a)green.rahul.net (Ken Smith) wrote:
[.....]
>>Sure you can. If the computer is running a printer server, you can
>>predict the right order for the files to be read by the server. If there
>>is a task constantly running to take sliding billion-point FFTs, you know
>>what is best for the FFT part. Just because the human may change
>>something, it doesn't mean they change everything.
>
>All of this is single-task, single-user thinking.

No, it isn't. It is a practical example of the way real multiuser
systems actually run. It is very common for a small fraction of the
tasks to need the large fraction of the memory. This is just the way
it is in the real world. The computer that is doing the work of
posting this is a multiuser machine. It has me on it using only
several tens of kilobytes. There are a couple of other users like me
and a couple that are doing something that takes a lot of RAM.

> Computer usage
>by the general population requires more than this. You keep
>looking at the load balance from a naive user POV.

No, you are just making stuff up because you've been shown to be wrong
about the real world of computers today.

> My biz
>was timesharing from the OS' POV.

I'm using a company that sells computer time like a timesharing
company. They also sell modem access and data storage. This is the
modern business model for this sort of company.

>>No, I'm thinking of the case where something very difficult needs to be
>>done with a PC. While it is doing it, the best rules for swapping are
>>known.
>
>Again, I think you are confused about swapping. The OS only needs
>to swap if it has to temporarily delete contents of memory whose
>bit settings have to be restored exactly as they were.

You only think that because you haven't stopped to think about what I
wrote. We were discussing the case where swapping had to happen.
There is no point in asking at this stage whether it needs to happen,
because the argument has already shown that it must. There is more
data to be maintained than will fit into the RAM of the machine.
There is no question about swapping being needed. The discussion is
about the advantages of having the code specific to the large memory
usage make informed choices about swapping.
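Here is roughly what "informed choices" looks like in modern terms.
This is a minimal sketch, assuming Linux and Python 3.8 or later
(where mmap.madvise is exposed); the buffer size is hypothetical. The
idea is the same as with the FFT task above: the code that owns the
big memory tells the OS what it knows about its own access pattern
instead of leaving paging to a generic heuristic.

import mmap

SIZE = 256 * 1024 * 1024  # hypothetical 256 MiB working buffer

# Anonymous mapping standing in for the big task's buffer.
buf = mmap.mmap(-1, SIZE)

# The task knows it sweeps the buffer front to back, so tell the
# kernel: read ahead aggressively, evict pages behind the sweep.
buf.madvise(mmap.MADV_SEQUENTIAL)

# ... process the buffer in order ...

# The first half is finished with; its contents no longer matter,
# so the kernel can free those pages instead of swapping them out.
buf.madvise(mmap.MADV_DONTNEED, 0, SIZE // 2)

--
kensmith(a)rahul.net  forging knowledge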
From: Ken Smith on 23 Feb 2007 09:47

In article <ermlh8$8ss_012(a)s774.apx1.sbo.ma.dialup.rcn.com>, <jmfbahciv(a)aol.com> wrote:
>In article <erhnfn$em5$5(a)blue.rahul.net>,
> kensmith(a)green.rahul.net (Ken Smith) wrote:
>>In article <erhd4t$8qk_001(a)s916.apx1.sbo.ma.dialup.rcn.com>,
>> <jmfbahciv(a)aol.com> wrote:
>>>In article <i70nt25k4ubuvllr029cun9ebu1e1bng0a(a)4ax.com>,
>>> MassiveProng <MassiveProng(a)thebarattheendoftheuniverse.org> wrote:
>>>>On Mon, 19 Feb 07 13:29:06 GMT, jmfbahciv(a)aol.com Gave us:
>>>>
>>>>>Not at all. OSes were handling the above problems in the 60s.
>>>>>The reason virtual memory was invented was to solve the above
>>>>>problem.
>>>>
>>>>The swapping, in this case, CAUSES the interference.
>>>
>>>That has to do with severe memory management problems.
>>
>>No, it is just part of the overhead of doing VM.
>
>Nope. VM doesn't have to swap. Swapping is done to make
>room so a memory request by another task can be serviced.

You said "nope" and then confirmed that my statement was correct.
Swapping needs to happen if you need more virtual RAM than there is
real RAM. To be able to swap, the code for doing the swapping must
always be in the real RAM. As a result, there is code overhead in
having a VM system. There is also a speed overhead when swapping
happens. The OS uses some amount of CPU time on the task switching
needed to make VM do the swap.

>I don't know why, but people often confuse virtual memory
>addressing with swapping. The two are separate.

No, you are confused on the issue of needing to swap. We have already
discussed why the swapping is needed. In the case we are talking
about, there is more that needs to be in memory than there is physical
RAM.

>> If the speed of
>>results is important, it is better not to have to swap. This means that
>>doing the tasks one after the other is the way to go.
>
>You still have swap if a task is too big for the computer's resources.
>Single-tasking doesn't prevent swapping. It prevents context
>switching, but not swapping. You are confusing context switching
>with swapping.

No, I am not. The case under discussion was quite specific. There
were two large tasks which individually would fit into physical memory
but which added up to more than would fit. If the two were loaded at
the same time, the results would come later than if you ran them one
after the other.

>> Back in the day,
>>the IBM 370 would "roll out" a program so that it was put on hold while
>>something else was done.
>
>That had to do with resources such as magtape drives, diskpacks,
>etc.

No, it was mostly about RAM. The IBM 360 could only address 16M.
Running a copy of MVS couldn't get around this hard limit. If there
was not enough address space to fit all the code or data, something
had to be rolled out and processed later.

[....]

>>Windows allocates memory on a least-fit basis. This tends to leave a lot
>>of small holes in the memory space. Unfortunately, on x86 machines the
>>memory management doesn't do address translation on a finer grain than
>>the segment size. This leads to a lot of fluff in memory space.
>>
>>Windows doesn't assume that garbage collection is needed, nor does it
>>have memory compacting.
>
>By memory compacting, are you talking about shuffling?

No. In a page-based MMU, you don't have this issue. When you only
have by-segment memory management, you end up with gaps in the memory
space. Compacting is a method for gathering the smaller gaps together
to make bigger free blocks. It requires that the data be moved.
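To make the distinction concrete, here is a toy sketch in Python of
what compaction has to do in a purely segment-based scheme. It is an
illustration only, not any real OS's allocator. The point is the data
copy in the middle: a paging MMU never needs it, because address
translation hides the physical gaps.

# Segments are dicts with 'base' and 'size' in a flat address space;
# 'memory' is a bytearray standing in for physical storage.
def compact(segments, memory):
    """Slide live segments toward address 0, moving their data,
    so the free gaps merge into one big block at the top."""
    next_base = 0
    for seg in sorted(segments, key=lambda s: s['base']):
        if seg['base'] != next_base:
            # The data really has to move -- this is the cost of
            # compaction that paging avoids.
            memory[next_base:next_base + seg['size']] = \
                memory[seg['base']:seg['base'] + seg['size']]
            seg['base'] = next_base
        next_base += seg['size']
    return next_base  # first free address after compaction

mem = bytearray(100)
segs = [{'base': 0, 'size': 10},
        {'base': 30, 'size': 20},
        {'base': 70, 'size': 15}]
print(compact(segs, mem))  # 45: one free block from 45 to the top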
--
kensmith(a)rahul.net  forging knowledge
From: Ken Smith on 23 Feb 2007 09:50

In article <ermmos$8qk_002(a)s774.apx1.sbo.ma.dialup.rcn.com>, <jmfbahciv(a)aol.com> wrote:
[....]
>>>>I use electronic banking. I go to the bank's web site and do it. It is
>>>>just another "surfing the web" case. I don't have any special software
>>>>to do it. I am far from the normal user, but even I didn't add anything
>>>>beyond the web browser to do my banking.
>>>
>>>Since you have already converted to on-line banking, why are
>>>you disputing my statements about it?
>>
>>I am disputing your incorrect statements.
>
>You cannot know what is incorrect because you've already been
>herded into doing online banking.

You are completely off your nut on this.

--
kensmith(a)rahul.net  forging knowledge
From: Ken Smith on 23 Feb 2007 09:59
In article <ermm1f$8qk_001(a)s774.apx1.sbo.ma.dialup.rcn.com>, <jmfbahciv(a)aol.com> wrote:
>In article <era3ti$tvp$6(a)blue.rahul.net>,
> kensmith(a)green.rahul.net (Ken Smith) wrote:
[.....]
>>> The problem is that the software side
>>>of the biz is dragging its heels and the hardware types are
>>>getting away with not including assists to the OS guys.
>>
>>Most hardware guys have to design hardware to run with Windows.
>
>Sigh! Windows' OS has a grandfather that required those hardware
>assists. Your point is making an excuse for lousy design
>requirements.

No, I am pointing out what has really happened. Windows was written
to run on the usual DOS hardware. Gradually, features got added until
it needed more stuff than DOS. If I designed a 4096-by-4096 display,
I wouldn't sell very many unless I could make a Windows driver for it.

[.....]

>>Even just suggesting that there be true backups of people's machines
>>throws them into a panic.
>
>Good. That's why you should just say the words. That will have
>a ripple effect throughout the industry.

No, I need my computer to work and to be backed up. I don't give the
hindmost 3/4 of a rat what happens to the average Windows user's data.

>> "Imagine an evil person gets to the PC, deletes all
>>the files of that user and reformats the hard disk on the machine. How
>>long would it take to put it all back as a working system?" has been the
>>question I have asked.
>
>Instead of saying evil person, just say lightning strike or power
>surge or blizzard/hurricane when everything shuts down for 2 weeks.

That is a lot less damage than an evil person can cause. Backing up
by storing on two computers will serve to protect against lightning.

>>>Nope. That won't happen. As soon as you get a procedure and
>>>hard/software/humans in place, MS will change something that
>>>will break it. I've never been able to figure out an independent
>>>way no matter what they did.
>>
>>On just a single PC it is quite easy.
>
>No, it is not. The way files, directories, structures and, MOST
>importantly, data have been organized makes it almost impossible
>to manage files on a PC.

We are talking about a backup. You can just copy the entire hard disk
when it is a single PC.

>> Doing the backup of what's on the
>>server is hard. On a single PC, you boot something other than Windows
>>and make a bitwise image of the hard disk. When things break, you go
>>back to that other OS and restore the disk as it was. If you have been
>>good about backing up your data files, you don't need to do a full image
>>every day.
>
>That's called an incremental backup. Great care needs to be taken
>to combine full and incremental backups.

No great amount of care is needed. I've done that sort of restore a
few times with no great trouble. Since files are stored with their
modification dates, a copy command that checks dates does the hard
part.

>
>/BAH
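For what it's worth, the date-checking copy is only a few lines. Here
is a minimal sketch in Python; the paths are hypothetical, and a real
tool like rsync does the same job with more care, but it shows how
little is needed:

import os
import shutil

def incremental_copy(src_root, dst_root):
    """Copy only files that are missing from the backup or newer
    than the backed-up copy -- the 'copy command that checks
    dates' doing the hard part of an incremental backup."""
    for dirpath, _dirs, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        dst_dir = os.path.join(dst_root, rel)
        os.makedirs(dst_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_dir, name)
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)):
                shutil.copy2(src, dst)  # copy2 keeps the timestamps

incremental_copy('/home/user/data', '/mnt/backup/data')

--
kensmith(a)rahul.net  forging knowledge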