From: jmfbahciv on
In article <er1sis$rie$3(a)blue.rahul.net>,
kensmith(a)green.rahul.net (Ken Smith) wrote:
>In article <er1jv3$8qk_001(a)s806.apx1.sbo.ma.dialup.rcn.com>,
> <jmfbahciv(a)aol.com> wrote:
>>In article <eqvacf$mq6$3(a)blue.rahul.net>,
>> kensmith(a)green.rahul.net (Ken Smith) wrote:
>>>In article <equso2$8ss_002(a)s884.apx1.sbo.ma.dialup.rcn.com>,
>>> <jmfbahciv(a)aol.com> wrote:
>>>[.....]
>>>>So am I. PCs should have never implied one task running
>>>>at a time nor a single disk pathway.
>>>
>>>In days gone by, you could get channel controllers (a la IBM 360) for the ISA
>>>bus. With a little programming skill, you could make things like
>>>transferring files happen without help from the main CPU.
>>
>>The CPU should have had to manhandle I/O transfers other than
>>telling the device to go and where to put its done interrupt.
>
>I assume you intended a "not" in there somewhere.

Yes. Thank you! I still have not figured out how to compensate
for this seeing/typing glitch of mine.

>
>On the 360, you could direct the channel to "go get the record for Mr.
>Jones".

Sure. That was IBM; their primary tradeoff was generally to favor
anything that dealt with humungous amounts of data. They did data
processing very, very, very well. You could code our OS to also
do this. However, there were lots of dangers doing direct I/O
(that's what we called it). It was "smarter" to allow the monitor
(the piece of OS code that was always in core) to handle the
I/O traffic; we called this buffered mode I/O and it allowed
the user to continue to do I/O even though the device was not
keeping up. (This is the best I can write without giving you
a tutorial in our user-mode coding.)
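
Roughly, the flavor of it in C terms (a toy sketch with made-up names
and sizes, nowhere near our real monitor code) is a ring buffer the
user side fills while the monitor side drains it to the device:

/* Toy sketch of buffered-mode output: the user program stuffs bytes
 * into a ring buffer and keeps computing; the monitor's interrupt-time
 * code (faked here as an ordinary function) feeds the slow device
 * whenever it is ready.  All names and sizes are invented. */
#include <stdio.h>

#define BUFSIZE 64

static unsigned char ring[BUFSIZE];
static int head = 0;                     /* next free slot (user side) */
static int tail = 0;                     /* next byte to send (monitor side) */

static int buffered_putc(unsigned char c)  /* what the user program calls */
{
    int next = (head + 1) % BUFSIZE;
    if (next == tail)
        return -1;                       /* buffer full: only now must the user wait */
    ring[head] = c;
    head = next;
    return 0;                            /* otherwise the user keeps right on computing */
}

static void device_drain(void)           /* stands in for the interrupt-time monitor code */
{
    while (tail != head) {
        putchar(ring[tail]);             /* the "device" accepts one byte */
        tail = (tail + 1) % BUFSIZE;
    }
}

int main(void)
{
    const char *msg = "the user runs on while the device catches up\n";
    for (const char *p = msg; *p; p++)
        buffered_putc((unsigned char)*p);
    device_drain();                      /* in real life this happens on device interrupts */
    return 0;
}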

>
>>
>>> I expect these
>>>things will reappear as more demanding applications get written for Linux.
>>
>>NT should have been able to do this for thousands but it got borked.
>
>NT was written in the first place for a processor that didn't do
>interrupts well.

Nuts. If the hardware doesn't do it, then you can make the software
do it. As TW used to say, "A small matter of programming".

> The N10 AKA 860 processor had to spill its entire
>pipeline when interrupted. This slowed things down a lot when the code
>involved interrupts. When the project was moved back to the X86 world, it
>was marketed as secure ... well sort of .... well kind of .... it's better
>than 98. I don't think a lot of time was spent on improving the interrupt
>performance.

You are confusing delivery of computing services by software with
>delivery of computing services by hardware.
>
>
>>I was talking about simple things like being able to print a file,
>>download yesterday's newsgroups posts, build an EXE while the PC
>>user played a session of Pong! while waiting for everything to
>>finish. This all should have happened on one machine without
>>interfering with each other.
>
>Linux does ok at this.

Of course it does. However, no Unix is "PC-user friendly". I'm
trying to work on this problem but I've been getting side-tracked.

> Right now I also have LTSpice running on another
>desktop. I'm typing this while it figures.

My point is that you should not have to have another computer _system_
to do any other task. It is possible to have all tasks done for
you on that one system without any of them interfering with the others.
The only time there should ever be discernible interference is when
a task requires [what we used to call] real-time computing. Other
than instrumentation, there usually isn't any computing task that
has to have the CPU pay attention to it *right now*.

>
>
>>For some strange reason, MS products can't chew gum and salivate
>>at the same time; it appears that they think this is a feature.
>
>The DOS mind set was to only do one thing at a time.

That is OK but it should never display the monitor prompt until
it's done with each task. You do not lie to your user! EVER!!!!

>Some bits of later
>versions looked like multitasking was intended but abandoned. Even very
>later versions save registers into code space instead of onto the stack.

That [saving regs in code space] has nothing to do with anything. Their
problem was never understanding how to do memory management *and*
buffered mode I/O; as a side effect, the OS never learned how
to honor directory partitioning of stored bits.
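
For anyone who hasn't seen the trick Ken is describing: "saving
registers into code space" means stashing state in a fixed word next
to the code instead of pushing it onto the stack, so the routine
cannot safely be re-entered. A toy C illustration (recursion standing
in for interrupt-time re-entry; invented example, not real DOS code):

/* sum 0..n two ways; invented example, not real DOS code */
#include <stdio.h>

static int saved_n;              /* one fixed save slot shared by every activation */

int sum_fixed(int n)             /* breaks when re-entered: deeper calls clobber saved_n */
{
    saved_n = n;
    if (saved_n == 0)
        return 0;
    int rest = sum_fixed(saved_n - 1);   /* this call overwrites saved_n */
    return saved_n + rest;               /* saved_n no longer holds this call's n */
}

int sum_stack(int n)             /* n lives on the stack, so each activation keeps its own */
{
    if (n == 0)
        return 0;
    return n + sum_stack(n - 1);
}

int main(void)
{
    /* prints "fixed: 0  stack: 15" -- the fixed-slot version gets it wrong */
    printf("fixed: %d  stack: %d\n", sum_fixed(5), sum_stack(5));
    return 0;
}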

We had a DOS that was able to do this. Not teaching MS-DOS
about reasonable data moving and storage was a sin. It has
held back the evolution of the computing biz by at least two decades.
My estimates are always off so it could even be three decades.

/BAH


From: jmfbahciv on
In article <er1rfs$rie$2(a)blue.rahul.net>,
kensmith(a)green.rahul.net (Ken Smith) wrote:
>In article <er1k4l$8qk_002(a)s806.apx1.sbo.ma.dialup.rcn.com>,
> <jmfbahciv(a)aol.com> wrote:
>>In article <jff7t2160tu1ll3bb1k7t0k5beohqsv3ti(a)4ax.com>,
>> MassiveProng <MassiveProng(a)thebarattheendoftheuniverse.org> wrote:
>>>On Wed, 14 Feb 2007 15:37:51 +0000 (UTC), kensmith(a)green.rahul.net
>>>(Ken Smith) Gave us:
>>>
>>>>In article <equso2$8ss_002(a)s884.apx1.sbo.ma.dialup.rcn.com>,
>>>> <jmfbahciv(a)aol.com> wrote:
>>>>[.....]
>>>>>So am I. PCs should have never implied one task running
>>>>>at a time nor a single disk pathway.
>>>>
>>>>In days gone by, you could get channel controllers (a la IBM 360) for the ISA
>>>>bus. With a little programming skill, you could make things like
>>>>transferring files happen without help from the main CPU. I expect these
>>>>things will reappear as more demanding applications get written for Linux.
>>>>
>>>
>>>
>>> You guys are both idiots.
>>>
>>>http://tekmicro.com/products/product.cfm?id=57&gid=1
>>>
>>>http://tekmicro.com/products/product.cfm?id=13&gid=1
>>>
>>>
>>> The world has left you behind.
>>
>>In some ways, the world hasn't caught up with us. It's going
>>to take another five years, I think, before the OS biz' main
>>distributions get as agile as ours was in 1980. It's been
>>almost 3 decades to reinvent the wheel.
>
>I know you can't view the web. The links point to some cute devices that
>are nothing like the channel controllers on the IBM360. One does wonder
>why MissingProng would have included them in his post.

Hardware is not my area of expertise. Our computing biz had started
to manufacture "smart" controllers in the early 80s. I still have
mixed feelings about offloading that into the controllers. However,
the fad was distributed processing at the time and that was one
way to distribute it.

>
>It seems that Intel doesn't make the 8089 anymore. Something that evolved
>from it could be a good thing to have in the modern computers.

Be careful here. You have a unique view of what hardware should do
for you; krw has a very different view of what the hardware should
do. There are a thousand other bit gods who each have a different
view. Part of the job of designing hardware, which is going to
be available to many types of users, is to not preclude any of those
bit gods' opinions.

> Perhaps it
>will be multi-CPU machines that will be the next breakthrough in general
>purpose computing.

That's already been done. That's not a breakthrough.

> There are already some machines for special purposes
>that have 32K processors in them.

What would be the best thing for the computer system biz right now
is for the hardware to hit the brick wall. Then resources will
be available for decent coding work. At the moment any slop
is acceptable because there is enough hardware capacity to handle
it.
>
>I think that Linux has reached the point where it is good enough for all
>practical purposes.

Nope. It is not a product (in the sense that we called things
a product). It is still a toy; it has a little bit more growing
up to do.


> Chances are a breakthrough in the hardware will be
>the next big thing.

GAWD. I HOPE NOT. All we've been having is hardware
breakthroughs, and the direct effect has been coding slop.

> As OSes go it is fairly good. Unlike Windows, it can
>keep up with a 19200 baud serial stream.

Oh, jeez. You are too impressed with small potatoes. It should
be keeping up with 1000 19200 baud serial streams.

/BAH
From: jmfbahciv on
In article <be273$45d50d69$49ecf9d$20196(a)DIALUPUSA.NET>,
"nonsense(a)unsettled.com" <nonsense(a)unsettled.com> wrote:
>MassiveProng wrote:
>> On Thu, 15 Feb 07 12:36:37 GMT, jmfbahciv(a)aol.com Gave us:
>>
>>
>>>In some ways, the world hasn't caught up with us. It's going
>>>to take another five years, I think, before the OS biz' main
>>>distributions get as agile as ours was in 1980. It's been
>>>almost 3 decades to reinvent the wheel.
>>
>>
>>
>> Bwuahahahahahahahahahahahahahahahah!
>>
>>
>> That IS funny! You should do geek stand up!
>>
>> There was no such thing as Gigabit per second sampling back then.
>>
>> Compared to today's chips, a 5 volt TTL chip takes a year to reach
>> logic level 1.
>>
>> The word (phrase, term, etc.) for today is:
>>
>> "Slew Rate"
>>
>> Try again, honey, you are at 70dB down. You need to boost the gain
>> a bit. Too much bit error rate...
>>
>> Oh... that's right... it's not the carrier or the packets, its the
>> data.
>>
>> Bad data is bad data. One cannot clean up what is errant from the
>> start.
>
>None of this has anything to do with the OS biz.
>
>As usual, you redefine the discussion to suit yourself.

I don't think he is redefining it; I think he believes he's
talking about the same thing. He keeps reminding me of the
last tech I finally had to resort to beating up in order
to get him to understand what was going to happen.
I don't think that guy knows to this day why his way was exactly
the wrong way.

/BAH


From: jmfbahciv on
In article <er33t7$8jq$1(a)blue.rahul.net>,
kensmith(a)green.rahul.net (Ken Smith) wrote:
>In article <be273$45d50d69$49ecf9d$20196(a)DIALUPUSA.NET>,
>nonsense(a)unsettled.com <nonsense(a)unsettled.com> wrote:
>[.....]
>>None of this has anything to do with the OS biz.
>
>
>We just had another wonderful experience with XP. Characters pumped into
>the serial port may take up to 5 seconds before a DOS application running
>under XP gets to see them.

I would expect that. Why aren't you expecting that?

> Most of them eventually come through.
>
>Tomorrow, we may try it with "dosemu" to see how well that works.

<shrug> Change the threshold number that causes the DOS emulator
to hand over bits. There's gotta be one.

/BAH

From: jasen on
On 2007-02-15, Ken Smith <kensmith(a)green.rahul.net> wrote:

> The DOS mind set was to only do one thing at a time. Some bits of later
> versions looked like multitasking was intended but abandoned. Even very
> later versions save registers into code space instead of onto the stack.

I read that there was a multitasking DOS released by Microsoft in
Europe, and then there's DESQview, and I think Digital Research had
a go at a multitasking DOS too.

I played with something called multidos (I think); it was shareware or
freeware and faked multitasking somehow.





Bye.
Jasen