From: BGB / cr88192 on

"Robert Redelmeier" <redelm(a)ev1.net.invalid> wrote in message
news:hpnfaf$qa7$1(a)speranza.aioe.org...
> In alt.lang.asm Nathan Baker <nathancbaker(a)gmail.com> wrote in part:
>> There was a time when, every few years, the CPU clock speed
>> seemed to be increasing geometrically...
>>
>> 1 MHz > 8 MHz > 25 MHz > 90 MHz > 133 MHz > 500 MHz
>>
>> But when they got into the GHz range, the trend stopped. There is
> no longer a "user-noticeable" increase in performance gained by
>> purchasing new hardware. This places the burden on software.
>
> Even outside of niches (video compression is still CPU-bound), this
> really is not true. The trend stopped mostly because it didn't pay.
> CPU time has dwindled to insignificance for common tasks, but other
> things (video rendering, disk seek, network response) have not.
>
> People can see significant improvements in some tasks
> with appropriate hardware upgrades like videocards and SSDs.
> MS-Windows can boot in 10-15s with the latter.
>
> As for fixing software, fast CPUs _reduce_ the cost of locally
> suboptimal code. What matters is high-level optimization.
>
> As an example, Firefox can be made to open much quicker if you
> disable automatic checking for updates and set it to open on a
> blank page. Perhaps Firefox should delay update checking.
> This makes a bigger difference than a 2.0 vs 2.5 GHz CPU speed.
>


this is partly why I don't worry that much that the way I usually use my
assembler is not the fastest possible...
the amount of code I would have to feed it before performance mattered is
"massive", and by that point I would likely have much bigger concerns.

there is also my metadata database, which I optimized mostly because
DB-loading was bogging down startup times; during normal running (or code
compilation) it is not a significant consumer of time (the compiler
generally pulls/sends info to/from its own internal representations).


most of the time still generally goes into whatever the app is doing,
rather than internal maintenance costs, so it typically matters more that
the runtime is lightweight than that the codegen tasks are "as fast as
possible".

although, granted, my C compiler is still a bit too slow to really be used
for some things I had originally imagined (producing C fragments at
run-time and eval'ing them...).


hence, I am changing my more recent strategy to using a "C-enabled"
script language (BGBScript) for the more eval-related things (if I can
access the C toplevel and typesystem without too much hassle, this is
looking good enough...).

the BGBScript route looks much more promising at the moment than the Java or
C# routes.

however, BGBScript still needs a bit more work here:
better integration with the external typesystems (both C and JVM-style);
more solidly defining the object and scoping models;
....

note that the language may end up being mixed-type (using both static and
dynamic type semantics).

var x; //dynamically typed variable
var i:int; //integer variable
var a:int[]; //integer array (likely Java/C# style).
var pi:int*; //pointer-to-integer

but, all this could lead to issues...

note:
x=i; //ok, 'i' converted to a fixint
x=a; //ok, JVM-style arrays convert fine
x=pi; //issue, may need to box pointer

i=x; //issue, needs typecheck
a=x; //issue, needs typecheck
pi=x; //issue, several possible issues

pi=a; //should work ok


in C#, doing stuff like the above needs casts, but a lot may depend on the
"policy" of the language (C# likes to be overly strict, and requires casts
for damn near everything).

BGBScript, being intended mostly as a scripting language, will probably be
far more lax and not balk unless there really is a problem (as in, the
operation either doesn't make sense or can't be safely completed).
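
for comparison, a rough sketch of those same assignments in Java (used here
only because the semantics above are described as Java/C#-style; this is not
BGBScript code, and the names are made up for illustration):

public class MixedTypingSketch {
    public static void main(String[] args) {
        Object x;          // plays the role of a dynamically typed variable
        int i = 42;
        int[] a = {1, 2, 3};

        x = i;             // ok: i is auto-boxed ("converted to a fixint")
        i = (Integer) x;   // needs an explicit cast: runtime typecheck + unbox

        x = a;             // ok: the array is already an object, nothing to box
        a = (int[]) x;     // again an explicit, checked downcast

        // there is no direct analog of the pointer case (pi) here; a raw
        // pointer would have to be wrapped ("boxed") in some object first
        System.out.println(i + " " + a.length);
    }
}

the lax-policy idea is basically to do the same runtime checks those casts
imply, just without making the programmer spell them out.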

unlike in C or C++, types may not be arbitrarily complex declarations.

all types will thus have a fixed form:
<modifiers> <typename> <specifiers>


another issue is whether Prototype OO or Class/Instance OO should be the
"default" model (either way, I am likely to support both models).

P-OO is likely to remain dynamically-typed (statically-typed P-OO is likely
to be more hassle than it is worth).

Class/Instance is likely to allow mixed-typing, and will use a Java/C# style
object model.
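
as a rough sketch of what mixed typing could look like in a class/instance
model (again a Java analogy rather than actual BGBScript syntax; the names
are made up):

public class MixedFieldsSketch {
    static class Point {
        int x, y;       // statically typed members: checked at compile time
        Object tag;     // dynamically typed slot: holds any value

        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        p.tag = "a label";   // fine: anything fits the dynamic slot
        p.tag = 42;          // also fine (auto-boxed)
        // p.x = "oops";     // rejected at compile time: static member
        System.out.println(p.x + "," + p.y + " tag=" + p.tag);
    }
}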

....


or such...



From: Maxim S. Shatskih on
> Python is not ONLY slow...

From what I know of Python, it is a junk language, worse than both Perl and PHP.

--
Maxim S. Shatskih
Windows DDK MVP
maxim(a)storagecraft.com
http://www.storagecraft.com

From: Nathan Baker on
"James Harris" <james.harris.1(a)googlemail.com> wrote in message
news:292e7385-9a2b-4acd-8b79-a8443fbfa2b9(a)x12g2000yqx.googlegroups.com...
> On 8 Apr, 22:48, "Nathan Baker" <nathancba...(a)gmail.com> wrote:
>
> Forgive me but from at least two perspectives that's an extraordinary
> viewpoint. I thought you might be referring to compilers and other
> development tools.

Oh, yeah, right. I *did* originally respond to a dev-tool issue, so why did
I introduce hardware into the picture??

Well, maybe it is a 'strawman', but it sure does seem to be a popular
'excuse/justification' for developers' choices on both sides of the aisle.
Examples:

o "Netbook/phone/gadget resources are restricted and are clocked slow,
therefore we are forced to develop using system languages!"
o "Modern desktops are *so* ahead of last decade's and are *so* resource
rich, we'd be fools not to develop using scripting languages!"

Well, software _is_ dependent on hardware, so I think it will forever be a
talking point.

> On topic for at least one of the crossposted groups (comp.lang.misc)
> we ought to be able to design a language that can exploit and benefit
> from the amazing hardware we programmers now have available.

Doesn't http://golang.org/ do that?? If not, in what areas is it lacking?

Nathan.


From: James Harris on
On 10 Apr, 03:17, "Maxim S. Shatskih" <ma...(a)storagecraft.com.no.spam>
wrote:

(As this is topical primarily only in comp.lang.misc I have removed
a.l.a and a.o.d from followups. Feel free to add back if you
disagree.)

> > Python is not ONLY slow...
>
> From what I know of Python, it is a junk language, worse than both Perl and PHP.

Any language has its good points and bad points. Whether Python is
good or bad probably depends on

1. What a programmer is trying to use it for. If we can't put up
wallpaper with a spanner, does that mean a spanner is useless? (An
extreme example, I know.)

2. Whether the philosophy of the language matches the programmer's
approach or way of thinking.

As Brendan points out, a lot of people like Python. To me it's great
for doing some things. It is *much* more readable than Perl, as in the
following.

http://codewiki.wikispaces.com/ip_checksum.py

It is about the smallest example I've put online. As you can see, in
some respects it's very C-ish.

James
From: Rod Pemberton on
"Robert Redelmeier" <redelm(a)ev1.net.invalid> wrote in message
news:hpnfaf$qa7$1(a)speranza.aioe.org...
> In alt.lang.asm Nathan Baker <nathancbaker(a)gmail.com> wrote in part:
> > There was a time when, every few years, the CPU clock speed
> > seemed to be increasing geometrically...
> >
> > 1 MHz > 8 MHz > 25 MHz > 90 MHz > 133 MHz > 500 MHz
> >
> > But when they got into the GHz range, the trend stopped. There is
> > no longer a "user-noticeable" increase in performance gained by
> > purchasing new hardware. This places the burden on software.
>
> Even outside of niches (video compression is still CPU-bound), this
> really is not true. The trend stopped mostly because it didn't pay.
> CPU time has dwindled to insignificance for common tasks, but other
> things (video rendering, disk seek, network response) have not.
>

Yes, the Amiga PC model, using a processor with multiple coprocessors, was a
brilliant advancement, wasn't it? It's too bad PCs are still struggling to
adopt the model...

I was going to ask NB when multi-core x86 production started. Wasn't it
right where the clock-speed doubling stopped? I.e., around 1 GHz?


Rod Pemberton