From: nodenose on

Armin Back schrieb:

> Hi ???
>
> > Is there a switch to paralyze the GC so that it behaves like in VO 2.6?
> > Can I apply the linker options to VO 2.6?
>
> did you try to set the initial available amount of dynamic memory in your
> start method to a greater value, for instance setting DynSize(100)?
>
> HTH
>
> Armin


Very big thanks to you! That makes me happy. I'm glad I don't have to
rewrite my complete code. Setting DynSize to 100 makes it faster, but
200 restores the desired speed from 2.5b. I've made a GC profile log
again and the difference is amazing.


0000 0.00000000 [152] GC Time: 0314ms LRun: 0032ms CCnt: 0020
0001 0.61927706 [152] GC Time: 0734ms LRun: 0032ms CCnt: 0040
0002 1.61726844 [152] GC Time: 1234ms LRun: 0032ms CCnt: 0060
0003 3.70779300 [152] GC Time: 1734ms LRun: 0032ms CCnt: 0080
0004 5.05891752 [152] GC Time: 2219ms LRun: 0032ms CCnt: 0100

This is for the same code and program flow. The direct relation between GC
activity and the amount of memory preallocated by DynSize makes it clear
that there is a problem / an impact that should be documented.


I think the problem is that the memory subsystem has to expand the
needed memory pages each time the app creates a new GC-managed
object. If that is the problem, it would be more efficient to give the
runtime a hint about how it should extend the MemSize rather than setting a
fixed value at start time. This would ensure a good payload for the GC,
because it would not need to re-manage the pages each time an object
is created or destroyed. Setting the DynSize at start time helps, but
what happens after two or four hours of uptime, when our customers load more
objects into the application? Preallocation is not the ultimate
solution, because our app is not clairvoyant ;)
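What such a growth hint would buy can be sketched outside VO. Here is a minimal Python sketch (the `expansions` helper and all numbers are invented for illustration, this is not the VO runtime) comparing a fixed one-page-at-a-time expansion with a geometric growth policy:

```python
# Sketch (not the VO runtime): count how many expensive heap
# expansions occur under two growth policies while the number of
# live objects keeps rising past the initial preallocation.

def expansions(total_pages_needed, initial_pages, grow):
    """grow(capacity) returns the new capacity after one expansion."""
    capacity = initial_pages
    count = 0
    while capacity < total_pages_needed:
        capacity = grow(capacity)
        count += 1
    return count

# Fixed increment: every expansion adds one page (worst case).
fixed = expansions(10_000, 200, lambda c: c + 1)

# Geometric hint: every expansion grows the heap by 50%.
geometric = expansions(10_000, 200, lambda c: c + c // 2)

print(fixed)      # 9800 expansions
print(geometric)  # 10 expansions: O(log n) instead of O(n)
```

The point is only the shape of the curve: with a growth-factor hint, the number of expensive reallocations grows logarithmically with the number of objects, so a long-running app would not depend on guessing the right DynSize up front.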


Maybe there is a hidden option for that....

From: nodenose on
Geoff schrieb:

> Hi Nameless.
>
> Aren't you taking this the wrong way?
> We're here to offer you help - you asked, remember?
>
> You can write apps in many different ways and by the sound of things,
> you rely on a lot of dynamic object generation. Let me repeat my main
> comment: most people do not experience a problem with GC performance
> impacting on their applications.
>
> All I'm asking is that you show us some of that code style where
> profiling indicates high GC loads and I am certain we could offer you
> different ways to write your code. In many respects, changes in the
> compiler in 2.7 forced the coder to adapt to better programming
> practices. That such adaptation causes some pain is not unexpected.
> We're here to assist if you will let us.
>
> > about that. I can not understand how you try to downplay the fact, that
> > the same code is 100% slower than under Vo 2.6. And that the GC needs
> > more than 50% of the entire processing time of our app.
>
> I'm not. But you also have to accept that this is extremely unusual.
> This means you should accept the fact that you are doing something
> different.
>
> > I don't like those VO-Thread with hundreds of Post saying "hey its you
> > fault. rewrite your code." I don't whant another unproductive VO-flame
>
> But no-one is starting one! Equally you need to consider the possibility
> that there might be a better way to write certain parts of your
> application. No-one is suggesting everything. Why shoot us first before
> giving us a try?
>
> Geoff


First let me introduce myself: my name is Georg Tsakumagos.

OK, OK, maybe I am a little bit too aggressive, but I want to prevent such
mega-threads. You should know how fast these threads drift away
from the original problem. But anyway, I can only give a small hint of
the programming style we use. The code we talk about is mixed: some OOP
and some really old code which is written using AOP.

This means Array Oriented Programming.



But anyway, here is a simple overview:

I will describe our DB managing tool; this gives a small hint. Our
DB access layer builds objects from the SQL result sets. For this we
need some meta-info for about 60 tables, which holds the complete
description of each column. So after we have created the meta-info and read
some config, we make the connection to the DB. After that we build up a
tree with the most important ODBC options. We need this because we
support all major SQL databases (MySQL, Sybase, Oracle, SQL Server,
Informix...), even in different versions. To achieve this we have to
tweak some options. Especially SQL Server needs some extra options
to support concurrent reads and updates. After setting up the
DB connection we test the table structure. For this we have to read the
structure from the DB and compare it with the internal meta-info. After that
we load our authorisation system. This system is similar to an
ACL system. A user can have functional rights (e.g. deleting this,
editing that) or rights on objects. Our app is a document management
system. The government has file structures with over 8000 files. The
admin can (but should not) give each user the right for every file. So
ordinarily, after the user has logged on, we have 600 objects (without
VOGui) in memory. Some of them have a bigger, some a smaller memory
footprint.

The worst code I found is AOP code. Some parts of our framework
use this code style. Here is a worst-case coding example:


wCnt1  := ALen( aInfo )
aNames := ArrayCreate( wCnt1 )
for i := 1 upto wCnt1
    aTemp := aInfo[i]
    cCol  := AllTrim( Upper( aTemp[1] ) )
    wIdx  := AScanExact( aDBModelCol, cCol )
    if wIdx == 0
        symaccess := NULL_SYMBOL
        oDBAspect := NULL_OBJECT
    else
        symaccess := aDBModelNames[ wIdx ]
        oDBAspect := oDBModelSpec:GetAspectForAccess( symaccess )
    endif
    oNode := TOOLsDBMSColumnNode{ aTemp, aVOInfo, oDBAspect, ;
        TOOLsDBMS_Column_DBDefinition }
    self:oColumns:AddItem( oNode )
    oNode:_SetOwner( self )
    aNames[i] := AllTrim( cCol )
next


So the only thing I can optimize is how the objects are created: I
cannot change the amount of objects that will be created. And as you can
see in this thread, the main reason for the speed impact in 2.7 is the
different behavior of the memory subsystem. I won't say this is a bug,
but I think that my guess (see my other post) about the memory subsystem is
right. And if it is right, GrafX should provide a solution which
makes everyone happy.

This is not impossible.

From: Johan Nel on
Hi Georg,

From what I see, in your code you basically read a whole structure into
memory that is maybe not actually used during the session. I also had
a similar approach, back in the old Clipper days, and found a lot of
problems with it regarding memory etc. In the end I changed to a "just in
time" approach to solve it, which sped up the application; although
there was a small speed overhead with each process, it
was not really a concern from the user perspective.

I am not sure if it would help, but do you really need all of those 600
objects in memory? I would rather go for an approach where I load only
the objects necessary, "just in time", and maybe then keep them in
memory in case the user wants to go back to one of them. That will give
you a smaller memory footprint, better overall system performance and
hopefully also less GC overhead.

Not sure if this will help or if we are talking the same language, but
that is my two cents' worth.

Johan Nel
Pretoria, South Africa.

> First let me introduce. My name is Georg Tsakumagos.

> This means Array Oriented Programming
> db-accesslayer is building objects from the SQL-Resultsets. For this we
> need some meta-info for about 60 tables. This holds the complete
> description for each column. So after we created the metainfo and read
> some config we make the connection to the db. After that we buid up an
[snip]
> ordinary after the user has loged on were have 600 Objects (without
> VOGui) in memory. Some of them have Bigger some are smaler memory
> footprint.
[snip]
> I cannot change the amout of objects that will be created. And as you can
> see in this thread the main reason for the speed impact in 2.7 is the
> different behavior of the memory subsystem. I wont say this is a bug.
From: Geoff on
Hello Georg.

Firstly, thank you for the extended explanation. Your process sounds
very interesting and I find it instructive to see what other people do
with code, especially VO. It can often lead to parallel ideas.

Now, my thinking follows that of Johan. I would rather create objects
and often, but create and release as necessary, rather than build a
dictionary of objects up front. That requires application redesign and I
suspect you aren't prepared to consider this.

So, VO does appear to have introduced a performance hit since 2.7 for your
application type, but I still think there are things you can do. You
might try to get hold of Robert's VO Voodoo papers because they go over
various techniques to help the app with the GC. Perhaps there are ways
to cut down collections or freeze dyn mem etc.

Geoff


From: nodenose on
Geoff schrieb:

> Hello Georg.
>
> Firstly, thank you for the extended explanation. Your process sounds
> very interesting and I find it instructive to see what other people do
> with code, especially VO. It can often lead to parallel ideas.
>
> Now, my thinking follows that of Johan. I would rather create objects
> and often, but create and release as necessary, rather than build a
> dictionary of objects up front. That requires application redesign and I
> suspect you aren't prepared to consider this.
>
> So, VO does appear to have introduced a performance hit since 2.7 for your
> application type, but I still think there are things you can do. You
> might try to get hold of Robert's VO Voodoo papers because they go over
> various techniques to help the app with the GC. Perhaps there are ways
> to cut down collections or freeze dyn mem etc.
>
> Geoff


Hello,

we do a lot of optimisation in our program, and believe me that I know
what we can optimize. But the problem is that our VO will run out in a
few years, some sooner, some later. It is not economic if I rewrite that
code to get the golden code award.

Keep in mind that our app is a DM system. Some things have to be
cached to achieve acceptable performance. We do a lot in this
direction; we even filter as much as we can on the DB. The other big thing is our
cache for the business objects. We have implemented something like a
singleton pattern for them, so we can be absolutely sure that for each
DB record there is only one business object in memory. And the best is that
this works together with the GC: if we no longer need an object, it gets
collected by the GC like other objects. If we load that object again, we can
detect whether it is still in memory or not.

But beware: the problem is not the amount of objects in memory. As you
can see in the logs, the GC is still fast even if you have a lot of them. The
problem is the frequency with which it is called, and as you can see, this is
directly influenced by the amount of preallocated pages.

I would be happy if someone from the VODEV team could confirm this guess.
If this is the real problem, we can find a way to advance VO in that
aspect. This type of problem is not new in computer science. If you
take a closer look at your database server, you can find the same
problem: when a tablespace has to be extended, the DBMS reallocates more
space than needed, and this extent size can be configured. This is good because
customer A has only a small DB and needs only one megabyte each year,
while customer B needs one megabyte each hour. So in this example it
would be smarter for admin B to extend the DB in greater quantities to
reduce the expensive expansions.
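The trade-off in that analogy is easy to put into numbers. A small Python sketch (the extent sizes are invented for illustration):

```python
# Rough numbers for the two customers in the analogy above (all
# values hypothetical): how many expensive tablespace extensions
# happen per year, depending on the configured extent size.

def extensions_per_year(mb_per_year, extent_mb):
    # Ceiling division: even a partially filled extent costs one extension.
    return -(-mb_per_year // extent_mb)

for name, need in [("customer A", 1), ("customer B", 24 * 365)]:
    for extent_mb in (1, 64):
        print(name, need, "MB/year with", extent_mb, "MB extents ->",
              extensions_per_year(need, extent_mb), "extensions")
```

For customer B (8760 MB/year), 1 MB extents mean 8760 extensions a year, while 64 MB extents cut that to 137; for customer A either setting costs a single extension, so the small extent wastes nothing. That is exactly the kind of tunable the VO memory subsystem could expose.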


best regards...