From: Steve Howard on
On Nov 19, 1:14 am, raja <dextersu...(a)gmail.com> wrote:
>
> Steve Howard,
> May I know how you calculated the numbers below ...
> "It looks like you had an average of 30% USER CPU for the duration of
> the snap (92,000 seconds of USER CPU, with 14 CPU's over six hours of
> available CPU to be used)."
>

Hi Raja,

The OS STAT piece of your AWR said you used...

Operating System Statistics
Statistic Total

<<snip>>

USER_TIME 9,246,603
NUM_CPUS 14
NUM_CPU_CORES 7

The USER_TIME is quoted in centiseconds, so 9,246,603 / 100 = ~ 92,000
seconds of user CPU over the six hour window.

You have 14 CPU's of time available to you, so...

6 hours * 14 CPU's = 302,400 seconds (14 * 6 hours * 60 minutes * 60
seconds)

92,000 used / 302,400 available = ~30% utilized
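
If you want to sanity check those numbers straight from the workload
repository, something like this should get close (just a sketch - it
assumes you can query the DBA_HIST views, hard codes the six hour
window, and uses the snap ids 10739 / 10763 from your report header):

-- USER_TIME in dba_hist_osstat is cumulative, so diff the begin and
-- end snaps and divide by 100 to get seconds
select (e.value - b.value) / 100               as user_cpu_secs,
       c.value                                 as num_cpus,
       c.value * 6 * 60 * 60                   as cpu_secs_available,
       round(100 * ((e.value - b.value) / 100)
                 / (c.value * 6 * 60 * 60), 1) as pct_user_cpu
  from dba_hist_osstat b, dba_hist_osstat e, dba_hist_osstat c
 where b.stat_name = 'USER_TIME' and b.snap_id = 10739
   and e.stat_name = 'USER_TIME' and e.snap_id = 10763
   and c.stat_name = 'NUM_CPUS'  and c.snap_id = 10763
   and b.dbid = e.dbid and e.dbid = c.dbid
   and b.instance_number = e.instance_number
   and e.instance_number = c.instance_number;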

That's an average over 6 hours, and I once saw Jonathan Lewis write
that if your head is on fire and your feet are in a bucket of ice
water, on average you should be pretty comfortable :)

YMMV

HTH,

Steve
From: joel garry on
On Nov 18, 10:14 pm, raja <dextersu...(a)gmail.com> wrote:
> Hi,
>

> Joel Garry,
> Can you explain again on what you said ?
>
> Thanks in Advance.
>
> With Regards,
> Raja.

A magic bullet (synonym for silver bullet) is a magical simple answer
to a complex problem. In Oracle performance tuning, it refers to
setting a parameter or performing some process to fix a problem,
without properly defining the problem or solution. There has been
some controversy over this (that's an understatement!) in the Oracle
world. In general, a proper methodology for tuning will have ways to
define problems and solutions. Unfortunately, using magic works often
enough that some people consider it proper business practice, and it
becomes difficult to distinguish magic from best practices and
reasonable cookbooking (the latter being making stuff work just by
following directions, without thinking about it - like Oracle installs
that ignore configuration issues; XE is a good thing for a certain
solution set, for example). As Arthur Clarke (IIRC) put it,
sufficiently advanced
technology is indistinguishable from magic. The problem with magic
bullets is they overstate how advanced the technology is, so Oracle
tuning is distinguishable from magic, as you hopefully will discover
as you learn AWR and statspack.

Bind peeking was kind of an overreach in the technology on Oracle's
part, in my opinion. Fortunately, when Oracle adds a feature like
this they usually leave in some way to turn it off, often using a
mystical incantation called an underscore init parameter. The problem
they were addressing was apparently performance - putting in some
code to decide whether or not to make the optimizer come up with a
new plan. That created a new problem: under some fairly common
conditions it decides wrong. This is where I thought they
overreached - it should have been obvious - but I don't know all the
ins and outs of it, or what they were intending, so this is just my
opinion. Since optimization is potentially done with any sql, and
servers tend not to do just one thing, I think the goof was missing
that the same sql could do quite different things - getting a list of
all product numbers for a drop down list of values, versus just a few
rows for a report, for example. So is there no deterministic way to
figure out whether bind peeking is a good idea? As Steve said,
setting the underscore variable had helped a problem - only to bite
them later. This is a general problem of oversimplification, solving
problems with magic bullets.
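
To make the drop down versus report case concrete, here is a
hypothetical sketch (made up table and column, not from any real
system - assume STATUS is 99% 'ACTIVE', is indexed, and has a
histogram so the optimizer knows about the skew):

variable status varchar2(10)

exec :status := 'OBSOLETE'
select product_id from products where status = :status;
-- first (hard) parse: the peeked value is 'OBSOLETE', so the shared
-- plan becomes an index range scan on the rare value

exec :status := 'ACTIVE'
select product_id from products where status = :status;
-- the same cursor is reused, so the index plan is also used to fetch
-- 99% of the table, where a full scan would probably have been cheaper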

So I ran into a performance problem with some app code, where it is
not even asking Oracle to do anything. Some customers of the vendor
had discovered setting the underscore variable helped their vague
performance problem (which for all I know was another problem I had
run into, solved by adding an index). So when I tell the vendor about
the performance problem, they tell me to set the underscore variable.
I then laugh about it on usenet.

jg
--
@home.com is bogus.
"This site is intended solely for use by Oracle's authorized users.
Use of this site is subject to the Legal Notices, Terms for Use and
Privacy Statement located on this site. Use of the site by customers
and partners, if authorized, is also subject to the terms of your
contract(s) with Oracle. Use of this site by Oracle employees is also
subject to company policies, including the Code of Conduct.
Unauthorized access or breach of these terms may result in termination
of your authorization to use this site and/or civil and criminal
penalties. " Ahh, so that's how I terminate civil and criminal
penalties.

From: raja on
Hi,

Steve Howard,
So the CPU is only about 30% utilized on average (roughly 70% still
available) and the system is not overloaded. Then it's good.

Joel Garry,
On a web search I found that these underscore init.ora parameters
(hidden parameters, to be used only if Oracle Support suggests them)
mostly create problems (side effects).
I asked Charles why, then, Oracle has created them. Charles said
(please see previous posts) that it's for flexibility (it depends upon
the application/system that we use).
In the AWR report that I have pasted, I hope those underscore init.ora
parameters are set to FALSE, so there should not be any problem.
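
( For instance, I believe the current setting can be checked with
something like this - an untested sketch; it needs a SYS connection
for the x$ views, and I am assuming the bind peeking parameter being
discussed is _optim_peek_user_binds :

select i.ksppinm  parameter_name,
       v.ksppstvl current_value,
       v.ksppstdf is_default
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm = '_optim_peek_user_binds'; )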


In AWR Report, SQL Statistics, there are many parts like :
SQL ordered by Elapsed Time
SQL ordered by CPU Time
SQL ordered by Gets
SQL ordered by Reads
SQL ordered by Executions
SQL ordered by Parse Calls
SQL ordered by Sharable Memory
SQL ordered by Version Count
Which should I look into first? Why?

Any other places that I have to check in the AWR report?
If you tell me some hot spots to look into, I will try them and get
back to you with results, so that you can correct me.
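
( As a side note, I gather the "SQL ordered by ..." sections are built
from DBA_HIST_SQLSTAT deltas, so I assume an untested sketch like this
would show the same top-by-elapsed-time list for my snap range
10739 - 10763 - please correct me if that is wrong:

select *
  from (select sql_id,
               sum(elapsed_time_delta) / 1000000 as elapsed_secs,
               sum(cpu_time_delta)     / 1000000 as cpu_secs,
               sum(executions_delta)             as execs
          from dba_hist_sqlstat
         where snap_id > 10739
           and snap_id <= 10763
         group by sql_id
         order by sum(elapsed_time_delta) desc)
 where rownum <= 10; )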

With Regards,
Raja.
From: Shakespeare on
Steve Howard schreef:
> <<snip>>
>
> That's an average over 6 hours, and I once saw Jonathan Lewis write
> that if your head is on fire and your feet are in a bucket of ice
> water, on average you should be pretty comfortable :)
>
> YMMV
>
> HTH,
>
> Steve

Try that same case with a light bulb and voltages. Metaphors are
wonderful things....

Shakespeare
From: Charles Hooper on
On Nov 18, 1:38 am, raja <dextersu...(a)gmail.com> wrote:
> Hi,
>
> I have another AWR Report generated with 1hr interval.
> I don't know how to approach this. Please help on this...
>
> AWR Report :
>             Snap Id Snap Time          Sessions Cursors/Session
> Begin Snap: 10739   13-Nov-08 09:00:45 105      58.2
> End Snap:   10763   13-Nov-08 15:00:12 91       75.2
> Elapsed:   359.46 (mins)
> DB Time:   1,723.72 (mins)

Raja,

One of my previous comments in this thread to you: "I would suggest
collecting an AWR report for 10 to 15 minutes, unless the performance
problem is only present in this 31 second time period."

The first AWR report was of such a short duration that it is possible
to obtain false positives (misleading information) about what is
happening in the system - it is quite possible that some event
started in that short time interval and ended just after the AWR snap
ended, and it is also possible that an event started long before the
AWR snap started and ended during the 31.2 second time interval.
Some of the statistics reported in the AWR report are updated in the
database immediately as they happen, and others are only updated in
the database at the end of the event (the completion of a SQL
statement, for instance). This inconsistency in the statistics may be
very misleading, and that is why I recommended running an AWR report
for 10-15 minutes, as the longer duration allows a greater percentage
of events to start and end in the time interval.

The latest version of your AWR report goes to the other extreme, with
a six hour duration. The database server may have had severe
performance problems for 30 minutes, yet when the statistics are
averaged over six hours, the system might appear to be less than 10%
busy _on average_ (0.5 hours / 6 hours = 8.3%), which is likely an
acceptable value, and the severe performance problem would be
completely missed. That is what is meant by the phrase "if your head
is on fire and your feet are in a bucket of ice water, on average you
should be pretty comfortable."

I will spend a couple minutes looking at the latest AWR report that
you posted, but I still suggest collecting an AWR report for 10 to 15
minutes (and only 10 to 15 minutes).
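
If waiting for the regular snapshots is a problem, one approach (just
a sketch - it needs a user with the DBA role, and the dbid, instance
number, and snap ids below are substitution placeholders) is to
bracket the 10 to 15 minute window with manual snapshots and then
pull the text report:

exec dbms_workload_repository.create_snapshot;
-- ... wait 10 to 15 minutes while the problem workload runs ...
exec dbms_workload_repository.create_snapshot;

-- find the two snap ids that were just created
select snap_id, begin_interval_time, end_interval_time
  from dba_hist_snapshot
 order by snap_id;

-- generate the report (or just run ?/rdbms/admin/awrrpt.sql instead)
select output
  from table(dbms_workload_repository.awr_report_text(
               &dbid, &inst_num, &begin_snap, &end_snap));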

Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.