From: Oleg Starodumov on

> Maybe you can answer me the following question too:
> If Method C is implemented, how much performance does it cost? And on
> which operations? Where does the performance degradation come from? The
> debugger events?
>

Yes, the performance hit comes from dispatching debug events to the debugger.
So if the application generates lots of events (usually exceptions, but also debug output,
module load/unload, thread start/exit, etc.), it can be slowed down. If there are not
too many debug events, performance will not be seriously affected.

This applies if you attach the debugger to an already running application. If you start the app
under the debugger (which you shouldn't do, IMO), lots of various debug
checks will be enabled by default (debug heap, etc.), which will hurt performance too.

> > There are two basic options:
> >
> > 1) Use just-in-time debugger configured in Registry, system-wide.
> > The problem with reliability of just-in-time debugging is the same as
> > with MiniDumpWriteDump - JIT debugger has to be started from the
> > inside of the crashed process, using CreateProcess function, and
> > CreateProcess itself can fail because of corruption of the process'
> > state (e.g. process heap).
>
> Ok. If I understand this correctly, choosing this method will not be any
> safer than creating a thread in-process which creates the dump (new process
> vs. new thread).
>

Yes, though creating a new thread from the filter is not a good idea IMO.
I would rather recommend creating the helper thread beforehand; then
the failing thread only needs to set one event and wait for another -
the same as with an external watchdog.
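
A rough sketch of that pattern (the event names, dump file path and dump type
are just placeholder choices; error handling omitted):

    #include <windows.h>
    #include <dbghelp.h>   // link with dbghelp.lib, or load it dynamically

    static HANDLE g_hDumpRequested;               // set by the filter
    static HANDLE g_hDumpFinished;                // set by the helper thread
    static EXCEPTION_POINTERS* g_pExceptionInfo;  // filled in by the filter
    static DWORD g_dwFaultingThreadId;

    // Helper thread, created at startup before anything can crash.
    static DWORD WINAPI DumpThreadProc(LPVOID)
    {
        WaitForSingleObject(g_hDumpRequested, INFINITE);

        HANDLE hFile = CreateFile(TEXT("crash.dmp"), GENERIC_WRITE, 0, NULL,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (hFile != INVALID_HANDLE_VALUE)
        {
            MINIDUMP_EXCEPTION_INFORMATION mei;
            mei.ThreadId          = g_dwFaultingThreadId;
            mei.ExceptionPointers = g_pExceptionInfo;
            mei.ClientPointers    = FALSE;

            MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(),
                              hFile, MiniDumpNormal, &mei, NULL, NULL);
            CloseHandle(hFile);
        }
        SetEvent(g_hDumpFinished);
        return 0;
    }

    // Unhandled exception filter: the failing thread only signals the
    // helper and waits for it to finish.
    static LONG WINAPI DumpFilter(EXCEPTION_POINTERS* pep)
    {
        g_pExceptionInfo     = pep;
        g_dwFaultingThreadId = GetCurrentThreadId();
        SetEvent(g_hDumpRequested);
        WaitForSingleObject(g_hDumpFinished, INFINITE);
        return EXCEPTION_EXECUTE_HANDLER;
    }

At startup you would create both events with CreateEvent, start the thread
with CreateThread, and install the filter with
SetUnhandledExceptionFilter(DumpFilter).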

> A last question: Can Method C be implemented without installing the
> "Debugging Tools for Windows" on the users machine?

Yes, it can. The Win32 debugging API does not depend on the presence of the Debugging Tools.
You only need dbghelp.dll to create the dump.
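
If you do not want to link against dbghelp.lib at all, the function can also
be resolved at run time - a small sketch (the typedef and helper name are
made up):

    #include <windows.h>
    #include <dbghelp.h>

    typedef BOOL (WINAPI *MiniDumpWriteDump_t)(
        HANDLE, DWORD, HANDLE, MINIDUMP_TYPE,
        PMINIDUMP_EXCEPTION_INFORMATION,
        PMINIDUMP_USER_STREAM_INFORMATION,
        PMINIDUMP_CALLBACK_INFORMATION);

    // Returns NULL if dbghelp.dll is not available on the machine.
    MiniDumpWriteDump_t LoadMiniDumpWriteDump()
    {
        HMODULE hDbgHelp = LoadLibrary(TEXT("dbghelp.dll"));
        if (hDbgHelp == NULL)
            return NULL;
        return (MiniDumpWriteDump_t)
            GetProcAddress(hDbgHelp, "MiniDumpWriteDump");
    }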

Oleg




From: Aurelien Regat-Barrel on
Günter Prossliner wrote:

> In case of an OutOfMemory
> condition, there _may_ not be enough memory for creating the dump.

I want to catch the C++ bad_alloc exception too. To increase the chance of
having enough memory for the execution of my filter, I am thinking about
doing this:

if ( it_is_a_bad_alloc )
{
    HeapDestroy( (HANDLE)_get_heap_handle() );
}

It frees all CRT heap memory in a very abrupt way :-)

According to Oleg Starodumov's article:
http://www.debuginfo.com/articles/effminidumps.html#minidumpnormal
MiniDumpWriteDump( MiniDumpNormal ) will not try to access the
(destroyed) heap memory of my process, so it should be OK. Can you
confirm that point? Or do you see a potential pitfall?

Thanks.

--
Aurélien Regat-Barrel
From: Oleg Starodumov on

> I want to catch the C++ bad_alloc exception too. To increase the chance of
> having enough memory for the execution of my filter, I am thinking about
> doing this:
>
> if ( it_is_a_bad_alloc )
> {
>     HeapDestroy( (HANDLE)_get_heap_handle() );
> }
>
> It frees all CRT heap memory in a very abrupt way :-)
>

HeapDestroy could have undesirable side effects if the heap header is corrupted.
IMO a much safer option would be to reserve a range of virtual memory
beforehand, and release it before calling the filter.
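
A minimal sketch of that pre-reservation idea (the reserve size is an arbitrary guess):

    #include <windows.h>

    static LPVOID g_pEmergencyReserve;
    static const SIZE_T kReserveSize = 2 * 1024 * 1024;  // pick a size that fits your needs

    // Call once at startup.
    void ReserveEmergencyMemory()
    {
        g_pEmergencyReserve = VirtualAlloc(NULL, kReserveSize,
                                           MEM_RESERVE | MEM_COMMIT,
                                           PAGE_READWRITE);
    }

    // Call at the very beginning of the exception filter, before the
    // dump-writing code runs, to give it some memory to work with.
    void ReleaseEmergencyMemory()
    {
        if (g_pEmergencyReserve != NULL)
        {
            VirtualFree(g_pEmergencyReserve, 0, MEM_RELEASE);
            g_pEmergencyReserve = NULL;
        }
    }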

Oleg




From: Günter Prossliner on
Hello Oleg!


Thank you for your informative answer!

>> Maybe you can answer me the following question too:
>> If Method C is implemented, how much performance does it cost?
>> And on which operations? Where does the performance degradation
>> come from? The debugger events?
>>
>
> Yes, the performance hit comes from dispatching debug events to
> the debugger.
> So if the application generates lots of events (usually exceptions,
> but also debug output, module load/unload, thread start/exit, etc.), it
> can be slowed down. If there are not too many debug events,
> performance will not be seriously affected.

Ok. In the "normal program flow" there should not be too many of them. How
are the calls serialized between the debuggee and the debugger? Over shared
memory?

If I do nothing within most of the event procedures (only unhandled
exceptions will be processed), is it possible to say how much overhead there
will be? Is it possible to subscribe only to the needed event(s)?

> This applies if you attach the debugger to an already running application. If
> you start the app under the debugger (which you shouldn't do, IMO), lots of
> various debug checks will be enabled by default (debug heap, etc.),
> which will hurt performance too.

The application will not be started under the debugger.

>> Ok. If I understand this correctly, choosing this method will not be
>> any safer than creating a thread in-process which creates the dump
>> (new process vs. new thread).
>>
>
> Yes, though creating a new thread from the filter is not a good idea IMO.
> I would rather recommend creating the helper thread beforehand; then
> the failing thread only needs to set one event and wait for another -
> the same as with an external watchdog.

This is a very good idea! I will go on with that.


I will implement the following:

The DumpHelper can be configured to use one of two modes:

Mode "Fast": It creates a thread which waits for an event to be set until it
calls "MiniDumpCreateDump" in process. The event will be set from the custom
unhandled exception filter.

Mode "Safe": It starts an watchdog-process (actually rundll32.exe with an
exported "DebuggerMain" from my dll (by using rundll32 I must not deploy
anything else but the dll), which attaches to the application as a Debugger,
and calls "MiniDumpCreateDump" within the watchdog process.

I will forget about the third method (creating a debugger process from within the
unhandled exception filter which then creates the minidump) because, according to
your posting, it is not any safer than the "Fast" mode.

What do you think about it?


GP


From: Aurelien Regat-Barrel on
Oleg Starodumov wrote:
>> I want to catch the C++ bad_alloc exception too. To increase the chance of
>> having enough memory for the execution of my filter, I am thinking about
>> doing this:
>>
>> if ( it_is_a_bad_alloc )
>> {
>>     HeapDestroy( (HANDLE)_get_heap_handle() );
>> }
>>
>> It frees all CRT heap memory in a very abrupt way :-)
>>
>
> HeapDestroy could have undesirable side effects if the heap header is
> corrupted.
> IMO a much safer option would be to reserve a range of virtual memory
> beforehand, and release it before calling the filter.

I guess you mean reserve + commit VM.

I do not really care about memory corruption, because I consider it
unrecoverable. The heap header could be corrupted, yes, but the pointer
returned by VirtualAlloc could be altered too. You may say that the risk
of accidentally modifying this little pointer is lower than that of modifying
the big heap header, but I consider both cases unlikely to
occur within my software :-)
An important point here is that I am writing C++ software with very little
direct Win32 interaction, and none of it is tied to memory management.
All memory allocation is done through the CRT functions. So a heap
corruption would have to successfully bypass the VC++ 8 "safe library",
then the CRT checked heap, then the checking features of the Win32 debug
heap, and finally a bad_alloc exception would still have to be thrown somewhere.
That's why I consider that scenario very unlikely to occur. Maybe I
could call HeapValidate before HeapDestroy, but ergh, I am not writing
software for the Space Shuttle :-)
If my app crashes because of a memory corruption, creating a minidump
won't help me. I think the best way to handle a memory corruption is to
crash the app as soon as possible. As I said, I consider it an
unrecoverable error. So, if HeapDestroy causes my app to be killed, that is
acceptable behaviour for me.

However, there are specific errors that I am interested in catching and
reporting, and for which writing an unhandled exception filter is difficult:
- stack overflow
- out of memory
For the first one, Jochen gave me an acceptable solution. The second is
a very difficult one, as I need memory to report the error, but I don't
have that memory. So, how can I increase the chance of having sufficient
memory for at least writing a minidump?

I first had the same idea as yours: allocate memory beforehand, and
free it before calling MiniDumpWriteDump. Once you have decided how much
memory to reserve (not such an easy question to answer), this only solves
the case where the memory limit has been reached by your own process. Since
its execution is suspended while we are in the filter, the reserved memory
that you just freed should still be available afterwards. Okay.

But what happens if the memory is mostly allocated by another process
which asks for memory faster than you can release it? In such a case,
you have to free a lot of memory in order to increase your chance of
successfully reporting the bad_alloc failure. I can see two ways of doing it:
- reserve a lot of memory beforehand (the problem of how much to reserve
still has to be solved) and free it when you need it
- force your process to release as much memory as possible
by destroying the CRT heap

The second approach is radical, and forces you to kill the application
afterwards. As we are in an unhandled exception filter, I think it is
acceptable behaviour.

I don't like reserving a lot of needless memory, and I think the
probability of another process competing with yours for memory is
greater than that of a Win32 heap header corruption. And I also have to admit
that I find the second solution much funnier than the first one :-)

Sorry for such a brief reply; next time I will not forget to give
detailed info about the origin and creation of the universe :-)

--
Aurélien Regat-Barrel