From: D Yuniskis on
Hi,

I use a *lot* of invariants in my code. Some are there
just during development; others "ride shotgun" over the
code at run-time.
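
(To make the distinction concrete -- a minimal sketch in C;
the macro names are invented, just for illustration:)

  #include <assert.h>

  /* Development-only invariant: compiled out of release builds. */
  #ifdef NDEBUG
  #define DEV_INVARIANT(cond)         ((void)0)
  #else
  #define DEV_INVARIANT(cond)         assert(cond)
  #endif

  /* Run-time invariant: stays in the shipped image ("rides
     shotgun") and invokes a recovery action instead of aborting. */
  #define RT_INVARIANT(cond, remedy)  \
      do { if (!(cond)) { remedy; } } while (0)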

So, a lot of my formal testing is concerned with
verifying that these things *can't* happen *and*
verifying the intended remedy when they *do*!

I'm looking for a reasonably portable (in the sense of
"not toolchain dependent") way of presenting these
regression tests that won't require the particular
scaffolding that I use.

For example, some structures (not "structs") that I
enforce may not be possible to create *in* a source
file. For these, I create "initializers" that
actively build a nonconforming image prior to unleashing
the code-under-test. If written correctly, the code
being tested *should* detect the "CAN'T HAPPEN"
conditions represented in that image and react
accordingly (of course, this differs from the production
run-time behavior as I *want* to see its results).
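
(A trivial sketch in C of what such an "initializer" looks
like -- the structure and its invariant are invented, just to
show the shape of the technique:)

  /* Hypothetical doubly-linked ring: the invariant is that
     n->next->prev == n for every node -- not something you can
     readily violate with a static initializer in a source file. */
  struct node {
      struct node *next;
      struct node *prev;
  };

  /* Test "initializer": actively builds a nonconforming image. */
  static void make_broken_ring(struct node *a, struct node *b)
  {
      a->next = b;  b->prev = a;
      b->next = a;
      a->prev = a;            /* WRONG on purpose; should be b */
  }

  /* The code under test *should* notice that b->next->prev != b
     -- a "CAN'T HAPPEN" -- and invoke its remedy. */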

I can't see any other flexible way of doing this that
wouldn't rely on knowing particulars of the compiler
and target a priori.

Or, do folks just not *test* these sorts of things (formally)?
From: d_s_klein on
On Mar 28, 11:15 am, D Yuniskis <not.going.to...(a)seen.com> wrote:
> Hi,
>
> I use a *lot* of invariants in my code.  Some are there
> just during development; others "ride shotgun" over the
> code at run-time.
>
> So, a lot of my formal testing is concerned with
> verifying that these things *can't* happen *and*
> verifying the intended remedy when they *do*!
>
> I'm looking for a reasonably portable (in the sense of
> "not toolchain dependent") way of presenting these
> regression tests that won't require the particular
> scaffolding that I use.
>
> For example, some structures (not "structs") that I
> enforce may not be possible to create *in* a source
> file.  For these, I create "initializers" that
> actively build a nonconforming image prior to unleashing
> the code-under-test.  If written correctly, the code
> being tested *should* detect the "CAN'T HAPPEN"
> conditions represented in that image and react
> accordingly (of course, this differs from the production
> run-time behavior as I *want* to see its results).
>
> I can't see any other flexible way of doing this that
> wouldn't rely on knowing particulars of the compiler
> and target a priori.
>
> Or, do folks just not *test* these sorts of things (formally)?

Yes, they do get tested when "formal" testing is done. However, I
think you can appreciate that a lot of software is shipped
without any formal testing.

For example, "formal" testing requires that every case in a switch
statement be entered -- even the default.
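
(A contrived C example -- reaching the default arm usually means
smuggling in a value the enum can't legitimately name:)

  #include <assert.h>

  enum cmd    { CMD_START, CMD_STOP };
  enum status { OK, FAILSAFE };

  static enum status dispatch(enum cmd c)
  {
      switch (c) {
      case CMD_START: /* ... */ return OK;
      case CMD_STOP:  /* ... */ return OK;
      default:        return FAILSAFE;  /* "can't happen" */
      }
  }

  int main(void)
  {
      /* Force the default arm with an out-of-range value. */
      assert(dispatch((enum cmd)42) == FAILSAFE);
      return 0;
  }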

<rant>
Most companies task the junior engineers with testing - the
inexperienced folks who really don't know where to look for errors,
or how to expose them. These companies contend that the people
experienced enough to do the job are "too expensive" for "just
testing". IMnsHO this is why most of the software out there is
pure cr(a)p.
</rant>

To answer your last question, most software is not formally tested. I
asked a developer at a large software company how the product was
tested, and the reply was "we outsource that". When I asked how they
determined if it was tested _properly_, the reply was "we'll outsource
that too".

RK
From: -jg on
On Apr 1, 5:02 am, d_s_klein <d_s_kl...(a)yahoo.com> wrote:
> To answer your last question, most software is not formally tested.  I
> asked a developer at a large software company how the product was
> tested, and the reply was "we outsource that".  When I asked how they
> determined if it was tested _properly_, the reply was "we'll outsource
> that too".

Seems they follow that universal regression test rule,
the one that realizes that within the words
"THE CUSTOMER"
you can always find these words too :
"CHUM TESTER"

Using the customer is the ultimate outsourcing coup!!

-jg

From: D Yuniskis on
d_s_klein wrote:
> On Mar 28, 11:15 am, D Yuniskis <not.going.to...(a)seen.com> wrote:
>> Hi,
>>
>> I use a *lot* of invariants in my code. Some are there
>> just during development; others "ride shotgun" over the
>> code at run-time.
>>
>> So, a lot of my formal testing is concerned with
>> verifying that these things *can't* happen *and*
>> verifying the intended remedy when they *do*!
>>
>> I'm looking for a reasonably portable (in the sense of
>> "not toolchain dependant") way of presenting these
>> regression tests that won't require the scaffolding that
>> I particularly use.
>>
>> For example, some structures (not "structs") that I
>> enforce may not be possible to create *in* a source
>> file. For these, I create "initializers" that
>> actively build a nonconforming image prior to unleashing
>> the code-under-test. If written correctly, the code
>> being tested *should* detect the "CAN'T HAPPEN"
>> conditions represented in that image and react
>> accordingly (of course, this differs from the production
>> run-time behavior as I *want* to see its results).
>>
>> I can't see any other flexible way of doing this that
>> wouldn't rely on knowing particulars of the compiler
>> and target a priori.
>>
>> Or, do folks just not *test* these sorts of things (formally)?
>
> Yes, they do get tested when "formal" testing is done. However, I
> think you can appreciate that a lot of software is shipped
> without any formal testing.

So, folks just write code and *claim* it works? And their
employers are OK with that?? (sorry, I'm not THAT cynical... :< )

> For example, "formal" testing requires that every case in a switch
> statement be entered -- even the default.

Yes. Hence the advantages of regression testing -- so you
don't have to *manually* repeat these tests each time you
commit changes to a file/module.
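
(And the harness itself needn't know anything about the
toolchain -- a plain table of test functions is about as
portable as it gets. Everything below is invented, just to
illustrate:)

  #include <stdio.h>

  typedef int (*test_fn)(void);          /* returns 0 on pass */

  static int test_broken_image(void)   { /* ... */ return 0; }
  static int test_switch_default(void) { /* ... */ return 0; }

  static const struct { const char *name; test_fn fn; } tests[] = {
      { "broken_image",   test_broken_image   },
      { "switch_default", test_switch_default },
  };

  int main(void)
  {
      int failures = 0;
      for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; i++) {
          int rc = tests[i].fn();
          printf("%-16s %s\n", tests[i].name, rc ? "FAIL" : "pass");
          failures += (rc != 0);
      }
      return failures;   /* nonzero exit flags the commit */
  }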

> <rant>
> Most companies task the junior engineers with testing - the
> inexperienced folks who really don't know where to look for errors,
> or how to expose them. These companies contend that the people
> experienced enough to do the job are "too expensive" for "just
> testing". IMnsHO this is why most of the software out there is
> pure cr(a)p.
> </rant>

I would contend that those folks are "too expensive" to be
writing *code*! That can easily be outsourced and/or
automated. OTOH, good testing (the flip side of "specification")
is something that you can *only* do if you have lots of experience
and insight into how things *can* and *do* go wrong.

> To answer your last question, most software is not formally tested. I
> asked a developer at a large software company how the product was
> tested, and the reply was "we outsource that". When I asked how they
> determined if it was tested _properly_, the reply was "we'll outsource
> that too".

<frown> I refuse (?) to believe that is the case across the board.
You just can't stay in business if you have no idea as to the
quality of your product *or* mechanisms to control same!
From: D Yuniskis on
-jg wrote:
> On Apr 1, 5:02 am, d_s_klein <d_s_kl...(a)yahoo.com> wrote:
>> To answer your last question, most software is not formally tested. I
>> asked a developer at a large software company how the product was
>> tested, and the reply was "we outsource that". When I asked how they
>> determined if it was tested _properly_, the reply was "we'll outsource
>> that too".
>
> Seems they follow that universal regression test rule,
> the one that realizes that within the words
> "THE CUSTOMER"
> you can always find these words too :
> "CHUM TESTER"
>
> Using the customer is the ultimate outsourcing coup!!

Unfortunately, this attitude seems to cause many
firms TO CURSE THEM instead of embracing them.