From: John Speth on
>> What about the concept of achieving higher levels of firmware quality?
>> How can we define it?
>>
>> Number of bugs per LOC?
>> Perceived quality by the final user?
>> Or.... ?
>
> Up time vs. problem time is a common one -- i.e. how many hours you can
> drive your car without the throttle sticking wide open, or how long your
> pacemaker works without skipping a beat.
>
> If the system provides discrete events, like readings or firings or
> whatever, the number of attempts vs. the number of failures.

Tim's response is excellent. In other words, you and your customer will
define quality. Your job as the firmware QA engineer is to define the
metrics of quality and then go ahead and measure it.

"If you can't measure it, you can't manage it."

JJS



From: news.tin.it on
In his earlier post, John Speth wrote:
>> If the system provides discrete events, like readings or firings or
>> whatever, the number of attempts vs. the number of failures.
>
> Tim's response is excellent. In other words, you and your customer will
> define quality. Your job as the firmware QA engineer is to define the
> metrics of quality and then go ahead and measure it.

Measuring "something" is the very first step, that's right.
Here's the question: are we sure that we're measuring the right thing?
If we were speaking about a mechanical component (say an aircraft wing,
for example) we could define a set of mechanical test cases, because we
all know what we're looking for (a wing that can make an airplane fly).
As a matter of fact, these tests don't care about the customer's
definition of quality (ok... customers prefer not to crash while flying
to Hawaii -.- )

But when we talk about firmware, are we able to define something like
this? Or do we have to lean only on customer expectations?
Wouldn't it be better if we could define a metric that lets us compare
the initial requirements with the produced firmware?
Are we falling back to bug counting?
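
(For what it's worth, if we do fall back to bug counting, the usual way
to normalize it is defect density: defects found divided by code size.
A made-up example:

  15 defects / 12 KLOC = 1.25 defects per KLOC

which at least lets two releases of the same codebase be compared,
though it says nothing about how severe each defect is.)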

Regards!

--
http://www.grgmeda.it


From: Tim Wescott on
news.tin.it wrote:
> In his earlier post, John Speth wrote:
>>> If the system provides discrete events, like readings or firings or
>>> whatever, the number of attempts vs. the number of failures.
>>
>> Tim's response is excellent. In other words, you and your customer
>> will define quality. Your job as the firmware QA engineer is to
>> define the metrics of quality and then go ahead and measure it.
>
> Measuring "something" is the very first step, that's right.
> Here's the question: are we sure that we're measuring the right thing?
> If we were speaking about a mechanical component (say an aircraft wing,
> for example) we could define a set of mechanical test cases, because we all
> know what we're looking for (a wing that can make an airplane fly).
> As a matter of fact, these tests don't care about the customer's definition
> of quality (ok... customers prefer not to crash while flying to Hawaii -.- )
>
> But when we talk about firmware, are we able to define something like this?
> Or do we have to lean only on customer expectations?
> Wouldn't it be better if we could define a metric that lets us compare
> the initial requirements with the produced firmware?
> Are we falling back to bug counting?

I think you're confusing the question "What is quality?" with the
question "How do we accurately express, predict and measure quality?".

I'd expand on that, but about all I can say at this point is "and that's
a hard question to answer!".

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
From: D Yuniskis on
news.tin.it wrote:
> In his earlier post, John Speth wrote:
>>> If the system provides discrete events, like readings or firings or
>>> whatever, the number of attempts vs. the number of failures.
>>
>> Tim's response is excellent. In other words, you and your customer
>> will define quality. Your job as the firmware QA engineer is to
>> define the metrics of quality and then go ahead and measure it.
>
> Measuring "something" is the very first step, that's right.
> Here's the question: are we sure that we're measuring the right thing?
> If we were speaking about a mechanical component (say an aircraft wing,
> for example) we could define a set of mechanical test cases, because we all
> know what we're looking for (a wing that can make an airplane fly).

Presumably, those tests are designed to verify that the wing
conforms to some *specification* regarding its weight, mechanical
strength, conformance of the airfoil to "ideal" contour, etc.

Those specifications were, in turn, derived from other specifications:
"We have to be able to carry X passengers and Y cargo over adistance of
M miles with a fuel efficiency of E...".

> As a matter of fact, these tests don't care about the customer's definition
> of quality (ok... customers prefer not to crash while flying to Hawaii -.- )
>
> But when we talk about firmware, are we able to define something like this?
> Or do we have to lean only on customer expectations?
> Wouldn't it be better if we could define a metric that lets us compare
> the initial requirements with the produced firmware?

You need specifications (requirements) against which to measure.
Quality, in the software sense, is how well you conform to your
requirements (how pretty your code looks might be nice, too,
but that doesn't directly indicate how well it does what it
is *designed* to do).
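
To make that concrete, here is a hedged sketch of what "measuring
conformance" can look like: a single hypothetical requirement,
"respond within 50 ms", turned into a pass/fail check. handle_command()
and the limit are invented for illustration:

  #include <assert.h>
  #include <stdio.h>
  #include <time.h>

  #define MAX_RESPONSE_MS 50.0   /* from the (hypothetical) spec */

  /* Stand-in for the firmware's real work. */
  static void handle_command(void)
  {
      for (volatile long i = 0; i < 100000; i++)
          ;
  }

  int main(void)
  {
      clock_t start = clock();
      handle_command();
      double elapsed_ms = 1000.0 * (double)(clock() - start) / CLOCKS_PER_SEC;

      printf("response: %.2f ms (limit %.0f ms)\n", elapsed_ms, MAX_RESPONSE_MS);
      /* Conformance *is* the metric: the build passes or it doesn't. */
      assert(elapsed_ms <= MAX_RESPONSE_MS);
      return 0;
  }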

> Are we falling back to bugs counting?

How do you define the "quality" of a homebuilder?
- Number of nail pops in the drywall after 6 months?
- Number of floor creaks?
- Ounces of water penetrating the roof per inch of rain?
etc.

Many industries have "invented" (seemingly) meaningless
metrics simply because you have to count *something*...

I suspect most (software) products are now evaluated solely
in terms of "number of units sold". :< And, as long as that
number is high enough to keep the company profitable, they
keep doing what they are doing.
From: Cesar Rabak on
On 10/3/2010 18:34, D Yuniskis wrote:
> news.tin.it wrote:
>> In his earlier post, John Speth wrote:
>>>> If the system provides discrete events, like readings or firings or
>>>> whatever, the number of attempts vs. the number of failures.
>>>
>>> Tim's response is excellent. In other words, you and your customer
>>> will define quality. Your job as the firmware QA engineer is to
>>> define the metrics of quality and then go ahead and measure it.
>>
>> Measuring "something" is the very first step, that's right.
>> Here's the question: are we sure that we're measuring the right thing?
>> If we were speaking about a mechanical component (say an aircraft wing,
>> for example) we could define a set of mechanical test cases, because we
>> all know what we're looking for (a wing that can make an airplane fly).
>
> Presumably, those tests are designed to verify that the wing
> conforms to some *specification* regarding its weight, mechanical
> strength, conformance of the airfoil to "ideal" contour, etc.
>
> Those specifications were, in turn, derived from other specifications:
> "We have to be able to carry X passengers and Y cargo over adistance of
> M miles with a fuel efficiency of E...".
>
>> As a matter of fact, these tests don't care about the customer's
>> definition of quality (ok... customers prefer not to crash while flying
>> to Hawaii -.- )
>>
>> But when we talk about firmware, are we able to define something like
>> this? Or do we have to lean only on customer expectations?
>> Wouldn't it be better if we could define a metric that lets us compare
>> the initial requirements with the produced firmware?
>
> You need specifications (requirements) against which to measure.
> Quality, in the software sense, is how well you conform to your
> requirements (how pretty your code looks might be nice, too,
> but that doesn't directly indicate how well it does what it
> is *designed* to do).
>

Well, in the "software sense" we've advanced a lot beyond that: we now
have the SQuaRE series of international standards (ISO/IEC 25000 through
ISO/IEC 25051).

In fact, they address some of the subtleties raised earlier, like the
question of the wing's 'quality' versus the attributes of plane quality
as perceived by the passengers.


>> Are we falling back to bugs counting?
>
> How do you define the "quality" of a homebuilder?
> - Number of nail pops in the drywall after 6 months?
> - Number of floor creaks?
> - Ounces of water penetrating the roof per inch of rain?
> etc.
>

Ditto.

> Many industries have "invented" (seemingly) meaningless
> metrics simply because you have to count *something*...

Or because those 'somethings' are meaningful along the supply chain,
becoming antecedents of the attributes ultimately perceived by the
end user?

>
> I suspect most (software) products are now evaluated solely
> in terms of "number of units sold". :< And, as long as that
> number is high enough to keep the company profitable, they
> keep doing what they are doing.

Yes... the "good enough" SW that made a certain Redmond company the
wealthiest on the globe ;-)

--
Cesar Rabak
GNU/Linux User 52247.
Get counted: http://counter.li.org/