From: D Yuniskis on
Hi George,

George Neuner wrote:
> On Mon, 03 May 2010 14:08:38 -0700, D Yuniskis
> <not.going.to.be(a)seen.com> wrote:
>
>> I'm just hoping to find something that can
>> be done "alongside" the development instead of injecting
>> something into the process. Writing clear code is hard
>> enough for most folks. Adding deliberate obfuscation
>> just makes it that much more fragile and vulnerable to
>> error. We used to call the "tamper-proofing" activity
>> "RE-bugging" -- an indication of how hard it was to
>> get it right -- and always did it *after* the executable
>> was known to operate correctly... *two* test cycles! :<
>> Obviously, if you can come up with a scheme that can be
>> surreptitiously used *during* development, then the
>> developer can actually debug "production code" instead of
>> having to add this second "post-processing" step.
>
> I write and hack compilers for fun, have hacked them for business and
> I was part of a team that wrote a compiler for a user programmable
> DSP+FPGA signal/image processing PC board. Hacking compilers is
> tricky business and it is all too easy to wind up with an unreliable
> tool.

Exactly. It also constrains your development: what if
you don't have sources to the compiler? What if the vendor
is unwilling to make a "special" for you (or, unwilling
to go through the certification of that "special")?
What if you move to an entirely different hardware/software
platform? etc.

> I understand the reluctance to mess with a working executable, but I
> think adding a post production step with a custom tool is preferable
> to mucking with compilers. Maybe I've been blessed by good people,
> but IME it isn't all that hard to get someone to commit to using a
> provided macro or template system. Trusting them to extend or
> maintain it is a different issue, but again, IME it hasn't been a big
> deal.

I think the drawback there is that it potentially exports that
technology from your organization. I.e., there is no reason
why a developer needs to know what happens to his sources
after he has written/debugged them. If your "transformations"
don't semantically alter the executable, he shouldn't be
able to tell whether this was "part of the compiler" or not.

> I've never needed to obfuscate an executable, but I've dealt with very
> flexible and adaptive programs and I understand your issues with other
> developers and reliability.

It's not really obfuscation. The sources and binaries still
are 100% legitimate (ideally). You just want an easy way
of making N "functionally identical yet physically different"
instances of an executable from one set of sources.
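
(To make that concrete: imagine a per-build seed -- say, -DBUILD_SEED=7
on the compiler command line -- selecting between equivalent bodies.
BUILD_SEED and memclear() are purely illustrative names here, not part
of any real build system:)

#ifndef BUILD_SEED
#define BUILD_SEED 0    /* supplied per build, e.g. -DBUILD_SEED=7 */
#endif

/* Two bodies with identical behavior; the seed decides which one
   ends up in *this* image, so N builds from one source tree yield
   N physically different (but equivalent) executables. */
void memclear(unsigned char *p, unsigned long n)
{
#if (BUILD_SEED % 2) == 0
    while (n--) *p++ = 0;      /* clear front-to-back */
#else
    while (n--) p[n] = 0;      /* clear back-to-front */
#endif
}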

> I was principal developer for 3 different lines of industrial QA apps,
> 2 of which are FDA approved for food and pharmaceutical production as
> well as general industrial use. These apps were developed and
> maintained by teams of 3-8 people over the 10 years I was involved
> with them. These programs have hundreds of runtime options: for
> equipment enumeration and interfacing, for customizing the operator
> UI, for inspection tuning, performance tuning, logging, security, etc.
> Despite nearly every operation being conditional, they are still
> required to have near perfect inspection reliability (zero false
> negatives, less than 0.1% false positives) and 99.9% uptime.

I wouldn't think of trying to deploy a watermarking/fingerprinting
system in an FDA environment. I don't know how to get through
the validation with *different* executables! Unless the
changes were confined to "dead code" -- which, in itself, is
disallowed. :<

(there are other application domains that would also be
reluctant to adopt any sort of watermarking because of
their insistence on verifiable "images")

> [Knock wood, I've never once had a production system crash in the
> field due to my software. Hardware reliability I can't control, but
> my software can be as perfect as my manager allows. 8-)

<grin> I am haunted by an autopilot (marine) I designed some
30+ years ago. After returning from our test run, an examination
of the actual course taken showed an "S" in the plot at one
particular place. Did my software "divide by zero" (or
something similar)? Or, was this the spot where we stopped
to fish (which requires constantly readjusting the boat's
direction to keep it pointed into the swells)?

My boss wasn't worried about it (since the rest of the
trip -- I think 7 legs? -- went uneventfully) but the image
of that "S" is burned into my memory... :-/

> The pharma apps are my masterpieces, but my claim to fame is compact
> discs. If you bought any kind of pre-recorded CD or DVD - music,
> game, program, etc. - between 1995 and 2001, the odds are about 50%
> that it passed through one of my QA apps during production.

Is there an easy/high-speed way to verify prerecorded
media is "playable"? E.g., discs that see lots of
circulation (e.g., "Blockbusters", public library, etc)
that need to be verified as "undamaged" before being
reintroduced into circulation?
From: D Yuniskis on
Hi George,

George Neuner wrote:
> On Mon, 03 May 2010 14:14:59 -0700, D Yuniskis
>> George Neuner wrote:
>>
>>> Also, there is no standard vtable implementation - some compilers use
>>> arrays, others use hash tables. And in an array implementation, it's
>>> possible for complicated classes to have more than 1 vtable.
>> Yes. This approach would require hacking the compiler
>> (i.e., *a* compiler). That makes it less attractive.
>> But, I don't see any way of manipulating tables that
>> the developer wouldn't otherwise be aware of.
>
> Definitely a compiler level hack - I don't see any simple way to do it
> after the fact. Swapping vtable entries would also require changing
> method call sites - the indexes or hash keys would need to match.
>
> One thing I forgot to mention is that some [really good] C++ compilers
> can incorporate the whole vtable hierarchy into each class so that the
> class stands alone. This allows any method to be dispatched with a
> single lookup regardless of the position of its implementing class in
> the hierarchy.
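
(For concreteness -- a hand-rolled, purely hypothetical sketch in
plain C, *not* any real compiler's layout -- of why the slot numbers
baked into the call sites have to match the table:)

struct shape;                               /* forward declaration */

struct shape_vtbl {
    double (*area)(const struct shape *);   /* slot 0 */
    void   (*draw)(const struct shape *);   /* slot 1 */
};

struct shape {
    const struct shape_vtbl *vt;            /* per-object table pointer */
};

double get_area(const struct shape *s)
{
    /* the call site bakes in "slot 0" at compile time; permute the
       table and every such site must be patched to the new index */
    return s->vt->area(s);
}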

I've played with some of the ideas discussed here -- reordering
auto variables, using multiple library versions, etc. -- and
it looks pretty easy to get the required degree of
"differentness" between images.

The library approach is the most reliable -- you can be
*sure* to get different images (which isn't guaranteed
when you start mucking with variable declarations).
And, it seems the easiest to predict/guarantee behaviorally
(write the different library versions with "constant performance"
as an explicit goal).

Unfortunately, it also requires the most *deliberate*
effort (though it can be done in parallel with the
regular development) -- whereas the "reorder variable"
technique would take very little effort beyond a
preprocessor.
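
(The sort of thing I mean -- assuming the same kind of hypothetical
BUILD_SEED define on the compiler command line; the names are
illustrative, and of course an optimizer is free to undo some of
the shuffling:)

int checksum(const unsigned char *buf, int len)
{
#if (BUILD_SEED % 2) == 0
    int i;
    int sum = 0;
    unsigned char pad[16];      /* perturbs the frame layout, too */
#else
    unsigned char pad[16];
    int sum = 0;
    int i;
#endif
    (void)pad;                  /* unused; present only to vary layout */

    for (i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}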

<shrug>
From: George Neuner on
On Mon, 17 May 2010 09:52:20 -0700, D Yuniskis
<not.going.to.be(a)seen.com> wrote:

>Hi George,
>
>George Neuner wrote:
>
>> [Knock wood, I've never once had a production system crash in the
>> field due to my software. Hardware reliability I can't control, but
>> my software can be as perfect as my manager allows. 8-)
>
><grin> I am haunted by an autopilot (marine) I designed some
>30+ years ago. After returning from our test run, an examination
>of the actual course taken showed an "S" in the plot at one
>particular place. Did my software "divide by zero" (or
>something similar)? Or, was this the spot where we stopped
>to fish (which requires constantly readjusting the boat's
>direction to keep it pointed into the swells)?
>
>My boss wasn't worried about it (since the rest of the
>trip -- I think 7 legs? -- went uneventfully) but the image
>of that "S" is burned into my memory... :-/

Doesn't sound like a bug to me ... ship it!


>> The pharma apps are my masterpieces, but my claim to fame is compact
>> discs. If you bought any kind of pre-recorded CD or DVD - music,
>> game, program, etc. - between 1995 and 2001, the odds are about 50%
>> that it passed through one of my QA apps during production.
>
>Is there an easy/high-speed way to verify prerecorded
>media is "playable"? E.g., discs that see lots of
>circulation (e.g., "Blockbusters", public library, etc)
>that need to be verified as "undamaged" before being
>reintroduced into circulation?

There's no way for you to check the integrity of the stamped aluminum
cookie other than to try to play it ... on the production line, where
the orientation of the disc is fixed, checking of the recording is
done using 2-dimensional laser imaging.

However, checking for scratches in the plastic coating can be done
optically. You need high resolution and low-angle offset lighting. An
undamaged disc appears as "near-black" to the camera - scratches in
the coating reflect more light into the camera.

         __
        |  |
        |  |
        |  |   -- camera
        |  |
         /\
    o         o   -- lights
  ________________
                  \_ disc


Scratch check is pretty simple to implement from an image processing
point of view - threshold to remove the camera bias, erode a bit to
reduce/eliminate jitter and noise, blob scan for anything visible and
compare blob sizes to your rejection criteria.
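
(If you wanted to prototype that today, a rough sketch of the same
pipeline using OpenCV -- the bias level and minimum blob area are
placeholders you'd tune to your camera and rejection criteria, not
values from the original system:)

#include <vector>
#include <opencv2/opencv.hpp>

bool disc_is_damaged(const cv::Mat &gray, int bias, double min_blob_area)
{
    cv::Mat bin, eroded;

    // threshold to remove the camera bias: only pixels brighter than
    // the dark-field background survive as candidate scratches
    cv::threshold(gray, bin, bias, 255, cv::THRESH_BINARY);

    // erode a bit to knock out single-pixel jitter and noise
    cv::erode(bin, eroded,
              cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));

    // blob scan for anything still visible
    std::vector<std::vector<cv::Point> > blobs;
    cv::findContours(eroded, blobs, cv::RETR_EXTERNAL,
                     cv::CHAIN_APPROX_SIMPLE);

    // compare blob sizes to the rejection criterion
    for (size_t i = 0; i < blobs.size(); i++)
        if (cv::contourArea(blobs[i]) >= min_blob_area)
            return true;

    return false;
}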

You need at least 1Kx1K resolution (with no oversampling) to catch
defects that can affect playback, (ideally) square pixels, and very
stable symmetric lighting all around. Way back when, the system I
worked on used a fiber ring and laser, but I think it probably could
be done now with sufficiently bright LEDs. We used 4 512x512 box
cameras, one aimed at each quadrant (b_tch to align but even with the
mount much cheaper than a single 1Kx1K camera). Square pixel PAL
cameras tend to be a little more expensive than NTSC. You need a good
low noise frame grabber too ... I don't remember what we used but
something like the Matrox Solios eA/XA would probably work well.

If you're handy with mount construction, I'd guess you could piece
together a decent system for less than $5000 (might be less if you can
talk your way into an engineering sample on the frame grabber).


Probably you were looking for an answer like: "stick it in the player
and run this <obscure> software" ... Sorry.

George
From: IanM on
D Yuniskis wrote:
> <grin> I am haunted by an autopilot (marine) I designed some
> 30+ years ago. After returning from our test run, an examination
> of the actual course taken showed an "S" in the plot at one
> particular place. Did my software "divide by zero" (or
> something similar)? Or, was this the spot where we stopped
> to fish (which requires constantly readjusting the boat's
> direction to keep it pointed into the swells)?
>
> My boss wasn't worried about it (since the rest of the
> trip -- I think 7 legs? -- went uneventfully) but the image
> of that "S" is burned into my memory... :-/

Not uncommon if your autopilot uses a fluxgate or other magnetic compass
sensor as its primary heading reference. If you are in relatively
shallow water and pass over a modern-era wreck that isn't a danger to
surface navigation, there is usually enough steel around to cause a
significant amount of compass deviation. I've had a complete 360 deg.
turn caused by such a wreck and innumerable S wiggles. With a little
local knowledge you soon learn to avoid the more troublesome ones.

--
Ian Malcolm. London, ENGLAND. (NEWSGROUP REPLY PREFERRED)
ianm[at]the[dash]malcolms[dot]freeserve[dot]co[dot]uk
[at]=@, [dash]=- & [dot]=. *Warning* HTML & >32K emails --> NUL:
From: D Yuniskis on
Hi Ian,

IanM wrote:
> D Yuniskis wrote:
>> <grin> I am haunted by an autopilot (marine) I designed some
>> 30+ years ago. After returning from our test run, an examination
>> of the actual course taken showed an "S" in the plot at one
>> particular place. Did my software "divide by zero" (or
>> something similar)? Or, was this the spot where we stopped
>> to fish (which requires constantly readjusting the boat's
>> direction to keep it pointed into the swells)?
>>
>> My boss wasn't worried about it (since the rest of the
>> trip -- I think 7 legs? -- went uneventfully) but the image
>> of that "S" is burned into my memory... :-/
>
> Not uncommon if your autopilot uses a fluxgate or other magnetic compass
> sensor as its primary heading reference. If you are in relatively

Hmmmm... it was a cascade control loop. A conventional
autopilot: clear magnetic disc floating in liquid with
optical sensors to tell when it is "aligned" properly while
the whole assembly is "motor driven" -- i.e., to "set" course,
turn on a servo loop that keeps the "compass" nulled
as the boat is steered onto the desired heading. Thereafter,
any deviations from this null activate the rudder servos.

On top of this (*my* "claim to fame") was a software servo loop
that took LORAN-C coordinates of the "destination" and kept
tweaking the "motor drive" to update the "new" course
(i.e., instead of a conventional autopilot that seeks to
maintain a constant heading, my goal was to reach a desired
*destination*).
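
(In outline, something like this -- a hypothetical sketch, not the
original code; the flat-earth bearing math and the gain are purely
illustrative:)

#include <math.h>

#define DEG_PER_RAD (180.0 / 3.14159265358979323846)

/* crude flat-earth bearing from present position to the waypoint
   (adequate over short legs); 0 = north, increasing clockwise */
static double bearing_to(double lat, double lon,
                         double wp_lat, double wp_lon)
{
    double east  = (wp_lon - lon) * cos(lat / DEG_PER_RAD);
    double north = (wp_lat - lat);
    double brg   = atan2(east, north) * DEG_PER_RAD;
    return (brg < 0.0) ? brg + 360.0 : brg;
}

/* outer loop: take the latest LORAN-C fix and nudge the heading
   setpoint fed to the conventional (heading-hold) inner loop */
void update_course(double lat, double lon,         /* LORAN-C fix */
                   double wp_lat, double wp_lon,   /* destination */
                   double *heading_setpoint)
{
    double err = bearing_to(lat, lon, wp_lat, wp_lon) - *heading_setpoint;

    while (err >= 180.0) err -= 360.0;   /* always turn the short way */
    while (err < -180.0) err += 360.0;

    *heading_setpoint += 0.1 * err;      /* gentle, illustrative gain */
    if (*heading_setpoint >= 360.0) *heading_setpoint -= 360.0;
    if (*heading_setpoint < 0.0)    *heading_setpoint += 360.0;
}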

> shallow water and pass over a modern-era wreck that isn't a danger to
> surface navigation, there is usually enough steel around to cause a
> significant amount of compass deviation. I've had a complete 360 deg.
> turn caused by such a wreck and innumerable S wiggles. With a little
> local knowledge you soon learn to avoid the more troublesome ones.

Ah, that's possible! It is also possible that the anomalies
in the recorded track happened when we were in "manual"
control (fishing for Blues). It could also have been an
anomaly in the LORAN receiver. <shrug>

It's been 30+ years. I haven't heard of any *deaths* so I
don't lose too much sleep over it! :> (though I really would
have liked a resolution "back then")