From: D Yuniskis on 6 Jul 2010 13:21

[forwarded from a conversation on a private list]

> > > I've had a hard time settling on the right set of parameters
> > > to support. Keep in mind, the framework has to support a
> > > variety of devices, so tailoring the parameters to a particular
> > > device or device class is not acceptable. (The API also needs
> > > to be consistent!)
> >
> > I'd add "latency" or "lag" to Bob's list. This is good if you are
> > trying to tune the loop. Knowing the actual lag is much easier than
> > trying to estimate it from the system's response.
>
> <frown> I'll partially concede that point -- to "latency". But only
> to the extent that it is present in the actual measurement device or
> DATAC system.
>
> By way of a (less arguesome) example, consider a transducer whose
> output requires substantial number crunching -- or coordination with
> other transducers -- to yield its effective output. The time between
> the "measured event/condition" and the availability of "valid data"
> is what I would term "measurement latency".
>
> To move to a (more arguesome) example, consider an integrating
> sensor which samples its process variable over an interval T.
> At the end of said interval (or some "measurement latency" after),
> a value is available. But what is this value representative of? (ick)
> Unless the PV was static for the duration of the measurement, the
> value can't be said to represent the PV at the *start* of the
> integration interval, the *end* of the interval, or the *middle*!
> (It may represent the variable at *several* different points in said
> interval!) Deciding "when" the value represents the PV is somewhat
> arbitrary. For any decision other than "at the end of the integration
> interval", there is an inherent "measurement latency" in the value.
>
> Each of these issues differs from the condition that I *think* you
> are addressing -- namely, lag brought about by the process's topology
> itself. E.g., if an effector and sensor have some inherent transport
> delay between them, I consider that part of the process model and
> *not* of the measurement device. It requires higher-order modeling
> to address adequately -- knowledge that the sensor itself probably
> doesn't have available to it.
>
> For example, consider siting a temperature sensor downstream from
> a furnace. The sensor does not report the actual temperature of the
> furnace. There are inevitable losses in the transport system between
> the furnace and the sensor.
>
> Also, there are *delays* in the process variable as measured at the
> sensor site vs. *in* the furnace. Locating the sensor twice as far
> from the furnace doubles the delays (all else being equal). This is
> not a characteristic of the sensor (measurement device) but, rather,
> of the system design. E.g., an identical sensor used elsewhere would
> have different overall behavior if viewed in this end-to-end fashion
> (which it *shouldn't*).
>
> And these delays vary with the rate of flow (et al.) of the heated
> material being sensed. Double the flow rate and (all else being
> equal) the delays are halved.
>
> I.e., this aspect of the measurement is more closely tied to the
> system's design and topology than to the measurement device. And it
> requires more global knowledge to compensate than is available to
> the measurement device itself.
>
> [<frown> have I made that distinction clear enough?]
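[To make the "measurement latency" point about integrating sensors concrete, here is a minimal sketch in C. The type, the field names, and the mid-interval convention are illustrative assumptions only -- nothing here is prescribed by the framework being discussed:]

    #include <stdint.h>

    /* Hypothetical tagged reading from an integrating sensor. */
    typedef struct {
        double   value;       /* integrated/averaged process variable    */
        uint64_t t_start_us;  /* start of the integration interval       */
        uint64_t t_end_us;    /* end of the integration interval         */
        uint64_t t_avail_us;  /* when the crunched result became "valid" */
    } sample_t;

    /* Pick an (arbitrary) convention: the value "represents" the middle
     * of the interval.  The effective measurement latency is then the
     * time from that instant until the value was actually available.    */
    static uint64_t measurement_latency_us(const sample_t *s)
    {
        uint64_t mid = s->t_start_us + (s->t_end_us - s->t_start_us) / 2;
        return s->t_avail_us - mid;
    }

Under the "end of interval" convention the latency collapses to t_avail_us - t_end_us (just the number-crunching delay); any other choice folds part of the integration interval into the latency figure as well.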
> > > When all parameters have been specified, the client can query
> > > the server for the current operating conditions/configuration.
> > > Note that this will be AT LEAST as good as requested but probably
> > > *not* as good as initially advertised (by the server).
> > >
> > > It is important to note that the server is only contractually
> > > obligated to provide the level of service *requested* by the
> > > client -- *not* the configuration advertised at the end of the
> > > configuration dialog! I.e., if another client connects to the
> > > server subsequently, the server might tighten the operating
> > > configuration *beyond* this advertisement -- as long as the
> > > requested configuration is still supported.
> >
> > [snip example]
> >
> > What if some characteristic of the device's measurement scheme
> > varies with value? Consider the tacho example switching to 1/f
> > mode.
>
> Hmmm... this is a flaw in my scheme. :< I implicitly assume that
> the devices perform *at* (or beyond) the capabilities "as configured".
> The "cop out" answer is that this is, indeed, how they behave -- if
> the device happens to perform A LOT better than advertised for
> some values of the process variable (e.g., your example), then you
> can't fault it for being "too good"! :>
>
> However, I would have to verify that any "improvements" don't violate
> any assumptions made downstream by the clients. (E.g., if latency
> varies, then it isn't usable.)
>
> Perhaps I should return a tuple for each data value to allow it to be
> tagged with this sort of information? The client could always discard
> or ignore it if superfluous. (Aside from bandwidth, there is really
> no cost to providing the information.)
>
> > > What else am I missing? I haven't made up my mind on "precision"
> > > as I think it is too hard to find specified on most devices.
> > >
> > > And, I don't know how to address server-side filtering. It may
> > > be too device-specific to reside on the server (?) -- unless
> > > I allow a function to be pushed? (Creating such functions would
> > > be annoying, as they would need upcalls from the server's
> > > configuration mechanism to adapt to changes in the *actual*
> > > configuration of the service -- e.g., if update rates change...)
> >
> > You won't be able to push this unless you are using some portable
> > scripting language. Could the clients pick from preexisting filters
> > built into the server's image? Or do client-side filtering?
>
> I think preexisting filters are a losing proposition. They force the
> device to fit a particular mold and don't allow the device's special
> needs or abilities to be addressed.
>
> OTOH, one can argue that those should be addressed *within* the
> device's interface and not exposed to the client (???)
>
> Grrrr.... tough call. Doing the filtering server-side would be a
> great win for the clients! :-/ I'll have to stew on that for
> a while.
>
> Thanks!
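[The "tuple for each data value" idea floated above might look something like the following -- a minimal sketch in C, with the field names and flag bit invented purely for illustration, not an actual wire format:]

    #include <stdint.h>

    /* Hypothetical per-sample tuple: the reading itself plus the
     * operating characteristics actually in effect when it was taken.
     * A client that doesn't need the extra fields can simply ignore
     * them; aside from bandwidth, they cost nothing to provide.      */
    typedef struct {
        double   value;        /* the measurement itself                 */
        uint32_t latency_us;   /* effective measurement latency          */
        double   accuracy;     /* +/- bound in effect for this reading   */
        double   resolution;   /* smallest distinguishable step          */
        uint8_t  flags;        /* qualifiers, e.g. the bit below         */
    } tagged_sample_t;

    /* Set when the device did better than its negotiated contract for
     * this particular value (e.g., a tacho dropping into 1/f mode).   */
    #define SAMPLE_EXCEEDS_CONTRACT  0x01u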
From: D Yuniskis on 8 Jul 2010 12:32
1 Lucky Texan wrote:

[8<]

>> What else am I missing? I haven't made up my mind on "precision"
>> as I think it is too hard to find specified on most devices.
>>
>> And, I don't know how to address server-side filtering. I think
>> it may be too device-specific to reside on the server (?) -- unless
>> I allow a function to be pushed? (Creating such functions would
>> be annoying, as they would need upcalls from the server's
>> configuration mechanism to adapt to changes in the *actual*
>> configuration of the service -- e.g., if update rates change...)
>
> The only thoughts I had are perhaps an 'accuracy/confidence'-type
> progress bar as each parameter is gathered/displayed -- particularly
> those which are waiting on other inputs. A user in a hurry could
> decide when 'it's good enough' and maybe go on to a different task or
> block? Or even letter grades from F to A.

Ah, sorry if I wasn't clear. The "user" (client) is a piece of
software. This interface lets that software tailor itself to the
capabilities afforded by the sensor.

For example, it allows a control loop to do some self-tuning by
knowing what these characteristics are. So, if a sensor has a high
inherent latency or a slow update rate, it lets the loop "slow down"
to avoid oscillation (i.e., there's no sense trying to control faster
than the available sensory data can support).

It also lets the software (client) know the "goodness" of the data it
is operating on. E.g., you can have great resolution, but if your
*precision* sucks, what good is it?

> And if the system includes alarm/flag conditions, those could be
> user-defined in a ranking system.

That's not really an issue here.

> Perhaps for some devices, a reduction in accuracy is less important
> than a speedy report?

Yes. In some cases, that *speed* allows you to get an even *more*
accurate report. E.g., configure for a wide operating range and a fast
response time, sacrificing accuracy; take reading(s); then reconfigure
for a tighter operating range (based on those observations) at higher
accuracy -- possibly sacrificing response time. (A sketch of this
two-pass approach appears below.)

It lets the measurement device *appear* to have smarts -- instead of
just blindly configuring it to operate in a certain manner and then
*living* with those constraints "forever".

> I dunno, just brainstorming.

That was my point in asking! ;-)

Thanks!
--don
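[A minimal sketch of the two-pass "fast first, then accurate" approach described above. The API names (sensor_config_t, sensor_request(), sensor_read()), the numbers, and the stub behavior are illustrative assumptions only -- they are not the actual framework interface:]

    #include <stdio.h>

    typedef struct {
        double range_lo, range_hi;   /* requested operating range          */
        double accuracy;             /* worst acceptable +/- error         */
        double response_time_s;      /* worst acceptable response time     */
    } sensor_config_t;

    /* Stubs standing in for the real device/server so the sketch runs:
     * pretend the server grants exactly what was asked for.              */
    static int sensor_request(const sensor_config_t *want,
                              sensor_config_t *granted)
    {
        *granted = *want;
        return 0;                    /* 0 == request satisfiable           */
    }
    static double sensor_read(void) { return 431.7; }  /* canned reading   */

    int main(void)
    {
        sensor_config_t want, got;
        double coarse, fine;

        /* Pass 1: wide range, fast response, coarse accuracy. */
        want = (sensor_config_t){ .range_lo = 0.0,  .range_hi = 1000.0,
                                  .accuracy = 10.0, .response_time_s = 0.01 };
        if (sensor_request(&want, &got) != 0) return 1;
        coarse = sensor_read();

        /* Pass 2: narrow the range around the coarse reading, ask for
         * better accuracy, and accept a slower response.                */
        want = (sensor_config_t){ .range_lo = coarse - 50.0,
                                  .range_hi = coarse + 50.0,
                                  .accuracy = 0.5,  .response_time_s = 1.0 };
        if (sensor_request(&want, &got) != 0) return 1;
        fine = sensor_read();

        printf("coarse = %g  fine = %g (+/- %g)\n", coarse, fine, got.accuracy);
        return 0;
    }

The only point of the sketch is that the second request is informed by the first reading; in the real framework the granted configuration would come back from the negotiation dialog described in the 6 Jul post.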