From: 1 Lucky Texan on 21 Feb 2010 23:15 On Feb 21, 8:41 pm, D Yuniskis <not.going.to...(a)seen.com> wrote: > Hi Carl, > > > > 1 Lucky Texan wrote: > > On Feb 21, 2:51 pm, D Yuniskis <not.going.to...(a)seen.com> wrote: > >> 1 Lucky Texan wrote: > >>> On Feb 19, 3:20 pm, D Yuniskis <not.going.to...(a)seen.com> wrote: > > >>>> So, the question I pose is: given that we increasingly use > >>>> multipurpose devices in our lives and that one wants to > >>>> *deliberately* reduce the complexity of the UI's on those > >>>> devices (either because we don't want to overload the > >>>> user -- imagine having a windowed interface on your > >>>> microwave oven -- or because we simply can't *afford* a > >>>> rich interface -- perhaps owing to space/cost constraints), > >>>> what sorts of reasonable criteria would govern how an > >>>> interface can successfully manage this information while > >>>> taking into account the users' limitations? > >>> If (and it's a big if) I understand where your interest lies, it is less > >>> in 'information overload' (I think the military has done a huge > >>> amount of research in this area for fighter pilots/'battlefield' > >>> conditions) and more in 'detection' of such overload/fatigue. If so, I > > >> Yes. Though think of it as *prediction* instead of detection. > >> I.e., what to *avoid* in designing a UI so that the user > >> *won't* be overloaded/fatigued/etc. > > [snip] > > > > >> Contrast this with limited context interfaces in which the > >> "previous activity" is completely obscured by the newer > >> activity (e.g., a handheld device, aural interface, etc.). > > >> So, my question tries to identify / qualify those types > >> of issues that make UI's inefficient in these reduced > >> context deployments. > > >>> expect a system to monitor 'key strokes' (mouse moves w'ever - user > >> Hmmm... that may have a corollary. I.e., if you assume keystrokes > >> (mouse clicks, etc.) represent some basic measure of work or > >> cognition). So, the fewer of these, the less taxing the > >> "distraction". > > >>> input) and their frequency/uniqueness rates. Possibly some type of eye > >>> tracking could be helpful? > > > Even reading rates could predict the onset of overload. Again, the Air > > Yes, but keep in mind this is c.a.e and most of the "devices" > we deal with aren't typical desktop applications. I.e., > the user rarely has to "read much". Rather, he spends > time looking for a "display" (item) and adjusting a "control" > to effect some change. > > > Force has bumped into this issue. There is likely an entire branch of > > psychology dealing with these issues. > > > As for the mechanics in a system, some could perhaps be implemented > > with present or near-term technology. Certainly the military could > > justify eye-tracking, brainwave monitoring or other indicators. But > > reading rates, mouse click rates, typing speed, etc., might be doable > > now. I can also envision some add-on widgets that might allow for, say > > a double right click to create a 'finger string'. As in tying a string > > around your finger. A type of bookmark that would recall the precise > > conditions of the system (time, date, screen display, url, etc.) when > > the user detected something troubling. May not be as precise as 'the

> Actually, this is worth pursuing. Though not just when "detected > something troubling" but, also, to serve as a "remember what I was > doing *now*".
> > I suspect a lot can be done with creating unique "screens" in > visual interfaces -- so the user recognizes what is happening > *on* that screen simply by its overall appearance (layout, etc.). > Though this requires a conscious effort throughout the entire > system design to ensure this uniqueness is preserved. I > suspect, too often, we strive for similarity in "screens" > instead of deliberate dis-similarity. > > > infilled date was wrong', but it may be enough of a clue that, when > > the user reviews the recalled screen later, it triggers a memory like > > "hmmm, what was her....OH YEAH!, that date is wrong!" . > > > fun stuff to think about. > > *Taxing* stuff to think about! :> So much easier to just > look at a bunch of interfaces and say what's *wrong* with them! > yet, to do so in a way that allows "what's right" to be > extracted is challenging.

One other quick thought: in some 'dedicated' systems, it can be very important to make any deviation from the operator's 'expectation' GREATLY noticeable. I've seen some poor early software in semi-automated test stations, where some small line of text changes from 'pass' to 'fail'. That's all. Well, the expectation could be something like 97% good boards. So, as an operator, can you be relied on to notice that text change when you have just tested 100-200 boards before a bad one comes along? I told the programmer I wanted the screen to change color, the font size to increase and, if available, a beeper to sound! That is somewhat the opposite of information overload; perhaps we'd call it 'tedium' w'ever. But, as you say, these things are important. Things like preset 'check-off' lists, and systems that do not 'assume' an operator is paying attention and require 'distinct' inputs to keep them aware - I guess that all falls near this issue huh?
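To make the "finger string" bookmark idea discussed above a little more concrete, here is a minimal C sketch of that kind of state snapshot. It is only an illustration: the gesture, the struct fields, the screen identifier and the drop_finger_string() helper are all invented names; a real device would record whatever context its own UI can actually capture.

    /* A hedged sketch of a "finger string" bookmark: on some dedicated
     * gesture (double right-click, long-press, ...) snapshot enough context
     * that the user can later reconstruct "what was I looking at?".
     * All of the names below are hypothetical. */
    #include <stdio.h>
    #include <time.h>

    #define MAX_BOOKMARKS 8             /* small fixed pool; oldest is recycled */

    struct finger_string {
        time_t when;                    /* wall-clock time of the gesture */
        char   screen[32];              /* which screen/page was showing */
        char   note[64];                /* optional free text, may be empty */
    };

    static struct finger_string bookmarks[MAX_BOOKMARKS];
    static unsigned next_slot;          /* ring-buffer index */

    /* called from the input handler when the bookmark gesture is recognized */
    static void drop_finger_string(const char *screen, const char *note)
    {
        struct finger_string *b = &bookmarks[next_slot];
        next_slot = (next_slot + 1) % MAX_BOOKMARKS;

        b->when = time(NULL);
        snprintf(b->screen, sizeof b->screen, "%s", screen);
        snprintf(b->note, sizeof b->note, "%s", note ? note : "");
    }

    int main(void)
    {
        drop_finger_string("set-date", "date looked wrong?");
        printf("bookmark: screen '%s', note '%s'\n",
               bookmarks[0].screen, bookmarks[0].note);
        return 0;
    }

Recalling a bookmark later would then re-display (or at least name) that screen, which is often enough to trigger the "hmmm... OH YEAH!, that date is wrong" kind of recall described above.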
From: D Yuniskis on 23 Feb 2010 14:13 Hi Carl, 1 Lucky Texan wrote: > On Feb 21, 8:41 pm, D Yuniskis <not.going.to...(a)seen.com> wrote: >>>> Yes. Though think of it as *prediction* instead of detection. >>>> I.e., what to *avoid* in designing a UI so that the user >>>> *won't* be overloaded/fatigued/etc. [snip] >> I suspect a lot can be done with creating unique "screens" in >> visual interfaces -- so the user recognizes what is happening >> *on* that screen simply by its overall appearance (layout, etc.). >> Though this requires a conscious effort throughout the entire >> system design to ensure this uniqueness is preserved. I >> suspect, too often, we strive for similarity in "screens" >> instead of deliberate dis-similarity. > > One other quick thought: in some 'dedicated' systems, it can be very > important to make any deviation from the operator's 'expectation' > GREATLY noticeable. I've seen some poor early software in semi-

Yes.

> automated test stations, where some small line of text changes from > 'pass' to 'fail'. That's all. Well, the expectation could be something > like 97% good boards. So, as an operator, can you be relied on to > notice that text change when you have just tested 100-200 boards before a > bad one comes along? I told the programmer I wanted the screen to > change color, the font size to increase and, if available, a beeper to

With non-graphic interfaces, just changing the formatting of the text (even if this is a side-effect of the *amount* of text being displayed) can be enough of a visual cue. E.g.:

"OK"

vs.

"There is something that has gone unexpectedly wrong with whatever you happened to be doing right now. So, you might want to think twice before you buy any potato chips at the market this weekend"

> sound! That is somewhat the opposite of information overload, perhaps > we'd call it 'tedium' w'ever. But, as you say, these things are

Yes. When responsible for debugging a large piece of ATE for a client, I got bored with the tedium of the many hours the tester would spend testing the UUT (the ATE device was actually the UUT in this case :> -- tested by yet another bit of kit). So, I would hack the test script (a proprietary format that was easy to decode) to skip some of the longer tests (that I knew already passed).

I was careful to patch everything back before final sell-off.

Almost.

When the memory test came along (keep in mind, the device being tested is an ATE device -- 600 pin tester -- so the "pattern memory" was *big*), instead of the familiar:

Test 1070 - Pattern Memory

prompt on the test console, it, instead, said:

Go for coffee

This was different enough to be noticeable -- even in the tedium of those hours of noninteractive tests. Customer was not pleased. Boss was not pleased. *I* was not pleased (as I now had to sit through the entire sell-off procedure a second time with a "virgin" test disk).

> important. Things like preset 'check-off' lists, and systems that do > not 'assume' an operator is paying attention and require 'distinct' > inputs to keep them aware - I guess that all falls near this issue huh?

I think -- to minimize the "distraction" -- you want to make these distractions *really* "no-brainers". The kinds of things that can be done in your sleep. I.e., the opposite of deliberately making them require lots of your attention (to "get them right" as in your example).

Note how many interactive desktop applications/web sites deliberately change their interfaces to force you to read what they are saying.
Sort of like stores rearranging their product offerings to force you to "go looking" for what you want (instead of mindlessly -- unattentively -- proceeding directly *to* the items you seek).
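Both points above, the terse "OK" versus the deliberately bulky failure text and the color/font/beeper request from the test-station story, come down to making the rare case look and sound nothing like the common one. Here is a minimal C sketch of that idea, using nothing but stdio and the terminal bell as stand-ins for whatever display and annunciator hardware a real station would have:

    /* A hedged sketch: a passing result stays quiet and terse, a failing one
     * changes the whole shape of the output and rings the bell, so the
     * operator can't miss one bad board after a few hundred good ones.
     * Plain stdio stands in for the station's real display/beeper hardware. */
    #include <stdio.h>

    typedef enum { RESULT_PASS, RESULT_FAIL } result_t;

    static void report_result(int board, result_t r)
    {
        if (r == RESULT_PASS) {
            printf("board %4d: pass\n", board);   /* the expected, boring case */
        } else {
            printf("\a\n"                         /* '\a' rings the terminal bell */
                   "*****************************************\n"
                   "***   board %4d:  F A I L  --  STOP   ***\n"
                   "*****************************************\n\n",
                   board);
        }
    }

    int main(void)
    {
        report_result(137, RESULT_PASS);
        report_result(138, RESULT_FAIL);
        return 0;
    }

A real station could also latch the failure banner until the operator explicitly acknowledges it, so a glance away at the wrong moment can't cause it to be missed.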
From: 1 Lucky Texan on 23 Feb 2010 15:43 On Feb 23, 1:13 pm, D Yuniskis <not.going.to...(a)seen.com> wrote: > Hi Carl, > > 1 Lucky Texan wrote: > > On Feb 21, 8:41 pm, D Yuniskis <not.going.to...(a)seen.com> wrote: > >>>> Yes. Though think of it as *prediction* instead of detection. > >>>> I.e., what to *avoid* in designing a UI so that the user > >>>> *won't* be overloaded/fatigued/etc. > > [snip] > > >> I suspect a lot can be done with creating unique "screens" in > >> visual interfaces -- so the user recognizes what is happening > >> *on* that screen simply by it's overall appearance (layout, etc.). > >> Though this requires a conscious effort throughout the entire > >> system design to ensure this uniqueness is preserved. I > >> suspect, too often, we strive for similarity in "screens" > >> instead of deliberate dis-similarity. > > > One other quick though, in some 'dedicated' systems,it can be very > > important to make any deviation from the operator's 'expectation' > > GREATLY noticeable. I've seen some poor early software in semi- > > Yes. > > > automated test stations, where some small line of text changes from > > 'pass' to fail. That's all. Well, the expectation could be something > > like 97% good boards. So, as an operator, can you be relied on to > > notice that text change when you have just tested 100-200 bds before a > > bad one comes along? I told the programmer i wanted the screen to > > change color, the font size to increase and, if available, a beeper to > > With non-graphic interfaces, just changing the formatting of > the text (even if this is a side-effect of the *amount* of > text being displayed) can be enough of a visual cue. E.g.: > > "OK" > > vs. > > "There is something that has gone unexpectedly wrong with whatever > you happened to be doing right now. So, you might want to think > twice before you buy any potato chips at the market this weekend" > > > sound! That is somewhat the opposite of information overload, perhaps > > we'd call it 'tedium' w'ever. But, as you say, these things are > > Yes. When responsible for debugging a large piece of ATE for > a client, I got bored with the tedium of the many hours the > tester would spend testing the UUT (the ATE device was actually > the UUT in this case :> -- tested by yet another bit of kit). > So, I would hack the test script (a proprietary format that > was easy to decode) to skip some of the longer tests (that I > knew already passed). > > I was careful to patch everything back before final sell-off. > > Almost. > > When the memory test came along (keep in mind, the device being > tested is an ATE device -- 600 pin tester -- so the "pattern > memory" was *big*), instead of the familiar: > > Test 1070 - Pattern Memory > > prompt on the test console, it, instead, said: > > Go for coffee > > This was different enough to be noticeable -- even in the > tedium of those hours of noninteractive tests. Customer > was not pleased. Boss was not pleased. *I* was not pleased > (as I now had to sit through the entire sell-off procedure > a second time with a "virgin" test disk) > > > important. Things like preset, 'check-off' lists, and systems that do > > not 'assume' an operator is paying attention and require 'distinct' > > inputs to keep them aware - I guess that all falls near this issue huh? > > I think -- to minimize the "distraction" -- you want to make these > distractions *really* "no-brainers". The kinds of things that > can be done in your sleep. 
I.e., the opposite of deliberately > making them require lots of your attention (to "get them right" > as in your example). > > Note how many interactive desktop applications/web sites > deliberately change their interfaces to force you to read > what they are saying. Sort of like stores rearranging > their product offerings to force you to "go looking" for what > you want (instead of mindlessly -- unattentively -- proceeding > directly *to* the items you seek).

Interesting response, it seems we've had some similar experiences - though I'm from the tech end of things. I once had to do extensive environmental chamber testing and, after a few iterations, the program finally got to a point where there was only one section (a watchdog timer test IIRC) that required operator observation, then followed by some more time, until some cabling needed to be switched over to a different unit. I bought a $7 electronic kitchen timer to clip to my shirt so I could be alerted to when I needed to be back at the chamber for either observation or cable-change. It made me more productive than just sitting there for 7-8 minutes twiddling my thumbs. I suppose nowadays, something could be done with BlueTooth/Zigbee w'ever to 'call' someone to attention. ATE stuff can be odd. Like confirming the correct COLOR LED was soldered in the right location, or the audio circuitry is functioning correctly, etc. I have also used a barcode system to 'marry' serial numbers together in inventory as a system (like daughter boards to a MB, or simm mem to a CPU board) in which the software was a little cumbersome requiring putting down the unit or the scan gun to make a simple keyboard entry (spacebar or enter). I REALLY wished at that time I'd had a footswitch hooked thru a wedge or something to make that linefeed. I suppose in today's systems USB might be a good way to implement that. (Software that was inconsistent 'block to block' about data entry is another pet peeve. Why is THIS screen 'anykey', the last screen was 'spacebar', the next one needs 'enter' - GRRRRR! lol!)
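Carl's footswitch wish is easy to prototype today: poll the pedal and send a single "Enter" keystroke to the host on each press. The sketch below is hedged accordingly; only the edge-detection logic is the point, while footswitch_down() and send_enter() are stdio stubs standing in for a real GPIO read and a real USB HID keyboard report (sent via whatever USB stack the chosen microcontroller provides).

    /* A hedged sketch of a footswitch that "types" Enter for the operator.
     * Only the press-detection logic is meant seriously; the two helpers are
     * stdio stubs standing in for a real GPIO read and a real USB HID report. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool footswitch_down(void)        /* stub: pretend the pedal is hit once */
    {
        static int polls = 0;
        return (++polls == 3);               /* "pressed" on the third poll only */
    }

    static void send_enter(void)             /* stub for a USB HID keypress report */
    {
        printf("[Enter sent to host]\n");
    }

    int main(void)
    {
        bool was_down = false;

        for (int poll = 0; poll < 10; poll++) {   /* real firmware would loop forever */
            bool is_down = footswitch_down();
            if (is_down && !was_down)             /* rising edge = one keystroke */
                send_enter();
            was_down = is_down;
            /* real firmware would also debounce, e.g. ignore edges for ~20 ms */
        }
        return 0;
    }

An off-the-shelf USB footswitch that enumerates as a keyboard would do the same job with no custom firmware at all; the sketch just shows how little logic is actually involved.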
From: Leon on 23 Feb 2010 21:00 On 19 Feb, 21:20, D Yuniskis <not.going.to...(a)seen.com> wrote: > Hi, > > This is another "thought experiment" type activity (you, of > course, are free to implement any of these ideas in real > hardware and software to test your opinions -- but, I suspect > you will find it easier to "do the math" in your head, instead) > > I've been researching alternative user interface (UI) technologies > and approaches. A key issue which doesn't seem to have been > addressed is modeling how much information a "typical user" > (let's leave that undefined for the moment -- it's definition > significantly affects the conclusions, IMO) can manage *without* > the assistance of the UI. > > E.g., in a windowed desktop, it's not uncommon to have a dozen > "windows" open concurrently. And, frequently, the user can actively > be dividing his/her time between two or three applications/tasks > "concurrently" (by this, I mean, two or more distinct applications > which the user is *treating* as one "activity" -- despite their > disparate requirements/goals). > > But, the mere presence of these "other" windows (applications) > on the screen acts as a memory enhancer. I.e., the user can > forget about them while engaged in his/her "foreground" > activity (even if that activity requires the coordination of > activities between several "applications") because he/she > *knows* "where" they are being remembered (on his behalf). > > For example, if your "windows" session crashes, most folks > have a hard time recalling which applications (windows) were > open at the time of the crash. They can remember the (one) > activity that they were engaged in AT THE TIME but probably > can't recall the other things they were doing *alongside* > this primary activity. > > Similarly, when I am using one of my handhelds (i.e., the > entire screen is occupied by *an* application), it is hard > to *guess* what application lies immediately "behind" > that screen if the current application has engaged my > attention more than superficially. I rely on mechanisms > that "remind" me of that "pending" application (activity/task) > after I have completed work on the current "task". > > However, the current task may have been a "minor distraction". > E.g., noticing that the date is set incorrectly and having > to switch to the "set date" application while engaged in > the *original* application. I contend that those "distractions", > if not trivial to manage (cognitively), can seriously > corrupt your interaction with such "limited context" UI's > (i.e., cases where you can't be easily reminded of all the > "other things" you were engaged with at the time you were > "distracted"). > > I recall chuckling at the concept of putting a "depth" on > the concept of "short term memory" (IIRC, Winston claimed > something like 5 - 7 items :> ). But, over the years, > that model seems to just keep getting more appropriate > each time I revisit it! (though the 5 and 7 seem to shrink > with age). 
> > So, the question I pose is: given that we increasingly use > multipurpose devices in our lives and that one wants to > *deliberately* reduce the complexity of the UI's on those > devices (either because we don't want to overload the > user -- imagine having a windowed interface on your > microwave oven -- or because we simply can't *afford* a > rich interface -- perhaps owing to space/cost constraints), > what sorts of reasonable criteria would govern how an > interface can successfully manage this information while > taking into account the users' limitations? > > As an example, imagine "doing something" that is not > "display oriented" (as it is far too easy to think of > a windowed UI when visualizing a displayed interface) > and consider how you manage your "task queue" in real > time. E.g., getting distracted cooking dinner and > forgetting to take out the trash > > [sorry, I was looking for an intentionally "different" > set of tasks to avoid suggesting any particular type of > "device" and/or "device interface"] > > Then, think of how age, gender, infirmity, etc. impact > those techniques. > > From there, map them onto a UI technology that seems > most appropriate for the conclusions you've reached (?). > > (Boy, I'd be a ball-buster of a Professor! But, I *do* > come up with some clever designs by thinking of these > issues :> )

It was George Miller who discussed the seven or so items that can be held in immediate memory, in his famous paper "The Magical Number Seven, Plus or Minus Two."

Leon
From: D Yuniskis on 24 Feb 2010 18:08
Hi Carl, 1 Lucky Texan wrote: >> Note how many interactive desktop applications/web sites >> deliberately change their interfaces to force you to read >> what they are saying. Sort of like stores rearranging >> their product offerings to force you to "go looking" for what >> you want (instead of mindlessly -- unattentively -- proceeding >> directly *to* the items you seek). > > Interesting response, it seems we've had some similar experiences - > though I'm from the tech end of things. > I once had to do extensive environmental chamber testing and, after a > few iterations, the program finally got to a point where there was > only one section (a watchdog timer test IIRC) that required operator > observation, then followed by some more time, until some cabling > needed to be switched over to a different unit. I bought a $7

This device (plus *its* tester) didn't need anyone to babysit it. But, it was a "one-of-a-kind" system (we later built a "spare") so you tended not to take much for granted. Plus, the amount of *power* available within the racks made it dangerous to leave unattended (DC power distribution was via 1" dia exposed copper bars -- "remove all jewelry, belts, eyeglasses, etc. while servicing").

In general, you wanted to be with the device because any faults that turned up could usually be fixed quickly and the test restarted. OTOH, if you wandered away and came back an hour later, the UUT plus tester could have been sitting there "idle" for the past 59 minutes...

> electronic kitchen timer to clip to my shirt so I could be alerted to > when I needed to be back at the chamber for either observation or

Ha!

> cable-change. It made me more productive than just sitting there for > 7-8 minutes twiddling my thumbs. I suppose nowadays, something could

Our device wasn't being "built" so much as being "debugged". So, you didn't *expect* it to pass all of the tests. But, you didn't know *when* it would uncover a problem that needed to be diagnosed/repaired.

> be done with BlueTooth/Zigbee w'ever to 'call' someone to attention. > ATE stuff can be odd. Like confirming the correct COLOR LED was > soldered in the right location, or the audio circuitry is functioning > correctly, etc.

Keyboard testers. :>

> I have also used a barcode system to 'marry' serial numbers together > in inventory as a system (like daughter boards to a MB, or simm mem to > a CPU board) in which the software was a little cumbersome requiring > putting down the unit or the scan gun to make a simple keyboard entry > (spacebar or enter). I REALLY wished at that time I'd had a footswitch > hooked thru a wedge or something to make that linefeed. I suppose in > today's systems USB might be a good way to implement that. (Software > that was inconsistent 'block to block' about data entry is another > pet peeve. Why is THIS screen 'anykey', the last screen was > 'spacebar', the next one needs 'enter' - GRRRRR! lol!)

This is just the same ol', same ol' issue... folks writing software that they never *use* (and, often, don't fully understand). E.g., the subject of my post: designing the *entire* user interface while keeping in mind how "distractions" will affect the user's efficiency and proficiency with the device. It's hard to get *everything* right when dealing with a user (esp. as users have different tastes/preferences). *But*, failing to even *consider* the device from the user's perspective is just irresponsible (sinful? negligent?).