From: D Yuniskis on
Hi,

This is another "thought experiment" type activity (you, of
course, are free to implement any of these ideas in real
hardware and software to test your opinions -- but, I suspect
you will find it easier to "do the math" in your head, instead).

I've been researching alternative user interface (UI) technologies
and approaches. A key issue which doesn't seem to have been
addressed is modeling how much information a "typical user"
(let's leave that undefined for the moment -- its definition
significantly affects the conclusions, IMO) can manage *without*
the assistance of the UI.

E.g., in a windowed desktop, it's not uncommon to have a dozen
"windows" open concurrently. And, frequently, the user can actively
be dividing his/her time between two or three applications/tasks
"concurrently" (by this, I mean, two or more distinct applications
which the user is *treating* as one "activity" -- despite their
disparate requirements/goals).

But, the mere presence of these "other" windows (applications)
on the screen acts as a memory enhancer. I.e., the user can
forget about them while engaged in his/her "foreground"
activity (even if that activity requires the coordination of
activities between several "applications") because he/she
*knows* "where" they are being remembered (on his behalf).

For example, if your "windows" session crashes, most folks
have a hard time recalling which applications (windows) were
open at the time of the crash. They can remember the (one)
activity that they were engaged in AT THE TIME but probably
can't recall the other things they were doing *alongside*
this primary activity.

Similarly, when I am using one of my handhelds (i.e., the
entire screen is occupied by *an* application), it is hard
to *guess* what application lies immediately "behind"
that screen if the current application has engaged my
attention more than superficially. I rely on mechanisms
that "remind" me of that "pending" application (activity/task)
after I have completed work on the current "task".

However, the current task may have been a "minor distraction".
E.g., noticing that the date is set incorrectly and having
to switch to the "set date" application while engaged in
the *original* application. I contend that those "distractions",
if not trivial to manage (cognitively), can seriously
corrupt your interaction with such "limited context" UI's
(i.e., cases where you can't be easily reminded of all the
"other things" you were engaged with at the time you were
"distracted").

I recall chuckling at the idea of putting a "depth" on
"short term memory" (IIRC, Winston claimed something like
5 - 7 items :> ). But, over the years, that model seems
to keep getting more appropriate each time I revisit it!
(though the 5 and 7 seem to shrink with age).
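
(To make that concrete: here's a minimal C sketch of the idea,
treating the *device* -- not the user -- as the thing that remembers
"interrupted" activities, and refusing to nest any deeper than some
small, assumed limit. The names and the limit itself are purely
illustrative, not a real design.)

#include <string.h>

#define MAX_PENDING 5        /* assumed "short term memory" depth */

struct pending {
    char what[32];           /* terse label for the interrupted activity */
};

static struct pending stack[MAX_PENDING];
static int depth;

/* Remember an interrupted activity; refuse if the user would end up
   buried deeper than the assumed cognitive limit. */
int push_pending(const char *what)
{
    if (depth >= MAX_PENDING)
        return -1;           /* caller should decline the "distraction" */
    strncpy(stack[depth].what, what, sizeof stack[depth].what - 1);
    stack[depth].what[sizeof stack[depth].what - 1] = '\0';
    depth++;
    return 0;
}

/* When the current task completes, remind the user what was pending. */
const char *pop_pending(void)
{
    return (depth > 0) ? stack[--depth].what : NULL;
}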

So, the question I pose is: given that we increasingly use
multipurpose devices in our lives and that one wants to
*deliberately* reduce the complexity of the UI's on those
devices (either because we don't want to overload the
user -- imagine having a windowed interface on your
microwave oven -- or because we simply can't *afford* a
rich interface -- perhaps owing to space/cost constraints),
what sorts of reasonable criteria would govern how an
interface can successfully manage this information while
taking into account the users' limitations?

As an example, imagine "doing something" that is not
"display oriented" (as it is far too easy to think of
a windowed UI when visualizing a displayed interface)
and consider how you manage your "task queue" in real
time. E.g., getting distracted cooking dinner and
forgetting to take out the trash.

[sorry, I was looking for an intentionally "different"
set of tasks to avoid suggesting any particular type of
"device" and/or "device interface"]

Then, think of how age, gender, infirmity, etc. impact
those techniques.

From there, map them onto a UI technology that seems
most appropriate for the conclusions you've reached (?).

(Boy, I'd be a ball-buster of a Professor! But, I *do*
come up with some clever designs by thinking of these
issues :> )
From: 1 Lucky Texan on
On Feb 19, 3:20 pm, D Yuniskis <not.going.to...(a)seen.com> wrote:
> [snip]
>
> So, the question I pose is:  given that we increasingly use
> multipurpose devices in our lives and that one wants to
> *deliberately* reduce the complexity of the UI's on those
> devices (either because we don't want to overload the
> user -- imagine having a windowed interface on your
> microwave oven -- or because we simply can't *afford* a
> rich interface -- perhaps owing to space/cost constraints),
> what sorts of reasonable criteria would govern how an
> interface can successfully manage this information while
> taking into account the users' limitations?
>
> [snip]

If (and it's a big if) I understand where your interest lies, it is less
in 'information overload' (I think the military has done a huge
amount of research in this area for fighter pilots/'battlefield'
conditions) and more in 'detection' of such overload/fatigue. If so, I
expect a system to monitor 'key strokes' (mouse moves, whatever - user
input) and their frequency/uniqueness rates. Possibly some type of eye
tracking could be helpful?

I dunno

Carl
From: D Yuniskis on
Hi Carl,

1 Lucky Texan wrote:
> On Feb 19, 3:20 pm, D Yuniskis <not.going.to...(a)seen.com> wrote:
>> I've been researching alternative user interface (UI) technologies
>> and approaches. A key issue which doesn't seem to have been
>> addressed is modeling how much information a "typical user"
>> (let's leave that undefined for the moment -- its definition
>> significantly affects the conclusions, IMO) can manage *without*
>> the assistance of the UI.

[snip]

>> So, the question I pose is: given that we increasingly use
>> multipurpose devices in our lives and that one wants to
>> *deliberately* reduce the complexity of the UI's on those
>> devices (either because we don't want to overload the
>> user -- imagine having a windowed interface on your
>> microwave oven -- or because we simply can't *afford* a
>> rich interface -- perhaps owing to space/cost constraints),
>> what sorts of reasonable criteria would govern how an
>> interface can successfully manage this information while
>> taking into account the users' limitations?
>
> If (and it's a big if) I understand where your interest lies, it is less
> in 'information overload' (I think the military has done a huge
> amount of research in this area for fighter pilots/'battlefield'
> conditions) and more in 'detection' of such overload/fatigue. If so, I

Yes. Though think of it as *prediction* instead of detection.
I.e., what to *avoid* in designing a UI so that the user
*won't* be overloaded/fatigued/etc.

An example that came up in another conversation (off list):

You're writing a letter to <someone>. At some point in the
composition, you notice that the date that has been filled
in (automatically) is incorrect.

You could wait until you are done writing the letter to
correct this. But, then you have to *hope* you REMEMBER
to do it! :>

Or, you can do it now while it is still fresh in your mind.
And return to the rest of your letter thereafter.

In Windows, for example, you could navigate Start | Settings | Control
Panel | Date/Time and make the changes there. Then, close
that dialog, close Control Panel and finally return to your
text editing. Or, you could double click on the time display
in the system tray and directly access the Date/Time Properties
panel.

The former requires a greater degree of focus for the user.
There is more navigation involved. As such, it is a greater
distraction and, thus, more likely to cause the user to lose
his train of thought -- which translates to an inefficiency
of the interface.

The latter requires less involvement of the user (assuming
knowledge of this "shortcut" is intuitive enough) and is
therefore less of a distraction.

Of course, this (Windows) example is flawed in that the user
can still *see* what he was doing prior to invoking the
"set date" command. Chances are, he can even *read* what
he has written *while* simultaneously setting the date.

Contrast this with limited context interfaces in which the
"previous activity" is completely obscured by the newer
activity (e.g., a handheld device, aural interface, etc.).

So, my question tries to identify / qualify those types
of issues that make UI's inefficient in these reduced
context deployments.

> expect a system to monitor 'key strokes' (mouse moves, whatever - user

Hmmm... that may have a corollary. I.e., if you assume keystrokes
(mouse clicks, etc.) represent some basic measure of work or
cognition, then the fewer of these, the less taxing the
"distraction".

> input) and their frequency/uniqueness rates. Possibly some type of eye
> tracking could be helpful?
From: 1 Lucky Texan on
On Feb 21, 2:51 pm, D Yuniskis <not.going.to...(a)seen.com> wrote:
> [snip]
>
> > expect a system to monitor 'key strokes' (mouse moves, whatever - user
>
> Hmmm... that may have a corollary.  I.e., if you assume keystrokes
> (mouse clicks, etc.) represent some basic measure of work or
> cognition, then the fewer of these, the less taxing the
> "distraction".
>
> > input) and their frequency/uniqueness rates. Possibly some type of eye
> > tracking could be helpful?


Even reading rates could predict the onset of overload. Again, the Air
Force has bumped into this issue. There is likely an entire branch of
psychology dealing with these issues.

As for the mechanics in a system, some could perhaps be implemented
with present or near-term technology. Certainly the military could
justify eye-tracking, brainwave monitoring or other indicators. But
reading rates, mouse click rates, typing speed, etc., might be doable
now. I can also envision some add-on widgets that might allow for, say,
a double right click to create a 'finger string' -- as in tying a string
around your finger. A type of bookmark that would recall the precise
conditions of the system (time, date, screen display, URL, etc.) when
the user detected something troubling. It may not be as precise as 'the
infilled date was wrong', but it may be enough of a clue that, when
the user reviews the recalled screen later, it triggers a memory like
"hmmm, what was here... OH YEAH! That date is wrong!"

fun stuff to think about.

Carl
1 Lucky Texan
From: D Yuniskis on
Hi Carl,

1 Lucky Texan wrote:
> On Feb 21, 2:51 pm, D Yuniskis <not.going.to...(a)seen.com> wrote:
>> 1 Lucky Texan wrote:
>>> On Feb 19, 3:20 pm, D Yuniskis <not.going.to...(a)seen.com> wrote:
>>
>>>> So, the question I pose is: given that we increasingly use
>>>> multipurpose devices in our lives and that one wants to
>>>> *deliberately* reduce the complexity of the UI's on those
>>>> devices (either because we don't want to overload the
>>>> user -- imagine having a windowed interface on your
>>>> microwave oven -- or because we simply can't *afford* a
>>>> rich interface -- perhaps owing to space/cost constraints),
>>>> what sorts of reasonable criteria would govern how an
>>>> interface can successfully manage this information while
>>>> taking into account the users' limitations?
>>> If (and it's a big if) I understand where your interest lies, it is less
>>> in 'information overload' (I think the military has done a huge
>>> amount of research in this area for fighter pilots/'battlefield'
>>> conditions) and more in 'detection' of such overload/fatigue. If so, I
>>
>> Yes. Though think of it as *prediction* instead of detection.
>> I.e., what to *avoid* in designing a UI so that the user
>> *won't* be overloaded/fatigued/etc.

[snip]

>> Contrast this with limited context interfaces in which the
>> "previous activity" is completely obscured by the newer
>> activity (e.g., a handheld device, aural interface, etc.).
>>
>> So, my question tries to identify / qualify those types
>> of issues that make UI's inefficient in these reduced
>> context deployments.
>>
>>> expect a system to monitor 'key strokes' (mouse moves, whatever - user
>> Hmmm... that may have a corollary. I.e., if you assume keystrokes
>> (mouse clicks, etc.) represent some basic measure of work or
>> cognition, then the fewer of these, the less taxing the
>> "distraction".
>>
>>> input) and their frequency/uniqueness rates. Possibly some type of eye
>>> tracking could be helpful?
>
> Even reading rates could predict the onset of overload. Again, the Air

Yes, but keep in mind this is c.a.e and most of the "devices"
we deal with aren't typical desktop applications. I.e.,
the user rarely has to "read much". Rather, he spends
time looking for a "display" (item) and adjusting a "control"
to effect some change.

> Force has bumped into this issue. There is likely an entire branch of
> psychology dealing with these issues.
>
> As for the mechanics in a system, some could perhaps be implemented
> with present or near-term technology. Certainly the military could
> justify eye-tracking, brainwave monitoring or other indicators. But
> reading rates, mouse click rates, typing speed, etc., might be doable
> now. I can also envision some add-on widgets that might allow for, say
> a double right click to create a 'finger string'. As in tying a string
> around your finger. A type of bookmark that would recall the precise
> conditions of the system (time, date, screen display, url, etc.) when
> the user detected something troubling. May not be as precise as 'the

Actually, this is worth pursuing. Though not just when the user has
"detected something troubling" but, also, to serve as a "remember
what I was doing *now*".

I suspect a lot can be done by creating unique "screens" in
visual interfaces -- so the user recognizes what is happening
*on* that screen simply by its overall appearance (layout, etc.).
Though this requires a conscious effort throughout the entire
system design to ensure this uniqueness is preserved. I
suspect, too often, we strive for similarity in "screens"
instead of deliberate dis-similarity.
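
(One way to make that dis-similarity enforceable -- rather than just
an aspiration -- would be a design-time check: reduce each screen to
a crude layout signature and flag any pair that differs by too little.
A hypothetical C sketch, with an arbitrary 8x8 grid and no claim about
what threshold a real user would actually need:)

#include <stdint.h>

/* Reduce a screen layout to a crude 8x8 occupancy bitmap: a bit is set
   if any widget overlaps that cell.  Two screens whose bitmaps differ
   in only a few cells will likely "look the same" at a glance. */
struct widget { int x, y, w, h; };      /* in per-mille of screen size */

uint64_t layout_signature(const struct widget *w, int count)
{
    uint64_t sig = 0;
    for (int i = 0; i < count; i++) {
        int cx0 = w[i].x * 8 / 1000, cx1 = (w[i].x + w[i].w) * 8 / 1000;
        int cy0 = w[i].y * 8 / 1000, cy1 = (w[i].y + w[i].h) * 8 / 1000;
        for (int cy = cy0; cy <= cy1 && cy < 8; cy++)
            for (int cx = cx0; cx <= cx1 && cx < 8; cx++)
                sig |= (uint64_t)1 << (cy * 8 + cx);
    }
    return sig;
}

/* Number of differing cells; small values mean "too similar". */
int layout_distance(uint64_t a, uint64_t b)
{
    uint64_t d = a ^ b;
    int n = 0;
    while (d) { n += (int)(d & 1); d >>= 1; }
    return n;
}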

> infilled date was wrong', but it may be enough of a clue that, when
> the user reviews the recalled screen later, it triggers a memory like
> "hmmm, what was her....OH YEAH!, that date is wrong!" .
>
> fun stuff to think about.

*Taxing* stuff to think about! :> So much easier to just
look at a bunch of interfaces and say what's *wrong* with them!
Yet, to do so in a way that allows "what's right" to be
extracted is challenging.