From: Jon Slaughter on
Suppose we have a state machine, but instead of having it run from start to
finish we use a "virtual function" (or rather a pointer to a function) to
represent the current transition. After each transition, the state is
returned to the caller.

Essentially, instead of forcing the state machine to run from start to
finish or using threading, we simply have each function "save" its position
in the process, and execution resumes from there on the next call. So we
"break" out of the state machine at every transition to allow a sort of
asynchronous (or parallel) behavior.

Is there a specific name for this sort of implementation of a state machine?
The goal is simply to run two independent state machines in parallel without
having to wait for one of them to reach some idle state. Another way of
looking at it is that we are inserting an idle state between every two
states (or into every transition).

It is very similar to threading/task switching, except we never interrupt at
inconvenient locations that would require saving the "state", which costs
cycles.
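
For concreteness, here is a rough C sketch of the kind of thing I mean (all
the names are made up for illustration; it is not meant as a finished
design). Each state function does one transition's worth of work, records
where to resume, and returns, so the caller can interleave two machines:

#include <stdio.h>

/* Each state is a function that performs one transition and then
   records the next state in the machine before returning to the
   caller.  A NULL "next" marks the idle/finished state. */

struct machine {
    void (*next)(struct machine *m);  /* saved "position" in the process */
    const char *name;
    int remaining;                    /* example per-machine data */
};

static void state_a(struct machine *m);
static void state_b(struct machine *m);

static void state_a(struct machine *m)
{
    printf("%s: state A\n", m->name);
    m->next = state_b;                /* transition A -> B */
}

static void state_b(struct machine *m)
{
    printf("%s: state B\n", m->name);
    m->next = (--m->remaining > 0) ? state_a : NULL;  /* loop or stop */
}

/* Run exactly one transition; nonzero means the machine is still live. */
static int step(struct machine *m)
{
    if (m->next)
        m->next(m);
    return m->next != NULL;
}

int main(void)
{
    struct machine m1 = { state_a, "m1", 2 };
    struct machine m2 = { state_a, "m2", 3 };
    int live;

    /* Interleave the two machines one transition at a time instead of
       running either of them to completion. */
    do {
        live  = step(&m1);
        live |= step(&m2);
    } while (live);

    return 0;
}

Each call to step() runs a single transition and hands control straight back,
so the caller is free to interleave as many machines as it likes.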


From: Gene on
On Mar 7, 11:02 am, "Jon Slaughter" <Jon_Slaugh...(a)Hotmail.com> wrote:
> Suppose we have a state machine, but instead of having it run from start to
> finish we use a "virtual function" (or rather a pointer to a function) to
> represent the current transition. After each transition, the state is
> returned to the caller.
>
> Essentially, instead of forcing the state machine to run from start to
> finish or using threading, we simply have each function "save" its position
> in the process, and execution resumes from there on the next call. So we
> "break" out of the state machine at every transition to allow a sort of
> asynchronous (or parallel) behavior.
>
> Is there a specific name for this sort of implementation of a state machine?
> The goal is simply to run two independent state machines in parallel without
> having to wait for one of them to reach some idle state. Another way of
> looking at it is that we are inserting an idle state between every two
> states (or into every transition).
>
> It is very similar to threading/task switching, except we never interrupt at
> inconvenient locations that would require saving the "state", which costs
> cycles.

Yes. It's called a "pull" implementation. The other is (you guessed
it) a "push" implementation. In the pull implementation, the caller
controls the input consumption. Control returns to the caller every
time some prescribed prefix of the input is consumed. In a push, input
consumption occurs in an inner loop not accessible to the programmer;
control is transferred back to the caller by some kind of callback or
other "action code" attachment mechanism.

A similar concept is "iterator," but that normally refers only to a pull
implementation of access to an aggregate data structure; in that context it
is the aggregate's elements that are being consumed.

Gene
From: Jon Slaughter on
Gene wrote:
> On Mar 7, 11:02 am, "Jon Slaughter" <Jon_Slaugh...(a)Hotmail.com> wrote:
>> Suppose we have a state machine, but instead of having it run from
>> start to finish we use a "virtual function" (or rather a pointer to a
>> function) to represent the current transition. After each transition,
>> the state is returned to the caller.
>>
>> Essentially, instead of forcing the state machine to run from start
>> to finish or using threading, we simply have each function "save" its
>> position in the process, and execution resumes from there on the next
>> call. So we "break" out of the state machine at every transition to
>> allow a sort of asynchronous (or parallel) behavior.
>>
>> Is there a specific name for this sort of implementation of a state
>> machine? The goal is simply to run two independent state machines in
>> parallel without having to wait for one of them to reach some idle
>> state. Another way of looking at it is that we are inserting an idle
>> state between every two states (or into every transition).
>>
>> It is very similar to threading/task switching, except we never
>> interrupt at inconvenient locations that would require saving the
>> "state", which costs cycles.
>
> Yes. It's called a "pull" implementation. The other is (you guessed
> it) a "push" implementation. In the pull implementation, the caller
> controls the input consumption. Control returns to the caller every
> time some prescribed prefix of the input is consumed. In a push, input
> consumption occurs in an inner loop not accessible to the programmer;
> control is transferred back to the caller by some kind of callback or
> other "action code" attachment mechanism.
>
> A similar concept is "iterator," but that normally refers only to a pull
> implementation of access to an aggregate data structure; in that context it
> is the aggregate's elements that are being consumed.
>

Thanks! It's nice to know the names for things ;)