From: rick_s on

> So its a very complicated thing, and beyond speech, people analyze
> categorize and recall in context information that they see all day long.
>
> If you could get it to learn from TV, you could sure save a lot of time
> and effort.
>

So what's the solution to interactive holographic projected assistants?
Besides a lot of sweat, elbow grease, and hard-coding things for it to
say? For example, a lab assistant could be modelled by going into a
hundred labs, picking similar people, writing down everything they say,
programming it all in as a set of rules for what to say, and giving the
character a bit of personality for how to say it.

And it still won't know what you are talking about the minute you do or
say something it has not seen done in those hundred labs.

What Sony does with its robots is have humans assist them.

What you might end up with could look like the AI helper programs that
answer product-related questions by analyzing a sentence and giving a
standard reply.

(See the Future Shop home page, which has a search-term box and a short
video of a person who can answer questions that are in the database, as
an example.)

That's sort of like the older AI programs, which can even parse a
sentence using fuzzy logic if your question is not a perfect match with
something in the database. The same person might also be able to use a
webcam, maintaining the same look, answer the question in real time, and
record that question and answer.
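That kind of fuzzy matching against a canned database can be sketched in a few lines. Everything here is made up for illustration: the FAQ entries are hypothetical, and Python's `difflib.get_close_matches` stands in for whatever fuzzy-logic parser such a program would actually use.

```python
import difflib

# A tiny hypothetical product-support database of canned question/answer pairs.
FAQ = {
    "what is the warranty period": "All products carry a one-year warranty.",
    "do you ship internationally": "Yes, we ship to most countries.",
    "how do i reset the device": "Hold the power button for ten seconds.",
}

def answer(question: str) -> str:
    """Return the canned reply for the closest known question, if any."""
    key = question.lower().strip("?! .")
    # Fuzzy match: tolerate wording that isn't a perfect match with the database.
    matches = difflib.get_close_matches(key, FAQ.keys(), n=1, cutoff=0.6)
    if matches:
        return FAQ[matches[0]]
    return "Sorry, I don't have an answer for that yet."

print(answer("What's the warranty period?"))
```

A question worded slightly differently from the stored one ("What's" vs. "what is") still hits the right canned reply; anything too far from every entry falls through to the apology line.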
But that's product support, not social interaction or broad scope
conversation.

But if it is just a holograph that can answer questions like "Are there
any calls for me?" or "What's the weather like outside?", you can still
have reasonable interactive social discourse based on information you
need or want on a daily basis, such as the news, your day planner, and
the kind of conversation you might have while gulping down a cup of
coffee and heading out the door.

That's not a lot of programming.

Stray off topic when you are speaking to your assistant, though, and you
have nothing again. Speech recognition software that can pick up speech
from TV is not here yet. You need to train the system on a specific
voice pattern before it will recognize commands: say the word "Hello",
and once you have said it, it is recognizable. So getting an AI to learn
from TV would mean speech recognition, then speech patterning, and then,
if it could isolate conversation and attribute who said what to whom, it
could write its own if-statements. Those might have a lot to do with
soap operas, though, and little to do with science.

But any question and answer it can log gives it a response to that
question. So you see, that sounds doable: auto-learning.
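That auto-learning loop, logging every question/answer pair the system sees so the question becomes answerable the next time, can be sketched like this. The `AutoLearner` class and its sample data are invented for illustration:

```python
class AutoLearner:
    """Logs every question/answer pair it observes; a logged question
    becomes one it can respond to later."""

    def __init__(self):
        self.memory = {}  # normalized question -> answer

    def observe(self, question: str, reply: str) -> None:
        # "Any question and answer it can log gives it a response."
        self.memory[self._norm(question)] = reply

    def respond(self, question: str):
        # Returns None for anything it has never seen logged.
        return self.memory.get(self._norm(question))

    @staticmethod
    def _norm(text: str) -> str:
        # Ignore case, punctuation at the edges, and extra spaces.
        return " ".join(text.lower().strip("?! .").split())

bot = AutoLearner()
bot.observe("Are there any calls for me?", "Two missed calls from the lab.")
print(bot.respond("are there any calls for me"))
```

The hard part the post describes, isolating conversation on TV and attributing who said what to whom, is exactly the part this sketch assumes has already happened before `observe` is called.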



From: rick_s on
On 6/21/2010 2:15, rick_s wrote:
> [...]
>
> But any question and answer it can log, allows it a response to that
> question. So you see that sounds doable. Auto learning.
>
Another way to get learning into its head is to mobilize the grannies of
the world and have them write down all the questions and answers they
can think of based on every Wikipedia page there is, get punks to do the
same for slang, and get real men to do the same for bodybuilding. If you
had the resources, you could write down a lot of questions and answers,
including chat, until you get to personal questions, which are not
relevant. Most of the questions you program in will never come up in
conversation with the AI.

But now consider whether it might be worthwhile to program an AI like
the Librarian in the Time Machine remake. He also has a colorful
personality, so he has a set of styles that he can put on like clothes
to tailor his interaction to different types of people.
He is so advanced that he is played by an actor, because we can't even
imagine programming in that much behavioral information at this point.

But he draws his answers from the library. He is not matching up a
question with a stored answer; he is using sentence parsing to match the
question against library data.
In real conversation you might say, "How do I do this complex task?" or
"I am going on holiday to Spain; what do I need to know?"
That might be asking too much, but not for this AI in that show, who is
a person in every respect. Easy to do if you just use a person.
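The Librarian's trick, parsing the question and matching it against library data rather than against canned answers, can be sketched as a crude word-overlap search. The two "library" entries below are invented for illustration; a real system would use far better parsing than counting shared words:

```python
import string

# Hypothetical library: a title and a short body of reference text per entry.
LIBRARY = {
    "travel to spain": "Spain uses the euro; summers are hot, and you will need a passport.",
    "brewing coffee": "Use water just off the boil and freshly ground beans.",
}

def tokens(text: str) -> set:
    """Lowercase words with surrounding punctuation stripped."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def lookup(question: str) -> str:
    q = tokens(question)
    # Score each entry by word overlap between the question and the
    # entry's title plus body; return the body of the best match.
    best = max(LIBRARY, key=lambda t: len(q & (tokens(t) | tokens(LIBRARY[t]))))
    return LIBRARY[best]

print(lookup("I am going on holidays to Spain, what do I need to know?"))
```

The Spain question shares words with the travel entry ("to", "spain", "need") and none with the coffee entry, so the travel text comes back, which is the matching-question-to-library-data idea in miniature.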

They have grown rat brain cells and are using them to drive a little
test robot, because the cells respond to the sensor that says a wall is
straight ahead, so they can use the sensitivity of the cells to steer
the robot away from obstacles.
Mind you, you can do that without the brain cells as well, by using a
small computer that reads the input data from the sensor.
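The computer-instead-of-brain-cells version is almost trivial: read the forward distance sensor and turn away when a wall is straight ahead. The threshold and command names below are made-up placeholders:

```python
def steer(distance_cm: float) -> str:
    """Map a forward distance reading to a motor command."""
    if distance_cm < 20:      # wall straight ahead: turn away from it
        return "turn_left"
    return "forward"

# Simulated sensor readings as the robot approaches a wall.
for reading in [120, 80, 40, 15]:
    print(reading, "->", steer(reading))
```

A loop like this, reading the sensor and emitting a command, is all the "small computer" in the post needs to do; the rat-cell experiment is interesting for other reasons than the control logic.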

We are about as comfortable with that technology as we are with nanobots
crawling all over us.

The nice thing about a holograph is that it has no substance: no disease
carrying, no accidental injury, and none of the dangers associated with
intimate contact. It's not a robot; it's more like watching something in
progress. As a learning tool, entertainment medium, or artistic
expression, it can still add a lot of talent to a 3D PC. Even if it
can't converse on every topic, it could read sign language via input
from a theremin-style device, and even reply in sign language, and in
that way you put communicating with it within the grasp of today's
technology.

You can only say so much with sign language so most of it is about
pointing at other things or referring to known or recent things.

And by using that to begin with, you could easily pass the Turing test
under those simplified behavioral conditions.