From: rick_s on
On 6/23/2010 14:04, rick_s wrote:


> In 3D you now have to do that 720 times.

A 720x720 bitmap you essentially have to put to the screen 720 times,
and do that 25 times per second.
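
To put numbers on that, here is a rough back-of-the-envelope sketch in
Python; the 1 byte per voxel is my assumption, not anything from a spec:

    # Rough voxel-update arithmetic for a 720x720x720 volume at 25 Hz.
    side = 720
    fps = 25
    voxels = side ** 3                  # 720 slices of a 720x720 bitmap
    updates = voxels * fps
    print(f"{voxels:,} voxels per frame")           # 373,248,000
    print(f"{updates:,} voxel updates per second")  # 9,331,200,000
    print(f"{updates / 1e9:.1f} GB/s at 1 byte per voxel")

That is over nine billion voxel updates a second, which is why the cheat
below makes sense.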

So let's cheat then and use a video, and project that into the room, like
the Kate Moss video; then let's animate those red dots at the end.

That takes a smaller amount of RAM: small cubes just being projected into
a larger field of space-time.

Not cubes exactly but roughly speaking.

So then hearken back to that SONY page that had the shark sticking out
of the screen, and then to the small collection on the desk below it that
is a hologram.

I think they are suggesting that is about the size we want to begin with,
since the architecture we have now is 2D architecture faking 3D, and that
is maybe the best they can do.

That size. Same with Ben's hologram in the movie from 2003; the woman is
about that size.

So then what is that? 320 by 320? It's smaller than that in those
examples, but let's suppose you could manage 320 by 320 and do all those
complex calculations in real time to make a fully animated 3D scene,
moving smoke and all.

That is because wishing for 3D architecture this far from Christmas is
too much to expect.

Why do I get the feeling that people are telling me they want these for
Christmas? Oh yeah, it said that in the Paycheck trailer at the end,
because the movie was probably released at Christmas.

So I am finally getting the hint now that SONY has released the sharks,
and thankfully, they do not have laser beams on their heads.


In your 320x320x320 magic box, yours can have laser-beam eyes, or, like
the second fashion video I showed from 2008, the scenes of jellyfish etc.
reflected off glass; only that was just 2D imagery.

Keep in mind you can only see a flat 2D plane in front of you at any
time in the real world. So anything can be represented in that plane, but
you have to cross-section a 3D environment to produce that plane.
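
Pulling that plane out of a voxel grid is cheap. A minimal sketch with
NumPy, where random data stands in for a real scene:

    import numpy as np

    # A hypothetical 320^3 voxel volume.
    volume = np.random.randint(0, 256, size=(320, 320, 320), dtype=np.uint8)

    # The flat 2D plane you actually see is one slice along an axis.
    plane = volume[:, :, 160]   # a 320x320 cross section
    print(plane.shape)          # (320, 320)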

We here are expecting we will have 320 cubed to work with, and we won't
worry about cross sections for display, because we want to be able to
walk around it or view it from any angle standing on a horizontal plane.
We could rotate around it and it would all look real.
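
Here is why 320 cubed is so much more manageable than the 720 case above;
the 1 byte per voxel is again my assumption:

    # Raw voxel throughput: 320^3 vs 720^3 at 25 frames per second.
    for side in (320, 720):
        voxels = side ** 3
        print(f"{side}^3 = {voxels:,} voxels, "
              f"{voxels * 25 / 1e9:.2f} GB/s at 25 fps")

The 720 cube needs more than eleven times the throughput of the 320 cube.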

To make a larger scene, you do what they did in 2009, and use a video
hologram.

But we would be able to reconstruct the black Librarian from the Time
Machine remake, since as an AI he is showing things from the library,
which can be 2D, some of it text, while he himself will be a fully
interactive 3D hologram, about 2 feet high. And adjustable.

Scale is not important; it just depends on processing power.
His inner programming is small in power consumption compared to just
animating things interactively and realistically in a 3D environment.

No matter how much information he knows, Google seems to be able to look
things up in a second or less.

So if he is retrieving information then he needs to be good at parsing text.

But he can put up several examples at once and narrow things down etc.
Good indexing is the key to fast retrieval.

But if you want to see how fast your computer can find text, a Word
document search is maybe not the best way; better to make a small program
and test it.

Combine 100 large text files into one giant text file, and then search
for a string in that large text file by iterating through every line,
parsing it for a substring.
That's equivalent to searching a knowledge base. Now with indexing you
can jump through the text without iterating through every line.
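
Here is a minimal sketch of that test; combined.txt and the search word
are placeholders. First the brute-force scan, then a crude inverted index
that jumps straight to matching lines:

    import time
    from collections import defaultdict

    def naive_search(path, needle):
        # Iterate through every line, checking it for the substring.
        hits = []
        with open(path, encoding="utf-8", errors="ignore") as f:
            for lineno, line in enumerate(f, 1):
                if needle in line:
                    hits.append(lineno)
        return hits

    def build_index(path):
        # Map each word to the set of line numbers it appears on.
        index = defaultdict(set)
        with open(path, encoding="utf-8", errors="ignore") as f:
            for lineno, line in enumerate(f, 1):
                for word in line.lower().split():
                    index[word].add(lineno)
        return index

    t0 = time.time()
    hits = naive_search("combined.txt", "paycheck")
    print(len(hits), "hits, scan took", time.time() - t0, "seconds")

    index = build_index("combined.txt")   # pay this cost once up front
    t0 = time.time()
    hits = index.get("paycheck", set())
    print(len(hits), "lines, lookup took", time.time() - t0, "seconds")

The scan time grows with the size of the file; the indexed lookup stays
near zero no matter how big the knowledge base gets.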

So our AI assistant will show video, search things, do some pointing if
you want, interact as humanly as possible with his showcasing
capabilities. Just like that AI in Time Machine.

Now, to have conversation well, again you need to do a lot of if
statements, too many to do yourself, or even with a group, and expect
results of any merit.
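
To see the shape of the problem, here is a toy version with made-up
patterns and replies. Every new exchange means another rule, which is
exactly why one person can't write enough of them:

    # Each rule is one of those "if statements", written as data
    # so other people can add rules without touching the code.
    RULES = [
        ("hello",  "Hello. What would you like to see?"),
        ("search", "Searching, one moment."),
        ("movie",  "Which film should I fetch?"),
    ]

    def respond(utterance):
        text = utterance.lower()
        for pattern, reply in RULES:
            if pattern in text:
                return reply
        return "I don't know that yet."

    print(respond("Can you search for something?"))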

You have to think of the pyramid effect and start small, encourage
development by others, and then it will grow.

And other people as well will have assistants. Now remember that
assistants in Windows have been there for a long time, and they are
pretty useless. Program support assistants.

Your assistant will know things that are important to you.

So when you tell it, using sign language, speech, text, or whatever
interface, download Ben Affleck in Paycheck, it will search for and
download the movie. It will then have that on file, and you should then
be able to tell the AI to excerpt it for you, and then you can use
commands to teach it the useful information that is in that movie. Keep
this, discard that. And give it a few meaningful index words. You have to
tell it what the important information is.
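
The keep-this-discard-that bookkeeping could be as simple as this sketch;
the names and the sample excerpt are made up, just to show the shape of
the data:

    # index word -> list of kept excerpts
    library = {}

    def keep(excerpt, index_words):
        for word in index_words:
            library.setdefault(word.lower(), []).append(excerpt)

    keep("clip 00:41:10 to 00:42:05, the reverse engineering scene",
         ["Paycheck", "reverse engineering"])
    print(library["reverse engineering"])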

Now if it could use speech recognition, it might be able to log a lot of
meaningful information itself.

If you can identify and separate Ben Affleck's voice, then you have the
main character, and so even if you have a lot of useless information
tagging along, there might be useful information in there.

So you might just tell it to get that movie, and it does that and garners
useful information from it itself,
and indexes the information either as thumbnails or some parsed text.
Use the net for reviews if it has nothing to do; get more information
from the book it's based on if it can find it. Let it gather meaningful
information and then try to find ways to sift that information into
useful information.

It can then easily mimic a review of the movie, show you excerpts etc.

And do this for any film, including documentaries, and any on-topic
information that it can gather in any form. Even MP3 audiobooks.

Storage of data is not really a problem. But you have to lend your mind
to the AI. You have to do the thinking for him when you program how he
will gather and sort useful information.


Working backwards is the best way, and you notice in that movie trailer
they talk about reverse engineering, which he does there as well.

I showed you the information pathway on that already.

So we ourselves will use that by imagining what the finished product we
want is, and then working backwards with a backwards chaining inference
engine, even though what we are reverse engineering does not yet exist.
As far as we know.

So we can reverse engineer something that has never been made before.
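
A backwards chaining engine is simple enough to sketch. Here is a toy
one, with rules invented for illustration, that starts from the finished
product and works back to what has to exist first:

    # goal -> the subgoals it depends on; an empty list is a starting point.
    RULES = {
        "finished machine": ["working prototype"],
        "working prototype": ["conceptual drawing", "parts list"],
        "conceptual drawing": [],
        "parts list": [],
    }

    def chain_back(goal, depth=0):
        print("  " * depth + goal)
        for subgoal in RULES.get(goal, []):
            chain_back(subgoal, depth + 1)

    chain_back("finished machine")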

We just start a little farther up. We want it to look like this, so we
start with a conceptual drawing or an example from film or wherever, and
then make the machine.

We can do the same for our AI.

We ask: what does he need to know, and when does he need to know it, and
can we keep it simple but still have effective information transfer?