From: D.M. Procida on
Richard Tobin <richard(a)cogsci.ed.ac.uk> wrote:

> In article
> <1jhpxfv.14jvskl1fkags5N%real-not-anti-spam-address(a)apple-juice.co.uk>,
> D.M. Procida <real-not-anti-spam-address(a)apple-juice.co.uk> wrote:
>
> >> No-one ever asks if submarines can swim. The fact that they do ask
> >> whether computers can think shows that they don't mean it in a
> >> narrow definitional sense. Dijkstra's comment reads to me as a
> >> pedantic refusal to engage with the question.
>
> >Refusal, yes - but why pedantic? It's a slightly arch way of suggesting
> >that asking whether computers can think is simply the wrong question.
>
> *If* people were really asking the wrong question, I would agree. But
> in my experience of AI (which probably doesn't go back to whenever
> Dijkstra made his comment), they aren't. When people ask "can
> computers think" they are at worst phrasing the question poorly.

It seems like a perfectly good question to me; even if it turns out not
to be the one that ultimately needs to be answered, I think it demands
to be embraced.

So if Dijkstra refuses to engage with it, I think he's wrong. But I
don't see any reason to think that the reason for the refusal is
pedantry (or to consider his remark stupid, or indeed anything less than
a useful answer to the question itself).

Daniele
From: James Jolley on
On 2010-04-29 22:33:21 +0100,
real-not-anti-spam-address(a)apple-juice.co.uk (D.M. Procida) said:

> Richard Tobin <richard(a)cogsci.ed.ac.uk> wrote:
>
>> In article
>> <1jhpxfv.14jvskl1fkags5N%real-not-anti-spam-address(a)apple-juice.co.uk>,
>> D.M. Procida <real-not-anti-spam-address(a)apple-juice.co.uk> wrote:
>>
>>>> No-one ever asks if submarines can swim. The fact that they do ask
>>>> whether computers can think shows that they don't mean it in a
>>>> narrow definitional sense. Dijkstra's comment reads to me as a
>>>> pedantic refusal to engage with the question.
>>
>>> Refusal, yes - but why pedantic? It's a slightly arch way of suggesting
>>> that asking whether computers can think is simply the wrong question.
>>
>> *If* people were really asking the wrong question, I would agree. But
>> in my experience of AI (which probably doesn't go back to whenever
>> Dijkstra made his comment), they aren't. When people ask "can
>> computers think" they are at worst phrasing the question poorly.
>
> It seems like a perfectly good question to me; even if it turns out not
> to be the one that ultimately needs to be answered, I think it demands
> to be embraced.
>
> So if Dijkstra refuses to engage with it, I think he's wrong. But I
> don't see any reason to think that the reason for the refusal is
> pedantry (or to consider his remark stupid, or indeed anything less than
> a useful answer to the question itself).
>
> Daniele

It's an answer, but how is it necessarily a useful one in today's age of research?

From: D.M. Procida on
James Jolley <jrjolley(a)me.com> wrote:

> > It seems like a perfectly good question to me; even if it turns out not
> > to be the one that ultimately needs to be answered, I think it demands
> > to be embraced.
> >
> > So if Dijkstra refuses to engage with it, I think he's wrong. But I
> > don't see any reason to think that the reason for the refusal is
> > pedantry (or to consider his remark stupid, or indeed anything less than
> > a useful answer to the question itself).
>
> It's an answer, but how is it necessarily a useful one in today's age of research?

I think it's a useful answer because it obliges us to reconsider the
terms of the question.

Daniele
From: James Jolley on
On 2010-04-29 22:56:26 +0100,
real-not-anti-spam-address(a)apple-juice.co.uk (D.M. Procida) said:

> James Jolley <jrjolley(a)me.com> wrote:
>
>>> It seems like a perfectly good question to me; even if it turns out not
>>> to be the one that ultimately needs to be answered, I think it demands
>>> to be embraced.
>>>
>>> So if Dijkstra refuses to engage with it, I think he's wrong. But I
>>> don't see any reason to think that the reason for the refusal is
>>> pedantry (or to consider his remark stupid, or indeed anything less than
>>> a useful answer to the question itself).
>>
>> It's an answer, but how is it necessarily a useful one in today's age of research?
>
> I think it's a useful answer because it obliges us to reconsider the
> terms of the question.
>
> Daniele

I suppose so, if you can call it reconsidering; it's more like restructuring the question to fit a proposed ethic.

From: Peter Ceresole on
James Jolley <jrjolley(a)me.com> wrote:

> >> It's an answer, but how is it necessarily a useful one in today's age
> >> of research?
> >
> > I think it's a useful answer because it obliges us to reconsider the
> > terms of the question.
> >
> > Daniele
>
> I suppose so. If you can call it reconsidering, it's more like
> restructuring it to fit a proposed ethic.

I think it makes us think about what we mean by 'think'. And that's
always a useful thing to do.

For what it's worth (not a lot) I don't believe that we know what we
mean by it, and I suspect that at some point in the future we will
realise that in all the ways we want machines to 'think' they are
already doing it, and we never realised it. And the main problem is that
we have an inflated idea of what our 'thinking' is. I think it's a lot
more plonking and constrained by circumstance than we believe. More
operational, if you like.

But as I say, in machine terms we're not there yet.
--
Peter