From: unruh on
On 2010-01-31, Simon Johnson <simon.johnson(a)gmail.com> wrote:
> On Jan 31, 12:01 am, Paul Rubin <no.em...(a)nospam.invalid> wrote:
>> Simon Johnson <simon.john...(a)gmail.com> writes:
>>
>> > We now understand that the grammars that (good) programming languages
>> > provide are unambiguous and relatively easy to understand. ...
>> > So I ask, can we not translate the existing body of mathematics into
>> > such a language and show proofs of the relevant theorems using this
>> > grammar? This would make mathematics accessible to people like me.
>
> First of all, thank you for the links. They were very interesting.
>
>> In reality though, mathematics (as practiced by humans) doesn't work
>> like that. It's all about the exercise and development of mathematical
>> thought patterns, just like dancing is about the exercise and
>> development of physical motion patterns.
>
> Having read the articles you linked to, I'm tempted to conclude that
> if you can't produce a machine-verifiable proof, what you have is not a
> proof; it's a conjecture.
>
> People objected to the four colour theorem because it was verified by
> computer, and they totally and utterly missed the point. They asked how
> it could be a real proof if a "computer" verified it. What they should
> have asked is: "hurray, what *else* can we now verify?"

No, they objected because the proof cannot be checked. One of the things
you will discover when you start programming anything is that bugs creep
in. How do we know that the computer program did not contain bugs?


>
> I suspect that once machine verification becomes mature, you'll find
> that there are serious but subtle errors in a large number of proofs.

And that there are subtle errors in a large number of computer programs.

>
> Once these are discovered, you'll see a giant (overnight?) watershed in
> the use of this technology.
>
> Anyone who has developed software will be fully aware of the subtle
> bugs software contains. Formal inspection (which is the norm for
> mathematics) only removes 90% of bugs. There are going to be some real
> shockers out there.
>
> You might argue that the proving programs will contain errors. They
> will, that's for sure, but it's like double-entry bookkeeping. The
> fact that you want to prove X with said program reduces errors in both
> the proof and the program. Moreover, once you've proved that a theorem
> prover works, that proof extends to any theorem it verifies.
>
> Also, the theorem prover is going to be much, much smaller than the
> set of theorems it proves. This "compression" makes mathematics more
> robust.

Huh. Compression removes robustness. Robustness comes in part from
redundancy.
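
For concreteness, here is a minimal sketch of the kind of machine-checked
proof under discussion, written in Lean 4 purely as an illustration (no
particular prover is named in the thread). It shows the point made in the
quoted text: a small kernel re-checks every proof term, so trust
concentrates in one comparatively tiny program rather than in each
individual proof.

    -- Illustration only: a statement and its machine-checked proof in Lean 4.
    -- The kernel re-checks the proof term; a term that does not type-check
    -- is rejected, however plausible the informal argument may look.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

A bogus variant, say `a + b = b`, fails to check against the same kernel,
which is the "double-entry" effect described above.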


>
>> There was something like that called the QED Manifesto which didn't
>> get much traction:
>>
>> http://www.cs.chalmers.se/Cs/Research/Logic/TypesSS05/Extra/wiedijk_2...
>>
>
> This work was ahead of its time. Machines were much too slow to
> realise this dream when this paper was published.
> I think we have a better shot at doing this today.
>
> Saying that "In reality though, mathematics (as practiced by humans)
> doesn't work" is silly. Mathematics will change to use this.
>
> A century ago "computers" were humans who made (many) mistakes
> producing log tables and the like. Today, they're nearly flawless
> circuits that compute results to unimaginable levels of accuracy.
>
> Tomorrow, most proofs will be submitted in computer-verifiable form. I
> fully expect every single one of the modern mathematicians to have to
> die to make this happen (as was the case with calculus), but it *will*
> happen.

Your history of mathematics is deeply flawed. No one had to die to start
using calculus. They began using it immediately. The main thing that
prevented its adoption was the secrecy with which people like Newton
surrounded it.

>
> The question is whether you're one of the defeatist conservatives or one
> of the champions of the movement. Fortune rarely favours the
> conservative in mathematics.

There are many that fortune does not favour.

>
> Cheers,
>
> Simon
From: Mok-Kong Shen on
Simon Johnson wrote:
[snip]
> For example, if I wanted to understand the proof of the theorem that
> AES is resistant to differential cryptanalysis, I could recurse
> through the proof tree until such a point that the result is obvious.
[snip]

If my memory is right, I have read that there have been machine proofs of
crypto communication protocols. I had not paid attention, but perhaps
some experts in the group could provide the relevant references. This
is nothing extraordinary, since there has long been much work done
on machine verification of "normal" protocols, employing diverse
theoretical tools, e.g. Petri nets, and, if I don't err, even
special programming languages and software have been developed. The
whole may be subsumed under program verification, where sophisticated
logical procedures are employed to do the work. At the very
beginning, program verification was barely practical, because the
machine power was then too feeble, but that is no longer so, and we
know that e.g. chip design essentially depends on verification software.

Attacking cipher systems with math is math operating on a description
of ciphers. Thus I don't see any reason in principle why one can't use
a machine to employ math to prove whether some types of attacks can
correctly function or not on a given cipher. Of course, some people may
be reluctant to accept a machine proof of a theorem, as in the case of
the four colour theorem.
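
As a toy illustration of what a machine-checked statement about a cipher
could look like (a sketch only: the "cipher" below is one-bit XOR, and the
theorem says nothing about AES, differential cryptanalysis, or any real
design), one might write in Lean 4:

    -- Toy sketch: encryption and decryption are XOR with a one-bit key,
    -- and the theorem states that decryption undoes encryption. The proof
    -- is checked mechanically by case analysis over the two Bool values.
    def encrypt (k m : Bool) : Bool := Bool.xor m k
    def decrypt (k c : Bool) : Bool := Bool.xor c k

    theorem decrypt_encrypt (k m : Bool) : decrypt k (encrypt k m) = m := by
      cases m <;> cases k <;> decide

Real protocol and cipher analyses use far richer models, but the mechanism
is the same: the claim is stated formally and the machine checks the proof.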

M. K. Shen
From: Kristian Gjøsteen on
Simon Johnson <simon.johnson(a)gmail.com> wrote:
>There are many people in this group that will defend, until death, the
>notion that dense mathematical notation in academic papers is not only
>required but _necessary_.

This is nonsense, of course. People use mathematical notation because
it is convenient for expressing mathematical ideas and arguments.

>[...]
>So I ask, can we not translate the existing body of the mathematics in
>to such a language and show proofs of the relevant theorems using this
>grammar? This would make mathematics accessible to people like me.

To the first question: yes, in principle. To the second: no.

>[...]
>For example, if I wanted to understand the proof of the theorem that
>AES is resistant to differential cryptanalysis, I could recurse
>through the proof tree until such a point that the result is obvious.

You might be able to verify the proof, but not understand it.

Understanding is much stronger than merely verifying.

>[...]
>I want to establish a similar campaign in mathematics.

Then you should use less colourful language. Now you sound like a crank.

If you really want to spend energy on something, spend it on making
papers more readable.

That said, there is an argument for applying formal methods and
computer verification to cryptographic protocols. In my experience,
I lose confidence in my "proofs" as my cryptographic protocols become
more complicated. Formal, automatically verifiable proofs may work
well here.

--
Kristian Gjøsteen
From: Rainer Urian on

"Kristian Gj�steen" <kristiag+news(a)math.ntnu.no> schrieb im Newsbeitrag
news:hk4f9v$cio$1(a)orkan.itea.ntnu.no...
> Simon Johnson <simon.johnson(a)gmail.com> wrote:
>>There are many people in this group that will defend, until death, the
>>notion that dense mathematical notation in academic papers is not only
>>required but _necessary_.
>
> This is nonsense, of course. People use mathematical notation because
> it is convenient for expressing mathematical ideas and arguments.

Here I would partially agree with Simon. There are indeed lots of
mathematical papers with artificially complicated notation and arguments -
especially in the cryptographic sector.

Rainer

From: Simon Johnson on
> Here I would partially agree with Simon. There are indeed lots of
> mathematical papers with artificially complicated notation and arguments -
> especially in the cryptographic sector.

Kristian is right to some degree. There's a certain crankiness in
suggesting that everyone else should change their ways to accommodate
me.
Mathematicians do need a language that they can use to describe
problems to each other, and restating what the notation means in each
paper would be silly.

However, I feel that some people write their papers as if they don't
want them to be understood. I'm sure everyone has read a
paper that feels like that, and it's deeply frustrating when you
encounter one.

Just calling people out on that would be a start.

Cheers,

Simon