Hugo Hacker News

The Inconsistency of Arithmetic (2011)

hypersoar 2021-08-17 00:38:34 +0000 UTC [ - ]

Huh, I remember when this happened, but I hadn't realized it happened in some blog comments.

For those who are unaware, the comment pointing out the flaw in Nelson's proof is by Terence Tao, one of the most extraordinary mathematicians in the world. He's won a good fraction of the field's medals, including the Fields Medal. A nice, down-to-earth guy, too, by all accounts.

I jokingly call this the time he literally saved mathematics.

btilly 2021-08-17 01:39:03 +0000 UTC [ - ]

He is also the author of what I consider one of the most ironic blog posts of all time.

In https://terrytao.wordpress.com/career-advice/does-one-have-t... he argues that you do not have to be a genius to do first rate mathematics.

The irony is that this is being argued by the person with the best documented genius in the history of mathematics. His official IQ of 230 remains the highest officially measured IQ in the world. He taught himself to read by age 2. He was taking Calculus at age 7. He remains the youngest medalist at every level of the International Mathematical Olympiad. That doesn't quite convey it: the youngest person to earn a bronze in that competition was 10; silver, 11; gold, 12. All three records were set by Terence Tao, who then didn't bother competing any more.

See https://newsroom.ucla.edu/releases/Terence-Tao-Mozart-of-Mat... for some more of his accomplishments.

His success is comparable to that of any person in the history of mathematics. Yes, I am including Euler, Gauss, and Erdős. Euler possibly covered a bigger breadth of math. Gauss created more fields. Erdős solved more problems. And Tao has made more progress on more hard problems that had stumped everyone else.

jhgb 2021-08-17 02:12:55 +0000 UTC [ - ]

> He taught himself to read by age 2.

Well, I taught myself to read by the age of 3...

> He was taking Calculus at age 7.

...OK, clearly something went awry with me between the ages of 3 and 7.

asddubs 2021-08-17 02:47:35 +0000 UTC [ - ]

i hit my personal rock bottom at age 5. i never recovered

hellbannedguy 2021-08-17 03:07:36 +0000 UTC [ - ]

I flunked kindergarten. (It hurt even at that age.)

chalst 2021-08-18 10:27:01 +0000 UTC [ - ]

&LOL;

The early mastery of swearing I picked up between backseat driving (that is, listening to my Scottish mother's running commentary on other drivers) and my spongelike absorption of Canadian broadcast media earned my parents a stern reproach from the ultra-bourgeois kindergarten they sent me off to.

glitchc 2021-08-17 03:37:37 +0000 UTC [ - ]

My oldest friend was held back in kindergarten. That’s how I met him.

kbelder 2021-08-17 22:05:32 +0000 UTC [ - ]

Of course he's your oldest friend; being held back, he's a year older than all your others.

charles_f 2021-08-17 05:10:46 +0000 UTC [ - ]

My kid is 3.5 and he still believes that 10 comes after 8. I'm deeply worried now.

kierkegaard7 2021-08-17 05:57:41 +0000 UTC [ - ]

Well, he's not wrong...

CRConrad 2021-08-17 14:03:10 +0000 UTC [ - ]

This is why you start your kids' acquaintance with computing in any OS but Windows.

beardyw 2021-08-17 11:39:51 +0000 UTC [ - ]

He is just counting in base 9. You need to catch up.

jacobolus 2021-08-17 05:27:54 +0000 UTC [ - ]

Einstein didn’t speak until age 4, so your kid is probably too precociously verbal to be the world’s most famous physicist.

andi999 2021-08-17 08:31:19 +0000 UTC [ - ]

He probably had to think through which first words were appropriate.

jose-cl 2021-08-17 06:21:03 +0000 UTC [ - ]

nah, just let him/her play

chris_wot 2021-08-17 03:30:26 +0000 UTC [ - ]

I'm hoping I'm a late bloomer from between the ages of 42 and 47.

chalst 2021-08-18 09:59:59 +0000 UTC [ - ]

Insufficiently expensive education.

With the right calculus teacher, I'm sure you'd have got the gist of it by your 60th month.

OscarCunningham 2021-08-17 06:06:10 +0000 UTC [ - ]

The IQ figure of 230 can't be calculated on the usual scale with a mean of 100 and a standard deviation of 15, because if so it would mean that a world with our current population only had a 1 in 57 million chance of containing someone as smart as him. How could we possibly know enough about population statistics to know such a thing? Especially given that we do in fact have him in our world.
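For anyone who wants to check that arithmetic, here is a rough back-of-the-envelope sketch in Python (my own illustration, assuming IQ is normally distributed with mean 100 and SD 15, and a 2021-ish population of about 7.9 billion):

    # Sanity check of the "1 in 57 million" figure,
    # assuming IQ ~ Normal(mean=100, sd=15).
    from scipy.stats import norm

    z = (230 - 100) / 15      # ~8.67 standard deviations above the mean
    p_person = norm.sf(z)     # upper-tail probability for one person, ~2e-18

    # p_person is tiny, so the chance that at least one of ~7.9 billion
    # people clears the bar is approximately world_pop * p_person.
    world_pop = 7.9e9
    p_world = world_pop * p_person

    print(f"per-person tail probability: {p_person:.2e}")
    print(f"about 1 world in {1 / p_world / 1e6:.0f} million contains such a person")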

jhgb 2021-08-17 12:15:14 +0000 UTC [ - ]

Not only that, but standard tests don't even reach the point where you can distinguish people near the top. Raven's Advanced Progressive Matrices, for example, has only 48 questions. I vaguely remember that making four mistakes at the age of 17 means an IQ somewhere in the mid-150s (with a standard deviation of 15). If you can only distinguish three more values above the mid-150s, I don't think you can reasonably reach even 200, not to mention 230.

btilly 2021-08-17 17:19:40 +0000 UTC [ - ]

You are right that no adult test will give an IQ that high.

However the original definition of IQ was in tests for children, and it was measured as mental age divided by physical age. Those tests are still used for children, and it IS possible for a 6 year old to get an official IQ of 230 on a test like the Stanford-Binet.

Which is exactly what Terry Tao did at the age of 6. Comparisons to adult tests may be problematic. But it remains an officially recorded IQ from a standard test.

hyperpallium2 2021-08-17 12:08:17 +0000 UTC [ - ]

His IQ is so high, it required several breakthroughs in neostatistics to properly measure it.

netr0ute 2021-08-17 13:44:16 +0000 UTC [ - ]

Is this satire?

hyperpallium2 2021-08-18 08:31:39 +0000 UTC [ - ]

A joke.

TchoBeer 2021-08-17 11:06:04 +0000 UTC [ - ]

Just want to point out that of 7B people, about 122 of them are one in 57 million

jhgb 2021-08-17 12:20:35 +0000 UTC [ - ]

I think what he meant was that if Tao indeed has an IQ of 230, then on average one seven-billion-people-world out of every 57 million such (random) worlds contains one person as smart as Tao, or, that you need to sample something like 400 quadrillion people to find one such person. So your "122" figure doesn't seem to have any meaning here (at least to me).

OscarCunningham 2021-08-17 12:32:02 +0000 UTC [ - ]

Yeah, that's what I meant. (Except that the world population is now nearly 8 billion.)

jhgb 2021-08-17 12:38:44 +0000 UTC [ - ]

Sometimes the population of the world seems like the price of Bitcoin to me. Wasn't it 7 billion just yesterday?

andi999 2021-08-17 14:08:15 +0000 UTC [ - ]

Yes, don't worry it will probably level at 11 billion

CRConrad 2021-08-17 14:05:08 +0000 UTC [ - ]

No, that was this morning. Yesterday it was six, and the other day four.

linschn 2021-08-17 11:27:49 +0000 UTC [ - ]

I think the parent's point is that you would need 50-something million worlds like ours to find a single person with such a high IQ score.

I haven't checked the maths, but I know the Gaussian distribution falls off fast past a few standard deviations, so such a result would not surprise me.

TchoBeer 2021-08-17 11:42:17 +0000 UTC [ - ]

I admit I'm not very informed about what IQ denotes, but wouldn't such a probability need to include the population of the world? Is that a parameter in the calculation of IQ?

btilly 2021-08-17 18:45:02 +0000 UTC [ - ]

Allow me to inform you.

IQ stands for "intelligence quotient".

It was developed as a ratio between mental age and physical age, times 100. So if you performed as expected, your IQ was 100. It was originally developed as a way of finding people who were behind, literally "retarded". However, it also proved useful for finding people who are exceptionally intelligent.
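As a concrete illustration of that quotient (a hypothetical sketch, with a made-up helper ratio_iq, not any real test's scoring code):

    # The original "ratio IQ": mental age over chronological age, times 100.
    def ratio_iq(mental_age: float, chronological_age: float) -> float:
        return 100.0 * mental_age / chronological_age

    print(ratio_iq(10, 10))  # 100.0 -- performing exactly at age level
    print(ratio_iq(14, 6))   # ~233  -- a 6 year old performing like a 14 year old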

After IQ tests were adopted in school systems, we found that they were approximately normally distributed with a mean of 100 (by definition), and a standard deviation of 15-16. For a variety of reasons (including identifying military recruits to train for specialized roles), there was a desire to have ability tests aimed at young adults. We developed those and scaled them to be explicitly normal with a mean of 100 and standard deviation of 15 or 16 (depending on the test). We also found that childhood IQ is a fairly good predictor of your later adult abilities, and therefore many of those tests are ALSO called IQ, even though there is no quotient involved.

On the adult tests, the tests do not scale out to IQ 230, nor is it likely that anyone is that many standard deviations out. But on the child tests, there is no problem scaling it. And it turns out that, in practice, the tails are heavier for the original type of IQ test. Which means that it is more common to find a 6 year old who performs at a 14 year old's ability, than a person who is over 8 standard deviations out.

Terry Tao had his IQ officially measured on a childhood test, not an adult one. However, there has been no test since that is able to reliably measure his ability, particularly in math.

Consider, at 7, his SAT score put him well within the top 1% of college-bound kids.

At 10, his performance on the International Math Olympiad meant that he was literally in the top handful of high school students in the world.

Yeah, we don't have properly scaled tests for that.

OscarCunningham 2021-08-17 20:10:47 +0000 UTC [ - ]

I'd add on top of that that these days adult IQ is instead defined to have a normal distribution with mean 100 and standard deviation 15. This is achieved by 'grading it on a curve'.

That's why my above comment was explaining why the 230 figure couldn't possibly make sense under the modern definition.

btilly 2021-08-17 20:28:10 +0000 UTC [ - ]

An adult test can't give such a score.

However, the Stanford-Binet test is still used for children, and in young children it can still produce extremely high IQ scores.

Also note that while usually the standard deviation is made to be 15 these days, there are still tests, like the Binet and OLSAT, where the standard deviation is 16. That's why I said that the standard deviation depends on the test.

GavinMcG 2021-08-17 11:51:38 +0000 UTC [ - ]

Theoretically not; IQ is ostensibly a statistical distribution, so it applies regardless of population size.

OscarCunningham 2021-08-17 12:33:03 +0000 UTC [ - ]

Yes, I used the world population figure when I was calculating that probability. But the world population size is not used in the definition of IQ itself.

BeetleB 2021-08-17 04:52:45 +0000 UTC [ - ]

> In https://terrytao.wordpress.com/career-advice/does-one-have-t... he argues that you do not have to be a genius to do first rate mathematics.

He says to do mathematics, not "first rate" mathematics.

chaboud 2021-08-17 02:52:13 +0000 UTC [ - ]

Unless he was born on February 29th (good luck getting that drink at a bar), reading after only hitting two birthdays is the stuff of legend.

That said, having been told so by a genius, I feel like I'm now unshackled, ready to pursue a life of mathematics as a non-genius.

wutbrodo 2021-08-17 04:39:39 +0000 UTC [ - ]

> reading after only hitting two birthdays is the stuff of legend.

I don't think this is quite true. I started reading shortly after my third birthday, and I don't think I just missed the cutoff for "legend".

Though any one or two of Tao's other accomplishments fairly easily clear that bar, IMO.

jamesdmiller 2021-08-17 14:26:01 +0000 UTC [ - ]

Children with hyperlexia can often learn to read before they are two. I personally knew a child who was reading before the age of two.

sdiupIGPWEfh 2021-08-18 01:43:34 +0000 UTC [ - ]

Echoing others, hyperlexia is a thing, and reading by age 2 isn't really all that special. I was reading by age two and a half, and my own kid was reading just before turning two. I'm no genius, while she's definitely sharp but not gifted.

spoonjim 2021-08-17 04:14:39 +0000 UTC [ - ]

My son was reading at age 2. Was definitely not doing calculus at 7.

eyelidlessness 2021-08-17 03:15:15 +0000 UTC [ - ]

This is the correct takeaway! Go learn and explore!

thaumasiotes 2021-08-17 05:08:55 +0000 UTC [ - ]

> Unless he was born on February 29th (good luck getting that drink at a bar), reading after only hitting two birthdays is the stuff of legend.

No it isn't. I would have called it "normal".

CRConrad 2021-08-17 14:06:52 +0000 UTC [ - ]

Re-read, or dial down.

Rioghasarig 2021-08-17 17:32:19 +0000 UTC [ - ]

> His official IQ of 230 remains the highest officially measured IQ in the world.

Terence Tao doesn't have an official IQ. People like to make up numbers for smart people's IQs.

carnitine 2021-08-17 06:27:43 +0000 UTC [ - ]

It’s very odd to put Erdős amongst Gauss, Euler and Tao. He was very proficient, but nowhere near as talented.

jstx1 2021-08-17 07:22:05 +0000 UTC [ - ]

I also think it's odd to put Tao next to Gauss and Euler. Maybe he's one of the best mathematicians right now but that's different from being the same calibre as the most influential people in the field ever.

OscarCunningham 2021-08-17 09:27:05 +0000 UTC [ - ]

Gauss and Euler had the advantage of living among a much smaller population of mathematicians, and when much low hanging fruit had not yet been discovered. If they were born today they would not reach the same level as Tao.

jstx1 2021-08-17 09:32:59 +0000 UTC [ - ]

Yes, they made a lot of contributions because of the time they lived in (right after calculus was invented but before modern mathematics). Still, you have no way of knowing what they would be able to contribute today and that whole hypothetical is kind of pointless anyway.

CRConrad 2021-08-17 14:08:12 +0000 UTC [ - ]

Then your original claim was equally pointless, wasn't it?

mayankkaizen 2021-08-17 19:11:28 +0000 UTC [ - ]

That is a baseless comment. Their achievements are low-hanging fruit only by today's standards. You can say the same thing about practically anybody. Einstein discovered relativity because of low-hanging fruit. Von Neumann did all his amazing work because of low-hanging fruit.

And how can you say they would not reach the same level as Tao? You are implying that true genius is some modern phenomenon. Maybe 100 years down the line even Tao will be downplayed, and kids will be learning advanced calculus before they turn 10.

Come on!

OscarCunningham 2021-08-17 20:04:24 +0000 UTC [ - ]

It's mostly a question of population sizes. The world population in their day was around 15% of what it is now, and only a fraction of those people would have had access to enough of an education to be noticed as a top mathematician. So it would be astonishing if their top mathematicians beat our own.

goatlover 2021-08-17 17:51:47 +0000 UTC [ - ]

How can you possibly know that Gauss and Euler would have been able to achieve less than Tao? That's a baseless claim.

ACow_Adonis 2021-08-17 02:38:09 +0000 UTC [ - ]

What's the source for some of those claims?

Calculus at seven I can believe, but teaching himself reading at 2 sounds like something that goes against what we know about biological and social development of the child.

And official IQs of 230 and 300 sound like someone doesn't understand what IQ is, or the meaningfulness of measuring/quantifying such a thing in a standardised way.

Obviously the person may be exceptional, and I do not mean to take anything away from Terence in work that I'm clearly unqualified to comment on (it wouldn't surprise me if a bit of digging shows him unconnected to such claims), but we shouldn't just accept such things as given. Extraordinary claims require extraordinary evidence...

btilly 2021-08-17 03:23:13 +0000 UTC [ - ]

Reading at 2 is well-attested by multiple sources.

The IQ was a case of going back to the original kind of IQ test for children: mental age divided by physical age. At the age of 6 he performed at a level to be expected of a 14 year old on the Stanford-Binet test, and at the extremes of these tests the bell curve approximation breaks down. So yes, his IQ was measured that high. But it is not strictly comparable to adult IQ tests.

It should be noted that Terence Tao's early achievements were well documented because he met Dr Miraca Gross of New South Wales at age 3, who was running a longitudinal study of Australian gifted children. Tao, of course, wound up as the star of the approximately 60 children in the study.

ACow_Adonis 2021-08-17 05:03:09 +0000 UTC [ - ]

I must reiterate: I don't doubt Terence's abilities. What I doubt is the repetition of borderline mythical tales without extraordinary evidence (albeit I accept someone might say Terence is the extraordinary evidence).

Sure, we have many attestations of speed reading too, and of course there will be attestations from people trying to sell a particular child or education method (or indeed a scam).

And I've seen some very convincing performances (and yes, 2 year olds can pick up some basic symbolism, recognition, and repetition), but they lack the ability to comprehend or extend past their specifically repeated contexts. If you film only their true positive successes, you can almost sell it, but their errors in extending to anything but rote quickly reveal that what they're capable of shouldn't be confused with literacy/numeracy.

Different issues abound with attempts to measure or quantify IQs in the extreme ranges at all, let alone in children, for whom there is almost no standard applicable method, and its interpretation is even dodgier.

I mean, another cynical man might say that obviously someone who has lived a life like Terence's must have been given remarkable opportunities. Since he clearly did not introduce himself to the gifted study at age 3, and given that we do not have standardised testing at such an age, someone was likely a driving force behind him in his youth. The obvious candidates are his parents, who engaged in strong educational and repetitive training but who, in the context of the pedagogy of Australian education at the time, would also have needed a very strong narrative to overcome the dominant "hold them back" attitude of the era.

I mean, we see the same thing with Beethoven and the like. Everyone is so focused on the prodigious properties of the person in their later life that critical thinking and skeptical inquiry tend to go out the window. We want to explain greatness through some inherent difference, rather than the alternative: maybe Terence is great partly because of some innate ability, but primarily because of the opportunities afforded to him via his circumstances, combined with the work, practices, and momentum of his parents and family, and in later life, of himself.

Relatively non-scientific hocus-pocus like "taught oneself to read at age 2" would go against almost all knowledge of language and child development theories, yet otherwise intelligent people accept such old wives' tales without much critical thought.

choeger 2021-08-17 05:49:48 +0000 UTC [ - ]

Teaching a (bright) kid to read at age two may be possible, in my experience. At that age, with a lot of talking to grown-ups or listening to stories, kids can have a surprisingly immense vocabulary. But "teaching himself" is a little bit too much for my taste.

I have a child that started reading at 5. They were very motivated and learned the basics in a couple of weeks. It really gets much simpler once they understand the basics. But to get there, we had to invest a couple of hours (say 5-10h total) into fundamental pronunciation and character recognition. Someone has to be there for the child to tell them the difference between d and b, or l and I, or e and a, and so on.

Now here's the catch: For someone in the position of teaching a motivated, intelligent child, it may actually feel like the child is teaching itself. But that's presumably skewed by our own school/college/university experience of learning stuff we were actually not that interested in.

flyinglizard 2021-08-17 06:27:47 +0000 UTC [ - ]

I bought my kids iPads and educational apps from Originator which taught them letters and numbers. Granted, papa lizard has given them very good genes but the result was them knowing the English alphabet and numbers past 100 at 2-3 and reading at 5. And English is not even our first language. Those iPads were the highest ROI of parenting so far. Please give your children high quality apps.

aeontech 2021-08-17 14:19:52 +0000 UTC [ - ]

Sorry for semi-off-topic reply, but would be very interested in any other recommendations you may have, have been looking for high quality resources too.

flyinglizard 2021-08-17 20:02:44 +0000 UTC [ - ]

Anything by Originator (Endless ABC, Endless 123, Reader, etc). You can get them as a bundle.

Teach Your Monster.

BusyShapes, although they have some issues lately and the game crashes for us.

Sneaky Sasquatch as an adventure game. Both my kids got hooked on it and it has a ton of interactions and education value on what people do.

Monument is something my five-year-old is hooked on.

Atlas teaches them about the world.

YT Kids with a careful selection of channels.

sida 2021-08-17 03:37:12 +0000 UTC [ - ]

Do we know what happened to the other 59 children?

wheelinsupial 2021-08-17 04:28:21 +0000 UTC [ - ]

There is a book about them by the researcher, published in two editions.

An excerpt can be found here: https://files.eric.ed.gov/fulltext/EJ746290.pdf

The outcomes depend on how much and when the students were accelerated in school.

ETA:

The two extremes:

“… 17 of the 60 young people were radically accelerated. None has regrets. Indeed, several say they would probably have preferred to accelerate still further or to have started earlier…. The majority entered college between ages 11 and 15. Several won scholarships to attend prestigious universities in Australia or overseas. All have graduated with extremely high grades and, in most cases, university prizes for exemplary achievement. All 17 are characterized by a passionate love of learning and almost all have gone on to obtain their Ph.D.s.”

“The remaining 33 young people were retained, for the duration of their schooling,… Two dropped out of high school and a number have dropped out of university. Several more have had ongoing difficulties at university,…”

Based on this HN comment [1] it appears the participants have been anonymized. It also quotes some of Terence Tao's education and makes a claim that there was someone else who may have equaled Tao in math ability, but did not have the educational support structure to recognize and nurture it.

[1] https://news.ycombinator.com/item?id=11510032

ip26 2021-08-17 05:21:56 +0000 UTC [ - ]

It's great to see the positive outcomes of acceleration. I saw so many kids burn out on accelerated programs, and have generally become cautious about them.

It's also nice to know that 2 years was enough for positive lives even for the "genius" children, which sets a sort of upper bound!

ip26 2021-08-17 03:07:26 +0000 UTC [ - ]

As a parent I’ve come to believe a lot of the stories about how so and so was playing golf or skiing or playing piano by 3 are very biased retrospectives. My kid was playing notes on harmonica at 18 months. Could he carry a tune? Of course not, but if one day he turned out to be a musical genius I could brag he has been playing instruments since he was an infant. See the careful embellishment?

A kid who has memorized some sight words or some books can certainly claim “reading”, with just enough truth there for the tale to take hold when they do well later in life.

btilly 2021-08-17 03:26:30 +0000 UTC [ - ]

In many cases I'd agree with you. But in his case, he had come to the attention of researchers into high IQ by the age of 3, and the stories were recorded when they were still fairly fresh.

dwohnitmok 2021-08-17 04:06:13 +0000 UTC [ - ]

Yeah there's a considerable difference between "my kid is a genius among other kids" (probably what a lot of parents feel about their children) and "my kid is a genius even when compared to adults."

eyelidlessness 2021-08-17 03:19:27 +0000 UTC [ - ]

To reinforce this: I remember feeling embarrassed by being described as a “natural” playing guitar.

I also remember feeling humbled by my humility when I took guitar seriously and picked stuff up or figured out technique that astounded people.

ACow_Adonis 2021-08-17 12:47:44 +0000 UTC [ - ]

For the record, I've got a son who is going on three now and I'm finding the developmental aspect fascinating. My wife is also studying some childhood development atm. I assumed he'd run the risk of being "problematically gifted" given his parents :P But I have a very skeptical mind; it pays the bills.

I've observed in my research (and in real life) many impressive renditions of what I call "stupid toddler tricks". I saw a 1 year old count to 20. My own kid takes books to bed at 2 and "reads" them to himself.

Could the kid do math? Could my kid read? No, of course not. You probe a little bit deeper and all the trappings of what we adults call cognisance fall away. The one year old had no idea what they were reciting. My kid repeats (albeit poorly) what he heard us say when "reading", but a little bit of digging reveals the limits of his comprehension. Kids use the pictures and other hints as cues (the clever bastards); they have a brilliant verbal/aural ability at such an age, and they can have some kind of basic symbolic ability and rote repetition. And you can get them to do amazing things if you record them doing the trick but don't dig any deeper into their mental processes.

I'm guessing that the people who are downvoting don't understand child mental development, have a vastly simpler definition of reading than mine, or don't understand how many layers of development have to be passed through before arriving at actual reading. I do think you could quickly show such a trick to a well-meaning person and get the legend started, however...

sdiupIGPWEfh 2021-08-18 01:52:05 +0000 UTC [ - ]

If the two year old in question knows their alphabet, upper & lower case, knows most of the sounds the letters make, and can sound out unfamiliar words and sight read the rest, I don't really see what's controversial about calling that "reading". It's not a stupid toddler trick or a mark of genius, either.

If we're going to add comprehension requirements on top of that, then what passes as reading is pretty abysmal even for some adults.

AussieWog93 2021-08-17 02:54:42 +0000 UTC [ - ]

>What's the source for some of those claims?

Googling it, it seems the claim that he taught himself to read when he was "a little over two" (i.e. not literally 24 months) has been repeated among multiple reputable news sources (New York Times, SMH, The Age). No comment on the IQ.

>Calculus at seven I can believe, but teaching himself reading at 2 sounds like something that goes against what we know about biological and social development of the child.

It doesn't seem too far-fetched, to be honest. Back when my wife worked in child-care, she knew this one kid who was speaking in multi-word sentences at 9 months old. Most kids don't reach this milestone until they're 2 1/2. Kids can and do surprise you.

philipswood 2021-08-17 04:07:23 +0000 UTC [ - ]

I'm only addressing teaching himself to read at two:

Check out "How Teach your baby to read" by Glen Doman.

He makes the case that it is pretty much an innate ability if encouraged correctly.

The method has been around long enough to have some backing and witnesses, but it doesn't actually seem to confer much long term advantage.

I ended up not using it with my kids after feedback from a friend and some further reading.

My understanding is that for most people intelligence isn't a result of starting early, but rather having the brain finish late - i.e. remaining plastic longer.

ip26 2021-08-17 05:30:14 +0000 UTC [ - ]

I wasn't prepared for how innate verbal language, specifically stories, is. My kid would come home from preschool and accurately recite ten-minute-long books, full of unknown words, the same way I idly hum a tune. It was a lightbulb moment for me: I had always wondered how oral tradition could possibly hold up over centuries, but my god, we're wired for it.

whoisburbansky 2021-08-17 04:38:28 +0000 UTC [ - ]

What was the feedback/further reading that made you decide not to use it?

ip26 2021-08-17 05:32:00 +0000 UTC [ - ]

> doesn't actually seem to confer much long term advantage

kaetemi 2021-08-18 02:17:28 +0000 UTC [ - ]

Adults underestimate toddlers. I 'taught' my daughter to read some words at the age of 1. She could learn around 3 per week. Two weeks to learn to recognize them without any context. It's just a matter of providing the opportunity, and taking advantage of their curiosity.

jonahx 2021-08-17 03:25:10 +0000 UTC [ - ]

This video may help convince you:

https://www.youtube.com/watch?v=I_IFTN2Toak

ACow_Adonis 2021-08-17 12:14:22 +0000 UTC [ - ]

People seem to interpret my comments as some kind of tall-poppy syndrome or ignorance, but I actually believe that a lot of these mathematical abilities do emerge at about the age that Terence was said to display them (even if we accept he was early and prodigious).

If someone told you that they developed sight before they physically developed eyes, surely you'd pause for thought. But make similar claims concerning the neurological development or mental faculties of prodigies at a young age, and suddenly everyone's critical thinking switches off.

I've actually read the study which involved Terence before this thread, and I'll repeat it again: nothing I'm saying is trying to take away from his accomplishments or current abilities.

The far more plausible story is simply that he had two highly educated parents, one of whom was a math teacher; that he has some innate ability; and that his parents specifically focused on him, taught and pushed him in math from an early age, went about accessing opportunities for him to learn and continue his development at an abnormally early age, and additionally got him special access to educational resources, educators and equipment.

If anything, the video and further research confirm my earlier conclusions, but my point is really quite fundamental and has little to do with Terence's abilities or current-day achievements: I don't believe a 2 year old can teach themselves to read (or at least anything approximating what a skeptical person would call reading). I think anyone who's had dealings with kids and understands development would say that claim at least SMELLS of bullshit, if it isn't actually so.

alisonkisk 2021-08-17 15:21:26 +0000 UTC [ - ]

Why can't a 2 yr old start reading but a 10 yr old can solve the hardest mathematics problems posed?

ACow_Adonis 2021-08-17 21:48:20 +0000 UTC [ - ]

Well, firstly, the claim can be broken down into two parts: taught himself, and reading at two.

I won't address the first because the discussion will get too long if you think that's genuinely how children learn, even geniuses.

On the latter: because the basic foundations of math can be found in (at least some) ten year old minds. What the common man believes is possible in math is mainly influenced by cultural exposure and the order in which we learn it. But there are really only 3-4 meta-concepts that underlie all of math, and the rest is about exposure/experimentation, syntax and terminology. Terence himself, I believe, says as much, and if you read through the accounts of interviews with Terence at a young age it's apparent that's how his mind is working (and it explains the concepts he understands, the mistakes he makes, and the areas he hasn't had exposure to).

Whereas the cognitive layers required for reading take time to develop in the child's brain: you first have to wait for the verbal/aural language systems to develop the actual language structure, then tack on some symbolic representation for letters, then understand composition for words, then attach words to their meanings, etc. You can skip some of those bits to perform party tricks (i.e. rote symbolic recognition and repetition of aural sounds on cue), but this does not reading make. Reading, we believe from a cognitive science point of view, is a kind of kludge on top of later-developed brain mechanics, whereas aural/verbal language appears as an almost innate developmental ability around that age. (I say innate, but you still have to be exposed to other humans interacting and speaking for a young mind to learn the language.)

Rerarom 2021-08-17 10:41:35 +0000 UTC [ - ]

Wtf, I was reading at 2½yo (but learned calculus at 16-17)

civilized 2021-08-17 01:57:20 +0000 UTC [ - ]

I wonder if only geniuses can write "it doesn't take a genius" posts, because making truly hard things easier actually does take a genius.

baetylus 2021-08-17 02:17:21 +0000 UTC [ - ]

That's part of the problem with "genius" - an emphasis on innate talent, excluding hard work or the degree to which the "genius" worked hard.

dgs_sgd 2021-08-17 01:56:46 +0000 UTC [ - ]

I personally hold him and John von Neumann at a level above everyone else in modern mathematics.

paulpauper 2021-08-17 04:09:29 +0000 UTC [ - ]

An IQ of 230 probably does not exist. It is impossible to devise a test accurate enough to measure such a high IQ. More likely there are a hundred or so people in the world who can be considered the smartest, including Terence Tao and other prodigies. And that German guy who recently got a Fields Medal.

He said to do math, not first-rate math. I have witnessed plenty of seemingly merely above-average-IQ people make interesting, novel contributions to math. Complex analysis, infinite series, matrices, stuff like that. They don't get much media coverage, but they produce interesting results and high-quality math. That is probably what he was getting at:

- Read the required literature

- Find an interesting problem, something you want to learn more about

- Defer to the literature to try to solve it

- If you succeed, write it up

IQ matters a lot though, no doubt.

eyelidlessness 2021-08-17 03:13:20 +0000 UTC [ - ]

Without even reading a single word of it, the non-irony is in your characterization. He taught himself. Ultimately we all taught ourselves and our peers and mentees. Intelligence is mostly rooted in curiosity and attention, not some innate capacity. People who are perceived to be at the lower end of that capacity often inform those who aren’t, and vice versa.

leeoniya 2021-08-17 03:17:09 +0000 UTC [ - ]

> not some innate capacity

i would wager a lot of money that no one here has the capacity to approach John von Neumann, Srinivasa Ramanujan or Terence Tao in a single lifetime of unlimited curiosity and attention.

irjustin 2021-08-17 01:48:27 +0000 UTC [ - ]

> A nice, down-to-earth guy, too, by all accounts.

Absolutely. His interview w/ Numberphile is very matter-of-fact; he speaks about incredible things as if they're everyday occurrences/problems, as if you could have these problems too.

https://www.youtube.com/watch?v=MXJ-zpJeY3E

OrangeMusic 2021-08-19 13:18:08 +0000 UTC [ - ]

> the field's medals, including the Fields Medal

I see what you did there ;)

warent 2021-08-17 01:19:58 +0000 UTC [ - ]

Is it possible for me to understand as a layperson why this is so important?

floatingatoll 2021-08-17 01:35:56 +0000 UTC [ - ]

Classy retractions are rare, or at least uncommon. We should all be more like them.

Uehreka 2021-08-17 01:49:24 +0000 UTC [ - ]

Yeah but GP said this was the time Terry Tao saved mathematics. I’d like to know what he meant by that.

mabbo 2021-08-17 01:59:39 +0000 UTC [ - ]

I'll take a stab as a complete amateur.

Peano arithmetic is the basis for a lot of math. There are very basic assumptions underlying it, but using just those assumptions you can make the natural numbers and perform math on them.

This was a bold attempt to prove that they are not consistent; that is, that you can use the base Peano axioms to prove 1 = 0 and anything else you want.

Terry Tao pointed out a mistake the author made, and thus 'saved' mathematics. Peano arithmetic remains consistent.

0xBABAD00C 2021-08-17 03:00:47 +0000 UTC [ - ]

> Peano arithmetic remains consistent.

This isn't quite the conclusion here. Peano arithmetic's consistency cannot be proven within its own limits, so it remains a hope/intuitive belief, but not a fact. Closest we've gotten, to my knowledge, is that there have been consistency proofs within other axiomatic systems: https://en.wikipedia.org/wiki/Gentzen%27s_consistency_proof

dllthomas 2021-08-17 04:35:02 +0000 UTC [ - ]

I mean, Gödel told us that we can't have a system that proves its own consistency while being consistent, so we shouldn't expect to get closer than "we have proofs in other systems and also the axioms seem so dang simple and obvious and also people have been banging on them and haven't found any inconsistencies..."

But moreover, if Gödel's proof had gone the other way I'm not sure the situation is all that much changed. If I hold in my hand a proof of the consistency of a system of axioms within that system of axioms then... either the system is consistent or, by being inconsistent, could prove anything including its own consistency.

Sniffnoy 2021-08-17 02:22:01 +0000 UTC [ - ]

So, Nelson had claimed to have proven Peano arithmetic inconsistent.

Peano arithmetic is a set of axioms that describe basic properties of the whole numbers (nonnegative integers). It's a pretty simple set of statements. The axioms are described in terms of 0, S (the successor function, S(n) = the next number after n), plus, and times.

The axioms are:

1. If Sn = Sm, then n=m

2. For all n, Sn is not 0

3. For all n, n+0=n

4. For all n and m, n+Sm = S(n+m)

5. For all n, n*0=0

6. For all n and m, n*Sm = n*m + n

7. [Induction] If P(n,m_1,...,m_k) is a predicate, and m_1,...m_k are whole numbers, such that:

A. P(0,m_1,...m_k) holds, and

B. For all n, P(n,m_1,...,m_k) implies P(Sn,m_1,...,m_k)

Then for all n, P(n,m_1,...,m_k) holds.

...OK, that last one is maybe a bit complicated. Technically, that one is actually what's called an axiom schema, that generates infinitely many axioms, one for each possible predicate of one or more variables (note you can have k=0). The theory is about whole numbers, not about predicates; the theory can't actually talk directly about predicates. Anyway, that's a bit of technical detail you don't really need to know for these purposes, so let's just move on. If you didn't understand that part, it's OK.
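If it helps to see the first six axioms as code, here is a minimal sketch in Lean (my own illustration, not anything from the original thread): axioms 3-6 become the defining equations of addition and multiplication, while axioms 1-2 and induction come for free from the inductive definition.

    -- Zero and successor, as in the Peano axioms.
    inductive PNat where
      | zero : PNat
      | succ : PNat → PNat

    open PNat

    -- Axioms 3 and 4 as the defining equations of addition.
    def add : PNat → PNat → PNat
      | n, zero   => n                -- n + 0 = n
      | n, succ m => succ (add n m)   -- n + Sm = S(n + m)

    -- Axioms 5 and 6 as the defining equations of multiplication.
    def mul : PNat → PNat → PNat
      | _, zero   => zero             -- n * 0 = 0
      | n, succ m => add (mul n m) n  -- n * Sm = n*m + n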

As you probably know, if we have a mathematical theory, and that theory is supposed to describe something that is supposed to actually exist (such as the whole numbers), that theory had better be consistent.

Firstly, because if it's not consistent, it can't possibly be true (the technical term is "sound"). Reality is consistent, so if something is inconsistent, it's incorrect.

Secondly, because of the principle of explosion. If you know P and not P for some statement P, you can conclude any statement Q. This principle of logic may be counterintuitive, but it's pretty essential. Some people have made a version of logic that doesn't include it ("minimal logic"), and it's basically unusable.

Suppose you know both P and not P, and you want to prove a statement Q. Well, you know P, so you certainly know P or Q. But you also know not P; that eliminates P as a possibility, leaving Q. So in order to get rid of the principle of explosion, you'd have to get rid of the principle that if you know A or B, and also know not A, then you can eliminate A as a possibility to conclude B. That's a pretty big tool to go without!

(Well, or you'd have to get rid of the possibility that if you know A, you can conclude A or B, but that's even worse.)
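That derivation is mechanical enough to check in a proof assistant. Here is a minimal sketch in Lean (again my own illustration) of exactly the route above, from P and not-P to an arbitrary Q:

    -- Principle of explosion: from P and ¬P, derive any Q,
    -- via or-introduction followed by case elimination.
    example (P Q : Prop) (hp : P) (hnp : ¬P) : Q := by
      have hpq : P ∨ Q := Or.inl hp   -- from P, conclude "P or Q"
      cases hpq with
      | inl h => exact absurd h hnp   -- the P case is ruled out by ¬P
      | inr h => exact h              -- so Q must hold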

So, if a set of foundational mathematical axioms is found to be inconsistent, it could be something of a disaster for mathematics. People would need to come up with new, weaker axioms that somehow didn't lead to this contradiction. This would be a difficult thing to do -- which principles do you keep, and which do you jettison? (And which do you replace with weaker versions? And what new weaker ones do you invent to take the place of ones that had to be tossed?) That's not an easy question!

So let's say that the ZFC axioms were found to be inconsistent. The ZFC axioms are a set of axioms (and axiom schemas) that describe set theory, and basically all of "ordinary mathematics" can be founded on them. (I'm not going to list them all here; they're not that complicated, but they're rather more complicated than Peano arithmetic.)

If the ZFC axioms were found to be inconsistent, well, it could be a project of years or decades to come up with a new, hopefully consistent, foundation of mathematics. It would be quite the problem. But... it wouldn't destroy mathematics. After all, ZFC itself is what people came up with after the earlier attempts to axiomatize set theory were found to lead to contradictions, so this has in a sense happened once before.

So why would it be such a big deal if the Peano axioms in particular were found to be inconsistent? Why do people say that would destroy mathematics?

There are a few reasons for this.

Firstly, the Peano axioms describe the whole numbers, rather than set theory. Set theory is kind of out there -- who can really say what should or should not be true of vastly infinite sets? Yeah, the ZFC axioms all seem like they should be true, but so did the axiom of unrestricted comprehension, and that turned out to be no good. The Peano axioms, by contrast, describe the whole numbers, and are pretty basic statements about them. They had better be true, or else we are very wrong about the whole numbers!

But, that's not the only reason. After all, if that were it, it would still likely be possible to recover from the problem by passing to a weaker system of axioms. And there are various ones that have been proposed! The Peano axioms are mostly pretty simple, but that last one, induction, has a lot of hidden complexity to it. People have suggested ways you could weaken the induction axiom by limiting what sorts of predicates it applies to. And that would certainly be contentious, but if Peano arithmetic were found to be inconsistent, we'd have to. Thing is, that wouldn't solve the real problem.

The problem is that weakening the induction axiom is only really a possibility if you want to describe the whole numbers in isolation. In reality, we don't consider the whole numbers in isolation, described by the Peano axioms; but rather as part of the broader picture of mathematics, described by ZFC. We don't study just whole numbers, but sets of whole numbers, whole numbers interacting with real numbers and complex numbers and p-adic numbers, whole numbers interacting with graphs and groups and partitions, whole numbers interacting with vector spaces and manifolds and all the rest of mathematics.

So, in reality, if Peano arithmetic were found to be inconsistent, the real task wouldn't be to weaken the Peano axioms so they'd be consistent; it would be to weaken the axioms of ZFC, in such a way that that would weaken the Peano axioms to remove the contradiction.

But that's just about an impossible task. It'd be nearly impossible to weaken the axiom schema of induction, while also leaving a theory that can interact with the rest of mathematics. Because you see, if we want the whole numbers to be able to interact with the rest of mathematics, then we have to be able to talk about sets of whole numbers.

And if we can talk about sets of whole numbers, then we can talk about the following variant of induction: Let S be a set of whole numbers, and suppose that 0 is in S, and, for each whole number n, n being in S implies n+1 is in S. Then all whole numbers are in S.

Now this may sound like the same thing I said above -- they're both just induction, right? But this one is about sets, not predicates, because now we're in a setting where we can talk about sets. And this one is more general -- because, if we have a predicate about whole numbers, we can form the set of whole numbers satisfying it. Whereas notionally one could have a set not described by any predicate.
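Written out side by side (my notation, just restating the two versions above): the predicate form is a schema, one axiom for each formula P, while the set form is a single statement quantifying over sets.

    Predicate induction (one axiom per predicate P):
      [P(0) ∧ ∀n (P(n) → P(Sn))] → ∀n P(n)

    Set induction (a single statement):
      ∀S ⊆ ℕ [(0 ∈ S ∧ ∀n (n ∈ S → n+1 ∈ S)) → S = ℕ]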

So, if you have this statement (set induction), you get all of predicate induction. So you'd somehow have to weaken the axioms of mathematics such that:

1. You can still talk about sets of natural numbers, but

2. You don't get predicate induction.

And how on earth would one do that?? Stopping set induction sounds pretty much like a dead-end; like if you did that, how would you get any form of induction at all? (And if you don't have any form of induction, then you don't really have the whole numbers; it's kind of their defining feature.) I mean you could make special assumptions about whole numbers, but if we're trying to make more general set-theoretic axioms, where whole numbers aren't fundamental, those don't really belong.

So the remaining option then is to stop the link from set induction to predicate induction, by limiting what sorts of sets you can make, so not every predicate on whole numbers can be used to form a set of whole numbers.

But (while I think that's considered less impossible), that's not really a viable option either! Because that would mean that somehow you'd have to introduce, into your set theoretic axioms, restrictions on what predicates can be used to form sets; and while there are sensible ways one could formulate restrictions in the limited context of whole numbers, there's not really any good way to formulate such restrictions that would make sense in the broader context of mathematics as a whole.

So, basically, we'd be stuck. Coming up with new axioms if ZFC were proved inconsistent would be difficult but doable. Coming up with new axioms if Peano arithmetic were proved inconsistent seems basically impossible. So, finding a contradiction derivable from the Peano axioms could indeed destroy mathematics.

Of course, since people generally expect that the Peano axioms are true, nobody really expects that to happen. But Edward Nelson -- an adherent of the fringe mathematical school known as ultrafinitism -- disagreed, believing induction was probably not true and not consistent. And here he claimed to have found an actual inconsistency. But, as has been mentioned, Terry Tao found a hole in his argument. And so mathematics was not destroyed. At least not that day.

Sniffnoy 2021-08-17 03:00:54 +0000 UTC [ - ]

Oh, let me close up one little hole in my comment, before anyone points it out. :) This may require rather more mathematical knowledge I'm afraid.

Above I said the only reasonable way to weaken Peano arithmetic was to weaken induction. But that's not quite true. There's another way: Not changing any of the axioms, but weakening the underlying logic.

See, there's a school of mathematics known as "constructivism", and they object to the usual laws of logic, saying that they're too strong; they use a weaker logic. It ought to be called "constructive logic", but for historical reasons (that are really not worth getting into) it's called "intuitionistic logic" instead.

And this might seem to be a better approach, because weakening the laws of logic is something that can be done without regard for the setting; it doesn't matter here whether you're talking about just whole numbers, or mathematics as a whole. (Well, almost. Actually, from ZFC and intuitionistic logic, you can effectively get back classical logic. But there are pretty good ideas about how to modify ZFC to avoid that.)

The problem is, doing this doesn't help. Because it's known that if you can prove a contradiction from the Peano axioms using classical logic, you can also do so using intuitionistic logic. So even if you went constructive, you'd still be stuck with all the same problems.

chithanh 2021-08-17 09:17:26 +0000 UTC [ - ]

In addition to weakening the logic (or avoiding the parts that introduce incompleteness, especially the existential quantifier) you also need to weaken your arithmetic. We know that Primitive Recursion (PR) is too weak to prove its own consistency, and Peano Arithmetic (PA) is too strong. So you may have to look at a strengthening of PR and/or weakening of PA in order to achieve a theory that proves its own consistency. Whether that theory exists is still an open question.

Sniffnoy 2021-08-17 19:47:48 +0000 UTC [ - ]

Getting a theory that proves its own consistency isn't what's being discussed here. What's being discussed here is just weakening a theory to remove a particular inconsistency after an inconsistency is found.

Trying to make a usable theory that also proves its own consistency is probably also a futile goal, per Gödel, but it's a separate one, and not one that people would likely go for if an inconsistency were found in Peano arithmetic.

nocturnial 2021-08-17 06:55:42 +0000 UTC [ - ]

> if you can prove a contradiction from the Peano axioms using classical logic, you can also do so using intuitionistic logic.

Are you sure it isn't the other way around? If you can prove a contradiction using intuitionistic logic, it's also a contradiction using classical.

If you proved the contradiction using the excluded middle then that proof wouldn't hold when you use intuitionistic logic. Maybe I'm missing or not understanding something here.

Sniffnoy 2021-08-17 07:28:02 +0000 UTC [ - ]

I mean, the other way around is also true, but the other way around is obvious. Obviously going from a weaker logic to a stronger logic can't eliminate contradictions.

What's surprising is that in this particular case (the Peano axioms), the reverse is also true; you won't be able to eliminate contradictions by passing to this weaker logic.

Note, all I said is, if you can prove a contradiction classically, then you can prove a contradiction constructively! I didn't say, if you have a classical proof of a contradiction then it's a valid constructive proof of a contradiction. Obviously not! But it will still be true that you'll be able to prove a contradiction constructively; it just won't necessarily be the same proof.

If you want to know more about the particular transformation, well, here's a relevant Wikipedia article: https://en.wikipedia.org/wiki/Double-negation_translation

nocturnial 2021-08-17 10:59:15 +0000 UTC [ - ]

Thank you for your explanation and pointing me to the double negation translation.

I know this is off-topic but if you have a book recommendation which teaches about intuitionistic logic/constructive math and also mentions the double negation translation, I'm interested. Most books I've read seem to either concentrate on only constructive or classical math.

Sniffnoy 2021-08-17 21:33:50 +0000 UTC [ - ]

I don't, sorry. I'm not really a logician, just a mathematician who thinks the foundations are worth understanding. :) I don't know what proper references on this material would be.

taejo 2021-08-17 07:30:23 +0000 UTC [ - ]

Classical logic can be embedded into intuitionistic logic. The double-negation translation N [0] turns any tautology of classical logic into a tautology of intuitionistic logic, so if θ ∧ ¬θ is provable with classical logic, N(θ) ∧ ¬N(θ) is provable with intuitionistic logic.

[0]: https://en.wikipedia.org/wiki/Double-negation_translation
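For reference, the clauses of that translation (the Gödel-Gentzen version, as described in the linked article) are, writing N for the translation:

    N(A)      = ¬¬A                    (for atomic A)
    N(A ∧ B)  = N(A) ∧ N(B)
    N(A ∨ B)  = ¬(¬N(A) ∧ ¬N(B))
    N(A → B)  = N(A) → N(B)
    N(¬A)     = ¬N(A)
    N(∀x A)   = ∀x N(A)
    N(∃x A)   = ¬∀x ¬N(A)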

hau 2021-08-17 07:37:47 +0000 UTC [ - ]

What makes you say that reality is consistent? How would we know?

jmholla 2021-08-17 03:10:32 +0000 UTC [ - ]

I think you may have a typo. Should 4 be Sn+Sm=S(n+m)?

Sniffnoy 2021-08-17 03:30:36 +0000 UTC [ - ]

No. Sn+Sm would be n+m+2, while S(n+m) is n+m+1.

JZumun 2021-08-17 12:56:58 +0000 UTC [ - ]

When I read about Peano axioms I mentally think of Sn as the function S(n) = n+1 and that helps me understand them better.

As a non mathematician sometimes I wonder if this substitution is causing me to misunderstand something else about them.

dreamcompiler 2021-08-17 06:10:33 +0000 UTC [ - ]

S is not a multiplier; it's a call of the successor function. The distributive property doesn't apply.

spoonjim 2021-08-17 00:55:02 +0000 UTC [ - ]

It's sad that this will disappear from history with the blog.

totetsu 2021-08-17 00:59:59 +0000 UTC [ - ]

http://web.archive.org/web/*/https://golem.ph.utexas.edu/cat... Maybe a donation to the internet archive could ensure it stays around for posterity

jonnycomputer 2021-08-17 01:30:50 +0000 UTC [ - ]

The difference between a crank and someone like Nelson is the ability to admit that their crazy disruptive idea, their challenge to orthodoxy, is, after all, wrong. We need people to challenge orthodoxy; it's healthy. But we also need people to admit when they are wrong. Nelson has my respect.

busterarm 2021-08-17 02:47:36 +0000 UTC [ - ]

People challenge orthodoxy and are willing to admit their crazy ideas are wrong all the time.

The problem is that outside the field of mathematics, challenging the orthodoxy works up an angry mob, and they'll probably torch the person's house before anything can be worked out.

chithanh 2021-08-17 09:31:48 +0000 UTC [ - ]

What is true is that since we have computer-based theorem provers / proof checkers, "working it out" becomes a matter of getting your challenge past one of those, and the orthodoxy will have a hard time arguing against that.

But OTOH you see many people in other disciplines challenging the orthodoxy (in physics: string theorists, MOND proponents, etc.), and while their views aren't always taken fully seriously, their houses aren't being torched either.

busterarm 2021-08-17 09:57:01 +0000 UTC [ - ]

Clearly some sciences are better than others.

You can put air quotes around sciences if it helps make my point any clearer.

tzs 2021-08-17 06:45:59 +0000 UTC [ - ]

Another major difference is that Nelson's mistake was in a field in which he was an expert. Cranks are almost never experts in the fields they are making claims about.

jonnycomputer 2021-08-19 13:20:56 +0000 UTC [ - ]

True.

In another context, I recall long conversations between different physical anthropologists in a forum about one person's unorthodox belief that humans' closest living relative is the orangutan, not the chimpanzee. While there was some occasional irritation, for the most part the conversations were civil, and informative, because the person who held this view was very knowledgeable about the field.

ivraatiems 2021-08-17 00:03:40 +0000 UTC [ - ]

Note that the actual message appears to be in the comments below the article: https://golem.ph.utexas.edu/category/2011/09/the_inconsisten...

irjustin 2021-08-17 00:45:06 +0000 UTC [ - ]

Thanks I was confused by that.

civilized 2021-08-17 00:12:11 +0000 UTC [ - ]

What's really great about this is that Terry Tao, a mathematician known primarily for analysis and combinatorics, can, in the space of two blog comments, completely filet a Princeton prof's flawed argument in the largely unrelated field of mathematical logic.

SquishyPanda23 2021-08-17 00:47:15 +0000 UTC [ - ]

Tao also writes about how to do this sort of error spotting in a proof you suspect to be wrong: https://terrytao.wordpress.com/advice-on-writing-papers/on-l...

In general he thinks a lot about tool kits, and I think what we're seeing here is the math equivalent of being great at debugging.

CTmystery 2021-08-17 00:48:05 +0000 UTC [ - ]

What I find most admirable is not the complete fileting, but the humble acceptance of error on something the proof's author had obviously put much effort into. It sucks to hear that something you've worked hard on is fundamentally flawed, especially on such public display.

Causality1 2021-08-17 00:59:59 +0000 UTC [ - ]

> It sucks to hear that something you've worked hard on is fundamentally flawed, especially on such public display

Exactly what I say at parent-teacher conferences.

quercusa 2021-08-17 01:51:02 +0000 UTC [ - ]

Are you the teacher or the parent? I can't decide which I think would be funnier.

tomjakubowski 2021-08-17 00:41:37 +0000 UTC [ - ]

Tao is respectful in his criticism. And Nelson returns the respect in humbly withdrawing his claim. That is the great thing about this exchange to me.

civilized 2021-08-17 00:47:15 +0000 UTC [ - ]

The tone of the exchange is great, although normal in the company of accomplished pure mathematicians.

mhh__ 2021-08-17 00:26:26 +0000 UTC [ - ]

You're not wrong but as a professional mathematician it's not like Tao is exactly stumbling in blind. Logic is the basis of the bases

catgary 2021-08-17 01:21:34 +0000 UTC [ - ]

Yeah, my understanding is that analysts are often quite strong in that domain of mathematical logic (there's a similar phenomenon where algebraic topologists are often incredibly comfortable with categorical logic).

Rioghasarig 2021-08-17 18:06:43 +0000 UTC [ - ]

Even the smartest people can be wrong, and it can take an outside perspective for them to realize it. This isn't an example of someone "fileting" another's argument. It's an example of two professionals having a conversation and arriving at an agreement.

Nelson was making an extraordinary claim, so the odds were definitely against him.

lvs 2021-08-17 00:32:46 +0000 UTC [ - ]

Fileting someone's argument is not a positive goal. It's an egotistical one. Finding truth is a good goal. That seems to be what is praiseworthy here.

derefr 2021-08-17 00:41:15 +0000 UTC [ - ]

"Finding truth" isn't quite what happened here, though. Finding a contradiction/other invalidation of someone else's proof, doesn't advance the knowledge frontier. It means that there was a flaw in process of proving the original proof, but it doesn't disprove the idea behind the original proof. You'd need your own separate proof to do that—probably a proof by contradiction, of one of the main lemmas of the proof.

The useful skill here is one shared by logicians and compilers: the ability to quickly find flaws that invalidate a proof, without needing to fully understand what the proof is trying to prove.

Said skill, if it could be more widely learned, would be very helpful in iterating toward sound proofs (or impossibility proofs, where those apply).

But when that skill is only in the hands of a party external to the one writing the proof, that iterative aspect isn't really there.

Cogito 2021-08-17 00:46:54 +0000 UTC [ - ]

Proving that one branch of a tunnel system does not lead to the open air is advancing the knowledge frontier. It's not as 'advancing' as finding a tunnel that does lead to the surface, but it is still useful.

I think I agree with the rest of what you have said.

derefr 2021-08-17 01:21:01 +0000 UTC [ - ]

Sure, but I think "a branch of a tunnel system" conjures the wrong image: branches in a real tunnel may themselves branch, but they don't tend to branch every few millimeters, the way proofs can branch with every character in possibility space.

Invalidating a proof is essentially what a compiler does when it blows up over a typo in the code you've fed it.

The thing is, most of the time, it is just a typo. The code as-is is wrong, but the code with one character different — the code if "branched" in an alternate direction for just a few millimeters, and then course-corrected back onto the original existing path — is right, and the semantic meaning of the code isn't changed/compromised by that correction.

Finding a logic-level flaw in a (presented) mathematical proof is usually similar: 99% of the time, it's something fixable.

That 1% of the time does still exist; there can be "fatal flaws" in proofs, where the proof's author can't find a way to salvage the argument. But it's not the invalidation of the particular proof that creates the knowledge that the proof is unsalvageable (i.e. that there's no path following the general "plan" of the branch from A to B). That knowledge is only discovered after much more work by the proof's author, trying to "dig around" the problem, all of which turns out to hit other walls.

(Note here that I'm assuming that the proof isn't already believed to be true when it's invalidated. Which it usually isn't; most proofs that get invalidated are invalidated when they're still being circulated as something novel and for-scrutiny, rather than when they're already generally-accepted. If a proof was already thought to be true, then invalidating it would temporarily create "negative" knowledge — it would retract a previous consensus assertion of the truth-value of the proof.)

kortilla 2021-08-18 03:04:12 +0000 UTC [ - ]

There are infinitely many branches that lead nowhere, though. This isn't some data hinting that an effect/correlation doesn't seem to exist. No useful data or knowledge was discovered here.

Cogito 2021-08-18 05:13:40 +0000 UTC [ - ]

A mathematician was exploring a proof and was shown that the avenue of inquiry they were pursuing could not possibly lead where they wanted to go. As a consequence they stopped investigating that path and, because they were shown why it was invalid, can potentially see ways around the problem.

Perhaps the most effective thing a person can spend their time on is pointing people along paths likely to lead to success, and guiding them away from dead ends.

civilized 2021-08-17 00:40:03 +0000 UTC [ - ]

Easy, there. It's just evocative language. I could have said "spot the core flaw" to the same effect.

ameister14 2021-08-17 01:19:40 +0000 UTC [ - ]

Not trying to attack you, just an aside - I don't know that 'spot the core flaw' is the same as 'filet someone's flawed argument.'

The latter implies 'ooh, he won;' the former only 'ah, there was a problem and he spotted it.'

lvs 2021-08-17 03:01:53 +0000 UTC [ - ]

Then I apologize for responding to the language that you wrote and not other language you could have used.

treyhuffine 2021-08-17 00:36:56 +0000 UTC [ - ]

And not letting one's ego get in the way of accepting the truth, even when it contradicts your original belief.

SavantIdiot 2021-08-17 01:59:52 +0000 UTC [ - ]

"You are quite right, and my original response was wrong. I withdraw my claim"

For homework, everyone on HN who posts should try saying this at least once this month, because you will be wrong at least once. I know I will.

mixedmath 2021-08-17 00:42:23 +0000 UTC [ - ]

I was beginning my math PhD around this time, and I remember seeing so much math discussion on google+ (which is mentioned frequently in the linked page). I don't know why this was the case, but it really was true --- there was so much professional math activity on google+.

JadeNB 2021-08-17 00:54:34 +0000 UTC [ - ]

I think John Baez and Terry Tao were, between them, a giant chunk of the reason for all that activity.

CRConrad 2021-08-17 14:20:01 +0000 UTC [ - ]

I followed Baez, but never knew Tao was there too. :-(

jonnycomputer 2021-08-17 01:27:20 +0000 UTC [ - ]

google+ had a lot of topic-oriented discussion. it was actually pretty cool.

wolverine876 2021-08-17 01:32:17 +0000 UTC [ - ]

> there was so much professional math activity on google+.

Is it all lost?

mixedmath 2021-08-17 12:56:27 +0000 UTC [ - ]

Now there are so many math fossils on google+. Activity is elsewhere.

The math blogosphere remains pretty healthy. Thankfully many mathematicians maintain useful and interesting personal sites. If Wordpress suddenly died, there would be a bigger problem (many mathematicians don't self-host).

andybak 2021-08-17 09:50:31 +0000 UTC [ - ]

Not sure what's in it, but there's https://archive.org/details/archiveteam_googleplus

singhrac 2021-08-17 00:25:21 +0000 UTC [ - ]

Worth noting that this is from 2011 (though still interesting and fun). Nelson passed away in 2014.

_moof 2021-08-17 04:17:28 +0000 UTC [ - ]

I really hope I survive for longer than three years the next time I get completely destroyed.

newsbinator 2021-08-17 00:40:06 +0000 UTC [ - ]

On the very rare occasions I come across a statement like this on social media, I instantly respect and follow the person saying it.

vymague 2021-08-17 17:27:22 +0000 UTC [ - ]

The renaming of the title is uncalled for. This response by Edward Nelson was the highlight:

> You are quite right, and my original response was wrong. Thank you for spotting my error.

> I withdraw my claim.

woopwoop 2021-08-17 17:31:14 +0000 UTC [ - ]

OP here. I agree. I'm a mathematician, and like most mathematicians I find discussions of foundations actually kind of tedious (although obviously a proof that Peano arithmetic was inconsistent would be a big deal). Also only a handful of HN readers have anything at all to gain from the technical discussion here. What is of interest to the HN readership is the sociology of mathematics on display. Hence the original title.

gameswithgo 2021-08-17 14:27:54 +0000 UTC [ - ]

I started reading this post, following along just fine; then the notation and terminology arrived and I started looking things up, still following along, though with difficulty; then it quickly devolved to the point where I didn't understand anything, and I felt really bad.

Then saw the first comment by Terry Tao and felt a lot better.

otterley 2021-08-17 00:03:54 +0000 UTC [ - ]

More of this, please.

Cogito 2021-08-17 00:48:50 +0000 UTC [ - ]

If you're not familiar, John Baez is consistently a nexus for interesting conversations like this one. Read the archives and his twitter for more.

H8crilA 2021-08-17 16:14:15 +0000 UTC [ - ]

This has been debunked, but here's a note on how it relates to Gödel's theorems:

1) Gödel's first incompleteness theorem says that if an effectively axiomatized theory of arithmetic is complete, then it must be inconsistent (the usual phrasing: any such consistent theory is incomplete);

2) the second says that such a theory cannot prove its own consistency; specifically, if it does prove its own consistency, then it is actually inconsistent.

The thesis here (with an invalid proof) said something stronger: that Peano arithmetic is inconsistent outright, regardless of any completeness assumption.
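
As a compact sketch of both (assuming T is a consistent, effectively axiomatized theory interpreting basic arithmetic, and Con(T) is its arithmetized consistency statement):

    % First incompleteness theorem: some sentence is undecidable in T
    \exists \varphi \,\bigl( T \nvdash \varphi \;\wedge\; T \nvdash \neg\varphi \bigr)
    % Second incompleteness theorem: T cannot prove its own consistency
    T \nvdash \mathrm{Con}(T)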

commandlinefan 2021-08-17 18:07:54 +0000 UTC [ - ]

I've always been bothered (maybe not the right word... intrigued?) by how it's taken as axiomatic that a negative multiplied by another negative must be positive. I can see why it's defined that way, and can't see any other way to define it, but it also leads to a lot of irritating arithmetical inconsistencies.
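
For concreteness, a minimal sketch of the usual argument, assuming the ordinary ring axioms (so the rule is forced by distributivity rather than stipulated separately):

    0 = (-1)\cdot 0 = (-1)\cdot\bigl(1 + (-1)\bigr) = (-1)\cdot 1 + (-1)\cdot(-1) = -1 + (-1)\cdot(-1)
    \text{and adding } 1 \text{ to both sides gives } (-1)\cdot(-1) = 1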

lupire 2021-08-17 18:59:34 +0000 UTC [ - ]

Huh?

commandlinefan 2021-08-17 19:19:53 +0000 UTC [ - ]

Well, classical Euclidean geometry relies on the parallel postulate, even though the postulate can't itself be proven from the other axioms. Abstract mathematicians have put together alternate non-Euclidean geometries that are consistent but don't include the parallel postulate - kind of along the same lines as Nelson here "rejecting" induction just to see what would happen. I've always wondered what might happen if you "allowed", in the same sense, the square root of a negative number to be real rather than imaginary. Maybe nothing, but I always wondered why nobody ever tried.
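
Worth spelling out what immediately breaks inside the reals (a sketch, assuming the ordered-field axioms): every real square is nonnegative, so no real r can satisfy r^2 = -1, and you'd have to drop the ordering (or some other axiom) to "allow" it:

    r \ge 0 \Rightarrow r^2 \ge 0, \qquad r < 0 \Rightarrow r^2 = (-r)^2 > 0
    \text{hence } r^2 \ge 0 > -1 \text{ for every real } r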

depressedpanda 2021-08-17 19:32:57 +0000 UTC [ - ]

Why do you assume nobody's ever tried?

commandlinefan 2021-08-17 19:36:40 +0000 UTC [ - ]

Well, I guess I've never seen anybody try. Maybe they did!

ukj 2021-08-17 07:47:15 +0000 UTC [ - ]

Nelson later withdrew his proof (after Terence Tao corrected him).

Instead, he took a philosophical position on the matter of whether Mathematics is discovered or invented: https://web.math.princeton.edu/~nelson/papers/rome.pdf

He sided with Computer Scientists.

depressedpanda 2021-08-17 20:31:52 +0000 UTC [ - ]

That was a great read, thanks for the link.

> Plato was a dreadful fellow, the source of a persistent evil from which the world has not yet been liberated, [...]

Does anyone more familiar with Plato know what Nelson could have meant by that?

Edit:

Alright, I think I found the reason: "platonism" is a form of mathematical realism, while Nelson strongly favored mathematical formalism.

nsonha 2021-08-17 01:10:17 +0000 UTC [ - ]

is it just me, or does Google+ seem popular among academics?

MayeulC 2021-08-17 09:47:59 +0000 UTC [ - ]

Was. It was shuttered by Google in 2019.

It started to gain traction in some technical circles; I think Linus Torvalds used it to some extent. That was the only "social network" I was interested in back then, as it seemed more "serious".

yablak 2021-08-17 01:28:40 +0000 UTC [ - ]

This is why I hacker news.