The Inconsistency of Arithmetic (2011)
jonnycomputer 2021-08-17 01:30:50 +0000 UTC [ - ]
busterarm 2021-08-17 02:47:36 +0000 UTC [ - ]
The problem is that outside the field of mathematics, challenging the orthodoxy would have worked up an angry mob, and they'd probably have torched the person's house before the issue could be worked out.
chithanh 2021-08-17 09:31:48 +0000 UTC [ - ]
But OTOH you see many people in other disciplines challenging the orthodoxy (in Physics: string theorists, MOND proponents, etc.) and while their views aren't always taken fully seriously, their houses aren't being torched either.
busterarm 2021-08-17 09:57:01 +0000 UTC [ - ]
You can put air quotes around sciences if it helps make my point any clearer.
tzs 2021-08-17 06:45:59 +0000 UTC [ - ]
jonnycomputer 2021-08-19 13:20:56 +0000 UTC [ - ]
In another context, I recall long conversations between different physical anthropologists in a forum about one person's unorthodox belief that humans share their most recent common ancestor with the orangutan, not the chimpanzee. While there was some occasional irritation, for the most part the conversations were civil, and informative, because the person who held this view was very knowledgeable about the field.
ivraatiems 2021-08-17 00:03:40 +0000 UTC [ - ]
civilized 2021-08-17 00:12:11 +0000 UTC [ - ]
SquishyPanda23 2021-08-17 00:47:15 +0000 UTC [ - ]
In general he thinks a lot about tool kits, and I think what we're seeing here is the math equivalent of being great at debugging.
CTmystery 2021-08-17 00:48:05 +0000 UTC [ - ]
Causality1 2021-08-17 00:59:59 +0000 UTC [ - ]
Exactly what I say at parent-teacher conferences.
quercusa 2021-08-17 01:51:02 +0000 UTC [ - ]
tomjakubowski 2021-08-17 00:41:37 +0000 UTC [ - ]
civilized 2021-08-17 00:47:15 +0000 UTC [ - ]
mhh__ 2021-08-17 00:26:26 +0000 UTC [ - ]
catgary 2021-08-17 01:21:34 +0000 UTC [ - ]
Rioghasarig 2021-08-17 18:06:43 +0000 UTC [ - ]
Nelson was making an extraordinary claim so the odds were definitely against him.
lvs 2021-08-17 00:32:46 +0000 UTC [ - ]
derefr 2021-08-17 00:41:15 +0000 UTC [ - ]
The useful skill, here, is one shared by logicians and compilers: the ability to quickly find flaws that invalidate a proof, without needing to fully understand what the proof is trying to prove.
Said skill, if it could be more widely-learned, would be very helpful in iterating toward sound proofs (or impossibility proofs for such.)
But when it's only a skill in the hands of another party external to the one writing the proof, that iterative aspect isn't really there.
Cogito 2021-08-17 00:46:54 +0000 UTC [ - ]
I think I agree with the rest of what you have said.
derefr 2021-08-17 01:21:01 +0000 UTC [ - ]
Invalidating a proof is essentially what a compiler does when it blows up over a typo in the code you've fed it.
The thing is, most of the time, it is just a typo. The code as-is is wrong, but the code with one character different — the code if "branched" in an alternate direction for just a few millimeters, and then course-corrected back onto the original existing path — is right, and the semantic meaning of the code isn't changed/compromised by that correction.
Finding a logic-level flaw in a (presented) mathematical proof is usually similar: 99% of the time, it's something fixable.
That 1% of the time does still exist; there can be "fatal flaws" in proofs, where the proof's author can't find a way to salvage their proof. But the invalidation of the particular proof isn't where the knowledge that the proof is unsalvageable (i.e. that there's no path following the general "plan" of the branch to get from A to B) gets created. That knowledge is only discovered after much more work by the proof's author, trying to "dig around" the problem, all of which turns out to hit other walls.
(Note here that I'm assuming that the proof isn't already believed to be true when it's invalidated. Which it usually isn't; most proofs that get invalidated are invalidated when they're still being circulated as something novel and for-scrutiny, rather than when they're already generally-accepted. If a proof was already thought to be true, then invalidating it would temporarily create "negative" knowledge — it would retract a previous consensus assertion of the truth-value of the proof.)
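A concrete toy example of the typo case, in Python (purely illustrative): the parser rejects a one-character flaw without needing to understand what the function is for, and the fix is a one-character edit that leaves the meaning intact.

    # The "compiler as flaw-finder" analogy: a one-character typo is caught
    # without any understanding of the code's intent.
    source_with_typo = "def square(x)\n    return x * x\n"  # missing ':' after the signature

    try:
        compile(source_with_typo, "<example>", "exec")
    except SyntaxError as err:
        # Invalid as written, but salvageable with a one-character fix.
        print(f"rejected: {err.msg} (line {err.lineno})")

    fixed_source = source_with_typo.replace(")\n", "):\n", 1)
    compile(fixed_source, "<example>", "exec")  # now accepted
    print("the fixed version compiles")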
kortilla 2021-08-18 03:04:12 +0000 UTC [ - ]
Cogito 2021-08-18 05:13:40 +0000 UTC [ - ]
Perhaps the most effective thing a person can spend their time on is pointing people along paths likely to lead to success, and guiding them away from dead ends.
civilized 2021-08-17 00:40:03 +0000 UTC [ - ]
ameister14 2021-08-17 01:19:40 +0000 UTC [ - ]
The latter implies 'ooh, he won;' the former only 'ah, there was a problem and he spotted it.'
lvs 2021-08-17 03:01:53 +0000 UTC [ - ]
treyhuffine 2021-08-17 00:36:56 +0000 UTC [ - ]
SavantIdiot 2021-08-17 01:59:52 +0000 UTC [ - ]
For homework, everyone on HN who posts should try saying this at least once this month, because you will be wrong at least once. I know I will.
mixedmath 2021-08-17 00:42:23 +0000 UTC [ - ]
JadeNB 2021-08-17 00:54:34 +0000 UTC [ - ]
jonnycomputer 2021-08-17 01:27:20 +0000 UTC [ - ]
wolverine876 2021-08-17 01:32:17 +0000 UTC [ - ]
Is it all lost?
mixedmath 2021-08-17 12:56:27 +0000 UTC [ - ]
The math blogosphere remains pretty healthy. Thankfully many mathematicians maintain useful and interesting personal sites. If Wordpress suddenly died, there would be a bigger problem (many mathematicians don't self-host).
andybak 2021-08-17 09:50:31 +0000 UTC [ - ]
singhrac 2021-08-17 00:25:21 +0000 UTC [ - ]
_moof 2021-08-17 04:17:28 +0000 UTC [ - ]
newsbinator 2021-08-17 00:40:06 +0000 UTC [ - ]
vymague 2021-08-17 17:27:22 +0000 UTC [ - ]
> You are quite right, and my original response was wrong. Thank you for spotting my error.
> I withdraw my claim.
woopwoop 2021-08-17 17:31:14 +0000 UTC [ - ]
gameswithgo 2021-08-17 14:27:54 +0000 UTC [ - ]
Then saw the first comment by Terry Tao and felt a lot better.
otterley 2021-08-17 00:03:54 +0000 UTC [ - ]
Cogito 2021-08-17 00:48:50 +0000 UTC [ - ]
H8crilA 2021-08-17 16:14:15 +0000 UTC [ - ]
1) Gödel's first theorem says that if a theory of arithmetic is complete then it must be inconsistent,
2) the second says that a theory of arithmetic cannot prove its own consistency; specifically, if it proves its own consistency then it is actually inconsistent.
This thesis (with an invalid proof) says "no consistent theory of arithmetic can exist", regardless of whether it's complete.
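For reference, the contrapositive of that phrasing, with the usual hypotheses made explicit (T effectively axiomatized and containing enough arithmetic, e.g. PA):

    \text{G1: } T \text{ consistent} \;\Rightarrow\; T \text{ incomplete}
    \text{G2: } T \text{ consistent} \;\Rightarrow\; T \nvdash \mathrm{Con}(T)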
commandlinefan 2021-08-17 18:07:54 +0000 UTC [ - ]
lupire 2021-08-17 18:59:34 +0000 UTC [ - ]
commandlinefan 2021-08-17 19:19:53 +0000 UTC [ - ]
depressedpanda 2021-08-17 19:32:57 +0000 UTC [ - ]
commandlinefan 2021-08-17 19:36:40 +0000 UTC [ - ]
ukj 2021-08-17 07:47:15 +0000 UTC [ - ]
Instead, he took a philosophical position on the matter of whether Mathematics is discovered or invented: https://web.math.princeton.edu/~nelson/papers/rome.pdf
He sided with Computer Scientists.
depressedpanda 2021-08-17 20:31:52 +0000 UTC [ - ]
> Plato was a dreadful fellow, the source of a persistent evil from which the world has not yet been liberated, [...]
Does anyone more familiar with Plato know what Nelson could have meant by that?
Edit:
Alright, I think I found the reason: "platonism" is a form of mathematical realism, while Nelson strongly favored mathematical formalism.
nsonha 2021-08-17 01:10:17 +0000 UTC [ - ]
MayeulC 2021-08-17 09:47:59 +0000 UTC [ - ]
It started to gain traction in some technical circles; I think Linus Torvalds used it to some extent. That was the only "social network" I was interested in back then, as it seemed more "serious".
hypersoar 2021-08-17 00:38:34 +0000 UTC [ - ]
For those who are unaware, the comment pointing out the flaw in Nelson's proof is by Terence Tao, one of the most extraordinary mathematicians in the world. He's won a good fraction of the field's medals, including the Fields Medal. A nice, down-to-earth guy, too, by all accounts.
I jokingly call this the time he literally saved mathematics.
btilly 2021-08-17 01:39:03 +0000 UTC [ - ]
In https://terrytao.wordpress.com/career-advice/does-one-have-t... he argues that you do not have to be a genius to do first rate mathematics.
The irony is that this is being argued by the person with the best documented genius in the history of mathematics. His official IQ of 230 remains the highest officially measured IQ in the world. He taught himself to read by age 2. He was taking calculus at age 7. He remains the youngest medalist at any level in the International Mathematical Olympiad. That doesn't quite convey it. The youngest person to earn a bronze in that competition was 10. Silver was 11. Gold was 12. All three records were set by Terence Tao, who then didn't bother competing any more.
See https://newsroom.ucla.edu/releases/Terence-Tao-Mozart-of-Mat... for some more of his accomplishments.
His success is comparable to any person in the history of mathematics. Yes, I am including Euler, Gauss, and Erdős. Euler possibly covered a bigger breadth of math. Gauss created more fields. Erdős solved more problems. And Tao has made more progress on more hard problems that had stumped everyone else.
jhgb 2021-08-17 02:12:55 +0000 UTC [ - ]
Well, I taught myself to read by the age of 3...
> He was taking Calculus at age 7.
...OK, clearly something went awry with me between the ages of 3 and 7.
asddubs 2021-08-17 02:47:35 +0000 UTC [ - ]
hellbannedguy 2021-08-17 03:07:36 +0000 UTC [ - ]
chalst 2021-08-18 10:27:01 +0000 UTC [ - ]
The early mastery of swearing I picked up between backseat driving (that is, listening to my Scottish mother's running commentary on other drivers) and my spongelike absorption of Canadian broadcast media earned my parents a stern reproach from the ultra-bourgeois kindergarten they sent me off to.
glitchc 2021-08-17 03:37:37 +0000 UTC [ - ]
kbelder 2021-08-17 22:05:32 +0000 UTC [ - ]
charles_f 2021-08-17 05:10:46 +0000 UTC [ - ]
kierkegaard7 2021-08-17 05:57:41 +0000 UTC [ - ]
CRConrad 2021-08-17 14:03:10 +0000 UTC [ - ]
beardyw 2021-08-17 11:39:51 +0000 UTC [ - ]
2021-08-17 14:00:37 +0000 UTC [ - ]
jacobolus 2021-08-17 05:27:54 +0000 UTC [ - ]
andi999 2021-08-17 08:31:19 +0000 UTC [ - ]
jose-cl 2021-08-17 06:21:03 +0000 UTC [ - ]
chris_wot 2021-08-17 03:30:26 +0000 UTC [ - ]
chalst 2021-08-18 09:59:59 +0000 UTC [ - ]
With the right calculus teacher, I'm sure you'd have got the gist of it by your 60th month.
OscarCunningham 2021-08-17 06:06:10 +0000 UTC [ - ]
jhgb 2021-08-17 12:15:14 +0000 UTC [ - ]
btilly 2021-08-17 17:19:40 +0000 UTC [ - ]
However the original definition of IQ was in tests for children, and it was measured as mental age divided by physical age. Those tests are still used for children, and it IS possible for a 6 year old to get an official IQ of 230 on a test like the Stanford-Binet.
Which is exactly what Terry Tao did at the age of 6. Comparisons to adult tests may be problematic. But it remains an officially recorded IQ from a standard test.
hyperpallium2 2021-08-17 12:08:17 +0000 UTC [ - ]
netr0ute 2021-08-17 13:44:16 +0000 UTC [ - ]
hyperpallium2 2021-08-18 08:31:39 +0000 UTC [ - ]
TchoBeer 2021-08-17 11:06:04 +0000 UTC [ - ]
jhgb 2021-08-17 12:20:35 +0000 UTC [ - ]
OscarCunningham 2021-08-17 12:32:02 +0000 UTC [ - ]
jhgb 2021-08-17 12:38:44 +0000 UTC [ - ]
andi999 2021-08-17 14:08:15 +0000 UTC [ - ]
CRConrad 2021-08-17 14:05:08 +0000 UTC [ - ]
linschn 2021-08-17 11:27:49 +0000 UTC [ - ]
I haven't checked the maths, but I know the Gaussian distribution falls off fast past a few standard deviations, so such a result would not surprise me.
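A quick back-of-the-envelope sketch in Python (assuming IQ were normally distributed with mean 100 and SD 15, which, as other comments note, isn't how childhood ratio IQs actually scale):

    import math

    # P(Z > z) for a standard normal variable, via the complementary error function.
    def upper_tail(z: float) -> float:
        return 0.5 * math.erfc(z / math.sqrt(2))

    z = (230 - 100) / 15  # about 8.7 standard deviations above the mean
    print(f"z = {z:.2f}, tail probability ~ {upper_tail(z):.1e}")
    # roughly 2e-18: far fewer than one person in the world's population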
TchoBeer 2021-08-17 11:42:17 +0000 UTC [ - ]
btilly 2021-08-17 18:45:02 +0000 UTC [ - ]
IQ stands for "intelligence quotient".
It was developed as a ratio between mental age and physical age, times 100. So if you performed as expected, your IQ was 100. It was originally developed as a way of finding people who were behind, literally "retarded". However, it also proved useful for identifying people who are unusually intelligent.
After IQ tests were adopted in school systems, we found that they were approximately normally distributed with a mean of 100 (by definition), and a standard deviation of 15-16. For a variety of reasons (including identifying military recruits to train for specialized roles), there was a desire to have ability tests aimed at young adults. We developed those and scaled them to be explicitly normal with a mean of 100 and standard deviation of 15 or 16 (depending on the test). We also found that childhood IQ is a fairly good predictor of your later adult abilities, and therefore many of those tests are ALSO called IQ, even though there is no quotient involved.
On the adult tests, the tests do not scale out to IQ 230, nor is it likely that anyone is that many standard deviations out. But on the child tests, there is no problem scaling it. And it turns out that, in practice, the tails are heavier for the original type of IQ test. Which means that it is more common to find a 6 year old who performs at a 14 year old's ability, than a person who is over 8 standard deviations out.
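As a worked example of that ratio definition, using the six-year-old-performing-as-a-fourteen-year-old figure cited elsewhere in this thread:

    \mathrm{IQ} \;=\; \frac{\text{mental age}}{\text{chronological age}} \times 100 \;=\; \frac{14}{6} \times 100 \;\approx\; 233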
Terry Tao had his IQ officially measured on a childhood test, not an adult one. However there has been no test since that is able to reliably measure his ability, particularly in math.
Consider, at 7, his SAT score put him well within the top 1% of college-bound kids.
At 10, his performance on the International Math Olympiad meant that he was literally in the top handful of high school students in the world.
Yeah, we don't have properly scaled tests for that.
OscarCunningham 2021-08-17 20:10:47 +0000 UTC [ - ]
That's why my comment above explained that the 230 figure couldn't possibly make sense under the modern definition.
btilly 2021-08-17 20:28:10 +0000 UTC [ - ]
However the Stanford-Binet test is still used for children, and in young children it can still produce extremely high IQ scores.
Also note that while usually the standard deviation is made to be 15 these days, there are still tests, like the Binet and OLSAT, where the standard deviation is 16. That's why I said that the standard deviation depends on the test.
GavinMcG 2021-08-17 11:51:38 +0000 UTC [ - ]
OscarCunningham 2021-08-17 12:33:03 +0000 UTC [ - ]
BeetleB 2021-08-17 04:52:45 +0000 UTC [ - ]
He says to do mathematics, not "first rate" mathematics.
chaboud 2021-08-17 02:52:13 +0000 UTC [ - ]
That said, having been told so by a genius, I feel like I'm now unshackled, ready to pursue a life of mathematics as a non-genius.
wutbrodo 2021-08-17 04:39:39 +0000 UTC [ - ]
I don't think this is quite true. I started reading shortly after my third birthday, and I don't think I just missed the cutoff for "legend".
Though any one or two of Tao's other accomplishments fairly easily clear that bar, IMO.
jamesdmiller 2021-08-17 14:26:01 +0000 UTC [ - ]
sdiupIGPWEfh 2021-08-18 01:43:34 +0000 UTC [ - ]
spoonjim 2021-08-17 04:14:39 +0000 UTC [ - ]
eyelidlessness 2021-08-17 03:15:15 +0000 UTC [ - ]
thaumasiotes 2021-08-17 05:08:55 +0000 UTC [ - ]
No it isn't. I would have called it "normal".
CRConrad 2021-08-17 14:06:52 +0000 UTC [ - ]
Rioghasarig 2021-08-17 17:32:19 +0000 UTC [ - ]
Terence Tao doesn't have an official IQ. People like to make up numbers for smart people's IQs.
carnitine 2021-08-17 06:27:43 +0000 UTC [ - ]
jstx1 2021-08-17 07:22:05 +0000 UTC [ - ]
OscarCunningham 2021-08-17 09:27:05 +0000 UTC [ - ]
jstx1 2021-08-17 09:32:59 +0000 UTC [ - ]
CRConrad 2021-08-17 14:08:12 +0000 UTC [ - ]
2021-08-17 18:24:52 +0000 UTC [ - ]
mayankkaizen 2021-08-17 19:11:28 +0000 UTC [ - ]
And how can you say they would not reach the same level as Tao? You are implying that being a true genius is some modern phenomenon. Maybe 100 years down the line even Tao will be downplayed, and kids will be learning advanced calculus before they turn 10.
Come on!
OscarCunningham 2021-08-17 20:04:24 +0000 UTC [ - ]
goatlover 2021-08-17 17:51:47 +0000 UTC [ - ]
ACow_Adonis 2021-08-17 02:38:09 +0000 UTC [ - ]
Calculus at seven I can believe, but teaching himself reading at 2 sounds like something that goes against what we know about biological and social development of the child.
And official IQs of 230 and 300 sound like someone doesn't understand what IQ is, or the meaningfulness of measuring/quantifying such a thing in a standardised way.
Obviously the person may be exceptional, and I do not mean to take anything away from Terence or his work, which I'm clearly unqualified to comment on (it wouldn't surprise me if a bit of digging shows him unconnected to such claims), but we shouldn't just accept such things as given. Extraordinary claims require extraordinary evidence...
btilly 2021-08-17 03:23:13 +0000 UTC [ - ]
The IQ was a case of going back to the original kind of IQ test for children, mental age divided by physical age. At the age of 6 he performed at a level to be expected of a 14 year old on the Stanford-Binet test. At the extremes of these tests, the bell curve approximation breaks down. So yes, his IQ was measured that high. But it is not strictly comparable to adult IQ tests.
It should be noted that Terence Tao's early achievements were well documented because at age 3 he met Dr Miraca Gross of New South Wales, who was running a longitudinal study of gifted Australian children. Tao, of course, wound up as the star of the approximately 60 children in the study.
ACow_Adonis 2021-08-17 05:03:09 +0000 UTC [ - ]
Sure, we have many attestations of speed reading too, and of course there will be attestations from people trying to sell a particular child or education method (or indeed a scam).
And I've seen some very convincing performances (and yes, 2 year olds can pick up some basic symbolism, recognition, and repetition), but they lack the ability to comprehend or extend past their specifically repeated contexts. If you film only their true-positive successes you can almost sell it, but their errors when extending to anything but rote quickly reveal that what they're capable of shouldn't be confused with literacy/numeracy.
Different issues abound with attempts to measure or quantify IQs in the extreme ranges at all, let alone in children, for whom there is almost no standard applicable method, and its interpretation is even dodgier.
I mean, another cynical man might say that obviously someone who has lived a life like Terence's must have been given remarkable opportunities. Since he clearly did not introduce himself to the gifted study at age 3, and given that we do not have standardised testing at such an age, someone was likely a driving force behind him in his youth. The obvious candidates are his parents, who engaged in strong educational and repetitive training, and who, in the context of Australian pedagogy at the time, would also have needed a very strong narrative to overcome the dominant "hold them back" attitude of the day.
I mean, we see the same thing with Beethoven and the like. Everyone is so focused on the prodigious properties of the person in their later life that critical thinking and skeptical inquiry tend to go out the window. We want to explain greatness through some inherent difference, rather than consider the alternative: maybe Terence is great partly because of some innate ability, but primarily because of the opportunities afforded to him by his circumstances, combined with the work, practice and momentum of his parents and family, and later of himself.
Relatively non-scientific hocus-pocus like "taught oneself to read at age 2" would literally go against almost all we know of language and child development, yet otherwise intelligent people accept such old wives' tales without much critical thought.
choeger 2021-08-17 05:49:48 +0000 UTC [ - ]
I have a child that started reading at 5. They were very motivated and learned the basics in a couple of weeks. It really gets much simpler once they understand the basics. But to get there, we had to invest a couple of hours (say 5-10h total) into fundamental pronunciation and character recognition. Someone has to be there for the child to tell them the difference between d and b, or l and I, or e and a, and so on.
Now here's the catch: For someone in the position of teaching a motivated, intelligent child, it may actually feel like the child is teaching itself. But that's presumably skewed by our own school/college/university experience of learning stuff we were actually not that interested in.
flyinglizard 2021-08-17 06:27:47 +0000 UTC [ - ]
aeontech 2021-08-17 14:19:52 +0000 UTC [ - ]
flyinglizard 2021-08-17 20:02:44 +0000 UTC [ - ]
Teach Your Monster.
BusyShapes, although they've had some issues lately and the game crashes for us.
Sneaky Sasquatch as an adventure game. Both my kids got hooked on it and it has a ton of interactions and educational value about what people do.
Monument is something my five-year-old is hooked on.
Atlas teaches them about the world.
YT Kids with a careful selection of channels.
sida 2021-08-17 03:37:12 +0000 UTC [ - ]
wheelinsupial 2021-08-17 04:28:21 +0000 UTC [ - ]
An excerpt can be found here: https://files.eric.ed.gov/fulltext/EJ746290.pdf
The outcomes depend on how much and when the students were accelerated in school.
ETA:
The two extremes:
“… 17 of the 60 young people were radically accelerated. None has regrets. Indeed, several say they would probably have preferred to accelerate still further or to have started earlier…. The majority entered college between ages 11 and 15. Several won scholarships to attend prestigious universities in Australia or overseas. All have graduated with extremely high grades and, in most cases, university prizes for exemplary achievement. All 17 are characterized by a passionate love of learning and almost all have gone on to obtain their Ph.D.s.”
“The remaining 33 young people were retained, for the duration of their schooling,… Two dropped out of high school and a number have dropped out of university. Several more have had ongoing difficulties at university,…”
Based on this HN comment [1] it appears the participants have been anonymized. It also quotes some of Terence Tao's education and makes the claim that there was someone else who may have equaled Tao in math ability, but did not have the educational support structure to recognize and nurture it.
[1] https://news.ycombinator.com/item?id=11510032
ip26 2021-08-17 05:21:56 +0000 UTC [ - ]
It's also nice to know that 2 years was enough for positive lives even for the "genius" children, which sets a sort of upper bound!
ip26 2021-08-17 03:07:26 +0000 UTC [ - ]
A kid who has memorized some sight words or some books can certainly claim “reading”, with just enough truth there for the tale to take hold when they do well later in life.
btilly 2021-08-17 03:26:30 +0000 UTC [ - ]
dwohnitmok 2021-08-17 04:06:13 +0000 UTC [ - ]
eyelidlessness 2021-08-17 03:19:27 +0000 UTC [ - ]
I also remember feeling humbled by my humility when I took guitar seriously and picked stuff up or figured out technique that astounded people.
ACow_Adonis 2021-08-17 12:47:44 +0000 UTC [ - ]
I've observed in my research (and in real life) many impressive renditions of what I call "stupid toddler tricks". I saw a 1 year old count to 20. My own kid takes books to bed at 2 and "reads" them to himself.
Could the kid do math? Could my kid read? No, of course not. You probe a little deeper and all the trappings of what we adults call cognisance fall away. The one year old has no idea what they were reciting. My kid repeats (albeit poorly) what he heard us say when "reading", but a little digging reveals the limits of his comprehension. Kids use the pictures and other hints as cues (the clever bastards); they have a brilliant verbal/aural ability at that age, and they can manage some basic symbolic ability and rote repetition. And you can get them to do amazing things if you record them doing the trick but don't dig any deeper into their mental processes.
I'm guessing that people who are downvoting don't understand child mental development, have a vastly simpler definition of reading than mine, or don't understand how many layers of development have to be passed before arriving at actual reading. I do think you could quickly show such a trick to a well-meaning person and get the legend started, however...
sdiupIGPWEfh 2021-08-18 01:52:05 +0000 UTC [ - ]
If we're going to add comprehension requirements on top of that, then what passes as reading is pretty abysmal even for some adults.
2021-08-17 03:31:52 +0000 UTC [ - ]
AussieWog93 2021-08-17 02:54:42 +0000 UTC [ - ]
Googling it, it seems like the claim that he taught himself to read when he was "a little over two" (i.e. not literally 24 months) has been repeated by multiple reputable news sources (New York Times, SMH, The Age). No comment on the IQ.
>Calculus at seven I can believe, but teaching himself reading at 2 sounds like something that goes against what we know about biological and social development of the child.
It doesn't seem too far-fetched, to be honest. Back when my wife worked in child-care, she knew this one kid who was speaking in multi-word sentences at 9 months old. Most kids don't reach this milestone until they're 2 1/2. Kids can and do surprise you.
philipswood 2021-08-17 04:07:23 +0000 UTC [ - ]
Check out "How Teach your baby to read" by Glen Doman.
He makes the case that it is pretty much an inmate ability if encouraged correctly.
The method has been around long enough to have some backing and witness, but it doesn't actually seem to confer much long term advantage.
I ended up not using it with my kids after feedback from a friend and some further reading.
My understanding is that for most people intelligence isn't a result of starting early, but rather having the brain finish late - i.e. remaining plastic longer.
ip26 2021-08-17 05:30:14 +0000 UTC [ - ]
whoisburbansky 2021-08-17 04:38:28 +0000 UTC [ - ]
ip26 2021-08-17 05:32:00 +0000 UTC [ - ]
kaetemi 2021-08-18 02:17:28 +0000 UTC [ - ]
jonahx 2021-08-17 03:25:10 +0000 UTC [ - ]
https://www.youtube.com/watch?v=I_IFTN2Toak
ACow_Adonis 2021-08-17 12:14:22 +0000 UTC [ - ]
If someone told you that they developed sight before they physically developed eyes, surely you'd pause for thought. But make similar claims concerning the neurological development or mental faculties of prodigies at a young age, and suddenly everyone's critical thinking switches off.
I've actually read the study which involved Terrence before this thread, and I'll repeat it again: nothing I'm saying is trying to take away from his accomplishments or current abilities.
The far more plausible story is simply that he had two highly educated parents, one of whom was a math teacher, he has some innate ability, and his parents specifically focused on him and taught and pushed him in math from an early age, then they went about accessing opportunities for him to learn and continue his development at an abnormally early age and they additionally got him special access to educational resources, educators and equipment.
If anything, the video and further research confirm my early conclusions, but my point is really quite fundamental and has little to do with Terence's abilities or current-day achievements: I don't believe a 2 year old can teach themselves to read (or at least anything approximating what a skeptical person would call reading). I think anyone who's had dealings with kids and understands development would say that claim at least SMELLS of bullshit, even if it isn't actually so.
alisonkisk 2021-08-17 15:21:26 +0000 UTC [ - ]
ACow_Adonis 2021-08-17 21:48:20 +0000 UTC [ - ]
I won't address the first because the discussion will get too long if you think that's genuinely how children learn, even geniuses.
On the latter: because the basic foundations of math can be found in (at least some) ten-year-old minds. What the common man believes is possible in math is mainly influenced by cultural exposure and the order in which we learn it. But there are really only 3-4 meta-concepts that underlie all of math, and the rest is about exposure/experimentation, syntax and terminology. Terence himself, I believe, says as much, and if you read through the accounts of interviews with Terence at a young age it's apparent that's how his mind is working (and it explains the concepts he understands, the mistakes he makes, and the areas he hasn't had exposure to).
Rerarom 2021-08-17 10:41:35 +0000 UTC [ - ]
civilized 2021-08-17 01:57:20 +0000 UTC [ - ]
baetylus 2021-08-17 02:17:21 +0000 UTC [ - ]
dgs_sgd 2021-08-17 01:56:46 +0000 UTC [ - ]
paulpauper 2021-08-17 04:09:29 +0000 UTC [ - ]
He said to do math, not first-rate math. I have witnessed plenty of seemingly merely above-average-IQ people make interesting, novel contributions to math. Complex analysis, infinite series, matrices, stuff like that. They don't get much media coverage but produce interesting results and high-quality math; that is probably what he was getting at.
-read the required literature
-find an interesting problem, something you want to learn more about
-defer to the literature to try to solve it
-if you succeed, write it up
IQ matters a lot though, no doubt.
eyelidlessness 2021-08-17 03:13:20 +0000 UTC [ - ]
leeoniya 2021-08-17 03:17:09 +0000 UTC [ - ]
I would wager a lot of money that no one here has the capacity to approach John von Neumann, Srinivasa Ramanujan or Terence Tao in a single lifetime of unlimited curiosity and attention.
irjustin 2021-08-17 01:48:27 +0000 UTC [ - ]
Absolutely. His interview w/ Numberphile is very matter-of-fact; he speaks about incredible things as if they're everyday occurrences/problems, as if you could have these problems too.
https://www.youtube.com/watch?v=MXJ-zpJeY3E
OrangeMusic 2021-08-19 13:18:08 +0000 UTC [ - ]
I see what you did there ;)
warent 2021-08-17 01:19:58 +0000 UTC [ - ]
floatingatoll 2021-08-17 01:35:56 +0000 UTC [ - ]
Uehreka 2021-08-17 01:49:24 +0000 UTC [ - ]
mabbo 2021-08-17 01:59:39 +0000 UTC [ - ]
Peano arithmetic is the basis for a lot of math. There are very basic assumptions underlying it, but using just those assumptions you can make the natural numbers and perform math on them.
This was a bold attempt to prove that they are not consistent; that is, that you can use the base Peano axioms to prove 1 = 0 and anything else you want.
Terry Tao pointed out a mistake the author made, and thus 'saved' mathematics. Peano arithmetic remains consistent.
0xBABAD00C 2021-08-17 03:00:47 +0000 UTC [ - ]
This isn't quite the conclusion here. Peano arithmetic's consistency cannot be proven within its own limits, so it remains a hope/intuitive belief, but not a fact. Closest we've gotten, to my knowledge, is that there have been consistency proofs within other axiomatic systems: https://en.wikipedia.org/wiki/Gentzen%27s_consistency_proof
dllthomas 2021-08-17 04:35:02 +0000 UTC [ - ]
But moreover, if Gödel's proof had gone the other way I'm not sure the situation is all that much changed. If I hold in my hand a proof of the consistency of a system of axioms within that system of axioms then... either the system is consistent or, by being inconsistent, could prove anything including its own consistency.
2021-08-17 04:17:25 +0000 UTC [ - ]
Sniffnoy 2021-08-17 02:22:01 +0000 UTC [ - ]
Peano arithmetic is a set of axioms that describe basic properties of the whole numbers (nonnegative integers). It's a pretty simple set of statements. The axioms are described in terms of 0, S (the successor function, S(n) = the next number after n), plus, and times.
The axioms are:
1. If Sn = Sm, then n=m
2. For all n, Sn is not 0
3. For all n, n+0=n
4. For all n and m, n+Sm = S(n+m)
5. For all n, n*0=0
6. For all n and m, n*Sm = n*m + n
7. [Induction] If P(n,m_1,...,m_k) is a predicate, and m_1,...,m_k are whole numbers, such that:
A. P(0,m_1,...,m_k) holds, and
B. For all n, P(n,m_1,...,m_k) implies P(Sn,m_1,...,m_k)
Then for all n, P(n,m_1,...,m_k) holds.
...OK, that last one is maybe a bit complicated. Technically, that one is actually what's called an axiom schema, that generates infinitely many axioms, one for each possible predicate of one or more variables (note you can have k=0). The theory is about whole numbers, not about predicates; the theory can't actually talk directly about predicates. Anyway, that's a bit of technical detail you don't really need to know for these purposes, so let's just move on. If you didn't understand that part, it's OK.
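For readers who think in code, here is a minimal sketch of the same structure in Lean 4 syntax (just an illustration): axioms 1 and 2 hold automatically for an inductive type, axioms 3-6 become the defining equations of + and *, and the induction schema corresponds to the recursion/induction principle generated for the type.

    -- Whole numbers generated by zero and successor.
    inductive N where
      | zero : N
      | succ : N → N

    -- Axioms 3 and 4 as the defining equations of addition.
    def add : N → N → N
      | n, .zero   => n                  -- n + 0 = n
      | n, .succ m => .succ (add n m)    -- n + S m = S (n + m)

    -- Axioms 5 and 6 as the defining equations of multiplication.
    def mul : N → N → N
      | _, .zero   => .zero              -- n * 0 = 0
      | n, .succ m => add (mul n m) n    -- n * S m = n * m + n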
As you probably know, if we have a mathematical theory, and that theory is supposed to describe something that is supposed to actually exist (such as the whole numbers), that theory had better be consistent.
Firstly, because if it's not consistent, it can't possibly be true (the technical term is "sound"). Reality is consistent, so if something is inconsistent, it's incorrect.
Secondly, because of the principle of explosion. If you know P and not P for some statement P, you can conclude any statement Q. This principle of logic may be counterintuitive, but it's pretty essential. Some people have made a version of logic that doesn't include it ("minimal logic"), and it's basically unusable.
Suppose you know both P and not P, and you want to prove a statement Q. Well, you know P, so you certainly know P or Q. But you also know not P; that eliminates P as a possibility, leaving Q. So in order to get rid of the principle of explosion, you'd have to get rid of the principle that if you know A or B, and also know not A, then you can eliminate A as a possibility to conclude B. That's a pretty big tool to go without!
(Well, or you'd have to get rid of the possibility that if you know A, you can conclude A or B, but that's even worse.)
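Here is that argument written out as a term, in Lean 4 syntax (just an illustration):

    -- From P and ¬P, derive an arbitrary Q:
    -- from P conclude P ∨ Q, then use ¬P to rule out the left branch.
    theorem explosion {P Q : Prop} (hp : P) (hnp : ¬P) : Q :=
      have hpq : P ∨ Q := Or.inl hp
      hpq.elim (fun p => absurd p hnp) (fun q => q)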
So, if a set of foundational mathematical axioms is found to be inconsistent, it could be something of a disaster for mathematics. People would need to come up with new, weaker axioms that somehow didn't lead to this contradiction. This would be a difficult thing to do -- which principles do you keep, and which do you jettison? (And which do you replace with weaker versions? And what new weaker ones do you invent to take the place of ones that had to be tossed?) That's not an easy question!
So let's say that the ZFC axioms were found to be inconsistent. The ZFC axioms are a set of axioms (and axiom schemas) that describe set theory, and basically all of "ordinary mathematics" can be founded on them. (I'm not going to list them all here; they're not that complicated, but they're rather more complicated than Peano arithmetic.)
If the ZFC axioms were found to be inconsistent, well, it could be a project of years or decades to come up with a new, hopefully consistent, foundation of mathematics. It would be quite the problem. But... it wouldn't destroy mathematics. After all, ZFC itself is what people came up with after the earlier attempts to axiomatize set theory were found to lead to contradictions, so this has in a sense happened once before.
So why would it be such a big deal if the Peano axioms in particular were found to be inconsistent? Why do people say that would destroy mathematics?
There are a few reasons for this.
Firstly, the Peano axioms describe the whole numbers, rather than set theory. Set theory is kind of out there -- who can really say what should or should not be true of vastly infinite sets? Yeah, the ZFC axioms all seem like they should be true, but so did the axiom of unrestricted comprehension, and that turned out to be no good. The Peano axioms, by contrast, describe the whole numbers, and are pretty basic statements about them. They had better be true, or else we are very wrong about the whole numbers!
But, that's not the only reason. After all, if that were it, it would still likely be possible to recover from the problem by passing to a weaker system of axioms. And there are various ones that have been proposed! The Peano axioms are mostly pretty simple, but that last one, induction, has a lot of hidden complexity to it. People have suggested ways you could weaken the induction axiom by limiting what sorts of predicates it applies to. And that would certainly be contentious, but if Peano arithmetic were found to be inconsistent, we'd have to. Thing is, that wouldn't solve the real problem.
The problem is that weakening the induction axiom is only really a possibility if you want to describe the whole numbers in isolation. In reality, we don't consider the whole numbers in isolation, described by the Peano axioms; but rather as part of the broader picture of mathematics, described by ZFC. We don't study just whole numbers, but sets of whole numbers, whole numbers interacting with real numbers and complex numbers and p-adic numbers, whole numbers interacting with graphs and groups and partitions, whole numbers interacting with vector spaces and manifolds and all the rest of mathematics.
So, in reality, if Peano arithmetic were found to be inconsistent, the real task wouldn't be to weaken the Peano axioms so they'd be consistent; it would be to weaken the axioms of ZFC, in such a way that that would weaken the Peano axioms to remove the contradiction.
But that's just about an impossible task. It'd be nearly impossible to weaken the axiom schema of induction, while also leaving a theory that can interact with the rest of mathematics. Because you see, if we want the whole numbers to be able to interact with the rest of mathematics, then we have to be able to talk about sets of whole numbers.
And if we can talk about sets of whole numbers, then we can talk about the following variant of induction: Let S be a set of whole numbers, and suppose that 0 is in S, and, for each whole number n, n being in S implies n+1 is in S. Then all whole numbers are in S.
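In symbols, just restating that sentence:

    \forall S \subseteq \mathbb{N}\;\Bigl[\bigl(0 \in S \;\wedge\; \forall n\,(n \in S \rightarrow n+1 \in S)\bigr) \rightarrow S = \mathbb{N}\Bigr]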
Now this may sound like the same thing I said above -- they're both just induction, right? But this one is about sets, not predicates, because now we're in a setting where we can talk about sets. And this one is more general -- because, if we have a predicate about whole numbers, we can form the set of whole numbers satisfying it. Whereas notionally one could have a set not described by any predicate.
So, if you have this statement (set induction), you get all of predicate induction. So you'd somehow have to weaken the axioms of mathematics such that:
1. You can still talk about sets of natural numbers, but
2. You don't get predicate induction.
And how on earth would one do that?? Stopping set induction sounds pretty much like a dead-end; like if you did that, how would you get any form of induction at all? (And if you don't have any form of induction, then you don't really have the whole numbers; it's kind of their defining feature.) I mean you could make special assumptions about whole numbers, but if we're trying to make more general set-theoretic axioms, where whole numbers aren't fundamental, those don't really belong.
So the remaining option then is to stop the link from set induction to predicate induction, by limiting what sorts of sets you can make, so not every predicate on whole numbers can be used to form a set of whole numbers.
But (while I think that's considered less impossible), that's not really a viable option either! Because that would mean that somehow you'd have to introduce, into your set theoretic axioms, restrictions on what predicates can be used to form sets; and while there are sensible ways one could formulate restrictions in the limited context of whole numbers, there's not really any good way to formulate such restrictions that would make sense in the broader context of mathematics as a whole.
So, basically, we'd be stuck. Coming up with new axioms if ZFC were proved inconsistent would be difficult but doable. Coming up with new axioms if Peano arithmetic were proved inconsistent seems basically impossible. So, finding a contradiction derivable from the Peano axioms could indeed destroy mathematics.
Of course, since people generally expect that the Peano axioms are true, nobody really expects that to happen. But Edward Nelson -- an adherent of the fringe mathematical school known as ultrafinitism -- disagreed, believing induction was probably not true and not consistent. And here he claimed to have found an actual inconsistency. But, as has been mentioned, Terry Tao found a hole in his argument. And so mathematics was not destroyed. At least not that day.
Sniffnoy 2021-08-17 03:00:54 +0000 UTC [ - ]
Above I said the only reasonable way to weaken Peano arithmetic was to weaken induction. But that's not quite true. There's another way: Not changing any of the axioms, but weakening the underlying logic.
See, there's a school of mathematics known as "constructivism", and they object to the usual laws of logic, saying that they're too strong; they use a weaker logic. It ought to be called "constructive logic", but for historical reasons (that are really not worth getting into) it's called "intuitionistic logic" instead.
And this might seem to be a better approach, because weakening the laws of logic is something that can be done without regard for the setting; it doesn't matter here whether you're talking about just whole numbers, or mathematics as a whole. (Well, almost. Actually, from ZFC and intuitionistic logic, you can effectively get back classical logic. But there are pretty good ideas about how to modify ZFC to avoid that.)
The problem is, doing this doesn't help. Because it's known that if you can prove a contradiction from the Peano axioms using classical logic, you can also do so using intuitionistic logic. So even if you went constructive, you'd still be stuck with all the same problems.
chithanh 2021-08-17 09:17:26 +0000 UTC [ - ]
Sniffnoy 2021-08-17 19:47:48 +0000 UTC [ - ]
Trying to make a usable theory that also proves its own consistency is probably also a futile goal, per Gödel, but it's a separate one, and not one that people would likely go for if an inconsistency were found in Peano arithmetic.
nocturnial 2021-08-17 06:55:42 +0000 UTC [ - ]
Are you sure it isn't the other way around? If you can prove a contradiction using intuitionistic logic, it's also a contradiction using classical.
If you proved the contradiction using the excluded middle then that proof wouldn't hold when you use intuitionistic logic. Maybe I'm missing or not understanding something here.
Sniffnoy 2021-08-17 07:28:02 +0000 UTC [ - ]
What's surprising is that in this particular case (the Peano axioms), the reverse is also true; you won't be able to eliminate contradictions by passing to this weaker logic.
Note, all I said is, if you can prove a contradiction classically, then you can prove a contradiction constructively! I didn't say, if you have a classical proof of a contradiction then it's a valid constructive proof of a contradiction. Obviously not! But it will still be true that you'll be able to prove a contradiction constructively; it just won't necessarily be the same proof.
If you want to know more about the particular transformation, well, here's a relevant Wikipedia article: https://en.wikipedia.org/wiki/Double-negation_translation
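For the curious, the key clauses of that translation (the Gödel-Gentzen negative translation described on the linked page; this is my rough summary, so check the article for the precise statement):

    A^N = \neg\neg A \quad (A \text{ atomic})
    (A \wedge B)^N = A^N \wedge B^N
    (A \to B)^N = A^N \to B^N
    (\neg A)^N = \neg A^N
    (\forall x\, A)^N = \forall x\, A^N
    (A \vee B)^N = \neg(\neg A^N \wedge \neg B^N)
    (\exists x\, A)^N = \neg\forall x\, \neg A^N

If PA proves A classically, then PA over intuitionistic logic (Heyting arithmetic) proves A^N; and since the translation of a contradiction is equivalent to a contradiction, a classical inconsistency proof yields an intuitionistic one.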
nocturnial 2021-08-17 10:59:15 +0000 UTC [ - ]
I know this is off-topic but if you have a book recommendation which teaches about intuitionistic logic/constructive math and also mentions the double negation translation, I'm interested. Most books I've read seem to either concentrate on only constructive or classical math.
Sniffnoy 2021-08-17 21:33:50 +0000 UTC [ - ]
taejo 2021-08-17 07:30:23 +0000 UTC [ - ]
[0]: https://en.wikipedia.org/wiki/Double-negation_translation
hau 2021-08-17 07:37:47 +0000 UTC [ - ]
jmholla 2021-08-17 03:10:32 +0000 UTC [ - ]
Sniffnoy 2021-08-17 03:30:36 +0000 UTC [ - ]
JZumun 2021-08-17 12:56:58 +0000 UTC [ - ]
As a non-mathematician, sometimes I wonder if this substitution is causing me to misunderstand something else about them.
dreamcompiler 2021-08-17 06:10:33 +0000 UTC [ - ]
spoonjim 2021-08-17 00:55:02 +0000 UTC [ - ]
totetsu 2021-08-17 00:59:59 +0000 UTC [ - ]