Marc Andreessen on Investing and Tech
hncurious 2021-08-18 16:37:02 +0000 UTC [ - ]
"Anyway, science right now is in an existential crisis. This is a real, real issue, and there’s now a generation of scientists who specialize in pointing this out and analyzing it. Andrew Gelman and others. I’ll give you an example: I had a conversation with the long-time head of one of the big federal funding agencies for healthcare research who is also a very accomplished entrepreneur, and I said, “do you really think it’s true that 50-70% of biomedical research is fake?” This is a guy who has spent his life in this world. And he said “oh no, that’s not true at all. It’s 90%.” [Richard laughs]. I was like “holy shit,” I was flabbergasted that it could be 90%.
He’s like “well look, 90% of everything is shit”, which is literally this thing called Sturgeon’s law which says that 90% of everything is bad. 90% of every novel written is bad, 90% of music, 90% of art… 90% of everything is bad. So his analysis was, anything you get in the field of medical experimentation, biomedical development, and this is going to be true of any field, there’s like five labs total in the world that are really good at what they’re doing and doing really cutting edge work. And this is true of quantum computing; pick any field for advanced technology you want.
So those five labs have a pretty good shot at doing interesting work, even some of that is going to reproduce and some of it isn’t. But once you get out of those top five labs, it’s pretty much make-work, incremental, marginal improvements at best, and a complete waste of time otherwise. And I said “good God, why does the other 90% continue to get funded if you know this?” And he said, “well, there are all these universities and professors who have tenure, there are all these journals, there are all these systems and people have been promised lifetime employment.” Anyway, a longwinded way of saying that we have pretty serious structural and incentive problems in the research complex. "
helixc 2021-08-18 17:39:36 +0000 UTC [ - ]
On another note, I have seen small no-name labs publish impactful scientific results and then go back into stealth mode again. It's super cool. Science labs, unlike a business entity, should not be measured by recurring revenue and growth.
habitue 2021-08-19 00:12:14 +0000 UTC [ - ]
So, usually when people say this about Sturgeon's-law-type stuff, it's because we genuinely don't know how to determine what's crap and what's not overall. For example, if 90% of academic papers are crap, OK, it's true, but we pay the cost because we don't know a priori which 10% will be the good ones.
The issue is that in academics, we actually have a pretty good idea which programs produce the best and most impactful work. Now it doesn't seem like a cost we have to pay, but rather a conscious decision not to cut out the crap (or to keep funding the crap).
I think the ideal outcome is that the culture and resources of all programs are approximately the same, and each produces 90% crap, and 10% good stuff.
lambdatronics 2021-08-19 04:55:22 +0000 UTC [ - ]
908B64B197 2021-08-18 20:58:33 +0000 UTC [ - ]
Did the individual contributors stay at these labs?
j7ake 2021-08-18 17:04:37 +0000 UTC [ - ]
Newton did first rate work, but he also spent a lot of time working on alchemy and theology which didn’t get anywhere.
Research grants are generally competitive; if you haven't ever shown you've done something significant, or haven't done something significant in a long time, it is difficult to survive. But indeed even the top scientists spend a lot of time on dead ends.
It is true that often only the top 5 labs or so are doing good work, but those top 5 labs are not constant over time; they wax and wane like fashion trends.
The lesson is that just because the top work over the last few years came from only a few labs, it's premature to eliminate all the other labs: today's top labs could fall in the next cycle.
lisper 2021-08-18 17:58:22 +0000 UTC [ - ]
WalterBright 2021-08-18 18:43:33 +0000 UTC [ - ]
Sometimes people do investigate the crap, and discover it's actually fertilizer. The trouble is, one cannot tell in advance.
lambdatronics 2021-08-19 05:07:13 +0000 UTC [ - ]
tmn 2021-08-18 22:16:55 +0000 UTC [ - ]
petermcneeley 2021-08-18 16:55:40 +0000 UTC [ - ]
This law seems to lack any kind of rigor or theoretical basis and seems like a purely rhetorical device.
ysavir 2021-08-18 16:59:48 +0000 UTC [ - ]
vickspicks 2021-08-18 17:05:12 +0000 UTC [ - ]
Enginerrrd 2021-08-18 17:53:07 +0000 UTC [ - ]
There's all kinds of reasons to expect that too... Everything from funding, likely collaboration partners, talent, etc.
That would give it some theoretical basis.
majormajor 2021-08-18 17:07:26 +0000 UTC [ - ]
In many fields the definition of "crap" depends, after all, on what the rest of the field's output looks like.
quantified 2021-08-19 00:42:51 +0000 UTC [ - ]
It’s not quite the Liar’s Paradox, I think Sturgeon’s fits into the other 10%.
boringg 2021-08-18 17:04:45 +0000 UTC [ - ]
nonameiguess 2021-08-18 16:57:20 +0000 UTC [ - ]
That isn't by any means to suggest we shouldn't try to make science better by being more replicable and reducing fraud, but if it is an improvement, we're already doing something right and there is far less cause for despair and cynicism.
It also, of course, doesn't do any base rate comparison to non-scientific ways of generating facts. If 90% of published psychology research turns out to be wrong, but 99% of folk psychology turns out to be wrong, then science is still winning there.
jollybean 2021-08-18 17:35:52 +0000 UTC [ - ]
It doesn't matter a whole lot with respect to 'how it compares to 50 years ago' if the assumption is that science is supposed to be objective and credible.
It's supposed to be 'approaching 100%' i.e. they are publishing findings, not fake findings.
i.e. it's not a game of 'getting a higher percentage' it's a game of integrity.
The implication that 'most science is rubbish' is pretty scary actually, and it has profound consequences for the future, both in practical terms (i.e. we can't trust a lot of it) and especially in populist terms (i.e. people not trusting climate change research, vaccine research, etc.).
"If 90% of published psychology research turns out to be wrong, but 99% of folk psychology turns out to be wrong, then science is still winning there. "
Is it though? We spend bazillions on psychology and nothing on pop psychology, so we'd have to contemplate how valuable that '10%' is to society - especially in the context that we may not even know 'which 10%'.
This is kind of a big problem.
majormajor 2021-08-18 17:56:01 +0000 UTC [ - ]
If, on the other hand, you just dumped all those psychology researchers out of the academic environment and sent them to marketing and advertising companies... are we better off?
TapWaterBandit 2021-08-19 00:04:21 +0000 UTC [ - ]
Honestly? Very possibly.
The reason is that regardless of your opinion of marketing/advertising (mine is not overly positive, while acknowledging that informing consumers of new products is broadly a worthwhile endeavour), at least if the psychology researchers were trying to implement their pet theories at a company, the market would quite quickly let them know whether or not those theories are horseshit. Markets have faults, of course, but if psychological theories were implemented at companies, you could reasonably easily tell whether they were effective, simply based on whether they increase sales. If they turn out to be nonsense, that acts as a signal to the individual, and to others, that this avenue of study is not worth further exploration.
Academia, by contrast, can often act as an artificial greenhouse for theories/hypotheses that won't survive contact with the real world but seem quite successful in the artificial environment.
dotcommand 2021-08-18 17:57:44 +0000 UTC [ - ]
> do you really think it’s true that 50-70% of biomedical research is fake?

I think there is a difference between "being shit" and "being fake" though. Fake seems more insidious and problematic.
Also, this brings up a very troubling issue. If 90% of the "research is fake/shit", and let's say that means 90% of the researchers are "fake/shit", then it means that 'scientific' consensus is also likely "fake/shit".
This is why I'm always skeptical of 'scientific' consensus. Science is about evidence, testing, etc. Not consensus - which is the realm of politics, law, etc.
And I'd suspect 99.99% of social 'sciences' is probably 'fake/shit' and that is used to push/change society/government/etc.
techfoolery 2021-08-19 00:02:26 +0000 UTC [ - ]
majormajor 2021-08-18 18:09:25 +0000 UTC [ - ]
(Fudging here around the 0.05 threshold, since stuff on that boundary is the most likely to produce false positives; I'm not sure how to use a more realistic range... I think this would just be an upper bound on false positives? Please correct/clarify if your probability/statistics knowledge here is deeper than mine!)
Let's say twenty different researchers have a similar idea, they all pursue it, and one of them gets (un)lucky with apparently statistically significant results. They're the only ones who try to publish, and nobody ever knows that it only worked 1 time out of 20 (or more?) - even the people whose experiments failed are only aware of their own attempt, and maybe they just messed something up.
Let's say 90% of all published results are non-reproducible. What sort of pipeline would be required to create that? For every reproducible, significant finding, there are 9 false-positives. For each of those false-positives there were 19 true-negatives. So about 1 out of 180 research attempts are reproducible truly-meaningful novel results. Does that sound reasonable? It's hard, right, especially to keep finding new things on top of everyone else's continued progress too? But it isn't necessarily malicious or fraudulent every step of the way - we just have our system structured in a way that biases in favor of creating false positives.
(Put in other numbers, say 10% of all research has significant results, then 90% of it doesn't, and at the 0.05 level, that's still (up to?) 4.5 false positives for every 10 truly statistically significant things, meaning close to 1 out of every 3 papers would still be expected to be a false positive.)
Publishing negative results would require a bigger publication industry, and more people spending more time filtering through it all, but you'd at least be able to compare to the literature to say "this looks like it worked, but ... it hasn't ever worked for anyone else."
u385639 2021-08-18 17:00:49 +0000 UTC [ - ]
screye 2021-08-18 19:24:02 +0000 UTC [ - ]
People look to get 3 things in a job: Compensation, Interest and Reputation.
The professor is a researcher who teaches. However, they do not want to merely be a teacher. Especially when it comes with a substantial pay cut in comparison to the industry. So, you offer them repute and interest by giving tenure and something to research. The university gets a better teacher for cheap and the professor gets an inflated sense of ego and self-actualization.
From the student's point of view, it opens up alternate avenues for promotion. If only the top 5 labs get funded enough to do good work, then entering those top 5 labs early is absolutely essential. Anyone who doesn't get on the bandwagon early will be left to the wolves, as with medical residencies. It will greatly exacerbate the competition in standardized exams and sideline students who don't abide by a rigid template early in life. (e.g. a lot of my friends went to tier-2 universities for their master's, did excellent work, and got into the top 5 labs purely because their top-50 lab was funded well enough for them to make up for lost time early in life)
I have also seen lab impact scores jump around substantially enough that, while the 90% hypothesis may be true, we have no way of picking out who will create the 10% that's gold. While half of the 10% might be reliably produced by the top 5 labs, the other half can be notoriously hard to pin down. You will see this in top industry groups (MSR, Brain) that hire exclusively out of the top 5 labs. Even there, the 90% number holds, despite there being a hiring system specifically built for capitalistic impact.
On the flip side, any academic field whose employability exists only within academia is a dictionary-definition pyramid scheme. It is a common accusation that's correctly levied on the liberal arts, but STEM should not be immune to it.
yuy910616 2021-08-18 17:34:29 +0000 UTC [ - ]
Of course - I think if anyone acted on the belief that 90% of their bets are duds, they would not be in the venture business. It's really interesting that a process could lead to a distribution, but knowing the outcome distribution is actually... useless?
majormajor 2021-08-18 17:05:29 +0000 UTC [ - ]
This seems like a non-thorough and fairly uninteresting answer to the question.
I would want to know how much useful-but-not-groundbreaking stuff still comes out of the "make work" crowd, how wide you need the funnel to be at the top to make sure you catch the people who can end up in the top labs and who make breakthroughs, how much of a "safety net" that you need to make sure people are motivated to jump into the funnel in the first place rather than going into something safer, etc.
A world where we have fewer "wasteful" make-work research jobs but also have fewer breakthroughs isn't great. Hell, you could call the majority of today's service- and information-oriented non-academic jobs "make-work" too (let's up those clickthrough rates! let's stream this video in 8k instead of 4k! let's move this money around slightly faster!). So is there necessarily much utility in moving "make-work" researchers into other fields?
sjg007 2021-08-18 21:16:24 +0000 UTC [ - ]
ren_engineer 2021-08-18 16:59:52 +0000 UTC [ - ]
satellite2 2021-08-18 21:49:57 +0000 UTC [ - ]
908B64B197 2021-08-18 21:00:04 +0000 UTC [ - ]
That's also true for software engineers. A handful of 10x can carry a company a long way.
version_five 2021-08-18 17:05:20 +0000 UTC [ - ]
(But this should be obvious to a VC so I may be reading it wrong)
JacobDotVI 2021-08-18 18:03:51 +0000 UTC [ - ]
nostrademons 2021-08-18 21:13:18 +0000 UTC [ - ]
Without being able to independently verify results, a lot of journals rely on signals like your research & results being noteworthy, which is anticorrelated with them being true. If our understanding of the world is at all correct, you'd expect most experiments to give boring, status-quo results, and your first impulse when you get something noteworthy is to assume the experimenter made a mistake. (Interestingly, this is how things work at the undergrad level - if you measure the speed of light at 1 million meters/sec, your prof will assume you screwed up.)
Probably we need a separate verification step after publication, with researchers evaluated on what fraction of their published results survive verification, not just how many papers they publish. Publish all the noteworthy results you get, but assume that published research doesn't mean much until it's been independently reproduced.
yuy910616 2021-08-18 17:39:55 +0000 UTC [ - ]
So let's be thankful of the crappy medium haha
jollybean 2021-08-18 17:40:31 +0000 UTC [ - ]
New business ventures are inherently risky.
Research is inherently risky, and we would expect 90% to 'fail' but for 90% of papers to be rubbish would be a matter of 'extreme corruption' along the lines of 90% of judges taking bribes etc..
"I don't see a better system."
Well, one thing we could do is punish people for knowingly publishing crap results, i.e. have a 'secondary review system' that investigates 'why' results were rubbish, and if there was malfeasance, well, put in place some kind of retribution.
We could also change our tune with respect to expectations, and come to 'expect' that the majority of research should publish the 'null' result, and that should just be it.
nostrademons 2021-08-18 21:20:03 +0000 UTC [ - ]
It's possible for 90% of papers to be rubbish without any ill intent or dishonesty on the part of the experimenter. They might simply suffer from "works for me" syndrome, as it afflicts software developers. In science, this equates to failing to control for all the possible confounding factors that might affect the result of the experiment. They might have carefully controlled for everything they thought of and gotten methodologically valid results, but then when another experimenter tries to replicate it, they find their experimental environment is different in a way that neither considered important but alters the results.
Finding all these confounders is the point of science. If you can't replicate the experiment, it means the results don't hold, but it doesn't mean the investigators were bad people - and assuming they are is unlikely to get better science.
yuy910616 2021-08-18 17:44:06 +0000 UTC [ - ]
cscurmudgeon 2021-08-18 23:46:28 +0000 UTC [ - ]
You need to fund avenues that don't look promising now. Some new developments may make them good. E.g. deep learning before 2011.
WalterBright 2021-08-18 19:04:49 +0000 UTC [ - ]
Xerox did it with the copy machine. Which then remade the business world. They did it again with the user interface, though it was Apple that capitalized on it.
Frank Whittle soldiered on for many years trying to invent the jet engine, funded by venture capitalism at the time. The government wasn't interested until he demonstrated flying jet aircraft. In the meantime the US government shut down Lockheed's nascent jet engine project.
FooBarBizBazz 2021-08-19 04:49:57 +0000 UTC [ - ]
> But I think of sociability or socializing as a collective action problem.
is excellent. He nails it in a way I hadn't articulated before. Marc disagrees, but I'm really with Richard on this one.
lambdatronics 2021-08-19 05:24:55 +0000 UTC [ - ]
https://www.strongtowns.org/journal/2021/1/6/college-campuse...
https://news.ku.edu/2018/03/06/study-reveals-number-hours-it...
lumost 2021-08-19 00:42:46 +0000 UTC [ - ]
Is it possible that the sciences and other fields have exceptionally high coordination taxes? Or is it likely that we’ve structured them such that only a small number of slots exist for people to move the needle.
The latter seems eminently true given the finite number of conference slots, and the former seems true given the need to convince more people of a given direction.
If such a dynamic exists, then the advent of communication technologies may actually hamper forward progress due to the elimination of smaller research networks.
quantified 2021-08-19 00:46:47 +0000 UTC [ - ]
Interesting perception, seems to have explanatory power. I’ll note that building games and gaming them has accelerated with tech. (Facebook, Tiktok, Amazon, REvil, Robinhood.) These aren’t institutional games, these are societal games. Different and fluid yet roughly defined groups.
highenergystar 2021-08-18 21:23:46 +0000 UTC [ - ]
Social media definitely has negative impacts (e.g. directly tied to mobs in India and Burma) and positive impacts (so much easier to stay connected across the world) - it would've been 'interesting' to hear a more nuanced take from one of the inventors of the modern internet
petermcneeley 2021-08-18 18:29:36 +0000 UTC [ - ]
I really enjoy it when people say exactly what they think.
david927 2021-08-18 19:57:07 +0000 UTC [ - ]
If they got lucky, it was because they were "super-elite." No one's going to win a nurture-or-nature argument but there's a clear self-interest in these situations to call nature what was, at least in part, nurture.
jdminhbg 2021-08-18 20:51:32 +0000 UTC [ - ]
david927 2021-08-18 21:36:24 +0000 UTC [ - ]
It's like talking about how there are "just a certain number of super-elite" players of pickle ball. It's conflating datasets of those who have been given access to the entire spectrum of attributes necessary to know and be good at pickle ball, with the vast seas of those who haven't. How can you say it's rare based on such a small data sample? If you were a pickle ball champion, would you really brag about how, "Let's face it, there are only a couple of us; we're just that good."
jdminhbg 2021-08-18 22:11:09 +0000 UTC [ - ]
david927 2021-08-18 22:31:12 +0000 UTC [ - ]
But if you're unfamiliar with the hubris and elitism that runs wildly unfettered in the Bay Area -- well, lucky you.
achillesheels 2021-08-18 22:07:17 +0000 UTC [ - ]
jstx1 2021-08-18 18:45:39 +0000 UTC [ - ]
The only way the statement could be false would be for everyone to be at the same level of skill/knowledge/aptitude/etc. but there's clearly some non-uniform distribution and regardless of where you draw the line to define super-elite, some people will fall in that group.
colinmhayes 2021-08-18 18:59:55 +0000 UTC [ - ]
majormajor 2021-08-18 19:11:57 +0000 UTC [ - ]
But he doesn't present much evidence for this beyond a speculative "If someone’s truly a member of the elite, are able to generate elite-level results, if you wanted to demotivate them and draw them out of the field, what would you do? You would surround them with mediocrity and drown them in bullshit" that's extrapolated from what the founders he's talking to are telling him.
One potential blind spot seems to be "maybe weeding through tedium is a good quality in many researchers, since much of research is more tedious than founding a startup." After all, he's basing this on talking to people who left the system, not talking to today's top researchers.
I think a bigger one is that he says even in his system, most (50%+) of the companies he funds fail to do anything interesting. So this is still, from the same sort of standard of efficiency applied to researchers here, woefully wasteful.
I'm also not really clear on how he thinks it should be changed.
abvdasker 2021-08-18 18:35:56 +0000 UTC [ - ]
thewarrior 2021-08-18 18:40:54 +0000 UTC [ - ]
"Imagine you're an expert in only one thing and it's computer programming. Books are a waste of time except as quick references; you can achieve full expertise just by teaching yourself through internet searches, discussion forums, and maybe the occasional blog post if you're really keen. When you have to solve a problem, you strip away as much of the context as possible, until you can isolate it down to a single phrase of logic. As soon as you solve that one isolated microproblem, the results blow up exponentially to satisfy all kinds of needs and desires you haven't even thought of.
Now imagine that even though you're doing journeyman skilled labor like a plumber or electrician (except it doesn't even require professional certification), you earn a salary well into the six figures - or even equity that grows to the millions or billions - in a Wall Street-like testosterone bubble where you and your colleagues unabashedly believe you're the smartest people in the room, the beneficiaries of a pure meritocracy, and you're Changing The World as much as you can when unimpeded by the hordes of fuzzies who just don't know how to think.
Of course you come to believe that anyone who claims to be a credentialed expert, and says this or that issue is actually much more complicated and contextual than you appreciate, is probably just a big faker whose whole field of study could be solved elegantly if a smart person like you just spent some hobby time stripping away the context to isolate that one neat trick of logic."
Not sure who wrote this originally found it here - https://www.reddit.com/r/SneerClub/comments/i7a24j/tech_brai...
achillesheels 2021-08-18 22:12:24 +0000 UTC [ - ]
It’s myopic to judge the tech world by mobile apps instead of the decades-long capitalizations that can bring mass-market supercomputers into the palms of our hands.
Now compare that to research interests over the same time horizon. We still have psychology departments that don't teach basic cellular biology as a graduation requirement!
abvdasker 2021-08-18 18:25:12 +0000 UTC [ - ]
> Traditionally if I wanted to work at a company in another country I would have to go to that country. I’d have to be an immigrant from my country, at least for a while. In a world of remote work and Zoom and Slack, I can now work for people anywhere in the world. I can work for companies doing any kind of knowledge work. Immigration policies apply to the atoms of human beings, they don’t apply to the bits. I can go to work for some company anywhere in the world if it’s a remote-friendly company. I may never travel there, I may never have to travel there.
Marc Andreessen is getting very close to saying the quiet part out loud. This is the clearest statement I've seen to date from a venture capitalist that white collar offshoring is almost certainly going to accelerate due to the shift towards remote work brought on by the pandemic.
exolymph 2021-08-18 19:06:14 +0000 UTC [ - ]
abvdasker 2021-08-18 19:32:07 +0000 UTC [ - ]
dboreham 2021-08-18 20:01:02 +0000 UTC [ - ]
mattnewton 2021-08-18 21:54:24 +0000 UTC [ - ]
snovv_crash 2021-08-18 20:28:51 +0000 UTC [ - ]
There's an ancient Dilbert about this somewhere...
908B64B197 2021-08-18 18:48:24 +0000 UTC [ - ]
If anyone isn't hiring international remote right now, they are losing a huge opportunity. Lots of people are looking at jumping ship, and western companies can outbid almost anyone for top talent.
mym1990 2021-08-18 18:21:18 +0000 UTC [ - ]
birdyrooster 2021-08-19 05:03:50 +0000 UTC [ - ]
cblconfederate 2021-08-18 20:06:47 +0000 UTC [ - ]
I often wonder if all that negative press about leaks, Facebook's privacy violations, etc. is just publicity stunts. So many tech articles seek outrage and smell fake.
hamburgerwah 2021-08-18 18:59:11 +0000 UTC [ - ]
dang 2021-08-18 20:02:17 +0000 UTC [ - ]
https://news.ycombinator.com/newsguidelines.html
Your story is interesting, of course; but you're not actually telling us the story.
Also, "total buffoon and had just been in the right place at the right time" is a pretty sweeping conclusion to say you realized "almost immediately".
HNPoliticalBias 2021-08-18 22:08:59 +0000 UTC [ - ]
wombatmobile 2021-08-18 19:14:31 +0000 UTC [ - ]
Sometimes it takes a while, and some deep context, and then some more context, ideally in a different set of circumstances, before you can assess someone's capabilities fairly.
Some people shine in their specialty domain, and flail outside of it.
Others are good in fair weather, but fragile in a crisis. Others are kind of the opposite, losing steam day to day, but rising to the occasion when shit happens.
Seasoned salespeople know all this. They do well consistently in spite of it all by presenting themselves credibly irrespective of their competency in any domain. They dress right for their audience, and they speak persuasively, always mindful of what their audience wants and needs to hear.
But listen to Andreessen talk! He sounds like he's never had to learn how to persuade anybody of anything. Well, not any regular people.
https://audioboom.com/posts/7923122-flying-x-wings-into-the-...
hamburgerwah 2021-08-18 19:41:11 +0000 UTC [ - ]
wombatmobile 2021-08-18 19:51:06 +0000 UTC [ - ]
Oh, I just googled it, and the answer is yes.
It's a curious thing, wealth accumulation.
jereees 2021-08-18 19:38:54 +0000 UTC [ - ]
daxuak 2021-08-18 16:31:47 +0000 UTC [ - ]
WalterBright 2021-08-18 18:49:35 +0000 UTC [ - ]
Interesting. I commonly hear asserted that breakthrough fundamental inventions don't come from capitalism.
yborg 2021-08-18 22:39:45 +0000 UTC [ - ]
WalterBright 2021-08-18 23:10:41 +0000 UTC [ - ]
I hear it all the time.
> certainly in the 19th and early 20th century many inventions were created by individual entrepreneurs, for example the Wright Brothers.
The Wright Bros are my usual reply to such assertions. Also the light bulb, transistor, jet engines, liquid fueled rocket engines, and on and on.
geomark 2021-08-19 00:30:29 +0000 UTC [ - ]
redis_mlc 2021-08-19 03:55:04 +0000 UTC [ - ]
Silicon Valley's origin is closely tied to the military for example.
All aviation either had military R&D funding or the military was an early customer (true for most of your examples: the Wright Brothers, and rocket and jet engines). Transistor production was scaled up for the US military.
Incremental advancements and commercial production are very capitalistic though.
What patent lawyers do say is that companies do invention, not individuals, which is pretty accurate these days.
The reason you're hearing anti-capitalist rhetoric these days is that young people hope a Marxist government will forgive their student loans. We now have a Marxist government, but they've resisted that so far.
Most young people don't even know what capitalism or Marxism are, or why Marxism has failed everywhere that it was tried.
WalterBright 2021-08-18 18:56:46 +0000 UTC [ - ]
Much of it came from the telegraph network that had been developed in the previous century. See "The Victorian Internet" by Standage.
Also, in the 1970s working on computers, everybody with two computers invented ways to hook them together, i.e. invented networking. Many became commercial, like Compuserve and MCImail, some became free, like the BBS systems. Then there's Ethernet and TokenRing for LAN networks.
(Note that DARPA figured out how to connect two computers as soon as they had two. It's just inevitable, not something coming from nowhere. Given two computers and the telephone lines, inventing a way to network them is going to happen in short order, just as water flows downhill.)
We'd have had an internet one way or another.
zepto 2021-08-18 22:46:25 +0000 UTC [ - ]
The DARPA contribution was literally to work out how to connect separate networks together into a practical wide area network.
The internet is a network of networks.
We would of course have had one in some form. The one we have is based on DARPA’s.
WalterBright 2021-08-18 23:00:45 +0000 UTC [ - ]
I know that. My point is that government funding was not the key to the internet. The two keys to the internet are computers and wires (going back to the telegraph system).
> The DARPA contribution was literally to work out how to connect separate networks together into a practical wide area network.
That was also invented several times. They were often called "gateways" at the time.
zepto 2021-08-19 00:03:21 +0000 UTC [ - ]
As for government funding not being key. I’m not sure we can say either way. It certainly was key to what we ended up with.
We’d have had some kind of internet in any case, but who knows what it would have looked like and what kind of tollbooths etc would have been set up on it had the model been different.
WalterBright 2021-08-19 00:54:43 +0000 UTC [ - ]
What a gateway does is connect two different protocols - just what gave it the name "inter"net.
> had the model been different
Indeed nobody can know the details of what might have been. But consider operating systems. Capitalism has delivered a free one that is arguably very, very good. Linux, if that's not obvious :-) Who could have predicted that? Capitalism also produced very high quality software that goes with it, for free.
It's possible we would have had something better than the internet protocol. For example, one that is completely decentralized and not dependent on a central authority to manage things like domain names and assign IP addresses. For example, GUIDs could be used instead. Are you sure that the current internet protocol is the best system? Or is it just that the internet protocol won out because it had a head start in the universities?
For another example of a fault in the internet protocol, say I go to google.com and it fails. What's at fault - the browser, the network card, the cable, the router, the cable modem, the cable to the house, the ISP, or Google's server? There appears to be no way to know; at least in 25 years of using the internet, it's still a matter of poking around the system in my house, rebooting the router, phoning the ISP to see if they know of a failure, etc.
All I get is a screen saying it didn't work.
zepto 2021-08-19 04:44:22 +0000 UTC [ - ]
Linus was at a free public university with free healthcare and the state supporting his living expenses when he created Linux. About as far from capitalism as you can get.
> Capitalism also produced very high quality software that goes with it, for free.
I like capitalism and defend it often, however I think that investment matters, regardless of whether it comes from the state or private sources.
> It's possible we would have had something better than the internet protocol.
Yes, or it’s possible we’d have a closed system run by a giant company.
WalterBright 2021-08-19 05:14:22 +0000 UTC [ - ]
That's a pretty hard stretch there. The university did not finance Linux, it financed his education. Linus could just as well have partied instead. Free healthcare is nice, but for the vast majority of 20 year olds, very little healthcare is needed.
bastawhiz 2021-08-18 23:17:09 +0000 UTC [ - ]
https://www.cybertelecom.org/notes/telegraph.htm
WalterBright 2021-08-19 01:18:39 +0000 UTC [ - ]
What worked was when Morse interested Amos Kendall, who helped set up the Magnetic Telegraph Company in 1845 with private funding. They set up a line between New York and Philadelphia, Boston, Buffalo, and on to the Mississippi.
This was profitable on its first day, and there was no stopping it after that.
bastawhiz 2021-08-19 03:49:23 +0000 UTC [ - ]
During the American civil war, the North's US Military Telegraph Corp ran 15,000 miles of cable.
Additionally, Marconi didn't invent the telegraph, he invented the radio telegraph.
WalterBright 2021-08-19 05:37:48 +0000 UTC [ - ]
As for the Civil War, of course the military funded its expansion. Communications are absolutely essential to a military.
For comparison, two years after Morse's 40-mile experimental line, in 1846, there were 2,000 miles of wire. Two years after that, 12,000 miles and twenty companies. In 1852, 23,000 miles, with another 10,000 miles under construction, nine years before the military began stringing wire.
WalterBright 2021-08-19 05:22:00 +0000 UTC [ - ]
dboreham 2021-08-18 19:51:49 +0000 UTC [ - ]
acchow 2021-08-18 20:26:28 +0000 UTC [ - ]
thanhhaimai 2021-08-18 20:55:01 +0000 UTC [ - ]
If one computer takes more than 10,000 sq ft, then it should be rare enough to be verifiable. For comparison, a large California house with 5+ bedrooms is around 3,000 sq ft.
mhh__ 2021-08-19 02:28:48 +0000 UTC [ - ]
I can't dig up the link right now but highly recommend reading the manuals they wrote for engineers working on the computer. They are from the good old days when manuals were closer to textbooks than what we have now - it starts with the basics of logic and electronics and basically intuits the computer inductively from there.
svachalek 2021-08-18 20:52:24 +0000 UTC [ - ]