Why is it so hard to be rational?
btilly 2021-08-16 14:19:13 +0000 UTC [ - ]
We should all know that given a belief about the world, and evidence, Bayes' Theorem describes how to update our beliefs.
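To make that concrete, here's a minimal sketch of a single Bayesian update in Python, with all numbers invented for illustration:

    # Prior belief "it will rain" updated on the evidence "clouds observed".
    p_rain = 0.3                  # prior (assumed)
    p_clouds_given_rain = 0.9     # likelihoods (assumed)
    p_clouds_given_dry = 0.4

    posterior = (p_clouds_given_rain * p_rain) / (
        p_clouds_given_rain * p_rain + p_clouds_given_dry * (1 - p_rain)
    )
    print(round(posterior, 3))    # 0.491: the clouds raise the belief from 0.30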
But what if we have a network of interrelated beliefs? That's called a Bayesian net, and it turns out that Bayes' Theorem also prescribes a unique answer. Unfortunately, working out that answer is NP-hard.
OK, you say, we can come up with an approximate answer. Sorry, no, coming up with an approximate answer that is within 1/2 - ε of the true probability, for any ε > 0, is ALSO NP-hard. It is literally true that under the right circumstances a single data point logically should be able to flip our entire world view, and determining which data point does it is computationally intractable.
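To see where the cost comes from, consider the brute-force approach: computing a marginal means summing the joint distribution over every assignment of the other variables, which is 2^n terms for n binary beliefs. A toy sketch (the uniform joint is a stand-in for illustration, not a real Bayes net):

    from itertools import product

    def marginal(joint_p, n, i):
        """P(x_i = 1), computed by summing the joint over all 2**n assignments."""
        return sum(joint_p(x) for x in product([0, 1], repeat=n) if x[i] == 1)

    uniform = lambda x: 0.5 ** len(x)     # made-up joint, just to show the count
    print(marginal(uniform, n=10, i=0))   # 0.5, after 1024 evaluations
    # at n = 50 interrelated beliefs this is ~1e15 terms; that's the wall

Smarter algorithms exploit the network structure, but the hardness results say no algorithm escapes this blow-up in general.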
Therefore our brains use a bunch of heuristics with known failure modes. You can read all the lesswrong you want. You can read Thinking, Fast and Slow and learn why we fail as we do. But the one thing that we cannot do, no matter how much work or effort we put into it, is have the sheer brainpower required to actually BE rational.
The effort of doing better is still worthwhile. But the goal itself is unachievable.
strulovich 2021-08-16 17:38:19 +0000 UTC [ - ]
An NP-hard problem, even one that cannot be approximated, can still have average inputs that are solvable efficiently.
Examples:
- NP-hardness is not sufficient for building crypto, because worst-case hardness says nothing about average-case hardness.
- Type inference for many programming languages is EXPTIME-complete, yet those languages prosper and compile just fine.
Beware the idea of taking a mathematical concept and proof and inducing from it to the world outside the model.
nostrademons 2021-08-16 18:37:10 +0000 UTC [ - ]
The article is asking why it's so hard to be rational though, i.e. follow a logically-valid set of inferences forward to an unambiguous conclusion. Assuming one of your premises is that correct rationality implies reasoning statistically about a network of interrelated beliefs, the intractability of Bayesian net inference is relevant to that.
munk-a 2021-08-16 23:06:56 +0000 UTC [ - ]
I think the article is more focused on those big decisions where rationality is certainly warranted and so often ignored. People who are highly skilled at life have developed their gut feelings and instincts to be able to determine which decisions they really need to sit down and think hard about and which ones they can mostly ignore.

When most people buy their first house, the decision is so immensely large and represents such a high value (more than half a million at least for a lot of city folk) that there is a desire to detach from it to free yourself from responsibility. Since you cannot sanely account for all factors, it is "safer" to protect your ego by delegating the decision entirely to your id; doing so allows you, after the fact, to entirely free yourself from any responsibility for your poor decision.

This, I think, is the main factor we need to fight against to make rational decisions: you must accept failure and be willing to be wrong without shame. Do your best to evaluate your options on important decisions and realize that there are a number of decisions you obviously can't fully rationalize out; you can only make your best attempt. But realize that making your best attempt and being wrong, as much as it might hurt your ego, is a better alternative than "letting it ride" so you can stand blameless on the far end.
The fight for rationality is mostly a fight against emotional fragility and intellectual laziness.
btilly 2021-08-16 21:53:44 +0000 UTC [ - ]
However, in practice, complex Bayesian nets do wind up being computationally intractable. Therefore attempts to build real-world machine learning systems consistently fall back on computationally tractable heuristic methods with rather obvious failure modes.
strulovich 2021-08-16 20:19:24 +0000 UTC [ - ]
User23 2021-08-16 20:20:04 +0000 UTC [ - ]
DiggyJohnson 2021-08-16 17:58:20 +0000 UTC [ - ]
I think using chaos theory / Bayesian concepts is a significantly better metaphor for "life as we experience it" than it is for the examples you gave.
morpheos137 2021-08-16 23:05:16 +0000 UTC [ - ]
amelius 2021-08-16 20:44:33 +0000 UTC [ - ]
yann2 2021-08-16 14:27:31 +0000 UTC [ - ]
The recommendation of the theory is that if you can't be rational about a specific problem, pick another problem, preferably a simpler one.
Unfortunately lots of chimps in the troupe are incapable of doing that and therefore we shall always have drama.
btilly 2021-08-16 15:44:30 +0000 UTC [ - ]
The demonstration that, in theory, updating a Bayesian net is a computationally infeasible problem was G. F. Cooper's in 1990 (for Bayesian networks). The stronger result that approximating the update is also computationally infeasible was Dagum, P. & Luby, M., 1993.
So Simon's work relates to what I said, but isn't based on it.
WastingMyTime89 2021-08-16 14:54:26 +0000 UTC [ - ]
It's a model, not a fact. As a model, it can't really be correct, only more or less accurate.
zepto 2021-08-16 17:32:03 +0000 UTC [ - ]
This is not true. Models of an external world may be only more or less accurate, but models of other models may be true or false. Mathematical proofs rely on this. Rationality itself is a model so models of rationality may be true or false.
WastingMyTime89 2021-08-16 18:03:38 +0000 UTC [ - ]
In economics, in a way that is not dissimilar to physics, "model" has a precise meaning. To quote Wikipedia, it is a simplified version of reality that allows us to observe, understand, and make predictions about economic behavior. You can't have a model of a model. That just doesn't really make sense.
> Mathematical proofs rely on this
I'm confused by what you want to say here. Mathematical proofs don't use models.
Every proved statement in mathematics can be built from axioms, which are presupposed true, by applying logical rules which are themselves part of the axiomatic system. Saying that something is mathematically proved basically means that given this set of rules we can build up to that point.
> Rationality itself is a model
Once again I'm fairly lost as to what you are trying to say. I'm fairly certain that for most accepted meanings of the word "model" and the word "rationality", rationality is in fact not a model, in the same way that a dog is not a theory.
zepto 2021-08-16 18:15:40 +0000 UTC [ - ]
You may want to look up the difference between formal models and informal models.
Since both rationality and the paper showing that it is bounded are based on formal models, it is reasonable to assume this is what we are talking about.
WastingMyTime89 2021-08-16 18:57:48 +0000 UTC [ - ]
> Since both rationality and the paper showing that it is bounded are based on formal models
There is no paper showing that "rationality" is bounded. Models used to consider actors making purely rational choices in the sense that they are always optimizing their utility functions using all available information. Bounded rationality is a different way of modeling actors' choice functions. It's just a different model. There is no model of models.
Still, I don't see what any of that has to do with the difference between formal and informal models. "Informal model" is a term I have never heard used outside of policy discussion. It's basically dress-up for "because the expert said so".
zepto 2021-08-16 19:49:52 +0000 UTC [ - ]
Understood.
It’s worth noting that the definition of a model that you said you were using doesn’t match with typical definitions of a formal model.
You aren’t talking about formal models, and I accept that you are only thinking in terms of economic models.
Perhaps that explains where the difference in understanding lies.
md224 2021-08-16 19:44:04 +0000 UTC [ - ]
Maybe the person you replied to was taking a model theoretic perspective?
loopz 2021-08-16 16:52:24 +0000 UTC [ - ]
WastingMyTime89 2021-08-16 17:49:59 +0000 UTC [ - ]
I mean, everyone knows it doesn't really work that way.
The actual question is: does viewing the average actor as trying to perfectly optimise their utility function using all the information available constitute a good estimation of how actors work in aggregate and does it yield accurate and interesting predictions?
The real insight of Simon in Models of Man is not that actors are not in fact perfectly rational. It's that you can actually model the limits of actors while keeping a fairly rigorous and manageable formalization.
techbio 2021-08-16 23:02:29 +0000 UTC [ - ]
loopz 2021-08-16 22:09:56 +0000 UTC [ - ]
carrolldunham 2021-08-17 01:33:23 +0000 UTC [ - ]
vendiddy 2021-08-16 19:33:46 +0000 UTC [ - ]
glial 2021-08-16 19:37:27 +0000 UTC [ - ]
ggm 2021-08-16 21:44:53 +0000 UTC [ - ]
Tell me, are you aware of the myriad alternate words to express your agreement with somebody else aside from "correct"? You aren't here as a judge of right or wrong. Semantically, philosophically, you're expressing agreement, not correctness.
Or .. am I incorrect...?
nonameiguess 2021-08-16 17:29:56 +0000 UTC [ - ]
Even beyond the hard process bottleneck on creating or lucking upon events that produce the evidence we need, however, there is also the limitation that Bayes only gives you a probability. It doesn't give you a decision theory or even a thresholding function. For those, you need a whole lot of other things like utility functions, discount rates, receiver operating characteristics and an understanding of asymmetric costs of false positives versus false negatives, that are often different for each decision domain.
And, of course, to get a utility function meaningful for humans, you need values. There is no algorithm that can give you values. They're just there as a basic primitive input to all other decision making procedures, yet they often conflict in ways that cannot be reconciled even within a single person, let alone across a society of many people.
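To illustrate the thresholding point with a toy example (all numbers invented): the posterior alone doesn't tell you what to do; the asymmetric costs do.

    p_disease = 0.12              # posterior from some Bayesian update
    cost_false_negative = 1000    # missing a real case (assumed units)
    cost_false_positive = 50      # treating a healthy person

    # Treat when the expected cost of skipping exceeds the expected cost of treating.
    expected_cost_skip = p_disease * cost_false_negative
    expected_cost_treat = (1 - p_disease) * cost_false_positive
    print(expected_cost_skip > expected_cost_treat)  # True: act at only 12% probability

Change the two cost numbers, which come from values rather than from Bayes, and the same 12% flips to "do nothing".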
lalaithion 2021-08-16 14:33:03 +0000 UTC [ - ]
AndrewKemendo 2021-08-16 20:54:03 +0000 UTC [ - ]
I do believe this is zero-sum in that improving on one set of decisions means not applying the same rigor to others.
This is often seen in the form of very smart people also believing conspiracy theories or throwing their hands up around other massive issues. As an example, the "Rationalist crowd" has de-emphasized work on climate change mitigation in favor of more abstract work on AI safety.
ret2plt 2021-08-16 22:07:53 +0000 UTC [ - ]
To be clear, the argument (in rationalist circles) is not that climate change is no big deal, it's that there's already a ton of people worrying about it, so it is better to allocate some extra resources to underfunded problems.
hanche 2021-08-16 14:44:02 +0000 UTC [ - ]
whatshisface 2021-08-16 17:13:56 +0000 UTC [ - ]
mensetmanusman 2021-08-16 18:04:35 +0000 UTC [ - ]
whatshisface 2021-08-16 19:30:28 +0000 UTC [ - ]
mensetmanusman 2021-08-16 19:39:49 +0000 UTC [ - ]
whatshisface 2021-08-16 19:44:16 +0000 UTC [ - ]
karpierz 2021-08-16 20:23:24 +0000 UTC [ - ]
btilly 2021-08-17 00:14:35 +0000 UTC [ - ]
A problem is in P if there is a polynomial time algorithm to solve it.
A problem is in NP if there is a polynomial time algorithm to check a purported solution.
A problem is NP-hard if a polynomial time algorithm for it would let every problem in NP be solved in polynomial time.
A problem is NP-complete if it is both in NP and NP-hard.
For Traveling Salesman, the NP-complete version is the decision problem: "...is there a tour with total weight less than X?" (Verification of a purported tour is linear: check it is Hamiltonian, check its weight.) The optimization version, "...find the tour with least total weight," is NP-hard but not known to be in NP. (To verify that a tour is truly optimal, you have to search. Oops.)
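A sketch of why verification is the easy part, using a hypothetical weight table (the dict-of-edges representation is made up for illustration):

    def verify_tour(weights, n, tour, x):
        """Check a purported certificate for "is there a tour with weight < x?":
        the tour must visit all n cities exactly once. Runs in O(n) lookups."""
        if sorted(tour) != list(range(n)):   # Hamiltonian: each city exactly once
            return False
        total = sum(weights[(tour[i], tour[(i + 1) % n])] for i in range(n))
        return total < x

    weights = {(a, b): abs(a - b) + 1 for a in range(4) for b in range(4) if a != b}
    print(verify_tour(weights, 4, [0, 2, 1, 3], x=13))  # True

    # Checking this certificate took linear time; *finding* one (or proving
    # none exists) is the part believed to need super-polynomial time.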
hanche 2021-08-16 17:53:10 +0000 UTC [ - ]
kirse 2021-08-16 14:46:16 +0000 UTC [ - ]
tines 2021-08-16 15:35:54 +0000 UTC [ - ]
wombatmobile 2021-08-16 14:40:15 +0000 UTC [ - ]
Why is that "the" goal?
Who sets "the" goal?
voxic11 2021-08-16 14:41:19 +0000 UTC [ - ]
analog31 2021-08-16 22:14:39 +0000 UTC [ - ]
I'm probably rational enough but also can't make sense of much of the rationalist literature, so I simply follow my own compass and hope for the best. I'm skeptical of Bayes Theater.
JohnPrine 2021-08-16 19:56:08 +0000 UTC [ - ]
Nav_Panel 2021-08-16 15:03:22 +0000 UTC [ - ]
Pragmatically, the goals themselves appeal to individuals who want to maintain conventional (liberal) morality yet also position themselves as superior, typically as a form of compensation.
nitrogen 2021-08-16 17:55:41 +0000 UTC [ - ]
This is why we can't have nice things. Any time someone tries to find more effective ways of making good decisions or accomplishing their goals, someone has to bring out the most tortured cynical interpretation to tear them down.
Nav_Panel 2021-08-16 18:02:56 +0000 UTC [ - ]
skosch 2021-08-17 01:33:18 +0000 UTC [ - ]
It's cheap and easy to make fun of the lesswrong community as a cringy cult of AI-obsessed neckbeards. And to be fair, the writing style on LW tends to support that impression. But I've found that most of the actual people within the rationality/AI safety/effective altruism communities don't fit that stereotype at all.
Nav_Panel 2021-08-17 03:54:14 +0000 UTC [ - ]
I consider EA separate but related, and it definitely qualifies as staking out a superior position within the constraints of liberal morality.
lmm 2021-08-17 02:36:46 +0000 UTC [ - ]
FeepingCreature 2021-08-17 05:08:15 +0000 UTC [ - ]
But then, as a lesswrongy person, maybe it's me, maybe "feeling superior" or whatever is just normal to me :shrug:
(But my actual theory is, it's probably just geographical.)
JohnPrine 2021-08-16 19:54:56 +0000 UTC [ - ]
klipt 2021-08-17 03:21:28 +0000 UTC [ - ]
JohnPrine 2021-08-17 15:36:26 +0000 UTC [ - ]
As a specific example, I made a comment to my roommate last winter about how I thought his girlfriend's hyper caution around COVID was limiting my personal freedom. I realized that I had been crass and apologized to him, but he told his girlfriend anyways and it caused a great deal of tension between the three of us. My own girlfriend told me I should apologize to her. I believed I had nothing to apologize for since I hadn't said anything to her directly, I didn't believe my roommate should have repeated the comment to her in the first place, and I had apologized to him for it already. My girlfriend gave me reasons why an apology was in order, though, and I assigned a lot of weight to her reasoning since I know her to be a more sensitive and emotionally intelligent person than myself. I was able to let go of the belief in my own righteousness and write a heartfelt apology, which did wonders to mend the relationship.
A previous version of me would have clung to the belief that I was in the right, and either not apologize or write a half-assed apology that would do nothing to fix the situation. The current version of me which strives to be rational was aware of my own biases, recognized that my internal map may not match the territory, and was willing to update based on the evidence from my girlfriend's greater authority on emotional matters.
snarf21 2021-08-16 15:25:27 +0000 UTC [ - ]
Take a non-political example: How safe are whole tomatoes to eat? What did the grocery store spray on them? Is it safe? Will it wash off? What about the warehouse where they were stored for months, what did they put on them to keep them from spoiling? What about the farmer, what did they spray on them to protect against pests? What is in the water, is it safe? Now we're ready to eat: Does anyone in my family have any kind of intolerance to raw tomatoes? And this is a pretty simple toy example... In general, we've collectively decided to trust in the good in people. We hope that if something is bad/a lie/harmful, then someone in the know will raise the alarm for the group.
heresie-dabord 2021-08-16 19:54:19 +0000 UTC [ - ]
The goal of rational thinking is not some conceit of perfection [1] but debugging the runtime for a better result. Humans are in fact very good at communication and at debugging language errors. They have evolved a rational capacity. It can evidently be developed but it needs to be exercised.
This is where the hypothesis of an educational system often enters the discussion.
[1] Galef and others call the "Star Trek" Spock character a Vulcan Strawman or Straw Vulcan. https://en.wikipedia.org/wiki/Julia_Galef
kazinator 2021-08-16 18:09:26 +0000 UTC [ - ]
I further maintain that it's definitionally impossible. Before we find it computationally impossible, we will find that we can't write a complete, detailed requirements specification defining what rational is.
(Of course, we recognize egregious irrationality when we see it; that's not what I mean; you can't just define rationality as the opposite of that.)
People can behave rationally (or not) with respect to some stated values that they have. But those can be arbitrary. So the requirement specification for rationality has to refer to a "configuration space", so to speak, where we program these values. This means that the output is dependent on it; we can't write some absolute test case for rationality that doesn't include this.
Problem is, people with different values look at each other's values and point to them and say, "those values are irrational; those people should adopt my values instead".
UnFleshedOne 2021-08-16 19:46:50 +0000 UTC [ - ]
Luckily we get our values from a bunch of heuristics developed through millions of years of biological and social evolution, so we mostly have the same ones, just with different relative weights.
Won't be true if we ever meet (or make) some other sentient critters.
kazinator 2021-08-16 20:55:24 +0000 UTC [ - ]
People basically do say that, though.
(Values can be contradictory/inconsistent. E.g. you say you value self-preservation, but you also enjoy whacking your head with a hammer. That would be a kind of irrational. That's not what I'm referring to though.)
UnFleshedOne 2021-08-16 21:53:43 +0000 UTC [ - ]
polote 2021-08-16 14:55:11 +0000 UTC [ - ]
Not all questions have answers. If you want to be rational when you are asked to answer those questions, you can just say "I don't know".
At the beginning of the pandemic, politicians were saying masks don't work. You could just say: well, if we transmit covid by air, then putting something in front of my mouth is going to decrease the spread. That's what being rational is. Of course that's not always going to be the right answer, but you have still thought rationally.
I'm not really sure what you are trying to prove. Of course being rational is possible. All people are rational for most of their decisions.
dahfizz 2021-08-16 16:46:22 +0000 UTC [ - ]
If I squint at a statement like this, I guess it could be called rational, but it is certainly not rigorous or convincing. You brush over too much and are making lots of assumptions.
Are these statements rational?
The sun is warm, so if I climb a ladder I will be closer to the sun and therefore warmer.
Masks impede airflow, so if I wear a mask I will suffocate.
Bleach kills germs, so drinking bleach will make me healthier.
It is very easy to make an incorrect idea seem rational. You should wear masks because rigorous science tells us that they are effective. That is the only valid justification. "Common sense" is used to justify a lot of junk science.
nonameiguess 2021-08-16 17:15:55 +0000 UTC [ - ]
clairity 2021-08-16 18:11:21 +0000 UTC [ - ]
you've really just glossed over the hard part, which is when and where masks work, which is in turn the difficult political problem to solve.
simplifying, covid spreads mouth-to-mouth with a brief stint in the air, not mouth-to-air-then-(much)-later-to-mouth, which is the mediopolitical narrative that's being pushed vehemently but irrationally, and upon which masking policies are erroneously based.
what's always ignored in these narratives is that the virus falls apart quickly all by itself outside the cozy confines of the body, not to mention floats away to oblivion quickly when outside.
if we're really concerned about masks working, we'd have to force people to wear them among friends and family in private spaces like homes, not outside and in grocery stores where they have basically no effect.
"masks work" is a grossly overreaching blanket political statement, not a summary of "the science". scientific evidence suggests masks reduce droplets (and aerosols, with better masks) being ejected into the air. there's less clear evidence that it reduces airborne viral particles being inhaled through the mask. but there's almost no evidence that the way we've deployed masks is doing much other than signalling our fears and concerns.
i'd be open to supporting mask policies that are based on actual evidence (e.g., wear them when socializing at home), but not the mediopolitically fearmongering policies we have.
not2b 2021-08-16 15:05:57 +0000 UTC [ - ]
polote 2021-08-16 15:10:45 +0000 UTC [ - ]
btilly 2021-08-16 16:01:57 +0000 UTC [ - ]
Second, your simplistic analysis demonstrated that you, personally, are ignorant of the real tradeoffs involved in whether masks work.
Wearing a mask reduces how much virus leaves your mouth. But when you breathe out, most of the virus is in larger droplets that quickly hit the ground. However, breathing out through a mask creates ideal conditions for forming an aerosol, which can allow more of the virus to stay in the air for an indefinite period of time. So there is a tradeoff, and there were reasons to question whether cloth masks were better than simple social distancing.
It turns out that what matters most is not that you get exposed, but rather the initial viral load that you get. You see, the virus will go on an exponential growth until the relatively fixed time it takes the immune system to figure things out and start shutting it down. If the virus gets a solid head start, the odds of serious illness go up. Therefore the lingering aerosol from a mask is (except if it accumulates in poorly ventilated indoor spaces) of less concern than an unmasked person talking directly to you.
So the result is that masks work. Even crappy cloth masks work.
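A toy model of the viral-load argument above, with made-up numbers (real kinetics are messier; this only shows why the head start matters):

    DOUBLING_TIME_H = 8     # assumed viral doubling time, hours
    IMMUNE_LAG_H = 120      # assumed lag before the immune response, hours

    def peak_load(initial_dose):
        # exponential growth for a roughly fixed time, then the immune system acts
        return initial_dose * 2 ** (IMMUNE_LAG_H / DOUBLING_TIME_H)

    # A mask that cuts the inhaled dose 5x cuts the peak load 5x as well:
    print(peak_load(1000) / peak_load(200))   # 5.0

The exposure still happens either way; what the mask changes is the size of the head start.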
s1artibartfast 2021-08-16 17:58:54 +0000 UTC [ - ]
Very little new knowledge was added.
>So the result is that masks work. Even crappy cloth masks work.
I would agree if you change "result" to "expert conjecture" and "work" to "probably do something".
But again, this was always known.
varjag 2021-08-16 16:43:04 +0000 UTC [ - ]
btilly 2021-08-16 17:29:42 +0000 UTC [ - ]
The last opportunity to study the effectiveness of mandating low-quality masks in preventing community spread during a pandemic was around a century ago. (Literally, the Spanish Flu epidemic.) In the meantime a lot of new and untried modeling tools had come into use, as well as updated disease models, and lots of reasons to question old data.
See https://www.albertahealthservices.ca/assets/info/ppih/if-ppi... for an idea of what was reasonable for educated specialists in public health to believe. Note phrases like, "There was agreement that although the evidence base is poor, the use of masks in the community is likely to be useful in reducing transmission from community based infected persons, particularly those with symptomatic illness."
So it is accurate to say that we had reason to believe that masks work. But it is easy to overstate how much we "knew" it to be true at the time.
varjag 2021-08-17 06:49:28 +0000 UTC [ - ]
The only reason masks were in doubt was the incompetent advice from WHO bureaucrats, and the bureaucrats at national advisory levels mindlessly parroting it. This was not evidence based.
btilly 2021-08-17 17:01:14 +0000 UTC [ - ]
To get everyone to mask up, we needed to put the general public in low quality masks. And there was a whole heck of a lot less research on low quality masks being used by the general public.
But, regardless, I have no percentage in convincing you of what is true. Have a nice day.
kbelder 2021-08-16 15:38:48 +0000 UTC [ - ]
polote 2021-08-16 15:55:34 +0000 UTC [ - ]
But if you don't trust the government, and for this specific case you followed them, then this is not rational.
mcguire 2021-08-16 18:41:34 +0000 UTC [ - ]
UnFleshedOne 2021-08-16 20:02:15 +0000 UTC [ - ]
tunesmith 2021-08-16 21:56:50 +0000 UTC [ - ]
As the science changed to suggest that COVID was aerosol, scientific opinions on masks got updated as well.
It also didn't help that some hyper-rational people got hung up on ranting about how masks weren't perfect, and how the virus could still get through if you wore a mask. It was as if they imagined they heard someone said "masks are 100% effective" and really really wanted to register their counterpoints. So they said "they don't work!" when they meant "they're not 100% effective!", and other people heard "they don't work!" and took it to mean "they're 0% effective!" That's one of those patterns you start to see all over the place when you know to look for it - people confusing "there exists" and "forall".
mcguire 2021-08-16 18:37:40 +0000 UTC [ - ]
Are straw-man statements rational?
"Then there is the infamous mask issue. Epidemiologists have taken a lot of heat on this question in particular. Until well into March 2020, I was skeptical about the benefit of everyone wearing face masks. That skepticism was based on previous scientific research as well as hypotheses about how covid was transmitted that turned out to be wrong. Mask-wearing has been a common practice in Asia for decades, to protect against air pollution and to prevent transmitting infection to others when sick. Mask-wearing for protection against catching an infection became widespread in Asia following the 2003 SARS outbreak, but scientific evidence on the effectiveness of this strategy was limited.
"Before the coronavirus pandemic, most research on face masks for respiratory diseases came from two types of studies: clinical settings with very sick patients, and community settings during normal flu seasons. In clinical settings, it was clear that well-fitting, high-quality face masks, such as the N95 variety, were important protective equipment for doctors and nurses against viruses that can be transmitted via droplets or smaller aerosol particles. But these studies also suggested careful training was required to ensure that masks didn’t get contaminated when surface transmission was possible, as is the case with SARS. Community-level evidence about mask-wearing was much less compelling. Most studies showed little to no benefit to mask-wearing in the case of the flu, for instance. Studies that have suggested a benefit of mask-wearing were generally those in which people with symptoms wore masks — so that was the advice I embraced for the coronavirus, too.
"I also, like many other epidemiologists, overestimated how readily the novel coronavirus would spread on surfaces — and this affected our view of masks. Early data showed that, like SARS, the coronavirus could persist on surfaces for hours to days, and so I was initially concerned that face masks, especially ill-fitting, homemade or carelessly worn coverings could become contaminated with transmissible virus. In fact, I worried that this might mean wearing face masks could be worse than not wearing them. This was wrong. Surface transmission, it emerged, is not that big a problem for covid, but transmission through air via aerosols is a big source of transmission. And so it turns out that face masks do work in this case.
"I changed my mind on masks in March 2020, as testing capacity increased and it became clear how common asymptomatic and pre-symptomatic infection were (since aerosols were the likely vector). I wish that I and others had caught on sooner — and better testing early on might have caused an earlier revision of views — but there was no bad faith involved."
"I’m an epidemiologist. Here’s what I got wrong about covid."(https://www.washingtonpost.com/outlook/2021/04/20/epidemiolo...)
notsureaboutpg 2021-08-16 15:31:23 +0000 UTC [ - ]
Rationally, the ability to distinguish colors varies between human beings, so much so that with a sufficient number of tomatoes (say 50), different people will give different answers for which are the greenest.
Knowing that your ability to distinguish these colors of tomatoes might not be as strong as, say, a tomato farmer's (since he likely works with these specific fruits and colors all the time), you may be rationally inclined to follow his logic in choosing which are the greenest.
Do you follow your intuition or trust an expert? Your contrived example is already difficult to actually make the most rational decision for.
6gvONxR4sf7o 2021-08-16 15:49:01 +0000 UTC [ - ]
varjag 2021-08-16 16:40:14 +0000 UTC [ - ]
irrational 2021-08-16 16:43:09 +0000 UTC [ - ]
jaredhansen 2021-08-16 17:06:26 +0000 UTC [ - ]
mcguire 2021-08-16 18:30:02 +0000 UTC [ - ]
irrational 2021-08-16 18:38:30 +0000 UTC [ - ]
mistermann 2021-08-16 19:24:51 +0000 UTC [ - ]
Tenoke 2021-08-16 17:22:01 +0000 UTC [ - ]
nicoburns 2021-08-16 20:49:38 +0000 UTC [ - ]
See also Gigerenzer's Ecological Rationality.
Tenoke 2021-08-16 21:49:25 +0000 UTC [ - ]
lisper 2021-08-16 15:47:29 +0000 UTC [ - ]
btilly 2021-08-16 17:04:09 +0000 UTC [ - ]
This is a serious question. We should always challenge our preconceptions. To take your examples:
1. Traditional Judeo-Christian religions all claim we should believe because of claims made in holy books of questionable provenance, held by primitive people who believed things like (for example) disease being caused by demons. What rational reason is there for believing these holy books to be particularly truthful? (I was careful to not include Buddhism, whose basis is in experiences that people have while in altered states of consciousness from meditation.)
2. The shortcomings of libertarianism involve various tragedies of the commons. (My favorite book on this being The Logic of Collective Action.) However the evidence in favor of most government interventions is rather weak. And the evidence is very strong that well-intended government interventions predictably will, after regulatory capture, wind up creating severe problems of their own. How do you know that the interventions which you like will actually lead to good results? (Note, both major US parties are uneasy coalitions of convenience kept together through the electoral realities of winner-takes-all. On the left, big labor and environmentalism are also uncomfortable bedfellows.)
3. To the extent that the observer is described by quantum mechanics, many-worlds is provably a correct description of the process of observation. In the absence of concrete evidence that quantum mechanics breaks down for observers like us, what rational reason is there to advocate for any other interpretation? (The fact that it completely violates our preconceptions about how the world should work is an emotional argument, not a rational one.)
lisper 2021-08-16 17:22:08 +0000 UTC [ - ]
lisper 2021-08-16 19:29:05 +0000 UTC [ - ]
http://blog.rongarret.info/2019/07/the-trouble-with-many-wor...
btilly 2021-08-16 20:18:36 +0000 UTC [ - ]
1. Many worlds is indeed what QM predicts should happen.
2. Popular descriptions are oversimplified and the full explanation is very complicated.
3. Even if many worlds is true, it doesn't change my experience and should not rationally change how I act when faced with quantum uncertainty.
If I am correct, then I'm in violent agreement with all three points. And am left with, "So until more data, I will provisionally accept many worlds as the best explanation."
My impression is that you seem to be left with, "If it is true, then it is irrelevant to my life, and so I don't care about whether it might be true."
lisper 2021-08-16 21:49:33 +0000 UTC [ - ]
No. Many-worlds is what the SE (Schrödinger equation) predicts should happen. But the SE != QM. MW does not explain the Born rule, which is part of QM's predictions. MW is also violently at odds with subjective experience. So MW is not a good explanation of what is observed.
btilly 2021-08-17 17:13:14 +0000 UTC [ - ]
Now let's modify the experiment to assume a hyper-intelligent cat, with access to a full physics laboratory inside of the box.
QM predicts that there is no experiment that is possible for the cat to conduct that can tell whether collapse happened. The QM description of what's going on in the box is guaranteed to be alien to the cat's experience, but perfectly predicts what the cat does experience. Furthermore, even though in this hypothetical, collapse happens when the box is opened, there is no experiment that can be done by the outside experimenter which can verify that collapse happened when the box opened, and not before. Nor is there any experiment that the experimenter can perform that confirms that collapse does not happen afterwards.
And yes, this includes attempts by the cat to confirm the Born rule. As far as the cat can determine, the Born rule will be true.
Therefore our assumption in this hypothetical that QM describes the cat leads to MW being true for the cat no matter what is ultimately true.
And this is what I mean by saying that, to the extent that the experimenter is described by QM, MW is true.
lisper 2021-08-17 17:43:42 +0000 UTC [ - ]
A.k.a. Wigner's Friend.
https://en.wikipedia.org/wiki/Wigner%27s_friend
> QM describes the cat leads to MW being true for the cat no matter what is ultimately true.
You should read this:
https://www.nature.com/articles/s41467-018-05739-8
And this:
btilly 2021-08-17 21:02:47 +0000 UTC [ - ]
As for the differences between Convivial Solipsism and Many Worlds, I am indifferent to them. It is immaterial to me whether I am a singular observer who can only be aware of part of the wave function, or I am one component of a superposition of observers, each of which is only aware of part of the wave function.
I personally lean away from solipsism because I do not think that I, or my observation of reality, are that important. But that is a preference. And I don't have any particular justification for it or reason to disagree with anyone with the opposite opinion.
breuleux 2021-08-16 22:37:29 +0000 UTC [ - ]
One thing I'm curious about: I haven't read the literature all that well, but my personal understanding of MWI, after trying to wrap my head around it, is that there's probably no branching or peeling at all: every possible configuration of the universe immutably exists and is associated with a complex amplitude. What does change are the amplitudes. When I make a choice at point A and the universe "splits" into B and C, the only thing that happens is that the amplitude in bucket A is split into buckets B and C. But there's no reason to think A, B and C were ever empty or will ever be empty: after all, some other state Z might pour amplitude into A at the same time A pours into B and C. We might even currently be in a steady state where the universal wavefunction is perfectly static, because every single "branch" is perfectly compensated by a "join". If so, MWI would challenge the very idea that existence is a binary predicate (it's actually a continuous complex amplitude). I'm honestly not sure how we're even supposed to reason about that thing.
Does that make any sense, or am I way off base?
hindsightbias 2021-08-16 17:54:58 +0000 UTC [ - ]
Even when SA himself eventually started questioning his response/allegations, few of the mob (there really is no other word for it) would accept it. All absolutist and conspiracy laden.
PG said keep your identity small. I've found few rationalists or libertarians of any bent who meet that criterion.
Jensson 2021-08-16 16:45:40 +0000 UTC [ - ]
jdmichal 2021-08-16 19:42:01 +0000 UTC [ - ]
This seems to be a pretty good overview:
didibus 2021-08-16 18:16:50 +0000 UTC [ - ]
The question is, can we organize and educate ourselves so we can leverage that parallel power and let each person become an expert in their area, with proper trust and incentives? And manage to pass along the previous generation's computation to the next, without corrupting the data?
Edit: And I forgot all the tools we've designed to help us compute all that, of which I'd count math as a tool to help us compute, and computers as another.
AndrewKemendo 2021-08-16 20:50:14 +0000 UTC [ - ]
I'd go further to say that there are real world issues that compound the variables. Namely that individual actions increasingly have global consequences, e.g. individual purchasing behaviors have externalities that the market is not pricing in, and which thus fall to the consumer to calculate.
Further, given that these are global issues, these kinds of calculations are game-theoretic by nature, making it even more complicated.
dwd 2021-08-16 22:29:54 +0000 UTC [ - ]
joe_the_user 2021-08-16 17:22:26 +0000 UTC [ - ]
yibg 2021-08-16 22:00:53 +0000 UTC [ - ]
threatofrain 2021-08-16 20:15:31 +0000 UTC [ - ]
ad8e 2021-08-17 03:35:40 +0000 UTC [ - ]
garbagetime 2021-08-16 21:06:35 +0000 UTC [ - ]
Should we? What of the problem of induction?
btilly 2021-08-17 17:15:26 +0000 UTC [ - ]
And neither logic nor mathematics offers a solution. In practice, however, we do. But, as any parent should know, we don't do it through a rational process.
ulucs 2021-08-16 17:12:07 +0000 UTC [ - ]
drdeca 2021-08-16 18:11:30 +0000 UTC [ - ]
Are you just saying “people aren’t logically omniscient, and can’t be because of incompleteness”?
tisthetruth 2021-08-16 17:59:40 +0000 UTC [ - ]
I would still like to see some studies which delve into whether sugar and caffeine are catalysts for biasing us towards system 1 and how they affect system 2, mindfulness, patience, etc...
6gvONxR4sf7o 2021-08-16 18:23:01 +0000 UTC [ - ]
One thing I'll add that drives me nuts is the fetishization of Bayesian reasoning I sometimes see here on HN. There are times that Bayesian reasoning is helpful and times that it isn't. Specifically, when you don't trust your model, Bayes' rule can mislead you badly (frequently when it comes to missing/counterfactual data). It's just a tool. There are others. It makes me crazy when it's someone's only hammer, so everything starts to look like a nail. Sometimes, more appropriate tools leave you without an answer.
Apparently that's not something we're willing to live with.
hinkley 2021-08-16 18:37:18 +0000 UTC [ - ]
I like to tell people that charts work better for asking questions than answering them. Once people know you look for answers there, the data changes. More so than it does for question asking (people will try to smooth the data to avoid awkward questions).
belter 2021-08-16 18:41:05 +0000 UTC [ - ]
belter 2021-08-16 18:38:22 +0000 UTC [ - ]
tomjakubowski 2021-08-16 20:19:51 +0000 UTC [ - ]
RogerL 2021-08-17 02:13:39 +0000 UTC [ - ]
MrPowers 2021-08-16 14:25:02 +0000 UTC [ - ]
Learning about logical fallacies and identifying them in conversations is great. Don't tell the counterparty about their logical fallacies in conversation, because that's off-putting. Just note them internally for a more rational inner dialogue.
Learning other languages and cultures is another way to learn about how different societies interact with objective truth. Living other places taught me a lot about how denial works in different places.
Thinking rationally is quite hard and I've learned how to abandon it in a lot of situations in favor of human emotions. How someone feels is more important than how they should feel.
anyfoo 2021-08-17 00:28:24 +0000 UTC [ - ]
This also had a rather frustrating effect. It is true that not just intense traveling (not in the sightseeing way), but also actually living in several different countries and cultures, changed my horizon a lot. It definitely had the effect you talk about.
But then what? You cannot tell your partner in discussion "you would not think like that if you had traveled/lived outside of your culture", and it's also impossible to send everyone off to travel in order to experience the same. Much less in the US, where for most of the country you cannot just hop on a train for a few hours to encounter a completely different language and culture. (I grew up in Europe and moved to the US as an adult, but I've also lived in several different European countries before, and traveled to far away places like Asia.)
nostromo 2021-08-16 15:50:32 +0000 UTC [ - ]
I see it everywhere, from my own decision making process to international politics. Just this morning I was thinking about it as I read the news about the US leaving Afghanistan, and last week talking with a friend who is staying at a bad job.
mcguire 2021-08-16 18:46:53 +0000 UTC [ - ]
And here's the answer: persistence is good when it is successful. If the activity is unsuccessful, it's an example of the irrational sunk cost fallacy. (Making decisions without knowledge of future events is quite hard.)
And the important lesson: If you bail at the first sign of adversity, no one can ever accuse you of being irrational. Of course, as the old saying goes, all progress is made due to the irrational.
clairity 2021-08-16 20:31:31 +0000 UTC [ - ]
the sunk cost fallacy is simply considering existing loss when deciding on continued investment (in time, money and other resources), when you should only consider future cost for future benefit. it's thinking erroneously that existing loss is not already locked in, that it's salvageable somehow. but no, it's already lost.
in a project with continuously updating probabilities of success, and under imperfect information, the go-or-no-go decision should only be based on the likelihood of future gains exceeding future losses, not future+existing losses.
in this framework, persistence would be having credible evidence (e.g., non-public information), not just belief, of the likelihood of future net gain relative to opportunity cost. it'd be irrational to be persistent simply on belief rather than credible information and probability estimation.
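a sketch of that decision rule (numbers invented):

    def should_continue(future_benefit, future_cost):
        # sunk costs are deliberately absent: they're identical on both branches
        return future_benefit > future_cost

    sunk = 50_000            # already spent; gone whether you continue or not
    future_cost = 30_000     # what finishing would still cost
    future_benefit = 20_000  # what finishing would be worth

    print(should_continue(future_benefit, future_cost))   # False: walk away
    # the fallacy is any version of this function whose answer changes
    # when `sunk` changes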
542354234235 2021-08-17 15:01:07 +0000 UTC [ - ]
After working on repairs, and after significant investment of time and money, you are not as far along as you thought you would be. You update your calculations of time and cost based on your progress so far and any new problems you have uncovered.
A persistent person looks at the costs of a fixer upper, sees it is still likely worth doing, and is willing to put in the additional effort they had not originally planned for to see the project through. But they can also look at the future costs, recognize that the house is likely a money pit and continued work would be unlikely to ever yield a return on their investment, and that the time and money they have already spent are gone no matter what, but that they can prevent additional loss.
Someone biased by the sunk cost fallacy sees both projects the same way. They look at the money pit and see that it is unlikely to show a return, but the thought of the time and money they have already spent being lost if they walk away influences them to continue.
To look at it another way, a persistent person would make the same calculation of the likely success of a project regardless if they came into it at the first point or the second point. They are persistent, so they won’t give up on something because it is more difficult than they originally thought and they won’t give up on worthwhile work just because it is hard. Someone biased by the sunk cost fallacy will not make the same calculation after they have invested effort, as they will hold on to already invested effort as a reason in and of itself to continue.
aidenn0 2021-08-16 22:30:24 +0000 UTC [ - ]
You can't go back in time and not work hard on something, so whether or not you should continue is purely a function of whether or not you think you will succeed, not a function of how much effort you've already put into it.
oldsklgdfth 2021-08-16 19:01:03 +0000 UTC [ - ]
It's not an easy task. But 10 minutes a day can add up and reinforce that information.
A related idea is cognitive distortion. It's basically an irrational thought pattern that perpetuates negative emotions and a distorted view of reality. One example many here can relate to is imposter syndrome. But to feel like an imposter you have to overlook your achievements and assets and cherry-pick negative data points.
wyager 2021-08-16 18:25:34 +0000 UTC [ - ]
jitter_ 2021-08-16 20:16:51 +0000 UTC [ - ]
Can you elaborate on that?
This really piqued my interest. I feel like logic is easy to apply retrospectively (especially so for spotting fallacies), but trying to catch myself in a fallacy in the present feels like excessive second-guessing and overanalyzing. The sort that prevents forward momentum and learning.
Would you by any chance have any recommendations on reading on the topic?
wyager 2021-08-16 20:56:47 +0000 UTC [ - ]
Intuitively, people find “bob is an idiot so he’s wrong” a reasonable statement.
Technically, the implication does not hold (stupid people can be correct) and this is an ad hominem fallacy.
However, if we analyze this statement from a Bayesian standpoint (which we should), the rules of entailment are different and actually bob being stupid is evidence that he’s wrong. So maybe this is actually a pretty reasonable thing to say! Certainly reasonable people should use speakers’ intelligence when deciding how much to trust speakers’ claims, even though this is narrowly “fallacious” in an Aristotelian sense.
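A quick sketch of that Bayesian reading, with invented numbers: "bob is an idiot" shifts the posterior that the claim is wrong, even though idiot => wrong is not a valid deduction.

    p_wrong = 0.5                # prior: the claim is equally likely right or wrong
    p_idiot_given_wrong = 0.6    # assumed: wrong claims come from unreliable speakers more often
    p_idiot_given_right = 0.2

    posterior = (p_idiot_given_wrong * p_wrong) / (
        p_idiot_given_wrong * p_wrong + p_idiot_given_right * (1 - p_wrong)
    )
    print(round(posterior, 2))   # 0.75: evidence that Bob is wrong, not proof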
I’m not aware of any reading on this topic. It seems under-explored in my circles. However I know some other people have been having similar thoughts recently.
RonaldRaygun 2021-08-17 17:33:56 +0000 UTC [ - ]
However I myself would probably label the statement "Bob is an idiot" (or perhaps less abrasively, "Bob has often been wrong in the past in easily verifiable ways") not as evidence that he's wrong per se, but as a signal, possibly a rather strong signal, that he is likely also incorrect in the current matter.
A minor semantic quibble, but in my own experience I've found that conceiving of it as such helps frame the situation as a "sensor fusion of individually unreliable data sources" type of problem, as opposed to one of "collecting experimental results in a logbook and deriving conclusions from them."
The latter of which can lead pretty seamlessly to a towering edifice of belief built upon some ultimately pretty shaky foundations. Ask me how I know ;)
wyager 2021-08-17 19:34:37 +0000 UTC [ - ]
adam_arthur 2021-08-17 01:02:07 +0000 UTC [ - ]
It's important to understand that something being a "logical fallacy" just implies that you can't unilaterally justify conclusion X by using reasoning Y.
But that does not mean that reasoning Y is not valid or helpful in understanding conclusion X.
Ultimately it's important to justify your views with sound reasoning, but life is full of heuristics, so often use of heuristics to reach a conclusion can be reasonable. It just means the conclusion is not definitive from a logical point of view.
Ideally you use a combination of logically sound and heuristic based statements to support an argument.
Following your Bob example... it's important that the person making the argument uses stronger reasoning than just calling Bob an idiot. But agreed that it's a totally valid point of supporting evidence, assuming that "Bob is an idiot" is a fairly agreed-upon statement.
newbamboo 2021-08-16 16:55:57 +0000 UTC [ - ]
contravariant 2021-08-16 23:40:22 +0000 UTC [ - ]
As far as arguments go "That's an XXX fallacy" is one of the weaker ones, if not fallacious in and of itself.
SMAAART 2021-08-16 14:16:16 +0000 UTC [ - ]
Big business wants people to buy things they don't need, with money they don't have, to impress people they don't like
Politicians want people who will drink the Kool-Aid and follow what they (the politicians) say (and not what they do)
Religions... well, same.
And so all messaging, from advertising to movies to TV narratives, is about hijacking people's feelings and suppressing rationality. Common sense is no longer common, and doesn't make much sense.
ret2plt 2021-08-16 22:45:17 +0000 UTC [ - ]
jimbokun 2021-08-16 15:56:01 +0000 UTC [ - ]
They are rejecting the authorities that in the past have tried to associate themselves with "rationality". The political think tanks. The seminaries. The universities. Government agencies. Capitalist CEOs following the "invisible hand" of the market.
All of these so-called elites have biases and agendas, so of course none of them should be accepted at face value.
I think what's missed, is rationality is not about trusting people and organizations, but about trusting a process. Trusting debates over lectures. Trusting well designed studies over trusting scientists. Trusting free speech and examining a broad range of ideas over speech codes and censorship. Trusting empirical observation over ideological purity.
This is the value system of the so-called "classical liberals", and they are an ever more lonely and isolated group. There is a growing embrace of authoritarianism and defense of tribal identity on both the "left" and the "right" taking its place.
pessimizer 2021-08-16 18:46:41 +0000 UTC [ - ]
DoingIsLearning 2021-08-16 14:20:53 +0000 UTC [ - ]
cortesoft 2021-08-16 14:31:57 +0000 UTC [ - ]
chromaton 2021-08-16 14:58:25 +0000 UTC [ - ]
tim333 2021-08-17 10:31:44 +0000 UTC [ - ]
toshk 2021-08-16 14:36:25 +0000 UTC [ - ]
athenot 2021-08-16 14:22:15 +0000 UTC [ - ]
Siira 2021-08-16 14:38:22 +0000 UTC [ - ]
marcod 2021-08-16 18:06:20 +0000 UTC [ - ]
kerblang 2021-08-16 18:07:07 +0000 UTC [ - ]
BurningFrog 2021-08-16 19:24:41 +0000 UTC [ - ]
We know you're talking about other engineers, and we agree about those fools!
kerblang 2021-08-16 20:22:51 +0000 UTC [ - ]
DamnYuppie 2021-08-16 18:20:08 +0000 UTC [ - ]
nescioquid 2021-08-16 23:46:05 +0000 UTC [ - ]
_moof 2021-08-17 00:53:45 +0000 UTC [ - ]
Also, you know how software engineers like to think that they're rocket scientists? Well, it brings me no pleasure to report that rocket scientists think they're software engineers.
PicassoCTs 2021-08-16 19:25:03 +0000 UTC [ - ]
The real interesting thing here is the answer to why emotions work as they do, and what the patterns and bits are that trigger them. To turn over that particular rock is to go to some deeply disturbing places, and to lose the illusion that emotions make one more "human". Meanwhile, if one's reaction is more hard-coded, shouldn't it be considered more machine-like?
alecst 2021-08-16 14:03:28 +0000 UTC [ - ]
Rationality, to me, is really about an open-minded approach to beliefs. Allowing multiple beliefs to overlap, to compete, to adapt, without interfering too much with the process.
polote 2021-08-16 15:14:55 +0000 UTC [ - ]
If you want to be rational about an opinion, you have to think first, "what are my hypotheses?" Most people start with the opinion and then work down to the hypotheses. It can't work like that. It's the hypotheses plus the logic that should create an opinion, not the other way around.
sjg007 2021-08-16 15:13:02 +0000 UTC [ - ]
Focus on yourself and controlling your emotions. Be the calm.
XorNot 2021-08-16 14:09:28 +0000 UTC [ - ]
This doesn't seem very rational. If your beliefs are in conflict and you're content to not resolve that, then pretty much by definition you're accepting a logical inconsistency.
If resolving the intersection doesn't lead to a new stable belief system, then aren't you basically going with "whatever I'm feeling that day"?
claudiawerner 2021-08-16 14:44:41 +0000 UTC [ - ]
However, the drive for total and pure consistency is also misguided in my judgement. One reason why we usually feel so motivated and conflicted (to the point where it can lead to depression) with inconsistency is the psychological effect of cognitive dissonance. It's not clear to me that the only way to quieten cognitive dissonance is to resolve the dissenting thoughts.
Another way is to accept that not everything needs to be resolved. This can be great for mental health - again, just in my experience. Don't let the (sometimes irrational) effects of cognitive dissonance override your decision making. Resolution can work, but so can acceptance.
jcims 2021-08-16 14:43:22 +0000 UTC [ - ]
This is just my perspective, but very few beliefs or values map to the whole of reality... they tend to bind to certain aspects of it with a variable priority along the spectrum of that particular dimension, whether it's personal agency, the color red, public health, spiders, etc.
However, reality rarely provides us with the ability take a position purely on one factor...nearly every context in which a decision is required operates at the nexus of an uncountable number of these dimensions. Some you can feel swelling to the fore as their slope in your mental 'values' model increases, others stay dormant because you don't see how they apply. This is how most of my decisions that might look outwardly 'inconsistent' arise, there are confounding factors that dominate the topology and steer me in a different direction.
alecst 2021-08-16 14:19:46 +0000 UTC [ - ]
And, also, sometimes you think you've settled on the right path, but then you later get a new piece of information and have to reevaluate.
So to me it's not so cut and dry.
mindslight 2021-08-16 15:36:39 +0000 UTC [ - ]
This dual-thinking is related to the computer security mindset - you can't naively write code thinking your assertions will simply hold as you intend, but rather you need to be continually examining what every assertion "gives away" to a hostile counterparty.
There are alternative systems of logic that attempt to formalize reasoning in the presence of contradictions, to keep a single contradiction from being able to prove everything. For example, minimal logic and paraconsistent logic. These feel much more in line with reasoning in an open world where a lack of a negative doesn't necessarily imply truth. The focus on a singular "logic" that asserts that everything has some single rational "answer" is a source of much of our modern strife.
nvilcins 2021-08-16 14:29:42 +0000 UTC [ - ]
Dealing with contradictions in our own beliefs (paradoxes) is a part of life. The rational approach is to accept that and "fuse" those beliefs carefully, not (a) accept one and reject the others or (b) avoid the topic entirely.
lotsofpulp 2021-08-16 14:42:53 +0000 UTC [ - ]
If you are using contradicting assumptions, then you should probably check to see if you are doing so because you want the conclusion that you are getting from the assumption.
amanaplanacanal 2021-08-16 16:29:47 +0000 UTC [ - ]
lotsofpulp 2021-08-16 17:12:58 +0000 UTC [ - ]
karmakaze 2021-08-16 15:27:39 +0000 UTC [ - ]
antisthenes 2021-08-16 19:18:36 +0000 UTC [ - ]
That's such an incredibly rare occurrence that having a stable belief system far outweighs its potential drawbacks. Not to mention that rationality itself encompasses the ability to make such a switch anyway if the new information actually does upend volumes of "settled" knowledge.
A much bigger problem, though, is people lacking critical thinking skills to adequately assign probabilities to the new information being valuable/useful/correct.
Hint: it's very low. (In the current stage of civilization; there were definitely periods when it was different.)
karmakaze 2021-08-16 22:57:23 +0000 UTC [ - ]
bluetomcat 2021-08-16 15:41:11 +0000 UTC [ - ]
s1artibartfast 2021-08-16 21:33:39 +0000 UTC [ - ]
jscipione 2021-08-16 16:53:35 +0000 UTC [ - ]
President Dwight D. Eisenhower put it succinctly in his farewell address to the nation:
"The prospect of domination of the nation's scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded. Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific technological elite."
WhompingWindows 2021-08-16 15:23:30 +0000 UTC [ - ]
In meditation, a common teaching is to examine an object for a long period, really just stare at it and allow your mind to focus on it fully. I see a coffee mug, it has a handle and writing on it, it's off-white and has little coffee stains. This descriptive mind goes a mile-a-minute normally, but eventually you can break through that and realize, this is just a collection of atoms, this is something reflecting photons and pushing back electrically against my skins' atoms. Even deeper, it's just part of the environment, all the things I can notice, like everything else we care about.
Such exercises can help reveal the nature of mind. There are many layers of this onion, and many separate onions vying for our attention at once. Rationality relies upon peeling back these superficial layers of the thought onion to get towards "the truth." That means peeling back biases, emotions, hunches, instincts, and all the little mental heuristics that are nice "shortcuts" for a biologically limited thinker.
But outside our minds, how is there any rationality left? It feels like another program or heuristic we use to make decisions to help us survive and reproduce.
danans 2021-08-16 16:51:09 +0000 UTC [ - ]
If our ancestors had made the rational assessment that a predator was unlikely to be hiding behind the bush, that would have worked right up until the day they got eaten.
Irrationally overestimating threats and risks is not optimal in any single encounter, but as long as it keeps you alive, it can be optimal in the long run.
Using irrational stories to enable group cohesion and coordination is a similarly irrational but intrinsic way of being that also provides an evolutionary advantage.
Rationality, however, is an incredible optimization tool when operating in domains that are well understood, like the example of stereo equipment that the author gave in the article. It can also help in the process of expanding knowledge by helping us systematically compare and contrast signals.
But it doesn't prevent the lion from eating you or the religious or temporal authority from ostracizing you from the safety of the settlement, and it may even make both of those outcomes more likely.
SamBam 2021-08-16 17:44:37 +0000 UTC [ - ]
That wouldn't have been a rational assessment, because it wouldn't have been an accurate assessment of the risks of being wrong, or of the behavior required to avoid them.
If there's only a 1% chance that a predator is behind a bush, and that predator might eat you, it's absolutely rational to act as though there is a predator. You'll be seeing lots of bushes in your life, and you can't escape from those 1% chances for long.
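The arithmetic behind "you can't escape from those 1% chances for long", as a back-of-the-envelope sketch (my numbers; each encounter assumed independent):

    # Chance of never being ambushed after walking carelessly past
    # n bushes, each hiding a predator with independent probability 1%.
    p_predator = 0.01
    for n in (10, 100, 500):
        p_safe = (1 - p_predator) ** n
        print(f"{n:3d} bushes: {p_safe:6.1%} chance of staying safe")
    # 10 bushes: 90.4%, 100 bushes: 36.6%, 500 bushes: 0.7%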
The same thinking is why it would have been rational to try to avoid global warming 30 years ago. Even if the science was not settled, in the worst-case scenario you'd have "wasted" a bunch of money building out green energy production. In the best-case scenario, you saved the planet.
fallous 2021-08-16 19:38:40 +0000 UTC [ - ]
Avoidance of all possible risk is a recipe for paralysis. Part of being rational is evaluation of risks vs rewards as well as recognizing the dangers of unintended consequences and the fact that nearly all meaningful decisions are made with incomplete information and time limits.
slingnow 2021-08-16 20:05:09 +0000 UTC [ - ]
The OP merely stated you should adjust your behavior to the 1% chance. That would include weighing it against the risk of dying from dehydration, in your example.
lazide 2021-08-16 17:15:58 +0000 UTC [ - ]
In the past it was rational to be worried about being jumped by a predator from behind a bush; if you don't know whether or not there is a predator, it is perfectly rational to be worried about such a concern!
Same with diseases when you don't know what is causing them, etc.
There's a tendency to dismiss, as irrational, older concerns from a time when there was a severe lack of information; but when you know your limits and can see the results, there is no other rational way to behave except to be concerned about, or avoid, those things. And while it is not rational to believe clearly contradictory religious dogma that covers the topic, it is rational to follow or support it when it clearly aligns with visibly effective methods encoded in it for avoiding disease and other problems.
danans 2021-08-16 17:39:06 +0000 UTC [ - ]
I think we agree, but I also think you are using "rational" here in the colloquial sense to mean the "smartest" thing to do.
The article, and my comment in response, uses the traditional definition of "rational" as something derived from logic, and not from impulse or instinct.
The two definitions are not the same (not that one is better than the other, they just mean different things).
lazide 2021-08-16 18:00:52 +0000 UTC [ - ]
If you don’t know what is behind x thing, and every y times someone walks by a thing like x thing they get jumped by a leopard, then only walk by x thing when the risk is worth it. Which it rarely is.
If you’re referring to formal logic, then sure - but almost no one in that thread seems to be using that definition either. Formal logic is incredibly expensive (mentally), and only a few percent of folks even now can afford to use it with any regularity.
wyager 2021-08-16 18:30:05 +0000 UTC [ - ]
UnFleshedOne 2021-08-16 20:59:31 +0000 UTC [ - ]
meatmanek 2021-08-16 23:31:46 +0000 UTC [ - ]
In Bayesian decision theory, you'd choose the action (walk directly by the bush; walk by the bush but have your guard up; steer clear of the bush) that minimizes your loss function (e.g. probability of dying or probability of your blood line dying out). You'd end up picking a path that balances the risk of being eaten by a lion with the cost of having to walk further (and thus having less time and energy to gather food; or tripping and cutting yourself and dying of infection; or whatever).
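As a minimal sketch of that calculation (the action names, probabilities, and losses here are mine and purely illustrative):

    # Bayesian decision theory: pick the action with minimum expected loss.
    # All probabilities and losses are illustrative assumptions.
    p_lion = 0.01  # prior probability a lion is behind the bush

    # (loss if a lion is there, loss if it isn't), in arbitrary units
    actions = {
        "walk directly by":   (1000, 0),  # likely eaten vs zero detour cost
        "pass with guard up": (200, 1),   # better odds, small energy cost
        "steer well clear":   (5, 10),    # safe, but a long risky detour
    }

    def expected_loss(if_lion, if_clear):
        return p_lion * if_lion + (1 - p_lion) * if_clear

    for name, losses in actions.items():
        print(f"{name:18s} -> expected loss {expected_loss(*losses):.2f}")
    best = min(actions, key=lambda a: expected_loss(*actions[a]))
    print("best action:", best)  # "pass with guard up" under these numbers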
mbesto 2021-08-17 04:12:14 +0000 UTC [ - ]
FinanceAnon 2021-08-16 15:20:28 +0000 UTC [ - ]
Simple example:
Let's say the same pair of shoes is available in two different shops, but in one shop it's more expensive. It seems more rational to buy it in the cheaper shop. However, what if you've heard that the cheaper shop is very unethical in how it conducts its business? Is it still more rational to buy the shoes there?
And then you might also start considering this situation "in the grand scheme of things" - in the grand scheme of things does it make any difference if I buy it in shop A or B?
And at what point does it become irrational to overthink simple things in the name of being rational? What if trying to always be rational is stressing you out, and turns out to be worse in the long run?
UnFleshedOne 2021-08-16 21:19:08 +0000 UTC [ - ]
If consumer ethics is important to you then it obviously warrants some deliberation, weighted by an upper bound of your potential impact. But identifying areas of meaningless choice and simply choosing randomly (and not even caring if the choice is sufficiently random) frees up a lot of mental energy.
dgant 2021-08-17 14:39:08 +0000 UTC [ - ]
MisterBastahrd 2021-08-16 17:42:21 +0000 UTC [ - ]
Some will say that buying from Amazon simply perpetuates Amazon... but Amazon is so large at this point that it doesn't matter WHAT I do. So ultimately, is the world better off with my two donations from my Amazon purchase or giving my money away for the same product to ShoeCo?
SamBam 2021-08-16 17:48:33 +0000 UTC [ - ]
If your donations have some tiny bit of meaning to them, then removing a tiny bit of business from Amazon and paying your local shopkeeper probably also has meaning.
notahacker 2021-08-16 18:51:01 +0000 UTC [ - ]
(notwithstanding better objections to the original example: in practice, most donors' finances aren't so tight that buying the $90 product rather than the $100 one is really necessary to free up funds for a worthy cause, as opposed to emotionally salving the donor's conscience for buying from an unworthy vendor...)
vdqtp3 2021-08-16 18:30:34 +0000 UTC [ - ]
It might be fair to say that removing business from Amazon has no real impact but giving that business to a small business does.
MisterBastahrd 2021-08-16 23:57:16 +0000 UTC [ - ]
wizzwizz4 2021-08-16 14:37:18 +0000 UTC [ - ]
This matches my observations, too.
> Cowen suggested that to understand reality you must not just read about it but see it firsthand; he has grounded his priors in visits to about a hundred countries, once getting caught in a shoot-out between a Brazilian drug gang and the police.
kubb 2021-08-16 14:43:44 +0000 UTC [ - ]
someguy321 2021-08-16 14:53:02 +0000 UTC [ - ]
I don't mind if part of his motivation is to impress others, or if it's wasteful, etc. Why would his motivations have to be pure for it to be meaningful for him?
karmakaze 2021-08-16 15:09:21 +0000 UTC [ - ]
kubb 2021-08-16 15:05:14 +0000 UTC [ - ]
You could understand more about a country by studying it from home than by visiting it for a week.
I don't like that it's presented as a lifestyle that people should strive to pursue. I know certain people here will vehemently oppose this opinion, because in effect it's a critique of them or that which they admire.
Retric 2021-08-16 15:19:42 +0000 UTC [ - ]
No, you really can’t understand a culture from a week of study the same way you can from being there for a week. The issue is the millions of unknown unknowns that you never really consider. How large is people’s personal space, where do they stand and look in an elevator, what’s traffic like, how loud are people, etc.? Of course a week or three isn’t that long, but there are real diminishing returns here.
On the other hand, personal experience is very narrow in scope. You’re never going to find out country-wide crime rates by wandering around for a week.
tonyedgecombe 2021-08-16 17:53:41 +0000 UTC [ - ]
I suspect you have to live and work in a place to really understand it. If you are wealthy and visiting a poor country, there is virtually zero chance: you will always be too insulated from the reality.
pessimizer 2021-08-16 18:54:59 +0000 UTC [ - ]
michael1999 2021-08-17 03:01:17 +0000 UTC [ - ]
zepto 2021-08-16 17:37:25 +0000 UTC [ - ]
SamoyedFurFluff 2021-08-16 15:06:50 +0000 UTC [ - ]
skybrian 2021-08-16 15:56:30 +0000 UTC [ - ]
But you’re not going to learn the same things you would from travel. For example, you’re not likely to learn another language if everyone you talk to speaks English. Similarly for learning about other cultures that aren’t near you.
But I’m not sure how much brief travel to see the tourist sites helps, and hanging out with expats might not help so much.
legrande 2021-08-16 13:59:32 +0000 UTC [ - ]
jjbinx007 2021-08-16 14:06:43 +0000 UTC [ - ]
There's a YouTube channel (1) called Street Epistemology which has a guy interview members of the public and ask them if they have a belief they hold to be true such as "the supernatural exists" or "climate change is real" or "x is better than y".
He then asks them to estimate how certain they are that it's true.
Then they talk. The interviewer asks a question and makes notes, then tries to summarise the reply. He questions how they know what they think they know and at the end he asks them to again say how confident they are that what they said is true.
It's fascinating to see people actually talk about and discuss what are usually unsaid thoughts, and it exposes some glaring biases and logical fallacies.
legrande 2021-08-16 14:32:53 +0000 UTC [ - ]
Thanks for correcting me. I will refrain from ever using virii again!
digitalsushi 2021-08-16 14:52:06 +0000 UTC [ - ]
WhompingWindows 2021-08-16 15:15:32 +0000 UTC [ - ]
jklinger410 2021-08-16 14:10:27 +0000 UTC [ - ]
Exactly what you said. Once you accept one toxic thought, it tends to branch out into other decisions. Unfortunately there are many, many memes out there ready to cause an infection.
These things can be fatal.
OnACoffeeBreak 2021-08-16 15:28:25 +0000 UTC [ - ]
FinanceAnon 2021-08-16 18:50:56 +0000 UTC [ - ]
rafaelero 2021-08-16 18:29:27 +0000 UTC [ - ]
Now, to be more generous, I will assume that people are actually criticizing how "institutions impose a mainstream view that is difficult to replace even when the facts say it should be". To that I say: fine. But even in this case, there should be enough resources to form a rational opinion on the matter (with probabilistic reasoning). Hell, I have a lot of non-orthodox opinions that are so far outside the Overton Window that I rarely can discuss them. And even in these cases, the internet and Google Scholar/Sci-Hub were sources that helped me explore them.
So, I have no sympathy for this "institutions lied to us, let me believe now whatever I want" bullshit.
throwaway9690 2021-08-16 14:37:15 +0000 UTC [ - ]
I know a guy who hates foo (using a place holder). In fact he's downright foophobic. He is pretty convinced he has a natural unbiased hate of foo and is being rational when he expresses it.
To me as an outsider it is pretty obvious that his hate of foo is the result of cultural conditioning. To him it is perfectly rational to hate foo and to me it is totally irrational, especially since he can't give any concrete reason for it.
So who is right and who is being rational?
carry_bit 2021-08-16 20:33:27 +0000 UTC [ - ]
It could be that, like dietary restrictions to reduce the spread of disease, the foophobia is no longer needed, but keep Chesterton's fence in mind before you say it's unneeded.
dfxm12 2021-08-16 15:17:15 +0000 UTC [ - ]
I think part of the problem is that most people are conditioned into many beliefs from a young age
I think it's irrational not to consider new information when it's presented. So, again, this depends on what foo is. If it is obeying speed limits even when no one else is on the road, and your friend learns the penalties for not obeying road signs when they get their license, they would probably find it irrational not to do the speed limit, even if they hate it. They wouldn't want to risk the fines, license suspension, etc.
However, let's say your friend's brother has stronger beliefs and can afford any fines and legal action. He could think about it and still decide that it's rational to not obey the speed limit. This doesn't make it right; I think right and rational are independent of each other.
throwaway9690 2021-08-16 16:26:00 +0000 UTC [ - ]
For example: Throw salt over your shoulder if you spill some -or- Green skinned people are bad and you should never trust them or allow them in your neighborhood.
Now the former is pretty harmless but not so the latter. In both cases the only explanation is "that's how I was raised" which I don't find compelling or rational.
wizzwizz4 2021-08-17 13:28:27 +0000 UTC [ - ]
pessimizer 2021-08-16 18:58:31 +0000 UTC [ - ]
...is a pretty silly phrase. If you don't have a reason for something, it can't (by definition) be reasonable.
someguy321 2021-08-16 14:46:38 +0000 UTC [ - ]
I like chocolate ice cream more than vanilla ice cream, and you're not gonna convince me otherwise by debating the flavor with me. It entirely could be the case that my preference is from cultural conditioning, but it's not my concern.
If your friend has a mindset of "to each his own" there's no problem.
teddyh 2021-08-16 15:38:05 +0000 UTC [ - ]
In my experience, people usually can give ‘concrete’ reasons for it, but what constitutes ‘concrete’ is a matter of opinion, and I don’t consider everybody’s reasons to be valid. But of course, they do.
wizzwizz4 2021-08-16 14:38:33 +0000 UTC [ - ]
throwaway9690 2021-08-16 16:29:09 +0000 UTC [ - ]
jsight 2021-08-16 14:18:17 +0000 UTC [ - ]
MarioMan 2021-08-16 16:15:25 +0000 UTC [ - ]
1) It’s not reasonable to expect someone to dig so deeply, and there isn’t enough time to do it for every issue.
2) Someone, somewhere, has done an even deeper dive into the same issue. From their perspective, I’m the one that hasn’t done my research. When it’s “enough” is a fuzzy line.
esarbe 2021-08-16 16:03:48 +0000 UTC [ - ]
achenatx 2021-08-16 14:34:32 +0000 UTC [ - ]
Virtually every political disagreement is based on values, though most of the time people dont recognize it.
Values determine priorities and priorities underpin action.
For example some people feel that liberty (e.g. choice) is more important than saving lives when it comes to vaccines.
Some people feel that economic efficiency is less important than reducing suffering.
Some people feel that the life of an unborn child is worth less than the ability to choose whether to have that child.
Even in the article, is a stereo that sounds better actually better than a stereo that looks better? That is a value judgement and there is no right or wrong.
No one is actually wrong, since everything is a value judgement. Many people believe in a universal view of ethics/morality, but there is almost no universal set of ethics/morality if you look across space and time.
However, some values allow a culture to outcompete other cultures, causing the "inferior" values to disappear. New mutations are constantly being created. Most are neutral and have no impact on societal survival. Some are negative and some are positive.
derbOac 2021-08-16 18:18:02 +0000 UTC [ - ]
Take money, for example. You can create a theoretical decision-making dilemma involving certain sums of money and work out the most rational strategy, but in reality the significance of a given sum differs from person to person, depending on their value systems and competing interests. So you get into a scenario where 1 unit of money means something different to different people (the value you put on 1 € is going to be different from the value I put on it; exchange rates are, in a sense, an average over all these valuations), which might throw off the relevance of the theoretical scenario for reality, or change the optimal decision (a toy sketch of this follows below).
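One standard way to formalize "a unit of money means something different to different people" is a concave utility of wealth; here's a sketch using log utility (the functional form and numbers are my assumptions, not anything from the article):

    import math

    # With log utility u(w) = ln(w), the utility gained from an extra
    # 100 shrinks as wealth grows: the same sum "weighs" less when rich.
    def utility_gain(wealth, amount=100):
        return math.log(wealth + amount) - math.log(wealth)

    for wealth in (1_000, 10_000, 100_000):
        print(f"wealth {wealth:>7,}: utility gain from +100 = "
              f"{utility_gain(wealth):.5f}")
    # 1,000 -> 0.09531   10,000 -> 0.00995   100,000 -> 0.00100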
The other issue, besides the one you're relating to -- the subjectivity of the weights assigned to different outcomes, the Achilles' heel of utility theory -- is uncertainty not just about the values in the model, but about whether the model is even correct at all. That is, you can argue that some course of action is more rational, but what happens when there's some nontrivial probability that the whole framework is incorrect? Your decision between A and B, then, shouldn't just be modeled in terms of whatever is in your model, but in terms of all the things you're not accounting for. Maybe there are other options, C and D, which you're not even aware of (though someone else is), but you have to choose B to get to them.
Just yesterday I read this very well-reasoned, elegant, rational explanation by an epidemiologist about why boosters aren't needed. But about 3/4 of the way through I realized it was all based on an assumption that is very suspect, and which throws everything out the window. There are still other things their arguments were missing. So by the end of it I was convinced of the opposite conclusion.
Rationality as a framework is important, but it's limited and often misleading.
_greim_ 2021-08-16 16:23:56 +0000 UTC [ - ]
Disagree; value systems are the inputs to rationality. The only constraint is that you do the introspection in order to know what it is that you value. In that sense buying a stereo based on appearance is the right decision if you seek status among peers or appreciate aesthetics. It's the wrong decision if you want sound quality or durability.
I think the real issue is that people don't do the necessary introspection, and instead just glom onto catch-phrases or follow someone else's lead. That's why so many people hold political views that are contrary to their own interests.
mariodiana 2021-08-16 18:01:18 +0000 UTC [ - ]
esarbe 2021-08-16 15:59:55 +0000 UTC [ - ]
That we are able to think even somewhat rationally is only because we adapted by running extensive modeling simulations. The fundamental function of these simulations is to simulate other beings, primarily humans. And in that, our brainware is lazy as hell, because - to quote evolution - why do perfect when you can do good enough? Saves a ton of energy.
The wetware we employ was never expected to rationally solve differential equations or do proper statistical analysis. At best it was expected to guess the parabola of a thrown stone or spear, or estimate the best way to mate without facing repercussions from the tribe.
So, really, it's not that thinking is hard. It's that we're just not equipped to do it.
mncharity 2021-08-16 15:10:43 +0000 UTC [ - ]
linuxhansl 2021-08-16 17:52:12 +0000 UTC [ - ]
"Confirmation Bias" does not quite capture it. Really just laziness. :)
The other part, being decisive... I can definitely relate to that. I noticed that I often have a hard time making decisions, and realized it's because I tend to look at the world in terms of what I can possibly lose, instead of looking at something new in terms of excitement.
SavantIdiot 2021-08-16 17:56:39 +0000 UTC [ - ]
I would argue we've largely been anesthetized due to successful Gish Galloping. I have great admiration for people who put the effort in to sort out the issues, academics and journalists. But just now everyone eye-rolled when I said those two terms.
johnwheeler 2021-08-16 14:37:00 +0000 UTC [ - ]
raldi 2021-08-16 15:10:30 +0000 UTC [ - ]
johnwheeler 2021-08-16 15:51:33 +0000 UTC [ - ]
raldi 2021-08-16 17:37:37 +0000 UTC [ - ]
johnwheeler 2021-08-16 18:12:13 +0000 UTC [ - ]
Being perfectly rational is impossible.
See: perfect rationality vs bounded rationality
JohnPrine 2021-08-16 21:42:01 +0000 UTC [ - ]
raldi 2021-08-16 21:20:59 +0000 UTC [ - ]
Are you using a different definition?
johnwheeler 2021-08-16 22:19:21 +0000 UTC [ - ]
Being perfectly rational is impossible.
raldi 2021-08-16 22:22:59 +0000 UTC [ - ]
johnwheeler 2021-08-17 00:03:41 +0000 UTC [ - ]
But again, being perfectly rational is impossible.
I don't know how to make it any clearer.
karmakaze 2021-08-16 15:35:49 +0000 UTC [ - ]
Here's a 20-minute audio interview[0] with the author of "The Scout Mindset: Why Some People See Things Clearly and Others Don’t"
It summarizes very well the way I like to gather information in an area, so that I can form an opinion and a direction of movement on a problem.
paulpauper 2021-08-17 03:15:26 +0000 UTC [ - ]
paganel 2021-08-16 20:45:21 +0000 UTC [ - ]
Early on during the pandemic (the first half of February 2020), the people writing on Twitter about covid in China were being labeled as conspiracy nuts, with some of them outright having their accounts suspended by Twitter. Covid/coronavirus was (I think purposefully) kept out of the trending charts on Twitter; the Oscars were seen as much more important.
And these are only two recent examples that came to my mind where the "rational" parts of our society (the experts and the media) failed completely; as such, it's only rational not to trust these pseudo-rational entities anymore. In a way, I think the post-modernists were right: (almost) everything is negotiable or a social construct; there's no true or false, apart from death, I would say.
elihu 2021-08-17 02:44:43 +0000 UTC [ - ]
I think a lot of political disagreements aren't really about logical arguments at all, but rather differences in opinion over relative priority of some ideals that are all important. There isn't always an objective right answer.
natmaka 2021-08-17 12:54:30 +0000 UTC [ - ]
This approach even favors the most informed and trained (the "best" being preferable to the "better"), offering an even more difficult challenge.
Indirect democracy replaces rationality with ill-formed trust.
raman162 2021-08-16 19:31:28 +0000 UTC [ - ]
Being self-aware is something I've only started learning post-college, and something I wish I'd been taught more growing up. As a child I was always informed that I should do x and y because that's what you're supposed to do! Only now, as an adult, am I taking the time to slowly ponder and analyze myself and be more strategic with my future goals.
Side note: really enjoyed the audio version of this long-form article.
heisenzombie 2021-08-16 23:20:06 +0000 UTC [ - ]
I highly recommend reading it. I found it extremely clarifying as a working scientist/engineer and someone who has been persistently nagged by the deification of rationality.
The OP even uses the term “metarational” (though used to mean something different), which made me surprised that “The Eggplant” was not mentioned.
adrhead 2021-08-16 14:12:03 +0000 UTC [ - ]
TuringTest 2021-08-16 14:31:44 +0000 UTC [ - ]
Reason needs axioms (beliefs) to build a rational discourse, and without emotions, it is impossible to choose a limited set of starting axioms to begin making logical inferences from.
I agree with the person above who said being rational is about making post-hoc rationalizations. We know from cognitive science that a majority of explanations are built that way: after observing facts, we intuitively develop a story that is consistent with our expectations about the facts, as well as with our preconceived beliefs. "Being rational" in this context would be limited to reviewing our beliefs when these ad-hoc rationalizations become inconsistent with one another.
eevilspock 2021-08-16 18:30:43 +0000 UTC [ - ]
> Greg...became a director at a hedge fund. His net worth is now several thousand times my own.
swayvil 2021-08-16 14:15:37 +0000 UTC [ - ]
So many distractions. Wind, rain, bees, rampant squirrels.
And what makes that game more interesting than a squirrel anyway?
inkblotuniverse 2021-08-17 00:57:12 +0000 UTC [ - ]
(And you're playing the game against the squirrels anyway.)
mrxd 2021-08-16 21:00:08 +0000 UTC [ - ]
Rationality is a form of communication. Its purpose is to persuade other people and coordinate group activity, e.g. hunters deciding where they should hunt and making arguments about where the prey might be. In that setting, rationality works perfectly well, because humans are quite good at detecting bad reasoning when they see it in others.
Because of the assumptions of psychological individualism, rationality is misunderstood as a type of cognition that guides an individual's actions. To a certain extent, this is a valid approach because incentives within organizations encourage people to act this way. We reward individual accomplishments more than collaboration.
But many cognitive biases disappear when you aren't working under the assumptions of psychological individualism. For example, in the artificial limitations of a lab, you can show that people are unduly influenced by irrelevant factors when making purchase decisions. But in reality, when a salesperson is influencing someone to spend too much on a car, people say things like "Let me talk it over with my wife."
We instinctively seek out an environment of social communication and collaboration where rationality can operate. Much of the advice about how to be individually rational comes down to simulating those conditions within your own mind, like scrutinizing your own thinking as if it was an argument being made by another person. That can work, but the vast majority of people adopt a more straightforward approach, which is to simply use rationality as it was designed to be used.
Rationality is hard, but only for the small number of "smart people" whose individualistic culture prevents them from using it in the optimal way.
UnFleshedOne 2021-08-16 21:22:57 +0000 UTC [ - ]
tomgp 2021-08-16 17:01:06 +0000 UTC [ - ]
I think the hardest bit of this is in some ways the middle, wanting things. How do we know we really want what we want, and how do we know what will make us happy. That’s the bit I struggle with anyway.
andi999 2021-08-16 14:46:59 +0000 UTC [ - ]
_moof 2021-08-16 18:05:34 +0000 UTC [ - ]
flixic 2021-08-16 18:20:56 +0000 UTC [ - ]
_moof 2021-08-16 19:05:46 +0000 UTC [ - ]
UnFleshedOne 2021-08-16 21:42:15 +0000 UTC [ - ]
rafaelero 2021-08-16 19:28:47 +0000 UTC [ - ]
jhgb 2021-08-16 19:23:12 +0000 UTC [ - ]
coldtea 2021-08-16 18:15:47 +0000 UTC [ - ]
Not being rational - and instead going with your gut - has an evolutionary advantage (it cuts through the noise, which, in the past, could be a life-or-death matter).
dnissley 2021-08-16 18:20:09 +0000 UTC [ - ]
morpheos137 2021-08-16 16:31:40 +0000 UTC [ - ]
m3kw9 2021-08-16 17:55:26 +0000 UTC [ - ]
paulpauper 2021-08-17 02:56:30 +0000 UTC [ - ]
okamiueru 2021-08-16 15:52:59 +0000 UTC [ - ]
newbamboo 2021-08-16 16:51:00 +0000 UTC [ - ]
nathias 2021-08-16 14:02:13 +0000 UTC [ - ]
MrBuddyCasino 2021-08-16 14:18:01 +0000 UTC [ - ]
nathias 2021-08-17 05:55:51 +0000 UTC [ - ]
jhgb 2021-08-16 19:09:56 +0000 UTC [ - ]
jgeada 2021-08-16 14:07:01 +0000 UTC [ - ]
TheGigaChad 2021-08-16 14:32:03 +0000 UTC [ - ]
HPsquared 2021-08-16 20:07:24 +0000 UTC [ - ]
esarbe 2021-08-17 18:57:26 +0000 UTC [ - ]
myfavoritedog 2021-08-16 20:47:26 +0000 UTC [ - ]
Back in the day, not syncing up with reality would likely have cost you your place in the gene pool.
joelbondurant 2021-08-16 16:16:02 +0000 UTC [ - ]
neonate 2021-08-16 19:12:45 +0000 UTC [ - ]