Why is it so hard to be rational?

btilly 2021-08-16 14:19:13 +0000 UTC [ - ]

I maintain that it isn't just hard, it is computationally impossible.

We should all know that given a belief about the world, and evidence, Bayes' Theorem describes how to update our beliefs.
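
For a single belief the update itself is cheap. A minimal sketch in Python (the prior and likelihood numbers are made up for illustration):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) from prior P(H) and the two likelihoods."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# A 1% prior, a test with a 90% hit rate and a 5% false-positive rate:
print(bayes_update(0.01, 0.90, 0.05))  # ~0.154: evidence moved us, but not to certainty
```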

But what if we have a network of interrelated beliefs? That's called a Bayesian net, and it turns out that Bayes' Theorem also prescribes a unique answer. However, unfortunately, it turns out that working out that answer is NP-hard.
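
To see where the blow-up comes from, here is exact inference by brute-force enumeration on the textbook three-variable rain/sprinkler/wet-grass net (the conditional probability numbers are the usual illustrative ones, not from any source in this thread). The sums range over every assignment of the unobserved variables, so the cost grows like 2^n in the number of variables, and the hardness results say no algorithm avoids that in general:

```python
# Classic rain/sprinkler/wet-grass net; probabilities are illustrative.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},   # P(S=s | R=True)
               False: {True: 0.40, False: 0.60}}  # P(S=s | R=False)
P_wet_true = {(True, True): 0.99, (True, False): 0.90,   # P(W=True | S, R)
              (False, True): 0.80, (False, False): 0.00}

def joint(r, s, w):
    p_w = P_wet_true[(s, r)] if w else 1 - P_wet_true[(s, r)]
    return P_rain[r] * P_sprinkler[r][s] * p_w

# P(Rain=True | Wet=True): sum the full joint over the hidden variable S.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(num / den)  # ~0.358
```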

OK, you say, we can come up with an approximate answer. Sorry, no, coming up with an approximate answer that is within 0.5 - ε of the true probability, for any ε > 0, is ALSO NP-hard. It is literally true that under the right circumstances a single data point logically should be able to flip our entire world view, and which data point does it is computationally intractable.

Therefore our brains use a bunch of heuristics, with a bunch of known failure modes. You can read all the lesswrong you want. You can read Thinking, Fast and Slow and learn why we fail as we do. But the one thing that we cannot do, no matter how much work or effort we put into it, is have the sheer brainpower required to actually BE rational.

The effort of doing better is still worthwhile. But the goal itself is unachievable.

strulovich 2021-08-16 17:38:19 +0000 UTC [ - ]

NP-hardness gets abused to justify things it cannot.

Even if an NP-hard problem cannot be approximated, that does not mean the average input cannot be solved efficiently.

Examples:

- An NP-hard problem is not sufficient for building crypto.

- Type inference for many programming languages is EXPTIME-complete, yet those languages prosper and compile just fine.

Beware the idea of taking a mathematical concept and proof and generalizing from it to the world outside the model.

nostrademons 2021-08-16 18:37:10 +0000 UTC [ - ]

And human beings make approximate solutions for the average input all the time. That's what gut feelings, instincts, heuristics, and motivated reasoning are, along with all the other shortcuts we take to function in daily life.

The article is asking why it's so hard to be rational, though, i.e. to follow a logically valid set of inferences forward to an unambiguous conclusion. Assuming one of your premises is that correct rationality implies reasoning statistically about a network of interrelated beliefs, the intractability of updating a Bayesian net is relevant to that.

munk-a 2021-08-16 23:06:56 +0000 UTC [ - ]

Following a fully logically valid set of inferences seems extremely inefficient - we need to make decisions constantly, and relying on shorthand for most of those seems perfectly rational - it's rational to trust irrational gut feelings for most unimportant decisions because trying to fully prove all actions is a fool's errand.

I think the article is more focused on those big decisions where rationality is certainly warranted and so often ignored. People who are highly skilled at life have developed their gut feelings and instincts to be able to determine which decisions they really need to sit down and think hard about and which ones they can mostly ignore.

When most people buy their first house, the decision is so immensely large and represents such a high value (more than half a million at least for a lot of city folk) that there is a desire to detach from it to free yourself from responsibility - since you cannot sanely account for all factors, it is "safer" to protect your ego by delegating the decision entirely to your id. Doing so allows you to, post hoc, entirely free yourself from any responsibility for your poor decision.

This, I think, is the main factor we need to fight against to make rational decisions - you must accept failure and be willing to be wrong without shame. Do your best to evaluate your options on important decisions and realize that there are a number of decisions you obviously can't fully rationalize out - you can only make your best attempt. But realize that making your best attempt and being wrong - as much as it might hurt your ego - is a better alternative than "letting it ride" and being able to stand blameless on the far end.

The fight for rationality is mostly a fight against emotional fragility and intellectual laziness.

btilly 2021-08-16 21:53:44 +0000 UTC [ - ]

You are correct. For example, the worst and average cases for the Simplex Method are dramatically different.

However, in practice, complex Bayesian nets do wind up being computationally intractable. Therefore attempts to build real world machine learning systems consistently find themselves going to computationally tractable heuristic methods with rather obvious failure modes.

strulovich 2021-08-16 20:19:24 +0000 UTC [ - ]

Also, adding on to my previous comment, for an interesting take on the limitations of NP-hardness's applicability to real-life problems, see Parameterized Complexity:

https://en.wikipedia.org/wiki/Parameterized_complexity

User23 2021-08-16 20:20:04 +0000 UTC [ - ]

Similarly even mediocre programmers do a pretty good job writing programs that halt.

DiggyJohnson 2021-08-16 17:58:20 +0000 UTC [ - ]

First of all I do see that you called it an example; I don't think you're straw-manning or anything:

I think using chaos theory / Bayesian concepts is a significantly better metaphor for "life as we experience it" than it is for the examples you gave.

morpheos137 2021-08-16 23:05:16 +0000 UTC [ - ]

One thing I notice these days online is that people overgeneralise about everything (irony intended).

amelius 2021-08-16 20:44:33 +0000 UTC [ - ]

Ok, so what is the class of problems that is hard for any input?

MichaelZuo 2021-08-16 22:57:48 +0000 UTC [ - ]

Reducing entropy.

amelius 2021-08-17 07:58:28 +0000 UTC [ - ]

Only in closed systems.

yann2 2021-08-16 14:27:31 +0000 UTC [ - ]

Correct. Rationality is Bounded. That fact won a Nobel Prize - https://en.wikipedia.org/wiki/Herbert_A._Simon

The recommendation of the theory is: if you can't be rational about a specific problem, pick another problem, preferably a simpler one.

Unfortunately lots of chimps in the troupe are incapable of doing that and therefore we shall always have drama.

btilly 2021-08-16 15:44:30 +0000 UTC [ - ]

That Nobel was won in 1978, and is based on the fact that in practice we can't be rational.

The NP-hardness demonstration that, in theory, updating a Bayesian network is a computationally infeasible problem was due to G. F. Cooper in 1990. The stronger result that approximating the update is also computationally infeasible was due to Dagum, P. & Luby, M., 1993.

So Simon's work relates to what I said, but isn't based on it.

WastingMyTime89 2021-08-16 14:54:26 +0000 UTC [ - ]

> Correct. Rationality is Bounded. That fact won a Nobel Prize.

It's a model, not a fact. As a model, it can't really be correct, only more or less accurate.

zepto 2021-08-16 17:32:03 +0000 UTC [ - ]

> It's a model, not a fact. As a model, it can't really be correct, only more or less accurate.

This is not true. Models of an external world may be only more or less accurate, but models of other models may be true or false. Mathematical proofs rely on this. Rationality itself is a model so models of rationality may be true or false.

WastingMyTime89 2021-08-16 18:03:38 +0000 UTC [ - ]

Unless you have a very peculiar and idiosyncratic definition of the word model, I am fairly confident that what you are saying doesn't make much sense.

In economics, in a way that is not dissimilar to physics, model has a precise meaning. To quote Wikipedia, it is a simplified version of reality that allows us to observe, understand, and make predictions about economic behavior. You can't have a model of a model. That just doesn't really make sense.

> Mathematical proofs rely on this

I'm confused by what you want to say here. Mathematical proofs don't use models.

Every proved statement in mathematics can be built from axioms, which are presupposed true, by applying logical rules which are themselves part of the axiomatic system. Saying that something is mathematically proved basically means that given this set of rules we can build up to that point.

> Rationality itself is a model

Once again I'm fairly lost as to what you are trying to say. I'm fairly certain that for most accepted meanings of the word model and the word rationality, rationality is in fact not a model, in the same way that a dog is not a theory.

zepto 2021-08-16 18:15:40 +0000 UTC [ - ]

> Once again I'm fairly lost as to what you are trying to say.

You may want to look up the difference between formal models and informal models.

Since both rationality and the paper showing that it is bounded are based on formal models, it is reasonable to assume this is what we are talking about.

WastingMyTime89 2021-08-16 18:57:48 +0000 UTC [ - ]

Sorry, I don't understand your argument. Are you actually talking about rational choice theory?

> Since both rationality and the paper showing that it is bounded are based on formal models

There is no paper showing that "rationality" is bounded. Models used to consider actors making purely rational choices, in the sense that they are always optimizing their utility functions using all available information. Bounded rationality is a different way of modeling actors' choice functions. It's just a different model. There is no model of models.

Still, I don't see what any of that has to do with the difference between formal and informal models. "Informal model" is a term I have never heard used outside of policy discussion. It's basically dress-up for "because the expert said so".

zepto 2021-08-16 19:49:52 +0000 UTC [ - ]

> Sorry, I don't understand your argument.

Understood.

It’s worth noting that the definition of a model that you said you were using doesn’t match with typical definitions of a formal model.

You aren’t talking about formal models, and I accept that you are only thinking in terms of economic models.

Perhaps that explains where the difference in understanding lies.

md224 2021-08-16 19:44:04 +0000 UTC [ - ]

> Mathematical proofs don't use models.

Maybe the person you replied to was taking a model theoretic perspective?

https://plato.stanford.edu/entries/modeltheory-fo/

loopz 2021-08-16 16:52:24 +0000 UTC [ - ]

It's a fact that many people believe themselves rational and use models expecting rational actors. Proof is lacking that it actually works that way. The opposite is often most probable, since you rarely have perfect knowledge in practice. Exceptions can be games like tic-tac-toe and chess.

WastingMyTime89 2021-08-16 17:49:59 +0000 UTC [ - ]

> Proof is lacking that it actually works that way

I mean, everyone knows it doesn't really work that way.

The actual question is: does viewing the average actor as trying to perfectly optimise their utility function using all the information available constitute a good estimation of how actors work in aggregate and does it yield accurate and interesting predictions?

The real insight of Simon in Models of Man is not that actors are not in fact perfectly rational. It's that you can actually model the limits of actors while keeping a fairly rigorous and manageable formalization.

techbio 2021-08-16 23:02:29 +0000 UTC [ - ]

I'll have to read Simon, but I'll be looking for some stable types of responses to uncertainty, not rational but adaptive: reversion to past/training, confusion, overreaction, etc. - i.e. something prevalent enough that it can be reasoned about.

loopz 2021-08-16 22:09:56 +0000 UTC [ - ]

Sure, but such models are hackable / breakable, just by breaking the rules or reinventing the game.

abc_lisper 2021-08-16 16:05:55 +0000 UTC [ - ]

any model better than naive intuition is better imo

carrolldunham 2021-08-17 01:33:23 +0000 UTC [ - ]

* A Nobel Memorial Prize in Economic Sciences, officially the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, commonly but falsely referred to as the Nobel Prize in Economics

vendiddy 2021-08-16 19:33:46 +0000 UTC [ - ]

Could someone explain in layman's terms what "bounded" rationality means?

glial 2021-08-16 19:37:27 +0000 UTC [ - ]

Rationality can be re-phrased as coming up with the optimal solution to a problem. If you only have finite compute/memory/time, your solution is 'bounded' by those constraints - i.e. your job is now to find the best solution possible given the constraints.
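
A toy sketch of the difference, assuming nothing beyond the comment above: an unbounded reasoner scores every option, while a bounded one stops when the budget runs out and keeps the best answer found so far (the option set and scoring function here are invented for illustration):

```python
import random
import time

# Bounded rationality as constrained search: examine options until the time
# budget runs out, then return the best one seen.
def bounded_choice(options, score, budget_seconds):
    deadline = time.monotonic() + budget_seconds
    best, best_score = None, float("-inf")
    for opt in options:
        if time.monotonic() > deadline:
            break  # out of compute/time: settle for the best found so far
        s = score(opt)
        if s > best_score:
            best, best_score = opt, s
    return best

options = list(range(1_000_000))
random.shuffle(options)
# True optimum is 337; a tight budget may return only something near it.
print(bounded_choice(options, lambda x: -abs(x - 337), budget_seconds=0.01))
```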

ggm 2021-08-16 21:44:53 +0000 UTC [ - ]

I've made it a life rule to try to avoid conversations with people who use "correct" to respond to statements, unless they are in the role of a teacher.

Tell me, are you aware of the myriad of alternate words to express your agreement with somebody else aside from correct? You aren't here as judge of the right or wrong. Semantically, philosophically, you're expressing agreement not correctness.

Or .. am I incorrect...?

nonameiguess 2021-08-16 17:29:56 +0000 UTC [ - ]

We should note the limitations of Bayes as well. As I responded to another comment in this thread, an update procedure based on seeing new evidence is necessarily bounded, in how quickly it can get you to beliefs more likely to be true, by your ability to actually gather that evidence, or possibly even to generate it if it doesn't already exist. We don't have any perfect algorithms for doing that, and it is of course not a purely computational problem anyway. Take general relativity. It was proposed in 1915 and only confirmed in the very strong gravitational field limit in 2016, because that was our first opportunity to observe a black hole merger, which is not something we have the ability to recreate in a lab.

Even beyond the hard process bottleneck on creating or lucking upon events that produce the evidence we need, however, there is also the limitation that Bayes only gives you a probability. It doesn't give you a decision theory or even a thresholding function. For those, you need a whole lot of other things like utility functions, discount rates, receiver operating characteristics and an understanding of asymmetric costs of false positives versus false negatives, that are often different for each decision domain.
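
A minimal sketch of that last step, with invented cost numbers: once Bayes hands you a probability, a decision still needs a loss comparison, and asymmetric costs move the action threshold:

```python
# Decide whether to act on a probabilistic belief, given asymmetric costs.
# The rule: act iff the expected cost of acting is lower than the expected
# cost of doing nothing.
def should_act(p_event, cost_false_positive, cost_false_negative):
    expected_cost_act = (1 - p_event) * cost_false_positive   # acted, nothing happened
    expected_cost_wait = p_event * cost_false_negative        # didn't act, it happened
    return expected_cost_act < expected_cost_wait

# With 10x asymmetric costs, a mere 20% probability already justifies acting:
print(should_act(0.2, cost_false_positive=1.0, cost_false_negative=10.0))  # True
```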

And, of course, to get a utility function meaningful for humans, you need values. There is no algorithm that can give you values. They're just there as a basic primitive input to all other decision making procedures, yet they often conflict in ways that cannot be reconciled even within a single person, let alone across a society of many people.

lalaithion 2021-08-16 14:33:03 +0000 UTC [ - ]

That's why it's called "less wrong". The goal isn't to be perfect, the goal is to do better. To be less wrong.

AndrewKemendo 2021-08-16 20:54:03 +0000 UTC [ - ]

I think this misses the point. Even being "less" wrong requires an amount of work that even the best/smartest etc... cannot consistently apply.

I do believe this is zero-sum, in that improving on one set of decisions means not applying the same rigor to others.

This is often seen in the form of very smart people also believing conspiracy theories or throwing their hands up around other massive issues. As an example, the "Rationalist crowd" has de-emphasized work on climate change mitigation in favor of more abstract work on AI safety.

ret2plt 2021-08-16 22:07:53 +0000 UTC [ - ]

> This is often seen in the form of very smart people also believing conspiracy theories or throwing their hands up around other massive issues. As an example, the "Rationalist crowd" has de-emphasized work on climate change mitigation in favor of more abstract work on AI safety.

To be clear, the argument (in rationalist circles) is not that climate change is no big deal, it's that there's already a ton of people worrying about it, so it is better to allocate some extra resources to underfunded problems.

hanche 2021-08-16 14:44:02 +0000 UTC [ - ]

I wouldn’t be surprised to learn that even being less wrong is NP-hard.

whatshisface 2021-08-16 17:13:56 +0000 UTC [ - ]

The saving grace of being able to survive in the universe is that it's possible to climb up NP hard problems far enough to get real results with hard work.

mensetmanusman 2021-08-16 18:04:35 +0000 UTC [ - ]

It also means that you might not know if the hard work is climbing up or down (towards or away) from the solution.

whatshisface 2021-08-16 19:30:28 +0000 UTC [ - ]

No, you know if you're getting better or worse in an NP problem because checking answers is in P.

mensetmanusman 2021-08-16 19:39:49 +0000 UTC [ - ]

If you find a solution, yes

whatshisface 2021-08-16 19:44:16 +0000 UTC [ - ]

Oh, you're thinking of NP-hard yes-no problems. Many if not most NP-hard problems of practical importance, including the traveling salesman, involve integer rather than boolean scores.

karpierz 2021-08-16 20:23:24 +0000 UTC [ - ]

NP-hardness by definition is only yes-no decision problems. The NP-hard formulation of Traveling Salesman is "given the weighted graph G and an integer X, is there a Hamiltonian cycle in G with total weight less than X?"

btilly 2021-08-17 00:14:35 +0000 UTC [ - ]

You are using a non-standard definition of NP-hard. Here are standard definitions.

A problem is in P if there is a polynomial time algorithm to solve it.

A problem is in NP if there is a polynomial time algorithm to check a purported solution.

A problem is NP-hard if, were there a polynomial time algorithm to solve it, every problem in NP could be solved in polynomial time.

A problem is NP-complete if it is both in NP and NP-hard.

For Traveling Salesman, the NP-complete problem is, "...find a solution with total weight less than X." (Linear verification, check it is Hamiltonian, check its weight.) An NP-hard version is, "...find whether there is a solution with total weight less than X." (To verify, you have to search. Oops.) But ANOTHER NP-hard version is, "...find the solution with least total weight". (To verify, you have to search. Oops.)
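
The asymmetry is easy to see in code. Verifying a purported tour against a weight bound is one linear pass (the polynomial-time check that puts the problem in NP); nothing similar exists for certifying that no such tour exists. The small graph below is invented for illustration:

```python
# Checking a purported TSP tour is a single pass over its edges: polynomial.
# Finding (or ruling out) a cheap tour is the hard part.
def verify_tour(weights, tour, bound):
    """weights: dict (u, v) -> edge weight; tour: every vertex exactly once."""
    if len(tour) != len(set(tour)):
        return False                      # a vertex is repeated: not a tour
    total = 0
    for i in range(len(tour)):
        edge = (tour[i], tour[(i + 1) % len(tour)])  # wrap to close the cycle
        if edge not in weights:
            return False                  # tour uses an edge not in the graph
        total += weights[edge]
    return total < bound                  # the "weight less than X" certificate

w = {(0, 1): 2, (1, 2): 3, (2, 0): 4,
     (1, 0): 2, (2, 1): 3, (0, 2): 4}    # symmetric 3-city example
print(verify_tour(w, [0, 1, 2], bound=10))  # True, checked in O(n) time
```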

mensetmanusman 2021-08-16 20:15:31 +0000 UTC [ - ]

Thanks for the additional clarification :)

hanche 2021-08-16 17:53:10 +0000 UTC [ - ]

Indeed. And thanks! I needed a little morale booster now, for reasons unrelated to this topic.

kirse 2021-08-16 14:46:16 +0000 UTC [ - ]

Which is ironic, because the pursuit of knowledge only continues to increase the landscape of unknowns towards infinity - the branches of the tree of undiscovered and unknown knowledge continue to grow exponentially. It's as if today we thought the choices were A or B, yet tomorrow we discover there was a C, and the next day a D, and so forth. If anything we are only discovering that we are "more wrong" every day.

tines 2021-08-16 15:35:54 +0000 UTC [ - ]

Actually I think this is the same fallacy as one of Zeno's paradoxes, and has the same resolution. We are discovering more wrong, as you say, but the "infinity" of wrongs is in the direction of the infinitely small (or "infinitely detailed"), not the infinitely large. In other words, every time we fill in a gap in our knowledge, we create two more gaps, so to speak, but nevertheless we know more than we did before.

wombatmobile 2021-08-16 14:40:15 +0000 UTC [ - ]

> the goal is to do better

Why is that "the" goal?

Who sets "the" goal?

voxic11 2021-08-16 14:41:19 +0000 UTC [ - ]

It's the goal of the LessWrong/Rationalist community.

analog31 2021-08-16 22:14:39 +0000 UTC [ - ]

Less Wrong seems to be a manifestation of a thing that comes in cycles: Something triggers the rise of a "rationalist" movement, including possibly a new evangelist or a new medium. Eventually, rationalism and rational people end up at a standoff. Then the whole thing repeats itself after a period of time.

I'm probably rational enough but also can't make sense of much of the rationalist literature, so I simply follow my own compass and hope for the best. I'm skeptical of Bayes Theater.

JohnPrine 2021-08-16 19:56:08 +0000 UTC [ - ]

The goal is to make decisions that are "better" as defined by your own utility function given limited information. This is also called "winning"

Nav_Panel 2021-08-16 15:03:22 +0000 UTC [ - ]

Yudkowsky and other prominent contributors set "the" goal as a sort of revealed wisdom regarding their speculations about what a super powerful AI will do.

Pragmatically, the goals themselves appeal to individuals who want to maintain conventional (liberal) morality yet also position themselves as superior, typically as a form of compensation.

nitrogen 2021-08-16 17:55:41 +0000 UTC [ - ]

position themselves as superior

This is why we can't have nice things. Any time someone tries to find more effective ways of making good decisions or accomplishing their goals, someone has to bring out the most tortured cynical interpretation to tear them down.

Nav_Panel 2021-08-16 18:02:56 +0000 UTC [ - ]

Have you hung out much with rationalists?

skosch 2021-08-17 01:33:18 +0000 UTC [ - ]

Have you? Because the rationalists I know are genuinely well-adjusted people.

It's cheap and easy to make fun of the lesswrong community as a cringy cult of AI-obsessed neckbeards. And to be fair, the writing style on LW tends to support that impression. But I've found that most of the actual people within the rationality/AI safety/effective altruism communities actually don't fit that stereotype at all.

Nav_Panel 2021-08-17 03:54:14 +0000 UTC [ - ]

Yes, I was in the community for several years and then "left" but still spend a lot of time on its edges. I'm not trying to call them "neckbeards" (although I knew many of those too), only saying that there is an air of superiority without any actually radical content.

I consider EA separate but related, and it definitely qualifies as staking out a superior position within the constraints of liberal morality.

lmm 2021-08-17 02:36:46 +0000 UTC [ - ]

I went to a couple of meetups and found one cool dude surrounded by a group that exactly fit the stereotype, FWIW.

FeepingCreature 2021-08-17 05:08:15 +0000 UTC [ - ]

I went to a couple of meetups and never met anyone who would fit that description.

But then, as a lesswrongy person, maybe it's me, maybe "feeling superior" or whatever is just normal to me :shrug:

(But my actual theory is, it's probably just geographical.)

JohnPrine 2021-08-16 19:54:56 +0000 UTC [ - ]

I consider myself a rationalist (or at least an aspiring rationalist), and people like hanging out with me. Learning about this stuff has changed my life and relationships for the better.

klipt 2021-08-17 03:21:28 +0000 UTC [ - ]

Interesting, can you give a specific example of using rationalist techniques to solve a relationship problem?

JohnPrine 2021-08-17 15:36:26 +0000 UTC [ - ]

Sure. In the past, I've been a person with a strong desire to be right in every situation. If I thought I was right I would be very willing to stubbornly argue my point, without any consideration of the counterpoints that were presented to me. Rationality is in part about understanding that every piece of knowledge I have is probabilistic - that there's a chance that each belief I have is wrong, and that I need to be able to let go of a belief if I encounter enough counter-evidence. I'm sure you've met people who can't be swayed from a belief no matter what you tell them, and you know how difficult it can be to get along with those people. Internalizing the methods of rationality has helped me transform from one of those sorts of people into a more reasonable one whom people have a more pleasant time interacting with.

As a specific example, I made a comment to my roommate last winter about how I thought his girlfriend's hyper caution around COVID was limiting my personal freedom. I realized that I had been crass and apologized to him, but he told his girlfriend anyway and it caused a great deal of tension between the three of us. My own girlfriend told me I should apologize to her. I believed I had nothing to apologize for, since I hadn't said anything to her directly, I didn't believe my roommate should have repeated the comment to her in the first place, and I had apologized to him for it already. My girlfriend gave me reasons why an apology was in order, though, and I assigned a lot of weight to her reasoning since I know her to be a more sensitive and emotionally intelligent person than myself. I was able to let go of the belief in my own righteousness and write a heartfelt apology, which did wonders to mend the relationship.

A previous version of me would have clung to the belief that I was in the right, and either not apologize or write a half-assed apology that would do nothing to fix the situation. The current version of me which strives to be rational was aware of my own biases, recognized that my internal map may not match the territory, and was willing to update based on the evidence from my girlfriend's greater authority on emotional matters.

snarf21 2021-08-16 15:25:27 +0000 UTC [ - ]

Agreed. The world is too complicated. There is too much noise. It might be possible to become fairly knowledgeable about a single issue but it would need to amount to an obsession. This is why we are so keen to belong. We've always tried to apply the "wisdom of the crowds". It is why people latch onto one viewpoint or the other, e.g. Red/Blue, FOX/CNN, etc., it takes all the work out of it. Once you find a source that you agree with on even ONE issue, just blindly trust/agree with them for everything. We'd rather spend our time streaming shows and living life than investing into deep knowledge of any subject.

Take a non-political example: How safe are whole tomatoes to eat? What did the grocery store spray on them? Is it safe? Will it wash off? What about the warehouse where they were stored for months - what did they put on them to keep them from spoiling? What about the farmer - what did they spray on them to protect against pests? What is in the water? Is it safe? Now we're ready to eat: Does anyone in my family have any kind of intolerance to raw tomatoes? And this is a pretty simple toy example.... In general, we've collectively decided to trust in the good in people. We hope that if something is bad/a lie/harmful, then someone in the know will raise the alarm for the group.

heresie-dabord 2021-08-16 19:54:19 +0000 UTC [ - ]

> I maintain that it isn't just hard, it is computationally impossible. [...] The effort of doing better is still worthwhile. But the goal itself is unachievable.

The goal of rational thinking is not some conceit of perfection [1] but debugging the runtime for a better result. Humans are in fact very good at communication and at debugging language errors. They have evolved a rational capacity. It can evidently be developed but it needs to be exercised.

This is where the hypothesis of an educational system often enters the discussion.

[1] Galef and others call the "Star Trek" Spock character a Vulcan Strawman or Straw Vulcan. https://en.wikipedia.org/wiki/Julia_Galef

kazinator 2021-08-16 18:09:26 +0000 UTC [ - ]

> I maintain that it isn't just hard, it is computationally impossible.

I further maintain that it's definitionally impossible. Before we find it computationally impossible, we will find that we can't write a complete, detailed requirements specification defining what rational is.

(Of course, we recognize egregious irrationality when we see it; that's not what I mean; you can't just define rationality as the opposite of that.)

People can behave rationally (or not) with respect to some stated values that they have. But those can be arbitrary. So the requirement specification for rationality has to refer to a "configuration space", so to speak, where we program these values. This means that the output is dependent on it; we can't write some absolute test case for rationality that doesn't include this.

Problem is, people with different values look at each other's values and point to them and say, "those values are irrational; those people should adopt my values instead".

UnFleshedOne 2021-08-16 19:46:50 +0000 UTC [ - ]

You can't say values are irrational -- they just are. If you really like paperclips, no amount of logic can tell you otherwise. What logic (and other people) can tell you is that your values conflict with each other and you have to balance one against another. Turning the whole universe into paperclips is counterproductive if you also value pins. If you literally don't have the value the other person is basing their arguments on, then they can't convince you to have it.

Luckily we get our values from a bunch of heuristics developed through millions of years of biological and social evolution, so we mostly have the same ones, just with different relative weights.

Won't be true if we ever meet (or make) some other sentient critters.

kazinator 2021-08-16 20:55:24 +0000 UTC [ - ]

> You can't say values are irrational

People basically do say that, though.

(Values can be contradictory/inconsistent. E.g. you say you value self-preservation, but you also enjoy whacking your head with a hammer. That would be a kind of irrational. That's not what I'm referring to though.)

UnFleshedOne 2021-08-16 21:53:43 +0000 UTC [ - ]

I think they make a category mistake when they do, then. Values tell you where you want to be; rationality is the most accurate process to get where you want to go, and maybe to check whether you want to be there before actually getting there and checking it out personally. (I think we basically agree btw, it is all those other people who are wrong :))

polote 2021-08-16 14:55:11 +0000 UTC [ - ]

Well, it depends on the topic. Choosing rationally which of two tomatoes is greener is easy. You are limited by your ability to distinguish colors but you can still decide rationally.

Not all questions have answers; if you want to be rational when you are asked to answer those questions, you can just say "I don't know".

At the beginning of the pandemic, politicians were saying masks don't work. You could just say, well, if we transmit covid by air, then putting something in front of my mouth is going to decrease the spread. That's what is being rational. Of course that's not always going to be the right answer, but you have still thought rationally.

I'm not really sure what you are trying to prove. Of course being rational is possible. All people are rational for most of their decisions.

dahfizz 2021-08-16 16:46:22 +0000 UTC [ - ]

> You could just say, well, if we transmit covid by air, then putting something in front of my mouth is going to decrease the spread. That's what is being rational.

If I squint at a statement like this, I guess it could be called rational, but it is certainly not rigorous or convincing. You brush over too much and are making lots of assumptions.

Are these statements rational?

The sun is warm, so if I climb a ladder I will be closer to the sun and therefore warmer.

Masks impede airflow, so if I wear a mask I will suffocate.

Bleach kills germs, so drinking bleach will make me healthier.

It is very easy to make an incorrect idea seem rational. You should wear masks because rigorous science tells us that it is effective. That is the only valid justification. "Common sense" is used to justify a lot of junk science.

nonameiguess 2021-08-16 17:15:55 +0000 UTC [ - ]

I think yes, you can call those statements rational, but that just gets at an additional level of difficulty here. Bayes only gets you as far as holding the belief with maximum probability of being true, given the evidence seen so far. To actually get maximally probable beliefs without that qualification, you need to actually gather more evidence. In some cases, that may just mean accumulating knowledge that other people already generated, but in some cases, you may need to generate knowledge from scratch. The ability to do that may be severely bounded by resource and time constraints.

One person can't personally do all science, so now you need division of labor and assignment of workers to efforts, so you need optimal matching and scheduling algorithms. These are theoretically not computationally intractable, but the algorithms rely upon pre-existing accurate ability and preference rankings, so now you need to go back to information gathering, and suddenly you have a bootstrapping problem: feeding your algorithm the data it needs to tell you how to gather data in the first place requires you to gather data first.

clairity 2021-08-16 18:11:21 +0000 UTC [ - ]

> "You should wear masks because rigorous science tells us that it is effective."

you've really just glossed over the hard part, which is when and where masks work, which is in turn the difficult political problem to solve.

simplifying, covid spreads mouth-to-mouth with a brief stint in the air, not mouth-to-air-then-(much)-later-to-mouth, which is the mediopolitical narrative that's being pushed vehemently but irrationally, and upon which masking policies are erroneously based.

what's always ignored in these narratives is that the virus falls apart quickly all by itself outside the cozy confines of the body, not to mention floats away to oblivion quickly when outside.

if we're really concerned about masks working, we'd have to force people to wear them among friends and family in private spaces like homes, not outside and in grocery stores where they have basically no effect.

"masks work" is a grossly overreaching blanket political statement, not a summary of "the science". scientific evidence suggests masks reduce droplets (and aerosols, with better masks) being ejected into the air. there's less clear evidence that it reduces airborne viral particles being inhaled through the mask. but there's almost no evidence that the way we've deployed masks is doing much other than signalling our fears and concerns.

i'd be open to supporting mask policies that are based on actual evidence (e.g., wear them when socializing at home), but not the mediopolitically fearmongering policies we have.

not2b 2021-08-16 15:05:57 +0000 UTC [ - ]

A rationalist would recognize that we update our beliefs as new evidence becomes available, and would not attack people for having erroneous beliefs before that evidence was available. The "masks don't work" advice was active for a short time in March 2020 and almost immediately dumped. They thought at the time that only N95 masks would be good enough; these masks were in short supply and health care workers needed them; this was the "politics" of it. But by mid-March 2020 people were already being encouraged to make cloth masks and shown how to do it. That is when my daughter got out the sewing machine and made a bunch, based on instructions from nurses.

polote 2021-08-16 15:10:45 +0000 UTC [ - ]

There was no new evidence. I'm not sure we have even learned anything regarding the efficacy of masks through the pandemic. Everything we know was already known prior to it.

btilly 2021-08-16 16:01:57 +0000 UTC [ - ]

First, there is active research and we demonstrably have learned something. See https://aricjournal.biomedcentral.com/articles/10.1186/s1375..., https://www.nature.com/articles/s41598-020-72798-7, and https://www.pnas.org/content/118/4/e2014564118 for several examples.

Second, your simplistic analysis demonstrated that you, personally, are ignorant of the real tradeoffs involved in whether masks work.

Wearing a mask reduces how much virus leaves your mouth. But when you breathe out, most of the virus is in larger droplets that quickly hit the ground. However, breathing out through a mask creates perfect conditions to create an aerosol, which can allow more of the virus to stay in the air for an indefinite period of time. So there is a tradeoff, and there were reasons to question whether cloth masks were better than simple social distancing.

It turns out that what matters most is not whether you get exposed, but rather the initial viral load that you get. You see, the virus will grow exponentially until the relatively fixed time it takes the immune system to figure things out and start shutting it down. If the virus gets a solid head start, the odds of serious illness go up. Therefore the lingering aerosol from a mask is (except where it accumulates in poorly ventilated indoor spaces) of less concern than an unmasked person talking directly to you.
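
A toy version of that argument (all numbers invented): if the virus grows exponentially until a roughly fixed immune-response time, the peak load scales linearly with the initial dose, which is why reducing the inoculum matters even when a mask doesn't block everything:

```python
# Exponential growth until a fixed immune-response time. Numbers are invented.
def peak_load(initial_load, doubling_hours=6.0, response_hours=72.0):
    doublings = response_hours / doubling_hours
    return initial_load * 2 ** doublings

print(peak_load(10))     # small initial dose
print(peak_load(1000))   # 100x the dose -> 100x the peak when immunity kicks in
```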

So the result is that masks work. Even crappy cloth masks work.

s1artibartfast 2021-08-16 17:58:54 +0000 UTC [ - ]

Most if not all of your linked papers are summaries of earlier experiments, going back to the 1940s in some cases.

Very little new knowledge was added.

>So the result is that masks work. Even crappy cloth masks work.

I would agree if you change "results" to "expert conjecture" and "work" to "probably do something".

But again, this was always known.

varjag 2021-08-16 16:43:04 +0000 UTC [ - ]

…and as they mentioned, we knew that masks work already.

btilly 2021-08-16 17:29:42 +0000 UTC [ - ]

The quality of our evidence is easy to misjudge in retrospect.

The last opportunity to study the effectiveness of mandating low-quality masks in preventing community spread during a pandemic was around a century earlier. (Literally, the Spanish Flu epidemic.) In the meantime a lot of new and untried modeling tools had come into use, as well as updated disease models, and lots of reasons to question old data.

See https://www.albertahealthservices.ca/assets/info/ppih/if-ppi... for an idea of what was reasonable for educated specialists in public health to believe. Note phrases like, "There was agreement that although the evidence base is poor, the use of masks in the community is likely to be useful in reducing transmission from community based infected persons, particularly those with symptomatic illness."

So it is accurate to say that we had reason to believe that masks work. But it is easy to overstate how much we "knew" it to be true at the time.

varjag 2021-08-17 06:49:28 +0000 UTC [ - ]

There wasn't poor evidence, Jesus Christ. Masks were one of the most researched forms of PPE out there, with many million man-years of practice. And the FUD about masks was not restricted to home-made stuff but any form of respiration filters.

The only reason masks were in doubt was the incompetent advice from WHO bureaucrats, and the bureaucrats on national advisory levels mindlessly droning it. This was not evidence based.

btilly 2021-08-17 17:01:14 +0000 UTC [ - ]

There was a ton of high quality research on high quality N95 masks. Nobody doubted that. Also, we had no supply of that.

To get everyone to mask up, we needed to put the general public in low quality masks. And there was a whole heck of a lot less research on low quality masks being used by the general public.

But, regardless, I have no percentage in convincing you of what is true. Have a nice day.

kbelder 2021-08-16 15:38:48 +0000 UTC [ - ]

'We' as the scientific community may not have, but 'we' the unwashed public learned much.

polote 2021-08-16 15:55:34 +0000 UTC [ - ]

Being rational doesn't prevent you from being wrong. If your assumption is that the government tells the truth and you conclude that masks don't work, then you have reasoned rationally.

But if you don't trust the government, yet in this specific case you followed them, then that is not rational.

mcguire 2021-08-16 18:41:34 +0000 UTC [ - ]

Aside: As a general rule of thumb, conspiracy theories are not rational.

UnFleshedOne 2021-08-16 20:02:15 +0000 UTC [ - ]

2 years ago I would have agreed with you 100%. Lately, though, conspiracy theories have become conspiracy facts with alarming frequency. And the speed with which the media reaches the "We've always been at war with Eastasia" zeitgeist on each shift does not inspire confidence.

tunesmith 2021-08-16 21:56:50 +0000 UTC [ - ]

What a lot of people have forgotten is that in March 2020, they thought COVID spread by droplets, not aerosols - remember all the emphasis on washing hands and hand sanitizer? - and as such, masks would be overkill for most. Combine that with the worry that people would hoard masks when PPE was in short supply for people who would be interacting directly with patients, and the initial discouragement of masks seems more understandable.

As the science changed to suggest that COVID was aerosol, scientific opinions on masks got updated as well.

It also didn't help that some hyper-rational people got hung up on ranting about how masks weren't perfect, and how the virus could still get through if you wore a mask. It was as if they imagined they heard someone say "masks are 100% effective" and really, really wanted to register their counterpoints. So they said "they don't work!" when they meant "they're not 100% effective!", and other people heard "they don't work!" and took it to mean "they're 0% effective!" That's one of those patterns you start to see all over the place once you know to look for it - people confusing "there exists" and "for all".

mcguire 2021-08-16 18:37:40 +0000 UTC [ - ]

> At the beginning of the pandemic, politicians were saying masks don't work.

Are straw-man statements rational?

"Then there is the infamous mask issue. Epidemiologists have taken a lot of heat on this question in particular. Until well into March 2020, I was skeptical about the benefit of everyone wearing face masks. That skepticism was based on previous scientific research as well as hypotheses about how covid was transmitted that turned out to be wrong. Mask-wearing has been a common practice in Asia for decades, to protect against air pollution and to prevent transmitting infection to others when sick. Mask-wearing for protection against catching an infection became widespread in Asia following the 2003 SARS outbreak, but scientific evidence on the effectiveness of this strategy was limited.

"Before the coronavirus pandemic, most research on face masks for respiratory diseases came from two types of studies: clinical settings with very sick patients, and community settings during normal flu seasons. In clinical settings, it was clear that well-fitting, high-quality face masks, such as the N95 variety, were important protective equipment for doctors and nurses against viruses that can be transmitted via droplets or smaller aerosol particles. But these studies also suggested careful training was required to ensure that masks didn’t get contaminated when surface transmission was possible, as is the case with SARS. Community-level evidence about mask-wearing was much less compelling. Most studies showed little to no benefit to mask-wearing in the case of the flu, for instance. Studies that have suggested a benefit of mask-wearing were generally those in which people with symptoms wore masks — so that was the advice I embraced for the coronavirus, too.

"I also, like many other epidemiologists, overestimated how readily the novel coronavirus would spread on surfaces — and this affected our view of masks. Early data showed that, like SARS, the coronavirus could persist on surfaces for hours to days, and so I was initially concerned that face masks, especially ill-fitting, homemade or carelessly worn coverings could become contaminated with transmissible virus. In fact, I worried that this might mean wearing face masks could be worse than not wearing them. This was wrong. Surface transmission, it emerged, is not that big a problem for covid, but transmission through air via aerosols is a big source of transmission. And so it turns out that face masks do work in this case.

"I changed my mind on masks in March 2020, as testing capacity increased and it became clear how common asymptomatic and pre-symptomatic infection were (since aerosols were the likely vector). I wish that I and others had caught on sooner — and better testing early on might have caused an earlier revision of views — but there was no bad faith involved."

"I’m an epidemiologist. Here’s what I got wrong about covid."(https://www.washingtonpost.com/outlook/2021/04/20/epidemiolo...)

notsureaboutpg 2021-08-16 15:31:23 +0000 UTC [ - ]

>You are limited by your ability to distinguish colors but you can still decide rationally.

Rationally, the ability to distinguish colors varies between human beings - so much so that with a sufficient number of tomatoes (say 50), different people will give different answers for which are the greenest.

Knowing that your ability to distinguish these colors of tomatoes might not be as strong as, say, a tomato farmer's (since he likely works with these specific fruits and colors all the time), you may be rationally inclined to follow his logic in choosing which are the greenest.

Do you follow your intuition or trust an expert? Your contrived example is already difficult to actually make the most rational decision for.

6gvONxR4sf7o 2021-08-16 15:49:01 +0000 UTC [ - ]

That’s true for arbitrary graphs, but I don’t believe it’s practically relevant here any more than the fact that I can’t compute the first 10000 digits of pi in my head is. We are much worse than our computational limits.

varjag 2021-08-16 16:40:14 +0000 UTC [ - ]

Thing is the common failures of rational thinking are not approaching any computational limits. Witness the dumbassery of the past two years.

irrational 2021-08-16 16:43:09 +0000 UTC [ - ]

Past 5-6 years you mean.

jaredhansen 2021-08-16 17:06:26 +0000 UTC [ - ]

All past years you mean. It's not exactly a recent phenomenon.

mcguire 2021-08-16 18:30:02 +0000 UTC [ - ]

Life would be easier if we could agree on one rational decision in history and then just repeat it as necessary.

irrational 2021-08-16 18:38:30 +0000 UTC [ - ]

It's both sides, right? Right....

mistermann 2021-08-16 19:24:51 +0000 UTC [ - ]

Logically, "both" sides seems like the correct answer to me.

Tenoke 2021-08-16 17:22:01 +0000 UTC [ - ]

Nobody is disputing this. You can, however, clearly be more or less 'rational', adopt better or worse heuristics, etc., which is what you attempt to get from reading LessWrong or Kahneman.

nicoburns 2021-08-16 20:49:38 +0000 UTC [ - ]

But what constitutes a better heuristic is context dependent. In particular, if I must make a decision in a time-constrained manner then any heuristic that blows the time budget is going to be worse even if it would be better given more time. And one can't really know in advance how much time spent thinking is optimal. So one has to pick a strategy. The fact that humans have evolved to use a variety of strategies fast and slow (depending on the human) suggests that there is no single optimal strategy.

See also Gigerenzer's Ecological Rationality.

Tenoke 2021-08-16 21:49:25 +0000 UTC [ - ]

Sure, but I doubt you actually think that everyone is already operating as well as they can within the contexts they are placed in. There's definitely room for improvement. There are all sorts of scenarios where, even knowing nearly optimal techniques, outcomes can be improved by going the TimSort way, due to the context you most often find yourself in.

lisper 2021-08-16 15:47:29 +0000 UTC [ - ]

Ironically, some of the most irrational people I know are the ones who profess to hew to rationality, to the point where, in certain circles, "rationality" has become a sort of cult. This is particularly evident in militant anti-theism (whose adherents insist that the only possible explanation for someone believing in God is that they are idiots or otherwise mentally deficient), hard-core libertarians (who, ironically, end up politically aligned with hard-core fundamentalist Christians, at least in the U.S.) and a particularly weird strain of this disease that causes people to subscribe to (and actively proselytize!) the many-worlds interpretation of quantum mechanics. It's bizarre, and unendingly frustrating. Sometimes I feel like I'm the only rational creature in the universe because, of course, none of my beliefs are anything at all like theirs.

btilly 2021-08-16 17:04:09 +0000 UTC [ - ]

How do you know that they are the ones who are being irrational here, and not you?

This is a serious question. We should always challenge our preconceptions. To take your examples:

1. Traditional Judeo-Christian religions all claim we should believe because of claims made in holy books of questionable provenance, held by primitive people who believed things like (for example) disease being caused by demons. What rational reason is there for believing these holy books to be particularly truthful? (I was careful to not include Buddhism, whose basis is in experiences that people have while in altered states of consciousness from meditation.)

2. The shortcomings of libertarianism involve various tragedies of the commons. (My favorite book on this being The Logic of Collective Action.) However, the evidence in favor of most government interventions is rather weak. And the evidence is very strong that well-intended government interventions predictably will, after regulatory capture, wind up creating severe problems of their own. How do you know that the interventions which you like will actually lead to good results? (Note, both major US parties are uneasy coalitions of convenience kept together through the electoral realities of winner-takes-all. On the left, big labor and environmentalism are also uncomfortable bedfellows.)

3. To the extent that the observer is described by quantum mechanics, many-worlds is provably a correct description of the process of observation. In the absence of concrete evidence that quantum mechanics breaks down for observers like us, what rational reason is there to advocate for any other interpretation? (The fact that it completely violates our preconceptions about how the world should work is an emotional argument, not a rational one.)

lisper 2021-08-16 17:22:08 +0000 UTC [ - ]

I kind of intended that comment to be ironic self-deprecating humor because, of course, I have no way of knowing whether or not I'm being irrational. Irrational people think they're rational, and so the fact that I think I'm rational does not mean that I am. But it's likewise for everyone. The real point is that everyone ought to have a little more humility about their own rationality (especially all the idiots who are downvoting my original comments. Now they are being totally irrational!)

lisper 2021-08-16 19:29:05 +0000 UTC [ - ]

Too late to edit the above comment, but just for the record, this is my actual response to the many-worlders:

http://blog.rongarret.info/2019/07/the-trouble-with-many-wor...

btilly 2021-08-16 20:18:36 +0000 UTC [ - ]

I read it, but from it you seem to be making three points.

1. Many worlds is indeed what QM predicts should happen.

2. Popular descriptions are oversimplified and the full explanation is very complicated.

3. Even if many worlds is true, it doesn't change my experience and should not rationally change how I act when faced with quantum uncertainty.

If I am correct, then I'm in violent agreement with all three points. And am left with, "So until more data, I will provisionally accept many worlds as the best explanation."

My impression is that you seem to be left with, "If it is true, then it is irrelevant to my life, and so I don't care about whether it might be true."

lisper 2021-08-16 21:49:33 +0000 UTC [ - ]

> Many worlds is indeed what QM predicts should happen.

No. Many-worlds is what the SE predicts should happen. But the SE != QM. MW does not explain the Born rule, which is part of QM's predictions. MW is also violently at odds with subjective experience. So MW is not a good explanation of what is observed.

btilly 2021-08-17 17:13:14 +0000 UTC [ - ]

Let's weaken it slightly. Consider Schrödinger's cat. Let's assume the Copenhagen interpretation, and assume that collapse does not happen until the box is opened.

Now let's modify the experiment to assume a hyper-intelligent cat, with access to a full physics laboratory inside of the box.

QM predicts that there is no experiment that is possible for the cat to conduct that can tell whether collapse happened. The QM description of what's going on in the box is guaranteed to be alien to the cat's experience, but perfectly predicts what the cat does experience. Furthermore, even though in this hypothetical, collapse happens when the box is opened, there is no experiment that can be done by the outside experimenter which can verify that collapse happened when the box opened, and not before. Nor is there any experiment that the experimenter can perform that confirms that collapse does not happen afterwards.

And yes, this includes attempts by the cat to confirm the Born rule. As far as the cat can determine, the Born rule will be true.

Therefore our assumption in this hypothetical that QM describes the cat leads to MW being true for the cat no matter what is ultimately true.

And this is what I mean by saying that, to the extent that the experimenter is described by QM, MW is true.

lisper 2021-08-17 17:43:42 +0000 UTC [ - ]

> Now let's modify the experiment to assume a hyper-intelligent cat, with access to a full physics laboratory inside of the box.

A.k.a. Wigner's Friend.

https://en.wikipedia.org/wiki/Wigner%27s_friend

> QM describes the cat leads to MW being true for the cat no matter what is ultimately true.

You should read this:

https://www.nature.com/articles/s41467-018-05739-8

And this:

https://arxiv.org/abs/1812.06451v4

btilly 2021-08-17 21:02:47 +0000 UTC [ - ]

In return I offer you https://www.scottaaronson.com/blog/?p=3975 for a fundamental flaw in the Nature paper.

As for the differences between Convivial Solipsism and Many Worlds, I am indifferent to them. It is immaterial to me whether I am a singular observer who can only be aware of part of the wave function, or I am one component of a superposition of observers, each of which is only aware of part of the wave function.

I personally lean away from solipsism because I do not think that I, or my observation of reality, are that important. But that is a preference. And I don't have any particular justification for it or reason to disagree with anyone with the opposite opinion.

breuleux 2021-08-16 22:37:29 +0000 UTC [ - ]

Thanks, that was interesting :)

One thing I'm curious about: I haven't read the literature all that well, but my personal understanding of MWI, after trying to wrap my head around it, is that there's probably no branching or peeling at all: every possible configuration of the universe immutably exists and is associated with a complex amplitude. What does change are the amplitudes. When I make a choice at point A and the universe "splits" into B and C, the only thing that happens is that the amplitude in bucket A is split into buckets B and C. But there's no reason to think A, B and C were ever empty or will ever be empty: after all, some other state Z might pour amplitude into A at the same time A pours into B and C. We might even currently be in a steady state where the universal wavefunction is perfectly static, because every single "branch" is perfectly compensated by a "join". If so, MWI would challenge the very idea that existence is a binary predicate (it's actually a continuous complex amplitude). I'm honestly not sure how we're even supposed to reason about that thing.

Does that make any sense, or am I way off base?

lisper 2021-08-16 23:04:54 +0000 UTC [ - ]

> Does that make any sense

Not to me, sorry.

hindsightbias 2021-08-16 17:54:58 +0000 UTC [ - ]

Watching the SSC and NYTimes drama was pretty eye-opening about rationalists' rational discourse.

Even when SA himself eventually started questioning his response/allegations, few of the mob (there really is no other word for it) would accept it. All absolutist and conspiracy-laden.

PG said keep your identity small. I've found few rationalists or libertarians of any bent who meet that criterion.

Jensson 2021-08-16 16:45:40 +0000 UTC [ - ]

Being rational includes being rational about the computational power and heuristics used on a specific choice. Irrationality, therefore, is when people make completely stupid choices that aren't computationally hard to get right - not people failing to solve NP-hard problems.

jdmichal 2021-08-16 19:42:01 +0000 UTC [ - ]

In addition to Thinking, Fast and Slow, I'd recommend Annie Duke's Thinking in Bets. It builds on literature such as Thinking, Fast and Slow to discuss the separation of decisions and results. Specifically, thanks to luck, the quality of the decision is not always represented by the quality of the result. And in order to learn, one has to be able to recognize good and bad decisions regardless of results.

This seems to be a pretty good overview:

https://www.athenarium.com/thinking-in-bets-annie-duke/
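
A quick simulation of the book's core point (probabilities and payoffs invented here): a positive-expected-value bet is a good decision, yet any short run of results can still look bad, so results alone are a noisy signal of decision quality:

```python
import random

# A bet with a 60% chance of winning 1 unit, 40% chance of losing 1 unit.
# Expected value is +0.2 per play: taking the bet is a "good decision".
def play(p_win=0.6, win=1.0, loss=-1.0):
    return win if random.random() < p_win else loss

print([play() for _ in range(10)])  # a short run: luck can make this look awful
n = 100_000
print(sum(play() for _ in range(n)) / n)  # ~0.2: quality shows only in the long run
```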

didibus 2021-08-16 18:16:50 +0000 UTC [ - ]

That might be true given a single brain, but we as a species have access to billions of brains.

The question is: can we organize and educate ourselves to leverage that parallel power, letting each person become an expert in their own area with proper trust and incentives? And can we pass the previous generation's computation along to the next without corrupting the data?

Edit: And I forgot all the tools we've designed to help us compute, among which I'd count math as one and computers as another.

AndrewKemendo 2021-08-16 20:50:14 +0000 UTC [ - ]

This nails it.

I'd go further to say that there are real world issues that compound the variables. Namely that individual actions increasingly have global consequences eg. individual purchasing behaviors have externalities that the market is not pricing in and thus fall to the consumer to have to calculate.

Further, given their global scope, these kinds of calculations are game-theoretic by nature, making them even more complicated.

dwd 2021-08-16 22:29:54 +0000 UTC [ - ]

Memory and learning are additive - we don't have a delete key, except where a model can be completely replaced with something new, which usually happens at the simple-fact level. New information is assimilated into the rest of what we believe (like a wave function collapse), but that still allows discordant ideas at a distance - irrationality!

joe_the_user 2021-08-16 17:22:26 +0000 UTC [ - ]

Not to mention we don't actually know with certainty the probability of even simple things. The average person is almost never reasoning about simple, repeatable sequences of events. You know there's a chance of your car breaking down each day, but you don't know the probability of that event - and yet you still deal with that possibility.

yibg 2021-08-16 22:00:53 +0000 UTC [ - ]

Being NP-hard doesn't make it computationally impossible in all cases though. So while it might be computationally impossible to be rational in ALL cases, it could be computationally possible to be rational in some (or even many) cases. I think that's the goal to strive for.
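
For a concrete sense of scale: exact inference in a general Bayesian network is NP-hard, but small instances fall to brute-force enumeration instantly. A toy sprinkler-style net (all probabilities invented):

    from itertools import product

    # Toy net: Rain -> Sprinkler, (Rain, Sprinkler) -> WetGrass.
    p_rain = {True: 0.2, False: 0.8}
    p_sprinkler = {True:  {True: 0.01, False: 0.99},  # P(S | R=True)
                   False: {True: 0.40, False: 0.60}}  # P(S | R=False)
    p_wet = {(True, True): 0.99, (True, False): 0.80,  # P(W=True | R, S)
             (False, True): 0.90, (False, False): 0.00}

    # P(Rain | WetGrass) by enumerating every assignment of the parents.
    num = den = 0.0
    for r, s in product([True, False], repeat=2):
        joint = p_rain[r] * p_sprinkler[r][s] * p_wet[(r, s)]
        den += joint
        if r:
            num += joint
    print("P(rain | wet grass) =", round(num / den, 3))  # ~0.358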

threatofrain 2021-08-16 20:15:31 +0000 UTC [ - ]

Daniel Kahneman has soured on his own System 1/2 theory; moreover, his original theory discussed bounded rationality, not the kind of objective rationality that fell out of favor in the econ literature a long time ago.

ad8e 2021-08-17 03:35:40 +0000 UTC [ - ]

I'm interested in learning more. Is there a resource where I can see Kahneman's changed thoughts?

garbagetime 2021-08-16 21:06:35 +0000 UTC [ - ]

> We should all know that given a belief about the world, and evidence, Bayes' Theorem describes how to update our beliefs.

Should we? What of the problem of induction?

btilly 2021-08-17 17:15:26 +0000 UTC [ - ]

The problem of induction is a problem of how we may create a prior out of raw experience.

And neither logic nor mathematics offers a solution. In practice, however, we do. But, as any parent should know, we don't do it through a rational process.

ulucs 2021-08-16 17:12:07 +0000 UTC [ - ]

Why bother reasoning with NP-hardness when you can just invoke incompleteness? No brain power limitations are needed.

drdeca 2021-08-16 18:11:30 +0000 UTC [ - ]

Because incompleteness isn’t really relevant here?

Are you just saying “people aren’t logically omniscient, and can’t be because of incompleteness”?

tisthetruth 2021-08-16 17:59:40 +0000 UTC [ - ]

Not being jacked up on sugar and caffeine can help tremendously.

I would still like to see some studies which delve into whether sugar and caffeine are catalysts for biasing us towards system 1 and how they affect system 2, mindfulness, patience, etc...

6gvONxR4sf7o 2021-08-16 18:23:01 +0000 UTC [ - ]

There are some good bits in here. I love the subtitle especially: "The real challenge isn’t being right but knowing how wrong you might be." Knowing when not to provide an answer is hard. A big part of my job is communicating statistical findings and giving a good non-answer is much harder than giving a good answer, both technically speaking and socially speaking.

One thing I'll add that drives me nuts is the fetishization of Bayesian reasoning I sometimes see here on HN. There are times when Bayesian reasoning is helpful and times when it isn't. Specifically, when you don't trust your model, Bayes' rule can mislead you badly (frequently when it comes to missing/counterfactual data). It's just a tool. There are others. It makes me crazy when it's someone's only hammer, so everything starts to look like a nail. Sometimes, more appropriate tools leave you without an answer.

Apparently that's not something we're willing to live with.

hinkley 2021-08-16 18:37:18 +0000 UTC [ - ]

Thinking Fast and Slow left me with a feeling of despair about the human inability to reason effectively about statistics.

I like to tell people that charts work better for asking questions than answering them. Once people know you look for answers there, the data changes - more so than it does when charts are only used to ask questions (people will try to smooth the data to avoid awkward questions).

belter 2021-08-16 18:41:05 +0000 UTC [ - ]

"Thinking Fast and Slow" left me with the same feeling but not because of "Thinking Fast and Slow"

https://news.ycombinator.com/item?id=27261501

belter 2021-08-16 18:38:22 +0000 UTC [ - ]

I am with you :-) https://xkcd.com/1132/

tomjakubowski 2021-08-16 20:19:51 +0000 UTC [ - ]

Maybe I'm just missing the joke here, but "Bayesian reasoning" is hardly needed to realize that if the sun did explode, the $50 you'd lose in the bet is worthless anyway.

RogerL 2021-08-17 02:13:39 +0000 UTC [ - ]

That's not the joke. The point is that the sun exploding is extremely rare, so you can essentially conclude that the trial is a false positive. If you just look at the p-value you ignore P(B | not A) (the false-positive rate) and conclude that the sun exploded (no actual statistician would make that claim in this example, but they sure do in subtler situations).
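
Plugging in numbers (the prior is invented, but any remotely sane value gives the same verdict):

    # xkcd 1132: the detector lies only if two dice both come up six.
    prior = 1e-12                 # invented prior that the sun just exploded
    p_yes_if_exploded = 35 / 36   # detector tells the truth
    p_yes_if_fine = 1 / 36        # detector rolled double sixes and lied

    posterior = (p_yes_if_exploded * prior) / (
        p_yes_if_exploded * prior + p_yes_if_fine * (1 - prior))
    print(posterior)  # ~3.5e-11: the "yes" barely moves the needle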

MrPowers 2021-08-16 14:25:02 +0000 UTC [ - ]

Studying logical fallacies and behavioral-economics biases has been the best way for me to become more rational. I'm constantly calling myself out for confirmation bias, home-country bias, and the recency effect in my internal investment thought process.

Learning about logical fallacies and identifying them in conversations is great. Don't tell the counterparty about their logical fallacies in conversation, though, because that's off-putting. Just note them internally for a more rational inner dialogue.

Learning other languages and cultures is another way to learn about how different societies interact with objective truth. Living other places taught me a lot about how denial works in different places.

Thinking rationally is quite hard and I've learned how to abandon it in a lot of situations in favor of human emotions. How someone feels is more important than how they should feel.

anyfoo 2021-08-17 00:28:24 +0000 UTC [ - ]

> Learning other languages and cultures is another way to learn about how different societies interact with objective truth. Living other places taught me a lot about how denial works in different places.

This also has a rather frustrating side effect. It is true that not just intense traveling (not of the sightseeing kind), but also actually living in several different countries and cultures, broadened my horizons a lot. It definitely had the effect you talk about.

But then what? You cannot tell your partner in discussion "you would not think like that if you had traveled/lived outside of your culture", and it's also impossible to send everyone off to travel in order to experience the same. Much less in the US, where for most of the country you cannot just hop into a train for a few hours to encounter a completely different language and culture. (I grew up in Europe and moved to the US as an adult, but I've also lived in several different European countries before, and traveled to far away places like Asia.)

nostromo 2021-08-16 15:50:32 +0000 UTC [ - ]

The sunk cost fallacy is particularly important to learn about and teach your children about.

I see it everywhere, from my own decision making process to international politics. Just this morning I was thinking about it as I read the news about the US leaving Afghanistan, and last week talking with a friend who is staying at a bad job.

mcguire 2021-08-16 18:46:53 +0000 UTC [ - ]

Here's a question for you: what is the difference between the sunk cost fallacy and persistence?

And here's the answer: persistence is good when it is successful. If the activity is unsuccessful, it's an example of the irrational sunk cost fallacy. (Making decisions without knowledge of future events is quite hard.)

And the important lesson: If you bail at the first sign of adversity, no one can ever accuse you of being irrational. Of course, as the old saying goes, all progress is made due to the irrational.

clairity 2021-08-16 20:31:31 +0000 UTC [ - ]

that's not irrationality, that's decision-making under uncertainty, which is the norm, not the exception. probabilities are dynamic, information is imperfect, and so decision-making must incorporate that uncertainty.

the sunk cost fallacy is simply considering existing loss when deciding on continued investment (in time, money and other resources), when you should only consider future cost for future benefit. it's thinking erroneously that existing loss is not already locked in, that it's salvageable somehow. but no, it's already lost.

in a project with continuously updating probabilities of success, and under imperfect information, the go-or-no-go decision should only be based on the likelihood of future gains exceeding future losses, not future+existing losses.

in this framework, persistence would be having credible evidence (e.g., non-public information), not just belief, of the likelihood of future net gain relative to opportunity cost. it'd be irrational to be persistent simply on belief rather than credible information and probability estimation.
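
in code, the sunk-cost mistake is just adding a term that belongs on neither side of the comparison (a toy sketch with invented numbers):

    # Toy numbers: $50k already spent (sunk), $30k of repairs remain,
    # and the finished project will be worth $40k.
    sunk = 50_000
    future_cost = 30_000
    future_value = 40_000

    # rational go-or-no-go: future benefit vs. future cost only
    print("continue?", future_value - future_cost > 0)          # True: $10k ahead

    # sunk-cost framing drags the locked-in loss into the comparison
    print("continue?", future_value - future_cost - sunk > 0)   # False
    # the $50k is gone either way; it should not change the decision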

542354234235 2021-08-17 15:01:07 +0000 UTC [ - ]

There is a real difference. Take the example of the "money pit" house, where the cost of repairs vastly outweighs the gain you get at the end (either in resale or in livability), vs. a fixer-upper house.

After starting work on repairs, and after significant investment of time and money, you are not as far along as you thought you would be. You update your calculations of time and cost based on your progress so far and any new problems you have uncovered.

A persistent person looks at the costs of a fixer-upper, sees it is still likely worth doing, and is willing to put in the additional effort they had not originally planned for to see the project through. But they can also look at the future costs, recognize that the house is likely a money pit and that continued work is unlikely to ever yield a return on their investment, and accept that the time and money they have already spent are gone no matter what - but that they can prevent additional loss.

Someone biased by the sunk cost fallacy sees both projects the same. They look at the money pit and see that it is unlikely to show a return, but they fixate on the time and money they have already spent being lost if they walk away from the project, which influences them to continue.

To look at it another way, a persistent person would make the same calculation of a project's likely success regardless of whether they came into it at the first point or the second. They are persistent, so they won't give up on something because it is more difficult than they originally thought, and they won't give up on worthwhile work just because it is hard. Someone biased by the sunk cost fallacy will not make the same calculation after they have invested effort, as they will hold on to already-invested effort as a reason in and of itself to continue.

aidenn0 2021-08-16 22:30:24 +0000 UTC [ - ]

The difference between sunk-cost fallacy and persistence is that of motivation. If you keep doing something because "you've worked so hard already" then that's sunk-cost fallacy. If you keep doing something because "success is just around the corner" then that's persistence.

You can't go back in time and not work hard on something, so whether or not you should continue is purely a function of whether or not you think you will succeed, not a function of how much effort you've already put into it.

oldsklgdfth 2021-08-16 19:01:03 +0000 UTC [ - ]

In an attempt to catch myself in the act of committing logical fallacies, I have a flashcard app on my phone. One of the sets I have is of logical fallacies. Educating myself has made me more aware of them and of when I fall victim to them.

It's not an easy task. But 10 minutes a day can add up and reinforce that information.

A related idea is cognitive distortion. It's basically an irrational thought pattern that perpetuates negative emotions and a distorted view of reality. One example many here can relate to is imposter syndrome. But to feel like an imposter you have to overlook your achievements and assets and cherry-pick negative data points.

wyager 2021-08-16 18:25:34 +0000 UTC [ - ]

“Logical fallacies” are mostly Boolean/Aristotelian and identifying them is completely useless and/or counterproductive in 99% of real world scenarios. Most of your reasoning should be Bayesian, not Boolean, and under Bayesian reasoning a lot of “fallacies” like sunk cost, slippery slope, etc. are actually powerful heuristics for EV optimization.

jitter_ 2021-08-16 20:16:51 +0000 UTC [ - ]

> under Bayesian reasoning a lot of “fallacies” like sunk cost, slippery slope, etc. are actually powerful heuristics for EV optimization.

Can you elaborate on that?

This really piqued my interest. I feel like logic is easy to apply retrospectively (especially for spotting fallacies), but trying to catch myself in a fallacy in the present feels like excessive second-guessing and overanalyzing - the sort that prevents forward momentum and learning.

Would you by any chance have any reading recommendations on the topic?

wyager 2021-08-16 20:56:47 +0000 UTC [ - ]

Sure. Fallacies, as usually stated, tell you when something that feels like a logical entailment isn’t actually a logical entailment.

Intuitively, people find “bob is an idiot so he’s wrong” a reasonable statement.

Technically, the implication does not hold (stupid people can be correct) and this is an ad hominem fallacy.

However, if we analyze this statement from a Bayesian standpoint (which we should), the rules of entailment are different and actually bob being stupid is evidence that he’s wrong. So maybe this is actually a pretty reasonable thing to say! Certainly reasonable people should use speakers’ intelligence when deciding how much to trust speakers’ claims, even though this is narrowly “fallacious” in an Aristotelian sense.

I’m not aware of any reading on this topic. It seems under-explored in my circles. However I know some other people have been having similar thoughts recently.
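
For what it's worth, the update is easy to write down (all the likelihoods below are invented): Bob's track record enters as evidence that shifts a posterior, rather than as an entailment that forces it to zero.

    # Prior that the claim is true, before knowing who asserted it (invented).
    prior = 0.5

    # Invented track record: Bob asserts true things far less often
    # than false ones, so his assertion carries negative evidence.
    p_assert_if_true = 0.2    # P(Bob asserts it | claim is true)
    p_assert_if_false = 0.8   # P(Bob asserts it | claim is false)

    posterior = p_assert_if_true * prior / (
        p_assert_if_true * prior + p_assert_if_false * (1 - prior))
    print(posterior)  # 0.2: lowered, not zeroed - stupid people can be correct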

RonaldRaygun 2021-08-17 17:33:56 +0000 UTC [ - ]

No disagreement with the main thrust of your comment, it's a very good one and imo goes to the heart of the seemingly intractable divide between the logician's approach to truth and that of damn near everyone else - which tends to leave the logician reasoning into the void, doing not a bit of good for anyone.

However I myself would probably label the statement "Bob is an idiot" (or perhaps less abrasively, "Bob has often been wrong in the past in easily verifiable ways") not as evidence that he's wrong per se, but as a signal, possibly a rather strong signal, that he is likely also incorrect in the current matter.

A minor semantic quibble, but in my own experience I've found that conceiving of it as such helps frame the situation as a "sensor fusion of individually unreliable data sources" type of problem, as opposed to one of "collecting experimental results in a logbook and deriving conclusions from them."

The latter of which can lead pretty seamlessly to a towering edifice of belief built upon some ultimately pretty shaky foundations. Ask me how I know ;)

wyager 2021-08-17 19:34:37 +0000 UTC [ - ]

I just use the term “evidence” from probability theory. “Signal” feels pretty synonymous.

adam_arthur 2021-08-17 01:02:07 +0000 UTC [ - ]

Yes, 100% agreed! Your post reflects my feelings on this.

It's important to understand that something being a "logical fallacy" just implies that you can't unilaterally justify conclusion X by using reasoning Y.

But that does not mean that reasoning Y is not valid or helpful in understanding conclusion X.

Ultimately it's important to justify your views with sound reasoning, but life is full of heuristics, so using heuristics to reach a conclusion can often be reasonable. It just means the conclusion is not definitive from a logical point of view.

Ideally you use a combination of logically sound and heuristic based statements to support an argument.

Following your Bob example... it's important that the person making the argument use stronger reasoning than just calling Bob an idiot. But agreed that it's a totally valid piece of supporting evidence... assuming "Bob is an idiot" is a fairly agreed-upon statement.

newbamboo 2021-08-16 16:55:57 +0000 UTC [ - ]

Some are grateful to have them pointed out, after a bit of initial discomfort and resistance. Didn’t work out so well for Socrates of course, but we’re more enlightened now.

contravariant 2021-08-16 23:40:22 +0000 UTC [ - ]

If you want to be like Socrates it'd be better to not simply point out the fallacy but make people realize the fallacy with the Socratic method.

As far as arguments go "That's an XXX fallacy" is one of the weaker ones, if not fallacious in and of itself.

Matticus_Rex 2021-08-16 18:06:59 +0000 UTC [ - ]

> but we’re more enlightened now

We hope.

SMAAART 2021-08-16 14:16:16 +0000 UTC [ - ]

Nobody wants to deal with rational people.

Big businesses want people to buy things they don't need, with money they don't have, to impress people they don't like.

Politicians want people who will drink the Kool-Aid and follow what they (the politicians) say (and not what they do).

Religions... well, same.

And so all messaging - from advertising to movies to TV - is about hijacking people's feelings and suppressing rationality. Common sense is no longer common, and doesn't make much sense.

ret2plt 2021-08-16 22:45:17 +0000 UTC [ - ]

It's worse than that. The problem is that being truly rational is hard, unpleasant work that few people want to do. If you read an article that makes your political opponents look bad, you can't just feel smugly superior; you have to take into account that you are predisposed to believe convenient-sounding things, so you have to put extra effort into checking the truth of that claim. If you follow the evidence instead of tribal consensus, you will probably end up with some beliefs that your friends and relatives won't like, etc.

jimbokun 2021-08-16 15:56:01 +0000 UTC [ - ]

I think this is connected to another reason why so many seem to reject "rationality" today.

They are rejecting the authorities that in the past have tried to associate themselves with "rationality". The political think tanks. The seminaries. The universities. Government agencies. Capitalist CEOs following the "invisible hand" of the market.

All of these so-called elites have biases and agendas, so of course none of them should be accepted at face value.

I think what's missed is that rationality is not about trusting people and organizations, but about trusting a process. Trusting debates over lectures. Trusting well-designed studies over trusting scientists. Trusting free speech and examining a broad range of ideas over speech codes and censorship. Trusting empirical observation over ideological purity.

This is the value system of the so-called "classical liberals", and they are an ever more lonely and isolated group. A growing embrace of authoritarianism and defense of tribal identity on both the "left" and the "right" is taking its place.

pessimizer 2021-08-16 18:46:41 +0000 UTC [ - ]

"Classical liberalism" has little or no relationship to any sentiment you've expressed here, as far as I know.

DoingIsLearning 2021-08-16 14:20:53 +0000 UTC [ - ]

I don't disagree, but I have to say this absolutely reads like a voice-over from an Adam Curtis documentary.

cortesoft 2021-08-16 14:31:57 +0000 UTC [ - ]

I think part of it is a quote from Fight Club

chromaton 2021-08-16 14:58:25 +0000 UTC [ - ]

Quote Investigator says it's from 1928 newspaper column: https://quoteinvestigator.com/2016/04/21/impress/

zentropia 2021-08-16 15:00:42 +0000 UTC [ - ]

Fight Club, bus scene

tim333 2021-08-17 10:31:44 +0000 UTC [ - ]

Sometimes you want to deal with rational people - for example if you want things fixed and to work. I'd like a rational doctor, plumber and government. But I see your point that there are major incentives for encouraging irrationality in your customers.

toshk 2021-08-16 14:36:25 +0000 UTC [ - ]

When all our experiences are grounded in meaning, those emotional experiences might in some sense be more "real" than a logical thought.

athenot 2021-08-16 14:22:15 +0000 UTC [ - ]

This sounds cynical but yes, unfortunately, there are many incentives to not be rational.

Siira 2021-08-16 14:38:22 +0000 UTC [ - ]

I think you're confusing group rationality with individual rationality. There is never an individual incentive not to be individually rational, by definition. These are bad Nash equilibria, in game-theoretic terms.

marcod 2021-08-16 18:06:20 +0000 UTC [ - ]

I maintain that the concept of "common sense" is also quite useless now :p

kerblang 2021-08-16 18:07:07 +0000 UTC [ - ]

My problem in everyday work is that I so often have to deal with so-called software engineers who fancy themselves scientific thinkers but whose irrationality borders on delusional. A lot of them believe "I'm very smart, so I am therefore the most rational", which is obviously not true at all. This will probably make a lot of so-called software engineers angry, but I tend to think of the non-technical folk as the rational ones, and much easier to deal with as a result. Purely anecdotal though.

BurningFrog 2021-08-16 19:24:41 +0000 UTC [ - ]

You won't make any engineers angry.

We know you're talking about other engineers, and we agree about those fools!

kerblang 2021-08-16 20:22:51 +0000 UTC [ - ]

I appreciate the humor, but I'm not. There are bitter disagreements based on different interpretations of the facts, and there are bitter disagreements based on a complete disregard for the facts, a refusal to verify assumptions, a persistent use of arrogance as a substitute for competence, blaming the tools for failures of the person using them, and more. In fact there is nothing so maddening as dealing with a delusional person and being told, "I don't know why you're always getting in arguments with them!" as if it's just one of those "personality conflicts" - I would describe it as practically a personality disorder conflict.

DamnYuppie 2021-08-16 18:20:08 +0000 UTC [ - ]

I have observed that behavior in many other professions where the participants view themselves as very smart. Physicians and lawyers are at the top of that list.

nescioquid 2021-08-16 23:46:05 +0000 UTC [ - ]

I was chatting with a couple of software engineers, one of whom had just come from working at a research hospital on a project involving some imaging devices. The other engineer asked him if the physicians were tough to deal with, and he responded that no, not especially. But the physicists! Oh, the doctors hated the physicists for thinking they knew everything!

_moof 2021-08-17 00:53:45 +0000 UTC [ - ]

Indeed. Academicians are some of the worst offenders in my experience.

Also, you know how software engineers like to think that they're rocket scientists? Well, it brings me no pleasure to report that rocket scientists think they're software engineers.

guskel 2021-08-17 02:19:09 +0000 UTC [ - ]

Can you provide an example?

PicassoCTs 2021-08-16 19:25:03 +0000 UTC [ - ]

I find the distinction between emotions and logic to be quite synthetic. Emotion is nothing but logic - just hard-coded, subconscious, and hard to trace from the inside. A lot of "rational" thought falls into a similar category: the emotionally pre-chosen outcome is just decorated with "rational" arguments. The reason ultimately is the same as everywhere in life: economics - in this case, energy economics. Heuristics and early-outs are more desirable than a long, energy-intensive search of a complex space that ends in an indecisive conclusion, wandering between local maxima.

The really interesting question here is why emotions work as they do, and what the patterns and bits are that trigger them. To turn over that particular rock is to go to some deeply disturbing places - and to lose the illusion that emotions make one more "human". After all, if one's reaction is more hard-coded, shouldn't it be considered more machine-like?

alecst 2021-08-16 14:03:28 +0000 UTC [ - ]

It's really hard (for me, and I imagine, for everyone else) to not put myself into my views and opinions. Like, when someone shows me that I'm wrong, it's natural for me to feel attacked, instead of just taking it as a learning moment. Noticing when this happens and working with it has been my main struggle in learning how to be more rational. Those views and opinions really don't need to be a part of what I consider "myself."

Rationality, to me, is really about an open-minded approach to beliefs. Allowing multiple beliefs to overlap, to compete, to adapt, without interfering too much with the process.

polote 2021-08-16 15:14:55 +0000 UTC [ - ]

The basis of a rational decision is to work with hypotheses. When someone shows you that you are wrong, just ask yourself: which hypotheses is my belief based on? Did their points show that the logic connecting my hypotheses to my opinion was flawed? Did they show that my hypotheses were false?

If you want to be rational about an opinion, you have to think first: "what are my hypotheses?" Most people start with the opinion and then work down to the hypotheses. It can't work like that. It's the hypotheses plus the logic that should create an opinion, not the other way around.

sjg007 2021-08-16 15:13:02 +0000 UTC [ - ]

If you demonstrate an open mind when someone says you're wrong you are more likely to open their mind. That's a win.

Focus on yourself and controlling your emotions. Be the calm.

XorNot 2021-08-16 14:09:28 +0000 UTC [ - ]

> Allowing multiple beliefs to overlap

This doesn't seem very rational. If your beliefs are in conflict and you're content to not resolve that, then pretty much by definition you're accepting a logical inconsistency.

If resolving the intersection doesn't lead to a new stable belief system, then aren't you basically going with "whatever I'm feeling that day"?

claudiawerner 2021-08-16 14:44:41 +0000 UTC [ - ]

I've personally come to see this as a more complicated issue. Often, rational priorities contradict and overlap in scope - for example, discrepancies between moral reasoning and instrumental reasoning. Although I try to be reasonable about these, it's not always possible or preferable to side with one over the other.

However, the drive for total and pure consistency is also misguided, in my judgement. One reason we usually feel so driven and conflicted about inconsistency (to the point where it can lead to depression) is the psychological effect of cognitive dissonance. It's not clear to me that the only way to quieten cognitive dissonance is to resolve the dissenting thoughts.

Another way is to accept that not everything needs to be resolved. This can be great for mental health - again, just in my experience. Don't let the (sometimes irrational) effects of cognitive dissonance override your decision making. Resolution can work, but so can acceptance.

jcims 2021-08-16 14:43:22 +0000 UTC [ - ]

>This doesn't seem very rational. If your beliefs are in conflict and you're content to not resolve that, then pretty much by definition you're accepting a logical inconsistency.

This is just my perspective, but very few beliefs or values map to the whole of reality... they tend to bind to certain aspects of it, with variable priority along the spectrum of that particular dimension, whether it's personal agency, the color red, public health, spiders, etc.

However, reality rarely provides us with the ability to take a position purely on one factor... nearly every context in which a decision is required operates at the nexus of an uncountable number of these dimensions. Some you can feel swelling to the fore as their slope in your mental 'values' model increases; others stay dormant because you don't see how they apply. This is how most of my decisions that might look outwardly 'inconsistent' arise: there are confounding factors that dominate the topology and steer me in a different direction.

alecst 2021-08-16 14:19:46 +0000 UTC [ - ]

It's an ambitious and admirable goal to be completely logically consistent, but I've given up on that. Sometimes there are two different but consistent stories for the same thing. I get that maybe it doesn't seem rational, but sometimes there's no way to pick between stories.

And, also, sometimes you think you've settled on the right path, but then you later get a new piece of information and have to reevaluate.

So to me it's not so cut and dry.

mindslight 2021-08-16 15:36:39 +0000 UTC [ - ]

Your thinking is most certainly rational. The contrapositive of Gödel's incompleteness theorem tells us that any framework powerful enough to decide every question must contain contradictions. Since we attempt to reason about everything, our framework is necessarily large enough to be full of contradictions. Since we've got to deal with contradictions, they are not something to be avoided but rather acknowledged. If you're not acknowledging the contradictions and the "opposite side" of the implications you visit, then you will miss when that "other side" starts making more sense than the chain you're following. Not doing this means ending up at a nonsensical position while ignoring the obvious truth that contradicts it - a result we call cognitive dissonance.

This dual-thinking is related to the computer security mindset - you can't naively write code thinking your assertions will simply hold as you intend, but rather you need to be continually examining what every assertion "gives away" to a hostile counterparty.

There are alternative systems of logic that attempt to formalize reasoning in the presence of contradictions, keeping a single contradiction from being able to prove everything: for example, paraconsistent logic, and in a related spirit, intuitionistic logic. These feel much more in line with reasoning in an open world, where the lack of a negative doesn't necessarily imply truth. The focus on a singular "logic" that asserts everything has a single rational "answer" is a source of much of our modern strife.

nvilcins 2021-08-16 14:29:42 +0000 UTC [ - ]

We all operate with abstractions and simplifications - because it's impractical (and actually impossible, given the complexity of the world) to process and evaluate every single detail.

Dealing with contradictions in our own beliefs (paradoxes) is a part of life. The rational approach is to accept that and "fuse" those beliefs carefully, not (a) accept one and reject the others or (b) avoid the topic entirely.

lotsofpulp 2021-08-16 14:42:53 +0000 UTC [ - ]

The rational approach is to acknowledge that you do not have sufficient information to proceed, or to acknowledge the various assumptions (a better word than "belief") that you are using.

If you are using contradicting assumptions, then you should probably check to see if you are doing so because you want the conclusion that you are getting from the assumption.

amanaplanacanal 2021-08-16 16:29:47 +0000 UTC [ - ]

We make decisions based on imperfect information and conflicting values every day. We generally can’t wait until we have sufficient information to proceed.

lotsofpulp 2021-08-16 17:12:58 +0000 UTC [ - ]

That does not require using conflicting assumptions though.

karmakaze 2021-08-16 15:27:39 +0000 UTC [ - ]

People who gain knowledge by adding to a consistent/stable belief system are the ones who have the most difficulty adapting to new situations and processing new information that may upend volumes of settled knowledge. You can recognize them as the dogmatic types that remember the rules but forget how/why they adopted them and are at a loss to update them.

antisthenes 2021-08-16 19:18:36 +0000 UTC [ - ]

> People who gain knowledge by adding to a consistent/stable belief system are the ones who have the most difficulty adapting to new situations and processing new information that may upend volumes of settled knowledge.

That's such an incredibly rare occurrence that having a stable belief system far outweighs its potential drawbacks. Not to mention that rationality itself encompasses the ability to make such a switch anyway, if the new information actually does upend volumes of "settled" knowledge.

A much bigger problem, though, is people lacking critical thinking skills to adequately assign probabilities to the new information being valuable/useful/correct.

Hint: it's very low. (At the current stage of civilization; there were definitely periods when it was different.)

karmakaze 2021-08-16 22:57:23 +0000 UTC [ - ]

We may be in agreement and only categorizing 'stable' differently. Of course you want a single-coherent world view. What doesn't work well is if inferred or partial-case knowledge is committed as rigid facts that are incompatible with new information.

bluetomcat 2021-08-16 15:41:11 +0000 UTC [ - ]

You can only be rational within a greater framework defined by a set of beliefs. When society at large believes that market capitalism is the only way for promoting prosperity, the rational action for a single individual is to get a job, pay the bills and have a life. Other possible actions might have a stronger moral justification, but aren't as beneficial or rational for the individual.

s1artibartfast 2021-08-16 21:33:39 +0000 UTC [ - ]

There is no division between moral action and rationality. People just pick what they wish to optimize for. You can rationally pursue any moral cause just as easily as personal comfort.

jscipione 2021-08-16 16:53:35 +0000 UTC [ - ]

It is hard to be rational in the way the New Yorker intends because we are constantly being lied to and having information hidden from us by institutions and so we have lost trust in them.

President Dwight D. Eisenhower put it succinctly in his farewell address to the nation:

"The prospect of domination of the nation's scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded. Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific technological elite."

WhompingWindows 2021-08-16 15:23:30 +0000 UTC [ - ]

Can rationality exist outside of our minds? Is it just another mental heuristic?

In meditation, a common teaching is to examine an object for a long period - really just stare at it and allow your mind to focus on it fully. I see a coffee mug; it has a handle and writing on it; it's off-white and has little coffee stains. This descriptive mind normally goes a mile a minute, but eventually you can break through it and realize: this is just a collection of atoms, something reflecting photons and pushing back electrically against my skin's atoms. Even deeper, it's just part of the environment - one of all the things I can notice, like everything else we care about.

Such exercises can help reveal the nature of mind. There are many layers of this onion, and many separate onions vying for our attention at once. Rationality relies upon peeling back these superficial layers of the thought onion to get towards "the truth." That means peeling back biases, emotions, hunches, instincts, and all the little mental heuristics that are nice "shortcuts" for a biologically limited thinker.

But outside our minds, how is there any rationality left? It feels like another program or heuristic we use to make decisions to help us survive and reproduce.

danans 2021-08-16 16:51:09 +0000 UTC [ - ]

I think there is a simpler explanation that draws from evolutionary theory: being excessively rational is not a good survival strategy, be it in the distant past or today.

If our ancestors had made the rational assessment that there was unlikely to be a predator hiding behind the bush, that would have worked only as long as it worked - until one day they got eaten.

Irrationally overestimating threats and risks is not optimal in any single instance, but as long as you survive, it can be optimal in the long run.

Humans using irrational stories to enable group cohesion and coordination is a similarly irrational but intrinsic way of being that also provides an evolutionary advantage.

Rationality, however, is an incredible optimization tool when operating in domains that are well understood, like the example of stereo equipment that the author gave in the article. It can also help in the process of expanding knowledge by helping us systematically compare and contrast signals.

But it doesn't prevent the lion from eating you or the religious or temporal authority from ostracizing you from the safety of the settlement, and it may even make both of those outcomes more likely.

SamBam 2021-08-16 17:44:37 +0000 UTC [ - ]

> If our ancestors would have made the rational assessment that there is unlikely to be a predator hiding behind the bush, that would have worked only as long as it worked, until one day they got eaten.

That wouldn't have been a rational assessment, because it wouldn't have been an accurate assessment of the risks of being wrong, and the behavior required to avoid them.

If there's only a 1% chance that a predator is behind a bush, and that predator might eat you, it's absolutely rational to act as though there is a predator. You'll be seeing lots of bushes in your life, and you can't escape from those 1% chances for long.

The same thinking is why it would have been rational to try and avoid global warming 30 years ago. Even if the science was not settled, in the worst-case scenario, you'd have "wasted" a bunch of money making green energy production. In the best-case scenario, you saved the planet.
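
(The arithmetic behind "you can't escape those 1% chances for long", taking the 1% figure at face value:)

    # Survival odds after n independent bush encounters at 1% risk each.
    for n in (1, 10, 100, 500):
        print(n, "bushes -> P(survive) =", round(0.99 ** n, 3))
    # 1 -> 0.99, 10 -> 0.904, 100 -> 0.366, 500 -> 0.007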

fallous 2021-08-16 19:38:40 +0000 UTC [ - ]

It's not actually rational, let alone long-term optimal, to act as though there is a predator behind every bush given a 1% chance (in reality it's probably a couple of orders of magnitude less likely, but we'll ignore that). If you need water and head for the local watering hole, avoiding bushes will most likely result in you not getting water, since bushes tend to grow where there is water. I may be 1% likely to get eaten by something hiding behind the bush, but I am 100% likely to die if I don't drink water.

Avoidance of all possible risk is a recipe for paralysis. Part of being rational is evaluation of risks vs rewards as well as recognizing the dangers of unintended consequences and the fact that nearly all meaningful decisions are made with incomplete information and time limits.

slingnow 2021-08-16 20:05:09 +0000 UTC [ - ]

Somehow you took their response to mean "the rational thing to do is avoid all bushes, forever, no matter the consequences".

The OP merely stated you should adjust your behavior to the 1% chance. That would include weighing it against the risk of dying from dehydration, in your example.

lazide 2021-08-16 17:15:58 +0000 UTC [ - ]

Humans operate by doing, then rationalizing. Much of the attempted rational thought here demonstrates how easy it is to fool ourselves into thinking we are being rational, when really we are acting on feelings and delusions and then constructing what feels like a rational argument - one that falls apart upon analysis.

In the past, it was a rational concern to be worried about being jumped by a predator from behind a bush; if you don't know whether or not there is a predator, it is perfectly rational to be worried about such a thing!

Same with diseases and their causes when you don't know what is causing them, etc.

There's a tendency to dismiss as irrational older concerns from a time of severe information scarcity, when - knowing your limits and seeing the results - there was no other rational way to behave except to be concerned or to avoid those things. While it's not rational to believe clearly contradictory religious dogma on a topic, it is rational to follow or support it when it clearly aligns with visibly effective methods encoded in it for avoiding disease and other problems.

danans 2021-08-16 17:39:06 +0000 UTC [ - ]

> In the past, it is a rational concern to be worried about being jumped by a predator from behind a bush, and if you don’t know if or if not there is a predator, it is perfectly rational to be worried about such a concern!

I think we agree, but I also think you are using "rational" here in the colloquial sense to mean the "smartest" thing to do.

The article, and my comment in response, uses the traditional definition of "rational" as something derived from logic, and not from impulse or instinct.

The two definitions are not the same (not that one is better than the other, they just mean different things).

lazide 2021-08-16 18:00:52 +0000 UTC [ - ]

Nope, explicitly using logic. We didn’t invent thinking about things in the last hundred years after all.

If you don't know what is behind thing x, and every y-th time someone walks by a thing like x they get jumped by a leopard, then only walk by x when the risk is worth it. Which it rarely is.

If you’re referring to formal logic, then sure - but almost no one in that thread seems to be using that definition either. Formal logic is incredibly expensive (mentally), and only a few percent of folks even now can afford to use it with any regularity.

wyager 2021-08-16 18:30:05 +0000 UTC [ - ]

This is also captured in the “midwit phenomenon”, where people who are just smart enough to start applying “rationality” make worse decisions than stupid people. This is because stupid people are operating off of hard-earned adaptations (encoded as traditions, folk wisdom, etc.). Midwits are smart enough to realize that the putative justifications for these adaptations are wrong, and therefore they toss out the adaptations. People who think about it even harder realize that these adaptations were mostly there for good reasons, and getting rid of them isn’t a good idea even if the relevant just-so stories explaining them don’t hold up to “rational” scrutiny.

UnFleshedOne 2021-08-16 20:59:31 +0000 UTC [ - ]

Midwits (which we all are to one degree or another) can be mostly fixed by applying Chesterton's Fence principle though. We just need a knock or two in both directions to better estimate a relative weight of that rule as a heuristic.

meatmanek 2021-08-16 23:31:46 +0000 UTC [ - ]

Just because there's a 99% chance the bush has no predators behind it does not make it rational to assume there are no predators.

In Bayesian decision theory, you'd choose the action (walk directly by the bush; walk by the bush but have your guard up; steer clear of the bush) that minimizes your loss function (e.g. probability of dying or probability of your blood line dying out). You'd end up picking a path that balances the risk of being eaten by a lion with the cost of having to walk further (and thus having less time and energy to gather food; or tripping and cutting yourself and dying of infection; or whatever).
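
A minimal version of that decision rule (probabilities and losses invented): score each action by its expected loss and pick the smallest, rather than acting on the most likely state alone.

    # Invented probability that a predator is actually behind the bush.
    p_predator = 0.01

    # Invented losses for each (action, state): (predator present, absent).
    losses = {
        "walk directly": (1000.0, 0.0),  # eaten vs. no cost
        "guard up":      (200.0,  1.0),  # better odds, slight effort
        "steer clear":   (5.0,    5.0),  # the detour costs energy either way
    }

    expected = {a: p_predator * lp + (1 - p_predator) * la
                for a, (lp, la) in losses.items()}
    for action, loss in sorted(expected.items(), key=lambda kv: kv[1]):
        print(action, round(loss, 2))
    # "guard up" wins: minimizing expected loss balances the 1% disaster
    # against the everyday cost of caution.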

mbesto 2021-08-17 04:12:14 +0000 UTC [ - ]

I like this, but I also like the even simpler explanation. Our bodies/minds require energy and "skipping" to conclusions (irrationally) expends less energy.

FinanceAnon 2021-08-16 15:20:28 +0000 UTC [ - ]

It's impossible to be absolutely rational. I feel like there are so many different levels and viewpoints that there is no right answer.

Simple example:

Let's say the same pair of shoes is available in two different shops, but in one shop it's more expensive. It seems more rational to buy it in the cheaper shop. However, what if you've heard that the cheaper shop is very unethical in how it conducts its business? Is it still more rational to buy the shoes there?

And then you might also start considering this situation "in the grand scheme of things" - in the grand scheme of things does it make any difference if I buy it in shop A or B?

And at which point does it become irrational to be overthinking simple things in order to try to be rational? What if trying to always be rational is stressing you out, and turns out to be worse in the long run?

UnFleshedOne 2021-08-16 21:19:08 +0000 UTC [ - ]

Deciding when to stop overthinking is also a rational process. Some choices truly don't matter, or don't matter enough to spend time and energy on them.

If consumer ethics is important to you then it obviously warrants some deliberation, weighted by an upper bound of your potential impact. But identifying areas of meaningless choice and simply choosing randomly (and not even caring if the choice is sufficiently random) frees up a lot of mental energy.

dgant 2021-08-17 14:39:08 +0000 UTC [ - ]

This dilemma is addressable in a rational framework. Utilitarianism suggests you can attempt to quantify the harm done by your purchase, and decide how to discount that harm against your own self-interest. You're also able to estimate whether the costs involved are so marginal that they're outweighed by the cost of thinking about the dilemma - in which case it would be best to commit to either decision rather than continue deliberating.

MisterBastahrd 2021-08-16 17:42:21 +0000 UTC [ - ]

Yeah, for example, let's say that I can buy from ShoeCo or big, evil Amazon. But big, evil Amazon allows me to donate a portion of their proceeds to a charity of my choice, and furthermore, I am also within my rights as an individual to take the difference between ShoeCo's price and Amazon's and donate it to another cause as well.

Some will say that buying from Amazon simply perpetuates Amazon... but Amazon is so large at this point that it doesn't matter WHAT I do. So ultimately, is the world better off with my two donations from my Amazon purchase or giving my money away for the same product to ShoeCo?

SamBam 2021-08-16 17:48:33 +0000 UTC [ - ]

If Amazon is so big that your purchase is meaningless, then the problems of the world are also so big that your donations are probably meaningless.

If your donations have some tiny bit of meaning to them, then removing a tiny bit of business from Amazon and paying your local shopkeeper probably also has meaning.

notahacker 2021-08-16 18:51:01 +0000 UTC [ - ]

Don't think that follows automatically. My dollar - in isolation - can feed someone tomorrow, even if it doesn't feed others and they're all hungry next week. The lack of my dollar alone won't change the ethics of Amazon in the slightest, and much as the more ethical shopkeeper won't mind the extra number in his bank account, it's unlikely to let him displace unethical companies or do anything else wonderful with it. The difference between direct, tangible outcomes and perhaps more significant outcomes that depend on a lot of other people acting a particular way is one of the thornier questions about what's rational to prioritise. tbh when I do boycott stuff it's mostly an emotional response

(notwithstanding better objections to the original example: in practice most donors' finances aren't so tight that buying the $90 product rather than the $100 one is really necessary to free up donor funds for a worthy cause, as opposed to emotionally salving the donor's conscience for buying from an unworthy vendor...)

vdqtp3 2021-08-16 18:30:34 +0000 UTC [ - ]

> removing a tiny bit of business from Amazon and paying your local shopkeeper probably also has meaning.

It might be fair to say that removing business from Amazon has no real impact but giving that business to a small business does.

MisterBastahrd 2021-08-16 23:57:16 +0000 UTC [ - ]

Whether a business is local means little to me if the local owner is a spendthrift who mistreats his employees or pays them poorly. For example, when I was a kid, the local video shop owner would hire high school kids at minimum wage and keep them on payroll just under the statutory threshold for certain workplace protections before firing them. Should I reward that asshole just because he's local?

wizzwizz4 2021-08-16 14:37:18 +0000 UTC [ - ]

> In a recent interview, Cowen—a superhuman reader whose blog, Marginal Revolution, is a daily destination for info-hungry rationalists—told Ezra Klein that the rationality movement has adopted an “extremely culturally specific way of viewing the world.” It’s the culture, more or less, of winning arguments in Web forums.

This matches my observations, too.

> Cowen suggested that to understand reality you must not just read about it but see it firsthand; he has grounded his priors in visits to about a hundred countries, once getting caught in a shoot-out between a Brazilian drug gang and the police.

kubb 2021-08-16 14:43:44 +0000 UTC [ - ]

One of my many pet peeves is people who travel to more than 100 countries to get "experiences". It feels misguided, wasteful, excessive, and done to impress others - a sort of status symbol. I bet he couldn't name all the countries and cities he's been to. A deep and meaningful experience requires much more than a superficial visit.

someguy321 2021-08-16 14:53:02 +0000 UTC [ - ]

I read that fellow's blog (marginalrevolution.com), and he goes out of his way to get the best authentic local food he can; he's well read on the history of many different countries and the economic implications of their recent history (he's an academic economist). He often does a brief blog write-up about the particularly culturally unique bits of a place after he visits. Part of his job as an academic and popular econ-culture writer is to understand cultures and economies around the world.

I don't mind if part of his motivation is to impress others, or if it's wasteful, etc. Why would his motivations have to be pure for it to be meaningful for him?

karmakaze 2021-08-16 15:09:21 +0000 UTC [ - ]

That actually sounds more resourceful than wasteful, as readers can have vicarious experiences through his writings.

kubb 2021-08-16 15:05:14 +0000 UTC [ - ]

Don't get me wrong, gorging yourself on a variety of foods from around the world can be pleasurable. It also gives you zero insight into how people in that country are different than elsewhere.

You could understand more about a country by studying it from home than by visiting it for a week.

I don't like that it's presented as a lifestyle that people should strive to pursue. I know certain people here will vehemently oppose this opinion, because in effect it's a critique of them or that which they admire.

Retric 2021-08-16 15:19:42 +0000 UTC [ - ]

It goes both ways.

No you really can’t understand a culture from a week of study the same way you can from being there for a week. The issue is the millions of unknown unknowns that you never really consider. How large is people’s personal space, where do they stand and look in an elevator, what’s traffic like, how loud are people, etc etc. Of course a week or three isn’t that long, but there are real diminishing returns here.

On the other hand, personal experience is very narrow in scope. You're never going to find out country-wide crime rates by wandering around for a week.

tonyedgecombe 2021-08-16 17:53:41 +0000 UTC [ - ]

>Of course a week or three isn’t that long, but there are real diminishing returns here.

I suspect you have to live and work in a place to really understand it. If you are wealthy and visiting a poor country, there is virtually zero chance: you will always be too insulated from the reality.

pessimizer 2021-08-16 18:54:59 +0000 UTC [ - ]

If you are wealthy and born and raised in a poor country, you will likely be quite ignorant of most of the lifestyle of most of its people.

michael1999 2021-08-17 03:01:17 +0000 UTC [ - ]

He's an academic economist with interests in the role of institutions and culture in how economies develop and function (or don't). How else should he do his job? I would suggest you save your disdain for economists who never leave their office.

zepto 2021-08-16 17:37:25 +0000 UTC [ - ]

The people you describe do seem to exist, but what makes you think Cowen is one of them?

SamoyedFurFluff 2021-08-16 15:06:50 +0000 UTC [ - ]

I agree that a culture of "winning arguments in Web forums" often has a bias in and of itself that requires going out and diversifying one's experiences. But I don't think that always requires travel. Volunteering at a soup kitchen, fostering a rescue animal, organizing a community event, and talking to the elderly in care facilities will all expose you to experiences outside the internet, and none requires travel.

skybrian 2021-08-16 15:56:30 +0000 UTC [ - ]

Sure, those are good for learning more about your own community.

But you’re not going to learn the same things you would from travel. For example, you’re not likely to learn another language if everyone you talk to speaks English. Similarly for learning about other cultures that aren’t near you.

But I’m not sure how much brief travel to see the tourist sites helps, and hanging out with expats might not help so much.

legrande 2021-08-16 13:59:32 +0000 UTC [ - ]

I try to avoid mind viruses - ideas that can hijack your decisions and thought processes and take over. Think of a mind virus as a sort of dangerous meme that underpins everything you do. This is why first principles, and making decisions based on sound foundations, are better - absent some sort of virulent dogma.

jjbinx007 2021-08-16 14:06:43 +0000 UTC [ - ]

Viruses. Virii isn't the plural of virus.

There's a YouTube channel (1) called Street Epistemology which has a guy interview members of the public and ask them if they have a belief they hold to be true such as "the supernatural exists" or "climate change is real" or "x is better than y".

He then asks them to estimate how certain they are that it's true.

Then they talk. The interviewer asks a question and makes notes, then tries to summarise the reply. He questions how they know what they think they know and at the end he asks them to again say how confident they are that what they said is true.

It's fascinating to see people actually talk about and discuss what are usually unsaid thoughts, and it exposes some glaring biases and logical fallacies.

(1) https://youtube.com/c/AnthonyMagnabosco210

legrande 2021-08-16 14:32:53 +0000 UTC [ - ]

> Virii isn't the plural of virus.

Thanks for correcting me. I will refrain from ever using virii again!

https://en.wikipedia.org/wiki/Plural_of_virus

digitalsushi 2021-08-16 14:52:06 +0000 UTC [ - ]

I knew what you meant. I feel like we almost have our own culture, sometimes. Weird.

WhompingWindows 2021-08-16 15:15:32 +0000 UTC [ - ]

I may be wrong, but "Mind Virii" could be using the genitive or possessive form of Virus, like "Mind of a Virus" or "Virus's Mind".

jklinger410 2021-08-16 14:10:27 +0000 UTC [ - ]

Glad to hear you aren't the only person thinking of the mind virus idea!

Exactly what you said. Once you accept one toxic thought, it tends to branch out into other decisions. Unfortunately there are many, many memes out there ready to cause an infection.

These things can be fatal.

OnACoffeeBreak 2021-08-16 15:28:25 +0000 UTC [ - ]

Sci-fi novel "Lexicon" by Max Barry explores the idea of words used for persuasion to the extent of actually hacking the brain via spoken word to take control of the subject's thoughts and actions.

FinanceAnon 2021-08-16 18:50:56 +0000 UTC [ - ]

I thought about something similar in the context of "dangerous" AI. In a hypothetical scenario where super-smart AI got control of the internet and all the devices, would it be able to start controlling people?

rafaelero 2021-08-16 18:29:27 +0000 UTC [ - ]

I am seeing a lot of "institutions lied to us and are actively keeping information from ourselves" when people try to justify acting irrationaly. I don't agree with this premise at all. What do you mean they keep information from you? This assumes that information can be contained, which in most cases is impossible. There is always leakage.

Now, to be more generous, I will assume that people are actually criticizing how "institutions impose a mainstream view that is difficult to replace even when the facts say it should be". To that I say: fine. But even in this case, there should be enough resources to form a rational opinion on the matter (with probabilistic reasoning). Hell, I have a lot of unorthodox opinions that are so far outside the Overton window that I can rarely discuss them. And even in those cases, the internet and Google Scholar/Sci-hub were sources that helped me explore them.

So, I have no sympathy for this "institutions lied to us, let me believe now whatever I want" bullshit.

throwaway9690 2021-08-16 14:37:15 +0000 UTC [ - ]

I think part of the problem is that most people are conditioned into many beliefs from a young age.

I know a guy who hates foo (using a placeholder). In fact, he's downright foophobic. He is pretty convinced he has a natural, unbiased hatred of foo and is being rational when he expresses it.

To me as an outsider it is pretty obvious that his hate of foo is the result of cultural conditioning. To him it is perfectly rational to hate foo and to me it is totally irrational, especially since he can't give any concrete reason for it.

So who is right and who is being rational?

carry_bit 2021-08-16 20:33:27 +0000 UTC [ - ]

It could be a case of implicit vs explicit knowledge. In the context of evolved cultural beliefs, the foophobia may serve some real purpose, even if most or all of the enculturated individuals can't explicitly state what that purpose is.

It could be that, like dietary restrictions that once reduced the spread of disease, the foophobia is no longer needed, but keep Chesterton's fence in mind before you say it's unneeded.

dfxm12 2021-08-16 15:17:15 +0000 UTC [ - ]

It really depends what foo is. I don't think it's rational to waste time on unimportant things. If foo is eating red meat, then I don't think it's rational to really worry about it one way or another.

> I think part of the problem is that most people are conditioned into many beliefs from a young age

I think it's irrational not to consider new information once it's been processed. So, again, this depends on what foo is. If it is obeying speed limits even when no one else is on the road, and your friend learns the penalties for disobeying road signs when they get their license, they would probably find it irrational not to do the speed limit, even if they hate it. They wouldn't want to risk the fines, license suspension, etc.

However, let's say your friend's brother has stronger beliefs and can afford any fines and legal action. He could think it over and still decide that it's rational not to obey the speed limit. This doesn't make it right; I think right and rational are independent of each other.

throwaway9690 2021-08-16 16:26:00 +0000 UTC [ - ]

When I mention conditioning, I mean from a very young age.

For example: Throw salt over your shoulder if you spill some -or- Green skinned people are bad and you should never trust them or allow them in your neighborhood.

Now, the former is pretty harmless, but not so the latter. In both cases the only explanation is "that's how I was raised", which I don't find compelling or rational.

wizzwizz4 2021-08-17 13:28:27 +0000 UTC [ - ]

If the person has some stronger belief (e.g. people are important, and hurting them unnecessarily is bad) that can override “never allow green-skinned people in your neighbourhood”, they're redeemable. If they don't, they're evil. (Evil people can be rational, too.)

pessimizer 2021-08-16 18:58:31 +0000 UTC [ - ]

> natural unbiased hate

...is a pretty silly phrase. If you don't have a reason for something, it can't (by definition) be reasonable.

someguy321 2021-08-16 14:46:38 +0000 UTC [ - ]

Value judgements exist in a separate domain from pure rationality.

I like chocolate ice cream more than vanilla ice cream, and you're not gonna convince me otherwise by debating the flavor with me. It entirely could be the case that my preference is from cultural conditioning, but it's not my concern.

If your friend has a mindset of "to each his own" there's no problem.

teddyh 2021-08-16 15:38:05 +0000 UTC [ - ]

> to me it is totally irrational, especially since he can't give any concrete reason for it.

In my experience, people usually can give ‘concrete’ reasons for it, but what constitutes ‘concrete’ is a matter of opinion, and I don’t consider everybody’s reasons to be valid. But of course, they do.

wizzwizz4 2021-08-16 14:38:33 +0000 UTC [ - ]

Preferences do not need to be rationally justified; without axiomatic preferences, we have no preferences at all.

throwaway9690 2021-08-16 16:29:09 +0000 UTC [ - ]

I'm referring more to prejudices rather than preferences.

jsight 2021-08-16 14:18:17 +0000 UTC [ - ]

Isn't the answer obvious? Because doing otherwise involves a lot more work, and people choose the easier path.

MarioMan 2021-08-16 16:15:25 +0000 UTC [ - ]

Sometimes I go into deep-dives to try to find some truth to a contentious issue. I think it’s important not to take the easy path; certainly not if you want a well-learned opinion. Any sense of superiority this gives me is dashed when I realize:

1) It’s not reasonable to expect someone to dig so deeply, and there isn’t enough time to do it for every issue.

2) Someone, somewhere, has done an even deeper dive into the same issue. From their perspective, I’m the one that hasn’t done my research. When it’s “enough” is a fuzzy line.

esarbe 2021-08-16 16:03:48 +0000 UTC [ - ]

To quote evolution: why go for perfect when you can go for good enough?

achenatx 2021-08-16 14:34:32 +0000 UTC [ - ]

The ultimate issue is that underpinning every action is a value system. Value systems are opinions and are fundamentally not rational.

Virtually every political disagreement is based on values, though most of the time people don't recognize it.

Values determine priorities and priorities underpin action.

For example some people feel that liberty (e.g. choice) is more important than saving lives when it comes to vaccines.

Some people feel that economic efficiency is less important than reducing suffering.

Some people feel that the life of an unborn child is worth less than the ability to choose whether to have that child.

Even in the article, is a stereo that sounds better actually better than a stereo that looks better? That is a value judgement and there is no right or wrong.

No one is actually wrong, since everything is a value judgement. Many people believe in a universal view of ethics/morality, but there is almost no universal set of ethics/morality if you look across space and time.

However, some values allow a culture to out-compete other cultures, causing the "inferior" values to disappear. New mutations are constantly being created. Most are neutral and have no impact on societal survival. Some are negative and some are positive.

derbOac 2021-08-16 18:18:02 +0000 UTC [ - ]

I came to say something similar: that rational decision making is really a poorly posed problem at some level.

Take money, for example. You can create a theoretical decision-making dilemma involving certain sums of money and work out the most rational strategy, but in reality the value of a given sum of money differs between people, depending on their value systems and competing interests. So you end up in a scenario where 1 unit of money means something different to different people (the value you put on 1 € is going to be different from the value I put on it; exchange rates are in effect an average over all these valuations), which might throw off the relevance of the theoretical scenario for reality, or change the optimal decision.

The other issue, besides the one you're describing -- the subjectivity of the weights assigned to different outcomes, the Achilles' heel of utility theory -- is uncertainty not just about the values in the model, but about whether the model is even correct at all. That is, you can argue that some course of action is more rational, but what happens when there's a nontrivial probability that the whole framework is incorrect? Your decision between A and B shouldn't be modeled only in terms of whatever is in your model, but in terms of all the things you're not accounting for. Maybe there are other options, C and D, which you aren't even aware of (or someone else is), but you have to choose B to get to them.

Just yesterday I read a very well-reasoned, elegant, rational explanation by an epidemiologist about why boosters aren't needed. But about 3/4 of the way through I realized it was all based on an assumption that is very suspect, and which throws everything out the window. There were still other things their argument was missing. So by the end of it I was convinced of the opposite conclusion.

Rationality as a framework is important, but it's limited and often misleading.
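
(The "model may be wrong" point above can be made numeric. A minimal sketch in Python, where all the probabilities and payoffs are purely illustrative assumptions, not anything from the article:)

    # Expected value of each option once model uncertainty is priced in.
    p_model_correct = 0.8
    value_if_correct = {"A": 10.0, "B": 6.0}   # what the model predicts (illustrative)
    value_if_wrong   = {"A": -20.0, "B": 5.0}  # guessed outcomes outside the model

    for option in ("A", "B"):
        v = (p_model_correct * value_if_correct[option]
             + (1 - p_model_correct) * value_if_wrong[option])
        print(option, v)  # A: 4.0, B: 5.8 -- B wins, though the model alone favors A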

_greim_ 2021-08-16 16:23:56 +0000 UTC [ - ]

> is a stereo that sounds better actually better than a stereo that looks better? That is a value judgement and there is no right or wrong.

Disagree; value systems are the inputs to rationality. The only constraint is that you do the introspection in order to know what it is that you value. In that sense buying a stereo based on appearance is the right decision if you seek status among peers or appreciate aesthetics. It's the wrong decision if you want sound quality or durability.

I think the real issue is that people don't do the necessary introspection, and instead just glom onto catch-phrases or follow someone else's lead. That's why so many people hold political views that are contrary to their own interests.

mariodiana 2021-08-16 18:01:18 +0000 UTC [ - ]

Yes, and I think when people claim to be describing what a "rational actor" would do, what they often leave out are the normative assumptions inherent in their rational analysis. Moreover, I suspect the omission at times is not accidental.

esarbe 2021-08-16 15:59:55 +0000 UTC [ - ]

Because we didn't evolve to be rational. We evolved to reproduce as often as possible, not to think as precisely as possible. We're not thinking machines, we're reproduction machines.

That we are able to think somewhat rationally at all is only because we adapted by running extensive modeling simulations. The fundamental function of these simulations is to simulate other beings, primarily humans. And in that, our brainware is lazy as hell, because - to quote evolution - why do perfect when you can do good enough? It saves a ton of energy.

The wetware we employ was never expected to rationally solve differential equations or do proper statistical analysis. At best it was expected to guess the parabola of a thrown stone or spear, or estimate the best way to mate without facing repercussions from the tribe.

So, really, it's not that thinking is hard. It's that we're just not equipped to do it.

mncharity 2021-08-16 15:10:43 +0000 UTC [ - ]

Jim Keller (famous CPU designer; Lex Fridman interview)[1]: "Really? To get out of all your assumptions, you think that's not going to be unbelievably painful?" "Imagine 99% of your thought process is protecting your self conception, and 98% of that's wrong". "For a long time I've suspected you could get better [...] think more clearly, take things apart [...] there are lots of examples of that, people who do that". "I would say my brain has this idea that you can question first [sic] assumptions, and but I can go days at a time and forget that, and you have to kind of like circle back to that observation [...] it's hard to keep it front and center [...]".

[1] https://www.youtube.com/watch?v=Nb2tebYAaOA&t=4962s

linuxhansl 2021-08-16 17:52:12 +0000 UTC [ - ]

I read somewhere (I've truly forgotten where, sorry) that we humans are mostly just lazy: we avoid thinking as best we can and instead gravitate towards whatever (people, circles, or news) confirms what we already believe, so that we do not have to think.

"Confirmation Bias" does not quite capture it. Really just laziness. :)

The other part, being decisive... I can definitely relate to that. I've noticed that I often have a hard time making decisions, and realized it's because I tend to look at the world in terms of what I can possibly lose instead of looking at something new in terms of excitement.

SavantIdiot 2021-08-16 17:56:39 +0000 UTC [ - ]

Critical thought is actually really, really hard. Pre-internet the problem was too little signal; post-internet the problem is too much noise.

I would argue we've largely been anesthetized due to successful Gish Galloping. I have great admiration for people who put the effort in to sort out the issues, academics and journalists. But just now everyone eye-rolled when I said those two terms.

johnwheeler 2021-08-16 14:37:00 +0000 UTC [ - ]

Perfect rationality is impossible, because to make correct decisions you need all the facts, and a rational actor would do nothing at all given that all the facts can't be had. The best you can do is be an odds maker; therefore, an odds maker would spend their time looking for as many of the lowest-effort ventures with the highest chances of success and the biggest payoffs relative to effort and chance. In their free time (time when no reasonable opportunities were present), they would learn as much as possible to increase their decision-making power and thus their odds of success.

raldi 2021-08-16 15:10:30 +0000 UTC [ - ]

Your opening sentence makes no sense. If you and I don’t know the results of a coin toss, and I offer you a two-for-one wager on the result, the rational choice for you would be to take that bet, even without knowing the most relevant fact.
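
(A minimal sketch of the expected-value arithmetic behind this wager; the fair coin and two-for-one payout are as stated above, the one-unit stake is an added assumption:)

    # Two-for-one wager on a fair coin: stake 1 unit, win 2 units on a correct call.
    p_win, payout, stake = 0.5, 2.0, 1.0
    expected_value = p_win * payout - (1 - p_win) * stake
    print(expected_value)  # 0.5 > 0: taking the bet is rational despite the unknown outcome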

johnwheeler 2021-08-16 15:51:33 +0000 UTC [ - ]

ah, you should have read the second sentence

raldi 2021-08-16 17:37:37 +0000 UTC [ - ]

I don't see how the second sentence makes sense of the first. A perfectly rational actor would not do nothing; they would carry out the most reasonable action given the information available.

johnwheeler 2021-08-16 18:12:13 +0000 UTC [ - ]

But then you're not being perfectly rational. You're calculating the odds, which is what my second sentence says.

Being perfectly rational is impossible.

See: perfect rationality vs bounded rationality

JohnPrine 2021-08-16 21:42:01 +0000 UTC [ - ]

I think you may have a confused definition of what it means to be a rational actor. Being rational means making the optimal decision given the information available.

raldi 2021-08-16 21:20:59 +0000 UTC [ - ]

Maybe I don't understand what you mean by "perfectly rational". I'm using the definition from the article: A perfectly calibrated individual will be right X% of the time about statements in which they are X% confident.

Are you using a different definition?
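
(As an aside, calibration in the article's sense can be checked mechanically. A minimal sketch, where the (confidence, was_correct) records are hypothetical data, not anything from the article:)

    from collections import defaultdict

    # Hypothetical records of (stated confidence, whether the claim turned out true).
    predictions = [(0.9, True), (0.9, True), (0.9, False),
                   (0.6, True), (0.6, False)]

    buckets = defaultdict(list)
    for confidence, correct in predictions:
        buckets[confidence].append(correct)

    # Perfect calibration: the hit rate in each bucket equals the stated confidence.
    for confidence, outcomes in sorted(buckets.items()):
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"claimed {confidence:.0%}, actually right {hit_rate:.0%}")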

johnwheeler 2021-08-16 22:19:21 +0000 UTC [ - ]

Perfectly rational means just what it sounds like: making the correct decision because you have all available data.

Being perfectly rational is impossible.

raldi 2021-08-16 22:22:59 +0000 UTC [ - ]

Would a perfectly rational person duck if there were a 50% chance they were about to be punched? Or would they do nothing?

johnwheeler 2021-08-17 00:03:41 +0000 UTC [ - ]

A perfectly rational person would have all available information. Since ducking wastes energy vs. doing nothing, if they were going to get hit they would duck, and if not, they would do nothing.

But again, being perfectly rational is impossible.

I don't know how to make it any clearer.
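
(For what it's worth, the ducking question has a standard bounded-rationality answer: compare expected costs under the 50% estimate rather than waiting for perfect information. A minimal sketch; the cost numbers are purely illustrative assumptions:)

    # Choose the action with the lower expected cost, given p(punch) = 0.5.
    p_punch = 0.5
    cost_duck = 1.0    # energy spent ducking (illustrative)
    cost_hit = 100.0   # cost of taking the punch (illustrative)

    expected_cost = {
        "duck": cost_duck,                 # ducking always costs the effort
        "do nothing": p_punch * cost_hit,  # doing nothing risks the hit
    }
    print(min(expected_cost, key=expected_cost.get))  # -> duck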

karmakaze 2021-08-16 15:35:49 +0000 UTC [ - ]

Recognizing "motivated reasoning" can replace a whole lot of spotting individual logical fallacies in your own or others' thought processes.

Here's a 20-minute audio interview[0] with the author of "The Scout Mindset: Why Some People See Things Clearly and Others Don’t".

It summarizes very well the way I like to gather information in an area so that I can form an opinion and a direction of movement on a problem.

[0] https://www.cbc.ca/player/play/1881404483658

paulpauper 2021-08-17 03:15:26 +0000 UTC [ - ]

Greg has much more money, but one of the perks of being a professional writer or columnist is that people care about your views, and they potentially have some impact on society. The classics are immortalized, but Ray Dalio wrote a book about his 'values' a year ago; it made a splash for a week and was soon forgotten, condemned to an early pulping.

paganel 2021-08-16 20:45:21 +0000 UTC [ - ]

The rational powers that be were saying out loud 3 days ago that in an optimistic scenario the Afghan government would hang on for another 90 days, and in a pessimistic scenario only for 30 days. As we all know, it collapsed completely in just 2-3 days.

Early on during the pandemic (the first half of February 2020), the people writing on Twitter about covid in China were labeled conspiracy nuts, with some of them outright having their accounts suspended by Twitter. Covid/coronavirus was (I think purposefully) kept out of the trending charts on Twitter; the Oscars were seen as much more important.

And these are only two recent examples that came to mind where the "rational" parts of our society (the experts and the media) failed completely; as such, it's only rational not to trust these pseudo-rational entities anymore. I think in a way the post-modernists were right: (almost) everything is negotiable or a social construct; there's no true or false, apart from death, I would say.

elihu 2021-08-17 02:44:43 +0000 UTC [ - ]

I think the main problem with trying to be 100% rational is that even if you could predict the outcome of your decisions perfectly, the question of which outcome is "best" is inherently not a rational one. First you need to have some goal in mind, then (maybe) you can apply logic and reason to achieve that goal.

I think a lot of political disagreements aren't really about logical arguments at all, but rather differences in opinion over relative priority of some ideals that are all important. There isn't always an objective right answer.

natmaka 2021-08-17 12:54:30 +0000 UTC [ - ]

“Hand over the thinking keys when someone else is better informed or better trained” itself implies rationality, as without it, determining who is better informed or trained is difficult. Therefore we are back to square one.

This approach also favors the most informed and trained (the "best" being preferable to the "better"), which poses an even more difficult challenge.

Indirect democracy replaces rationality with ill-formed trust.

raman162 2021-08-16 19:31:28 +0000 UTC [ - ]

I particularly enjoyed the concepts presented in this article, from recognizing how confident you are in a certain idea to understanding the steps it takes for someone to be rational.

Self-awareness is something I've only started learning post-college, and something I wish I'd been taught more growing up. As a child I was always told that I should do x and y because that's what you're supposed to do! Only now, as an adult, am I taking the time to slowly ponder and analyze myself and be more strategic about my future goals.

Side note: really enjoyed the audio version of this long-form article.

heisenzombie 2021-08-16 23:20:06 +0000 UTC [ - ]

This is a bit similar to the book-in-progress “In the Cells of the Eggplant”.

https://metarationality.com/

I highly recommend reading it. I found it extremely clarifying as a working scientist/engineer and someone who has been persistently nagged by the deification of rationality.

The OP even uses the term “metarational” (though to mean something different), which made me surprised that “The Eggplant” wasn't mentioned.

adrhead 2021-08-16 14:12:03 +0000 UTC [ - ]

It is quite hard to be rational, as humans are emotional beings. Sometimes emotions will override rationality in decision making. This is why people struggle to make wise decisions.

TuringTest 2021-08-16 14:31:44 +0000 UTC [ - ]

Conversely, it is impossible to be rational without emotions.

Reason needs axioms (beliefs) to build a rational discourse, and without emotions, it is impossible to choose a limited set of starting axioms to begin making logical inferences from.

I agree with the person above who said being rational is about making post-hoc rationalizations. We know from cognitive science that a majority of explanations are built that way: after observing facts, we intuitively develop a story that is consistent with our expectations about those facts, as well as with our preconceived beliefs. "Being rational" in this context would be limited to reviewing our beliefs when these ad-hoc rationalizations become inconsistent with one another.

eevilspock 2021-08-16 18:30:43 +0000 UTC [ - ]

Rational thought is important, but not sufficient. For example, moral conscience is a far more important trait to me. Some people will argue that pure reason is enough to establish a sound moral system; I don't agree but that is a debate for another time. Looking at the end result, Greg is not someone I admire or would want to be:

> Greg...became a director at a hedge fund. His net worth is now several thousand times my own.

swayvil 2021-08-16 14:15:37 +0000 UTC [ - ]

Rationality is a game of checkers played outside in a meadow.

So many distractions. Wind, rain, bees, rampant squirrels.

And what makes that game more interesting than a squirrel anyway?

inkblotuniverse 2021-08-17 00:57:12 +0000 UTC [ - ]

The money.

(And you're playing the game against the squirrels anyway.)

mrxd 2021-08-16 21:00:08 +0000 UTC [ - ]

It's actually not hard.

Rationality is a form of communication. Its purpose is to persuade other people and coordinate group activity, e.g. hunters deciding where they should hunt and making arguments about where the prey might be. In that setting, rationality works perfectly well, because humans are quite good at detecting bad reasoning when they see it in others.

Because of the assumptions of psychological individualism, rationality is misunderstood as a type of cognition that guides an individual's actions. To a certain extent, this is a valid approach because incentives within organizations encourage people to act this way. We reward individual accomplishments more than collaboration.

But many cognitive biases disappear when you aren't working under the assumptions of psychological individualism. For example, in the artificial limitations of a lab, you can show that people are unduly influenced by irrelevant factors when making purchase decisions. But in reality, when a salesperson is influencing someone to spend too much on a car, people say things like "Let me talk it over with my wife."

We instinctively seek out an environment of social communication and collaboration where rationality can operate. Much of the advice about how to be individually rational comes down to simulating those conditions within your own mind, like scrutinizing your own thinking as if it was an argument being made by another person. That can work, but the vast majority of people adopt a more straightforward approach, which is to simply use rationality as it was designed to be used.

Rationality is hard, but only for the small number of "smart people" whose individualistic culture prevents them from using it in the optimal way.

UnFleshedOne 2021-08-16 21:22:57 +0000 UTC [ - ]

I think you are confusing the original purpose of our thinking apparatus (social proof first, discovering true facts a distant second, unless the facts can eat you quickly) with rationality as a system for discovering facts as true as possible within a given energy budget, running on that faulty hardware.

tomgp 2021-08-16 17:01:06 +0000 UTC [ - ]

“Know things; want things; use what you know to get what you want”

I think the hardest bit of this is, in some ways, the middle: wanting things. How do we know we really want what we want, and how do we know what will make us happy? That’s the bit I struggle with, anyway.

andi999 2021-08-16 14:46:59 +0000 UTC [ - ]

I believe it is also an evolutionary advantage. Let's say that, with all the information available, the rational best decision looks like doing something. Then, unexpectedly, that thing kills you. A species only survives if not everybody did it.

_moof 2021-08-16 18:05:34 +0000 UTC [ - ]

It's impossible to be perfectly rational without perfect and complete information. Crucially, for questions that affect us personally, this includes perfect insight. I've yet to meet anyone who qualifies.

flixic 2021-08-16 18:20:56 +0000 UTC [ - ]

That's why I appreciate that a rationality website is called LessWrong. Of course you can't be perfectly rational, but you can be less wrong.

_moof 2021-08-16 19:05:46 +0000 UTC [ - ]

Thanks for the reply. I think what I was trying to say by implication is that I think folks fall so far short of the ideal that it's actually a regression. Related to this is what I see as an implicit belief that "rationality" means completely dismissing the lived experience of actual humans, i.e. lots of people are suffering but hey, at least we applied principles in a soulless and mathematical way, because that's what's important.

UnFleshedOne 2021-08-16 21:42:15 +0000 UTC [ - ]

Soulless, mathematically applied principles are a good way to actually reduce the number of people suffering, assuming that was your goal from the start. If you only look at "lived experience" and then make a random change you feel might help, but don't actually check whether it does, you can make things worse (see the outcomes of all the aid to Africa, for example).

rafaelero 2021-08-16 19:28:47 +0000 UTC [ - ]

What a ridiculous take. Rationality is not the same as omniscience. Being rational is optimizing your predictions using the best available evidence. No one is claiming to know the answer to some future event; they're trying to find the best way to aggregate the current information.

jhgb 2021-08-16 19:23:12 +0000 UTC [ - ]

Why? Are you equating rationality with omniscience? Then why have the separate word "rationality" in the first place?

coldtea 2021-08-16 18:15:47 +0000 UTC [ - ]

For starters, who said it's better to be rational?

Not being rational - and instead relying on gut - has an evolutionary advantage: it cuts through the noise, which in the past could be a life-or-death matter.

dnissley 2021-08-16 18:20:09 +0000 UTC [ - ]

Intuition could be said to be the opposite of reason, but not rationality. There are whole parts of the rationalist diaspora that emphasize how important it is to be in touch with one's intuitions / feelings and to integrate them successfully into one's decision making process with an aim towards being more rational.

morpheos137 2021-08-16 16:31:40 +0000 UTC [ - ]

Because people have feelings. Because rationality is poorly defined. For example, sometimes it may be rational to agree with something that is factually wrong if it is popular or serves one's self-interest.

m3kw9 2021-08-16 17:55:26 +0000 UTC [ - ]

Because a certain degree of emotion has a rational basis. Asking humans to know which parts of their emotions are rational turns into a multidimensional problem they can't solve in the heat of the moment.

paulpauper 2021-08-17 02:56:30 +0000 UTC [ - ]

How much do you think Greg is worth? Several thousand times is a lot, but if the author has debt then maybe not.

okamiueru 2021-08-16 15:52:59 +0000 UTC [ - ]

It's going against entropy. There are few ways to be rational, and infinitely many ways to be irrational.

newbamboo 2021-08-16 16:51:00 +0000 UTC [ - ]

My answer, in Jeopardy! format: What is psychology? Every mind is different; a feature, not a bug.

nathias 2021-08-16 14:02:13 +0000 UTC [ - ]

Ah yes, the modern rationalists. Few things are as cringeworthy as a modern adaptation of a classical intellectual current. Like reddit atheism, it does a great disservice to the concept from which it steals its name. They have no education beyond their narrow limits, no interest in what lies beyond their time or their common sense.

MrBuddyCasino 2021-08-16 14:18:01 +0000 UTC [ - ]

Sure, naive rationalism is intellectually dead, but post-rationalism deserves a better endorsement than this, hence the downvotes. I suspect most people aren't yet familiar with the discourse. If I were more qualified I'd write it myself, but alas.

nathias 2021-08-17 05:55:51 +0000 UTC [ - ]

Yeah, I consider myself a rationalist because I share an epistemological framework similar to classical rationalism's. I was very disappointed to find that a 'rationalist' movement exists and takes nothing but the name from rationalism.

jhgb 2021-08-16 19:09:56 +0000 UTC [ - ]

What is this "naive rationalism" and "post-rationalism"? And how is rationalism dead in the first place? Did science and logic suddenly stop working without us noticing?

jgeada 2021-08-16 14:07:01 +0000 UTC [ - ]

And they are ever so full of themselves. They're a perfect embodiment of Dunning-Kruger.

TheGigaChad 2021-08-16 14:32:03 +0000 UTC [ - ]

Idiot dumbass, get cancer and die squealing like a lab rat.

damoe 2021-08-16 22:44:28 +0000 UTC [ - ]

Because there is a good chance reality is not rational.

HPsquared 2021-08-16 20:07:24 +0000 UTC [ - ]

It's irrational to pretend as if we are rational.

esarbe 2021-08-17 18:57:26 +0000 UTC [ - ]

It's funny how many people take offense. We really aren't rational creatures, are we?

myfavoritedog 2021-08-16 20:47:26 +0000 UTC [ - ]

Human irrationality will only get worse on average. There's very little evolutionary disadvantage to being irrational in our modern society.

Not syncing up with reality would likely have cost you your place in the gene pool back in the day.

joelbondurant 2021-08-16 16:16:02 +0000 UTC [ - ]

USA members need the Fact-Check algorithm integrated into permanent surgically installed face masks.

marsven_422 2021-08-16 18:09:08 +0000 UTC [ - ]

We are human, glorious humans.