Saturday, January 14, 2017

The Religious Cognition Of Atheists

Religions exist across cultures, and one explanation for their existence is that humans have cognitive architectures tending to confabulate and preserve the stuff of religions. I think I first heard it from Sapolsky that the purifying rituals of religions might have been historically innovated by people with minds tending toward OCD, and that ideas of unseen spirits might have come from people tending toward schizophrenia. Given that some abnormal behaviors and beliefs are historically innovated, to understand the existence of religions we then need to look to more neurotypical incentives for why religious content is widely culturally adopted. Maybe individuals have needs for tribal culture which religious content provides, or maybe some religion-stuff is the right amount of weird and emotionally compelling to give people a sense of insightfulness or personal meaning, or maybe some religion-stuff is memetically virulent (which is the mind-projected way of saying that humans copy and repeat some things even without seeing the value of doing so), or maybe part of the reason for individual adoption of religion is gullibility or acceptance of social authority, and probably other stuff.

So maybe the genesis of a religion proceeds like this: innovation of compelling nonsense by somewhat abnormal persons, followed by pragmatic adoption by neurotypical people who are in need of some emotionally compelling weirdness.

This is a nice narrative, but I think it's too easy on the normies. Watching people talk about their religion (or politics) is almost as terrifying to me as watching people confabulate under dementia or traumatic brain injury. Neurotypical religious thought might be normal, but it is not sound. The fuel which feeds the fire of religion isn't "pragmatic interest in communal narrative and ritual" or anything nearly so rational; it's the fact that people are cognitively misshapen and dysfunctional, universally, pathetically.

If the scary, sad, gross cognitive failures of neurotypical religious thought are human universals, as religion is a human universal, then we atheists should not be so proud as to think ourselves spared. We have the same disease, even if some of our symptoms have cleared.

This in mind, I wanted to identify some of the failures of human cognition which are associated with religions cross-culturally. Maybe I should have just started with a list of biases from Kahneman and Tversky, but that's not what I did.

One failing of religious thought is compartmentalization, which is mostly to say that people practicing a religion are unable to scrutinize the beliefs of the religion in a way that can successfully produce doubt. Questions of internal consistency can be raised, but somehow these questions don't seem to bear much on whether the person will continue to retain the conceptual edifice as a whole. Further, the edifice can be maintained in the mind with almost no explanatory or predictive connection to reality. For many people, "the suffering observed in the world" doesn't seem to register as a fact which might challenge a belief in a beneficent god. The philosophical discourses on the problem of evil, plentiful as they may be, are not enough to discount this huge daily disconnect between the beliefs and the sense data available to neurotypical religious people. I propose that any topic which you think would be controversial to bring up in polite multicultural company, such as cryonics or antinatalism or artificial singletons in my own case, is worth special scrutiny, of the sort that could successfully produce doubt, should the topic merit doubt. If this doesn't sound like an immense epistemic burden, think about the level of reflective scrutiny that it would take for a religious person to abandon their faith in the absence of new social incentives.

Insofar as the attribution of sacredness to a thing is associated with an unwillingness to quantify the value of the thing or to contemplate trades of the thing against mundane things, all sacredness attribution is a kind of compartmentalization and deserves special scrutiny. Maybe you know the dollar value of a human life, but there's probably something out there which you value emotively but still treat in an economically incoherent way. Spooky stuff, right? Someone get Night Vale on the line.

I want to write up the other religion-associated cognitive failings that I identified tonight in long form, but it's getting late and I have a thing to do tomorrow, so in the interest of posting anything at all, which has not been my strength, I'm going to paste the remainder of my tweets on the subject with very little editing.

Another cognitive failing of religious thought is overestimating the causal power of categorization judgments? Or, no, I want to identify something more general than that. What's going on in humans such that they think prayer is a thing worth doing? We overestimate the power of linguistic utterances? That covers more behavior but it's less cognitively abstract. Maybe the core failure is that humans expect reality to condition its behavior socially on our social displays. This is kind of like attributing animacy widely to reality.

For the next failure of religious cognition that I want to talk about, consider divination, numerology, omens, and delusions of reference. Part of what's going on with these is an expectation of a natural cosmic order where the world is self-similar or self-describing, maybe? Divination and numerology are schools of thought for deriving predictions from principles that (the practitioners believe) operate throughout the universe structurally (like a folk notion of physics, but phrased directly in terms of the concepts and objects of daily experience, instead of being phrased in terms of particle-field-shaped math). So that's interesting. 

"Reality is self-similar" is, I think, a good guess; it feels close to one of the mistaken inductive principles which is common to lots of religious cognition cross-culturally, but it's also not quite the principle that I want to identify. Maybe the problem with it is that divination isn't just like reading the structure of the heavens in a spilled drop of cola, it's not just seeing the self-similar structure, but it also has a social component of requesting information before possibly receiving it. Like religious people think that the cosmos knows things and may choose to reveal info them if it thinks that they're ready? And the rituals and superstitions of divination are like a cargo cult for getting the attention of that Knowing Natural Order? Does that make sense? When people look for spiritually guiding signs in their daily lives, those signs are like communicative events to them. I think that makes sense. But to the degree that I see personally significant messages in the mundane events of my life, those signs don't really feel communicated to me, so maybe "there is a knowledgeable natural order which has a plan for you" isn't one of the core universal religious mental failings that I need to identify and be vigilant against in myself. I'll keep looking for other magical thinking that might be cognitively upstream of delusions of reference and which plausibly applies to me and other neurotypical people. And the same for prayer, because "expecting reality to respond socially to your social displays" isn't quite as persuasive an account of prayer as I'd like. That was the previous failure that I identified, remember? After compartmentalization and sacredness.

Finally, zodiacs seem like evidence of another culturally pervasive religion-associated cognitive failure. Maybe the genesis of zodiacs happens like this: people have a general over-reliance on explanations of behavior in terms of stable personality traits, and some people prone to apophenia notice false patterns in who gets stuck with which stable traits (traits which the people may not actually have stably), and then people generally suck at knowing when to abandon named/reified concepts, and so the confabulated patterns explaining how traits are allocated stay in discussion and eventually become common knowledge.

Nothing in this explanation of zodiacs really rings of magical cognition. Maybe that's because zodiacs aren't all that religion-bound, or maybe it's because I haven't done a good job of finding a supernatural-inductive-bias-like cognitive failing which is upstream of people caring about zodiacs and which is also plausibly at work in my own mind to my detriment. Something something swearing, something something did you know that I hate the Myers–Briggs Type Indicator?

That's the end for tonight, folks. Be good to yourselves.

Edit: I just heard about thought-fusion beliefs as studied in abnormal psychology. One example is a person thinking that {an event is more likely to happen because they've been imagining the event happening}. Another class of thought-fusion beliefs arises when a person feels that it's roughly as blameworthy to perform a socially prohibited action as it is to imagine doing so. Are those two examples of religion-like cognition? I think they might be. I think thought fusion might be a kind of widely operating inductive bias which partly explains the cross-cultural genesis and retention of religious content and which might also afflict the minds of atheists. Rad. And maybe the first of those two thought-fusion beliefs, where motivated focus on an event alters the expectation that it will happen, maybe that's also part of the explanation of prayer. Prayer is a way to convince yourself of a desirable fiction through motivated focus. I like that explanation a lot. It's definitely something I can be on guard against in myself.

Monday, November 14, 2016

Questions, declarations, demands

Questions are approximately statements of ignorance, but they're better at getting people to respond. Why? Consider:

S1. Declaration of ignorance
Alani: Oh no, here comes Hukov. He's such a prick.
Hukov: Hi, Alani. I don't know how the world would look if participatory consent were a socially overvalued moral construct. Idk if you know either.
Alani: Hukov, I don't care about you or what you know. Go away.

S2. Inquiry
Alani: Oh no, here comes Hukov. He's such a prick.
Hukov: Hi, Alani. What if Jesus were a raisin?
Alani: I... hm... what if Jesus were a raisin? Let's talk this out.
...

So maybe one reason that questions are more effective than declarations of ignorance in getting an informative response is that questions, by skipping the mention of the speaker and their mental state, give the audience fewer opportunities to decide they're not interested.

Another reason is that questions are kind of like demands for answers, and people have a quick mental reaction that promotes compliance with demands.

If a demand is roughly a statement of the speaker's desire for the audience to perform a behavior, then maybe a question also encodes a little bit of a preference for the audience to respond, and a little bit of an assumption by the speaker that they have social authority to voice their preference with a reasonable expectation that the audience will comply.

Neat.

Friday, October 14, 2016

Habits, Perceived Affordances, and Interrupted Causal Contingencies

D asked me to fix the toilet. The chain that allows the handle outside the tank to raise the flapper within the tank (the toilet's epiglottis) had become detached, and she couldn't figure out how to reattach it. I sat down to examine the internals of the toilet tank, closed the valve on the water supply line, and placed my hands into the tank water.

"Eeh! Gosh, that's cold!" I exclaimed.

"Well," D said, "you already thought to close the valve on the water supply line. Why don't you flush the remaining water out of the toilet tank?"

Right! Yes! That's what I... almost did? Why didn't I do that? I depressed the lever handle, ...

And the toilet didn't flush because the chain that allows the lever handle outside the tank to raise the flapper had become detached. That's the thing that I was trying to fix, remember? Pretty funny.

Or, no, probably not funny, but maybe it's the sort of thing I could write a post about? Yes! Done.

Sunday, October 2, 2016

Of Confidence, Understanding, and Conclusions

Confusion and understanding are important concepts in rationality and metaethics. The concepts of conclusions and resolved beliefs are similarly important. These concepts appear in the written works of our culture much as they appear in our daily language: without precise definition or explicit connection to the mathematics of probability theory. One may wonder, what is the connection, if any, between epistemic confidence and understanding, or between confidence and resolved belief? Below I will outline some guesses.

What if like:
You're chatting with a peer. They ask a question and you pause for just a beat before replying. I propose that the condition of your mind during that pause is nearly the same as the mental condition of confusion. The difference is that the condition of a mental pause is almost imperceptibly brief, while confusion persists for noticeable and even uncomfortable spans of time. Confusion is a condition where the mind is generating and discarding ideas, having not yet settled on an idea which is good enough to justify ending the search. Confusion doesn't have to be preceded by a surprise, and not all uncertainty is accompanied by confusion. Where in a Bayesian mind do we place confusion? We place it not in uncertainty, and not in surprise, but in the failures of attempted retrieval and production of ideas, theories, strategies. On the time scale of milliseconds, these failures are mental pauses. On the time scale of moments and minutes and more, these failures are called confusion.

So much as the subjective feeling of confusion comes from noticing one's own failure at cognitive production, so contrariwise the feeling of understanding comes from noticing a productive fluency; noticing that one is quickly able to generate ideas, hypotheses, strategies in a domain, and to feel confident that the produced cognitive contents are valuable, and that the value justifies terminating a mental procedure of search or construction.

From the idea of terminating a mental train of thought, we naturally come to the concept of conclusions. I once wrote, "Our beliefs gain the status of conclusions when further contemplation of them is no longer valuable." There are many reasons why you might disvalue further contemplation of an idea! You could have other things which you prioritize thinking about! You could expect that there are no further reasons which could change your belief! You could notice a limit on your reasoning, like an inability to quickly find reasons which would change your belief, while still accepting that such reasons might exist! There could be unpleasant environmental consequences of continuing to contemplate a topic, like if someone didn't want you to think about the thing and they could tell when you were doing so!

So conclusions are merely those beliefs that you maintain when a train of thought terminates, and which you're likely to recall and make future use of. Resolved beliefs are different! I know that "resolved belief" isn't a standard term in the rhetoric of rationality or metaethics, but maybe it should be. Resolved belief is the great promise of maturity! A resolved belief is a conclusion where you have a reasonable expectation that no further reasons exist that would change your mind. The conservation of expected evidence means that something has gone wrong in your mind if you expect that your belief will change in some specific direction after seeing some unknown evidence, but it is still entirely rational to expect that your beliefs will change in some unknown direction after seeing unknown evidence. If you expect that your beliefs will change a lot (in some unknown direction!), that just means you're about to see strong evidence. One way to phrase this situation is to say that the proposition whose truth value you are contemplating is about to be strongly tested.
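Spelled out in symbols, the conservation law is just the law of total probability applied to credences:

```latex
% Conservation of expected evidence: the prior credence in a hypothesis H
% equals the expectation of the posterior credence over the possible
% evidence outcomes e.
\[
  P(H) \;=\; \sum_{e} P(E = e)\, P(H \mid E = e) \;=\; \mathbb{E}_{E}\!\left[\, P(H \mid E) \,\right]
\]
% A predictable directional shift, e.g. E[P(H|E)] > P(H), is therefore
% incoherent, while a large expected magnitude of shift,
% E[|P(H|E) - P(H)|], is fine: it just means the coming test is strong.
```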

I'm not totally sure that updating on lots of bits of evidence is the same as reaching a resolved belief. It seems like a different thing to say, "this issue is probably settled hereafter," than to say, "our inquiry changed our beliefs drastically." If the latter is to say that an idea has been tested, then the former is to say that the idea perhaps can be tested no further: that there is no test the observation of whose result would cause us to update our beliefs. The relationship between these two sounds like something that mathematicians have probably already figured out, either in analyzing the convergence of beliefs in learning algorithms, or perhaps in logic or metamathematics, where settled conclusions, once tested and now untestable, are very salient. Unfortunately, I'm too stupid for real math. Some quantities that seem relevant when judging whether you've dealt with most of the important possibilities are 1) the conceptual thoroughness of your investigations, per genus et differentiam, 2) the length of time that you've spent feeling confident about the topic and similar topics, and maybe 3) the track record of other people who thought they had investigated thoroughly or who thought that the issues were settled. For some issues, maybe you can brute-force an answer by looking at how your beliefs change in subjectively-possible continuations of your universe? That's probably not a topic about which I can think productively.

On the topic of other maybe-relevant math that I'm too stupid to understand, I'm reminded of the idea of the Markov blanket of a variable in a probabilistic causal model. Within the pretense of validity of a given causal model, if you condition your belief about a variable X on the states of a certain set of nearby variables, the Markov blanket of X, then your belief about X shouldn't change after further conditioning on the states of any variables outside of the blanket. Unlike in the case of beliefs that are settled by logical proof, I don't think the untestable belief in X, after having conditioned on its Markov blanket, will necessarily be confident. Possibly that's not the case with proof either, once you start talking about undecidability and self-reference. I guess the best question my curiosity can pose right now about resolvedness of belief is whether the convergence of beliefs in probabilistic logical induction is related to an independence result like the one which obtains when you condition on the Markov blanket of a causal variable.
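To make the blanket property concrete, here's a minimal Monte Carlo sketch in Python. The toy chain C → A → X → B and all of its probabilities are invented for illustration; the point is only that, in this chain, the Markov blanket of X is {A, B}, so conditioning on C in addition to {A, B} shouldn't move the estimate of X.

```python
import random

# Toy causal chain C -> A -> X -> B (all probabilities invented for
# illustration). The Markov blanket of X here is {A, B}: its parent
# and its child. Conditioning on C beyond that should change nothing.

def sample():
    c = random.random() < 0.5
    a = random.random() < (0.8 if c else 0.2)  # A depends on C
    x = random.random() < (0.9 if a else 0.1)  # X depends on A
    b = random.random() < (0.7 if x else 0.3)  # B depends on X
    return c, a, x, b

samples = [sample() for _ in range(500_000)]

def p_x_given(cond):
    """Estimate P(X=1 | cond) by counting matching samples."""
    matches = [s for s in samples if cond(s)]
    return sum(s[2] for s in matches) / len(matches)

# These two estimates should agree up to sampling noise, because
# C lies outside the Markov blanket of X.
print(p_x_given(lambda s: s[1] and s[3]))               # P(X | A=1, B=1)
print(p_x_given(lambda s: s[1] and s[3] and not s[0]))  # P(X | A=1, B=1, C=0)
```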

Learning rate adaptation is another math thing in the semantic neighborhood of resolved belief, but it's not a thing Bayesians often talk about, from what I've seen.

But so then:
The feelings of confusion and understanding are both related to expectations about the conclusions of our contemplation, or to a specific kind of conclusion that I've called a resolved belief. Resolved beliefs are accompanied by a feeling of understanding, and more strongly resolved beliefs are probably more confident. Still, one's confidence in a single belief is not the thing that justifies a judgement that the belief won't change in the future. I don't yet know what theoretically justifies a judgement that an issue is settled, but maybe it has something to do with belief convergence in the mathematics of learning. And I guess I have a hope that it's possible to translate philosophical questions into non-subjective terms where the math of learning is clearly relevant, however great that feat might be, so that philosophical questions can potentially be resolved just as much as can computational ones or physical empirical ones.

Addendum: Hrm, the papers I've seen so far about posterior convergence make it seem like statisticians care about such things for pretty different reasons than I do. I'm going to have to read for a long time before I find something insightful. The usual situation presented in the literature on convergence of parameter estimates is like, "You get a sequence of percepts of the thing, and the noise has a certain shape. Depending on the shape, we can show how fast your guesses become better." Adapting that to be relevant to confusion, I'd want to frame attempts at cognitive production as noisy percepts of something. That's not the worst idea, but it doesn't seem to capture most of my idea of the source of failures of cognitive production. Maybe the literature on decreasing KL divergence in model selection will help to clear things up.
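For what it's worth, here is a toy version of that usual situation in Python: i.i.d. Bernoulli percepts of a fixed quantity (a coin's bias) with a conjugate Beta prior; the bias and the checkpoints are arbitrary assumptions of mine. The posterior mean drifts toward the truth and the posterior variance shrinks roughly like 1/n, which is the flavor of convergence those papers are about, and exactly the part that doesn't obviously model failures of cognitive production.

```python
import random

# i.i.d. Bernoulli percepts of a fixed quantity (a coin's bias),
# with a conjugate Beta(1, 1) prior. The true bias and checkpoint
# counts are arbitrary choices for illustration.
random.seed(0)
true_bias = 0.7
alpha, beta = 1.0, 1.0  # uniform prior over the bias

for n in range(1, 10_001):
    if random.random() < true_bias:
        alpha += 1
    else:
        beta += 1
    if n in (10, 100, 1_000, 10_000):
        mean = alpha / (alpha + beta)
        var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
        print(f"n={n:>6}  posterior mean={mean:.3f}  posterior variance={var:.2e}")
```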

Thursday, April 21, 2016

My Lives and Deaths in a Big World

Among the describable moments in which I remember this moment as the last one that I experienced, maybe a lot of them involve dying as my thermal-fluctuation brain, filled with hallucination, returns to non-existence.

That situation isn't one I normally consider when going about my life, and so the question arises of whether my behavior is adapted to it.

Maybe it makes reflective sense, and not just intuitive sense, to not care much about the observer moments in the lives of Boltzmann brains, because one's decisions as instantiated in Boltzmann brains won't have much of any effect on those brains' surrounding environments, given that the rest of one's supposed body probably won't exist there to be actuated, and even if one's bodies do exist, one would still be effectively swatting in the dark at imagined flies.

Taking this idea a little further, we find the suggestion that we should perhaps generally care less about incarnations of our bodies which are more disconnected from their surroundings, those being less perceptive or less powerful to effect changes. Consider, however, that this is surely not the principle which informs our intuitive expectations of worldly permanence and our intuitive lack of concern for observer moments that arise in thermal fluctuations: if it were, we'd also disbelieve that we could be blinded or demented or paralyzed.

In locales where brains persist long enough to effect biological reproduction, it's little surprise that brains evolve with expectations of sensory persistence. If we endorse the value of our intuitions of sensory persistence, then perhaps the process of evolution which endowed us with those intuitions can also provide us with principles for forming reasoned beliefs regarding how to act in a universe or multiverse large enough to contain many incarnations of our minds in different locales.

"Act as though only those future moments are real in which you might reproduce" doesn't sound like wisdom. What other possible lessons could we abstract from evolution? "Act to achieve good states in worlds where you can do so"? That might  be a principle which prescribed avoiding paralysis in EEA, while excluding thermal weirdness. Although it sounds much less evolution-y than the first one.

Complementary to the topic of intuitive disbelief in fluctuation-death is our intuitive actual-belief in total death - i.e., in the complete cessation of our subjective experience, despite arguments suggesting the subjective immortality of minds in a big world. And yet the explicit reasons listed above for discounting the decision-theoretic value of Boltzmann moments (of seemingly extraordinary death) do not seem to provide complementary reasons to discount extraordinary survival scenarios.

And so once again I am left wondering whether my evolved intuitions are ill suited to thriving in a big world.

We've all heard the quantum immortality thought experiment, and we've mostly all found some reason to not commit quantum suicide. Beyond this, I know almost no sources of relevant advice.

Of course Robin Hanson wrote How To Live In A Simulation, and Eliezer Yudkowsky once made the intriguing suggestion that one should open one's eyes to decrease world simulation measure when bad things happen (and to close one's eyes to reduce simulation load when good things happen). This advice is in the right weirdness neighborhood, but not obviously relevant to taking Boltzmann brains or subjective immortality seriously.

Sunday, April 10, 2016

Cultural Relays

This post is sort of a response to “Don’t Be a Relay in the Network”, but I've tried to make it self-contained.

Some websites have strong information currents; there are lots of posts, and new ones are constantly pushing older ones out from their place of prioritized visibility on the top of the stack, down toward the archives of darkness. The archives are so dark that the stack is effectively a pipe, with old posts being forgotten entirely, at least from the memories of site users. 

New, highly visible posts are the best place for site users to write comments if they're looking to socialize, and so the speed at which new posts get buried under even newer posts is a strong determiner of how long people will stay in a comment thread before leaving for newer content and larger crowds. When the incentives for conversation are particularly bad on a streaming media channel, people move away from commenting at all and toward a media consumption strategy that consists only of snap-judgements: liking, disliking, sharing, following, and blocking. In short, click and click and do not stop.

Like watching televised shows, this clicky strategy allows users to stay abreast of topical content, and like watching televised shows, this clicky strategy cannot support a thriving culture where people interact with each other or make things of value. Even in less extreme cases which do support socialization, a stronger information current makes for a culture with a shorter memory, where ideas and references have a briefer shelf life of cultural relevance.

The original essay suggests that this state of affairs is poor for the prosperity and mental health of site users, because they lose their personal significance in the community as creators when they only passively consume media produced by large-scale external forces or thoughtless group dynamics. Users of websites with strong information currents are attuning themselves to the dominant rhythms of an impersonal culture, says the author, and each user becomes less of a person and more of a mere relay in the network, doing little but passing and blocking signals.

The prosperity of mental life is a fine thing. I'd like to list some additional reasons to resist becoming a relay in the network. Firstly, some information merits, for one’s own use, a deeper understanding, produced by long, deliberative research and contemplation. If you become a Relay, you will fail yourself. A person must have a long memory to learn some things of great value.

Another reason that we should invest more time in valuing and responding to content has a universalizing character. The author mentions a “clever, fictive metaphor bandied about by pseudo-mystical techno-utopians”: that a society is a mind and its members are as neurons. I, being such a techno-utopian, am of course putting this metaphor to use. The reason is this: there are things which cannot be achieved by cultures with short memories and fleeting interests. If we imagine setting the ratio of time that members of a community devote to gaining broad informedness versus deep understanding, we see that the short-memory extreme, with its vanishing and exploding feedback gradients, has few and shoddy basins of attraction.

Like the narrative distorted into myth, like the speech distorted into chants and slogans and soundbites, like the joke distorted into habitual references, like the interests distorted into stereotypes, and like the motive appeals distorted into click-bait, the products of short-memory culture cannot sustain context or nuance. These are washed away by the current of topicality, and much value goes with them, down into the dark archives.

So, it is good to resist becoming a relay in the network. If you want to relay some signals, fine, but then also go spelunking in the dark. Write about old works. Reinforce others who try to connect disparate fields and those who show that they've engaged with their ideas at length. Work on long term projects. Ultimately, build a creative culture with a long memory, because that is necessarily the domain of complex human flourishing.

Wednesday, March 16, 2016

Capitalized Abstract Nouns

On Facebook, Marcello Herreshoff posted a list of Capitalized Abstract Nouns that people believe in. Marcello provided it as a resource for an exercise to help individuals explicate their own preferences, but that exercise is not my main interest here. My aim is to form a taxonomy of things that people believe in, working from Marcello's list.

-

Conditions: People believe in preferred conditions of individual existence such as {innocence, dignity, strength, health, and freedom} and conditions of collective existence, such as {equity, order, accountability, and justice}. Also, people believe in beauty, which might be a preferred condition of reality, independent of the existence of people. Asserting a belief in these is perhaps asserting that the audience should also value the conditions.

Standards: People believe in behavioral standards or procedures (both individual & collective) such as {decency, fairness, honesty, efficiency, cleanliness, discipline, faith, and patience}. Statements of belief in these are partly assertions of their terminal value, and partly assertions of their instrumental efficacy in obtaining good conditions.

Agents: People believe in particular agents (individual & collective) such as {god, nature, human society, America, the American military, the global market, Whatever Inc, Senator Whomever, and themselves}. Belief in these is mostly about the ability of the agent to obtain good conditions, and partly about the ethical behavior of the agent, and a little bit about the terminal value of the agent's existence.

People believe in non-particular agents also, such as {family, friends, volunteers, patrons, citizens, stewards, and leaders}. Saying "I believe in family" means something very close to "I believe in the value of the relational behavioral standard of kinship," but I've decided to make this a separate category from believing in the other behavioral standards like decency, fairness, and honesty.

Projects: People believe in projects, movements, and social systems, such as {the Reformation, open source software, democracy, feudalism, the war on drugs, and miscegenation}. Belief in these is a lot like belief in particular agents.

People believe in bodies of knowledge and procedure, such as {education, medicine, tradition, law, and functional programming}. These are more like the behavioral standards.

Dunno: The last one I really care about categorizing is "The Future". Is that like believing in a particular narrative, as in "I believe in the virgin birth of Christ"? Or is the future conceived of as an agent bringing about desirable conditions? Or is the future thought of as desirable condition itself, like beauty? Or is the future a non-particular collective project, like believing in revolution? Maybe it's the normal kind of belief, and people are just saying that the future exists. Probably not that one, but I don't know.

-

SO THEN: People believe in good things, good lives, good conduct, good people and institutions and social roles, good projects, and maybe things in whatever category The Future belongs to. Riveting.