If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
Hello, I have been an off-and-on reader of LessWrong for several years, and this site in particular (like only a few others) seemed to have a mythical aura of quality control that felt very out of place to me, yet was still very stimulating to watch from outside. I decided to join now because I have very direct things to say, and the world of today is a good fit for what I want to share. LessWrong, by coincidence, is also the perfect platform for it: what I want to say mostly entails cybernetics, semiotics, the Anthropocene, the coming "automatocene", machine learning, and very common-sense views on AI that are conspicuously absent from the human sciences. Contrary to what one might expect, though, I am hardly a rationalist; I feel more at ease with the term anti-rational, but I promise to engage only with the parts of the community where I can be constructive. I am also hopeful the community will accept me in the way that I have observed other newcomers turn into successful posters with positive engagement.
Take care, stay safe <3
I'm curious to read what you have to say about cybernetics.
New to the site, and glad to be here, so let me give my introduction. I've more or less always been a traditional rationalist, in the sense that I place a high value on knowledge of the truth for its own sake and on optimizing my own cognitive processes. This has been something I've approached as a casual effort since childhood, but of course, rationality is a rigorous art that must be refined over many years, so I wouldn't say I had gotten particularly skilled at pursuing those ideals until relatively recently. So I came across this site, read a fairly diverse sample of the core materials, and swept through the concepts page, and its content appears to overlap a lot with my objectives as a pursuer of rationality, as well as having quite a history. Eliezer and many of LW's other featured authors write in a style I find very engaging and easy to grasp, and with just one major exception, discussed below, I find myself in agreement with most everything said. I'm surprised I hadn't heard of it sooner.
About the one exception, here's something anyone talking to me should know up front: I am a mind-brain dualist. I have nothing to hide, and I presume we can at least get along. Don't worry, I'm not religious, nor a silly Chalmersite; rather in fact a fairly basic Cartesian, if I must approximate. I am also probably one of the world's most committed and obsessive dualists. My reasons for so being, of course, are perfectly Bayesian; I'm actually a researcher of afterlife phenomena, and I mention that because it's a big part of why I'm here. When one treads in such waters, an overabundance of rationality, both epistemic and instrumental, is necessary to find any measure of success. First off, the fact of the matter is that there is a very good reason why this stuff has a hard time getting widely noticed, and it's because there is so much bullshit mixed in with the good bits that anyone without mountains of patience and discernment will end up hopelessly lost. That poses a problem for me, as one who will passionately defend the integrity of the scientific method over any particular theoretical persuasion, and very much a naturalistic reductionist in principle. So I certainly make my dualism pay rent (and I gouge the living daylights out of it, as seen below), I've inspected it a thousand and one times for any sort of underlying bias or rationalization, and I always update it wherever I find faults and come out clear, so I think I'm doing better than most. But I just want to become as adept as I can be at criticizing my own thought processes and being absolutely sure I'm continuing down the right track. Second, without overt demonstrations of the efficacy of these metaphysical phenomena I study, they might as well not exist at all. While some of the extreme scientific skepticism such ideas generate is of the motivated sort, much of it is also absolutely justified, until the research bears indubitable results. 
To that end, I'm a thought leader in realizing the practical applications of the theories in my field, and it's made me a beyond-obsessive mental modification practitioner, in a "makes Tom Brady look adorable" kind of way. In essence, I need to turn myself into the equivalent of a paperclip maximizer for the exact sort of ability I'm pursuing. I've been working at this intensely for a few years, with surprisingly accelerated results, but I really do need to take advantage of every possible resource to get a leg up, and LW certainly looks to be one. The "tsuyoku naritai" self-improvement ethos propagated on this site aligns exactly with my intentions.
Despite my strong affinities for many of LW's core concepts, however, I wouldn't call myself much of a singularitarian. Mainly, it's just not in my purview; besides metaphysics, I study cognitive neuroscience and linguistics, and have never done programming. So I can't claim to disagree with how the site talks about AI the way I explicitly oppose materialism, but I'm not sure if I quite follow, either. It often just seems overhyped to me, especially the risk side (the benefits of having it seem fairly obvious, but still feel bounded). I'm undeniably a transhumanist, but my brand of futurism is more of a scientific-metaphysical bent; just as an example, my perfectly envisioned distant future would have human life expectancy artificially shortened (talk about Weirdtopia). Naturally from that, stuff like cryonics is just needless and misguided to me. My personal utility function assigns very little value to death itself (as distinguished from suffering), and my model of Fun Theory, which I've developed fairly extensively, is fully content to surf the wave, so to speak. That's not to say I'm a technological luddite, though; I just lack the expertise to foresee where exactly long-term advancement is going, or to comment on issues like AI alignment. But I'm the sort who'll take it all however it comes, because life in the sense I care about will go on regardless. And I had damn well better be a futurist, knowing I'll be around to see all of it. So we'll share common ground in terms of thinking 10 moves ahead of the average person, just working to different endpoints. That's not to say they aren't substantially compatible; there's no reason the far future couldn't be metaphysically empowered beyond imagination and develop superintelligences that do all the boring stuff for us!
All told, I probably won't be extremely active here as a contributor, excepting if in-depth exposition of some theoretical constructs I bring up is directly requested, but I'll definitely be reading and commenting and asking questions whenever I have them, in order to learn what I need to know to enhance my rationality practices. As always, the only thing a true scientist needs to actually change their mind is one piece of falsifying evidence that's stronger than all the other available evidence. And I go after the weird stuff because it's just so fascinating and enjoyable, but I also understand the monumental burdens such a choice places on me. After all, extraordinary claims demand extraordinary evidence, and no one demands it more than I do. And that's why I'm here: to get better every day at demanding more every day.
What's the evidence - reincarnation memories? encounters with the dead?
Personally, I mostly study reincarnation cases; they're the only evidence I really find to meet a scientific standard. Let's just say that without them, I wouldn't be a dualist on any confident epistemic ground. That said, 99 percent of what you'll encounter in a casual search on the matter is absolute nonsense. When skeptics cry "Here be dragons!" to dissuade curious folks from messing around in such territory, I honestly can't say I blame them one bit, given how much dedication it takes to separate the signal from the deafening noise. If you want to dip your feet in the water without getting bit by a shark, I'd stick to looking at cases that, A, only involve very young children, and B, have been very thoroughly investigated and come up categorically verified by all accounts. It will probably take time to encounter something that feels really satisfying, but at the top end, they really do get next-level spectacular. It's incredibly fascinating and I love it to bits, but I'd never call it a pursuit to be taken casually. I actually think a population like LessWrong would probably be much better equipped than most to engage with such subject matter, though, because they're already practiced at the sort of Bayesian reasoning that's necessary to keep an honest assessment of the data, for what it is and nothing more.
How many such cases are known to you?
Restricting the query to true top-level, sweep-me-off-my-feet material, I'd say I've personally read about at least a few dozen that hit me that hard. If we expand to any case that researchers consider "solved" - that is, the deceased person whose life the child remembers has been confidently identified - I would estimate on the order of 2000 to 2500 worldwide, possibly more at this point.
Any idea how many of those would have been collected by Ian Stevenson specifically?
Good on you doing your DD. His official count (counting all cases known to him, not only ones he investigated) is around 1700, which probably means that my collective estimate is on the way low side - there's just a lot of unpublished material to try to account for (file drawer effect) - but I would definitely say that a great deal of the advancement in the field after Stevenson has been of a conceptual and theoretical nature rather than collecting large amounts of additional data. In general, researchers have pivoted to allowing cases to come to their attention organically (the internet has helped) rather than seeking out as many as possible. On the other hand, Stevenson hardly knew anything about what he was really studying until late in his career (and admitted as much), while his successors have been able to form much more cohesive models of what is going on. I would say that Stevenson is a role model to me as Eliezer is to a great deal of LW, but on the other hand, I find appeal to authority counterproductive, because the fact of the matter is that we today have access to better resources than he had and are able to do stronger and more confident work as a result. He, of course, supplied us with many of those resources, so respect is absolutely in order, but if we don't move forward at a reasonable pace from just gathering the same stuff over and over, the whole endeavor is no better than an NFL quarterback compiling 5000 passing yards for a 4-12 team.
How do you go about validating a case that comes to your attention via the internet? It seems to me like it's very hard to have access to information that the person in question has no way of knowing for cases that reach you via the internet.
Disclaimer: I'm not someone who personally investigates cases. What you've raised has actually been a massive problem for researchers since the beginning, and has little to do with the internet - Stevenson himself often learned of his cases many years after they were in their strongest phase, and sometimes after connections had already been made to a possible previous identity. In general, the earlier a researcher can get on a case and in contact with the subject, the better. As a result, cases in which important statements given by the subject are documented, and corroborated by a researcher, before any attempt at verification has been made are considered some of the best. In that regard, the internet has actually helped researchers get informed of cases earlier, when subjects are typically still giving a lot of information and no independent searches have been conducted. As for problems specifically presented by online communication, whenever a potentially important case comes to their attention, I would say that researchers try to take the process offline as soon as the situation allows.
Where do you think the most convincing information about those cases is published?
Unfortunate to say I haven't kept a neat record of where exactly each case is published, so I asked my industry connections and was directed to the following article. Having reviewed it, it would of course be presumptuous of me to say I endorse everything stated therein, since I have not read the primary source for every case described. But those sources are referenced at bottom, many with links. It should suffice as a compilation of information pertaining to your question, and you can judge what meets your standards.
Epoch Times in 2015 said Stevenson's successor Jim Tucker has brought the total up to "about 2000 cases".
Anyway, I will come out and say I don't believe it. Reincarnation may be logically possible - many things are logically possible - but the ascertainable facts don't provide sufficient reason to think it's actually happening. Adults consistently underestimate the imagination and intuition of children, and scientists regularly convince themselves of things that are false (and then there's the level of discussion present e.g. in cable TV documentaries, which is far more characteristic of ordinary thinking on the subject, and which cannot be counted on to have any respect for truth at all).
Also, our current understanding of neural networks suggests that individual brains develop idiosyncratic representations for anything complex, which is a problem for the idea that memories of other lives, formed in other brains, get downloaded into them. This is not a decisive objection, but it's definitely an issue for anyone seeking a mechanism.
It means very little evidentially, but I will report one thing that happened when I looked into this. In the opinion of some, Stevenson's most convincing case was a boy from Lebanon. I thought: Lebanon is a Muslim country, and one doesn't associate Islam with belief in reincarnation. Then I remembered the Druze sect - and indeed, on further study by myself, the boy turned out to be from a Druze family.
Reincarnation studies may be of interest from the perspective of "anomalistic psychology" - belief in reincarnation, after all, is part of some of the world's major belief systems; and understanding why people believe in it, and how that belief is reinforced in new generations, may shed light on how those cultures work.
That's definitely the proper naïve reaction to have, in my opinion. I would say with extremely high confidence that this is one of those things that takes dozens of hours of reading to overcome one's priors on, if your priors are well-defined. It took every bit of that for me. The reason is that there's always a solid-sounding objection to any one case - it takes knowing tons of them by heart to see how the common challenges fail to hold up. So, in my experience and that of many I know, the degree to which one is inclined to buy into it correlates directly with how determined one is to get to the bottom of it. Otherwise, I have to agree with you that there's no really compelling reason to be convinced based on what a casual search will show you. That, as well, seems to be the experience of most. Those who really care tend to get it, but it is inherently time-and-effort prohibitive. I really don't feel like asking anyone to undertake that unless they're heavily motivated.
Stevenson's greatest flaw as a researcher was that he didn't look terribly hard for American and otherwise Western cases, and the few he stumbled into were often mediocre at best. Therefore, he was repeatedly subjected to justified criticism of the nature "you can't isolate your data from the cultural environment it develops in". However, this issue has been entirely dissolved by successors who have rectified his error and found that such cases are just as common in non-believer Western families as anywhere else, including arguably stronger ones than anything he found. This is definitely the most important data-collection development in the field during the 21st century.
I must say I'm not at all interested in belief systems as an object of study, though - my goal is more or less to eradicate them. They're nothing but epistemic pollution.
Do animals also get this? Does an embryo's brain contain biological means to receive foreign data?
To the first question, there's just no way to know at the current stage of research. It's perfectly possible, just as it's possible that there's life in the Andromeda galaxy. To the second, know that taking ideas like this seriously involves entertaining some hard dualism; the brain essentially has to be regarded as analogous to a personal computer (at least I find such a comparison useful). Granting that premise, there's no reason a user couldn't "download" data into it.
I would guess memories from past lives to work like other parts of reality: no time travel (else we would be eaten by time travellers), minds need brains (else we wouldn't spend 20% of the body's oxygen in the brain), everything about biology has an evolutionary explanation.
It sure would be useful to an embryo to receive foreign data, but there's little point to broadcasting such. I'd therefore suspect the ability to broadcast to be incidental - perhaps a byproduct of the ability to receive, like every radio receiver can function as a transmitter. That we, given the ability, would broadcast is straightforward: If you copy software, you're more likely to copy from those who broadcast more. The practice would spread like a virus.
Dead brains generally stop doing things, and if the transmitter could work quickly we should see the same hardware used for telepathy. Therefore I suspect the transmission to be ongoing over the course of a life, that memories would very rarely be ones of death, that childhood memories are more common than elderly memories because they've been broadcasted for longer. Does this match the evidence?
No time travel: You are 100% correct. All cases ever recorded involve memories belonging to previously deceased individuals.
Minds need brains: To inhabit matter, they absolutely do. You won't see anyone incarnating into a rock, LMAO.
Everything about biology has an evolutionary explanation: Also 100% correct. Just adding dualism changes nothing about natural selection. And, once again granting the premise, the ability to retain previous-life memories is sure as hell adaptive.
By "broadcast", I assume you mean "speak about previous-life experiences". To that, I'd just say that humans tend to talk about things that matter to them. Therefore, having such memories would naturally lead to them being communicated.
I don't see how the mechanism for this connects to telepathy; that's an entirely different issue, and one I'm not personally convinced of the evidence for, but there are some who are.
Pertaining to the evidence you predict: Communication of past-life memory often tends to be centered in early childhood, and some subjects lose those memories as they grow up, but others retain them. Memories of death are in fact very prevalent in such cases, because they, naturally, carry extreme emotional salience. To your final prediction, the lives remembered actually involve early and violent deaths far more often than not, but beyond that, the age distribution of what is recalled seems to follow roughly the same relative histogram as normal long-term autobiographical memory, with things like recency and primacy effects operative.
Thanks for all the excellent questions!
Any thoughts on Rupert Sheldrake? Complex memories showing up with no plausible causal path sounds a lot like his morphic resonance stuff.
Also, old thing from Ben Goertzel that might be relevant to your interests, Morphic Pilot Theory hypothesizes some sort of compression artifacts in quantum physics that can pop up as inexplicable paranormal knowledge.
I haven't read Sheldrake in depth, but I'm familiar with some of his novel concepts. The issue with positing anything so circumstantial as the mechanism for these phenomena is that the cases follow narrow, exceptionless patterns that would not be so utterly predictable under a non-directed etiology. The subjects never exhibit memories of people who are still alive, there are never two different subjects claiming to have been the same person, one subject never claims memories of two separate people who lived simultaneously... all these things one would expect to be frequent if the information being communicated was essentially random. It's honestly downright bonkers how perfectly the dataset aligns to a more or less "dualist the exact way humans have imagined it since prehistory" cosmology.
Have you run the numbers on these? For example
sounds like a case of the Birthday paradox. Assume there's order of magnitude 10^11 dead people since 8000 BCE. So if you have a test group of, say, 10 000 reincarnation claimants and all of them can have memories of any dead person, already claimed or not, what's the probability of you actually observing two of them claiming the same dead person?
The bit about the memories always being from dead people is a bit more plausible. We seem to have like 10 % of all people who ever lived alive right now, so assuming the memories are random and you can actually verify where they came from, you should see living people memories pretty fast.
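That second check can be sketched directly. A minimal Python example, using the thread's own rough figures as assumptions (roughly 10% of everyone who ever lived alive today, and the ~2000 "solved" cases mentioned earlier):

```python
# If claimed memories were uniform random draws over everyone who ever
# lived, and ~10% of those people are alive today, the probability that
# ~2000 independently "solved" cases ALL point at deceased people is
# vanishingly small.
p_dead = 0.9        # assumed chance a random draw lands on a dead person
cases = 2000        # approximate solved-case count quoted in this thread

p_all_dead = p_dead ** cases
print(p_all_dead)   # ≈ 3e-92
```

Under random draws you'd expect living-person memories to show up within the first few dozen cases, so this observation carries most of its weight even at much smaller sample sizes.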
About 5×10⁻⁴ (the naive approximation n²/2N gives the same answer). Calculated using this logfactorial function in Matlab:
p = 1 - exp( logfactorial( N ) - logfactorial( N-n ) - n * log( N ) )
You would need about 400000 reincarnation claimants to have a 50% chance of any collisions.
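For readers without Matlab, here is a small Python sketch of the same birthday-problem computation. It replaces the `logfactorial` difference with a running sum of `log1p` terms, which is numerically stable when n is tiny relative to N (the function name is my own):

```python
from math import expm1, log1p

def collision_prob(N: int, n: int) -> float:
    """Birthday-problem probability that at least two of n claimants
    'remember' the same one of N dead people, assuming uniform draws.
    P(no collision) = prod_{k=0}^{n-1} (1 - k/N), accumulated in log space."""
    log_p_distinct = sum(log1p(-k / N) for k in range(n))
    return -expm1(log_p_distinct)  # numerically stable 1 - exp(...)

N = 10**11  # order-of-magnitude count of dead people since 8000 BCE
print(collision_prob(N, 10_000))   # ≈ 0.0005
print(collision_prob(N, 400_000))  # ≈ 0.55
```

The second call confirms the figure above: around 400,000 claimants are needed before a collision becomes more likely than not.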
I assume you mean to say the odds of two subjects remembering the same life by chance would be infinitesimal, which, fair. The odds of one subject remembering two concurrent lives would be much, much higher. Still doesn't happen. In fact, we don't see much in the way of multiple-cases at all, but when we do, it's always separate time periods.
what if it turned out that almost all memories are lost on death and only those that are explicitly passed on get passed on, but that also, memories passed on to other humans via explicit action on the part of the person passing them on do result in some of the author's soul being embedded in the story or what have you, and in this sense you really are a reincarnation of the consciousness of the beings you grew up around, who are themselves a reincarnation of those that they grew up around, etc.
personally I've started giving really annoying answers to both dualists and materialists; I could stretch both definitions into calling myself a "materialist dualist", in that I view information in the sense of structure to be the defining characteristic that is soul. as such, I would not terribly mind to be part of a sequence of beings who reincarnate often, as long as they reincarnate quite well, with much less lost than current deaths lose. in my current view, deaths today are a tragedy not quite beyond measure, because the majority of a person's genetic and memetic soul-shape are lost on death, and only any of their soul that was imprinted onto the environment in coherent form remains significant. Plenty of the imprinted soul a person pushes into the world around them can come back as thoughts of implications in the minds of future beings, but it's just so limited compared to our rich inner lives. I did much imagining in the process of writing this message, for example, and I believe it's actually legitimate to consider the thoughts you have while reading this to be a semi-reincarnation of the thoughts I had while writing it, such that those thoughts are partially of the same dualist soul. but they can only enter the same soul by nature of being the same thoughts; and you can't see the living room I'm imagining, nor did you know until this sentence that I was imagining the imprint I leave on my walls by leaning on them, by bumping into them, etc.
if you have evidence of anyone being aware of information that could not have reached them any other way than presently-unknown physical processes, though, then that seems like it could be a strong argument for there being additional realms of structure available to the brain that are currently unidentified. to my knowledge, there are no verified accounts of people having access to information that seemed impossible that did not eventually resolve to either lucky description of simple information that matched too easily, or resolved to the person having encountered the information without realizing it.
but again, as I legitimately 100% believe information is soul, I wouldn't argue against reincarnation being an accurate description of the world - I'd argue against the view that all parts of a soul come bundled. in my view, only that which you pass on gets passed on.
I had a hard time understanding a good bit of what you're trying to say here, but I'll try to address what I think I picked up clearly:
While reincarnation cases do involve memories from people within the same family at a rate higher than mere chance would predict, subjects also very often turn out to have been describing lives of people completely unknown to their "new" families. The child would have absolutely no other means of access to that information. Also, without exception, they never, ever invoke memories belonging to still-living people.
On that note, you'll be pleased to hear that your third paragraph is underinformed; there are in fact copious verifications of that nature in the relevant literature. If there weren't, you wouldn't hear me talking about any of this; I'm simply too clingy to my reductionist priors to demand anything less to qualify as real evidence for off-the-wall metaphysics.
Whether there are people who reincarnate often is really hard to determine at present; subjects who concretely remember more than one verified previous life are incredibly rare. However, I suppose that is my cue to spill the remaining beans: my entire utility function and a huge basis of my rationality practice is predicated on the object of "reincarnating well", particularly fixating on the matter of psychological continuity, which you allude to directly; this is my personal "paperclips" to be maximized unconditionally. In familiar Eliezer-ese diction, I feel a massive sense that more is possible in this area, and you can bet your last dollar that I have something to protect. Moreover, as a scientist working with ideas many consider impossible, I believe in holding myself to equally impossible standards and making them possible, thereby forcing the theoretical foundations into the acknowledged realm of possibility. In other words, if the phenomena I'm studying are legitimate, I'll be able to do truly outrageous things with them; if I can't, the doubters deserve to claim victory.
Frankly, I'm pleasantly surprised to be seeing concepts like these discussed this charitably on LW; none of this is anything close to Sequence-canon. I certainly don't want to jinx it, but from what I'm seeing so far, I'm extremely impressed with how practically the community applies its ideological commitment to pure Bayesian analysis. If nothing more, I hope to at least make myself one of LW's very best contrarians. But I'm curious now, is there a fairly sizable contingent of academic/evidential dualists in the rationalist community?
I mean I actually think you are catastrophically wrong about there being any "hidden variable" knowledge-passing, but I'm going to talk to you to figure out why you believe it, not just dismiss it a priori! I simply expect the evidence for dualist violations of known physics to turn out to be very weak.
could you cite somewhere I can look to find more of this? after looking briefly at wikipedia, I find what I expected to find - careful analysis of a plausibly astounding phenomenon, carefully catalogued and currently expected to be found to not be blatantly violating thermodynamics about what the kids knew when. If the kids can recite passwords they could not possibly have had access to, then it would start to seem plausible - but it takes an awful lot of evidence to overcome "it was just the kid forgetting they'd seen the stuff before", and it looks like the evidence probably isn't there. certainly no causally isolated studies.
There is a case on record that involved a recalled phone number. A password is a completely plausible next step forward.
For a very approachable and modernized take on the subject matter, I'd check out the book Before by Jim Tucker, a current leading researcher.
As a disclaimer, it's perfectly rational and Bayesian to be extremely doubtful of such "modest" proposals at first blush - I was for a good length of time, until I did the depth of investigation that was necessary to form an expert opinion. Don't take my word for things!
It's more empirical than ideological for me. There are these pockets of "something's not clear here", where similar things keep being observed, don't line up with any current scientific explanation, and even people who don't seem obviously biased start going "hey, something's off here". There's the recent US Navy UFO sightings thing that nobody seems to know what to make of, and there's Daryl Bem's 2011 ESP study, which follows work by people like Dean Radin who seem to keep claiming the existence of a very specific sort of PSI effect. Damien Broderick's Outside the Gates of Science was an interesting overview of this stuff.
I don't think I've heard much of reincarnation research recently, but it was one of the three things Carl Sagan listed as having enough plausible-looking evidence for them that people should look a lot more carefully into them in The Demon-Haunted World in 1996, when the book was otherwise all about claims of the paranormal and religious miracles being bunk. I guess the annoying thing with reincarnation is that it's very hard to study rigorously if brains are basically black boxes. The research is postulating whole new physics, so things should be established with the same sort of mechanical rigor and elimination of degrees of freedom as existing physics is, and "you ask people to tell you stories and try to figure out if the story checks out but it's completely implausible for the person telling it to you to know it" is beyond terrible degrees-of-freedom-wise if you think of it like a physicist.
When you keep hearing about the same sort of weird stuff happening and don't seem to have a satisfying explanation for what's causing it, that makes it sound like there's maybe some things that ought to be poked with a stick there.
On the other hand, there's some outside view concerns. Whatever weird thing is going on seems to be either not really there after all, or significantly weirder than any resolved scientific phenomenon so far. Scientists took reports of PSI seriously in the early 20th century and got started trying to study them (Alan Turing was still going "yeah, human telepathy is totally a thing" in his Turing Test paper). What followed was a lot of smart people looking into the shiny new thing and accomplishing very little. Susan Blackmore spent decades studying parapsychology and ended up vocally disillusioned. Dean Radin seems to think that the PSI effect is verified, but it's so slight that "so go win the Randi Prize" doesn't make sense because the budget for a statistically conclusive experiment would be bigger than the prize money. And now we're in the middle of the replication crisis (which Radin mentions zero times in a book he published in 2018), and psychology experiments that report some very improbable phenomenon look a lot less plausible than they did 15 years ago.
The UFO stuff also seems to lead people in strange directions of thinking that something seems to be going on, but that it doesn't seem possible for it to be physical spacecraft. Jacques Vallée ended up going hard down this path and pissed off the science-minded UFOlogists. More recently, Greg Cochran and LessWrong's own James Miller talked about the Navy UFO reports, and how the reported behavior doesn't seem to make sense for any physically real object, on Miller's podcast (part 1, part 2).
So there's a problem with the poke things with a stick idea. A lot of smart people have tried, and have had very little progress in the 70 years since the consensus as reported by Alan Turing was that yeah this looks like it's totally a thing.
One of the best, most approachable overviews of all this I've ever read. I've dabbled in some, but not all, of the topics you've raised here, and I certainly know about the difficulties they've all faced in rising to a scientific level of rigor. What I've always said is that parapsychology needs Doctor Strange to become real, and he's not here yet and probably never will be. Otherwise, every attempt at "proof" is going to be dealing with some combination of unfalsifiability, minuscule effect sizes, or severe replication issues. The only related phenomenon that has anything close to a Doctor Strange is, well, reincarnation - it's had a good few power players who'd convince anyone mildly sympathetic. And it lacks the above unholy trinity of bad science: lack of verification would mean falsification, and it's passed that test with flying colors; the effect sizes and significance get massive quickly, even within individual cases; and the reports keep coming with exactly the same features. But it certainly needs to do a lot better, and that's why it has to move beyond Stevenson's methodology and start creating its own evidence. So my progressive approach holds that, if it is to stand on its own merit, it is time to unleash its full capacity and conduct a wholesale destruction of normalcy with it; if such an operation fails, then it has proven too epistemically weak to be worthy of major attention, if it is genuine at all.
Hello! I am new to LessWrong, and I've heard from people online that this is where they dump their philosophical thoughts. I didn't know if signing up was the right thing, but after seeing how welcoming people are, I wanted to try it out for myself! I am a student interested in doing philosophy. I've had my doubts about sharing my ideas on certain topics, but I feel like this would be the best place to put them out there.
I stumbled upon LW by chance when I was listening to Liv Boeree on YouTube! My first impression is very positive. I'm looking forward to reading the different posts.
Discovering your website is timely for me, because I want to take a step back in life to think about useful/real/big problems to work on. I hold a PhD in computer science, and most of my work involves algorithmic reasoning. Recently, I started to get bored with developing 'dry' algorithms that are almost always useless in practice. I feel that I need a fresh start to work on interesting and useful problems. Currently, I'm trying to explore different subjects to get a better vision. However, this exploration comes with a struggle to decide which area to focus on and which problem to tackle! If you have been in such a situation before, or if you have suggestions, I'll be more than grateful.
Oops, sorry, I let our SSL certificate expire for like 20 minutes. Sorry for everyone who got a non-secure warning on the frontpage for the last 15 minutes or so, but should all be fixed now (I was tracking it as a thing to fix today, but didn't think about timezones when thinking about when to deal with it).
Tiny Feature Request:
I like that there's a "crossposted to the EA Forum; click to see n comments" message on crossposts. I would like it even more though if it actually sent me to the post's comments section if I click it. (But maybe that's just me?)
Hm, yeah, that sure does seem like it ought to behave like that.
I'm not new to reading LessWrong, but I am new to posting or commenting here. I plan to be more active in the future. I care about the cause of AI Alignment, and am currently in the process of shifting my career from low-level operations work at MIRI to something I think may be more impactful:
I.e. supporting alignment researchers in their efforts to level up in research effectiveness, by offering myself as a conversational partner to help them think through their own up-leveling plans.
In that spirit, here's an offer I'd like to make to any interested alignment researchers who come across this comment.
Free debugging-style conversations (could be just one, or recurring) aimed at helping you become a more effective researcher. How to sign up?
Questions you may have:
What would the conversation look like?
Who am I, and why might I be a good person to talk to about this?
Does anyone know of work dealing with the interaction between anthropic reasoning and illusionism/eliminativism?
Let A be a set, f:A→A and g:A→A. We can now define a property for f and g: g∘f^n = (g∘f)^n. Does a name for this property already exist?
If I were making one up, I might say "g distributes over composition of f".
I like it!
Is that equation supposed to hold for one specific value of n, or for each n?
For e.g. n = 2, am I parsing it correctly as "g(f(f(x))) = g(f(g(f(x))))"?
You're parsing it correctly, and it's supposed to hold for all n.
Never heard about anything like this. (Doesn't mean much, I am not a professional mathematician.)
Just thinking out loud...
Trivial examples are: "g is identity", "g is constant", "f is constant".
A non-trivial example would be "f = x^3", "g = absolute value".
Something in between: A is a binary representation of an integer, f is an operation where a lower bit of an input never has an impact on a higher bit of the output (for example: a square root, rounded down), g is setting the last K bits to some constant value.
Functions f and g cannot both be bijections, otherwise g◦f◦id◦f = g◦f◦g◦f would imply id=g .
If you have functions f1 and g1 : A -> A with this property, and also f2 and g2 : B -> B with this property, the functions f([a, b]) = [f1(a), f2(b)] and g([a, b]) = [g1(a), g2(b)] : A×B -> A×B will also have this property.
...that's all that comes to my mind.
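The non-trivial example above (f = x^3, g = absolute value) can be checked numerically. A minimal sketch (the helper names are my own, just for illustration):

```python
# Check the property g∘f^n = (g∘f)^n for f(x) = x^3, g(x) = |x|.

def f(x):
    return x ** 3

def g(x):
    return abs(x)

def iterate(h, n, x):
    """Apply h to x, n times."""
    for _ in range(n):
        x = h(x)
    return x

def has_property(f, g, xs, max_n=5):
    # g(f^n(x)) should equal (g∘f)^n(x) for every sample x and every n tested.
    gf = lambda x: g(f(x))
    return all(
        g(iterate(f, n, x)) == iterate(gf, n, x)
        for x in xs
        for n in range(1, max_n + 1)
    )

print(has_property(f, g, range(-5, 6)))  # prints True
```

Of course this only tests finitely many inputs and exponents, but it's a quick way to screen candidate (f, g) pairs before trying to prove anything.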
This is an inspirational example. If g is idempotent (g^2 = g) and commutes with f (gf = fg), they will satisfy the property.
Wow, that is a nice generalization.
But the commutativity is not necessary, in the case where f = "square root, rounded down" and g = "x rounded down to a multiple of K". In such case, it is true that gg = g, but not always gf = fg, and yet always gfg = gf.
From that example, I would be tempted to generalize that gg = g and fg = f. But that would not be true for x^3 and absolute value.
So maybe gg = g and gfg = gf? Is it sufficient? gfgf = gff. gfgfgf = gfgff = gfff. Yes, it is. Is it necessary? No, for example if f is a constant then gg does not have to be equal to g and it will work anyway. There are also counter-examples if f is not a constant, for example if we have a bijection between N and NN, then let f([a, b]) = [a^3, K] and g([a, b]) = [|a|, whatever(b)]; then gfgf([a, b]) = gff([a, b]) = [|a^9|, whatever(K)].
This is the moment where I give up. It was tempting to translate g(f^n) = (gf)^n into a set of equations that does not include n, but apparently I can't do it.
Equivalently, gfgf=gff. Do you even have fgf=ff?
That is, admittedly, a bit terse for me. Could you elaborate?
gf^n=(gf)^n trivially implies gff=gfgf. gff=gfgf implies gf^3=(gff)f=(gfgf)f=gf(gff)=gf(gfgf)=(gf)^3, and analogously for all n. Therefore you can summarize your property as gff=gfgf.
I suspect you discovered this property by studying specific g,f. ff=fgf implies gff=g(ff)=g(fgf)=gfgf. Therefore I suggest you check whether your g,f satisfy the stronger ff=fgf, which looks like a more natural property.
Will it one day be possible for authors of posts to view analytics (such as view count etc.) on their posts? On EA Forum this is already implemented, I'm curious whether it's coming to LW too. I'd like it to.
We haven't historically done it, out of a vague fear that it sends people down the same incentive path that has ruined the rest of the web. I'm not sure how well founded the fear is. (Also, maybe separately, because it involves some setup costs that are slightly a pain.)
Ah, OK. Well, that's reasonable I guess.
Suppose you are the CCP, trying to decide whether to invade Taiwan soon. The normal-brain reaction to the fiasco in Ukraine is to see the obvious parallels and update downwards on "we should invade Taiwan soon."
But (I will argue) the big-brain reaction is to update upwards, i.e. to become more inclined to invade Taiwan than before. (Not sure what my all-things-considered view is; I'm a bit leery of big-brain arguments.) Here's why:
Consider this list of variables:
These variables influence who would win and how long it would take / how costly it would be, which is the main variable influencing the decision of whether to invade.
Now consider how the fiasco in Ukraine gives evidence about those variables. In Ukraine,
1. The Russians were surprisingly incompetent
2. The Ukrainians put up a surprisingly fierce resistance
3. The USA responded with surprisingly harsh sanctions, but stopped well short of actually getting involved militarily.
This should update us towards expecting the Chinese military to be more incompetent, the Taiwanese to put up more of a fight, and the USA to be more likely to respond merely with sanctions and arms deliveries than with military force.
However, the incompetence of the Russian military is only weak evidence about the incompetence of the Chinese military.
The strength of Ukrainian resistance is stronger evidence about the strength of Taiwanese resistance, because there is a causal link: Ukrainian resistance was successful and thus will likely inspire the Taiwanese to fight harder. (If not for this causal link, the evidential connection would be pretty weak.)
But the update re: predicted reaction of USA should be stronger still, because we aren't trying to infer the behavior of one actor from the behavior of another completely different actor. It's the same actor in both cases, in a relevantly similar situation. And given how well the current policy of sanctions-and-arms-deliveries is working for the USA, it's eminently plausible that they'd choose the same policy over Taiwan. Generals always fight the last war, as the saying goes.
So, the big-brain argument concludes, our estimates of variable 1 should basically stay the same, our estimate of variable 2 should change to make invasion somewhat less appealing, and our estimate of variable 3 should change to make invasion significantly more appealing.
Moreover, variable 3 is way more important than variable 2 anyway. Militarily, Taiwan has much less of a chance against China than Ukraine had against Russia. (Fun fact: in terms of numbers, even on Day 1 of the invasion the Russians didn't really outnumber the Ukrainians, and very quickly they were outnumbered as a million Ukrainian conscripts and volunteers joined the fight. China, by contrast, can drop about as many paratroopers on Taiwan in a single day as there are Taiwanese soldiers.) By far the more relevant variable in whether or not to invade is what the USA's response will be. And if the USA responds the same way it did in Ukraine, that's great news for China, because economic sanctions and arms deliveries take months to have a significant effect, and Taiwan won't last that long.
So, all things considered, the events in Ukraine should update the CCP to be more inclined to invade Taiwan soon, not less.
Ben Thompson (https://stratechery.com/), an American industry analyst currently living in Taiwan, has a bunch of analyses of this on his blog. In a nutshell, the US has a critical infrastructure dependency on Taiwan in high-performance chip manufacturing; specifically, TSMC has a 90% share of 7nm and 5nm chips. This is critical infrastructure for which the US does not have good (or even close-enough) substitutes. Based on both these economic incentives and Biden's own statements, the US is extremely likely to reply to Chinese aggression against Taiwan with military force.
See my reply to ChristianKI above.
Or are you saying that the probability is so high that it isn't a relevant variable in CCP planning; they'll basically just assume a kinetic US response and then plan around that? If so, then yeah that's a good counterargument.
Metaculus disagrees, fwiw:
The US's communication about what happens if Ukraine gets attacked and its communication about what happens if Taiwan gets attacked are different.
Biden didn't say anything about defending Ukraine with US troops. Biden did say that Taiwan would be defended with US troops. Taiwan is much more important to US interests than Ukraine is.
I agree with that. But that's separate from what I'm discussing here. I'm not saying an invasion of Taiwan is overall likely, or wise for the CCP to do; I'm saying that the invasion of Ukraine should update the CCP towards invading Taiwan. Biden's statements should obviously update them away.
Updates come from reality playing out differently than you expected. Without knowing the models that someone has it's hard to say how they should update based on events happening.
The Chinese are going to have a bunch of scenarios mapped out in which the US would or wouldn't respond militarily and those rest on assumptions about how the US makes decisions. I don't think that the US response to Ukraine invalidates any of the assumptions that CCP models of the world in which the US retaliates militarily.
What's the GiveWell/AMF of AI Safety? I'd like to occasionally donate. In the past I've only done so for MIRI a few times. A quick googling fails to return anything useful in the top results which is odd given how much seems to be written in LW/EA and other forums on the subject every week.
Larks' end-of-year reports. (Looking at their user profile is probably your best bet.)
I tried to find the link for the Searching for Outliers post by Ben Kuhn, but the google search results here are quite weird, listing benkuhn's profile page but not the post itself. Any idea why?
Google likely sees the real Searching for Outliers as being https://www.benkuhn.net/outliers/ and thus reasonably down weights the post on LessWrong in the search ranking.
I do think that this behavior is reasonable. Crossposting to LessWrong shouldn't make the post on the original blog harder to find via Google.
When setting up automatic crossposting, authors can ask us to make it so that all crossposts automatically have the `rel=canonical` tag set, pointing to the original post. Ben Kuhn asked us to do this, so the HTML directly says "when indexing this, go to this other URL to find the canonical version of this post".
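For reference, the canonical link is a single tag in the crosspost page's `<head>`; a minimal sketch of what the markup looks like (using Ben Kuhn's post as the example target):

```html
<!-- Served on the LessWrong crosspost page -->
<head>
  <link rel="canonical" href="https://www.benkuhn.net/outliers/" />
</head>
```

Search engines that honor this consolidate ranking signals onto the original URL, which is exactly the "down-weight the crosspost" behavior described above.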