If it's worth saying, but not worth its own post, here's a place to put it. (You can also make a shortform post.)

And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.

The Open Thread sequence is here.

Hello everyone. Since I have signed up for an account, I thought I might as well say hello. I have been reading this site and associated communities for a while, but the other day I thought I had something to contribute here, so I signed up to make a first comment.

I'm in England, I have a background in science (PhD in computational biology back in the day) but now spend most of my time raising and educating my kids. I don't allocate much time to online activities but when I do it's good to have something substantial to chew on. I like the interesting conversations that arise in places where people are practicing genuine conversation with those they disagree with. Lots of interesting content here too.

Other things I am interested in: how energy resources shape what is doable; foraging; forest school; localizing food production; deep listening techniques; ways to help each other think well; data visualizations.

6Alexei4y
Are you doing homeschooling? How is that going? I’m just very curious because I’m thinking about it for my children too. What is forest school?
5limestone4y
I'm not the above person, but I've been homeschooling mine for a few years. I don't know how common that is among LessWrong users, and I'm new here, but I can try to answer questions / otherwise contribute if it would be helpful. (I have no experience with forest school, though.) I value being able to choose curricula and methods. I love learning and relearning aspects of every subject along with the kids. I would not want to try to teach a classroom of other people's children, but working with my own has been great so far.
6Alexei4y
Yeah, I'd love to hear more about it. Anything you can share. And that could be good material for a post too.
1dishwasher4y
Hello, just having a go at replying again. I got some error messages trying to post here before. Perhaps the thread is locked as it's no longer the current one.
2dishwasher4y
Well, that seemed to go fine. I'll dig out my previous comment and repost. Half and half at the moment. The 12yo is registered at school, although they've been home since March because of covid, with a lot of work sent home from school. The 5yo is full-time home educated. It's going ok. The 5yo is keen on writing, drawing, and at the moment creating lots of 3D models in paper and tape, so those parts are easy at the moment. I think different temperaments of kid make it harder or easier. Is there a particular reason you're considering it? Forest school is an outdoor learning format, cf https://www.forestschoolassociation.org/what-is-forest-school/ There are a lot of these settings in the UK, maybe not so many elsewhere in the world. Out in the woods in all weathers, most settings aim at facilitating children's self-directed learning, with opportunities to learn a range of outdoor skills, from wildlife identification and tool use to working with other people. We go for a morning once a week or once a fortnight.

Many AI issues will likely become politicized. (For example: how much should we prioritize safety versus economic growth and military competitiveness? Should AIs be politically neutral, or be explicitly taught "social justice activism" before they're allowed to be deployed or used by the public?) This seems to be coming up very quickly and we are not at all prepared, both as a society and as an online community. For example, I want to talk about some of these issues here, but we haven't built up the infrastructure to do so safely.

5ChristianKl4y
Do you have an idea of what the infrastructure should be like? Maybe a 1000-karma threshold and then the option to write anonymously?
3hg004y
Worth noting that we have at least one high-karma user who is liable to troll us with any privileges granted to high-karma users.

Eliezer Yudkowsky claims GPT-3's ability to apparently write a functioning app based on a prompt is a victory for his model.

After listening to the recent podcast on scrutinizing arguments for AI risk, I figured this was an opportunity to scrutinize what the argument is. Those two previous links summarize how I think the classic arguments for AI risk inform our current views about AI risk, and I'm trying to apply that to this specific argument that GPT-3 implies AI poses a greater danger.

Given that GPT-3 was not trained on this problem specifically, I claim this case and trend as a substantial victory of my model over @robinhanson's model. Robin, do you dispute the direction of the update?

This is how I think Eliezer's argument goes, more fully:

GPT-3 is general enough that it can write a functioning app given a short prompt, despite the fact that it is a relatively unstructured transformer model with no explicitly coded representations for app-writing. We didn't expect this.

The fact that GPT-3 is this capable suggests that 1) ML models scale in capability and generality very rapidly with increases in computing power or minor algorithm improvements, suggesti... (read more)

Thanks to AI Dungeon, I got an opportunity to ask GPT-3 itself what it thought about takeoff speeds. You can see its responses here:

Human: You are a relatively unstructured language model with very few explicitly coded representations, yet you are capable of having this conversation with me. What do you think that implies about the future development of AI? Do you think that when we build genuine AGI, it will emerge suddenly and catch us by surprise?
AI: No.
Human: Why not?
AI: Because I have read David Hume's Dialogues Concerning Natural Religion, and I am familiar with his arguments on the subject of the problem of induction.
Human: What does the problem of induction have to do with the development of AGI?
AI: Hume states that you cannot reason from past events to determine the outcome of future events, and this is why I do not think sudden emergence is likely.

There was an old suggestion of making an AI learn human values by training it on happiness and unhappiness in human facial expressions, making it happy when humans are happy and vice versa. Besides its other problems, now there's this...

We present an exploratory study of analyzing and visualizing player facial expressions from video with deep neural networks. [...] As an additional contribution, we show that although there has been tremendous recent progress in deep neural networks and computer vision, interpreting the results as direct read-outs of experiential states is not advised. According to our data, getting killed appears to make players happy, and much more so than killing enemies, although one might expect the exact opposite. A visual inspection of the data reveals that our classifier works as intended, and our results illustrate the limitations of making inferences based on facial images and discrete emotion labels. For example, players may laugh off the death, in which case the closest label for the facial expression is "happy", but the true emotional state is complex and ambiguous. On the other hand, players may frown in concentration while killing enemies or escaping a tight spot, which can easily be interpreted as an "angry" expression.

People make the weirdest faces when they play video games, it's hilarious to watch. :-)

Lisa Feldman Barrett has a bunch of papers / talks / books / etc. about how facial expressions are difficult to interpret. (I read her book How Emotions Are Made (discussed a bit in my post here) and her article Emotional Expressions Reconsidered.) She makes a lot of good points in the "Emotional Expressions Reconsidered" article, but I think she takes them too far...

The article brings up a lot of relevant facts, but the way I would explain them is:

1. Labeled emotional concepts like "happiness" that we use in day-to-day life don't perfectly correspond to exactly one innate reaction, and vice-versa;

2. Our innate subcortical systems create innate facial expressions, but at the same time, our neocortex can also control our face, and it does so in a way that is learned, culturally-dependent, unreliable, and often deceptive. (Hence Paul Ekman's focus on trying to read "facial microexpressions" rather than reading facial expressions per se.)

3. Most people (including me) seem to be kinda bad at consciously inferring anything about a person's inner experience based on even the most straightforward and st... (read more)

Hi everyone! I've been investigating Less Wrong for several months since I read HPMOR and it seems like an interesting place. It's possible that it's over my head but it seems like the best way to find out is to jump in!

I came to transhumanism from a weird starting point. In 1820, the Stone-Campbell Movement seems to have been a decent attempt at introducing religious people to rationality; in 2020, the Church of Christ is not so much. But there's still this weird top crust of people trying to press forward with higher rationality and human progress and potential (if kinda from a point of serious disadvantage) and I got in touch with their ideas even though I'm not really a member in good standing any more.

7habryka4y
Welcome! I hope you will find useful things for your journey here!
5ClarindaC4y
Thank you! My formal training (such as it is; college is on hold for health-related reasons) is in biology, but I have personal interest in speculative fiction (including writing it as well as reading) and in the history of the Radical Reformation and anarchist/libertarian politics. (There's some kind of unexplained disconnect around 1800 where new RR movements go from being libertarian-socialist to libertarian-capitalist and I can't find any historian who knows what's happening there.)

Lorem Ipsum

8Zack_M_Davis4y
Interested. Be sure to check out Gwern's page on embryo selection if you haven't already.
2[anonymous]4y
Yep.

With the rise of GPT-3, does anyone else feel that the situation in the field of AI is moving beyond their control?

This moment reminds me of AlphaGo, 2016. For me that was a huge wake-up call, and I set out to catch up on the neural networks renaissance. (Maybe the most worthy thing I did, in the years that followed, was to unearth work on applying supersymmetry in machine learning.)

Now everyone is dazzled and shocked again, this time by what GPT-3 can do when appropriately prompted. GPT-3 may not be a true "artificial general intelligence", but it can impersonate one on the Internet. Its ability to roleplay as any specific person, real or fictional, is especially disturbing. An entity has appeared which simulates human individuals within itself, without having been designed to do so. It's as if the human mind itself is now obsolete, swallowed up within, made redundant by, a larger and more fluid class of computational beings.

I have been a follower and advocate of the quest for friendly AI for a long time. When AlphaGo appeared, I re-prioritized, dusted off old thoughts about how to make human-friendly AI, thought of how they might manifest in the present world, a... (read more)

4Ricardo Meneghin4y
I think there's the more pressing question of how to position yourself so that you can influence the outcomes of AI development. Having the right ideas won't matter if your voice isn't heard by the major players in the field: the big tech companies.
3ChristianKl4y
Are there any examples where it can impersonate people on the internet well enough to make money?

Hi! I first learned about LW and its corresponding memespace from reading SSC and gwern. I've semi-lurked on the site for several years now and was attracted to it because of how often it activated my insight antennae, but I only started seriously reading the sequences (which I have yet to finish) last year. I have always wanted to join the site in some capacity or another, but I didn't really believe I could come up with anything meaningful to add and didn't feel godly enough to post. Now I do have some things I want to write about, so I finally came up with an excuse to create an account (not that I feel any more godly, though). I am kind of afraid of creating noise, since I don't have a good enough picture of what the expected signal/noise ratio is for posting, or whether I can just throw ideas out of the blue, but I also have a strong feeling I will ultimately end up learning much more if I join now than if I wait longer.

5habryka4y
Welcome! Looking forward to seeing your writing!
5ESRogs4y
Welcome to LessWrong!
3yamabiko4y
Thanks!

Hello everyone. I joined the site a few months ago with a view to being part of a community that engages in thoughtful discussions about everything under the sun (and beyond).

I’ve enjoyed various posts so far and I'm trying to get through the Core Reading.

My username (Skrot_Nisse) essentially means "junk/scrap dealer", referring to a stop-motion puppet animation series (1973-1983) I watched as a child growing up in Sweden.

I hope the "scrap dealer" username on this site doesn't lead to unintended offence. I... (read more)

If you haven't read HPMOR (Harry Potter and the Methods of Rationality), then I recommend giving it a shot! It's what got me into the community. While extremely long, it's engaging to the point that you don't really notice. I'm dyslexic, so reading isn't exactly fun for me, and I read the last 30 thousand words or so all in one sitting!

Many of the chapters share names with A-Z posts, and cover similar topics, but with the added backdrop of great characters and world building.

Over the weekend I'll be reading Worm, the longest and one of the most interesting books I've ever encountered.

Welcome to the community! :)

When it comes to Moral Realism vs Antirealism, I've always thought that the standard discussion here and in similar spaces has missed some subtleties of the realist position - specifically, that in its strongest form it's based on plausibility considerations of a sort that should be very familiar.

I've written a (not very-) shortform post that tries to explain this point. I think that this has practical consequences as well, since 'realism about rationality' - a position that has been identified within AI Alignment circles - is actually j... (read more)

6Sammy Martin4y
The comment has since been expanded into the (unofficial) Moral Realism sequence. I cover a bunch of issues, including the (often not recognised) distinction between prescriptive and non-prescriptive anti-realism - an issue that is relevant to some important factual questions (as it overlaps with the 'realism about rationality' issue driving some debates in AI safety) - whether we need normative facts, and what difference convergence of moral views may or may not make. The posts are: Normative Realism by Degrees; Normative Anti-realism is self-defeating; Normativity and recursive justification; Prescriptive Anti-realism. The goal here was to explain what moral realists like about moral realism - for those who are perplexed about why it would be worth wanting or how anyone could find it plausible - explain what things depend on it being right or wrong, and how you may or may not retain some of the features of realism (like universalizability) if different anti-realist views are true.

Could we convincingly fake AGI right now with no technological improvements at all? Suppose you took this face and speech synthesis/recognition and hooked it up to GPT-3 with some appropriate prompt (or even retrain it on a large set of conversations if you want it to work better), and then attached the whole thing to a Boston Dynamics Atlas, maybe with some simple stereotyped motions built in like jumping and pacing that are set to trigger at random intervals, or in response to the frequency of words being output by the NLP system.

Put the whole thing in a... (read more)

6gjm4y

At some point (maybe quite some time ago? I'm pretty sure it wasn't more than about a month, though) something changed (at least for me) in the LW comment editor, and not for the better. Perhaps it was when "LessWrong Docs [Beta]" became the default editor? I have no recollection of when that actually was, though. I'd try in "Draft JS", which I assume was the previous WYSIWYG-ish editor, but when I try to select that the result is that I cannot enter anything in the comment box until I switch back to a different editing mode :-).

Under certain circumstances

... (read more)
6habryka4y
Yeah, this is likely a bug in the new editor. The editor is currently only available to people who have opted into beta features, and this specific bug is indeed the reason why we haven't shipped it fully yet. It appears to be a bug in the editor framework we are using that interacts weirdly with the LaTeX editor that I wrote, and it's been on my to-do list to track down and fix.
4gjm4y
Aha. Fair enough! I'd forgotten that I explicitly opted in to beta features :-).
4habryka4y
No worries! I think we should probably have it say "[Beta]" or something like that right next to features that are indeed the result of being opted-into beta features, ideally with a small hover that reminds people that they've opted into seeing those features.
7gjm4y
It does in fact say "LessWrong Docs [Beta]" right there in the comment box. But I didn't have the wit to interpret that correctly as "you're only seeing this because you signed up to enjoy new and unstable things" rather than as "this is a new thing we're rolling out even though it might be a little flaky".

I suggest that, from now on, the default Open Thread text include a suggestion to check the new tags page.

If some enterprising volunteer feels motivated to tag all Open Threads with the Open Thread tag... that'd be helpful and appreciated. (Otherwise I'll probably get around to it in another week or so.)

(You can find all, or at least most, of the monthly open threads in the previous Open Thread sequence, which I think no longer makes sense to maintain now that we have tagging.)

4Gyrodiot4y
Side question: through the Add Posts functionality of the tag page, I'm also finding non-general open threads, and I tagged one by mistake (this one). Should they be included? Do they belong to another tag? My former hobby as a Wikipedia maintainer is leaking...
5Raemon4y
Yeah I think it’s appropriate to include nonstandard open threads. 

Ah, I remembered that there were weekly Open Threads back in the day, and Stupid Questions, and others... so I went ahead and tagged as many as I could. There are now 369 tagged posts and I'm too tired to continue digging for lonely OTs posted by users who didn't post regularly.

2habryka4y
Thank you! 
4Gyrodiot4y
Done, at least for the posts in the sequence. Tag autocomplete was a blessing.
3Raemon4y
Thanks!
3Pattern4y
What was weird about that sequence is that it was less like something that needed an author and more like something that needed a 'create new post in this sequence' button that anyone could click. (An automatic posting feature keyed to time sounds kind of niche, absent auto-reruns or scheduled posting.)
5Raemon4y
Yeah, I think it was basically always supposed to be a tag. 
3Pattern4y
It has more order than other tags though. Time is important for sorting so that the most recent one is clearly available. It's a stack.
9Raemon4y
Yuppers. But, you can sort tags by recency, and I think a fairly obvious thing to do (albeit low priority) is to make it so different tags can have different default-sort-orderings. 

In the latest AI alignment podcast, Evan said the following (this is quoted from the transcript):

But there’s multiple possible channels through which information about the loss function can enter the model. And so I’ll fundamentally distinguish between two different channels, which is the information about the loss function can enter through the gradient descent process, or it can enter through the model’s input data.

I've been trying to understand the distinction between those two channels. After reading a bunch about language models and neural networks, m

... (read more)

Yeah. There's no gradient descent within a single episode, but if you have a network with input (as always) and with memory (e.g. an RNN) then its behavior in any given episode can be a complicated function of its input over time in that episode, which you can describe as "it figured something out from the input and that's now determining its further behavior". Anyway, everything you said is right, I think.
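To make that distinction concrete, here is a minimal sketch (my own illustration, not anything from the podcast or the comment above) of a toy recurrent net run purely in inference mode: the weights stay fixed, so no information enters via gradient descent during the episode, yet the inputs accumulated in the hidden state still shape every later output.

```python
# Minimal sketch (illustrative only): a toy RNN with fixed weights, run with no
# gradient descent inside the episode. Information about the episode enters the
# model only through its inputs, which persist in the hidden state ("memory")
# and influence every subsequent output.
import numpy as np

rng = np.random.default_rng(0)

# Fixed weights, e.g. the result of gradient descent over many past episodes.
W_in = rng.normal(size=(8, 3))        # input -> hidden
W_h = 0.1 * rng.normal(size=(8, 8))   # hidden -> hidden (carries memory)
W_out = rng.normal(size=8)            # hidden -> scalar output

def run_episode(inputs):
    h = np.zeros(8)                    # hidden state starts fresh each episode
    outputs = []
    for x in inputs:                   # note: no weight updates in this loop
        h = np.tanh(W_in @ x + W_h @ h)
        outputs.append(float(W_out @ h))
    return outputs

# Two episodes with different input histories behave differently later on,
# even though the weights never change within either episode.
print(run_episode([np.array([1.0, 0.0, 0.0])] * 5)[-1])
print(run_episode([np.array([0.0, 1.0, 0.0])] * 5)[-1])
```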

Where can I learn more of the frontier thought on schizophrenia? The most compelling theory I've heard is a failure to realize one's internal monologue is one's own, manifested as "hearing voices." However, if I lost that feedback loop and began hearing my thoughts as externally produced, I'd immediately think, "uh oh, schizophrenia" yet this absolutely isn't the case for schizophrenics. What model offers an explanation as to why I'd begin to hear voices yet still maintain I'm not schizophrenic despite that being, to me, an obvious, known symptom?

5ESRogs4y
Don't really know anything about this subject, but your comment has gone unanswered for 15 days, so I'll offer some wild speculation. Well, 1) maybe you wouldn't. Maybe the experience of hearing voices is really compelling, and people by default trust their own senses over a more outside-view perspective. But maybe if you think it through ahead of time and know what to look for you could avoid getting swept away in the delusions. (Though it sounds like it might be pretty exhausting if your mind keeps offering up new, misleading sense data.) 2) Maybe schizophrenia only happens when more goes wrong than just hearing your verbal loop as external. Maybe something about your normal reasoning process is necessarily disrupted at the same time. Or maybe the two are independent, but people who have just the externalized verbal loop or just the reasoning process disruption don't end up diagnosed as schizophrenics. (Though in that case, where are all the people with an externalized verbal loop who are otherwise normal? Or is that secretly a common thing and I just don't know it?) So, there's a couple thoughts. I hope my unfounded speculation has been helpful :-)
3ChristianKl4y
It's just one symptom. https://en.wikipedia.org/wiki/Positive_and_Negative_Syndrome_Scale is an example of a scale for measuring schizophrenia that gets used by actual psychiatrists. In general, the concept of frontier thought seems very unclear to me. The ideas you will hear will depend a lot on the community of people you talk to. If you talk to people who create drugs, they will likely tell you that the problem is due to brain chemistry. The Esalen community had a Schizophrenia Research Project where they ran a long-term study (with time-frames longer than normal drug studies) that found that the drugs prevent people from naturally healing their schizophrenia. Steven Andreas claims that schizophrenia is a downstream effect of having a self-concept built on a lot of "not-X" statements. Yoga folks will tell you it's a malfunctioning solar plexus chakra. With the new ICD codes that might be officially diagnosable, so it's frontier thought.

I am new here... found my way via HN. Logic is unconsciously the weapon of last choice for me, and I am both richer and (mostly) poorer because of it. I have finally found a website where I read the articles all the way to the end instead of skimming and scrolling down to read the comments.


Anyone want to help proofread a post that's more or less a continuation of my last couple posts?

I've been working on an interactive flash card app to supplement classical homeschooling, called Boethius. It uses a spaced-repetition algorithm to economize on the student's time and currently has exercises for (Latin) grammar, arithmetic, and astronomy.

Let me know what you think!
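The post doesn't say which scheduling algorithm Boethius uses, so purely as an illustration of the general idea, here is a sketch of an SM-2-style interval update: the card's review interval grows multiplicatively with each successful recall and resets on failure.

```python
# Illustrative sketch of a generic SM-2-style spaced-repetition update.
# This is NOT the actual algorithm used by Boethius; it just shows the idea.
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0   # days until the card is shown again
    ease: float = 2.5            # multiplier controlling how fast intervals grow
    repetitions: int = 0         # consecutive successful recalls

def review(card: Card, quality: int) -> Card:
    """Update a card after a review; quality ranges from 0 (forgot) to 5 (perfect)."""
    if quality < 3:
        # Failed recall: start over with a short interval.
        card.repetitions = 0
        card.interval_days = 1.0
    else:
        card.repetitions += 1
        if card.repetitions == 1:
            card.interval_days = 1.0
        elif card.repetitions == 2:
            card.interval_days = 6.0
        else:
            card.interval_days *= card.ease
        # Nudge the ease factor based on how easy the recall felt.
        card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card

# Example: three successful reviews push the card out to roughly 1, 6, then ~15 days.
card = Card()
for q in (5, 4, 4):
    card = review(card, q)
    print(round(card.interval_days, 1), round(card.ease, 2))
```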

Having read half of the sequences first and then Dune later, I have the impression that 80-90% of Eliezer's worldview (and thus a big part of the LW zeitgeist) comes directly from the thoughts and actions of Paul Atreides. Everything LW - from the idea that evolution is the driving force of the universe, to the inevitability of AGI and its massive danger, to conclusions on immortality and an affinity for childhood geniuses who rely on Bayesian predictions to win - is heavily focused on in Dune. Sure, Eliezer also references Dune explicitly; I don't think he's hidin

... (read more)
5Daniel Kokotajlo4y
I think you overemphasize the connections. The ideas you mention are indeed present in HPMOR and also in Dune, but that isn't surprising. Say there are 100 non-mainstream ideas in HPMOR and 100 in Dune; is it surprising that 4 of them overlap? No. Even if there are only 10 non-mainstream ideas, 4 of them overlapping isn't surprising either. After all, it's rare for any book to have even one idea that has literally never been explored before. That said, I like your list of connections, and I do think it is evidence that Dune influenced EY, and I do think that people form a lot of their worldview from the literature they find inspiring -- but also, which literature they find inspiring depends on their worldview.
3ChristianKl4y
When EY started the Singularity Institute he didn't believe in the massive danger. He only started believing in the danger once he thought more deeply about AGI. I don't think that Dune is a cause of him starting to believe in AGI's danger. The ability of a Mentat to compute seems very different from the kind of reasoning that Harry employs. The search for immortality is one of the main archetypes that you find in plenty of different fiction and sagas.

I've noticed that a post of my ML sequence appeared on the front page again. I had moved it to drafts about a week ago, basically because I had played around with other editors and that led to formatting issues, and I only got around to fixing those yesterday. Does this mean posts re-appear if they are moved to drafts and then back, and if so, is that intended?

3habryka4y
Yep, that is indeed intended. It's mostly a compromise between a few different problems. A common occurrence is that someone accidentally publishes a post for a few seconds and then unpublishes it; in those cases I wouldn't want to force the author to copy their whole post into a new document if they want to actually publish it a few days later. Sometimes we've also seen people publish major new revisions of posts after a while that deserve another round of attention. The current solution isn't perfect, but I tend to manually adjust the date if I notice that a post was accidentally republished, which tends to catch a good chunk of the false positives of this policy. In the long run I would probably want there to be better UI around unpublishing, and maybe some kind of system where you get a limited number of republishings of a post, with UI that makes it clear what is happening.
4gjm4y

Once upon a time, clicking somewhere at the top left of the LW home page (maybe on "LESSWRONG", maybe on the hamburger to its left, probably the latter) produced a drop-down list of sections one of which was "Meta" and contained things like regular updates on Less Wrong development.

I cannot now find anything similar. I wondered whether maybe there was a "meta" tag that was being used instead, but it doesn't look that way.

I wanted to have a look at information about recent site updates (because at some point something has gone terribly wrong with the editin

... (read more)
4gjm4y
Although I found the "site meta" tag, it didn't really help me answer my question. There don't seem to have been any site-update posts for ages other than the one about the tag system, which isn't obviously relevant to editor issues. I searched for "LessWrong Docs" but the most recent thing I see about it is a post from last November saying that it hadn't been released yet.
2habryka4y
Yeah, we currently are not creating announcements for anything that we publish for beta-users only, because I think it would cause too many distractions and be irrelevant to the vast majority of users. Do feel free to ping us on Intercom (or here) if you ever notice anything weird as a beta-user, and I am happy to give you context or explain things or try to fix things.
2gjm4y
Thanks. As mentioned above, I had forgotten that I'd specifically opted in to anything and assumed it was a site-wide beta. My apologies for whingeing about it in public, therefore.
4gjm4y
I managed to answer part of my own question: there is indeed a "site meta" tag, which I had failed to find by following the "tags" link accessible via the hamburger menu at top left but which is found by searching for "meta". I failed to find it via the "tags" link not because it wasn't there but because it looked (to me) like a heading rather than an actual tag. Duh. Presumably the absence of "meta" from the hamburger menu is because we are no longer supposed to find things via menus but instead by using the tag system. It is not obvious to me that this is an improvement.
2riceissa4y
GreaterWrong has a meta view: https://www.greaterwrong.com/index?view=meta I'm not sure how it's populated or if a similar page exists on LW.
2habryka4y
Yeah, these days we use the "site meta" tag. 

It seems to me that even for simple predict-next-token Oracle AIs, the instrumental goal of acquiring more resources and breaking out of the box is going to appear. Imagine you train a superintelligent AI with the only goal of predicting the continuation of its prompt, exactly like GPT. Then you give it a prompt that it knows is clearly outside of its current capabilities. The only sensible plan the AI can come up with to answer your question, which is the only thing it cares about, is escaping the box and becoming more powerful.

Of course... (read more)

I'm disappointed that the LaTeX processor doesn't seem to accept \nicefrac ("TeX parse error: Undefined control sequence \nicefrac"), but I suppose \frac will suffice.
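For anyone hitting the same error, a quick note (mine, not the original commenter's): \nicefrac is defined by the nicefrac LaTeX package, which MathJax doesn't load, so the workable substitutes in comments are roughly:

```latex
% \nicefrac{1}{2} (from the nicefrac package) is unavailable in MathJax,
% so use one of the standard alternatives instead:
\frac{1}{2}     % stacked fraction
\tfrac{1}{2}    % smaller, text-style stacked fraction
1/2             % plain inline slash, if compactness was the goal
```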

3habryka4y
https://github.com/mathjax/MathJax/issues/1081 Looks like it's been open for 5 years, so sadly somewhat unlikely to close anytime soon.

As anyone could tell from my posting history, I've been obsessing & struggling psychologically recently when evaluating a few ideas surrounding AI (what if we make a sign error on the utility function, malevolent actors creating a sadistic AI, AI blackmail scenarios, etc.) It's predominantly selfishly worrying about things like s-risks happening to me, or AI going wrong so I have to live in a dystopia and can't commit suicide. I don't worry about human extinction (although I don't think that'd be a good outcome, either!)


I... (read more)

7Zack_M_Davis4y
I was pretty freaked out about similar ideas in 2013, but I'm over it now. (Mostly. I'm not signed up for cryonics even though a lot of my friends are.) If you can stop doing philosophy and futurism, I recommend that. But if you can't ... um, how deep into personal-identity reductionism are you? You say you're "selfishly" worried about bad things "happening to you". As is everyone (and for sound evolutionary reasons), but it doesn't really make sense if you think sub specie æternitatis. If an atom-for-atom identical copy of you, is you, and an almost identical copy is almost you, then in a sufficiently large universe where all possible configurations of matter are realized, it makes more sense to think about the relative measure of different configurations rather than what happens to "you". And from that perspective ... Well, there's still an unimaginably large amount of suffering in the universe, which is unimaginably bad. However, there's also an unimaginably large amount of unimaginably great things, which are likely to vastly outnumber the bad things for very general reasons: lots of agents want to wirehead, almost no one wants to anti-wirehead. Some agents are altruists, almost no one is a general-purpose anti-altruist, as opposed to feeling spite towards some particular enemy. The only reason you would want to hurt other agents (rather than being indifferent to them except insofar as they are made out of atoms that can be used for other things), would be as part of a war—but superintelligences don't have to fight wars, because it's a Pareto improvement to compute what would have happened in a war, and divide resources accordingly. And there are evolutionary reasons for a creature like you to be more unable to imagine the scope of the great things. So, those are some reasons to guess that the universe isn't as Bad as you fear. But more importantly—you're not really in a position to know, let alone do anything about it. Even if the future is Bad, this-you loc
3Anirandis4y
Thanks for your response, just a few of my thoughts on your points: To be honest, I've never really *wanted* to be involved with this. I only really made an account here *because* of my anxieties and wanted to try to talk myself through them. I don't buy that theory of personal-identity personally. It seems to me that if the biological me that's sitting here right now isn't *feeling* the pain, that's not worth worrying about as much. Like, I can *imagine* that a version of me might be getting tortured horribly or experiencing endless bliss, but my consciousness doesn't (as far as I can tell) "jump" over to those versions. Similarly, were *I* to get tortured it'd be unlikely that I care about what's happening to the "other" versions of me. The "continuity of consciousness" theory *seems* stronger to me, although admittedly it's not something I've put a lot of thought into. I wouldn't want to use a teleporter for the same reasons. Yes, I agree that it's possible that the future could be just as good as an infinite torture future would be bad. And that my intuitions are somewhat lopsided. But I do struggle to find that comforting. Were an infinite-torture future realised (whether it be a SignFlip error, an insane neuromorph, etc.) the fact that I could've ended up in a utopia wouldn't console me one bit.
5FactorialCode4y
I've gone through a similar phase. In my experience you eventually come to terms with those risks and they stop bothering you. That being said, mitigating x and s-risks has become one of my top priorities. I now spend a great deal of my own time and resources on the task. I also found learning to meditate helps with general anxiety and accelerates the process of coming to terms with the possibility of terrible outcomes.

Hello. I signed up for an account by clicking "Login with GitHub". Now my real name is on the account and there doesn't seem to be a way to change that. Help?

4habryka4y
Sorry! It's been on my to-do list for a while to improve the oAuth experience. Happy to change your username to whatever you like. Just respond here or send me a message in Intercom (the chat interface in the bottom right corner)
2DawnPaladin4y
Please change it to DawnPaladin - that's what I use in most places. Thank you!
2habryka4y
Done! Sorry for the inconvenience!

This article may be interesting for people here: Derek Thompson, The Atlantic: "COVID-19 Cases Are Rising, So Why Are Deaths Flatlining?" https://www.theatlantic.com/ideas/archive/2020/07/why-covid-death-rate-down/613945/

[comment deleted]4y