All of burmesetheater's Comments + Replies

Are long-form dating profiles productive?

If you had trouble finding a partner

You know, one can find a desirable partner after having had trouble finding one. Just finding a partner is not very hard as XX. Please think more carefully about what has (and hasn't) been said before strawmanning. 

I assume that the kind of partners people are seeking are partners that are desirable for them. That assumption seems to apply to at least most people with whom I have spoken about finding a partner. But even if you have evidence from one instance of "one person acquired one partner by doing XY," that's not very robust for making general claims about which strategies provide returns on average.
Are long-form dating profiles productive?

The post originally had several positive karma then got downvoted. The need for "epistemic legibility" is noted.

you haven't spoken with XX who have trouble finding a desirable partner

Haven't spoken with? Who said I'm not in this category lol

If you had trouble finding a partner, having certainty that it's just a matter of doing simple step A and B would not be warranted.
It’s Probably Not Lithium

A more plausible model for why so many people are overweight in the present day:

--cheap calories that taste good widely available with very low effort to obtain

--tasty food is, other things being equal, an easy exploit for reward / motivation loops, so it tends to get used in exactly this way, which results in excess calorie consumption, and of course this is habit-forming. There is probably also a lower threshold to "get into" food than something else in this class, like drugs, since eating is already universal and not taboo or otherwise particularly regulated. 

--fewer obliga... (read more)

I was, and still am, tho much less, excited about the contamination theory – much easier to fix! But I think I'm back to thinking basically along the lines you outlined. I'm currently losing weight and my model of why is:

* I'm less stressed, and depressed, than recently, and I've been able to better stop eating when I'm satiated.
* I'm exercising regularly and intensely; mainly rock climbing and walking (with lots of decent elevation changes). It being sunnier and warmer with spring/summer has made this much more appealing.
* I'm maybe (hypo)manic (or 'in that direction', i.e. 'hypohypomanic'; or maybe even 'euthymic'). I'm guessing recent sunlight-in-my-home changes triggered this (as well as the big recent drop in stress/depression).

I would love to see a study of weight gain in modern hunter-gatherer people that provides an experimental group with 'very palatable' food. I think I would be willing to bet that they would gain some weight. I do also suspect that hunter-gatherers engage in a LOT of fairly strenuous physical activity. Walking – and living in a dense urban walkable city (in my case NYC) – does seem like maybe one of the most feasible ways to try to match that much higher level of overall physical activity. (Rock climbing is also pretty strenuous!)
Air Conditioner Repair

First bullet, those are good points. It is an interesting question how one would get good data on this sort of thing, and how accurate that data would be. 

Second, this isn't the intention, it's to show that the story sounds bizarre. It's not a political comment. 

Thanks for your reply. Sorry about that second bullet point and I no longer endorse it - I think that after being annoyed by the first issue, I was in a looking-for-trouble frame of mind while interpreting the rest and read in something that really wasn't there.
Air Conditioner Repair

What am I supposed to do now? Chargeback?

If you want your money back, sure. The alternative is to fight a company experienced at not giving refunds.

As for warning the community, this kind of thing happens all over the place all the time in all kinds of industries. Complaints to BBB and Yelp tend to be famously ineffective although possibly will demonstrate good citizenship to those who don't know better. Overall, this post is a bit confusing--it's like someone from a completely different society was suddenly transported to modern USA. What are you asking / telling us? 

1 · Nanda Ale · 1mo
BBB may be ineffective at changing public behavior overall, or the company's behavior, but in my experience it is effective at getting monetary results for individual complaints. I have used the BBB twice after failing every other method I could think of. Surprisingly I was contacted and fully 100% refunded very quickly, after doing the legwork for well documented BBB complaints. Both cases were egregious (clearly a full refund was warranted) but all other complaints got me absolutely nothing, not even a partial refund, so there was something special about using BBB. This experience has changed my own behavior. BBB complaints require identity and documentation and I think they are far more reliable than a typical online review. Both companies had a large number of public BBB complaints that I could have checked in advance. Especially in the case of anything with a recurring fee, or any company that is supposed to bill my medical insurance, I now religiously check their BBB information before committing. Same for home repair, plumbing, or any large project. For small companies you can't find much, so it's not a huge help, but the BBB is a good source of information in cases it does cover. Please report this company to the BBB. Especially if they have no BBB complaints yet. The other place I check on contractors is Facebook local groups. Find the most active group for your local neighborhood, town, city, whatever, and just search for posts in the last few years. This typically filters out the worst offenders.
I do not of course know your intentions, but this comment really rubbed me the wrong way:

* Most importantly, there's the everybody-knows dynamic (which, unrelatedly, Zvi has written about). Something that you happen to know is usually not as common knowledge as you think, and even if this case actually is mostly common knowledge, you could probably have found a way to write it that sounds nicer (i.e. less of a you're-an-ignorant-outsider vibe) and/or better supported (got any stats/links showing how common this kind of scamming really is?)
* Less importantly, the 'modern USA' phrasing feels to me like it's taking a dig at something, like (a less extreme version of) whichever of the following feels most unfair to you: "of course this kind of scamming is common - welcome to capitalism", or "of course this kind of scamming is common - welcome to Biden's USA".
I don't think the post is confusing. Everyone is young once. Everyone has that time where, for the first time in their life, sorting out the gas/internet/leak/AC is their responsibility. Many people, within the first couple of times they need to do this, get stung and learn a valuable lesson. Depending on how lucky/unlucky you are you could go a long way before encountering something like this. And, for everyone (at least for me), the first time this happens it is a real surprise. Yes, maybe at an academic level you were able to imagine that there might be conmen out there trying to rob you - but that academic understanding has no power over your perception of reality, because it has never actually intruded on your life. When you were buying stuff in shops or online you never got stung because you were comparing products/websites. This is the first time you are buying a service where the price tag wasn't agreed upon beforehand. The sting comes fairly late in life. When you are renting it's the landlord's problem.
2 · Randomized, Controlled · 1mo
I had a similar reaction, I had the impression that Zvi is more worldly and jaded than the median LWer.
Limits of Bodily Autonomy

This is fundamentally misframed. For example, there's no reason not to support--in some cases--mandatory abortion if you support mandatory vaccination. The main benefits of abortion aren't to the user, they're to the potential conscious entity who mercifully wasn't forced to endure a predictably sub-par life and to society. Abortion isn't really about personal (bodily) autonomy, that's just a useful political expedient. 

edit: is this being downvoted because people think it's anti-abortion? To put this comment in more context,  it's assumed that a... (read more)

AI Training Should Allow Opt-Out

once the cat is out of the bag it's out

Since this was not clear, that's correct. The intention is not to encourage non-contribution to the open internet, including open source projects.

It is a problem in 2022 when someone seriously proposes opt-out as a solution to anything. Our world does not "do" opt-out. Our concept of "opting out" of the big-data world is some inconsequential cookie selection with a "yes" and a buried "no" to make the user feel good. We are far past the point of starting conversations. It's not productive, or useful, when it's predicta... (read more)

Are long-form dating profiles productive?

Productive for what, exactly? There's a lot of assumed context missing from the post, including your gender, and the gender you're targeting. It's also not completely clear what kind of relationship you want, but we'll assume it's serious and long-term.

First: you're XY, looking for XX. In this case, @swarriner's post is applicable to most of the distribution. But since you're here, we'll assume the girl you're looking for is intellectually gifted, data oriented, and may or may not be slightly on the spectrum. Even in this case, pictures are still worth 100... (read more)

It's not controversial. If it were, it would get both agreement and disagreement votes. It's rather low quality. Part of what makes a LessWrong post high quality is epistemic legibility. That would mean that you not only give your opinions but explain the evidence you have for them. Apart from that, the idea that XX looking for a long-term relationship should orient themselves around what's necessary, instead of around increasing the likelihood of finding a partner that has the attributes they want, seems flawed and like you haven't spoken with XX who have trouble finding a desirable partner.
AI Training Should Allow Opt-Out

Disagree. Public (and private) data will be used by all kinds of actors under various jurisdictions to train AI models, and predictably only a fraction of these will pay any heed to an opt-out (and only a fraction of those who do may actually implement it correctly). So an opt-out is not only a relatively worthless token gesture; the premise of any useful upside appears to be based on the assumption that one can control what happens to information one has publicly shared on the internet. It's well evidenced that this doesn't work very well.

Here's another approach: if you're worried about what will happen to your data, then maybe do something more effective, like not put it out in public.

I agree that whatever is available will be used to whatever extent can be gotten away with. And opt-outs generally don't work that well, or can even be counterproductive, as in the case of tailoring ads to people who opt out of receiving them. That being said, an opt-out token is better than what is available now, which is totally nothing. Yes, this would only work if all players actually respected it (ha!), but it's a start. And it may get the conversation going. Like Stallman with GNU. Your proposed approach isn't helpful. You're pretty much suggesting to stop contributing to open source (wikipedia, social media, anything), as you can never tell what will be done with your code (or what you wrote). Though after reading through this again it might simply be a communications problem, where I assume your statement is snark rather than a valid observation that once the cat is out of the bag it's out, and a general exhortation to increase one's digital hygiene. Still - open source sort of requires people to be able to read it in order for it to work...
Against Active Shooter Drills

If your response to that idea is ‘what, what, that sounds horrible and terrifying and we should absolutely positively not do that’ then you seem like a normal human to me.

Or maybe it's dull, boring and dumb like most other things in school. How you perceive the threat of mass shootings, or anything else, is not one-size-fits-all. School tends to be a ways down on the list of one's influences at any age and if one's dearer influences consider shootings to be a very unlikely cause of problems to one's health, as is objectively the case, one might simply thin... (read more)

Alignment Risk Doesn't Require Superintelligence

Destructive alignment issues in our species are more mundane. Several leaders in the 20th century killed outright very large numbers of people for completely banal reasons like political ambition. Actually, your intuition that 9/11 events happen "all the time" is only off in a temporal sense; the number of humans unambiguously killed by the coordinated actions of relatively few other unaligned humans in the last 100 years is so great that it is probably enough to have at least one 9/11 a day during that time. Humans are generally unaligned on several level... (read more)

Agreed. Look at the wars of just the past 100 years, the Spanish flu, and the damage caused by ignorant statements of a few famous or powerful people during the COVID-19 pandemic. We start to see a picture where a handful of people are capable of causing a large amount of damage, even if they didn't anticipate it. If they set their mind to it, as probably with the Ukraine war at the moment, then the amount of destruction is very asymmetrically proportioned to the number of people responsible for it.
Yes, AI research will be substantially curtailed if a lab causes a major disaster

and have the best forecasters

With forecasters from both sides given equal amounts of information, these institutions might not even reliably beat the Metaculus community. If one is such a great forecaster then they can forecast that jobs like this might not be, among other things, that fulfilling.

I don't know if we've gotten to the point where they can fool the professionals at not getting fooled

Quite a few professionals (not at not getting fooled) still believe in a roughly 0% probability of a certain bio-related accident a couple three years ago thanks i... (read more)

Yes, AI research will be substantially curtailed if a lab causes a major disaster

Well, there's a significant probability COVID isn't a "natural" pandemic, although the story behind that is too complicated without an unambiguous single point of failure which hinders uptake among would-be activists.

If there's an AI failure will things be any different? There may be numerous framings of what went wrong or what might be addressed to fix it, details sufficient to give real predictive power will probably be complicated, and it's a good bet that however interested "the powers that be" are in GOF, they're probably much MUCH more interested in AI... (read more)

My main thesis regarding how a non-existential AI disaster would happen in practice is this (and I don't think it would happen): Google or Facebook or some other large tech company publicly releases an agent that's intelligent enough to be middling at wargames but not enough to do things like creative ML research, and people put it in one or more of IOT devices/critical infrastructure/military equipment. Surprise: it has a bad value function and/or edge case behavior, and a group of agents end up deliberately and publicly defecting and successfully killing large numbers of people. In this scenario, it would be extremely obvious that the party responsible for marketing and selling the AI was FaceGoog, and no matter what the Powers That Be wanted, the grieving would be directing their anger towards those engineers. Politicians wouldn't individually give much of a shit about the well being of The Machine and would instead be racing to see who could make the most visible condemnations of Big Tech and argue over which party predicted this would happen all along. Journalists would do what they always do and spin the story according to their individual political ideologies and not according to some institutional incentives, which would be more about painting their political opponents as Big Tech supporters than instrumentally supporting the engineers. Whatever company was responsible for the problem would, at a minimum, shutter all AI research. Congress would pass some laws written by their lobbyist consultants, of whom, who knows, maybe one or two could even be said to be "alignment people", and there would be a new body of oversight analogous to the FDA for biotech companies. And I appreciate the viewpoint that this is either just one timeline, or relies on premises that might be untrue, but in my head at least it just seems like it falls into place without making many critical assumptions.
Generally, I endorse the comparison of AI with nuclear weapons (especially because AI is currently being mounted on literal nuclear weapons). But in this case, there's a really big distinction that should be made between mass media and specialized institutions. Intelligence/military agencies, specialized Wall Street analyst firms, and bureaucracy leadership all probably know things like exactly how frequently Covid causes brain damage, and have the best forecasters predicting the next outbreak. For them, it's less about spinning stories, and more about figuring out what type of professional employees tend to write accurate/predictive reports and forecasts. Spun stories are certainly more influential than they were 10 years ago, and vastly more influential than they appear to the uninitiated, but I don't know if we've gotten to the point where they can fool the professionals at not getting fooled. Arms control has happened in the past even though it was difficult to verify, and nuclear weapons were centralized by default, so it's hard to know anything about how hard it is to centralize that sort of thing.
Let's not name specific AI labs in an adversarial context

Anyone can try, this seems way out in a practically invisible part of the tail of obstacles to not being destroyed by AGI, if it's even an obstacle at all. 

Hmm. I think I just disagree. LW and alignment forum content will likely be noticed by many people working on capabilities, whether or not they agree with it, especially as capabilities progress further. Just as an example, Sam Altman tweeted out Yudkowsky's latest doom post. And convincing capabilities researchers seems like potentially the biggest obstacle, not a small one. I haven't justified my beliefs much, but I can if you can point out where exactly we're most likely to disagree.
Why so little AI risk on rationalist-adjacent blogs?

Most probably just haven't identified it as salient / don't understand it / don't take it seriously, and besides there tend to be severely negative social / audience ramifications associated with doomsday forecasting. 

How are compute assets distributed in the world?

One way to maybe shed some light on this is to sort the latest Top500 results by location (maybe with extra work to get the specific locations inside the country, if required). There is a very long tail but most of it should correlate with investment in top infrastructure. Of course certain countries (US, China) might have undeclared computing assets of significant power (including various private datacenters), but this probably doesn't change the big picture much. 
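A minimal sketch of that sorting exercise, assuming you have the list as records in hand (the systems, field names, and numbers below are made up for illustration; the real TOP500 site publishes downloadable lists with per-system country and Rmax figures):

```python
from collections import defaultdict

# Hypothetical TOP500-style records: country of installation and Rmax in PFlop/s.
# A real analysis would load the full 500-entry list from the TOP500 site's export.
entries = [
    {"system": "A", "country": "USA", "rmax_pflops": 1102.0},
    {"system": "B", "country": "Japan", "rmax_pflops": 442.0},
    {"system": "C", "country": "Finland", "rmax_pflops": 152.0},
    {"system": "D", "country": "USA", "rmax_pflops": 93.8},
    {"system": "E", "country": "China", "rmax_pflops": 93.0},
    {"system": "F", "country": "China", "rmax_pflops": 61.4},
]

# Aggregate declared compute and system counts per country.
totals = defaultdict(float)
counts = defaultdict(int)
for e in entries:
    totals[e["country"]] += e["rmax_pflops"]
    counts[e["country"]] += 1

# Rank countries by aggregate declared compute, descending.
for country, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{country}: {counts[country]} systems, {total:.1f} PFlop/s")
```

Even this crude grouping shows the long tail: a handful of countries dominate the declared totals, which is the correlation with top-infrastructure investment mentioned above.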

How much stupider than humans can AI be and still kill us all through sheer numbers and resource access?

A stupid AI that can generate from thin air things that have both useful predictive power and can't be thought of by humans, AND that can reliably employ the fruits of these ideas without humans being suspicious or having a defense...isn't that stupid. This AI is now a genius.

What might an irate e-chimp do if their human handler denied it a banana?

Who cares? For one, if we're talking about an AI and not a chimp em this is an obvious engineering failure to create something with all the flaws of an evolved entity with motivational pressures extraneous and harmful to users. Or in other words this is a (very) light alignment problem that can be foreseen and fixed. 

How much stupider than humans can AI be and still kill us all through sheer numbers and resource access?

How much real power does the AI have access to, and what can humans do about it?

To reframe your question, even relatively small differences in human intelligence appear to be associated with extraordinary performance differences in war: consider the Rhodesian Bush War, or the Arab-Israeli conflict. Both sides of each conflict are relatively well-supplied and ideologically motivated to fight. In both cases there is also a serious intellectual giftedness gap (among other things) between the competing populations and the more intelligent side is shown to win ... (read more)

I see where you are coming from, but I don't think comparing an adversarial AI/human interaction with a human/human interaction is fruitful. Even a "stupid" AI thinks differently than a human would, in the way that it considers options no human ever would think of or take seriously. Self-cloning and not having to consider losses is another approach humans have no luxury of. I would start by asking a question that no one at MIRI or elsewhere seems to be asking: What might an irate e-chimp do if their human handler denied it a banana? (I.e. what are the dangers of gain-of-function research in sub-human A[G]I)
Let's not name specific AI labs in an adversarial context

So, to be clear, you don't think confidently naming people by first name as destroying the world can be parsed emotionally by them?

Mentions of AI companies / AI personalities on LW will intrinsically tend to be adversarial, even if the author spares us a polemic or terms like "so and so is working to destroy the world," because misaligned AI destroying the world is clearly THE focus of this community. So it can be argued that to be meaningful, a policy of no names would need to be applied to practically any discussion of AI as even if some AI content is... (read more)

re: first para Yes that is true, I think you've weakened my opinion a bit. re: second para Are you just saying it is so difficult to convince capabilities researchers to stop that we shouldn't even bother to try?
Let's not name specific AI labs in an adversarial context

This seems like a case of making a rule to fix a problem that doesn't exist.

Are people harassing individual AI labs or researchers? The tendency for reasonable people who are worried about AI safety should be to not do so, since it predictably won't help the cause and can hurt. So far there does not seem to be any such problem of harassment discernible from background noise.

Naming individual labs and / or researchers is interesting, useful, and keeps things "real." 

So, to be clear, you don't think confidently naming people by first name as destroying the world can be parsed emotionally by them? It's harder to focus on object-level when your name is on something imo, whether or not it's targeted at you or intended to be harassing.
Why don't you introduce really impressive people you personally know to AI alignment (more often)?

A conventional approach might lead one to consider that inside the LW / AI safety bubble it borders on taboo to discount the existential threat posed by unaligned AI, but this is almost an inversion of the outside world, even if limited to the 25/75 of what LW users might consider "really impressive people."

This is one gateway to one collection of problems associated with spreading awareness of AI alignment, but let's go in a different direction: somewhere more personal.

Fundamentally, it seems a mistake to frame alignment as an AI issue. While unaligned AGI... (read more)

How dangerous is human-level AI?

"human-level AI" is a confusing term for at least a couple reasons: first, there is a gigantic performance range even if you consider only the top 1% of humanity and second it's not clear that human-level general learning systems won't be intrinsically superhuman because of things like scalable substrate and extraordinarily high bandwidth access (compared to eyes, ears, and mouths) to lossless information. That these apparent issues are not more frequently enumerated in the context of early AGI is confusing. 

As far as I'm aware all serious attempts to... (read more)

Why it's bad to kill Grandma

There seems to be a deep problem underlying these claims: even if humans have loosely aligned intuition about what's right and wrong, which isn't at all clear, why would we trust something just because it feels obvious? We make mistakes on this basis all the time and there are plenty of contradictory notions of what is obviously correct--religion, anyone?

Further, if grandma is in such a poor state that simply nudging her would kill her AND the perpetrator is such a divergent individual that they would then use the recovered funds to improve others' lives (... (read more)

It doesn't seem to me that obviousness is proof enough that an intuition is good, but something appearing "obvious" in your brain might be a marker/enrich for beliefs that have been selected for in social environments. There are certainly times when it's good to break universal maxims. Yet I don't think it's very easy to be a person who is capable of doing that -- the divergent individual you're talking about. Let's take lying, for example. It is generally good to be honest. There are times when it is really net good and useful to lie. But if you're someone who becomes very good at lying and habituated to lie, you probably start overriding your instincts to be honest. Maybe a divergent individual who says "fuck off" to all internal signals and logically calculates out the results of every decision could get away with that. But I think those people really run the risk of losing out on information baked into the heuristics. Similarly, I don't think those divergent people are really optimal actors, in the long-run. There are certainly times when it's good to break universal maxims; but even for you, as an individual, it's probably not good to do it all the time. If you get known as a grandma-pusher, you're going to be punished, which is probably net good for society, but you also reduced your ability to add resources to the game. Human interaction is an iterated game, and there are no rings of Gyges.
Is there any way someone could post about public policy relating to abortion access (or another sensitive subject) on LessWrong without getting super downvoted?

Writing about politics isn't discouraged because of sensitivity, but because political positions tend to be adopted for bad epistemological reasons, have poor predictive power and little to do with rationality. Correspondingly, framing a topic politically is a good indicator that the author has resorted to poor argumentation and is very unlikely to update their views based on superior argument or evidence, which is a little annoying and not less wrong. These are general problems not limited to discussing politics but for politics it's especially bad.

My impression has been it's presumed that a position presented will have been adopted for bad epistemological reasons and that it has little to do with rationality without much in the way of checking. I'm not asking about subjects I want to or would frame as political. I'm asking if there are some subjects that will be treated as though they are inherently political even when they are not.
Revisiting "Why Global Poverty"

I'm far more skeptical of the "governments have this covered" position than I was in 2015. Some of this is for theoretical reasons (ex: preventing catastrophe benefits people beyond your country) and some of it is from observing governments more (ex: pandemic response).

This is an interesting response to the perceived folly of trusting that our authorities can handle a cosmic body appearing on track to collide with the planet as there would seem to be more fundamental issues at play: that many such bodies may be unidentified, including due to long period or... (read more)

Double Asteroid Redirection Test was launched in 2021 and will impact in September 2022. I think it is in fact correct that planetary defense is, relative to other existential risks, well funded and not neglected.
Probability that the President would win election against a random adult citizen?

Random person most likely has an IQ of 100 which is a standard deviation or two below a random president. Random person most likely has less talent and experience at politics, promotion and attack than a politician. Random person is most likely less physically attractive, less charismatic, and less wealthy than a politician. Random person doesn't have a gigantic support apparatus behind them (although even if they do they're probably still screwed because it's not enough to make up for the rest of the deficiencies). As others say it won't even be close. Random person will predictably be slaughtered. 

What will happen when an all-reaching AGI starts attempting to fix human character flaws?

What is the question? It seems to have something to do with AGI intervening in personality disorders, but why? AGI aside, considering the modification of humans to remove functionality that's undesirable to oneself it's not at all clear where one would stop. Some would consider human existence (and propagation) to be undesirable functionality that the user is poorly equipped to recognize or confront. Meddling in personality disorders doesn't seem relevant at this stage.

1 · Michael Bright · 2mo
My main concern is: Humans can be irrational and illogical, allowing them to let things slide for better or for worse. There are also psychological and reach limitations that put a hard cap on them somewhere. An AGI will most likely do everything it does rationally and logically. Including emotions. And this may be detrimental to most humans. Yes
What is the state of Chinese AI research?

A few reflections on a tragically wrong comment:

  1. What does what I think matter? Make argument, don't invoke myself if not necessary.
  2. It seemed obvious why the analysis is biased, but maybe this isn't the case, and maybe more info should have been provided. Mostly the concern here is over wording like "Xi seems to be doing his level best to wreck the Chinese high-tech economy" and "shortsighted national-security considerations" and "Uighur oppression" and (to paraphrase) maybe their leader is insane enough to invade Taiwan. To have all of these pop up in a si
... (read more)
What is the state of Chinese AI research?

Probability is high that all nations with strong AI research are keeping secrets, since some AI research will naturally go into projects with high secrecy. A better question is what the proportion of published to secret research is in USA, China, etc. It might actually be similar, which could suggest that China is pretty far behind. 

1 · David Johnston · 2mo
For what it's worth, I agree that there's clear evidence of ill-will towards the Chinese government (and, you know, I don't like them either). It's reasonable to suspect that this might colour a person's perception of the state of thing that the Chinese government is involved with. It is also superficial, so it's not like I can draw any independent conclusions from it to defray suspicions of bias. I'm also not giving it a lot of weight.
A few reflections on a tragically wrong comment:

1. What does what I think matter? Make the argument; don't invoke myself if not necessary.
2. It seemed obvious why the analysis is biased, but maybe this isn't the case, and maybe more info should have been provided. Mostly the concern here is over wording like "Xi seems to be doing his level best to wreck the Chinese high-tech economy" and "shortsighted national-security considerations" and "Uighur oppression" and (to paraphrase) maybe their leader is insane enough to invade Taiwan. To have all of these pop up in a single paragraph that's supposed to be about AI raises some red flags. Does it need to be explained why?
3. Calling the analysis superficial without explicitly justifying this is problematic, particularly as the response is even more superficial. Yesterday I saw that gwern's response was heavily upvoted but didn't understand why; maybe it is part of a mechanism to keep people below a certain intellectual threshold off the site.
This part of the thread is both biased and extremely superficial. It may also be correct, but I would give low credence at face value.
What to do when starting a business in an imminent-AGI world?

You can't account for AGI because nobody has any idea at all what a post-AGI world will look like, except maybe that it could be destroyed to make paperclips. So if starting a business is a real calling, go for it. Or not. Don't expect the business to survive AGI even if it thrives pre-arrival. Don't underestimate how much your world may change: scenarios like you (or an agent somewhat associated with the entity formerly known as you, or even anyone else at all) running a business might not make sense--the concept of business is a reflection of how ou... (read more)

You get one story detail

Which category does this story fit into? 

Ukraine Post #8: Risk of Nuclear War

losing all the friends it has left with the possible exception of Iran

To be pedantic, they also very likely wouldn't lose Syria or North Korea. 

Greyed Out Options

In any moment, you have literally millions of options.

Has anyone actually attempted to calculate the possible degrees of freedom available to a human being at any instant? There are many millions of websites that could be brought up in those tabs alone. 

One of the top things I'd love to see in my lifetime is a map of every thought that every human has had ever. Google might have a good dataset to build this. Studying the structure of this would answer your question. (Although I am personally more interested in it to identify what are the spaces of possible thought that humans have not frequented yet.)
There are literally infinitely many, depending on how you want to define it - I could go to Google and search any number I want, up to a googol at least. And if I decide to do it a fraction of a second later, that's another googol of options. And that's just typing things into Google.
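The reply's point can be made concrete with a line of arithmetic. (The googol bound comes from the reply above; framing queries as decimal integers is just an illustrative assumption.)

```python
# Back-of-the-envelope count of "options" available just from typing a
# decimal integer into a search box (assumption: queries are integers
# from 0 up to a googol, inclusive).
googol = 10 ** 100
num_queries = googol + 1  # distinct integer queries in that range

# Even this narrow slice of the option space dwarfs "millions":
print(num_queries > 10 ** 6)  # True
```

And this counts only one input channel at one instant; waiting a fraction of a second multiplies the space again.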
Why learn to code?

If you're into information, then learning to code can help you acquire more information more easily and process it in beautiful ways that would be laborious or impractical otherwise. That's probably the simplest explanation with the broadest appeal. At the risk of downvotes (maybe there are a lot of professional coders here), I'm not sure why anyone would want a job coding: you risk the fun aspect for someone else's purposes, in exchange for some tokens and quite a lot of your time. 

What an actually pessimistic containment strategy looks like

Taking for granted that AGI will kill everybody, and taking for granted that this is bad, it's confusing why we would want to mount costly yet quite weak, largely symbolic measures that would merely (possibly) slow down research.

Israel's efforts against Iran are a state effort and are not accountable to the law. What is proposed is a ragtag amateur effort against a state orders of magnitude more powerful than Iran. And make no mistake, AGI research is a national interest. It's hard to overstate the width of the chasm. 

Even gaining a few hours is pretty ... (read more)

It's hard to overstate the width of the chasm. 

I think you're overstating the width of the chasm. Where are you getting the impression that congress or the state department is so committed to AGI in a non-symbolic way? Most of the research towards AGI capabilities is done at the moment by private actors, staffed almost entirely by the kind of systematizing nerd most often concerned by existential risk. What exactly is the grand difficulty you expect scaling up the kind of outreach I outlined in the post to a slightly more hostile environment?

Avoiding Moral Fads?

How are we to know that we aren't making similar errors today?

Based only on priors, the probability we aren't is very low indeed. A better question is, given an identified issue, how can change happen? One main problem is that contra-orthodox information on moral issues tends not to travel easily.  

Book review: Very Important People

This isn't really much different from life outside the club. Social forces are often not aligned with majority personal preference and can even be in conflict. For example, people want to make friends or hook up but seeking those goals explicitly tends to be perceived as low-class and / or strange. 

I think there is a social equivalent of "(not) hiring the top 1%".

(In the linked article, the author questions what really happens when you invite 100 people for an interview and only hire the best one. Naively, it seems like you are hiring the top 1%. But actually, some kinds of people are overrepresented at job interviews, namely those who can't get a job, and those are exactly the ones you want to avoid. And okay, most likely the best 1 out of 100 is not like this, but because the sample is skewed, they are not the top 1% either. Maybe top 10%, or top 50%, o... (read more)

Interacting with a Boxed AI

I'm not sure considering how to restrict interaction with a super-AI is an effective way to address its potential risks, even if some restrictions might work (and it is not at all clear that such restrictions are possible). Humans tend not to leave capability on the table where there's competitive advantage to be had, so it's predictable that even in a world that starts with AIs in secure boxes, there will be a race toward less security to extract more value.

Russian x-risk newsletter March 2022 update

If the US knew of a way to locate subs, then it would worry that Russia or China would figure it out, too

There are many conceivable ways to track subs and this is only part of the problem because subs still need to be destroyed after being located. Russia and China combined don't have enough nuclear attack subs to credibly do this to the US. The US does have enough nuclear attack subs to credibly destroy Russia's deterrent fleet, if they can be tracked, with attack subs left over to defend our own ballistic missile subs. A primary mission for nuclear attac... (read more)

What would make you confident that AGI has been achieved?

at what point would you expect the average (non-rationalist) AI researcher to accept that they’ve created an AGI?

Easy answers first: the average AI researcher will accept it when others do.

at what point would you be convinced that human-level AGI has been achieved?

When the preponderance of evidence is heavily weighted in this direction. In one simple class of scenario, this would involve unprecedented progress in areas limited by things like human attention, memory, I/O bandwidth, etc. Some of these would likely not escape public attention. But there are a lot of directions AGI can go.

Could you give a specific hypothetical if you have the time? What would be a specific example of a scenario that you’d look at and go “welp, that’s AGI!” Asking since I can imagine most individual accomplishments being brushed away as “oh, guess that was easier than we thought”
Formal epistemiology for extracting truth from news sources

To the extent that there are believers, you won't change their mind with reason, because their beliefs are governed, guarded and moderated by more basic aspects of the brain--the limbic system is a convenient placeholder for this.

So a problem you are focused on is that a minority (or majority) of individual opinions are prevented from being honestly expressed. Flipping a small number of individual opinions, as is your motivation, does not address this problem.

Wargaming AGI Development

Because the benefits of quantum computing were so massive

Please elaborate. I'm aware of Grover's algorithm, Shor's algorithm, and quantum communication, and it's not clear that any of these poses a significant threat to even current means of military information security / penetration.

This is an element in the game's resolution which wasn't described, so I don't actually know. If I were to guess based on the level of abstraction used in games like this, it might just be a strong assumption of quantum supremacy that caches out as a series of advantages like:
* Your communications are completely secure against any faction which does not also have Quantum Computing.
* Your attempts to penetrate the communication of any faction without Quantum Computing are 25% more likely to succeed.
* Your available FLOPs increase 25% after Quantum Computing.
I think this reflects the assumptions which underlie the game; this is one of the things we would want to be able to vary in order to help with exploring AGI scenarios.
It poses a threat because the military moves much slower than whatever you're naively assuming is ubiquitous COMSEC. Many CIA assets over the last twenty years, even some today, sent their communiques through channels protected mostly by quantum-breakable encryption. If China/Russia got a quantum computer now (hell, probably even 15 years from now), it would be almost immediately followed by volleys of executions of our spies.
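For a rough sense of scale on the two algorithms named upthread: the conventional rule of thumb is that Grover's search roughly halves the effective bit strength of symmetric ciphers, while Shor's algorithm breaks RSA/ECC outright. A minimal sketch under those standard assumptions (the scheme names and numbers are illustrative, not from the thread):

```python
def effective_security_bits(scheme: str, classical_bits: int) -> int:
    """Approximate security level (in bits) against a large quantum computer.

    - Grover's algorithm gives a quadratic speedup on brute-force key
      search, roughly halving the effective strength of symmetric keys
      and hash preimages.
    - Shor's algorithm factors and solves discrete logs in polynomial
      time, so RSA/ECC drop to effectively zero strength.
    """
    if scheme in ("symmetric", "hash-preimage"):
        return classical_bits // 2  # Grover: sqrt speedup on search
    if scheme in ("rsa", "ecc"):
        return 0                    # Shor: broken outright
    raise ValueError(f"unknown scheme: {scheme}")

print(effective_security_bits("symmetric", 128))  # AES-128 -> ~64 bits
print(effective_security_bits("ecc", 256))        # P-256 -> 0 (broken)
```

This is why the asymmetric (RSA/ECC) channels used for key exchange and spy communiques are the acute concern, while symmetric ciphers can be patched by doubling key sizes.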
Formal epistemiology for extracting truth from news sources

I'm interested in whether there have been any attempts at formal rules for transforming a media feed into a world model. Preferably with Bayesian inference and cool math. So I can try to discuss these with my friends and maybe even update my own model.
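A minimal sketch of the kind of formal rule being asked about: a Bayesian update on a single binary claim after seeing a media report, given assumed source reliability rates (all numbers here are illustrative, not from any real source):

```python
def posterior(prior: float, p_report_given_true: float,
              p_report_given_false: float) -> float:
    """Bayes update on observing a media report of a claim.

    prior:               P(claim true) before the report
    p_report_given_true: P(source reports claim | claim true)
    p_report_given_false: P(source reports claim | claim false)
    """
    num = p_report_given_true * prior
    den = num + p_report_given_false * (1 - prior)
    return num / den

# A hypothetical source that reports true claims 80% of the time
# and false ones 30% of the time, starting from a 50/50 prior:
p = posterior(prior=0.5, p_report_given_true=0.8, p_report_given_false=0.3)
print(round(p, 3))  # 0.727
```

The hard part in practice is not the update rule but estimating the conditional reliabilities, which is where any such scheme tends to founder.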

So you are interested in changing other people's minds, using reason, on a complicated issue that has more to do with the limbic system than with the brain's rational hardware. This distribution of influence is one reason why their intelligence isn't really important here, and it is also why your strategy won't work.

More generally, you are in a trap. Be skeptical of your own motivations. The least worst course of action available is probably to disengage.

What do you mean when you write that it is about the 'limbic system'? For me it suggests that you imagine a big enthusiastic nationalistic crowd - but read [] - most of the people are forced to be 'supporters'. Many of them will learn to be more 'enthusiastic' with time, because Russian society has been trained for that for a long time. For sure there are also true believers - but the whole game is about making everyone look like a believer - it is all about []

I lived in communist Poland and attended the 1st of May parades - I remember the coercion, but I also remember how absurd it all felt; I did not understand why it happened when everyone seemed to be against it. Maybe I just lived at the very decline of that system in Poland. Only after reading Solzhenitsyn did I start to build a theory of how it works.

Some people can really just force themselves to believe; this is easier when you don't have a strong drive to gather information or are incapable of processing the contradictions, but on the other hand the more intelligent are also better at resolving the contradictions to the 'right' conclusions - delusions are a hell of a drug (in Gulag Archipelago Solzhenitsyn writes about the true communists in the prison camps and the ways they justify their own suffering). For others it is about developing a split personality, something like a [] that you can switch on when needed. You would also expect psychopaths to be very successful in that society.
Danger(s) of theorem-proving AI?

Realistically, a complexity limit may not be imposed on practical work if the AI is considered reliable enough and creating proofs too complex to verify otherwise is useful, and it's very difficult to see a complexity limit being imposed on theoretical exploration that may end up in practical use.

Still, in your scenario the same end can be met by a volume problem, where new AI-generated proofs with important uses arrive faster than the external capability of humans to verify them, even in the case that individual AI proofs are in principle verifiable by humans... (read more)
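The volume problem can be made concrete with a toy model (all rates below are illustrative assumptions, not claims about actual proof-generation capacity):

```python
def unverified_backlog(gen_per_day: float, verify_per_day: float,
                       days: int) -> float:
    """Toy model: AI-generated proofs vs. human verification capacity.

    Even if every individual proof is verifiable in principle, a
    generation rate above the verification rate means the number of
    unchecked-but-possibly-deployed results grows without bound.
    """
    generated = gen_per_day * days
    verified = min(generated, verify_per_day * days)
    return generated - verified

# Hypothetical: 100 proofs/day generated, 10/day humanly verified.
print(unverified_backlog(gen_per_day=100, verify_per_day=10, days=365))
# 32850.0 unverified proofs after one year
```

So "each proof is checkable" and "the proof stream is checked" come apart as soon as the generation rate exceeds the verification rate.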

That seems like a very mild danger compared to risks from other AI models, tbh.
Danger(s) of theorem-proving AI?

AI becomes trusted and eventually makes proofs that can't otherwise be verified, makes one or more bad proofs that aren't caught, the results get used for something important, and the important thing breaks unexpectedly. 

Let's say we limit it further so all proofs have to be verifiable within some reasonable complexity limit. In such a case, we wouldn't need to trust it. What then?
Who is doing Cryonics-relevant research?

I do not feel entirely comfortable talking the whole thing over with my profs.

If you're going to take a 3-month internship they will all know about anyway, it can't hurt to talk about it, right? Cryonics isn't really that taboo, especially if, as it appears, you will take the position that you don't expect current methods to work (but you would like to see about creating ones that might).