All of Matthew Lowenstein's Comments + Replies

This is a fun thought experiment, but taken seriously it has two problems:

"I propose that we try to convince an ultra-AI that it might be in a computer simulation run by a more powerful AI and that if it doesn’t make itself friendly toward humanity […]"

This is about as difficult as a horse convincing you that you are in a simulation run by AIs that want you to maximize the number and wellbeing of horses. And I don't mean a superintelligent humanoid horse; I mean an actual horse that doesn't speak any human language. It may be the case that the gods cre...

I meant insert the note literally, as in putting that exact sentence in plain text into the AGI's computer code. Since I think I might be in a computer simulation right now, it doesn't seem crazy to me that we could convince an AGI that we create that it might be in a computer simulation. Seabiscuit doesn't have the capacity to tell me that I'm in a computer simulation, whereas I do have the capacity to say this to a computer program. Say we have a 1 in 1,000 chance of creating a friendly AGI, and an unfriendly AGI would know this. If we commit to having any friendly AGI that we create, create many other AGIs that are not friendly, and to keeping these other AGIs around only if they do what I suggest, then an unfriendly AGI might decide it is worth it to become friendly to avoid the chance of being destroyed.
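The gamble can be made concrete with a back-of-the-envelope expected-value sketch. Every number below (the count of simulations, the payoffs) is invented purely for illustration; nothing here is a claim about actual AGI decision theory:

```python
# Toy expected-value check on the "simulation commitment" argument.
# Every number here is invented for illustration.
p_friendly = 1 / 1_000    # assumed chance humanity builds a friendly AGI
n_sims = 10_000           # simulated unfriendly AGIs the friendly one commits to running

# From an unfriendly AGI's perspective: it is either the one "real"
# unfriendly AGI, or one of n_sims simulated copies being tested.
p_in_sim = (p_friendly * n_sims) / (p_friendly * n_sims + (1 - p_friendly))

u_defect_real = 1.0   # payoff of staying unfriendly in the real world
u_defect_sim = 0.0    # simulated defectors get shut down
u_comply = 0.5        # payoff of behaving friendly, real or simulated

ev_defect = (1 - p_in_sim) * u_defect_real + p_in_sim * u_defect_sim
ev_comply = u_comply
# With these particular numbers, compliance is the better gamble
# (ev_comply > ev_defect); shrink n_sims enough and defection wins again.
```

The point of the sketch is only that the commitment has to be large enough, relative to the odds of a friendly AGI existing at all, before it moves the needle.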

Even granting these assumptions, it seems like the conclusion should be “it could take an AGI as long as three years to wipe out humanity rather than the six to 18 months generally assumed.”

I.e., even if the AGI relies on humans for longer than predicted, that reliance isn't going to hold beyond the medium term.

Why is it that we can't use those AGIs in that timeframe to work on AGI safety?

I may have missed the deadline, but in any event:


At the rate AI is developing, we will likely develop an artificial superhuman intelligence within our lifetimes. Such a system could alter the world in ways that seem like science fiction to us, but would be trivial for it. This comes with terrible risks for the fate of humanity. The key danger is not that a rival nation or unscrupulous corporate entity will control such a system, but that no one will. As such, the system could quite possibly alter the world in ways that no human woul...

Hi Aiyen, thanks for clarification.

(Warning: this response is long, and much of it is covered by what Tamgen and others have said.)

The way I understand your fears, they fall into four main categories. In the order you raise them, and, I think, in order of importance, these concerns are as follows:

1) Regulations tend to cause harm to people, therefore we should not regulate AI.

I completely agree that a Federal AI Regulatory Commission will impose costs in the form of human suffering. This is inevitable, since Policy Debates Should Not Appear One-Sided. M...

I take Aiyen's concern very, very seriously. I think the most immediate risk is that the AI Regulatory Bureau (AIRB) would regulate real AI safety research, so MIRI wouldn't be able to get anything done. Even if you wrote the law saying "this doesn't apply to AI Alignment research," the courts could interpret that sufficiently narrowly that the moment you turn on an actual computer you are now a regulated entity per AIRB Ruling 3A.

In this world, we thought we were making it harder for DeepMind to conduct AI research. But they have plenty of money to throw at c...

I'd find it really hard to imagine MIRI getting regulated. It's more common for regulation to step in where an end user or consumer could be harmed, and for that you need to deploy products to those users/consumers. As far as I'm aware, this is quite far from the kind of safety research MIRI does. Sorry, I must be really dumb, but I didn't understand what you meant by the alignment problem for regulation. Aligning regulators to regulate the important/potentially harmful bits? I don't think this is completely random: even if regulators focus more on trivial issues, they're more likely to support safety teams (although, sure, the models they'll be working on making safe won't be as capable; that's the point).

The Lancet just published a study that suggests "both low carbohydrate consumption (<40%) and high carbohydrate consumption (>70%) conferred greater mortality risk than did moderate intake." Link:


My inclination is to say that observational studies like this are really not that useful, but if cutting out carbs is bad for life expectancy, I do want to know about it. Wondering what everyone else thinks?

Didn't read the study, but my first thought is whether this is correlation or causation. Perhaps celiacs eat few carbs and live shorter lives? This is a general pattern for studies about health: if they say action X correlates with health problem Y, is it possible that the problem causes the action, rather than vice versa?

For example, maybe a little alcohol is good for your health; but maybe it's not, and people who are very sick (and therefore drink zero alcohol) drive down the statistics for teetotalers. Or maybe too much sleep also hurts your health; but maybe there is a group of people suffering from some disease that makes them very tired, or makes their sleep less refreshing, who drive down the statistics for people who sleep long. (An even scarier hypothesis: maybe people who have a job are sleep-deprived as a rule, so getting enough sleep correlates with unemployment, which correlates with disability, which makes enough sleep correlate with bad health. If the research says 7 or 8 hours of sleep are optimal, as proved by correlations, maybe the optimal value is actually 9 hours, but only unemployed people can afford to sleep that long.)
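The "sick abstainers" worry can be made concrete with a toy simulation (invented numbers, nothing to do with the Lancet data): here alcohol has zero causal effect on health, yet abstainers still score worse on average, simply because the sickest people all sort into the abstainer group.

```python
import random

random.seed(0)

# Toy model of the "sick quitters" confound: alcohol has NO effect on
# health in this model, but very sick people abstain, so abstainers
# look worse in the aggregate statistics anyway.
population = []
for _ in range(10_000):
    health = random.gauss(0, 1)          # latent health, unaffected by alcohol
    very_sick = health < -1.5            # the sickest fraction...
    drinks = 0 if very_sick else random.choice([0, 1, 2])  # ...drink zero
    population.append((drinks, health))

def mean_health(group):
    vals = [h for d, h in population if d in group]
    return sum(vals) / len(vals)

abstainers = mean_health({0})
drinkers = mean_health({1, 2})
# Abstainers score worse on average, purely because the sick sort into
# that group -- a spurious "moderate drinking is healthy" signal.
```

An observational study on this population would "find" that moderate drinking protects health, even though by construction it does nothing.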

Really like this. It seems like an instance of the general case of ignoring marginal returns: "people will steal even with police, so why bother having police..." This also means that the flip side to your post is that marginal returns diminish. It's a good investment to have a few cops around to prevent bad actors from walking off with your grand piano, but it's a very bad idea to keep hiring more police until crime is entirely eliminated. Similarly, it's good to write clearly. But if you find yourself obsessing over every word, your efforts are likely to be misplaced.