ChristianKl

Sequences

Random Attempts at Applied Rationality
Using Credence Calibration for Everything
NLP and other Self-Improvement
The Grueling Subject
Medical Paradigms

Wiki Contributions

Comments

I’m mildly skeptical that blindness prevents schizophrenia

Well, so much for that plan!

An incidence rate of 1 in 2 million might not be enough for the Danish (5.831 million) and Western Australian (2.667 million) populations, but might be enough for the United Kingdom (67.733 million).
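As a rough sketch of that arithmetic (a minimal Python calculation, assuming the 1-in-2-million rate applies uniformly to the populations cited above):

```python
# Expected number of cases at an incidence rate of 1 in 2 million,
# for the populations mentioned above.
incidence = 1 / 2_000_000

populations = {
    "Denmark": 5_831_000,
    "Western Australia": 2_667_000,
    "United Kingdom": 67_733_000,
}

for region, pop in populations.items():
    print(f"{region}: ~{pop * incidence:.1f} expected cases")
# Denmark: ~2.9, Western Australia: ~1.3, United Kingdom: ~33.9
```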

The United Kingdom does want its data to be analyzed. I know an EA who works as a contractor doing ML to help them with data analysis. However, I don't know the exact rules under which that data analysis happens. Message me if you want the contact.

Oversight Misses 100% of Thoughts The AI Does Not Think

You think literally no part of their brains is tracking that policy X is about the coverup of sexual abuse?

The problem is not that no part of their brain tracks it. It's just that it's not the central reason they would give when describing why they do what they do, and it's not the story they tell themselves.

Step 6: "Great, I've now figured out that this DNA sequence corresponds to a deadly pathogen. Just synthesize it and release it into the air. Anyone who could have got cancer or already has cancer will die quickly, curing cancer."

I don't think the problematic actions of AGIs are likely to be the kind that can be described in that fashion. They are more likely to be 4D chess moves whose effects are hard to understand directly.

It might be something like: "In our experiments where doctors are supposed to use the AGI to help them make treatment decisions, those doctors regularly overrate their own competency and don't follow the AGI's recommendations, and as a result patients die unnecessarily. Here's an online course your doctors could take that would help them understand why it's good to follow AGI recommendations."

Actions like that seem totally reasonable, but they increase AGI power relative to human power. Economic pressure incentivizes that power transfer.

I wouldn't expect us to go directly from AGI with human supervision to AGI that kills all humans via a deadly pathogen. We are more likely to go from AGI with human supervision to AGI that effectively operates without human supervision. Then, in a further step, AGIs that operate without human supervision centralize societal power in themselves, and after a few years there are no resources left for humans.

And the Revenues Are So Small

There was no mechanism that seemed like it would have reliably stopped these provisions if they had been an order of magnitude or two worse, and indeed the original BBB bill seemed to have a number of things in that category.

The general mechanism for stopping provisions that are an order of magnitude or two worse is lobbying, and in this case that's likely exactly what happened. The original BBB bill had a bunch of those things, and then lobbyists came and fought the bill.

Lobbyist power is not absolute, and there will be policies that damage business interests that lobbyists can't prevent. On the other hand, Washington at present does not look to me like a place where lobbyists have too little power.

AllAmericanBreakfast's Shortform

People's willingness to spend on healthcare changes with the amount they are currently suffering. Immediate suffering is a much stronger motivator of behavior than plausible future suffering, or even likely future suffering.

And the Revenues Are So Small

I'm surprised that the phrase moral maze doesn't appear at all in the post. Rules that punish big corporations but don't punish smaller ones tend to push the world in a direction with fewer maze levels.

Oversight Misses 100% of Thoughts The AI Does Not Think

Step 4 might rather be: "There are 10,000 unresolved biological questions that I think need to be answered to make progress; shall I give you a list?"

If you look at the Catholic church covering up sexual abuse of children, no church official would have answered the question "Why does policy X exist?" with "Policy X exists so that more sexual abuse of children happens", and that's not because they are lying from their own perspective.

Motivations in enantiodromia dynamics just don't look like that. 

Why is increasing public awareness of AI safety not a priority?

Which companies, and to what extent? My internal model says that this is as simple as telling them they have to contract with somebody to dispose of it properly.

Electric utilities. Coal plants produced a lot of mercury pollution, and adding filters cost money. Given that burning fossil fuels causes the most mercury pollution, it's a really strange argument to treat that as something separate from mercury pollution.

Also, mercury pollution is much more localized, with clear, more immediate consequences than CO2 pollution. It doesn't suffer from any 'common good' problems.

Do you think that lowered childhood IQ isn't a common good issue? That seems like a pretty strange argument.

I don't understand your model of this at all, do you think if CO2 wasn't a controversial topic, we could just raise gas taxes and people would be fine?

I don't think "just raise gas taxes" is an effective method to dealing with the issue. As an aside, just because the German public cares very much about CO2 doesn't mean that our government stops subventioning long commutes to work. It doesn't stop our government either from shutting down our nuclear power plants. 

The Kyoto Protocol was negotiated fine in an environment of little public pressure. 

I do agree with the sentiment that it's important to discuss solutions for reducing carbon emissions sector by sector. If there had been less public pressure, I think it's plausible that expert conference discussions would have been more willing to focus on individual sectors and discuss what's needed in each.

The kind of elite consensus that brought the Kyoto Protocol could also have had a chance to create cap-and-trade. 

From Personal to Prison Gangs: Enforcing Prosocial Behavior

I'm very curious about what the actual rules of the various gangs look like. If they exist in written form in an environment where it's easy to confiscate documents, I would expect them to be publicly accessible.

Oversight Misses 100% of Thoughts The AI Does Not Think

It seems to me like, while the AI is still running on compute that humans oversee and can turn off, the AI has to discard a bunch of less effortful plans that would fail because they would reveal that it is misaligned (plans like "ask the humans for more information / resources") and instead go with more effortful plans that don't reveal this fact. 

If I ask an AGI to create a cancer cure and it tells me that it would need more resources to do so and a bunch of information, it wouldn't feel to me like a clear sign that the AGI is misaligned. 

I would expect that companies that want their AGIs to solve real-world problems would regularly be in a situation where the AGI can clearly explain that it currently doesn't have the resources to solve the problem and that more resources would clearly help. 

Companies that are actually willing to give their AGIs the resources the AGI thinks are needed to solve the problems are going to be rewarded with economic success.

chinchilla's wild implications

I'm not sure how many bytes per second we see, but I don't think it's many orders of magnitude higher than 2kb.

That depends a lot on how you count. A quick Google search suggests that the optic nerve has 1.7 million nerve fibers.

If you assume a neuron firing rate of 20 Hz, that gives you 34 MB per second.
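A minimal sketch of that back-of-the-envelope estimate (assuming, per the figures above, 1.7 million fibers firing at 20 Hz, and the generous simplification that each firing carries about one byte):

```python
# Back-of-the-envelope optic nerve bandwidth estimate.
fibers = 1_700_000     # optic nerve fibers (figure cited above)
firing_rate_hz = 20    # assumed average firing rate per fiber

signals_per_second = fibers * firing_rate_hz  # 34,000,000
# Counting each signal as roughly one byte (an assumption, not a measurement):
print(f"~{signals_per_second / 1e6:.0f} MB per second")  # ~34 MB per second
```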
