Yair Halberstadt

Comments

Taking Clones Seriously

In this particular case I think the clone is far more likely to be interested in AI or philanthropy in general than in the particular cross-section of the two that is AI safety research.

Taking Clones Seriously

At a guess, reducing interest variance to a single number is inappropriate. For example, I imagine the correlation between twins both liking maths is much higher than the correlation between them both being interested in a specific branch of maths.

Taking Clones Seriously

This seems like the sort of thing that would be expensive to investigate, has low potential upside, and where even just investigating would have enormous negatives (think loss of weirdness points, and potential for scandal).

Taking Clones Seriously

Super smart people are ten a penny. But for every genius working to make AGI safer, there are ten working to bring AGI sooner. Adding more intelligent people to the mix is just as likely to harm as to help.

More concretely, if we were to clone Paul Christiano, what's the chance the clone would work on AGI safety research? What's the chance they would work on something neutral? What's the chance they would work on something counterproductive?

And how much would it cost?

Seems like it would be a much better use of resources to offer existing brilliant AI researchers million-dollar-a-year salaries to work specifically on AGI safety.

Why don't our vaccines get updated to the delta spike protein?

Given that

  1. Developing a new version of the vaccine would probably face significant regulatory hurdles, and thus take time.
  2. There's an expectation for new variants to become dominant every few months.
  3. The vaccine industry is going to sell as many vaccines as it can produce for the foreseeable future either way.

There doesn't seem to be a huge incentive for vaccine companies to go through the pain and expense of developing a new vaccine version, which would only be useful for a few months, and which would just displace sales from their existing vaccines instead of creating new sales.

Stop button: towards a causal solution

I'm sceptical of any approach to alignment that involves finding a perfect ungameable utility function.

Even if you could find one, and even if you could encode it accurately when training the AI, that only affects outer alignment.

What really matters for AI safety is inner alignment. And that's very unlikely to pick up all the subtle nuances of a complex utility function.

A Defense of Functional Decision Theory

I read through this only long enough to conclude that the author of the original article simply does not understand FDT, rather than having valid criticisms of it, and stopped there; that seemed perfectly sufficient to refute the article.

Why Save The Drowning Child: Ethics Vs Theory

I'm not saying that's the explicit goal. I'm saying that, in practice, if someone suggests a moral theory which doesn't reflect how humans actually feel about most actions, nobody is going to accept it.

The underlying human drive behind moral theories is to find order in our moral impulses, even if that's not the system's explicit goal.

Why Save The Drowning Child: Ethics Vs Theory

Although they disagree about some very fundamental questions, they seem to broadly agree on a lot of actions.

I think this is mixing up cause and effect.

People instinctively find certain things moral. One of them is saving drowning children.

Ethical theories are our attempts to try to find order in our moral impulses. Of course they all save the drowning child, because any that didn't wouldn't describe how humans actually behave in practice, and so wouldn't be good ethical theories.

It's similar to someone being surprised that Newton's theories predict results that are so similar to Einstein's even though they were wrong. But Newton would never have suggested his theories if they didn't accurately predict the Einsteinian world we actually live in.
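
To put a rough number on "so similar": at everyday speeds the relativistic correction Newton's mechanics ignores (the Lorentz factor) is vanishingly close to 1:

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \approx 1 + \frac{v^2}{2c^2}, \qquad \text{so at } v = 30\ \text{km/s (Earth's orbital speed)},\quad \frac{v^2}{2c^2} \approx 5 \times 10^{-9}.$$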

You Don't Need Anthropics To Do Science

You could make the exact same argument about quantum mechanics.

Quantum physics is often suggested as essential to understanding the physical world, such that without a proper understanding of quantum physics we can't do mechanics.

Say you are working out how fast a ball would fall. You could use the equations for acceleration and gravity to work this out. All this is straightforward.
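
For concreteness, the straightforward calculation here is just constant-acceleration kinematics; dropping the ball from rest through a height $h$ (ignoring air resistance), roughly:

$$v = \sqrt{2gh}, \qquad t = \sqrt{\frac{2h}{g}}, \qquad g \approx 9.8\ \text{m/s}^2.$$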

However, many quantum physicists would say this reasoning is simple-minded and technically incorrect. What you actually have to do is describe the evolution of the wave function of all the particles that make up the ball...

To say the very least, this quantum mechanical consideration seems unnecessary. Good scientific theories should be able to predict our observations, and evaluating them directly against experiments reflects that. The quantum mechanical view above cuts off this tie by “zooming out” and taking a god’s eye view of the entire universe first. By doing so, calculating anything becomes computationally intractable. Then they propose the solution: use statistical techniques to approximate the answer. This enables us to do science just as we did before.

So before quantum mechanics was widely discussed, when people focused on experimental results without minding the entire universe or its possible interactions with every particle, we were selecting theories based on the wrong reasoning. And how convenient that the correct reasoning, with the proper quantum mechanical assumptions, gives the same results as before. This is too coincidental and unparsimonious.

It is sensible to suspect that our reasoning before was right all along. If so, what does this mean for quantum mechanics? Etc.
