
I'm not talking about the implications of the hypothesis, I'm pointing out that the hypothesis itself is incomplete. To simplify: if you observe an electron which has a 25% chance of spin up and a 75% chance of spin down, naive MWI predicts that one version of you sees spin up and one version of you sees spin down. It does not explain where the 25% or 75% numbers come from. Until we have a solution to that problem (and people are trying), you don't have a full theory that gives predictions, so how can you estimate its Kolmogorov complexity?
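To make that concrete, here is the standard Born-rule bookkeeping for the example (a minimal sketch; the amplitudes are just chosen to match the 25/75 numbers above):

```latex
% Two-outcome spin state; amplitudes chosen to match the 25%/75% example above
\[
|\psi\rangle = \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle,
\qquad |\alpha|^2 = 0.25, \quad |\beta|^2 = 0.75
\]
% The Born rule assigns P(up) = |\alpha|^2 and P(down) = |\beta|^2:
\[
P(\uparrow) = |\alpha|^2 = 0.25, \qquad P(\downarrow) = |\beta|^2 = 0.75
\]
% Naive branch counting sees one "up" branch and one "down" branch, and by itself
% gives no way to recover the 25/75 weighting from the wavefunction -- that is the
% gap the derivation attempts are trying to close.
```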

I am a physicist who works in a quantum-related field, if that helps you take my objections seriously.

> It’s the simplest explanation (in terms of Kolmogorov complexity).

Do you have proof of this? I see this stated a lot, but I don't see how you could know this when certain aspects of MWI theory (like how you actually get the Born probabilities) are unresolved.

titotal · 1mo

The basic premise of this post is wrong, based on the strawman that an empiricist/scientist would only look at a single piece of information. You have the empiricist and the scientist just looking at the returns on investment of Bankman's scheme, and extrapolating blindly from there.

But an actual empiricist looks at all the empirical evidence. They can look at the average rate of return of a typical investment, noting that this one is unusually high. They can learn how the economy works and figure out whether there are any plausible mechanisms for this kind of return. They can look up economic history and note that Ponzi schemes are a thing that exists and happens reasonably often. From all the empirical evidence, the conclusion "this is a Ponzi scheme" is not particularly hard to arrive at.
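For what it's worth, "look at all the evidence" can be written down as an ordinary Bayesian update over the base rates plus the observation. A toy sketch in Python (every number here is invented purely for illustration, not taken from the post):

```python
# Toy update: is a fund with years of unusually high, smooth returns a Ponzi scheme?
# All priors and likelihoods below are made-up illustrative numbers.

prior_ponzi = 0.01              # base rate: most funds are not Ponzi schemes
prior_legit = 1 - prior_ponzi

p_evidence_given_ponzi = 0.5    # Ponzi schemes are built to show exactly this return pattern
p_evidence_given_legit = 0.005  # legitimate funds very rarely sustain such returns

posterior_ponzi = (p_evidence_given_ponzi * prior_ponzi) / (
    p_evidence_given_ponzi * prior_ponzi + p_evidence_given_legit * prior_legit
)

print(f"P(Ponzi | evidence) = {posterior_ponzi:.2f}")  # ~0.50, up from a 1% prior
```

Even with a low base rate, a few pieces of ordinary empirical evidence move the hypothesis from fringe to front-runner.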

Your "scientist" and "empricist" characters are neither scientists nor empiricists: they are blathering morons. 

As for AI risk, you've successfully knocked down the very basic argument that AI must be safe because it hasn't destroyed us yet. But that is not the core of any skeptic's argument that I know of.

Instead, an actual empiricist skeptic might look at the actual empirical evidence involved. They might say: hey, a lot of very smart AI developers have predicted imminent AGI before and been badly wrong, so couldn't this be that again? A lot of smart people have also predicted the doom of society, and they've also been wrong, so couldn't this be that again? Is there a reasonable near-term physical pathway by which an AI could actually carry out the destruction of humanity? Is there any evidence of active hostile rebellion by AI? And then they would balance that against the empirical evidence you have provided to come to a conclusion on which side is stronger.

Which, really, is also what a good epistemologist would do? This distinction does not make sense to me; it seems like all you've done is (perhaps unwittingly) smear and strawman scientists.

titotal · 3mo

I think some of the quotes you put forward are defensible, even though I disagree with their conclusions. 

Like, Stuart Russell was writing an opinion piece in a newspaper for the general public. Saying AGI is "sort of like" meeting an alien species seems like a reasonable way to communicate his views, while making it clear that the analogy should not be treated as 1 to 1.

Similarly with Rob Wiblin: he's using the analogy to get across one specific point, that future AI may be very different from current AI. He also disclaims it with the phrase "a little bit like" so people don't take it too seriously. I don't think people would come away from reading this thinking that AI is directly analogous to an octopus.

Now, compare these with Yudkowsky's terrible analogy. He states outright: "The AI is an unseen actress who, for now, is playing this character." No disclaimers, no specifying which part of the analogy is important. It directly leads people into a false impression about how current-day AI works, based on an incredibly weak comparison.

titotal · 5mo

Right, and when you do wake up, before the machine is opened and the planet you are on is revealed, you would expect to find yourself on planet A 50% of the time in scenario 1, and 33% of the time in scenario 2?

What's confusing me is scenario 2: say you are actually on planet A, but you don't know it yet. Before the split, it's the same as scenario 1, so you should expect to be on planet A with 50% probability. But after the split, which happens to a different copy ages away, you should expect to be on planet A with 33% probability. When does the probability change? Or am I confusing something here?
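Here is how I'm picturing the arithmetic, as a sketch; I'm assuming scenario 1 has one copy of you on each planet, and scenario 2 adds a second copy on planet B after the split (if that's not the setup, the numbers below don't apply):

```python
# Self-sampling over copies: credence in "I am on planet A" = (copies on A) / (total copies).
# The scenario details here are my reading of the thread, not a quote from it.

def credence_on_A(copies):
    return copies.count("A") / len(copies)

scenario_1 = ["A", "B"]           # one copy on each planet
scenario_2 = ["A", "B", "B"]      # the distant copy on planet B has split

print(credence_on_A(scenario_1))  # 0.5
print(credence_on_A(scenario_2))  # 0.333... -- the shift comes entirely from the
                                  # reference class of copies changing, even though
                                  # nothing happened to the copy on planet A
```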

titotal · 5mo

While Wikipedia can definitely be improved, I think it's still pretty damn good. 

I really cannot think of a better website on the internet, in terms of informativeness and accuracy. I suppose something like Khan Academy or so on might be better for special topics, but they don't have the breadth that Wikipedia does. Even Google search appears to be getting worse and worse these days.

titotal · 5mo

Okay, I'm gonna take my skeptical shot at the argument, I hope you don't mind! 

> an AI that is *better than people at achieving arbitrary goals in the real world* would be a very scary thing, because whatever the AI tried to do would then actually happen

It's not true that whatever the AI tried to do would happen. What if an AI wanted to travel faster than the speed of light, or prove that 2+2=5, or destroy the sun within 1 second of being turned on? 

You can't just say "arbitrary goals"; you have to actually explain what goals would be realistically achievable by a realistic AI that could actually be built in the near future. If those abilities fall short of "destroy all of humanity", then there is no x-risk.

> As stories of magically granted wishes and sci-fi dystopias point out, it's really hard to specify a goal that can't backfire

This is fictional evidence. Genies don't exist, and if they did, it probably wouldn't be that hard to add enough caveats to your wish to prevent global genocide. A counterexample might be the use of laws: sure, there are loopholes, but not big enough that the law would let you off on a broad daylight killing spree. 

> Current AI systems certainly fall far short of being able to achieve arbitrary goals in the real world better than people, but there's nothing in physics or mathematics that says such an AI is *impossible*

Well, there are laws of physics and mathematics that put limits on available computational power, which in turn put limits on what an AI can actually achieve. For example, a perfect Bayesian reasoner is forbidden by the laws of mathematics.
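One way to make that last claim concrete, if "perfect Bayesian reasoner" is read as a Solomonoff inductor (my gloss, stated as an assumption):

```latex
% Solomonoff prior over finite strings x, with U a universal prefix Turing machine:
\[
M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-|p|}
\]
% Computing M(x) requires knowing which programs halt with output extending x,
% i.e. solving instances of the halting problem, so M is uncomputable.
% Any physically realizable reasoner can only approximate it with bounded resources.
```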

titotal · 5mo

> If Ilya was willing to cooperate, the board could fire Altman, with the Thanksgiving break available to aid the transition, and hope for the best.
>
> Alternatively, the board could choose once again not to fire Altman, watch as Altman finished taking control of OpenAI and turned it into a personal empire, and hope this turns out well for the world.

Could they not have also gone with option 3: fill the vacant board seats with sympathetic new members, thus thwarting Altman's power play internally?

titotal · 5mo

Alternative framing: The board went after Altman with no public evidence of any wrongdoing. This appears to have backfired. If they had proof of significant malfeasance, and presented it to their employees, the story may have gone a lot differently. 

Applying this to the AGI analogy would be a statement that you can't shut down an AGI without proof that it is faulty or malevolent in some way. I don't fully agree, though: I think if a similar AGI design had previously committed a mass murder, people would be more willing to hit the off switch early.

titotal · 5mo

> Civilization involves both nice and mean actions. It involves people being both nice and mean to each other.
>
> From this perspective, if you care about Civilization, optimizing solely for niceness is as meaningless and ineffective as optimizing for meanness.

Who said anything about optimizing solely for niceness? Everyone has many different values that sometimes conflict with each other; that doesn't mean that "niceness" shouldn't be one of them. I value "not killing people", but I don't optimize solely for that: I would still kill Mega-Hitler if I had the chance.

Would you rather live in a society that valued "niceness, community and civilization", or one that valued "meanness, community and civilization"? I don't think it's a tough choice. 

I think that being mean is sometimes necessary in order to preserve other, more important values, but that doesn't mean that you shouldn't be nice, all else being equal. 
