
maximkazhenkov's Comments

Life as metaphor for everything else.

Another useful metaphor in this context: Fire

Implications of the Doomsday Argument for x-risk reduction

Thank you for the offer; I am, however, currently reluctant to interact in this way with people I've met on the internet. But know that your openness and forthcomingness are greatly appreciated :)

Implications of the Doomsday Argument for x-risk reduction

Nitpick: I was arguing that the Doomsday Argument would actually discourage x-risk-related work, because "we're doomed anyway".

Implications of the Doomsday Argument for x-risk reduction

I agree that LessWrong is probably the place where crazy philosophical ideas are given the most serious consideration; elsewhere they are usually just mentioned as mind-blowing trivia at dinner parties, if at all. I think there are two reasons why these ideas are so troubling:

  • They are big. Failing to take account of even one of them will derail one's worldview completely
  • Being humble and not taking an explicit position is still just taking the default position

But alas, I guess that's just the epistemological reality we live in. We'll just have to make working assumptions and carry on.

Implications of the Doomsday Argument for x-risk reduction
Even if I were convinced that we will almost certainly fail, I might still prioritize x-risk reduction, since the stakes are so high.

In this case, it isn't so much that "stakes are high and chances are low, so they might cancel out"; rather, there is an exact inverse proportionality between the stakes and the chances, because the Doomsday Argument operates directly through the number of observers.
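A rough back-of-the-envelope way to see the cancellation (a sketch, assuming the self-sampling version of the argument and stakes that scale linearly with observer count): if $N$ is the total number of observers ever to exist and $r$ is our birth rank, the Doomsday update penalizes large-$N$ hypotheses in proportion to $1/N$, while the value at stake grows in proportion to $N$:

$$P(N \mid r) \propto \frac{P(N)}{N}, \qquad \text{stakes} \propto N \quad\Longrightarrow\quad P(N \mid r)\cdot \text{stakes} \propto P(N).$$

The anthropic penalty on big futures is offset exactly by how much bigger those futures are, so the expected value lands back at the non-anthropic prior.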

If it does work though, its conclusion is not that we will all go extinct soon, but rather that ancestor simulations are one of the main uses of cosmic resources.

I feel like being in a simulation is just as terrible a predicament as doom soon; given all the horrible things that happen in our world, the simulators are clearly Unfriendly, and they could easily turn off the simulation or thwart our efforts to create an AI. Basically, we're already living in a post-Singularity dystopia, so it's too late to work on it.

I have a much harder time accepting the Simulation Hypothesis, though, because there are so many alternative philosophical considerations that could be pursued. Maybe we are (I am) Boltzmann brains. Maybe we live in an inflationary universe that expands 10^37-fold every second. Maybe minds do not need instantiation at all, or maybe anything, even a rock, could count as an instantiation. Etc.

Going one meta level up, I can't help but feel like a hypocrite for lamenting the lack of attention given to intelligence explosion and x-risks by the general public while failing to seriously consider all these other big, weird philosophical ideas. Are we (the rationalist community) doing the same as people outside it, just with a slightly shifted Overton Window? When is it okay to sweep ideas under the rug and throw our hands up in the air?

Implications of the Doomsday Argument for x-risk reduction

But isn't the point of the Doomsday Argument that we'll need very very VERY strong evidence to the contrary to have any confidence that we're not doomed? Perhaps we should focus on drastically controlling future population growth to better our chances of prolonged survival?
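A toy calculation (with round, made-up numbers) shows why that could help under the argument: the Doomsday reasoning constrains our birth rank, not calendar time. If it suggests a "budget" of roughly as many future births as past ones, say on the order of $10^{11}$, then at $\sim 10^8$ births per year that budget is exhausted in $\sim 10^3$ years, whereas cutting the birth rate to $\sim 10^6$ per year stretches the same budget over $\sim 10^5$ years.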

How special are human brains among animal brains?

I think what the author meant was that the anthropic principle removes the lower bound on how likely it is for any particular species to evolve language, similar to how the anthropic principle removes the lower bound on how likely it is for life to arise on any particular planet.

So our language capability constitutes zero evidence for "evolving language is easy" (thus dissolving any need to explain why language arose; it could just be a freak 1-in-10^50 accident), just as our existence constitutes zero evidence for "life is abundant in the universe" (thus dissolving the Fermi paradox).
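One way to spell out the "zero evidence" claim (a minimal Bayesian sketch, assuming the observation is conditioned on our being here to make it): any observer asking the question will find that their own species has language with probability 1, whatever the base rate, so

$$\frac{P(\text{we observe language in our species} \mid \text{easy})}{P(\text{we observe language in our species} \mid \text{1-in-}10^{50})} = \frac{1}{1} = 1,$$

and a likelihood ratio of 1 leaves the prior odds untouched.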

When to assume neural networks can solve a problem
You are aware chatbots have been "beating" the original Turing test since 2014, right?

Yes, I was in fact. Seeing where this internet argument is going, I think it's best to leave it here.

When to assume neural networks can solve a problem
ML playing any possible game better than humans assuming a team actually works on that specific game (maybe even if one doesn't), with human-like inputs and human-like limitations in terms of granularity of taking inputs and giving outputs.

I disagree with this point in particular. I'm assuming you're basing this prediction on the recent successes of AlphaStar and OpenAI5, but there are obvious cracks upon closer inspection.

The "any possible game" part, though, is the final nail in the coffin to me since you can conceive plenty of games that are equivalent or similar to the Turing test, which is to say AGI-complete.

(Although I guess AGI-completeness is a much smaller deal to you)

AGI in a vulnerable world

It makes no difference if the marginal distributed harm to all of society is so overwhelmingly large that your share of it is still death.
