ProgramCrafter


I mean something like AutoGPT, where there is no human in the loop who could reset the history.

For example, I've seen ChaosGPT get into a loop of "researching nuclear weapons". If it could erase that topic completely from its context, it would probably generate more interesting ideas (though there is still a question of whether we want that).

And is there a technology to erase a certain concept from the prompt?

It can be useful both for AI safety and for capabilities: currently an LLM cannot forget unsuccessful attempts to solve a task, and those can make it harder to find new approaches. (I'd call that "object permanence of the simulacrum".)

I've recently read the sequence on quantum mechanics, and now I'm interested in the experiment with two half-mirrors where the amplitudes cancel out (Configurations and Amplitude).

  1. Is such an explanation approximately equivalent to the quantum one? I've heard that the phase of a wave represents the angle of the complex amplitude.
    "On each reflection, the light wave's phase is shifted by pi/2; there are two paths by which the light reaches the top, with 1 and 3 reflections; these paths cancel out due to interference, with a phase difference of pi."
  2. Is it possible to do such an experiment at home? In particular, are half-silvered mirrors suitable for the experiment sold anywhere?
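The arithmetic behind the quoted explanation can be sketched in a few lines, assuming (as in the sequence) that each reflection multiplies the complex amplitude by i, i.e. shifts the phase by pi/2, while passing straight through a half-mirror multiplies it by 1:

```python
reflect = 1j      # each reflection: multiply amplitude by i (a pi/2 phase shift)
passthrough = 1   # passing through a half-mirror: multiply by 1

# The two paths to the top detector, as in the comment:
path_a = passthrough * reflect * passthrough  # 1 reflection -> amplitude i
path_b = reflect * reflect * reflect          # 3 reflections -> amplitude -i

total = path_a + path_b
print(total)           # 0j: the amplitudes cancel
print(abs(total) ** 2) # 0.0: squared amplitude, so no photons at this detector
```

The two amplitudes differ by two reflections, i.e. by a factor of i squared = -1, which is exactly the "phase difference of pi" in the quote.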

Could you write the equations in LaTeX (https://www.lesswrong.com/faq#How_do_I_use_Latex_)? Currently the images of the equations make the post hard to read.

I think that's a reaction to "tried to verify the fact and found out the opposite".

Has anyone fed the rules to GPT-4 and invited it to play?

I've thought about this additional axiom, and it seems to bend reality too much, leading to possible [unpleasant outcomes](https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes): for example, one where a person survives but is tortured indefinitely.

Also, it's unclear how this axiom could preserve the ratios of probabilities between quantum states.

You feel a slight philosophical discomfort about this. You don't like the idea of forced change, of intervention, being so integral to such a seemingly basic notion as causality.

Maybe it's easier to think that Bayes nets without external data can only say "rain is strong evidence for wet" and "wet is strong evidence for rain", but not "rain causes wet" or "wet causes rain"?
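The evidential symmetry can be shown from a joint distribution alone (the numbers below are invented for illustration): conditioning raises both P(wet | rain) and P(rain | wet), and nothing in the table says which way causation runs.

```python
# Hypothetical joint distribution over (rain, wet); numbers are made up.
p = {
    (True, True): 0.30,    # rain and wet
    (True, False): 0.05,   # rain but dry
    (False, True): 0.10,   # wet without rain (e.g. a sprinkler)
    (False, False): 0.55,  # neither
}

def cond(a_index, a_val, b_index, b_val):
    """P(variable a = a_val | variable b = b_val), from the joint table."""
    num = sum(pr for k, pr in p.items() if k[a_index] == a_val and k[b_index] == b_val)
    den = sum(pr for k, pr in p.items() if k[b_index] == b_val)
    return num / den

p_rain = sum(pr for (rain, wet), pr in p.items() if rain)  # 0.35
p_wet = sum(pr for (rain, wet), pr in p.items() if wet)    # 0.40

print(cond(1, True, 0, True))  # P(wet | rain) = 0.30/0.35, about 0.857 > P(wet)
print(cond(0, True, 1, True))  # P(rain | wet) = 0.30/0.40 = 0.750 > P(rain)
```

Both conditional probabilities exceed the corresponding marginals, so each variable is evidence for the other; distinguishing "rain causes wet" from "wet causes rain" needs something beyond this table, such as interventions.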

By a less extreme bad equilibrium, do you mean "play 79 until someone defects, and then play 80"? Or "play 80 or 100"?

Here is the Python script I've used: https://gist.github.com/ProgramCrafter/2af6a5b1cde0ff8995b9502f1c502151
To make all agents start from Hell, you need to change line 31 to `self.strategy = equilibrium`.

This is not strictly necessary, because we can at least save our genetic information, in the hope of being recreated in the future; so we can also hope for a friendly alien civilization to defeat the superintelligence even if humanity goes extinct. If a hostile superintelligence takes over the universe, though, it's not likely that humans will ever be recreated.
