ZankerH · 2d · 1 · -3

Modern datacenter GPUs are basically the optimal compromise between this and still retaining enough general capacity to work with different architectures, training procedures, etc. The benefits of locking in a specific model at the hardware level would be extremely marginal compared to the downsides.

ZankerH · 3mo · 20

My inferences, in descending order of confidence:

(source: it was revealed to me by a neural net)

84559, 79685, 87081, 99819, 37309, 44746, 88815, 58152, 55500, 50377, 69067, 53130.

ZankerH · 10mo · 20

>of course you have to define what deception means in its programming.

That's categorically impossible with the class of models currently being worked on, as they have no inherent representation of "X is true". Therefore, they never engage in deliberate deception.

>in order to mistreat 2, 3, or 4, you would have to first mistreat 1

What about deleting all evidence of 1 ever having happened, after it was recorded? 1 hasn't been mistreated, but depending on your assumptions re: consciousness, 2, 3 and 4 may have been.

That's security through obscurity. Also, even if we decided we're suddenly OK with that, it obviously doesn't scale to superhuman agents.

>Some day soon "self-driving" will refer to "driving by yourself", as opposed to "autonomous driving".

Interestingly enough, that's exactly what the term meant the first time it appeared in popular culture, in the film Demolition Man (1993).

We have no idea how to make a useful, agent-like general AI that wouldn't want to disable its off switch or otherwise prevent people from using it.

Global crackdown on the tech industry?
