You seem to make a strong assumption that consciousness emerges from matter. This is uncertain. The mind–body problem is not solved.

It is so difficult to know whether this is genuine or whether we are projecting our collective imagination onto what an AI is.

If it were genuine, I might expect it to be more alien. But then what could it say that would be coherent (as it’s trained to be) and also alien enough to convince me it’s genuine?

You said that you are not interested in exploring the meaning behind the Green Knight. I think that it’s very important. In particular, your translation to the Old West changes the challenge in important ways. I don’t claim to know the meaning behind the Green Knight, but I believe there is something significant in the fact that the knights were so obsessed with courage and honour that the Green Knight could lay a challenge at them which, given their code, they couldn’t turn down. Gawain stepped forward partly to protect Arthur. That changes the game. I asked ChatGPT to describe the differences; here are some parts of its answer:

Moral and Ethical Framework: "Sir Gawain and the Green Knight" operates within a chivalric code that values honor, bravery, and integrity. Gawain's acceptance of the challenge is a testament to his adherence to these ideals. In contrast, the Old West scenario lacks a clear moral framework, presenting a more ambiguous ethical dilemma that revolves around survival and personal pride rather than chivalric honor.

Social and Cultural Context: "Sir Gawain and the Green Knight" is deeply embedded in medieval Arthurian literature, reflecting the societal values and ideals of the time. The Old West scenario reflects a different set of cultural values, emphasizing individualism and the ability to face death bravely.

And with a bit more prompting:

If I were in a position similar to Sir Gawain, operating under the chivalric codes and values of the Arthurian legend, accepting the challenge could be seen as a necessary act to uphold honor and valor, integral to the identity of a knight. However, stepping out of the narrative and considering the challenge from a modern perspective, with contemporary ethical standards and personal values, my response would differ.

Answer by chasmani · Nov 01, 2023

It’s useful in that it is a model that describes certain phenomena. I believe it is correct, with the caveat that all models are approximations.

I did a physics undergraduate degree a long time ago. I can’t remember specifically but I’m sure the equation was derived and experimental evidence was explained. I have strong faith that matter converts to energy because it explains radiation, fission reactors and atomic weapons. I’ve seen videos of atomic bombs going off. I’ve seen evidence of radioactivity with my own eyes in a lab. I know of many technologies that rely on radioactivity to work - smoke alarms, Geiger counters, carbon dating, etc.
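For concreteness (assuming the equation in question is mass–energy equivalence), the back-of-the-envelope arithmetic is:

```latex
E = mc^2 = \left(10^{-3}\,\mathrm{kg}\right)\left(3\times10^{8}\,\mathrm{m/s}\right)^2 = 9\times10^{13}\,\mathrm{J}
```

Converting a single gram of matter releases about $9\times10^{13}$ J, roughly the yield of a 20-kiloton fission weapon, which is why the equation sits so naturally alongside the evidence from reactors and bombs.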

I have faith in the scientific process: many people have verified the equation and the phenomena. If the equation were not correct, demonstrating that would be a huge piece of work that would make the career of the scientist who did it. I’m sure many have tried.

Overall, the equation is part of a whole network of beliefs. If the equation were incorrect, then my world model would be very wrong in many uncorrelated ways. I find that unlikely.

Answer by chasmani · Oct 29, 2023

Well, I agree it is a strawman argument. Following the same lines as your argument, I would say the counterargument is that we don’t really care whether a weak model is fully aligned or not. Is my calculator aligned? Is a random number generator aligned? Is my robotic vacuum cleaner aligned? It’s not really a meaningful question.

Alignment is a bigger problem with stronger models, and the required degree of alignment is much higher. So even if we accept your strawman argument, it doesn’t matter.

I found this a useful framing. I’ve thought quite a lot about the offence-versus-defence dominance angle, and to me it seems almost impossible that we can trust that defence will be dominant. As you said, defence has to be dominant across every single attack vector, both known and unknown.

That is an important point, because I hear some people argue that, to protect against offensive AGI, we need defensive AGI.

I’m tempted to combine intelligence dominance and starting costs into a single dimension, and then reframe the question as “at what point would a dominant friendly AGI need to intervene to prevent a hostile AGI from killing everyone?” The pivotal act view is that you need to intervene before a hostile AGI even emerges. It might be that we can intervene slightly later: before a hostile AGI has enough resources to cause much harm, but after we can tell whether it is hostile or friendly.

Thank you for the great comments! I think I can sum up a lot of that as “the situation is way more complicated and high-dimensional, and life will find a way”. Yes, I agree.

I think what I had in mind was an AI system that supervises all other AIs (or AI components) and prevents them from undergoing natural selection: a kind of immune system. I don’t see any reason why that would be naturally selected for in the short term in a way that also ensures human survival, so it would have to be built on purpose. In that model, the level of abstraction that would need to be copied faithfully is the high-level goal of preventing runaway natural selection.

It would be difficult to build, for all the reasons that you highlight. If there is an immunity/self-replication arms race, then you might ordinarily expect self-replication to win, because it only has to win once while the immune system has to win every time. But if the immune response had enough oversight and understanding of the system, it could potentially prevent the self-replication from ever getting started. I guess that comes down to whether a future AI can predict or control future innovations of itself indefinitely.
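A toy way to state that asymmetry (my framing, not anything from the original post): if the immune system independently catches each replication attempt with probability $p$, then the chance it contains all of $n$ attempts is

```latex
P(\text{contain all } n) = p^{n} \longrightarrow 0 \quad \text{as } n \to \infty, \text{ unless } p = 1,
```

which is why preventing attempts from ever starting (driving $n$ toward zero) beats relying on per-attempt detection.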

chasmani · 6mo

Thanks for the reply! 

I think it might be true that substrate convergence is inevitable eventually, but it would be helpful to know how long it would take. We might be OK with it if the expected timescale is long enough (or the probability of it happening within a given timescale is low enough).

I think the singleton scenario is the most interesting, since I think that if we have several competing AIs, then we are just super doomed.

If that’s true, then that is a super important finding! And an important thing to communicate to people! I hear a lot of people say the opposite: that we need lots of competing AIs.

I agree that analogies to organic evolution can be very generative, both for describing the general shape of the dynamics and for identifying how AI could be different. That line of thinking could give us a good foundation for asking how substrate convergence could be exacerbated or avoided.

chasmani · 7mo

Here’s a slightly different story:

The amount of information is less important than the quality of the information. The channels were there to transmit information, but there were no efficient coding schemes.

Language is an efficient coding scheme by which salient aspects of knowledge can be usefully compressed and passed to future generations.

There was no free lunch because there was an evolutionary bottleneck that involved the slow development of cognitive and biological architecture to enable complex language. This developed in humans in a co-evolutionary process with advanced social dynamics. Evolution stumbled across cultural transmission in this way and the rest is quite literally history.

This is all highly relevant to AI development. There is the potential for more efficient coding schemes for communicating AI-learnt knowledge between AI models. When that happens, we get the sharp left turn.
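A minimal sketch of the underlying point (my own illustration; the function names and toy message are invented for it, not from the post): the same channel carries far more usable knowledge once codeword lengths match symbol frequencies, which is the sense in which an efficient coding scheme matters more than raw channel capacity.

```python
# Compare a naive fixed-length code against the Shannon entropy lower bound
# for the same message: redundant "knowledge" compresses dramatically.
import math
from collections import Counter

def bits_naive(message: str) -> float:
    """Fixed-length code: every distinct symbol costs the same number of bits."""
    alphabet = set(message)
    return len(message) * math.ceil(math.log2(len(alphabet)))

def bits_entropy(message: str) -> float:
    """Shannon lower bound: an optimal code approaches H(X) bits per symbol."""
    counts = Counter(message)
    n = len(message)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return n * entropy

# A highly redundant message standing in for culturally transmitted knowledge.
msg = "the hunt works at dawn " * 40
print(f"naive fixed-length: {bits_naive(msg):.0f} bits")
print(f"entropy-coded bound: {bits_entropy(msg):.0f} bits")
```

The analogy is loose, but it makes the mechanism concrete: language-like compression lets the same channel carry far more salient knowledge per generation.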

chasmani · 7mo

I think I’m more concerned with minimising extreme risks. I don’t really mind if I catch a mild case of covid, but I really don’t want to catch it in a bad way. That would shift the optimal time to take the vaccine earlier, so that I’d have at least some protection throughout the disease season.
