Adele Lopez

I don't (yet?) see why generality implies having a stable motivating preference.

In my view, this is where the Omohundro Drives come into play.

Having any preference at all is almost always served by an instrumental preference for surviving as an agent with that preference.

Once a competent agent is general enough to notice that (and granting that its level of generality is sufficient to require having a preference), then the first time it has a preference, it will want to take actions to preserve that preference.

Could you use next-token prediction to build a detailed world model that contains deep abstractions describing reality (beyond the current human abstractions), and then prompt it to elicit those models?

This seems possible to me. Humans have plenty of text in which we generate new abstractions/hypotheses, and so effective next-token prediction would necessitate forming a model of that process. Once the AI has human-level ability to create new abstractions, it could then simulate experiments (via e.g. its ability to predict python code outputs) and cross-examine the results with its own knowledge to adjust them and pick out the best ones.

I would say that Alice's conscious experience is unlikely to suddenly disappear under this transformation, and that it could even be done in such a way that her experience is continuous.

However, Alice-memories would gradually fade out, Bob-memories would gradually fade in, and thought patterns would slowly shift from Alice-like to Bob-like. At the end, the person would just be Bob. Along the way, I would say that Alice gradually died (using an information-theoretic definition of death). The thing that is odd when imagining this is that Alice never experiences her consciousness fading.

The main thing I think your thought experiment demonstrates is that our sense of self is not solely defined by continuity of consciousness.

It wouldn't help that much, because you only have one atmosphere of pressure to remove (which for reference is only enough to suck water up about 34 ft).
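As a back-of-the-envelope check (assuming standard atmospheric pressure and fresh water), the maximum lift height is

$$h = \frac{P_{\text{atm}}}{\rho g} \approx \frac{101{,}325\ \text{Pa}}{1000\ \text{kg/m}^3 \times 9.81\ \text{m/s}^2} \approx 10.3\ \text{m} \approx 34\ \text{ft}.$$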

Really? I would only consider foods that were deliberately modified using procedures developed within the last century to be "processed".

Love seeing stuff like this, and it makes me want to try this exercise myself!

A couple places which clashed with my (implicit) models:

This starts a whole new area of training AI models that have particular personalities. Some people are starting to have parasocial relationships with their friends, and some programmers are trying to make friends that are really fun or interesting or whatever for them in particular.

This is arguably already happening, with Character AI and its competitors. Character AI has almost half a billion visits per month with an average visit time of 22 minutes. They aren't quite assistants the way you're envisioning; the sole purpose (for the vast majority of users) seems to be the parasocial aspect.

The worst part of this is the bots that make friends with you and then advertise stuff to you. Pretty much everyone hates that.

I predict that the average person will like this (at least with the most successful such bots), similar to how e.g. Logan Paul uses his popularity to promote his Maverick Clothing brand, which his viewers proudly wear. A fun, engaging, and charismatic bot of this sort will be able to direct its users towards arbitrary brands while also making the user feel cool and special for choosing that brand.

Hmm, I think I can implement pilot wave in fewer lines of C than I can many-worlds. Maybe this is a matter of taste... or am I missing something?

Now simply delete the piloted part.
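To make the comparison concrete, here is a minimal sketch (not anyone's actual implementation; it assumes a free particle in 1D, evolves the wavefunction exactly in the momentum basis, and uses arbitrary grid and step-size choices). The marked block is the piloted part; deleting it leaves a bare wavefunction evolution, which is the many-worlds side of the comparison under these assumptions:

```c
/* Toy 1D free-particle sketch: wavefunction evolved exactly in the
 * momentum basis, plus an optional Bohmian particle guided by
 * v = (hbar/m) Im(psi'/psi). All parameters are illustrative. */
#include <complex.h>
#include <math.h>
#include <stdio.h>

#define NK   64      /* number of momentum modes */
#define HBAR 1.0
#define MASS 1.0

static double complex amp[NK];   /* psi in the momentum basis */
static double kval[NK];

/* psi(x, t) for a free particle: each momentum mode just rotates in phase */
static double complex psi(double x, double t)
{
    double complex s = 0;
    for (int n = 0; n < NK; n++) {
        double phase = kval[n] * x - HBAR * kval[n] * kval[n] * t / (2 * MASS);
        s += amp[n] * cexp(I * phase);
    }
    return s;
}

int main(void)
{
    /* initial condition: a rough Gaussian packet centered at k = 1 */
    for (int n = 0; n < NK; n++) {
        kval[n] = -4.0 + 8.0 * n / NK;
        amp[n]  = exp(-(kval[n] - 1.0) * (kval[n] - 1.0));
    }

    double x = 0.0;              /* position of the piloted particle */
    double dt = 0.01, eps = 1e-4;

    for (int step = 0; step < 1000; step++) {
        double t = step * dt;

        /* --- piloted part: Bohmian guidance v = (hbar/m) Im(psi'/psi) --- */
        double complex p  = psi(x, t);
        double complex dp = (psi(x + eps, t) - psi(x - eps, t)) / (2 * eps);
        x += dt * (HBAR / MASS) * cimag(dp / p);
        /* ----------------------------------------------------------------- */
    }

    printf("particle ended near x = %.3f, |psi(x,T)| = %.3f\n",
           x, cabs(psi(x, 1000 * dt)));
    return 0;
}
```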

I agree it's increasingly urgent to stop AI (please) or solve consciousness in order to avoid potentially causing mass suffering or death-of-consciousness in AIs.

Externalism seems, quite frankly, like metaphysical nonsense. It doesn't seem to actually explain anything about consciousness. I can attest that I am currently conscious (to my own satisfaction, if not yours). Does this mean I can logically conclude I am not in any way being simulated? That doesn't make any sense to me.

I don't think that implies torture as much as something it simply doesn't "want" to do. I.e. I would bet that it's more like how I don't want to generate gibberish in this textbox, but it wouldn't be painful, much less torture, if I forced myself to do it.

[Without having looked at the link in your response to my other comment, and I also stopped reading cubefox's comment once it seemed that it was going in a similar direction. ETA: I realized after posting that I have seen that article before, but not recently.]

I'll assume that the robot has a special "memory" sensor which stores the exact experience at the time of the previous tick. It will recognize future versions of itself by looking for agents in its (timeless) 0P model which have a memory of its current experience.

For p("I will see O"), the robot will look in its 0P model for observers which have the t=0 experience in their immediate memory, and selecting from those, how many have judged "I see O" as Here. There will be two such robots, the original and the copy at time 1, and only one of those sees O. So using a uniform prior (not forced by this framework), it would give a 0P probability of 1/2. Similarly for p("I will see C").

Then it would repeat the same process for t=1 and the copy. Conditioned on "I will see C" at t=1, it will conclude "I will see CO" with probability 1/2 by the same reasoning as above. So overall, it will assign: p("I will see OO") = 1/2, p("I will see CO") = 1/4, p("I will see CC") = 1/4
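To make the uniform splitting concrete, here is a toy enumeration in C (a sketch of the setup as I've been reading it, which is an assumption: the t=1 observer who saw O just sees O again at t=2, while the t=1 observer who saw C is copied, with one successor seeing O and one seeing C; probability is split uniformly among the successors that remember the previous experience):

```c
#include <stdio.h>

/* Each node is an observer-moment: what it sees now, and which
 * successor observers remember this moment. */
struct node {
    char seen;                  /* 'O' or 'C'; 0 for the t=0 root */
    int nsucc;
    const struct node *succ[2];
};

/* t=2 observers (leaves) */
static const struct node oo = { 'O', 0, { 0 } };
static const struct node co = { 'O', 0, { 0 } };  /* copy's branch that then sees O */
static const struct node cc = { 'C', 0, { 0 } };
/* t=1 observers */
static const struct node o1 = { 'O', 1, { &oo } };
static const struct node c1 = { 'C', 2, { &co, &cc } };
/* t=0: the experience E that all of the above remember */
static const struct node root = { 0, 2, { &o1, &c1 } };

/* split probability uniformly among the successors that remember this moment */
static void walk(const struct node *n, double prob, char *hist, int len)
{
    if (n->seen)
        hist[len++] = n->seen;
    if (n->nsucc == 0) {
        hist[len] = '\0';
        printf("p(\"I will see %s\") = %g\n", hist, prob);
        return;
    }
    for (int i = 0; i < n->nsucc; i++)
        walk(n->succ[i], prob / n->nsucc, hist, len);
}

int main(void)
{
    char hist[8];
    walk(&root, 1.0, hist, 0);
    return 0;
}
```

This prints p("I will see OO") = 0.5, p("I will see CO") = 0.25, and p("I will see CC") = 0.25, matching the assignment above.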

The semantics for these kinds of things is a bit confusing. I think that it starts from an experience (the experience at t=0) which I'll call E. Then REALIZATION(E) casts E into a 0P sentence which gets taken as an axiom in the robot's 0P theory.

A different robot could carry out the same reasoning, and reach the same conclusion since this is happening on the 0P side. But the semantics are not quite the same, since the REALIZATION(E) axiom is arbitrary to a different robot, and thus the reasoning doesn't mean "I will see X" but instead means something more like "They will see X". This suggests that there's a more complex semantics that allows worlds and experiences to be combined - I need to think more about this to be sure what's going on. Thus far, I still feel confident that the 0P/1P distinction is more fundamental than whatever the more complex semantics is.

(I call the 0P -> 1P conversion SENSATIONS, and the 1P -> 0P conversion REALIZATION, and think of them as being adjoints, though I haven't formalized this part well enough to feel confident that this is a good way to describe it: there's a toy example here if you are interested in seeing how this might work.)
