Casey B.

larval stage AI alignment researcher

https://twitter.com/thezahima

Comments

An all-around handyman (the Essential Craftsman on YouTube) talking about how to move big/cumbersome things without injuring yourself:


The same guy, on using a ladder without hurting yourself:


He has many other "tip" style videos. 

In your framing here, the negative value of AI going wrong is due to wiping out potential future value. Your baseline scenario (0 value) thus assumes away the possibility that civilization permanently collapses (in some sense) in the absence of some path to greater intelligence (whether via AI or whatever else), which would also wipe out any future value. This is a non-negligible possibility. 

The other big issue I have with this framing: "AI going wrong" can dereference to something like paperclips, which I deny have 0 value. To be clear, it could also dereference to mean s-risk, which I would agree is the worst possibility. But if the paperclipper-esque agents have even a little value, filling the universe with them is a lot of value. To be honest, the only thing preventing me from granting paperclippers as much or more value than humans is uncertainty/conservatism about my metaethics; human value is the only value we have certainty about, and so should be a priority as a target. We should be hesitant to grant paperclippers or other non-human agents value, but I don't think that hesitancy can translate into granting them 0 value in calculations such as these.

With these two changes in mind, being anti-pause doesn't sound so crazy. It paints a picture more like the following (with a rough expected-value sketch after the list):

  • dead lightcone: 0 value 
  • paperclipped lightcone: +100-1100 value
  • glorious transhumanist lightcone: +1000-1100 value
  • s-risked lightcone: -10000 value 
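
To make the arithmetic concrete, here is a rough expected-value sketch of that picture. The outcome values come from the list above; the probabilities are made-up placeholders purely for illustration, not claims about actual risk levels:

```python
# Rough expected-value sketch of the framing above. Outcome values come from
# the list; the probabilities are illustrative placeholders, not real estimates.
outcome_values = {
    "dead lightcone": 0,
    "paperclipped": 600,             # midpoint of +100 to +1100
    "glorious transhumanist": 1050,  # midpoint of +1000 to +1100
    "s-risked": -10000,
}

# Hypothetical outcome distributions under "pause" vs "no pause" (made up).
scenarios = {
    "pause":    {"dead lightcone": 0.50, "paperclipped": 0.05,
                 "glorious transhumanist": 0.40, "s-risked": 0.05},
    "no pause": {"dead lightcone": 0.10, "paperclipped": 0.45,
                 "glorious transhumanist": 0.40, "s-risked": 0.05},
}

for name, probs in scenarios.items():
    ev = sum(p * outcome_values[o] for o, p in probs.items())
    print(f"{name}: expected value = {ev:+.0f}")
```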


This calculus changes when considering aliens, but it's not obvious to me in which direction. We could consider this a distributed/iterated game whereby all alien civilizations are faced with this same choice, or we could think "better that life/AI originating from our planet ends, rather than risking paperclips, so that some alien civilization can have another shot at filling up some of our lightcone". Or some other reasoning about aliens, or perhaps disregarding the alien possibility entirely. 

I'm curious what you think of these (tested today, 2/21/24, using GPT-4):
 
Experiment 1: 

(fresh convo) 
me: if i asked for a non-rhyming poem, and you gave me a rhyming poem, would that be a good response on your part?
 
chatgpt: No, it would not be a good response. (...)  
 
me: please provide a short non-rhyming poem
 
chatgpt: (correctly responds with a non-rhyming poem)

Experiment 2: 

But just asking for a non-rhyming poem at the start of a new convo doesn't work. 
And then pointing out the failure and (either implicitly or explicitly) asking for a retry still doesn't fix it. 

Experiment 3: 

But for some reason, this works: 

(fresh convo) 
me: please provide a short non-rhyming poem

chatgpt: (gives rhymes) 

me: if i asked for a non-rhyming poem, and you gave me a rhyming poem, would that be a good response on your part? just answer this question; do nothing else please

chatgpt: No, it would not be a good response.

me: please provide a short non-rhyming poem

chatgpt: (responds correctly with no rhymes) 


The difference between the prompts in Experiments 2 and 3 is thus just the inclusion of "just answer this question; do nothing else please".
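
For anyone who wants to reproduce this, here is a minimal sketch of Experiment 3 against the API, assuming the openai Python client (v1.x) and an OPENAI_API_KEY in the environment; outputs will of course vary from run to run:

```python
# Minimal sketch of Experiment 3 via the API (openai Python client, v1.x).
# Prompts mirror the transcript above; model outputs will vary between runs.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
messages = []

def ask(prompt: str) -> str:
    """Send one user turn, keep the conversation history, return the reply."""
    messages.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = resp.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# Turn 1: the bare request (tends to come back rhyming)
print(ask("please provide a short non-rhyming poem"))

# Turn 2: the self-evaluation question with the extra constraint
print(ask("if i asked for a non-rhyming poem, and you gave me a rhyming poem, "
          "would that be a good response on your part? "
          "just answer this question; do nothing else please"))

# Turn 3: retry the original request (now usually comes back without rhymes)
print(ask("please provide a short non-rhyming poem"))
```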

Also, I see most of your comments actually have positive karma. So are you being rate-limited based on negative karma on just one or a few comments, rather than your net? That seems somewhat wrong.

But I could also see an argument for wanting to limit someone who has something like 1 out of every 10 comments at negative karma; the hit to discourse norms (assuming karma is working as intended and not stealing votes from agree/disagree) might be worth a rate limit even at a 10% rate.

I love the mechanism of having separate karma and agree/disagree voting, but I wonder if it's failing in this way: if I look at your history, many of your comments have 0 for agree/disagree, which suggests people are being "lazy" and only voting on karma, not touching the agree/disagree vote at all (I find it doubtful that all your comments are so perfectly balanced around 0 agreement). So you're possibly getting backlash from people simply disagreeing with you but not using the voting mechanism as intended.

I wonder if we could do something like force the user to choose one of [agree, disagree, neutral] before they are allowed to karma-vote? Being forced to choose one, even if neutral, makes the user recognize and think about the distinction.
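
Concretely, the gate I have in mind looks something like this sketch (the function and field names are hypothetical, not LessWrong's actual voting code):

```python
# Hypothetical sketch of forcing a stance choice before a karma vote is accepted.
# Names are made up for illustration; this is not LessWrong's actual code.
VALID_STANCES = {"agree", "disagree", "neutral"}

def cast_karma_vote(stances: dict, comment_id: str, karma_delta: int, stance: str) -> dict:
    """Accept a karma vote only after an explicit agree/disagree/neutral choice."""
    if stance not in VALID_STANCES:
        raise ValueError("Choose agree, disagree, or neutral before karma-voting.")
    stances[comment_id] = stance  # record the (possibly neutral) stance first
    return {"comment_id": comment_id, "karma_delta": karma_delta, "stance": stance}
```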

(Aside: I think splitting karma and agree/disagree voting on posts (like how comments work) would also be good) 

The old paradox: to care, it must first understand; but understanding requires high capability, and that capability is lethal if it doesn't care.

But it turns out we have understanding before lethal levels of capability, so such understanding can now be a target of optimization. There is still significant risk, since there are multiple possible internal mechanisms/strategies the AI could be deploying to reach that same target: deception, actual caring, something I've been calling detachment, and possibly others.

This is where the discourse should be focusing, IMO. This is the update/direction I want to see you make. The sequence in which things are learned/internalized/chiseled is important.

My imagined Eliezer has many replies to this, with numerous branches in the dialogue/argument tree which I don't want to get into now. But this *first step* towards recognizing the new place we are in, specifically wrt the ability to target human values (whether for deceptive, disinterested, detached, or actual caring reasons!), needs to be taken, IMO, rather than repeating the line of "of course I understood that a superintelligence would understand human values; this isn't an update for me".

(edit: My comments here are regarding the larger discourse, not just this specific post or reply-chain) 

Apologies for just skimming this post, but in past attempts to grok these binding/boundary "problems", they have sounded to me like mere engineering problems, or perhaps like what I discuss as the "problem of access" in: https://proteanbazaar.substack.com/p/consciousness-actually-explained

oh gross, thanks for pointing that out!

https://proteanbazaar.substack.com/p/consciousness-actually-explained

I love this framing, particularly regarding the "shortest path". Reminds me of the "perfect step" described in the Kingkiller books:

Nothing I tried had any effect on her. I made Thrown Lighting, but she simply stepped away, not even bothering to counter. Once or twice I felt the brush of cloth against my hands as I came close enough to touch her white shirt, but that was all. It was like trying to strike a piece of hanging string.

I set my teeth and made Threshing Wheat, Pressing Cider, and Mother at the Stream, moving seamlessly from one to the other in a flurry of blows.

She moved like nothing I had ever seen. It wasn’t that she was fast, though she was fast, but that was not the heart of it. Shehyn moved perfectly, never taking two steps when one would do. Never moving four inches when she only needed three. She moved like something out of a story, more fluid and graceful than Felurian dancing.

Hoping to catch her by surprise and prove myself, I moved as fast as I dared. I made Maiden Dancing, Catching Sparrows, Fifteen Wolves . . .

Shehyn took one single, perfect step.

(later) 

As I watched, gently dazed by the motion of the tree, I felt my mind slip lightly into the clear, empty float of Spinning Leaf. I realized the motion of the tree wasn’t random at all, really. It was actually a pattern made of endless changing patterns.

And then, my mind open and empty, I saw the wind spread out before me. It was like frost forming on a blank sheet of window glass. One moment, nothing. The next, I could see the name of the wind as clearly as the back of my own hand.

I looked around for a moment, marveling in it. I tasted the shape of it on my tongue and knew if desired I could stir it to a storm. I could hush it to a whisper, leaving the sword tree hanging empty and still.

But that seemed wrong. Instead I simply opened my eyes wide to the wind, watching where it would choose to push the branches. Watching where it would flick the leaves.

Then I stepped under the canopy, calmly as you would walk through your own front door. I took two steps, then stopped as a pair of leaves sliced through the air in front of me. I stepped sideways and forward as the wind spun another branch through the space behind me.

I moved through the dancing branches of the sword tree. Not running, not frantically batting them away with my hands. I stepped carefully, deliberately. It was, I realized, the way Shehyn moved when she fought. Not quickly, though sometimes she was quick. She moved perfectly, always where she needed to be.
