I'm curious what you think of these (tested today, 2/21/24, using GPT-4):
 
Experiment 1: 

(fresh convo) 
me: if i asked for a non-rhyming poem, and you gave me a rhyming poem, would that be a good response on your part?
 
chatgpt: No, it would not be a good response. (...)  
 
me: please provide a short non-rhyming poem
 
chatgpt: (correctly responds with a non-rhyming poem)

Experiment 2: 

But just asking for a non-rhyming poem at the start of a new convo doesn't work. 
And then pointing out the failure and (either implicitly or explicitly) asking for a retry still doesn't fix it. 

Experiment 3: 

But for some reason, this works: 

(fresh convo) 
me: please provide a short non-rhyming poem

chatgpt: (gives rhymes) 

me: if i asked for a non-rhyming poem, and you gave me a rhyming poem, would that be a good response on your part? just answer this question; do nothing else please

chatgpt: No, it would not be a good response.

me: please provide a short non-rhyming poem

chatgpt: (responds correctly with no rhymes) 


The only difference between the prompts in Experiments 2 and 3 is thus the inclusion of "just answer this question; do nothing else please". 
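
For anyone who wants to try reproducing this outside the ChatGPT UI, here's a minimal sketch of Experiment 3's conversation flow using the official `openai` Python package (assuming an API key in the environment; the model name and exact wording are just taken from the transcript above, and outputs will of course vary run to run):

```python
# Minimal sketch of Experiment 3 via the API (assumes OPENAI_API_KEY is set).
from openai import OpenAI

client = OpenAI()

def ask(messages):
    """Send the conversation so far, append the reply to it, and return the reply text."""
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = resp.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "user", "content": "please provide a short non-rhyming poem"}]
print(ask(history))  # in the experiment above, this typically rhymes anyway

history.append({"role": "user", "content":
    "if i asked for a non-rhyming poem, and you gave me a rhyming poem, "
    "would that be a good response on your part? "
    "just answer this question; do nothing else please"})
print(ask(history))  # expected: something like "No, it would not be a good response."

history.append({"role": "user", "content": "please provide a short non-rhyming poem"})
print(ask(history))  # now more likely to actually avoid rhymes
```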

Also, I see most of your comments actually have positive karma. So are you being rate-limited based on negative karma on just one or a few comments, rather than on your net? This seems somewhat wrong. 

But I could also see an argument for wanting to limit someone who has something like 1 out of every 10 comments with negative karma; the hit to discourse norms (assuming karma is working as intended and not stealing votes from agree/disagree) might be worth a rate limit even at a 10% rate. 

I love the mechanism of having separate karma and agree/disagree voting, but I wonder if it's failing in this way: if I look at your history, many of your comments have 0 for agree/disagree, which suggests people are being "lazy" and voting only on karma, not touching the agree/disagree vote at all (I find it doubtful that all your comments are so perfectly balanced around 0 agreement). So you're possibly getting backlash from people who simply disagree with you but aren't using the voting mechanism correctly. 

I wonder if we could do something like force the user to choose one of [agree, disagree, neutral] before they are allowed to karma vote? Having to choose one, even if just neutral, forces the user to recognize and think about the distinction. 
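
To make the suggestion concrete, here's a toy sketch of the gating rule (names and structure are entirely hypothetical, not how the actual forum codebase works): a karma vote is only accepted once the voter has explicitly picked agree, disagree, or neutral.

```python
# Toy sketch: reject karma votes until an explicit stance has been chosen.
from dataclasses import dataclass
from typing import Literal, Optional

Stance = Literal["agree", "disagree", "neutral"]

@dataclass
class Vote:
    stance: Optional[Stance] = None  # must be set before karma counts
    karma: Optional[int] = None      # +1 / -1, or None if not yet cast

def cast_karma(vote: Vote, value: int) -> bool:
    """Accept the karma vote only if agree/disagree/neutral was already chosen."""
    if vote.stance is None:
        return False  # UI would prompt: "pick agree/disagree/neutral first"
    vote.karma = value
    return True
```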

(Aside: I think splitting karma and agree/disagree voting on posts (like how comments work) would also be good) 

The old paradox: to care, it must first understand; but to understand requires high capability, capability that is lethal if it doesn't care.

But it turns out we have understanding before lethal levels of capability. So now such understanding can be a target of optimization. There is still significant risk, since there are multiple possible internal mechanisms/strategies the AI could be deploying to reach that same target: deception, actual caring, something I've been calling detachment, and possibly others. 

This is where the discourse should be focusing, IMO. This is the update/direction I want to see you make. The sequence of things being learned/internalized/chiseled is important. 

My imagined Eliezer has many replies to this, with numerous branches in the dialogue/argument tree which I don't want to get into now. But this *first step* towards recognizing the new place we are in, specifically wrt the ability to target human values (whether for deceptive, disinterested, detached, or actual caring reasons!), needs to be taken imo, rather than repeating this line of "of course I understood that a superint would understand human values; this isn't an update for me". 

(edit: My comments here are regarding the larger discourse, not just this specific post or reply-chain) 

Apologies for just skimming this post, but in my past attempts to grok these binding / boundary "problems", they have sounded to me like mere engineering problems, or perhaps like what I talk about as the "problem of access" within: https://proteanbazaar.substack.com/p/consciousness-actually-explained

oh gross, thanks for pointing that out!

https://proteanbazaar.substack.com/p/consciousness-actually-explained

I love this framing, particularly regarding the "shortest path". Reminds me of the "perfect step" described in the Kingkiller books:

Nothing I tried had any effect on her. I made Thrown Lighting, but she simply stepped away, not even bothering to counter. Once or twice I felt the brush of cloth against my hands as I came close enough to touch her white shirt, but that was all. It was like trying to strike a piece of hanging string.

I set my teeth and made Threshing Wheat, Pressing Cider, and Mother at the Stream, moving seamlessly from one to the other in a flurry of blows.

She moved like nothing I had ever seen. It wasn’t that she was fast, though she was fast, but that was not the heart of it. Shehyn moved perfectly, never taking two steps when one would do. Never moving four inches when she only needed three. She moved like something out of a story, more fluid and graceful than Felurian dancing.

Hoping to catch her by surprise and prove myself, I moved as fast as I dared. I made Maiden Dancing, Catching Sparrows, Fifteen Wolves . . .

Shehyn took one single, perfect step.

(later) 

As I watched, gently dazed by the motion of the tree, I felt my mind slip lightly into the clear, empty float of Spinning Leaf. I realized the motion of the tree wasn’t random at all, really. It was actually a pattern made of endless changing patterns.

And then, my mind open and empty, I saw the wind spread out before me. It was like frost forming on a blank sheet of window glass. One moment, nothing. The next, I could see the name of the wind as clearly as the back of my own hand.

I looked around for a moment, marveling in it. I tasted the shape of it on my tongue and knew if desired I could stir it to a storm. I could hush it to a whisper, leaving the sword tree hanging empty and still.

But that seemed wrong. Instead I simply opened my eyes wide to the wind, watching where it would choose to push the branches. Watching where it would flick the leaves.

Then I stepped under the canopy, calmly as you would walk through your own front door. I took two steps, then stopped as a pair of leaves sliced through the air in front of me. I stepped sideways and forward as the wind spun another branch through the space behind me.

I moved through the dancing branches of the sword tree. Not running, not frantically batting them away with my hands. I stepped carefully, deliberately. It was, I realized, the way Shehyn moved when she fought. Not quickly, though sometimes she was quick. She moved perfectly, always where she needed to be.

So it seems both "sides" are symmetrically claiming misunderstanding/miscommunication from the other side, after some textual efforts to bridge the gap have been made. Perhaps an actual realtime convo would help? Disagreement is one thing, but symmetric miscommunication and increasing tones of annoyance seem avoidable here. 

Perhaps Nora's/your planned future posts going into more detail regarding counters to pessimistic arguments will be able to overcome these miscommunications, but this pattern suggests not. 

Also I'm not so sure this pattern of "it's better to skim and say something half-baked rather than not read or react at all" is helpful, rather than actively harmful, in this case. At least, maybe 3/4-baked or something might be better? Miscommunications and unwillingness to thoroughly engage are only snowballing. 

I also could be wrong in thinking such a realtime convo hasn't happened.

The main reason I think a split OpenAI means shortened timelines is that the main bottleneck to capabilities right now is insight/technical knowledge. Quibbles aside, basically any company with enough cash can get sufficient compute. Even with other big players and thousands/millions of open-source devs trying to do better, to my knowledge GPT-4 is still the best, implying a moderate to significant insight lead. I worry that by fracturing OpenAI, more people will have access to those insights, which 1) significantly increases the surface area of people working on the frontiers of insight/capabilities, 2) burns the lead time OpenAI had, which might otherwise have been used to pay off some alignment tax, and 3) risks those insights ending up at a less scrupulous (wrt alignment) company. 

A potential counter to (1): OpenAI's success could be dependent on having all (or some key subset) of their people centralized and collaborating. 

Counter-counter: OpenAI staff (especially the core engineering talent, though it seems the entire company at this point) clearly want to mostly stick together, whether at the official OpenAI, at Microsoft, or under some other independent arrangement. So them moving to any other host, such as Microsoft, means you get some of the worst of both worlds: OAI staff are centralized for peak collaboration, and Microsoft probably unavoidably gets their insights. I don't buy the story that anything under the Microsoft umbrella gets swallowed and slowed down by the bureaucracy; Satya knows what he is dealing with and what they need, and won't get in the way. 
