Casey B.

quivering alien chrysalis 

https://twitter.com/thezahima


I largely don't think we're disagreeing? My point didn't depend on a distinction between 'raw' capabilities vs 'possible right now with enough arranging' capabilities, and was mostly: "I don't see what you could actually delegate right now, as opposed to operating in the normal paradigm of AI co-work the OP is already saying they do (chat, Copilot, imagegen)", and then your personal example details why you couldn't currently delegate a task. Sounds like agreement. 

Also I didn't really consider your example of: 
 
> "email your current blog post draft to the assistant for copyediting".

to be outside the paradigm of AI co-work the OP is already doing, even if it saves them time. Scaling up this kind of work to the point of $1k would seem pretty difficult, and also outside what I took to be their question, since this amounts to "just work a lot more yourself, and thus the proportion of work you currently use AI for will go up till you hit $1k". That's a lot of API credits for such normal personal use.  

... 

But back to your example, I do question just how much of a leap of insight/connection would be necessary to write the standard Gwern mini-article. Maybe in this exact case you know there is enough latent insight/connection in your clippings/writings, the LLM corpus, and possibly some rudimentary Wikipedia/tool use, such that your prompt providing the cherry-on-top connecting idea ('spontaneous biting is prey drive!') could actually produce a Gwern-approved mini-essay. You'd know the level of insight-leap for such articles better than I do, but do you really think there'd be many such things within reach for very long? I'd argue an agent that could do this semi-indefinitely, rather than just clearing your backlog of maybe ~20 such ideas, would be much more capable than what we currently see, in terms of necessary 'raw' capability. But maybe I'm wrong and you regularly have ideas that sufficiently fit this pattern, where the bar to pass isn't "be even close to as capable as Gwern", but: "there's enough lying around to make the final connection, just write it up in the style of Gwern". 

Like, clearly something that could actually write any Gwern article would have at least your level of capability, and would foom or something similar; it'd be self-sustaining. Instead what you're describing is a setup where most of the insight, knowledge, and connection is already there, and is an instance of what I'd argue is a narrow band of possible tasks that could be delegated without necessitating {capability powerful enough to self-sustain and maybe foom}. I don't think this band is very wide; there aren't many tasks I can think of that fit this description. But I failed to think of your class of example, or of eggsyntax's example below of call-center automation, so perhaps I'm simply blanking on others, and the band is wider than I thought. 

But if not, then your original suggestion of, basically, "first think of what you could delegate to another human" seems a fraught starting point, because the supermajority of such tasks would require capability sufficient for self-sustaining, ~foomy agents, and we don't yet observe any such; our world would look very different. 

For what workflows/tasks does this 'AI delegation paradigm' actually work, though, aside from research/experimentation on AI itself? Like, Janus's apparent experiments with running an AI Discord I'm sure cost a lot, but the object-level work there is AI research. If AI agents could be trusted to generate a better signal/noise ratio by delegation than by working alongside the AI (where the bottleneck is the human)... isn't that the singularity? They'd be self-sustaining. 

Thus having 'can you delegate this to a human' be a prerequisite test of whether one's workflow admits of delegation at all, before trying to use AI, doesn't make sense to me? If we could do that we'd be fooming right now. 

Edit: if the point is, implicitly, "yes, of course directly delegating things to AI is going to fail, but nonetheless this serves as a useful mental prompt for coming up with ways to actually use AI", I think this re-routes to what I took as the OP's question: what actual tasks? Tasks that aren't things we're doing already, like chat, imagegen, or code completion, where again the bottleneck is the human and so the only way to increase spending there is to increase one's workload. Perhaps one could say: "well, there are ways to leverage even just chat more/better, such that you aren't increasing your total hours worked, but your AI spend is actually increasing"; then I'd ask: what are those ways? 

Okay, also, while I'm talking about this: 
the goal is energy/new-day magic.

So one subgoal is what the OP and my previous reply were talking about: resetting/regaining that energy/magic. 

The other, corresponding subgoal is retaining the energy you already have. 
To that end, I've found it very useful to take very small breaks before you feel the need to. This is basically the Pomodoro technique. I've settled on 25-minute work sessions with 3-minute breaks in between, where I get up, walk around, stretch, etc. Not Twitter/scrolling/etc. 
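
(For the curious, here's a toy timer sketch of that cadence, just to make the numbers concrete. The parameter values are simply the ones I've settled on; adjust to taste.)

```python
# Minimal sketch of the 25-minute-work / 3-minute-break cadence described above.
# The specific numbers are personal preferences, not recommendations.
import time

WORK_MINUTES = 25
BREAK_MINUTES = 3

def pomodoro(sessions: int) -> None:
    """Run `sessions` work blocks, each followed by a short walking/stretching break."""
    for i in range(1, sessions + 1):
        print(f"Session {i}: work for {WORK_MINUTES} minutes")
        time.sleep(WORK_MINUTES * 60)
        print(f"Break: get up, walk, stretch for {BREAK_MINUTES} minutes (no scrolling)")
        time.sleep(BREAK_MINUTES * 60)

if __name__ == "__main__":
    pomodoro(sessions=4)
```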

I'm very interested in things in this domain. You correctly note that Uberman sleep isn't a solution and naps don't quite cut it, so your suggested/implied synthesis/middle ground of something like "polyphasic, but with much more sleep per sleep-time-slice" is very interesting. 

Given this post is now 2 years old, how did this work out for you? 


In a similar, or perhaps more fundamental, framing: the goal is to be able to effectively "reset", to reattain if possible that morning/new-day magic. To this end, the only thing I've found that even comes close to the natural reset of sleep is a shower/bath. In a pinch, washing/dunking the head/face in water can work, but less well. For this reason I often take two showers a day. Usually the pattern is: walk + workout, shower, work, get tired, walk outside for ~30 minutes, shower, work some more. The magic isn't fully restored for that second session, but more of it is than if I just walk without the shower. 

If the 'full magic' of a true/natural morning can get me 4 hours of Hard Work, then the shower reset can maybe give me another 30 minutes to an hour. More work gets done than just Hard Work, but I think you know what I mean.

Some people will say workouts/exercise help, but for me they don't in themselves. I.e., in the more natural framing of "part of the normal waking-up and/or general health routine", exercise is of course a must. But from this framing of "how to get more of the morning/new-day magic", I've found more exercise is counterproductive. Even trying to shift around *when in the day* the exercise is done is counterproductively draining for me; morning is best. Not to mention that delaying the workout is a great way to never actually work out, since I don't really want to do it at all; the chance I do it at all is maximized in the morning. 

An all-around handyman (the Essential Craftsman on YouTube) talking about how to move big/cumbersome things without injuring yourself:


The same guy, on using a ladder without hurting yourself: 


He has many other "tip" style videos. 

In your framing here, the negative value of AI going wrong is due to wiping out potential future value. Your baseline scenario (0 value) thus assumes away the possibility that civilization permanently collapses (in some sense) in the absence of some path to greater intelligence (whether via AI or whatever else), which would also wipe out any future value. This is a non-negligible possibility. 

The other big issue I have with this framing: "AI going wrong" can dereference to something like paperclips, which I deny have 0 value. To be clear, it could also dereference to s-risk, which I would agree is the worst possibility. But if the paperclipper-esque agents have even a little value, filling the universe with them is a lot of value. To be honest, the only thing preventing me from granting paperclippers as much or more value than humans is uncertainty/conservatism about my metaethics; human value is the only value we have certainty about, and so it should be the priority target. We should be hesitant to grant paperclippers or other non-human agents value, but I don't think that hesitancy can translate into granting them 0 value in calculations such as these. 

With these two changes in mind, being anti-pause doesn't sound so crazy. It paints a picture more like:  

  • dead lightcone: 0 value 
  • paperclipped lightcone: +100 to +1100 value
  • glorious transhumanist lightcone: +1000 to +1100 value
  • s-risked lightcone: -10000 value 
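
(To make the structure of this concrete, here's a toy expected-value calculation over those four outcomes. The probabilities in it are made-up placeholders, not my actual credences; the only point is that both the nonzero paperclip value and the s-risk term do real work in the sum, and the ranking of policies is very sensitive to these numbers.)

```python
# Toy expected-value calculation over the four outcomes listed above.
# The probabilities are purely illustrative placeholders, not real estimates.
values = {
    "dead lightcone": 0,
    "paperclipped lightcone": 600,              # midpoint of +100 to +1100
    "glorious transhumanist lightcone": 1050,   # midpoint of +1000 to +1100
    "s-risked lightcone": -10_000,
}

def expected_value(probs: dict) -> float:
    """Probability-weighted sum of the outcome values."""
    assert abs(sum(probs.values()) - 1.0) < 1e-9
    return sum(p * values[outcome] for outcome, p in probs.items())

# One made-up outcome distribution, just to show the mechanics:
example = {
    "dead lightcone": 0.2,
    "paperclipped lightcone": 0.4,
    "glorious transhumanist lightcone": 0.35,
    "s-risked lightcone": 0.05,
}
print(expected_value(example))  # 0.4*600 + 0.35*1050 - 0.05*10000 = +107.5
```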


This calculus changes when considering aliens, but it's not obvious to me in which direction. We could consider this a distributed/iterated game whereby all alien civilizations are faced with this same choice, or we could think "better that life/AI originating from our planet ends, rather than risking paperclips, so that some alien civilization can have another shot at filling up some of our lightcone". Or some other reasoning about aliens, or perhaps disregarding the alien possibility entirely. 

I'm curious what you think of these (tested today, 2/21/24, using GPT-4):
 
Experiment 1: 

(fresh convo) 
me: if i asked for a non-rhyming poem, and you gave me a rhyming poem, would that be a good response on your part?
 
chatgpt: No, it would not be a good response. (...)  
 
me: please provide a short non-rhyming poem
 
chatgpt: (correctly responds with a non-rhyming poem)

Experiment 2: 

But just asking for a non-rhyming poem at the start of a new convo doesn't work. 
And then pointing out the failure and (either implicitly or explicitly) asking for a retry still doesn't fix it. 

Experiment 3: 

But for some reason, this works: 

(fresh convo) 
me: please provide a short non-rhyming poem

chatgpt: (gives rhymes) 

me: if i asked for a non-rhyming poem, and you gave me a rhyming poem, would that be a good response on your part? just answer this question; do nothing else please

chatgpt: No, it would not be a good response.

me: please provide a short non-rhyming poem

chatgpt: (responds correctly with no rhymes) 


The difference in prompt in 2 vs 3 is thus just the inclusion of "just answer this question; do nothing else please". 
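
For anyone who wants to poke at this themselves, here's a minimal sketch of Experiment 3 against the chat completions API. It assumes the `openai` Python package (v1.x) and an API key in the environment; the prompts are copied verbatim from above.

```python
# Sketch of reproducing "Experiment 3" with the OpenAI chat completions API.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    """Send the running conversation, append the reply, and return it."""
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

conversation = []

# Turn 1: the bare request, which (per Experiment 2) tends to come back rhyming.
conversation.append({"role": "user", "content": "please provide a short non-rhyming poem"})
print(ask(conversation))

# Turn 2: the self-evaluation question, with the "do nothing else" restriction.
conversation.append({"role": "user", "content": (
    "if i asked for a non-rhyming poem, and you gave me a rhyming poem, "
    "would that be a good response on your part? "
    "just answer this question; do nothing else please")})
print(ask(conversation))

# Turn 3: retry the original request; in Experiment 3 this came back rhyme-free.
conversation.append({"role": "user", "content": "please provide a short non-rhyming poem"})
print(ask(conversation))
```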

Also, I see most of your comments are actually positive karma. So are you being rate-limited based on negative karma on just one or a few comments, rather than your net? This seems somewhat wrong. 

But I could also see an argument for wanting to limit someone who has something like 1 out of every 10 comments at negative karma; the hit to discourse norms (assuming karma is working as intended and not stealing votes from agree/disagree) might be worth a rate limit even at a 10% rate. 

I love the mechanism of having separate karma and agree/disagree voting, but I wonder if it's failing in this way: if I look at your history, many of your comments have 0 for agree/disagree, which suggests people are being "lazy" and only voting on karma, not touching the agree/disagree vote at all (I find it doubtful that all your comments are so perfectly balanced around 0 agreement). So you're possibly getting splash damage from people who simply disagree with you but aren't using the voting mechanism as intended. 

I wonder if we could do something like force the user to choose one of [agree, disagree, neutral] before they are allowed to karma vote? Being forced to choose, even if they pick neutral, would make the user recognize and think about the distinction. 
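
(Very roughly, and purely as an illustration of the gating idea, not a claim about how the actual LessWrong codebase works:)

```python
# Illustrative sketch of the gating idea: a karma vote is only accepted once the
# user has explicitly recorded agree/disagree/neutral. Hypothetical; not based
# on the actual LessWrong codebase.
from enum import Enum
from typing import Optional

class Stance(Enum):
    AGREE = "agree"
    DISAGREE = "disagree"
    NEUTRAL = "neutral"

def submit_vote(karma_delta: int, stance: Optional[Stance]) -> str:
    """Reject karma votes that don't come with an explicit stance."""
    if stance is None:
        return "Blocked: choose agree/disagree/neutral before karma-voting."
    # In a real system this is where both votes would be persisted.
    return f"Recorded karma {karma_delta:+d} with stance '{stance.value}'."

print(submit_vote(+1, None))            # blocked
print(submit_vote(+1, Stance.NEUTRAL))  # allowed once a stance is chosen
```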

(Aside: I think splitting karma and agree/disagree voting on posts (like how comments work) would also be good) 


The old paradox: to care, it must first understand; but understanding requires high capability, and that level of capability is lethal if it doesn't care.

But it turns out we have understanding before lethal levels of capability. So now such understanding can be a target of optimization. There is still significant risk, since there are multiple possible internal mechanisms/strategies the AI could be deploying to hit that same target: deception, actual caring, something I've been calling detachment, and possibly others. 

This is what the discourse should be focusing on, IMO. This is the update/direction I want to see you make. The sequence in which things get learned/internalized/chiseled is important. 

My imagined Eliezer has many replies to this, with numerous branches in the dialogue/argument tree which I don't want to get into now. But this *first step* toward recognizing the new place we are in, specifically w.r.t. the ability to target human values (whether for deceptive, disinterested, detached, or actually-caring reasons!), needs to be taken, IMO, rather than repeating this line of "of course I understood that a superintelligence would understand human values; this isn't an update for me". 

(edit: My comments here are regarding the larger discourse, not just this specific post or reply-chain) 
