quivering alien chrysalis
https://twitter.com/thezahima
this account is pretty good, but not always up to the standard of "shaping the world" (you will have to scroll to get past their coverage of this same batch of OpenAI-related emails): https://x.com/TechEmails
their substack: https://www.techemails.com/
While you nod to 'politics is the mind-killer', I don't think the right lesson is being drawn from it, or at least not with enough emphasis.
Whether one is an accelerationist, Pauser, or an advocate of some nuanced middle path, the prospects/goals of everyone are harmed if the discourse-landscape becomes politicized/polarized. All possible movement becomes more difficult.
"Well we of course don't want that to happen, but X ppl are in power, so it makes sense to ask how X ppl tend to think and cater our arguments to them"
If your argument is taking advantage of features of {group of ppl X} qua X, then it is almost unavoidably going to run counter to some Y qua Y (either as a direct consequence of the arguments and/or because Nuance cannot survive public exposure), and if it isn't, then why couldn't the argument have been made completely apolitically to begin with?
I just continue to think that any mention, literally at all, of ideology or party is courting discourse-disaster for all, again no matter what specific policy one is advocating for. Do we all remember what happened with covid masks? Or what is currently happening with discourse surrounding elon? Nuance just does not survive public exposure, and nobody is going to fix that in the few years we have left. (and this is a public document). The best way forward continues to be apolitical good arguments. Yes those arguments are going to be sent towards those who are in power at any given time, but you can do that without routing through ideology.
To touch, even in passing, on ideology/alliance (ex: the c word included in the title of this post) is to risk the poison/mindkill spreading in a way that is basically irreversible, because correcting it (other than with comments like this one, simply calling to Stop Referencing Ideology) usually involves Referencing An Ideology. Like a bug stuck in a glue trap, the discourse places yet another limb into the glue in a vain attempt to push itself free.
> especially if you're woken up by an alarm
I suspect this is a big factor. I haven't used an alarm to wake up for ~2 years and can't recall the last time I remembered a dream. Without an alarm you're left in a half-awake state for some number of minutes before actually waking/getting up, which is probably when one forgets.
I largely don't think we're disagreeing? My point didn't depend on a distinction between 'raw' capabilities vs 'possible right now with enough arranging' capabilities, and was mostly: "I don't see what you could actually delegate right now, as opposed to operating in the normal paradigm of ai co-work the OP is already saying they do (chat, copilot, imagegen)", and then your personal example is detailing why you couldn't currently delegate a task. Sounds like agreement.
Also I didn't really consider your example of:
> "email your current blog post draft to the assistant for copyediting".
to be outside the paradigm of AI co-work the OP is already doing, even if it saves them time. Scaling up this kind of work to the point of $1k would seem pretty difficult and also outside what I took to be their question, since this amounts to "just work a lot more yourself, and thus the proportion of work you currently use AI for will go up till you hit $1k". That's a lot of API credits for such normal personal use.
...
But back to your example, I do question just how much of a leap of insight/connection would be necessary to write the standard Gwern mini-article. Maybe in this exact case you know there is enough latent insight/connection in your clippings/writings, and the LLM corpus, and possibly some rudimentary Wikipedia/tool use, such that your prompt providing the cherry-on-top connecting idea ('spontaneous biting is prey drive!') could actually produce a Gwern-approved mini-essay. You'd know the level of insight-leap for such articles better than I, but do you really think there'd be many such things within reach for very long? I'd argue an agent that could do this semi-indefinitely, rather than just clearing your backlog of maybe like 20 such ideas, would be much more capable than we currently see, in terms of necessary 'raw' capability. But maybe I'm wrong and you regularly have ideas that sufficiently fit this pattern, where the bar to pass isn't "be even close to as capable as Gwern", but: "there's enough lying around to make the final connection, just write it up in the style of Gwern".
Like clearly something that could actually write any Gwern article would have at least your level of capability, and would foom or something similar; it'd be self-sustaining. Instead what you're describing is a setup where most of the insight, knowledge, and connection is already there, and is an instance of what I'd argue is a narrow band of possible tasks that could be delegated without necessitating {capability powerful enough to self-sustain and maybe foom}. I don't think this band is very wide; there aren't many tasks I can think of that fit this description. But I failed to think of your class of example, or eggsyntax's below example of call center automation, so perhaps I'm simply blanking on others, and the band is wider than I thought.
But if not, then your original suggestion of, basically: "first think of what you could delegate to another human" seems a fraught starting point, because the supermajority of such tasks would require capability sufficient for self-sustaining, ~foomy agents, but we don't yet observe any such; our world would look very different.
For what workflows/tasks does this 'AI delegation paradigm' actually work, though, aside from research/experimentation with AI itself? Like Janus's apparent experiments with running an AI discord I'm sure cost a lot, but the object-level work there is AI research. If AI agents could be trusted to generate a better signal/noise ratio by delegation than by working alongside the AI (where the bottleneck is the human)... isn't that the singularity? They'd be self-sustaining.
Thus having 'can you delegate this to a human' be a prerequisite test of whether one's workflow admits of delegation at all, before trying to use AI, doesn't make sense to me? If we could do that we'd be fooming right now.
Edit: if the point is, implicitly: "yes of course directly delegating things to AI is going to fail, but nonetheless this serves as a useful mental prompt for coming up with ways to actually use AI", then I think this re-routes to what I took as the OP's question: what actual tasks? Tasks that aren't things we're doing already like chat, imagegen, or code completion, where again the bottleneck is the human and so the only way to increase spending there is to increase one's workload. Perhaps one could say: "well, there are ways to leverage even just chat more/better, such that you aren't increasing your total hours working, but your AI spend is actually increasing"; then I'd ask: what are those ways?
okay, also, while I'm talking about this:
the goal is energy/new-day-magic
so one sub goal is what the OP and my previous reply were talking about: resetting/regaining that energy/magic
the other corresponding sub goal is: retaining the energy you already have
to that end, I've found it very useful to take very small breaks before you feel the need to do so. this is basically the pomodoro technique. I've settled on 25 minute work sessions with 3 minute breaks in between, where I get up, walk around, stretch, etc. Not on twitter/scrolling/etc.
I'm very interested in things in this domain. it's interesting that you correctly note that uberman sleep isn't a solution, and naps don't quite cut it, so your suggested/implied synthesis/middle-ground of something like "polyphasic but with much more sleep per sleep-time-slice" is very interesting.
given this post is now 2 years old, how did this work out for you?
in a similar or perhaps more fundamental framing, the goal is to be able to effectively "reset"; to reattain, if possible, that morning/new-day magic. to this end, the only thing I've found that even comes close to the natural reset of sleep is a shower/bath. in a pinch, washing/dunking the head/face in water can work, but less well. for this reason I often take two showers a day. usually the pattern is: walk+workout, shower, work, get tired, walk outside for 30ish minutes, shower, work some more. the magic isn't fully restored for that second session, but more than if I just walk without the shower.
if the 'full magic' of true/natural morning can get me 4 hours of Hard Work, then the shower-reset can maybe give me another 30mins to an hour. more work is performed than just Hard Work, but I think you know what I mean.
some people will say workouts/exercise help, but for me they don't in themselves. i.e., in the more natural setting of "part of the normal waking-up and/or general health routine", of course exercise is a must. but from this framing of "how to get more of the morning/new-day magic", I've found more exercise is counterproductive. even trying to just shift around *when in the day* the exercise is done is counterproductively draining for me; morning is best. not to mention that delaying the workout is a great way to never actually work out, since I don't really want to do it at all; the chance I do it at all is maximized in the morning.
an all-around handyman (the Essential Craftsman on YouTube) talking about how to move big/cumbersome things without injuring yourself:
the same guy, about using a ladder without hurting yourself:
He has many other "tip" style videos.
In your framing here, the negative value of AI going wrong is due to wiping out potential future value. Your baseline scenario (0 value) thus assumes away the possibility that civilization permanently collapses (in some sense) in the absence of some path to greater intelligence (whether via AI or whatever else), which would also wipe out any future value. This is a non-negligible possibility.
The other big issue I have with this framing: "AI going wrong" can dereference to something like paperclips, which I deny have 0 value. To be clear, it could also dereference to mean s-risk, which I would agree is the worst possibility. But if the paperclipper-esque agents have even a little value, filling the universe with them is a lot of value. To be honest, the only thing preventing me from granting paperclippers as much value as humans, or more, is uncertainty/conservatism about my metaethics; human-value is the only value we have certainty about, and so should be a priority as a target. We should be hesitant to grant paperclippers or other non-human agents value, but I don't think that hesitancy can translate into granting them 0 value in calculations such as these.
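To make those two changes concrete, here's a rough expected-value sketch; every symbol below is a placeholder I'm introducing for illustration, not a figure from the post:

```latex
% Placeholder symbols (mine, for illustration only):
%   V_good = value of a flourishing future
%   V_clip = small-but-nonzero value of a paperclipped universe
%   p_doom = probability that building AI goes wrong (paperclips, not s-risk)
%   p_fail = probability of permanent civilizational collapse if we never build it
\[
\mathbb{E}[\text{build}] = (1 - p_{\text{doom}})\, V_{\text{good}} + p_{\text{doom}}\, V_{\text{clip}}
\]
\[
\mathbb{E}[\text{pause}] = (1 - p_{\text{fail}})\, V_{\text{good}} + p_{\text{fail}} \cdot 0
\]
% Difference: E[build] - E[pause] = (p_fail - p_doom) * V_good + p_doom * V_clip.
% With V_clip = 0 and p_fail = 0 (the framing I'm objecting to), building can only lose;
% with V_clip > 0 and p_fail > 0, the comparison actually depends on the numbers.
```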
With these two changes in mind, being anti-pause doesn't sound so crazy. It paints a picture more like:
This calculus changes when considering aliens, but it's not obvious to me in which direction. We could consider this a distributed/iterated game whereby all alien civilizations are faced with this same choice, or we could think "better that life/AI originating from our planet ends, rather than risking paperclips, so that some alien civilization can have another shot at filling up some of our lightcone". Or some other reasoning about aliens, or perhaps disregarding the alien possibility entirely.
Haven't finished reading this, but I just want to say how glad I am that LW 2.0 and everything related to it (Lightcone, etc.) happened. I came across LW at a time when it seemed "the diaspora" was just going to get more and more dispersed; that "the scene" had ended. I feel disappointed/guilty about how little I did to help this resurgence, like watching from the sidelines as a good thing almost died but then saved itself.
How I felt at the time of seemingly peak "diaspora" actually somewhat reminds me of how I feel about CFAR now (though to a much lesser extent than LW): I think there is still some activity, but it seems mostly dead; a valiant attempt at a worthwhile problem. But there are many Problems and many Good Things in the world, and limited time, and am I really going to invest time figuring out whether this particular Thing is truly dead? Or start up my own rationality-training-adjacent effort? Or some other high-leverage Good Thing? Generic EA? A giving pledge? The result is that I carry on trying to do what I thought was most valuable, perversely hoping for some weird mix of "that Good Thing was actually dead or close to it; it's good you didn't jump in, as you'd be swimming against the tide" vs "even if not dead, it wasn't/isn't a good lever in the end" vs "your chosen alternative project/lever is a good enough guess at doing good; you aren't responsible for the survival of all Good Things".
And tbh I'm a little murky on the forces that led to the LW resurgence, even if we can point to specific clear boosts like EY's recent posts. But I'll finish reading the post to see if my understanding changes.