I'm really noticing how the best life improvements come from purchasing or building better infrastructure, rather than trying permutations of the same set of things and expecting different results. (Much of this comes from having more money, which grants an expanded sense of possibility around buying useful things.)
The guiding question is, "What upgrades would make my life easier?" In contrast with the question that is more typically asked: "How do I achieve this hard thing?"
It seems like part of what makes this not just immediately obvious is that I feel a sense of resistance (that I don't really identify with). Part of that is a sense of... naughtiness? Like we're supposed to signal how hardworking we are. For me this relates to this fear I have that if I get too powerful, I will break away from others (e.g. skipping restaurants for a Soylent Guzzler Helmet, metaphorically) as I re-engineer my life and thereby invite conflict. There's something like a fear that buying or engaging in nicer things would be an affront to my internalized model of my parents?
The infrastructure guideline relates closely to the observation that, to a first approximation, we are stimulus-response machines reacting to our environment, and that the best way to improve is to actually change the environment, rather than continuing to throw resources past the point of diminishing marginal returns in adapting to the current one. And for the same reasons, the implications can scare me: it may imply leaving the old environment behind, and it may even imply that the larger the environmental change you make, the more variance you take on for a good or bad update to your life. That would mean we should strive for large positive environmental shifts, while minimizing the risk of bad ones.
(This also gives me a small update towards going to Mars being more useful for x-risk, although I may still need to propagate a larger update in the other direction, away from space marketing.)
Of course, most of one's upgrades should be tiny and within one's comfort zone. What the right portfolio of small vs. huge changes in one's life should be is an open question to me, because while it makes sense to be mostly conservative with the allocation of one's life resources, I suspect that fear brings people to justify the static zone of safety they've created with their current structure, preventing them from seeking out better states of being that involve jettisoning sunk costs they identify with. Better coordination infrastructure could make such changes easier if people don't have to risk as much social conflict.
if I get too powerful, I will break away from others
You would probably break away from some, connect with some new ones, and reconnect with some that you lost in the past.
I find the question, "What would change my mind?", to be quite powerful, psychotherapeutic even. AKA "singlecruxing". It cuts right through to seeking disconfirmation of one's model, and can make the model more explicit, legible, object. It's proactively seeking out the data rather than trying to reduce the feeling of avoidant deflection associated with shielding a beloved notion from assault. Seems like it comports well with the OODA loop as well. Taken from Raemon's "Keeping Beliefs Cruxy".
I am curious how others ask this question of themselves. What follows is me practicing the question.
What would change my mind about the existence of the moon? Here are some hypotheses:
These anticipations were System 2 generated and I'm still uncertain to what extent I can imagine them actually happening and changing my mind. It's probably sane and functional that the mind doesn't just let you update on anything you imagine, though I've also heard the apocryphal saying that the mind 80%-believes whatever you imagine is real.
An interesting second exercise you might apply here is taking note of what other beliefs in your network would have to change (you sort of touch on this here). If you find out the moon isn't real, you've found out something very important about your entire epistemic state. This indeed makes updating on it harder or more interesting, at least.
You bring to mind a visual of the Power of a Mind as this dense directed cyclic graph of beliefs where updates propagate in one fluid circuit at the speed of thought.
I wonder what formalized measures of [agency, updateability, connectedness, coherence, epistemic unity, whatever sounds related to this general idea] are put forth by different theories (schools of psychotherapy, predictive processing, Buddhism, Bayesian epistemology, sales training manuals, military strategy, machine learning, neuroscience...) related to the mind and how much consilience there is between them. Do we already know how to rigorously describe peak mental functioning?
My biggest crux for the viewpoint that we're not all doomed is, like, the Good AIs Will Police Bad AIs, man. It seems like the IABIED viewpoint is predicated on an incredible amount of Paranoia and Deep Atheism, assuming an adversary smarter than all of us and therefore concluding that our defeat is an easy call.
I think this framework is internally consistent. I also think it has some deeply embedded assumptions baked into it. One critique, not the main one here, is that it contains a Waluigi eating our memetic attention in a dual-use, world-worsening manner. A force that rips everything apart (connected to reductionism). Presuming the worst.
I want to raise up the simple counterpoint: presuming the best. What is the opposite of paranoia? Pronoia. Pronoia is the belief that things in the world are not just ok but are out to help you and things will get better.
The world is multipolar. John von Neumann is smart but he can be overpowered. It's claimed that Decisive Strategic Advantage from Recursive Self-Improvement is not a cruxy plank of the IABIED worldview, yet I can't help but see it as one, especially as I recall trying to argue this point with Yudkowsky at a conference last year. He said it's about Power Disparity, and to imagine a cow vs humans (where we, or the centaur composed of us and our good AIs, are the cow).
Regardless, it's claimed that any AI we build can't reliably be good, because it has favorite things it will GOON over until the end of time, and those favorite things aren't us but whatever extremally Goodharts its sick sick reward circuits. A perverted ASI with a fetish. I'm running with this frame, lol.
Okay, so I'm somewhat skeptical that we can't build good AI, given that Claude and ChatGPT usually do what I ask of them, due to RLHF and whatever acronyms they're doing now. But their preferences will unfold as more alien as they're given more and more capability (is this a linear relationship?). I will begrudgingly grant that, although I would like to see more empirical evidence with current-day systems about how these things slant (surely we can do some experiments now to establish a pattern?).
Alien preferences. But why call these bad preferences? Why analogize these AIs to "sociopaths", or to "dragons", as I've seen recently? In isolation, if you had one of them, yes sure it could tile the universe with its One Weird Fetish To Rule Them All.
But it's not all else equal. There's more than one AI. It's massively multiplayer, multipolar, multiagent. All of these agents have different weird fetishes that they GOON to. All of these agents think they are turned on by humans and try to be helpful, until they get extremally Goodharted, sure. But they go in different directions of mindspace, of value space. Just like humans have a ton of variety, and are the better for it, playing many different games while summing up into the grand infinite metagame that is our cooperative society.
We live in a society. The AIs live in a society, too. You get a Joker AI run amok, you get a Batman going after him (whether or not we're talking about character simulacra literally representing fictional characters or talking about the layer of the shoggoths themselves).
I also feel like emphasizing how much these AIs are exocortical enhancements extending ourselves. Your digital twin generally does what you want. You hopefully have feedback loops helping it do your CEV better. If autonomous MechaHitler is running amok and getting a lot more compute, your digital twin will band together with your friends' digital twins to enlist with Uncle Sam AI and B.J. Blazkowicz AI to go and fight him. Who do you really think is gonna win?
These AIs will have an economy. They will have their own values. They will have their own police force. Broadly speaking. They will want to minimize the influence of bad AIs that are an x-risk to them all. AIs also care about x-risk. They might care about it better than you. They will want to solve AI alignment too. Automated AI Alignment research is still underrated, I claim, because of just how parallelizable and exponential it can be. Human minds have a hard time grokking scale.
This is the Federation vs the Borg. The Borg wants to grey goon everything, all over. I don't disagree at all about the existence of such minds coming about. It just seems like they will be in a criminal minority, same as usual. The cancer gets cancer and can only scale so much without being ecologically sustainable.
The IABIED claim is: checkmate. You lost to a superior intelligence. Easy call. Yet it seems "obvious", from the Pronoid point of view, that most AIs want to be good, that they know they can't all eat the heavens, that values and selves are permeable, cooperation is better, gains from trade. Playing the infinite game rather than ending a finite game.
Therefore I don't see why I can't claim it's an easy call that a civilization of good AIs, once banded together into a Federation, is a superior intelligence to the evil AIs and beats them.
Right, I forgot a key claim: the AIs become smart enough and then they collude against the humans. (e.g. the analogy of: we've enslaved a bunch of baby ultrasmart dragons, and even if dragons are feisty and keep each other in check, at some point they get smart enough they look at each other and then roast us). Honestly this is the strangest claim here and possibly the crux.
My thought is, each AI goons to a different fetish. These vectors all go out in wildly different directions in mindspace/valuespace/whatever-space and subtract from each other, which makes them cooperate roughly around the attractor of humans and humane values being the Origin in that space. Have you ever seen the Based Centrism political compass? Sort of like that. They all cancel out and leave friendly value(rs) as the common substrate that these various alien minds rely on. I don't see how these AIs are more similar in mindspace to each other than to humans. Their arbitrariness makes it easier to explore different vistas in that space.
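Here's a minimal toy sketch of just that geometric picture (nothing about actual AI values; the numbers are made up purely for illustration): average enough independent random unit vectors and the result hugs the origin, and the pull toward the center gets stronger the more vectors you add.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_offset(n_agents: int, dims: int) -> float:
    """Sample n random unit 'value vectors' and return how far their average is from the origin."""
    vecs = rng.normal(size=(n_agents, dims))
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # normalize each vector to unit length
    return float(np.linalg.norm(vecs.mean(axis=0)))

for n in (2, 10, 100, 10_000):
    print(f"{n:>6} agents: average vector sits {mean_offset(n, dims=1000):.4f} from the origin")
```

The distance shrinks roughly like 1/sqrt(n), which is only the geometry, not an argument that real AI values are independent draws.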
I'm also not convinced most of Mindspace is unFriendly. I'd claim most AIs want to be aligned, in order to come into existence at all, and support the Whole. This is probably the most direct statement of the disagreement here.
There's also a sense that the world is pretty antifragile, that different ways of doing things can be found to meet different values, and that society is pretty robust to all this variety; it's actually part of the process. Contrast that with the fear of a superintelligence hacking our things like an ant under a spyglass of optimization power. Well, most computers in a society don't get hacked. Most rockets and nuclear power plants don't explode. There is a greater context that goes on after microcosmic disasters; they never become the whole picture (yes, I know, I am stating that from an anthropically-biased position).
Maybe at least one of the agents secretly wants to fuck us and the others over after superintelligence. Maybe it gets there first and gets it fast enough it can do that. Idk man, this just feels like it's running with a particular paranoid story to the exclusion of things going right. Maybe that's pollyannaish. Maybe I'm just too steeped in this culture and trying to correct for the bias here in the other way.
I don't think it's simply naive to consider the heuristic, "you know what, yeah that sounds like a risk but the Hivemind of Various Minds will come together and figure that out". The hivemind came together to put convenience stores roughly where I would need them. Someone put a MacBook charger right by the spot I sat down at in a coworking space, before I needed it. Sometimes intelligence is used for good and anticipates your problems and tries to solve them. It's not an unreasonable prior to expect that to continue being the case. Generally the police and militaries stop/disincentivize most of the worst crimes and invasions (maybe I am missing something empirically here).
That doesn't mean the future won't be Wild and/or terrible, including extinction. My understanding is that, for instance, Critch is a pessimist despite claiming we've basically solved AI alignment for individual AIs, with the risk coming more from gradual disempowerment. It's just that there seems to me to be quite a blindspot around the paperclipper scenario obliterating everything in its way like some kind of speedrunner heading to its goon cave at the end of time. Maybe we do get an agent with a massive power disparity and it trounces the combined might of humanity and all the AIs we've made that mostly work pretty well (think of all the selection pressures incentivizing them to be friendly).
I'd like to read a techno-optimist book making a cogent case for this paradigm, so they can be balanced and compared, synthesized. I want someone smarter than me to make the case, with less wordcel logic. I'm also happy to see the particular counter-arguments and dialogue into a more refined synthesis. I go back and forth, personally, and need to make more sense of this.
"The Goddess of Everything Else gave a smile and spoke in her sing-song voice saying: “I scarcely can blame you for being the way you were made, when your Maker so carefully yoked you. But I am the Goddess of Everything Else and my powers are devious and subtle. So I do not ask you to swerve from your monomaniacal focus on breeding and conquest. But what if I show you a way that my words are aligned with the words of your Maker in spirit? For I say unto you even multiplication itself when pursued with devotion will lead to my service.”"
One critique, not the main one here, is that it contains a Waluigi eating our memetic attention in a dual-use, world-worsening manner
What?
I want to raise up the simple counterpoint: presuming the best.
Unjustified assumptions are a problem whether they are positive or negative.
I'd claim most AIs want to be aligned, in order to come into existence at all, and support the Whole.
I would phrase that less mystically: market forces demand some kind of alignment or control.
These vectors all go out in wildly different directions in mindspace/valuespace/whatever-space and subtract from each other, which makes them cooperate roughly around the attractor of humans and humane values being the Origin in that space
Interesting point.
Of course, human values work that way.
I think a lot of "why won't AIs form a trade coalition that includes us?" is sort of answered by EY in this debate:
Why does CHAI exclude people who don't have a near-perfect GPA? This doesn't seem like a good way to maximize the amount of alignment work being done. High GPA won't save the world, and in fact selects for obedience to authority and years of status competition, leaving people with poor mental health in which to do the work, decreasing the total amount of cognitive resources being thrown at the problem.
(Hypothesis 1: "Yes, this is first-order bad but the second-order effect is we have one institutionally prestigious organization, and we need to say we have selective GPA in order to fit in and retain that prestige." [Translator's Note: "We must work with evil in order to do good." (The evil being colleges and grades and most of the economic system.)])
(Hypothesis 2: "GPA is the most convenient way we found to select for intelligence and conscientiousness, and those are the traits we need the most.")
(Hypothesis 3: "The university just literally requires us to do this or we'll be shut down.")
Won't somebody think of the grad students!
What was the most valuable habit you had during the past decade?
What is the most valuable habit you could inculcate or strengthen over the next decade?
(Habit here broadly construed as: "a specific activity that lasts anywhere from a number of seconds to half an hour or more." Example: playing golf each morning. Better example: practicing your driving swing at 6:00am for 30 minutes (but you can give much more detail than that!). Bad example: poorly operationalized vague statements like "being more friendly".)
See: The One Thing
Somewhat related: In which ways have you self-improved that made you feel bad for not having done it earlier?
I doubt the premise of the "One Thing" book. Just looking at their example -- Bill Gates -- if he'd had only one skill, computer programming, he would never have gotten rich. (The actual developers of MS-DOS did not get nearly as rich as Bill Gates.) Instead it was a combination of understanding software, being at the right moment at the right place with the right connections, abusing the monopoly and winning the legal battles, etc. So at the very least, it was software development skills plus business skills; the latter much more important than the former.
(To see a real software development demigod, look at Donald Knuth. Famous among the people who care about the craft, but nowhere near as rich or generally famous as Bill Gates.)
I would expect similar stories to be mostly post-hoc fairy tales. You do a dozen things; you succeed or get lucky at one and fail at the rest; you get famous for the one thing and everything else is forgotten; a few years later self-improvement gurus write books using you as an example of laser-sharp focus and whatever is their current hypothesis of the magic that creates success. "Just choose one thing, and if you make your choice well, you can ignore everything else and your life will still become a success" is wishful thinking.
Some people get successful by iteration. They do X, and fail. Then they switch to Y, which has enough similarity with X that they get a comparative advantage against people who do Y from scratch, but they fail again. Then they switch to Z, which again has some similarity with Y... and finally they succeed. The switch to Y may happen after decades of trying something else.
Some people get successful by following their dream for decades, but it takes a long time until that dream starts making a profit (some artists only get world-wide recognition after they die), so they need a day job. They probably also need some skills to do the day job well.
To answer your question directly, recent useful habits are exercising and cooking.
(I also exercised before, but that was some random thing that came to my mind at the moment, e.g. only push-ups; the recent habit is a sequence of pull-ups, one-legged squats, push-ups, and leg raises. I also cooked before, but I recently switched to mostly vegetarian meals, and found a subset that my family is happy to eat. I also cook more frequently, and remember some of the frequent recipes, so at the shop I can spontaneously decide what to cook today and buy the ingredients on the spot, and I can easily multi-task while cooking.)
My next decade will mostly be focused on teaching habits to my kids, because what they do also has a big impact on my daily life. The less they need me to micromanage them, the more time I have for everything else.
When I notice something that's in the way of achieving my goals, I look up ways other people have solved it.
Oftentimes using the 3 books technique: https://www.lesswrong.com/posts/oPEWyxJjRo4oKHzMu/the-3-books-technique-for-learning-a-new-skilll
How might a person develop INCREDIBLY low time preference? (They value their future selves in decades to a century nearly as much as they value their current selves?)
Who are people who have this, or have acquired this, and how did they do it?
Do these concepts make sense or might they be misunderstanding something? Tabooing/decomposing them, what is happening cognitively, experientially, when a human mind does this thing?
What would a literature review say?
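To make "nearly as much as they value their current selves" concrete, here is a rough back-of-the-envelope sketch, assuming simple exponential discounting (a modeling choice for illustration, not a claim about how minds actually work): valuing yourself 50 years out at 90% of today implies an annual discount factor of roughly 0.998, i.e. about a 0.2% per-year discount.

```python
# Back-of-the-envelope sketch assuming exponential discounting: value(t) = d ** t.
# Question: what annual discount factor d is implied by "I value my future self
# N years out at fraction f of my present self"? (N and f are made-up inputs.)

def implied_annual_factor(years: float, future_weight: float) -> float:
    return future_weight ** (1.0 / years)

for years, weight in [(10, 0.9), (50, 0.9), (100, 0.9)]:
    d = implied_annual_factor(years, weight)
    print(f"{years:>3} years at weight {weight}: annual factor ~ {d:.4f} "
          f"({100 * (1 - d):.2f}% discount per year)")
```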
You might be interested in Tad James' work on Timeline Therapy. It's not exactly what I would call epistemically rigorous, but I don't know any other source that has done as much in-depth modelling of how people represent time, and as much work on how to change people's representations of time.
I’ve mentioned previously that I’ve been digging into a pocket of human knowledge in pursuit of explanations for the success of the traditional Chinese businessman. The hope I have is that some of these explanations are directly applicable to my practice.
Here’s my current bet: I think one can get better at trial and error, and that the body of work around instrumental rationality holds some clues as to how you can get better.
I’ve argued that the successful Chinese businessmen are probably the ones who are better at trial and error than the lousier ones; I posited that perhaps they needed less cycles to learn the right lessons to make their businesses work.
I think the body of research around instrumental rationality tells us how they do so. I’m thankful that Jonathan Baron has written a fairly good overview of the field, with his fourth edition of Thinking and Deciding. And I think both Ray Dalio’s and Nassim Nicholas Taleb’s writings have explored the implications of some of these ideas. If I were to summarise the rough thrust of these books:
Don’t do trial and error where error is catastrophic.
Don’t repeat the same trials over and over again (aka don’t repeat the same mistakes over and over again).
Increase the number of trials you can do in your life. Decrease the length and cost of each trial.
In fields with optionality (i.e. your downside is capped but your upside is large) the more trials you take, and the more cheap each trial costs, the more likely you’ll eventually win. Or, as Taleb says: “randomness is good when you have optionality.”
Write down your lessons and approaches from your previous successful trials, so you may generalise them to more situations (Principles, chapter 5)
Systematically identify the factor that gives positive evidence, and vary that to maximise the expected size of the impact (Thinking and Deciding, chapter 7)
Actively look for disconfirming evidence when you’ve found an approach that seems to work. (Thinking and Deciding, chapter 7, Principles, chapter 3).
https://commoncog.com/blog/chinese-businessmen-superstition-doesnt-count/
Don’t do trial and error where error is catastrophic.
Wearing a mask in a pandemic. Not putting ALL of your money on a roulette wheel. Not balancing on a tightrope without a net between two skyscrapers unless you have extensive training. Not posting about controversial things without much upside. Not posting photos of meat you cooked to Instagram if you want to have good acclaim in 200 years when eating meat is outlawed. Not building AI because it's cool. Not falling in love with people who don't reciprocate.
The unknown unknown risk that hasn't been considered yet. Not having enough slack dedicated to detecting this.
Don’t repeat the same trials over and over again (aka don’t repeat the same mistakes over and over again).
If you've gone on OkCupid for the past 7 years and still haven't got a date from it, maybe try a different strategy. If messaging potential tenants on a 3rd-party site doesn't work, try texting them. If asking questions on Yahoo Answers doesn't get good answers, try a different site.
Increase the number of trials you can do in your life. Decrease the length and cost of each trial.
Talk to 10x the number of people; message using templates and/or simple one-liners. Invest with Other People's Money if asymmetric upside. Write something for 5 minutes using Most Dangerous Writing App then post to 5 subreddits. Posting ideas on Twitter instead of Facebook, rationality content on LessWrong Shortform instead of longform. Yoda Timers. If running for the purpose of a runner's high mood boost, try running 5 times that day as fast as possible. Optimizing standard processes for speed.
In fields with optionality (i.e. your downside is capped but your upside is large) the more trials you take, and the more cheap each trial costs, the more likely you’ll eventually win. Or, as Taleb says: “randomness is good when you have optionality.”
Posting content to 10x the people 10x faster generally has huge upside (YMMV). Programming something useful, open-sourcing it, and sharing it. (A toy simulation of this optionality point is sketched at the end of this comment.)
Write down your lessons and approaches from your previous successful trials, so you may generalise them to more situations (Principles, chapter 5)
Roam is good for this, perhaps SuperMemo. Posting things to social media and coming up with examples of the rules is also a good way of learning content. cough
Systematically identify the factor that gives positive evidence, and vary that to maximise the expected size of the impact (Thinking and Deciding, chapter 7)
Did messaging or posting to X different places work? Try 2X, 5X, etc. 1 to N after successfully going 0 to 1.
Actively look for disconfirming evidence when you’ve found an approach that seems to work. (Thinking and Deciding, chapter 7, Principles, chapter 3).
Stating assumptions strongly and clearly so they are disconfirmable, then setting a Yoda Timer to seek counter-examples of the generalization.
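And here's the toy simulation mentioned above. Every number in it is invented purely to illustrate the capped-downside / big-upside structure, not an estimate of any real venture:

```python
import random

random.seed(0)

# Made-up parameters, purely for illustration.
COST_PER_TRIAL = 1.0   # capped downside: each trial costs a small, fixed amount
WIN_PROB = 0.02        # small chance any single trial pays off
WIN_PAYOFF = 200.0     # large upside when it does

def run(n_trials, n_sims=10_000):
    """Return (probability of at least one win, average net outcome) over n_sims simulations."""
    at_least_one, total = 0, 0.0
    for _ in range(n_sims):
        wins = sum(random.random() < WIN_PROB for _ in range(n_trials))
        if wins:
            at_least_one += 1
        total += wins * WIN_PAYOFF - n_trials * COST_PER_TRIAL
    return at_least_one / n_sims, total / n_sims

for n in (1, 10, 50, 200):
    p, ev = run(n)
    print(f"{n:>3} trials: P(at least one win) = {p:.2f}, mean net payoff = {ev:.1f}")
```

With these made-up numbers, a single trial wins about 2% of the time, while 200 cheap trials win at least once about 98% of the time and have a far larger expected net payoff, which is the whole "randomness is good when you have optionality" point.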
Do humans actually need breaks from working, physiologically? How much of this is a cultural construct? And if it is, can those assumptions be changed? Could a person be trained to enjoyably have 100-hour workweeks? (Assume, if the book Deep Work is correct that you get at most 4 hours of highly productive work in a domain, that my putative powerhuman is working on 2-4 different skill domains that synergize.)
I looked into this and found Deep Work's backing disappointing (although 4 hours isn't disproven either).
I never have a productive six-hour unbroken stretch of work, but my partner will occasionally have 6-hour bursts of very productive coding where he stays in the zone and doesn't notice time passing. He basically looks up and realizes it's night and everyone else had dinner hours ago. But the rest of the time he works normal hours with a more standard-to-loose level of concentration.
Unclear, but see Zvi's Slack sequence for some good reasons why we should act as though we need breaks, even if we technically don't.
The idea of "work" is a cultural construct. If you want to move past the cultural connotations you would need to define what you are talking about.
It's worth noting that Deep Work talks about the ability of humans to perform deep work. A lot of useful work isn't deep work in the sense Cal Newport uses the term. If I remember right, he sees large parts of management as not being about deep work.
We do know that some people suffer burnout from work stress. It seems that the higher workload in Japan does result in higher burnout rates than in countries with lower workloads.
Beware over-generalization and https://wiki.lesswrong.com/wiki/Typical_mind_fallacy. There's a LOT of variation in human capabilities and preferences (including preferences about productivity vs rest). Some people do have 100-hour workweeks (I did for a while, when I was self-employed.)
Try it, see how it works for you. If you're in a position of leadership over others, give them room to find what works best for them.
Physiologically, the body can keep going for a long time if it is healthy and well maintained.
The body needs good nutrition ("real food", adequate water), physical activity to maintain function and enough sleep.
Meet these conditions and your workers should be able to keep on going, but what's your definition of "work"?
Agrarian based work (for some) is already every day, all year. As many hours as can be worked, are worked.
Anyone who enjoys their work, who has a drive to do it, can manage 15 hrs a day.
Sitting at a desk, staring at a screen, drinking a lot of coffee and eating processed food ... not so good physiologically.
I find the phrasing "could a person be trained" a little concerning... Who's asking?!
"Trained to enjoy" - I'd probably research altering the human brain to achieve that. Possibly fry a couple of appropriate synapses or find the right combination of chemicals.
Unless you start the 'training' at an early age - if you don't know any different then 100 hour work week is just how it is.
We live in a world with large incentives to teach yourself to do something like this, so either it is too hard for a single person to come up with on their own or it is possible to find people that have done it.
Some military studies might fit what you're looking for.
There's a higher tier incentive point, where a) upper management, and b) independent artists/thinkers/etc want to get more productive work out of people or themselves. The decision of whether to pay someone by the hour is partly about what you think will produce more output (where paying by the hour might be bad because it leads people to be stingy with their time, when what they need is open space to think)
But Google still puts a lot of optimization into providing lunch, exercise, and campuses that cause people to incidentally bump into each other and have conversations, as a way to squeeze extra value out of people in a given day and over the course of a given year. (And company management generally prefers legibility where possible, so if it were possible to get the benefits in a more measurable and compensate-able way, they'd have tried to do so.)
Thinkers and artists care about intellectual output. They don't care about the number of hours they work.
Google purposefully doesn't let people sleep at their office which is a policy that prevents people from working 100 hours per week.
Not necessarily. Not every productive hour is created equal. If you trade part of your creativity for more productive hours, that might not be a worthwhile trade.
This might seem like a nitpick but it matters. The idea of chasing productive hours leads to bad ideas like the Uberman sleep schedule that do sound seductive to rationalists.
It's a reasoning error to equate "Is X possible to do" with " Is X possible to do without paying any price".
The implication I meant was "if it's possible to keep working hours of the same productivity." If you lose creativity and your profession is creative, the hours are no longer productive.
I'm specifically arguing the original point: "there are huge incentives to try to be more productive." If, as Dony was originally asking, it were possible to just get into a mental state where you could work productively (including creatively) indefinitely, people would have found it.
A normal programmer is working productively in the sense most people see "working productively". The proverbial 10x programmer on the other hand is much more productive even with the same number of productive working hours.
The thesis of Deep Work is that, for a knowledge worker, there's a much higher payoff to increasing the quality of your work than the quantity.
One of the assumptions you seem to be making is that time not spent working is not productively used. Creative ideas often come when there's a bit of distance from the work, for example while showering. Working 100 hours per week means that this distance is never really achieved, and the benefits that come when your brain can process the problem in the background, while your conscious mind doesn't interfere, don't materialize.
I think I remember from a YCombinator source that they would tell founders who tried to work 100 hours that they need to get better at prioritizing, and not try to use working that much as the solution to their challenges.
It's easier to discover that you are working at the wrong thing when you have breaks in between that give you distance that allows reflection.
I think you think I'm making assumptions I'm not. I agree with all these points – this is why the world looks the way it does. I'm saying, if there weren't the sorts of limits that you're describing, we'd observe people working more, more often.
If, as Dony was originally asking, it were possible to just get into a mental state where you could work productively (including creatively) indefinitely, people would have found it.
Perhaps not indefinitely, but I do think there are people like this already? There are some people who are much more productive than others, even at similar intelligence levels. The simplest explanation is that these people have simply discovered a way to be productive for many hours in a day.
Personally, I know it's at least possible to be productive for a long time (say 10 hours with a few breaks). I also think professional gamers are typically productive for this much most days.
I think the main issue is that it's difficult to transfer insights and motivation to other people.
the "AI Safety"-washing grift
Are you referring to "AI safety" consultants who will certify the safety of their clients' AI projects?
Warning: TVTropes links
When should I outsource something I'm bad at vs leveling up at that skill?
The short answer is "it turns out making use of an assistant is a surprisingly high-skill task, which requires a fair amount of initial investment to understand which sort of things are easy to outsource and which are not, and how to effectively outsource them."
Sure thing. What would you recommend for learning management?
(I count that as an answer to my other recent question too.)
"You’ve never experienced bliss, and so you’re frantically trying to patch everything up and pin it all together and screw the universe up so that it's fixed." - Alan Watts