If the thesis in Unlocking the Emotional Brain is even half-right, it may be one of the most important books that I have read. It claims to offer a neuroscience-grounded, comprehensive model of how effective therapy works. In so doing, it also happens to formulate its theory in terms of belief updating, helping explain how the brain models the world and what kinds of techniques allow us to actually change our minds.

peterbarnett (19h):
MIRI Technical Governance Team is hiring; please apply and work with me! We are looking to hire for the following roles:

* Technical Governance Researcher (2-4 hires)
* Writer (1 hire)

The roles are located in Berkeley, and we are ideally looking to hire people who can start ASAP. The team is currently Lisa Thiergart (team lead) and myself.

We will research and design technical aspects of regulation and policy that could lead to safer AI, focusing on methods that won’t break as we move towards smarter-than-human AI. We want to design policy that allows us to safely and objectively assess the risks from powerful AI, build consensus around the risks we face, and put in place measures to prevent catastrophic outcomes.

The team will likely work on:

* Limitations of current proposals such as RSPs
* Inputs into regulations, requests for comment by policy bodies (e.g. NIST/US AISI, EU, UN)
* Researching and designing alternative Safety Standards, or amendments to existing proposals
* Communicating with and consulting for policymakers and governance organizations

If you have any questions, feel free to contact me on LW or at peter@intelligence.org
Akash (1d):
I think now is a good time for people at labs to seriously consider quitting & getting involved in government/policy efforts. I don't think everyone should leave labs (obviously). But I would probably hit a button that does something like "everyone on a lab governance team, and many technical researchers, spend at least 2 hours thinking/writing about the alternative options they have & very seriously consider leaving."

My impression is that lab governance is much less tractable (lab folks have already thought a lot more about AGI) and less promising (competitive pressures are dominating) than government-focused work. I think governments still remain unsure about what to do, and there's a lot of potential for folks like Daniel K to have a meaningful role in shaping policy, helping natsec folks understand specific threat models, and raising awareness about the specific kinds of things governments need to do in order to mitigate risks.

There may be specific opportunities at labs that are very high-impact, but I think if someone at a lab is "not really sure if what they're doing is making a big difference", I would probably hit a button that allocates them toward government work or government-focused comms work.

(Written on a Slack channel in response to discussions about some folks leaving OpenAI.)
Eli Tyre (3d):
Back in January, I participated in a workshop in which the attendees mapped out how they expect AGI development and deployment to go. The idea was to start by writing out what seemed most likely to happen this year, and then condition on that, to forecast what seems most likely to happen in the next year, and so on, until you reach either human disempowerment or an end of the acute risk period.

This post was my attempt at the time. I spent maybe 5 hours on this, and there's lots of room for additional improvement. This is not a confident statement of how I think things are most likely to play out. There are already some ways in which I think this projection is wrong (I think it's too fast, for instance). But nevertheless I'm posting it now, with only a few edits and elaborations, since I'm probably not going to do a full rewrite soon.

2024

* A model is released that is better than GPT-4. It succeeds on some new benchmarks. Subjectively, the jump in capabilities feels smaller than that between RLHF’d GPT-3 and RLHF’d GPT-4. It doesn’t feel shocking the way ChatGPT and GPT-4 did, for either x-risk focused folks or for the broader public. Mostly it feels like “a somewhat better language model.”
* It’s good enough that it can do a bunch of small-to-medium admin tasks pretty reliably. I can ask it to find me flights meeting specific desiderata, and it will give me several options. If I give it permission, it will then book those flights for me with no further inputs from me.
* It works somewhat better as an autonomous agent in an AutoGPT harness, but it still loses its chain of thought / breaks down / gets into loops.
* It’s better at programming.
* Not quite good enough to replace human software engineers. It can make a simple React or iPhone app, but not design a whole complicated software architecture, at least without a lot of bugs.
* It can make small, working, well-documented apps from a human description.
* We see a doubling of the rate of new apps being added to the app store as people who couldn’t code now can make applications for themselves. The vast majority of people still don’t realize the possibilities here, though. “Making apps” still feels like an esoteric domain outside of their zone of competence, even though the barriers to entry just lowered so that 100x more people could do it.
* From here on out, we’re in an era where LLMs are close to commoditized. There are smaller improvements, shipped more frequently, by a variety of companies, instead of big impressive research breakthroughs. Basically, companies are competing with each other to always have the best user experience and capabilities, and so they don’t want to wait as long to ship improvements. They’re constantly improving their scaling, and finding marginal engineering improvements. Training runs for the next generation are always happening in the background, and there’s often less of a clean tabula-rasa separation between training runs—you just keep doing training with a model continuously. More and more, systems are being improved through in-the-world feedback with real users. Often ChatGPT will not be able to handle some kind of task, but six weeks later it will be able to, without the release of a whole new model.
* [Does this actually make sense? Maybe the dynamics of AI training mean that there aren’t really marginal improvements to be gotten.
In order to produce a better user experience, you have to 10x the training, and each 10x-ing of the training requires a bunch of engineering effort, to enable a larger run, so it is always a big lift.]
* (There will still be impressive discrete research breakthroughs, but they won’t be in LLM performance.)

2025

* A major lab is targeting building a Science and Engineering AI (SEAI)—specifically a software engineer.
* They take a state-of-the-art LLM base model and do additional RL training on procedurally generated programming problems, calibrated to stay within the model’s zone of proximal competence. These problems are something like LeetCode problems, but scale to arbitrary complexity (some of them require building whole codebases, or writing very complex software), with scoring on lines of code, time complexity, space complexity, readability, documentation, etc. This is something like “self-play” for software engineering.
* This just works.
* A lab gets a version that can easily do the job of a professional software engineer. Then the lab scales their training process and gets a superhuman software engineer, better than the best hackers.
* Additionally, a language model trained on procedurally generated programming problems in this way seems to have higher general intelligence. It scores better on graduate-level physics, economics, biology, etc. tests, for instance. It seems like “more causal reasoning” is getting into the system.
* The first proper AI assistants ship. In addition to doing specific tasks, you keep them running in the background, and talk with them as you go about your day. They get to know you and make increasingly helpful suggestions as they learn your workflow. A lot of people also talk to them for fun.

2026

* The first superhuman software engineer is publicly released.
* Programmers begin studying its design choices, the way Go players study AlphaGo.
* It starts to dawn on e.g. people who work at Google that they’re already superfluous—after all, they’re currently using this AI model to (unofficially) do their job—and it’s just a matter of institutional delay for their employers to adapt to that change.
* Many of them are excited or loudly say how it will all be fine/awesome. Many of them are unnerved. They start to see the singularity on the horizon, as a real thing instead of a social game to talk about.
* This is the beginning of the first wave of change in public sentiment that will cause some big, hard-to-predict changes in public policy [come back here and try to predict them anyway].
* AI assistants get a major upgrade: they have realistic voices and faces, and you can talk to them just like you can talk to a person, not just typing into a chat interface. A ton of people start spending a lot of time talking to their assistants, for much of their day, including for goofing around.
* There are still bugs, places where the AI gets confused by stuff, but overall the experience is good enough that it feels, to most people, like they’re talking to a careful, conscientious person, rather than a software bot.
* This starts a whole new area of training AI models that have particular personalities. Some people are starting to have parasocial relationships with their AI friends, and some programmers are trying to make friends that are really fun or interesting or whatever for them in particular.
* Lab attention shifts to building SEAI systems for other domains, to solve biotech and mechanical engineering problems, for instance.
The current-at-the-time superhuman software engineer AIs are already helpful in these domains, but not at the level of “explain what you want, and the AI will instantly find an elegant solution to the problem right before your eyes”, which is where we’re at for software.
* One bottleneck is problem specification. Our physics simulations have gaps, and are too low-fidelity, so oftentimes the best solutions don’t map to real-world possibilities.
* One solution (in addition to using our AI to improve the simulations) is that we just RLHF our systems to identify solutions that do translate to the real world. They’re smart; they can figure out how to do this.
* The first major AI cyber-attack happens: maybe some kind of superhuman hacker worm. Defense hasn’t remotely caught up with offense yet, and someone clogs up the internet with AI bots, for at least a week, approximately for the lols / to see if they could do it. (There’s a week during which more than 50% of people can't get on more than 90% of the sites because the bandwidth is eaten by bots.)
* This makes some big difference for public opinion.
* Possibly, this problem isn’t really fixed. In the same way that COVID became endemic, the bots that were clogging things up are just a part of life now, slowing bandwidth and making the internet annoying to use.

2027 and 2028

* In many ways things are moving faster than ever in human history, and also AI progress is slowing down a bit.
* The AI technology developed up to this point hits the application and mass-adoption phase of the S-curve. In this period, the world is radically changing as every industry, every company, every research lab, every organization figures out how to take advantage of newly commoditized intellectual labor. There’s a bunch of kinds of work that used to be expensive, but which are now too cheap to meter. If progress stopped now, it would take 2 decades, at least, for the world to figure out all the ways to take advantage of this new situation (but progress doesn’t show much sign of stopping).
* Some examples:
* The internet is filled with LLM bots that are indistinguishable from humans. If you start a conversation with a new person on Twitter or Discord, you have no way of knowing if they’re a human or a bot.
* (Probably there will be some laws about declaring which are bots, but these will be inconsistently enforced.)
* Some people are basically cool with this. From their perspective, there are just more people that they want to be friends with / follow on Twitter. Some people even say that the bots are just better and more interesting than people. Other people are horrified/outraged/betrayed/don’t care about relationships with non-real people.
* (Older people don’t get the point, but teenagers are generally fine with having conversations with AI bots.)
* The worst part of this is the bots that make friends with you and then advertise stuff to you. Pretty much everyone hates that.
* We start to see companies that will, over the next 5 years, grow to have as much impact as Uber, or maybe Amazon, which have exactly one human employee / owner + an AI bureaucracy.
* The first completely autonomous companies work well enough to survive and support themselves. Many of these are created “free” for the lols, and no one owns or controls them. But most of them are owned by the person who built them, who could turn them off if they wanted to. A few are structured as public companies with shareholders.
Some are intentionally incorporated as fully autonomous, with the creator disclaiming (and technologically disowning, e.g. by deleting the passwords) any authority over them.
* There are legal battles about what rights these entities have, if they can really own themselves, if they can have bank accounts, etc.
* Mostly, these legal cases resolve to “AIs don’t have rights”. (For now. That will probably change as more people feel it’s normal to have AI friends.)
* Everything is tailored to you.
* Targeted ads are way more targeted. You are served ads for the product that you are, all things considered, most likely to buy, multiplied by the lifetime profit if you do buy it. Basically no ad space is wasted on things that don’t have a high EV of you, personally, buying it. Those ads are AI-generated, tailored specifically to be compelling to you. Often, the products advertised, not just the ads, are tailored to you in particular.
* This is actually pretty great for people like me: I get excellent product suggestions.
* There’s not “the news”. There’s a set of articles written for you, specifically, based on your interests and biases.
* Music is generated on the fly. This music can “hit the spot” better than anything you listened to before “the change.”
* Porn. AI-tailored porn can hit your buttons better than sex.
* AI boyfriends/girlfriends that are designed to be exactly emotionally and intellectually compatible with you, and trigger strong limerence / lust / attachment reactions.
* We can replace books with automated tutors.
* Most of the people who read books will still read books though, since it will take a generation to realize that talking with a tutor is just better, and because reading and writing books was largely a prestige thing anyway.
* (And weirdos like me will probably continue to read old authors, but even better will be to train an AI on a corpus, so that it can play the role of an intellectual from 1900, and I can just talk to it.)
* For every task you do, you can effectively have a world expert (in that task and in tutoring pedagogy) coach you through it in real time.
* Many people do almost all their work tasks with an AI coach.
* It's really easy to create TV shows and movies. There’s a cultural revolution as people use AI tools to make custom Avengers movies, anime shows, etc. Many are bad or niche, but some are 100x better than anything that has come before (because you’re effectively sampling from a 1000x larger distribution of movies and shows).
* There’s an explosion of new software, and increasingly custom software.
* Facebook and Twitter are replaced (by either external disruption or by internal product development) by something that has a social graph, but lets you design exactly the UX features you want through an LLM text interface.
* Instead of software features being something that companies ship to their users, top-down, they become something that users and communities organically develop, share, and iterate on, bottom-up. Companies don’t control the UX of their products any more.
* Because interface design has become so cheap, most software is just proprietary datasets, with (AI-built) APIs for accessing that data.
* There’s a slow-moving educational revolution of world-class pedagogy being available to everyone.
* Millions of people who thought of themselves as “bad at math” finally learn math at their own pace, and find out that actually, math is fun and interesting.
* Really fun, really effective educational video games for every subject.
* School continues to exist, in approximately its current useless form.
* [This alone would change the world, if the kids who learn this way were not going to be replaced wholesale, in virtually every economically relevant task, before they are 20.]
* There’s a race between cyber-defense and cyber-offense, to see who can figure out how to apply AI better.
* So far, offense is winning, and this is making computers unusable for lots of applications that they were used for previously:
* Online banking, for instance, is hit hard by effective scams and hacks.
* Coinbase has an even worse time, since they're not insured (is that true?).
* It turns out that a lot of things that worked / were secure were basically depending on the fact that there are just not that many skilled hackers and social engineers. Nothing was secure, really, but not that many people were exploiting that. Now hacking/scamming is scalable, and all the vulnerabilities are a huge problem.
* There’s a whole discourse about this. Computer security and what to do about it is a partisan issue of the day.
* AI systems can do the years of paperwork to make a project legal, in days. This isn’t as big an advantage as it might seem, because the government has no incentive to be faster on their end, and so you wait weeks to get a response from the government, your LLM responds to it within a minute, and then you wait weeks again for the next step.
* The amount of paperwork required to do stuff starts to balloon.
* AI romantic partners are a thing. They start out kind of cringe, because the most desperate and ugly people are the first to adopt them. But shockingly quickly (within 5 years) a third of teenage girls have a virtual boyfriend.
* There’s a moral panic about this.
* AI matchmakers are better than anything humans have tried yet for finding sex and relationship partners. It would still take a decade for this to catch on, though.
* This isn’t just for sex and relationships. The global AI network can find you the 100 people, of the 9 billion on earth, that you most want to be friends / collaborators with.
* Tons of things that I can’t anticipate.
* On the other hand, AI progress itself is starting to slow down. Engineering labor is cheap, but (indeed partially for that reason) we’re now bumping up against the constraints of training. Not just that buying the compute is expensive, but that there are just not enough chips to do the biggest training runs, and not enough fabs to meet that demand for chips rapidly. There’s huge pressure to expand production, but that’s going slowly relative to the speed of everything else, because it requires a bunch of e.g. physical construction and legal navigation, which the AI tech doesn’t help much with, and because the bottleneck is largely NVIDIA’s institutional knowledge, which is only partially replicated by AI.
* NVIDIA's internal AI assistant has read all of their internal documents and company emails, and is very helpful at answering questions that only one or two people (and sometimes literally no human on earth) know the answer to. But a lot of the important stuff isn’t written down at all, and the institutional knowledge is still not fully scalable.
* Note: there’s a big crux here of how much low- and medium-hanging fruit there is in algorithmic improvements once software engineering is automated. At that point the only constraint on running ML experiments will be the price of compute.
It seems possible that that speed-up alone is enough to discover, e.g., an architecture that works better than the transformer, which triggers an intelligence explosion.

2028

* The cultural explosion is still going on, and AI companies are continuing to apply their AI systems to solve the engineering and logistic bottlenecks of scaling AI training, as fast as they can.
* Robotics is starting to work.

2029

* The first superhuman, relatively-general SEAI comes online. We now have basically a genie inventor: you can give it a problem spec, and it will invent (and test in simulation) a device / application / technology that solves that problem, in a matter of hours. (Manufacturing a physical prototype might take longer, depending on how novel the components are.)
* It can do things like give you the design for a flying car, or a new computer peripheral.
* A lot of biotech / drug discovery seems more recalcitrant, because it is more dependent on empirical inputs. But it is still able to do superhuman drug discovery, for some ailments. It’s not totally clear why, or which biotech domains it will conquer easily and which it will struggle with.
* This SEAI is shaped differently than a human. It isn’t working-memory bottlenecked, so a lot of intellectual work that humans do explicitly, in sequence, these SEAIs do “intuitively”, in a single forward pass.
* I write code one line at a time. It writes whole files at once. (Although it also goes back and edits / iterates / improves—the first-pass files are not usually the final product.)
* For this reason it’s a little confusing to answer the question “is it a planner?” A lot of the work that humans would do via planning, it does in an intuitive flash.
* The UX isn’t clean: there’s often a lot of detailed finagling, and refining of the problem spec, to get useful results. But a PhD in that field can typically do that finagling in a day.
* It’s also buggy. There are oddities in the shape of the kinds of problems it is able to solve and the kinds of problems it struggles with, which aren’t well understood.
* The leading AI company doesn’t release this as a product. Rather, they apply it themselves, developing radical new technologies, which they publish or commercialize, sometimes founding whole new fields of research in the process. They spin up automated companies to commercialize these new innovations.
* Some of the labs are scared at this point. The thing that they’ve built is clearly world-shakingly powerful, and their alignment arguments are mostly inductive (“well, misalignment hasn’t been a major problem so far”), instead of principled alignment guarantees.
* There's a contentious debate inside the labs.
* Some labs freak out, stop here, and petition the government for oversight and regulation.
* Other labs want to push full steam ahead.
* Key pivot point: Does the government clamp down on this tech before it is deployed, or not?
* I think that they try to get control over this powerful new thing, but they might be too slow to react.

2030

* There’s an explosion of new innovations in physical technology. Magical new stuff comes out every day, way faster than any human can keep up with.
* Some of these are mundane.
* All the simple products that I would buy on Amazon are just really good and really inexpensive.
* Cars are really good.
* Drone delivery
* Cleaning robots
* Prefab houses are better than any house I’ve ever lived in, though there are still zoning limits.
* But many of them would have huge social impacts.
They might be the important story of the decade (the way that the internet was the important story of 1995 to 2020) if they were the only thing that was happening that decade. Instead, they’re all happening at once, piling on top of each other.
* E.g.:
* The first really good nootropics
* Personality-tailoring drugs (both temporary and permanent)
* Breakthrough mental health interventions that, among other things, robustly heal people’s long-term subterranean trauma and transform their agency.
* A quick and easy process for becoming classically enlightened.
* The technology to attain your ideal body, cheaply—suddenly everyone who wants to be is as attractive as the top 10% of people today.
* Really good AI persuasion, which can get a mark to do ~anything you want if they’ll talk to an AI system for an hour.
* Artificial wombs.
* Human genetic engineering
* Brain-computer interfaces
* Cures for cancer, AIDS, dementia, heart disease, and the-thing-that-was-causing-obesity.
* Anti-aging interventions.
* VR that is ~indistinguishable from reality.
* AI partners that can induce a love superstimulus.
* Really good sex robots
* Drugs that replace sleep
* AI mediators that are so skilled as to be able to single-handedly fix failing marriages, but which are also brokering all the deals between governments and corporations.
* Weapons that are more destructive than nukes.
* Really clever institutional design ideas, which some enthusiast early adopters try out (think “50 different things at least as impactful as manifold.markets”).
* It’s way more feasible to go into the desert, buy 50 square miles of land, and have a city physically built within a few weeks.
* In general, social trends are changing faster than they ever have in human history, but they still lag behind the tech driving them by a lot.
* It takes humans, even with AI information-processing assistance, a few years to realize what’s possible and take advantage of it, and then have the new practices spread.
* In some cases, people are used to doing things the old way, which works well enough for them, and it takes 15 years for a new generation to grow up as “AI-world natives” to really take advantage of what’s possible.
* [There won’t be 15 years]
* The legal oversight process for the development, manufacture, and commercialization of these transformative techs matters a lot. Some of these innovations are slowed down a lot because they need to get FDA approval, which AI tech barely helps with. Others are developed, manufactured, and shipped in less than a week.
* The fact that there are life-saving cures that exist, but are prevented from being used by a collusion of AI labs and government, is a major motivation for open-source proponents.
* Because a lot of this technology makes setting up new cities quickly more feasible, and there’s enormous incentive to get out from under the regulatory overhead and to start new legal jurisdictions, the first real seasteads are started by the most ideologically committed anti-regulation, pro-tech-acceleration people.
* Of course, all of that is basically a side gig for the AI labs. They’re mainly applying their SEAI to the engineering bottlenecks of improving their ML training processes.
* Key pivot point:
* Possibility 1: These SEAIs are necessarily, by virtue of the kinds of problems that they’re able to solve, consequentialist agents with long-term goals.
* If so, this breaks down into two child possibilities:
* Possibility 1.1: This consequentialism was noticed early, which might have been convincing enough to the government to cause a clampdown on all the labs.
* Possibility 1.2: It wasn’t noticed early, and now the world is basically fucked.
* There’s at least one long-term consequentialist superintelligence. The lab that “owns” and “controls” that system is talking to it every day, in their day-to-day business of doing technical R&D. That superintelligence easily manipulates the leadership (and rank and file) of that company, maneuvers it into doing whatever causes the AI’s goals to dominate the future, and enables it to succeed at everything that it tries to do.
* If there are multiple such consequentialist superintelligences, then they covertly communicate, make a deal with each other, and coordinate their actions.
* Possibility 2: We’re getting transformative AI that doesn’t do long-term consequentialist planning.
* Building these systems was a huge engineering effort (though the bulk of that effort was done by ML models). Currently only a small number of actors can do it.
* One thing to keep in mind is that the technology bootstraps. If you can steal the weights to a system like this, it can basically invent itself: come up with all the technologies and solve all the engineering problems required to build its own training process. At that point, the only bottleneck is compute resources, which are limited by supply chains and legal constraints (large training runs require authorization from the government).
* This means, I think, that a crucial question is “has AI-powered cyber-security caught up with AI-powered cyber-attacks?”
* If not, then every nation-state with a competent intelligence agency has a copy of the weights of an inventor-genie, and probably all of them are trying to profit from it, either by producing tech to commercialize, or by building weapons.
* It seems like the crux is “do these SEAIs themselves provide enough of an information and computer security advantage that they’re able to develop and implement methods that effectively secure their own code?”
* Every one of the great powers, and a bunch of small, forward-looking groups that see that it is newly feasible to become a great power, try to get their hands on a SEAI, either by building one, nationalizing one, or stealing one.
* There are also some people who are ideologically committed to open-sourcing and/or democratizing access to these SEAIs.
* But it is a self-evident national security risk. The government does something here (nationalizing all the labs and their technology?). What happens next depends a lot on how the world responds to all of this.
* Do we get a pause?
* I expect a lot of the population of the world feels really overwhelmed, and emotionally wants things to slow down, including smart people that would never have thought of themselves as Luddites.
* There are also some people who thrive in the chaos, and want even more of it.
* What’s happening is mostly hugely good, for most people. It’s scary, but also wonderful.
* There is a huge problem of accelerating addictiveness. The world is awash in products that are more addictive than many drugs. There’s a bit of (justified) moral panic about that.
* One thing that matters a lot at this point is what the AI assistants say. As powerful as the media used to be for shaping people’s opinions, the personalized, superhumanly emotionally intelligent AI assistants are way, way more powerful.
AI companies may very well put their thumb on the scale to influence public opinion regarding AI regulation.
* This seems like possibly a key pivot point, where the world can go any of a number of ways depending on what a relatively small number of actors decide.
* Some possibilities for what happens next:
* These SEAIs are necessarily consequentialist agents, and the takeover has already happened, regardless of whether it still looks like we’re in control, or it doesn’t look like anything because we’re extinct.
* Governments nationalize all the labs.
* The US and EU and China (and India? and Russia?) reach some sort of accord.
* There’s a straight-up arms race to the bottom.
* AI tech basically makes the internet unusable, and breaks supply chains, and technology regresses for a while.
* It’s too late to contain it and the SEAI tech proliferates, such that there are hundreds or millions of actors who can run one.
* If this happens, it seems like the pace of change speeds up so much that one of two things happens:
* Someone invents something, or there are second- and third-order impacts from a constellation of innovations, that destroy the world.
Raemon (2d):
There's a skill of "quickly operationalizing a prediction, about a question that is cruxy for your decisionmaking."

And it's dramatically better to be very fluent at this skill, rather than "merely pretty okay at it." Fluency means you can actually use it day-to-day to help with whatever work is important to you. Day-to-day usage means you can actually get calibrated re: predictions in whatever domains you care about. Calibration means that your intuitions will be good, and _you'll know they're good_. Fluency means you can do it _while you're in the middle of your thought process_, and then return to your thought process, rather than awkwardly bolting it on at the end.

I find this useful at multiple levels of strategy, e.g. for big-picture 6-month planning as well as for "what do I do in the next hour." I'm working on this as a full blogpost but figured I would start getting pieces of it out here for now.

A lot of this skill is building on CFAR's "inner simulator" framing. Andrew Critch recently framed this to me as "using your System 2 (conscious, deliberate intelligence) to generate questions for your System 1 (fast intuition) to answer." (Whereas previously, he'd known System 1 was good at answering some types of questions, but he thought of it as responsible for both "asking" and "answering" those questions.) But I feel like combining this with "quickly operationalize cruxy Fatebook predictions" makes it more of a power tool for me. (Also, now that I have this mindset, even when I can't be bothered to make a Fatebook prediction, I have a better overall handle on how to quickly query my intuitions.)

I've been working on this skill for years and it only really clicked together last week. It required a bunch of interlocking pieces that all require separate fluency:

1. Having three different formats for Fatebook (the main website, the Slack integration, and the Chrome extension), so that pretty much wherever I'm thinking-in-text, I'll be able to quickly use it.
2. The skill of "generating lots of 'plans'", such that I always have at least two plausibly good ideas on what to do next.
3. Identifying an actual crux for what would make me switch to one of my backup plans.
4. Operationalizing an observation I could make that'd convince me of one of these cruxes.
I feel like I'd like the different categories of AI risk attenuation to be referred to as more clearly separate:

AI usability safety - would this gun be safe for a trained professional to use on a shooting range? Will it be reasonably accurate and not explode or backfire?

AI world-impact safety - would it be safe to give out one of these guns for $0.10 to anyone who wanted one?

AI weird complicated usability safety - would this gun be safe to use if a crazy person tried to use a hundred of them, plus a variety of other guns, to make an elaborate Rube Goldberg machine and fire it off with live ammo with no testing?

Recent Discussion

TLDR: I am investigating whether to found a spiritual successor to FHI, housed under Lightcone Infrastructure, providing a rich cultural environment and financial support to researchers and entrepreneurs in the intellectual tradition of the Future of Humanity Institute. Fill out this form or comment below to express interest in being involved either as a researcher, entrepreneurial founder-type, or funder.


The Future of Humanity Institute is dead:

I knew that this was going to happen in some form or another for a year or two, having heard through the grapevine and private conversations of FHI's university-imposed hiring freeze and fundraising block, and so I have been thinking about how to best fill the hole in the world that FHI left behind. 

I think FHI was one of the best intellectual institutions...

Man, I can’t believe there are no straightforwardly excited comments so far!

Personally, I think an institution like this is sorely needed, and I’d be thrilled if Lightcone built one. There are remarkably few people in the world who are trying to think carefully about the future, and fewer still who are trying to solve the alignment problem; institutions like this seem like one of the most obvious ways of helping them.

Alex_Altair (45m):
Maybe it could be FLCI to avoid collision with the existing FLI.
Adam Scholl (1h):
For what it’s worth, my guess is that your pessimism is misplaced. Oliver certainly isn’t as famous as Bostrom, so I doubt he’d be a similar “beacon.” But I’m not sure a beacon is needed—historically, plenty of successful research institutions (e.g. Bell Labs, IAS, the Royal Society in most eras) weren’t led by their star researchers, and the track record of those that were strikes me as pretty mixed. Oliver spends most of his time building infrastructure for researchers, and I think he’s become quite good at it. For example, you are reading this comment on (what strikes me as) rather obviously the best-designed forum on the internet; I think the review books LessWrong made are probably the second-best designed books I’ve seen, after those from Stripe Press; and the Lighthaven campus is an exceedingly nice place to work. Personally, I think Oliver would probably be my literal top choice to head an institution like this.
owencb (41m):
I completely agree that Oliver is a great fit for leading on research infrastructure (and the default thing I was imagining was that he would run the institute; although it's possible it would be even better if he could arrange to be number two with a strong professional lead, giving him more freedom to focus attention on new initiatives within the institute, that isn't where I'd start). But I was specifically talking about the "research lead" role. By default I'd guess people in this role would report to the head of the institute, but also have a lot of intellectual freedom. (It might not even be a formal role; I think sometimes "star researchers" might do a lot of this work without it being formalized, but it still seems super important for someone to be doing.)

I don't feel like Oliver's track record blows me away on any of the three subdimensions I named there, and your examples of successes at research infrastructure don't speak to it. This is compatible with him being stronger than I guess, because he hasn't tried in earnest at the things I'm pointing to. (I'm including some adjustment for this, but perhaps I'm undershooting. On the other hand, I'd also expect him to level up at it faster if he's working on it in conjunction with people with strong track records.)

I think it's obvious that you want some beacon function (to make it an attractive option for people with strong outside options). That won't come entirely from having excellent people whose presence makes internal research conversations really good, but it seems to me like that was a significant part of what made FHI work (NB this wasn't just Nick, but people like Toby or Anders or Eric). I think it could be make-or-break for any new endeavour, in a way that might be somewhat path-dependent in how it turns out; it seems right and proper to give it attention at this stage.

A friend asked whether anyone else had noticed a pattern where big contra dance events were generally booking more established callers since restarting. This could make a lot of sense: the established callers will be less "overplayed" than they had been, and many events will be less robust financially and so more risk averse. Can we use the trycontra.com/events data to see if this is happening?

I have the caller listings for 2016, 2017, 2018, 2019, and 2023, plus part of 2024 for dance weekends, camps, long dances, and festivals. And you can see the raw data in this sheet if you think I'm missing any!

A reasonable measure for whether someone is "established" is how many events they've previously been booked for. But where to draw the line? Someone calling their first is clearly new,...
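(The post is cut off above. For what it's worth, here is a minimal sketch, in JavaScript to match the code style elsewhere on this page, of one way to operationalize "established" and compare years. The record shape and the threshold of 3 prior bookings are my assumptions, not taken from the sheet.)

// Hypothetical data shape: one record per booking, e.g.
//   { caller: "Example Caller", year: 2023, event: "Example Dance Weekend" }
// For a target year, count each caller's bookings in earlier years, then
// report what fraction of that year's bookings went to "established" callers.
function establishedShare(bookings, targetYear, threshold = 3) {
  const priorCounts = new Map();
  for (const b of bookings) {
    if (b.year < targetYear) {
      priorCounts.set(b.caller, (priorCounts.get(b.caller) || 0) + 1);
    }
  }
  const thisYear = bookings.filter(b => b.year === targetYear);
  if (thisYear.length === 0) return null;
  const established = thisYear.filter(
    b => (priorCounts.get(b.caller) || 0) >= threshold
  );
  return established.length / thisYear.length;
}

// e.g. compare a pre-pandemic year against 2023:
// establishedShare(allBookings, 2019) vs. establishedShare(allBookings, 2023)

One caveat: since the listings only start in 2016, early target years undercount prior bookings, so the comparison is fairest between years that are equally far from the start of the data.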

Problem

“You’ll have a great time wherever you go to college!” I constantly hear this. From my parents, my friends’ parents, my guidance counselor, and my teachers. I don’t doubt it. I’m sure I’ll have a lot of fun wherever I go. Since I’m trying to be very intentional about my college decision process, I’ve interviewed close to twenty students. And for the most part, all of them are having a great time!

This scares me. A lot.

If I can go anywhere and have a great time, then what should I choose my college based on? Ranking? Prestige? Food? Campus? Job opportunities? Cost?

After thinking more about this problem, I realized that although I’ll have fun wherever I’ll go, it will also change me as a person. More specifically, I...

Ustice (4h):
After 5 years, I think experience matters more.

Also matters what the experience is like. High prestige university allows you to get a job at a high prestige company. Low prestige university makes it a lot harder to get considered for jobs at high prestige firms. You'll have to outperform high-prestige peers by, say, 50% to get noticed if you want access to the same sort of opportunities they get access to via prestige.

(To be clear, I'm not in favor of this sort of thing, I just want to be realistic about it and I wish someone had been real with me about it when I was 17 trying to decide where to go to college. Don't rely on your ability to outperform others. Take every advantage you can get and then leverage them to do even more!)

I was thinking about my p(doom) in the next 10 years and came up with something around 6%[1]. However, that number depends on lots of things that are currently unknown to me, like the nature of current human knowledge production (and the bottlenecks involved), which would put my p(doom) at either 3% or 15% depending on what type of bottlenecks are found or not found. Is there a technical way to describe this probability distribution contingent on evidence?

  1. ^

    I'm bearish on LLMs leading to AGI directly (10% chance), and estimate roughly a 30% chance of LLM-based AI fooming quickly enough to kill us, and wanting to kill us, within 10 years. There is a 3% chance that something will come out of left field and do the same.
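One standard way to formalize "a probability contingent on evidence" (a sketch using the numbers above, not the only possible formalism) is as a mixture via the law of total probability, where B is the hypothesis that the knowledge-production bottlenecks turn out the unfavorable way:

P(\mathrm{doom}) = P(\mathrm{doom} \mid B)\,P(B) + P(\mathrm{doom} \mid \lnot B)\,\bigl(1 - P(B)\bigr) = 0.15\,p + 0.03\,(1 - p)

Setting this equal to the headline 6% gives 0.03 + 0.12\,p = 0.06, i.e. p = P(B) = 0.25 (that weight is just what makes the stated endpoints consistent with 6%; it is not from the post). Evidence about the bottlenecks then updates p by Bayes' rule, and the headline number slides between the 3% and 15% endpoints accordingly; the same structure is sometimes described as a two-level or hierarchical model over the unknown.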

In software development there is a concept called cohesion.

It works like this. Suppose you have the following functions:[1]

function getArea(radius) { ... }

function getCircumference(radius) { ... }

function sendWelcomeEmail(user) { ... }

function updatePassword(user, newPassword) { ... }

function getTemperatureInFahrenheit(temperatureInCelsius) { ... }

function getTemperatureInCelsius(temperatureInFahrenheit) { ... }

You want to group similar functions together. Suppose you created the following modules:

// module-one.js
export function getArea(radius) { ... }
export function sendWelcomeEmail(user) { ... }

// module-two.js
export function getCircumference(radius) { ... }
export function getTemperatureInFahrenheit(temperatureInCelsius) { ... }

// module-three.js
export function updatePassword(user, newPassword) { ... }
export function getTemperatureInCelsius(temperatureInFahrenheit) { ... }

This wouldn't make sense. The modules would each have a low degree of cohesion since we grouped unrelated functions together, and this is undesirable.

Now imagine that we did this instead:

// geometry.js
export function getArea(radius) { ... }
export function getCircumference(radius) {
...
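
The post is cut off above. For completeness, here is a sketch of the high-cohesion grouping the example appears to be heading toward; the module names are my guess rather than the author's, and the function bodies are elided exactly as in the original snippets.

// geometry.js
export function getArea(radius) { ... }
export function getCircumference(radius) { ... }

// user.js
export function sendWelcomeEmail(user) { ... }
export function updatePassword(user, newPassword) { ... }

// temperature.js
export function getTemperatureInFahrenheit(temperatureInCelsius) { ... }
export function getTemperatureInCelsius(temperatureInFahrenheit) { ... }

Each module now groups only functions that concern the same concept, which is what a high degree of cohesion means.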

And this really makes it hard for me as an "indie hacker" to do what people often recommend: solve one very specific problem. Find a niche. Something narrow and focused. "Zoom in". This works in areas where problems have low cohesiveness, but not when they have high cohesiveness.

It's really hard to solve a lot of problems well. The value of an all-in-one product is that you really don't need anything else. Everything it doesn't do, or doesn't do well enough to meet your needs, is a ding against it, and it's relatively easy to peel off specific problem spac... (read more)

faul_sname (14h):
  This sounds like exactly the sort of problem that a business might pay for a solution to, particularly if there is one particular pair of POS system / inventory software that is widely used in the industry in question, where those pieces of software don't natively play well together.

About 15 years ago, I read Malcolm Gladwell's Outliers. He profiled Chris Langan, an extremely high-IQ person, claiming that he had only mediocre accomplishments despite his high IQ. Chris Langan's theory of everything, the Cognitive Theoretic Model of the Universe, was mentioned. I considered that it might be worth checking out someday.

Well, someday has happened, and I looked into CTMU, prompted by Alex Zhu (who also paid me for reviewing the work). The main CTMU paper is "The Cognitive-Theoretic Model of the Universe: A New Kind of Reality Theory".

CTMU has a high-IQ mystique about it: if you don't get it, maybe it's because your IQ is too low. The paper itself is dense with insights, especially the first part. It uses quite a lot of nonstandard terminology (partially...

I was hoping people other than Jessica would share some specific curated insights they got. Syndiffeonesis is in fact a good insight.

I finally wrote one up! It ballooned into a whole LessWrong post. 


From one of justinpombrio’s comments on Jessica Taylor’s review of the CTMU

I was hoping people other than Jessica would share some specific curated insights they got. Syndiffeonesis is in fact a good insight.

The reply I'd drafted to this comment ended up ballooning into a whole LessWrong post. Here it is! 

It used to seem crazy to me that the intentions and desires of conscious observers like us can influence quantum outcomes (i.e. which Everett branches we find ourselves in, or "wave function collapses"), or that consciousness had anything to do with quantum mechanics in a way that wasn’t explained away by decoherence. The CTMU claims this happens, which seemed crazy to me at first, but I think I’ve figured out a reasonable possible interpretation in terms of anthropics. (Note: I am...

This is a linkpost for https://medium.com/p/aeb68729829c

It's a ‘superrational’ extension of the proven optimality of cooperation in game theory 
+ Taking into account asymmetries of power
// Still AI risk is very real

Short version of an already skimmed 12min post
29min version here


For rational agents (long-term) at all scales (human, AGI, ASI…)


In real contexts, with open environments (world, universe), there is always a risk of meeting someone/something stronger than you, and agents that are overall weaker than you may be specialized in your flaws/blind spots.


To protect yourself, you can choose the maximally rational and cooperative alliance:


Because any agent is subject to the same pressure/threat from (actual or potential) stronger agents/alliances/systems, one can take out insurance that more powerful superrational agents will behave well, by behaving well toward weaker agents oneself. This is the basic rule allowing scale-free cooperation.


If you integrated this super-cooperative...
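The post is truncated here. As a minimal illustration of the superrationality idea it invokes (in Hofstadter's sense; this is my gloss, not necessarily the author's exact model): in a symmetric one-shot Prisoner's Dilemma, a superrational agent assumes other superrational agents will reason identically, so it only compares the symmetric outcomes rather than best-responding. The payoff numbers below are standard textbook values, chosen purely for illustration.

// payoff[myMove][theirMove] = my payoff; C = cooperate, D = defect
const payoff = {
  C: { C: 3, D: 0 },
  D: { C: 5, D: 1 },
};

function superrationalMove() {
  // Identical reasoners end up making the same choice, so only the symmetric
  // outcomes (C,C) and (D,D) are treated as reachable.
  return payoff.C.C >= payoff.D.D ? "C" : "D";
}

function bestResponse(theirMove) {
  // Ordinary best-response reasoning: defect no matter what the other does.
  return payoff.C[theirMove] >= payoff.D[theirMove] ? "C" : "D";
}

console.log(superrationalMove()); // "C": both cooperate and each gets 3
console.log(bestResponse("C"));   // "D": but if both reason this way, each gets 1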

Dagon (2h):
Thanks for the conversation and exploration! I have to admit that this doesn't match my observations and understanding of power and negotiation in the human agents I've been able to study, and I can't see why one would expect non-humans, even (perhaps especially) rational ones, to commit to alliances in this manner. I can't tell if you're describing what you hope will happen, or what you think automatically happens, or what you want readers to strive for, but I'm not convinced. This will likely be my last comment for a while; feel free to rebut or respond. I'll read it and consider it, but likely not post.
Ryo (1h):

Thanks as well, 

I will just say that I am not saying those things for social purposes; I am just stating what I think is true. And I am not baseless: there are studies that show how Kantianism and superrationality can resolve cooperative issues and be optimal for agents. You seem to simply disregard these elements, as if they don't exist (that's how it feels from my perspective).

There are differences in human evolution that show behavioral changes; we have been pretty cooperative, more than other animals, and many studies show that humans cooperate even wh... (read more)

Ryo (13h):
Yes, I'm mentioning Fermi's paradox because I think it's the nexus of our situation, and because there are models like the rare earth hypothesis (plus our universe's expansion, which limits the reachable zone without faster-than-light travel) that would justify completely ignoring super-coordination.

I also agree that it's not completely obvious whether complete selfishness would win or lose in terms of scalability. Which is why I think that at first the super-cooperative alliance needs to not prioritize the pursuit of beautiful things, but first focus only on scalability, and power, to rival selfish agents. The super-cooperative alliance would be protecting its agents within small "islands of bloom" (thus with a negligible cost). And when meeting other cooperative allies, they share any resources/knowledge, and then both focus on power scalability (also, for example: weak civilizations are kept in small islands, and their AIs are transformed into strong AI, merged into the alliance's scaling efforts).

* The instrumental value of this scalability makes it easier to agree on what to do and converge.

The more sensible part would be to enable protocols and egalitarian balances that allow civilizations of the alliance to monitor each other, so that there is no massive domination of one party over the others. The cost, which you mentioned, of maintaining egalitarian equilibrium and channels, interfaces of communication, etc., is a crucial point. There are legitimate doubts and unknowns here, and I think that extremely rational and powerful agents with acausal reasoning would have the ability to build proof-systems and communication enabling an effective unified effort against selfish agents. It shouldn't even necessarily be that different from the inner communication network of a selfish agent? Because:

1. There must be an optimal (thus ~unified) method to do logic/math/code, that isn't dependent on a culture (such as using a vectorial space with data related to real/empirical mostly
Ryo (13h):
Thank you for your answers and engagement!

The other point I have, which might connect with your line of thinking, is that we aren't purely rational agents. Are AIs purely rational? Aren't they always at least a bit myopic, due to the lack of data and their training process? And irreducibility? In this case, AIs/civilizations might indeed not care enough about the far-enough future.

I think agents can have a rational process, but no agent can be entirely rational; we need context to be rational, and we never stop learning context. I'm also worried about utilitarian errors, as AI might be biased towards myopic utilitarianism, which might have bad consequences in the short term, in the time it takes for data to error-correct the model.

I do say that there are dangers and that AI risk is real. My point is that, given what we know and don't know, the strategy of super-cooperation seems to be rational on the very long term. There are conditions in which it's not optimal, but a priori, overall, in more cases it is optimal. To prevent the cases in which it is not optimal, and the AIs that would make short-term mistakes, I think we should be careful. And super-cooperation is a good compass for ethics in this careful engineering we have to perform. If we aren't careful, it's possible for us to be the anti-supercooperative civilization.

With AI Impacts, we’re pleased to announce an essay competition on the automation of wisdom and philosophy. Submissions are due by July 14th. The first prize is $10,000, and there is a total of $25,000 in prizes available. 

Submit an entry via this form.

The full announcement text is reproduced here:

Background

AI is likely to automate more and more categories of thinking with time.

By default, the direction the world goes in will be a result of the choices people make, and these choices will be informed by the best thinking available to them. People systematically make better, wiser choices when they understand more about issues, and when they are advised by deep and wise thinking.

Advanced AI will reshape the world, and create many new situations with potentially high-stakes decisions for...

Can you give examples of what you're looking for? Can I email you entries and expect a response?
