Chris_Leong's Shortform
Chris_Leong · 7mo

Why the focus on wise AI advisors? (Metapost[1] with FAQ 📚🙋🏻‍♂️, Informal[2] Edition 🕺, Working Draft 🛠️)

About this Post  - 🔗:  🕺wiseaiadvisors.com ,  🕴️Formal Edition (coming soon) 

This post is still in draft, so any feedback would be greatly appreciated 🙏. It'll be posted as a full, proper Less Wrong/EA Forum/Alignment forum post, as opposed to just a short-form, when it's ready 🌱🌿🌳.

 ✉️ PM via LW profile   

📲 QR Code


This post is a collaboration between Chris Leong (primary author) and Christopher Clay (editor), written in the voice of Chris Leong.

We have worked very hard on this[3] and we hope you find it to be of some use. Despite this work, it will most likely contain enough flaws that we will be at least somewhat embarrassed by parts of it, and yet at some point a post has to go out into the world to fend for itself.

There's no guarantee that this post will have the kind of impact that we deeply wish for, or even achieve anything at all... And yet - regardless[4] - it is an honour to serve[5], especially at this dark hour when AGI looms on the horizon and doubt has begun to creep into the hearts of men[6] 🫡. The sheer scale of the problem feels overwhelming at times, and yet... we do what we can[7].

This post[8] doubles[9] as both a serious attempt to produce an original "alignment proposal"[10] and a passion project[11]. The AI safety community has spent nearly two and a half decades trying to convince people to take AI risks seriously. Whilst there have been significant successes, these have also fallen short[12]. Undoubtedly this is incredibly naive[13], but perhaps passion and authenticity[14] can succeed where arguments and logic have failed ✨🌠.

Acknowledgements:

The initial draft of this post was produced during the 10th edition of AI Safety Camp. Thanks to my team: Matthew Hampton, Richard Kroon, and Chris Cooper, for their feedback. Unlike most other AI Safety Camp projects, which focus on a single, unitary project, we were more of a research collective with each person pursuing their own individual projects.

I am continuing development during the 2025 Summer Edition of the Cambridge ERA: AI Fellowship, with an eye to eventually producing a more formal output. Thanks to my research manager, Peter Gebauer, and my mentor, Prof. David Manley.

I also greatly appreciate feedback provided by many others[15], including: Jonathan Kummerfeld.

✒️ Selected Quotes: 🕵️ 

But the moral considerations, Doctor...

Did you and the other scientists not stop to consider the implications of what you were creating? — Roger Robb

When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb[16]— Oppenheimer
❦
There are moments in the history of science, where you have a group of scientists look at their creation and just say, you know: ‘What have we done?... Maybe it's great, maybe it's bad, but what have we done?’ — Sam Altman[17]
❦
Urgent: get collectively wiser - Yoshua Bengio, AI "Godfather"[18][19]

We stand at a crucial moment in the history of our species. Fueled by technological progress, our power has grown so great that for the first time in humanity’s long history, we have the capacity to destroy ourselves—severing our entire future and everything we could become.

Yet humanity’s wisdom has grown only falteringly, if at all, and lags dangerously behind. Humanity lacks the maturity, coordination and foresight necessary to avoid making mistakes from which we could never recover. As the gap between our power and our wisdom grows, our future is subject to an ever-increasing level of risk. This situation is unsustainable. So over the next few centuries[20], humanity will be tested: it will either act decisively to protect itself and its long-term potential, or, in all likelihood, this will be lost forever — Toby Ord, The Precipice[21]

We have created a Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology — Edward O. Wilson, The Social Conquest of Earth[22]

❦

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct — Nick Bostrom, Founder of the Future of Humanity Institute, Superintelligence[23]

❦

If we continue to accumulate only power and not wisdom, we will surely destroy ourselves — Carl Sagan, Pale Blue Dot[24]

Never has humanity had such power over itself, yet nothing ensures that it will be used wisely, particularly when we consider how it is currently being used…There is a tendency to believe that every increase in power means “an increase of ‘progress’ itself ”, an advance in “security, usefulness, welfare and vigour; …an assimilation of new values into the stream of culture”, as if reality, goodness and truth automatically flow from technological and economic power as such. — Pope Francis, Laudato si'[25]

❦

The fundamental test is how wisely we will guide this transformation – how we minimize the risks and maximize the potential for good — António Guterres, Secretary-General of the United Nations[26]

❦

Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins — Stephen Hawking, Brief Answers to the Big Questions[27]

❦

🎁 Additional quotes (from Life Itself) 

A digital painting of a toddler sitting in a sandbox at dusk, wearing a crumpled paper crown and holding a glowing scepter topped with a miniature Earth. The child looks fascinated and innocent. Surrounding them are plastic toy tanks, a rocket, a dump truck, and a small shovel, softly illuminated by the warm glow of the scepter. The background is dark, evoking a sense of quiet gravity.

In light of the potential of AI development to feed back into itself...

if increases in capabilities don't lead to an equivalent increase in wisdom...

our capabilities are likely to far exceed our ability to handle them[28][29]

❦

🔮 Preview

🧨 The Defining Challenge: Given the rapid speed of AI progress, massive strategic uncertainty and wide range of potential societal-scale vulnerabilities[30]... we face a growing gap between our capabilities and our wisdom[31]...
📌 𝙿𝚁𝙸𝙼𝙰𝚁𝚈 𝙲𝙻𝙰𝙸𝙼: In light of this, wise AI advisors aren't just an urgent research priority, but absolutely critical (⁉️)[32]

📖 𝙱𝙰𝚂𝙸𝙲 𝚃𝙴𝚁𝙼𝙸𝙽𝙾𝙻𝙾𝙶𝚈 - Wise AI advisors? Wisdom? (I recommend skipping initially) 🙏

 ⬇️ I suggest initially skipping this section and focusing on the core argument for now[33] ⬇️

Wise AI Advisors? 

Do you mean?:
• AIs trained to provide advice to humans ✅
• AIs trained to act wisely in the world ❌
• Humans trained to provide wise advice about AI ❌

Most work on Wise AI is focused on the question of how AI could learn to act wisely in the world[34]. However, I'm more interested in the first option (advisors), as it allows humans to compensate for the weaknesses in the AIs[35].

Even though "Wise AI Advisors" doesn't refer to humans, I am primarily interested in how such advisors could be deployed as part of a cybernetic system.

Training humans to be wiser would help with this project:

• Wiser humans can train wiser AI
• When we combine AI and humans into a cybernetic system, wiser humans will be better able both to elicit capabilities from the AI and to plug any gaps in the AI's wisdom.

What do you mean by wisdom? 

For the purposes of the following argument, I'd encourage you to first consider this in relation to how you conceive of wisdom rather than worrying too much about how I conceive of wisdom. Two main reasons:

• I suspect this reduces the chance of losing someone partway through the argument because they conceive of wisdom slightly differently than I do[36].
• I believe that there are many different types of wisdom that are useful for steering the world in a positive direction and many perspectives on wisdom that are worth investigating. I'm encouraging readers to first consider this argument in relation to their own understanding of wisdom in order to increase the diversity of approaches pursued.

Even though I'd encourage you to read the core argument first, if you really want to hear more about how I conceive of wisdom right now, you can scroll down to the Clarifications section to find out more about what I believe. 🔜🧘 or ⬇️⬇️⬇️😔

 

😮 Absolutely critical? That's a strong claim! - 🛡️[37]

Yes,  I've[38] intentionally chosen a high bar for what I'll be arguing for in this post. I believe that there's a strong case to be made that the gameboard looks much less promising without them[39][40].

Don't worry,  I'll be addressing a long list of objections[41] ⚔️🌎. Let's go 🪂!

☞ Three Key Advantages:

  1. ✳️🗺️ — This is complementary with almost any plan for making AGI go well.
  2. 🆓🍒 — The opportunity cost is minimal. Financially, much of this work could be pursued as a startup and I expect non-profit projects to appeal to new funders. Talentwise, I expect pursuing this direction to (counter-intuitively) increase the effective talent available for other directions by drawing more talent into the space[42].
  3. 🦋🌪️ — Even modest boosts in wisdom could be significant. Single decisions often shape the course of history.

(See the main post for more detail.)

Five additional benefits

  1. 🚨💪🦾 — This approach scales with increases in capabilities.
  2. Forget marginal improvements: wisdom tech could provide the missing piece for another strategy, by allowing us to pursue it in a wiser manner.
  3. Wisdom technology is likely favourable from a differential technology perspective. Many actors are only reckless because they're unwise.
  4. Even a small coalition using these advisors to refine strategy, improve coordination and engage in non-manipulative persuasion could significantly shift the course of civilisation over time.
  5. Suppose all else fails: training up a bunch of folks with both a deep understanding of the alignment problem and a strong understanding of wisdom seems incredibly useful.

Please note: This post is still a work in progress. Some sections have undergone more editing and refinement than others. Some bits may be inconsistent, such as if I'm in the middle of making a change. As this is just a draft, I may not end up endorsing all the claims made. Feedback is greatly appreciated 🙏.

Given the stakes, we need every advantage we can get...

we can't afford to leave anything on the table[43][44]

⭐ My 𝙲𝙾𝚁𝙴 𝙰𝚁𝙶𝚄𝙼𝙴𝙽𝚃... in a single sentence ☝️⭐

Unaided human wisdom is vastly insufficient for the task before us...

of navigating an entire series of highly uncertain and deeply contested decisions

where a single mistake could prove ruinous

with a greatly compressed timeline

❦

⭐ ... or in less than three minutes[45]⏱️⭐

...if you use the bold text to skim until the amber lantern section divider 😉

  Two useful framings:

𝚃 𝙷 𝙴 🅂🅄🅅 𝚃 𝚁 𝙸 𝙰 𝙳 – speed, uncertainty, vulnerability🃏[46][47]

Going through them in reverse order:
 

🅅 𝚄 𝙻 𝙽 𝙴 𝚁 𝙰 𝙱 𝙸 𝙻 𝙸 𝚃 𝚈 – 🌊🚣:

The development of advanced AI technologies will have a massive impact on society given the essentially infinite ways to deploy such a general technology. There are lots of ways this could go well, and lots of ways this could go extremely poorly.

Catastrophic malfunctions, permanent dictatorships, AI-designed pandemics, mass cyberattacks, information warfare, war machines lacking any mercy, the list goes on...

Worse:  Someone   Will   Always   Invent   A   New   Way   That   Disaster   Could  Strike   ...  [48]

The kicker: Threat models can interact (in many, many ways). Technological unemployment breaking politics. Loss of control + automated warfare. AI-enabled theft of specialised biological models...

In more detail...

i) At this stage I'm not claiming any particular timelines.

I believe it's likely to be quite fast, but I don't make this claim until we get to Speed.

I suspect that often when people doubt this claim, they've implicitly assumed that I was talking about the short or medium term, rather than the long term 🤔. After all, the claim that there are many ways that AI could plausibly lead to dramatic benefits or harms over the next 50 or 100 years feels like an extremely robust claim. There are many things that a true artificial general intelligence could do. It's mainly just a question of how long it takes to develop the technology.

ii) "We only have to be lucky once, you have to be lucky every time" - the IRA on the offense-defense balance. Unfortunately, if there's one thing computers are good at, it's persistence.

iii) "Mossad is much more clever and powerful than novices implicitly imagine a "superintelligence" will be; in the sense that, when novices ask themselves what a "superintelligence" will be able to do, they fall well short of the actual Mossad." - Eliezer Yudkowsky

🅄 𝙽 𝙲 𝙴 𝚁 𝚃 𝙰 𝙸 𝙽 𝚃 𝚈 – 🌅💥:

We have massive disagreement about what to expect from the development of AI, let alone about the best strategy[49]. Making the wrong call could prove catastrophic.

Strategically: Accelerate or pause? What's the offence-defence balance? Do we need global unity or to win the arms race? Who (or what) should we be aligning AIs to? Can AI "do our alignment homework" for us?


Situationally[50]: Will LLMs scale to AGI or are they merely a dead-end? Should we expect AGI in 2027 or is it more than a decade away? Will AI create jobs and cure economic malaise or will it crush, destroy and obliterate them? Will the masses embrace AI as the key to their salvation or call for a Butlerian Jihad?

The kicker: We're facing these issues at an extremely bad time – when both trust and society's epistemic infrastructure are crumbling. Even if our task were epistemically easy, we might still fail.

In more detail...

i) A lot of this uncertainty just seems inherently really hard to resolve. Predicting the future is hard.

ii) However hard this is to resolve in theory, it's worse in practice. Instead of an objective search for the truth, these discussions are distorted by many different factors, including money, social status and the need for meaning.

iii) More on the kicker: We're seeing increasing polarisation, less trust in media and experts[51] and AI stands to make this worse. This is not where we want to be starting from and who knows how long this might take to resolve?

🅂 𝙿 𝙴 𝙴 𝙳  – 😱⏳:

AI is developing incredibly rapidly... We have limited time to act and to figure out how to act[52].

No wall! See o3 crushing the ARC challenge, the automation of exponentially longer coding tasks[53], LLMs demonstrating IMO gold-medal maths ability[54], widespread and rapid benchmark saturation, the Turing Test being passed[55] and society shrugging its shoulders (‼️)


Worse: We may already be witnessing an AI arms race. Stopping this may simply not be possible – forget about AGI being an inconceivably large prize – many see winning as a matter of survival.

The kicker: It's entirely plausible that, as more of the development process is automated, AI progress actually accelerates; that we look back at the current rate of progress and laugh about how we used to consider it fast.

In more detail...

i) Even if timelines aren't short, we might still be in trouble if the take-off speed is fast. Unfortunately, humanity is not very good at preparing for abstract, speculative-seeming threats ahead of time.

ii) Even if neither timelines nor take-off speeds are fast in an absolute sense, we might still expect disaster if they are fast in a relative sense. Governance - especially global governance - tends to proceed rather slowly. Even though it can happen much faster when there's a crisis, sometimes problems need to be solved ahead of time and once you're in them it's too late. As an example, once an AI-induced pandemic is spreading, you may have already lost.

iii) Even if neither timelines nor take-off speeds are fast in an absolute or relative sense, it's just hard on a fundamental level to regulate or control agents that are capable of acting far faster than humans, especially if we develop agents that are adaptive.

Reflections - Why the SUV Triad is Fucking[56] Scary

  • The speed of development makes this problem much harder. Even if alignment were easy and governance didn't require anything special, we could still fail because it's been decided that we have to race as fast as possible.
  • Even if a threat can't lead to catastrophe, it can still distract us from those that can. It's hard to avoid catastrophe when we don't know where to focus our efforts ⚪🪙⚫.
  • Many of the threats constitute civilisational-level risk by themselves. We could successfully navigate all the other threats, but simply drop the ball once and all that could be for naught.

Big if true: It may even present a reason (draft) to expect Disaster-By-Default ‼️

In more detail... - TODO:

i) In an adversarial environment, your vulnerability is determined by the domain where you're most vulnerable. Your weakest link. Worse, this applies recursively. Your vulnerability within a domain is determined by the kind of attack you're least able to withstand.

ii) Civilisation has limited co-ordination capacity. A committee can only cover so many issues in a given time. Global co-ordination is incredibly difficult to achieve - but it's possible. However, one at a time is a lot easier than dozens. 😅⏳💣💣💣

iii) Talk about resilience being hard and challenges with general agents.

⚘
𝚃 𝙷 𝙴   𝚆 𝙸 𝚂 𝙳 𝙾 𝙼 — [𝙲 𝙰 𝙿 𝙰 𝙱 𝙸 𝙻 𝙸 𝚃 𝙸 𝙴 𝚂]   𝙶 𝙰 𝙿[57] – 📉📈

<TODO: Feels like there's some more implicit subclaims here I need to address>
 

Subclaim 1: As humanity gains access to more power, we need more wisdom in order to navigate it


This claim is almost a cliche, but is it true?

 

I think that it is, at least in the case of AI:

  • The more powerful a technology is, the greater the cost of accidentally "dropping a ball"
  • As a general technology, the more powerful AI becomes, the more balls there are to drop
  • As AI becomes more powerful, we venture further and further away from past experience (we could even call this the societal training distribution) 

In more detail...

Objection: AI can help us reduce our chance of dropping a ball

Response: I agree, that's why I'm proposing this research direction, but I don't think this happens by default.

One possibility is that we create strongly aligned general defender agents (seems unlikely) that we just give mostly free rein (very risky).


Or we need AI that helps us make wise decisions ahead of time (where to allocate resources, where to concentrate oversight), which is much harder to train for than bio/cyber/manipulation capabilities.

 

Question: What about society's natural process of accumulating wisdom?


Let's take a step back. If we construe wisdom broadly, then we end up with two kinds of wisdom[3]:

  • Hindsight: Wisdom gained through hard, bitter experience
  • Foresight: Being able to figure out what needs to be done even without direct experience


The slower the rate of development, the more we can lean on hindsight; the faster it is, the more we need to lean on foresight.
 

Our timeline seems to be fast, so unfortunately it seems that we will need to rely heavily on foresight, instead of society's natural process of accumulating metis (or practical wisdom) through trial and error.

Subclaim 2: By default, increases in capabilities don't result in increases in wisdom


⚘
How do we address the SUV Triad and the Wisdom Gap? 

I propose that the most straightforward way to address this is to train wise AI advisors. But what about the alternatives?:

  1. Perhaps we need to buy time, that is, to pause? As previously discussed, there are incredibly strong forces pushing against this. Even if a pause is necessary, it seems unlikely that we could persuade enough actors to see this, agree on a robust mechanism and unpause at the appropriate time without some kind of major increase in wisdom.
  2. Given the number of different threats and how fast they're coming at us, whack-a-mole simply isn't going to cut it[58]. We need general solutions.
  3. Whilst we can and should be developing human wisdom as fast as possible, this process tends to be slow. I don't believe that this will cut it by itself. In any case, I expect increases in human wisdom and increases in AI wisdom to be strongly complementary.

 In more detail...

TODO: This diagram still needs to be updated. I don't want the alternatives I address to look like just a random list. Providing a diagram that shows how we could try to operate on different points helps make this seem more legible.

 

TODO: Address more exotic responses.

⚘

In light of this:

📌 I believe that AI is much more likely to go well for humanity if we develop wise AI advisors

    I am skeptical of the main alternatives

    I am serious about making this happen...

    If you are serious about this too, please:  ✉️ message me [59].

☞ Or for a More Formal Analysis (Using Three Different Frameworks) — TODO

Importance-Tractability-Neglectedness
 

This is a standard EA framework for considering cause areas. Wise AI is broad enough that I consider it reasonable to analyse it as a cause area. 

Definition | Score | Reason
Importance
Tractability
Neglectedness
OVERALL

Safety-Freedom-Value

This framework is designed to appeal more to startup/open-source folks. These folks are more likely to put significant weight on the social value a technology provides and on freedom beyond mere utility.

Definition | Score | Reason
Safety
Positive Use Cases
* Freedom (as intrinsic good)
OVERALL

Searching for Solutions
 

This framework is designed for folk who think current techniques are unlikely to work.

Definition | Score | Reason
Survival value | How much does the technique directly contribute to our chance of survival?
Information Value
Unlocked Optionality
Acceleration
* Malicious use cases
OVERALL

 

☞ I'm sold! How can I get involved? 🥳🎁

As I said, if you're serious, please  ✉️ PM me . If you think you might be serious, but need to talk it through, please reach out as well.

It'd be useful for me to know your background and how you think you could contribute. Maybe tell me a few facts about yourself, what interests you, and drop a link to your LinkedIn profile?

If you scroll down, you'll see that I've answered some more questions about getting involved, but I'll include some useful links here as well:

•  List of potentially useful projects : for those who want to know what could be done concretely. Scroll down further for a list of project lists ⬇️.
•  Resources for founding an AI safety startup or non-profit : since I believe there should be multiple organisations pursuing this agenda
•  Resources for getting started in AI Safety more broadly : since some of these resources might be useful here as well

 

 

 

 

🤔 Clarifications

Reminder: If you're looking for a definition of wise AI advisors, it's at the 🔝 of the page .

I've read through your argument and substituted my own understanding of wisdom. Now that I've done this, perhaps you could clarify how you think about wisdom? ✅🙏

Sure. I've written at the top of this post ⬆️ why I often try to dodge this question initially. But given that you've made it this far, I'll share some thoughts. Just don't over-update on my views 😂.

Given how rapidly AI is developing, I suspect that we're unlikely to resolve the millennia-long philosophical debate about the true nature of wisdom before AGI is built. Therefore, I suggest that we instead sidestep this question by identifying specific capabilities related to wisdom that might be useful for steering the world in a positive direction.

I'd suggest that examples might include: the ability to make wise strategic decisions, non-manipulative persuasion and the ability to find win-wins. I'll try to write up a longer list in the future.

Some of these capabilities will be more or less useful for steering the world in positive directions. On the other hand, some will have negative externalities, such as accelerating timelines or enabling malicious actors.

My goal is to figure out which capabilities to prioritise by balancing the benefits against the costs and then incorporating feasibility.

It may be the case that some people decide that wisdom requires different capabilities than those I end up deciding are important. As long as they pick a capability that isn't net-negative, I don't see that as bad. In fact, I see different people pursuing different understandings of wisdom as adding robustness to different world models.

 If wisdom ultimately breaks down into specific capabilities, why not simply talk about these capabilities and avoid using a vague concept like wisdom? 🙅‍♂️🌐

So the question is: "Why do I want to break wisdom down into separate capabilities instead of choosing a unitary definition of wisdom and attempting to train that into an AI system?"

Firstly, I think the chance of us being able to steer the world towards a positive direction is much higher if we're able to combine multiple capabilities together, so it makes sense to have a handle for the broader project, in addition to handles for individual sub-projects. I believe that techniques designed for one capability will often carry over to other capabilities, as will the challenges, and having a larger handle makes it easier to make these connections. I also think there's a chance that these capabilities amplify each other (as per the final few paragraphs of  Imagining and Building Wise Machines [60] by Johnson, Bengio, Grossmann, et al).

Secondly, I believe we should be aiming to increase both human wisdom and AI wisdom simultaneously. In particular, I believe it's important to increase the wisdom of folks creating AI systems and that this will then prove useful for a wide variety of specific capabilities that we might wish to train.

Finally, I'm interested in investigating this frame as part of a more ambitious plan to solve alignment on a principled level. Instead of limiting the win condition to building an AI that always (competently) acts in line with human values, the wise AI Advisors frame broadens it such that the AI only needs to inspire humans to make the right decision. It's hard to know in advance whether this reframing will be any easier, but even if it doesn't help, I have a strong intuition that understanding why it doesn't help would shed light on the barriers to solving the core alignment problem.

Weren't you focused on wise AI advisors via Imitation Learning before? 🎯

Yep, I was focused on it before. I now see that goal as overly narrow. The goal is to produce wise AI Advisors via any means. I think that Imitation Learning is underrated, but there are lots of other approaches that are worth exploring as well.

✋ Objections 

Isn't this argument a bit pessimistic? 🙅‍♂️⚖️. I prefer to be optimistic. 👌

Optimism isn't about burying your head in the sand and ignoring the massive challenges facing us. That's denialism. Optimism is about rolling up your sleeves and doing what needs to be done.

The nice thing about this proposal from an optimistic standpoint is that, assuming there is a way for AI to go well for humanity, then it seems natural to expect that there is some way to leverage AI to help us find it[61].

Additionally, the argument for developing wise AI advisors isn't in any way contingent on a pessimistic view of the world. Even if you think AI is likely to go well by default, wise AI advisors could still be of great assistance for making things go even better. For example, facilitating negotiations between powers, navigating the safety-openness tradeoff and minimising any transitional issues.

But I don't think timelines are short. They could be long. Like more than a decade. 👌🤷‍♂️

Short timelines add force to the argument I've made above, but they aren't at all a necessary component[62].

Even if AI will be developed over decades rather than years, there's still enough different challenges and key decisions that unaugmented human wisdom is unlikely to be sufficient.

In fact, my proposal might even work better over long timelines, as it provides more time for AI advisers to help steer the world in a positive direction[63].

Don't companies have a commercial incentive to train wise AI by default? 🤏

I'm extremely worried about the incentives created by a general chatbot product. The average user is low-context and this creates an incentive towards sycophancy 🤥.

I believe that a product aimed at providing advice for critically important decisions would provide better incentives, even it were created by the same company.

Furthermore, given the potential for short timelines, it seems extremely risky 🎰 to rely purely on the profit motive, especially since there is a much stronger profit motive to pursue capabilities 💰💰💰. A few months' delay could easily mean that a wise AI advisor isn't available for a crucial decision 🚌💨. Humanity has probably already missed having such advisors available to assist us during a number of key decision points 😭.

Won't this just be used by malicious actors? Doesn't this just accelerate capabilities? 🤔❌

I expect both the benefits and the externalities to vary hugely by capability. I expect some to be positive, some to be negative and some to be extremely hard to determine. More work is required to figure out which capabilities are best from a differential technology development perspective.

I understand that this answer might be frustrating, but I think it's worth sharing these ideas even though I haven't yet had time to run this analysis. I have a  list of projects  that I hope will prove fairly robust, despite all the uncertainty.

Is there value in wisdom given that wisdom is often illegible and this makes it non-verifiable? ✔️💯

Oscar Delany makes this argument in  Tentatively against making AIs 'wise'  (runner up in the  AI Impact Essay Competition ).

This will depend on your definition of wisdom.

I admit that this tends to be a problem with how I conceive of wisdom; however, I will note that  Imagining and Building Wise Machines  ( summary ) takes the opposite stance: that wisdom, conceived of as metacognition, can actually assist with explainability.

But let's assume that this is a problem. How much of a problem is it?

I suspect that this varies significantly by actor. There's a reasonable argument that public institutions shouldn't be using such tools for reasons of justice. However, these arguments have much less force when it comes to private actors.

Even for private actors, it makes sense to use more legible techniques as much as possible, but I don't think this will be sufficient for all decisions. In particular, I don't think objective reasoning is sufficient for navigating the key decisions facing society in the transition to advanced AI.

But I also want to push back against the claim of non-verifiability. You can do things like running a pool of advisors and only taking dramatic action if more than a certain proportion agree; plus you can do testing, even advanced things like latent adversarial testing. It's not as verifiable as we'd like, but it's not like we're completely helpless here.

There will also be ways to combine wise AI advisors with more legible systems.
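
To make the pool-of-advisors idea slightly more concrete, here's a minimal sketch in Python. It's purely illustrative: the Advisor type, the threshold value and the toy advisors are all hypothetical placeholders I've made up, not an existing system or API.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical: an "advisor" is anything that, given a proposed action,
# returns True if it endorses taking that action.
Advisor = Callable[[str], bool]

@dataclass
class AdvisorPool:
    advisors: List[Advisor]
    approval_threshold: float = 0.8  # fraction of advisors that must agree

    def endorse(self, proposed_action: str) -> bool:
        """Only endorse a dramatic action if enough advisors independently agree."""
        votes = [advisor(proposed_action) for advisor in self.advisors]
        return sum(votes) / len(votes) >= self.approval_threshold

# Toy usage with three hard-coded advisors of differing dispositions.
cautious = lambda action: "irreversible" not in action
optimistic = lambda action: True
sceptical = lambda action: False

pool = AdvisorPool([cautious, optimistic, sceptical], approval_threshold=0.66)
print(pool.endorse("deploy irreversible policy"))  # False: only 1 of 3 approve
print(pool.endorse("run a small pilot study"))     # True: 2 of 3 approve
```

Obviously a real pool would want independently trained advisors and per-decision thresholds, but even a crude gate like this is more auditable than a single opaque advisor.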

I'm mostly worried about inner alignment. Does this proposal address this? 🤞✔️

Inner alignment is an extremely challenging issue. If only we had some... wise AI advisors to help us navigate this problem.

"But this doesn't solve the problem, this assumes that these advisors are aligned themselves": Indeed, that is a concern. However, I suspect that the wise AI advisors approach has less exposure to these kinds of risks as it allows us to achieve certain goals at a lower base model capability level.

• Firstly, wise people don't always have huge amounts of intellectual firepower. So I believe that we will be able to achieve a lot without necessarily using the most powerful models.
• Secondly, the approach of combined human-AI teams allows the humans to compensate for any weaknesses present in the AIs.

In summary, this approach might help in two ways: by reducing exposure and advising us on how to navigate the issue.

🌅 What makes this agenda so great?

Thanks for asking 🥳. Strangely enough I was looking for an excuse to hit a bunch of my key talking points 😂. Here's my top three:


i: 🤝🧭🌎 Imagine a coalition of organisations attempting to steer the world towards positive outcomes[64], with wise AI advisors helping them refine strategy, improve coordination and engage in non-manipulative persuasion. How much do you think this could shift our trajectory? It seems to me that even a small coalition could make a significant difference over time.
 

Q: If this existed, would your organisation want to be part of such a coalition?

Stronger, but more speculative 🤜🤛: Forget a small coalition[65], what if these wise AI advisors were able to help an initial coalition cultivate a much larger set of allies? Surely this would greatly increase the impact[66]?

❉


ii: 🆓🍒✳️ Wise AI advisors are most likely complementary with almost any plan for making AGI go well[67]. I expect that much of this work could take the form of a startup and that non-profits working in this space would appeal to new funders, such that there's very little opportunity cost in terms of pursuing this direction.
 

Q: How could wise AI advisors augment your favoured strategy?

 

Stronger, but more speculative 🕵️🧩: Forget marginal improvements, wisdom tech could provide the missing piece for another strategy. It seems quite likely that there are strategies where we'd inevitably fail with purely raw, unaided humans trying to pull it off[68], but where we could scrape through if we deployed (wise) AI advisors in the right way. 

❉


iii: 🧑‍🔬🧭🌅 Wisdom tech seems favourable from a differential technology development perspective. Most AI technology differentially advantages "reckless and unwise" actors, since "responsible & wise" actors[69] need more time to figure out how to deploy a technology. There's a limit to how much we can speed up human processes, but wise AI advisors could likely reduce this time lag further[70]. There's also a reasonable chance that wisdom tech allows "reckless and unwise" actors to realise their own foolishness[71]. For those who favour openness, wisdom tech might be safer to distribute.
 

 Q: What aspect of wisdom would be most beneficial from a differential technology development perspective?

Stronger, but more speculative 🦉💥: Forget nudging technological development around the edges: if intelligence/capabilities tech differentially advantages irresponsible and unwise actors due to its very nature, perhaps pursuing an intelligence explosion is actually a fatal mistake? Could pursuing a wisdom explosion instead break this trend[72]?

 

🎁 Not persuaded yet? Here are three more bonus points:

  • 🦋🌪️: Even modest boosts in wisdom could be significant. Single decisions often shape the course of history. Altering the right decision might be the difference between avoiding and experiencing a catastrophe. Similarly, there are individual actors who could dramatically change the strategic situation by deciding to start acting responsibly.
  • 🚧🌱: Progress on the core part of the alignment problem seems to have stalled. They say insanity is doing the same thing over and over again without success, so perhaps we should be looking for new framings? The wisdom framing seems quite underexplored and potentially fruitful to me[73]. It's not at all obvious that we want our technology to be intelligent more than we want it to be wise[74]. Furthermore, considering how to steer the world using cybernetic systems involving both humans and AI provides a generalisation of the alignment problem. Maybe neither of these frames will ultimately make sense, but even if that's the case, I suspect that understanding precisely why they aren't fruitful would be valuable[75].
  • 🐦‍⬛🦉: Even if the most ambitious parts of this proposal fail on a concrete level, training up a bunch of folks with both a deep understanding of the alignment problem and wisdom seems pretty darn valuable. Even if we don't know exactly what these people might do, it feels like a robustly useful skillset for the community to be developing[76].

I believe that this proposal is particularly promising because it has so many different plausible theories of change. It's hard to know in advance which assumptions will or will not pan out.

You might also be interested in my draft post  N Stories of Impact for Wise AI Advisors 🏗️ , which attempts to cleanly separate out the various possible theories of impact.

☞ I wasn't sold before, but I am now. How can I get involved? 🥳🎁

If you're serious, please  ✉️ PM me . If you think you might be serious, but need to talk it through, please reach out as well.

It'd be useful for me to know your background and how you think you could contribute. Maybe tell me a few facts about yourself, what interests you, and drop a link to your LinkedIn profile?

If you scroll down, you'll see that I've answered some more questions about getting involved, but I'll include some useful links here as well:

•  List of potentially useful projects : for those who want to know what could be done concretely. Scroll down further for a list of project lists ⬇️.
•  Resources for founding an AI safety startup or non-profit : since I believe there should be multiple organisations pursuing this agenda
•  Resources for getting started in AI Safety more broadly : since some of these resources might be useful here as well

✋ But this is still all so abstract? 🧘👩‍🍼

Now that we've clarified some of the possible theories of impact, the next section will delve more into specific approaches and projects, including listing some (hopefully) robustly beneficial projects that could be pursued in this space.

I also intend for some of my future work to be focused on making things more concrete. As an example, I'm hoping to spend some time during my ERA fellowship attempting to clarify what kinds of wisdom are most important for steering the world in positive directions.

So whilst I think it is important to make this proposal more concrete, I'm not going to rush. Doing things well is often better than doing them as fast as possible. It took a long time for AI safety to move from abstract theoretical discussions to concrete empirical research and I expect it'll also take some time for these ideas to mature[77].

🌱 What can I do? Is this viable? Is this useful?

☞ Do you have specific projects that might be useful? -  📚Main List  or expand for other lists

Yes, I created a list:  Potentially Useful Projects in Wise AI . It contains a variety of projects from ones that would be marginally helpful to incredibly ambitious moonshots.

What other project lists exist?

Here are some project lists (or larger resources containing a project list) for wise AI or related areas:

•  AI Impacts Essay Competition : Covers the automation of philosophy and wisdom.

•  Fellowship on AI for Human Reasoning - Future of Life Foundation : AI tools for coordination and epistemics

•  AI for Epistemics - Benjamin Todd  - He writes: "The ideal founding team would cover the bases of: (i) forecasting / decision-making expertise (ii) AI expertise (iii) product and entrepreneurial skills and (iv) knowledge of an initial user-type. Though bear in mind that if you have a gap in one of these areas now, you could probably fill it within a year" and then provides a list of projects.

•  Project ideas: Epistemics - Forethought  - focuses on improving epistemics in general, not just AI solutions.

Do you have any advice for creating a startup in this space? -  📚Resources 

See the  AI Safety & Entrepreneurship wiki page  for resources including articles, incubation programs, fiscal sponsorship and funding.

Is it really useful to make AI incrementally wiser? ✔️💯

AI being incrementally wiser might still be the difference between making a correct or incorrect decision at a key point.

Often several incremental improvements stack on top of each other, leading to a major advance.

Further, we can ask AI advisors to advise us on more ambitious projects to train wise AI. And the wiser our initial AI advisors are, the more likely this is to go well. That is, improvements that are initially incremental might be able to be leveraged to gain further improvements.

In the best case, this might kick off a (strong) positive feedback cycle (aka wisdom explosion).

Sure, you can make incremental progress. But is it really possible to train an AI to be wise in any deep way? 🤞✔️🤷

Possibly 🤷.

I'm not convinced it's harder than any of the other ambitious alignment agendas and we won't know how far we can go without giving it a serious effort. Is training an AI to be wise really harder than aligning it? If anything, it seems like a less stringent requirement.

Compare:
• Ambitious mechanistic interpretability aims to perfectly understand how a neural network works at the level of individual weights
• Agent foundations attempting to truly understand what concepts like agency, optimisation, decisions and values are at a fundamental level
• Davidad's Open Agency Architecture attempting to train AIs that come with proof certificates showing that the AI has less than a certain probability of having unwanted side-effects

Is it obvious that any of these are easier than training a truly wise AI advisor? I can't answer for you, but it isn't obvious to me.

Given the stakes, I think it is worth pursuing ambitious agendas anyway. Even if you think timelines are short, it's hard to justify holding a probability approaching 100%, so it makes sense for folk to be pursuing plans on different timelines.

I understand your point about all ambitious alignment proposals being extremely challenging, but do you have any specific ideas for training AI to be deeply wise (even if speculative)? ✅

This section provides a high-level overview of some of my more speculative ideas. Whilst these ideas are very far from fully developed, I nonetheless thought it was worthwhile sharing an early version of them. That said, I'm very much of the "let a hundred flowers bloom" school. Whilst I think the approach I propose is promising, I also think it's much more likely than not that someone else comes up with an even better idea.

I believe that imitation learning is greatly underrated. I have a strong intuition that using the standard ML approach for training wisdom will fail because of the traditional Goodhart's law reasons, except that it'll be worse because wisdom is such a fuzzy thing (Who the hell knows what wisdom really is?).

I feel that training a deeply wise AI system requires an objective function that we can optimise hard on. This naturally draws me towards imitation learning. It has its flaws, but it certainly seems like we could optimise much harder on an imitation function than with attempting to train wisdom directly.

Now, imitation learning is often thought of as weak and there's certainly some truth in this when we're just talking about an initial imitation model. However, this isn't a cap on the power of imitation learning, only a floor. There's nothing stopping us from training a bunch of such models and using amplification techniques such as debate, trees of agents or even iterated distillation and amplification. I expect there to be many other such techniques for amplifying your initial imitation models.
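
To give a rough feel for how an initial imitation model could act as a floor rather than a cap, here's a minimal, purely hypothetical sketch of one round of an iterated distillation-and-amplification style loop. Every name here is a placeholder I've invented for illustration: the "training" is a trivial lookup table and the "amplification" is a crude majority vote, standing in for real ML and for debate or trees of agents.

```python
from typing import Callable, List, Tuple

# A "model" here is just a function from a question to an answer.
Model = Callable[[str], str]

def train_imitation_model(demonstrations: List[Tuple[str, str]]) -> Model:
    """Placeholder for imitation learning: fit a model to (question, answer)
    pairs collected from wise humans (here, a trivial lookup table)."""
    lookup = dict(demonstrations)
    return lambda question: lookup.get(question, "I don't know")

def amplify(model: Model, n_copies: int = 3) -> Model:
    """Placeholder amplification: consult several copies of the model (a stand-in
    for debate, trees of agents, etc.) and aggregate their answers."""
    def amplified(question: str) -> str:
        answers = [model(question) for _ in range(n_copies)]
        return max(set(answers), key=answers.count)  # crude majority aggregation
    return amplified

def distill(amplified_model: Model, questions: List[str]) -> Model:
    """Placeholder distillation: train a new imitation model on the amplified
    system's answers, giving a cheaper model to amplify again next round."""
    return train_imitation_model([(q, amplified_model(q)) for q in questions])

# One round: imitate wise humans, amplify the imitator, distill the result.
base = train_imitation_model([("Should we rush deployment?", "No, gather more evidence first.")])
stronger = distill(amplify(base), ["Should we rush deployment?"])
print(stronger("Should we rush deployment?"))
```

The point isn't that this particular loop is the right design; it's just that an imitation objective gives you something concrete to optimise hard on, which amplification-style techniques can then build upon.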

See  An Overview of "Obvious" Approaches to Training Wise AI Advisors  for more information. This post was the first half of my entry into the AI Impacts Essay Competition on the Automation of Wisdom and Philosophy which placed third.

Why mightn't I want to work on this?

I think it's important to consider whether you might not be the right person to make progress on this.

I mean — wisdom? Who the hell knows what that is?

I expect most people who try to gain traction on this to just end up confusing themselves and others. So, you probably shouldn't work on this unless you're a particularly good fit.

On the other hand, there are very strong selection effects where those who are fools think they are wise and those who are wise doubt their own wisdom.

I wish I had a good answer here. All I can say is to avoid both arrogance and performative modesty when reflecting on this question.

Many of the methods you've proposed are non-embodied. Doesn't wisdom require embodiment? 🤔❌🤷

There's a very simplistic version of this argument that proves too much: people have used this claim to argue that LLMs would never be able to learn to reason, whilst o3 seems to have conclusively disproven this. So I don't think that the standard version of this argument works.

It's also worth noting that in robotics, there are examples of zero-shot transfer to the real world. Even if wisdom isn't directly analogous, this suggests that large amounts of real-world experience might be less crucial than it appears at first glance.

All this said, I find it plausible that a more sophisticated version of this argument might pose a greater challenge. Even if this were the case, I see no reason why we couldn't combine both embodied and non-embodied methods. So even if there were a more sophisticated version that ultimately turned out to be true, this still wouldn't demonstrate that research into non-embodied methods was pointless.

✍️ Conclusion:

Coming soon 😉.

☞ Okay, you've convinced me now. Any chance you could repeat how to get involved so I don't have to scroll up? ✅📚

Sure 😊. Round 1️⃣2️⃣ 🔔, here we go 🪂!

As I said, if you're serious, please  ✉️ PM me . If you think you might be serious, but need to talk it through, please reach out as well.

It'd be useful for me to know your background and how you think you could contribute. Maybe tell me a few facts about yourself, what interests you, and drop a link to your LinkedIn profile?

Useful links:

•  List of potentially useful projects : for those who want to know what could be done concretely. Scroll down further for a list of project lists ⬇️.
•  Resources for founding an AI safety startup or non-profit : since I believe there should be multiple organisations pursuing this agenda
•  Resources for getting started in AI Safety more broadly : since some of these resources might be useful here as well

📘 Appendix:

Who are you? 👋

Hi, I'm Chris (Leong). I've studied maths, computer science, philosophy and psychology. I've been interested in AI safety for nearly a decade and I've participated in quite a large number of related opportunities.

With regards to the intersection of AI safety and wisdom:

I won third prize in the  AI Impacts Competition on the Automation of Wisdom and Philosophy . It's divided into two parts:

•  An Overview of “Obvious” Approaches to Training Wise AI Advisors 

•  Some Preliminary Notes on the Promise of a Wisdom Explosion 

I recently ran an  AI Safety Camp project on Wise AI Advisors .

More recently, I was invited to continue my research as an ERA Cambridge Fellow in Technical AI Governance.

Feel free to connect with me on  LinkedIn  (ideally mentioning your interest in wise AI advisors so I know to accept) or  ✉️ PM me .  

Hi, I'm Christopher Clay. I was previously a Non-Trivial Fellow and I participated in the Global Challenges Project.

Does anyone else think wise AI or wise AI advisors are important? ✔️💯

Yes, here are a few:

Imagining and Building Wise Machines: The centrality of AI metacognition 

 original paper   summary 

Authors: Yoshua Bengio, Igor Grossmann, Melanie Mitchell and Samuel Johnson[78] et al

Wise metacognition can lead to a virtuous cycle in AI, just as it does in humans. We may not know precisely what form wise AI will take—but it must surely be preferable to folly.

The paper mostly focuses on wise AI agents, however, not exclusively so:

Prof. Grossmann (personal correspondence): "I like the idea of wise advisors. I don't think the argument in our paper is against it -it all depends on how humans will use the technology (and there are several papers on the role of metacognition for discerning when to rely on decision-aids/AI advisors, too)."

AI Impacts Automation of Wisdom and Philosophy Competition 

Organised by: Owen Cotton-Barratt
Judges: Andreas Stuhlmüller, Linh Chi Nguyen, Bradford Saad and David Manley[79]

 competition announcement   Wise AI Wednesdays post   winners and judge's comments [80]

AI is likely to automate more and more categories of thinking with time.

By default, the direction the world goes in will be a result of the choices people make, and these choices will be informed by the best thinking available to them. People systematically make better, wiser choices when they understand more about issues, and when they are advised by deep and wise thinking.

Advanced AI will reshape the world, and create many new situations with potentially high-stakes decisions for people to make. To what degree people will understand these situations well enough to make wise choices remains to be seen. To some extent this will depend on how much good human thinking is devoted to these questions; but at some point it will probably depend crucially on how advanced, reliable, and widespread the automation of high-quality thinking about novel situations is[81]

Beyond Artificial Intelligence (AI): Exploring Artificial Wisdom (AW)
Authors: Dilip V. Jeste, Sarah A. Graham, Tanya T. Nguyen, Colin A. Depp, Ellen E. Lee, Ho-Cheol Kim

 paper 

The ultimate goal of artificial intelligence (AI) is the development of technology that is best able to serve humanity and this will require advancements that go beyond the basic components of general intelligence. The term “intelligence” does not best represent the technological needs of advancing society, because intelligence alone does not guarantee well-being either for individuals or societies. It is not intelligence, but wisdom, that is associated with greater well-being, happiness, health, and perhaps even longevity of the individual and the society. Thus, the future need in technology is for artificial wisdom (AW)

What do you see as the strongest counterarguments against this general direction? 🔜🙏

I've discussed several possible counter-arguments above, but I'm going to hold off publicly posting about which ones seem strongest to me for now. I'm hoping this will increase diversity of thought and nerdsnipe people into helping red-team my proposal for me 🧑‍🔬👩‍🔬🎯. Sometimes with feedback there's a risk where if you say, "I'm particularly worried about these particular counter-arguments" you can redirect too much of the discussion onto those particular points, at the expense of other, perhaps stronger criticisms.

☞ Do you have any recommendations for further reading? ✅📚

I've created a weekly post series on Less Wrong called  Wise AI Wednesdays .

Motivation

‘AI for societal uplift’ as a path to victory - LW Post: Examines the conditions in which a "societal uplift" - epistemics + co-ordination + institutional steering - might or might not lead to positive outcomes

N Stories of Impact for Wise AI Advisors - Draft 🏗️ : Different stories about how wise AI advisors could be useful for having a positive impact on the world.

Artificial Wisdom

Imagining and building wise machines: The centrality of AI metacognition by Johnson, Karimi, Bengio, et al. -   paper ,  summary : This paper argues that wisdom involves two kinds of strategies (task-level strategies & metacognitive strategies). Since current AI is pretty good at the former, they argue that we should pursue the latter as a path to increasing AI wisdom.

Finding the Wisdom to Build Safe AI by Gordon Seidoh Worley -  LW post : Seidoh talks about his own journey in becoming wiser through Zen and outlines a plan for building wise AI. In particular, he argues that it will be hard to produce wise AI without having a wise person to evaluate it.

Designing Artificial Wisdom: The Wise Workflow Research Organisation by Jordan Arel -  EA forum post : Jordan proposes mapping the workflows within an organisation that is researching a topic like AI safety or existential risk. AI could be used to automate or augment parts of their work. This proportion would increase over time, with the hope being that this would eventually allow us to fully bootstrap an artificially wise system.

Should we just be building more datasets? by Gabriel Recchia -  Substack : Argues that an underrated way of increasing the wisdom of AI systems would be building more datasets (whilst also acknowledging the risks).

Tentatively Against Making AIs 'wise' by Oscar Delany -  EA forum post : This article argues that, insofar as wisdom is conceived of as being more intuitive than carefully reasoned, pursuing AI wisdom would be a mistake, as we need AI reasoning to be transparent. I've included this because it seems valuable to have at least one critical article.

Neighbouring Areas of Research

What's Important In "AI for Epistemics"? by Lukas Finnveden -  Forethought : AI for Epistemics is a subtly different but overlapping area. It is close enough that this article is worth reading. It provides an overview of why you might want to work on this, heuristics for good interventions and concrete projects.

AI for AI Safety by Joe Carlsmith ( LW post ): Provides a strategic analysis of why AI for AI safety is important whether it's for making direct safety progress, evaluating risks, restraining capabilities or improving "backdrop capacity". Great diagrams.

AI Tools for Existential Security by Lizka Vaintrob and Owen Cotton-Barratt --  Forethought : Discusses how applications of AI can be used to reduce existential risks and suggests strategic implications.

Not Superintelligence: Supercoordination -  forum post [82]: This article suggests that software-mediated supercoordination could be beneficial for steering the world in positive directions, but also identifies the possibility of this ending up as a "horrorshow".

Human Wisdom

Stanford Encyclopedia of Philosophy Article on Wisdom by Sharon Ryan -  SEP article : SEP articles tend to be excellent, but also long and complicated. In contrast, this article maintains the excellence while being short and accessible.

Thirty Years of Psychology Wisdom Research: What We Know About the Correlates of an Ancient Concept by Dong, Weststrate and Fournier -  paper : Provides an excellent overview of how different groups within psychology view wisdom.

The Quest for Artificial Wisdom by Sevilla -  paper : This article outlines how wisdom is viewed in the Contemplative Sciences discipline. It has some discussion of how to apply this to AI, but much of this discussion seems outdated in light of the deep learning paradigm.

Applications to Governance

🏆 Wise AI support for government decision-making by Ashwin -  Substack  (Prize winning entry in the  AI Impacts Automation of Wisdom and Philosophy Competition ): This article convinced me that it isn't too early to start trying to engage the government on wise AI. In particular, Ashwin considers the example of automating the Delphi process. He argues that even though you might begin by automating parts of the process, over time you could expand beyond this, for example, by helping the organisers figure out what questions they should be asking.

Some of my own work:

🏆 My third prize-winning entry in the  AI Impacts Automation of Wisdom and Philosophy Competition  (split into two parts):

 Some Preliminary Notes on the Promise of a Wisdom Explosion : Defines a wisdom explosion as a recursive self-improvement feedback loop that enhances wisdom rather than intelligence, in contrast to the more traditional intelligence explosion. Argues that wisdom tech is safer from a differential technology perspective.

An Overview of "Obvious" Approaches to Training Wise AI Advisors : Compares four different high-level approaches to training wise AI: direct training, imitation learning, attempting to understand what wisdom is at a deep, principled level, and the scattergun approach. One of the competition judges wrote: "I can imagine this being a handy resource to look at when thinking about how to train wisdom, both as a starting point, a refresher, and to double-check that one hasn’t forgotten anything important".

☞ What are some projects that exist in this space? — TODO

 Automating the Delphi Method for Safety Cases  — Philip Fox, Ketana Krishna, Tuneer Mondal, Ben Smith, Alejandro Tlaie  — Ashwin and Michaelah Gertz-Billingsley proposed automating the Delphi Method as a foot-in-the-door for getting the government to use wise AI. Well, it turns out these folks at Arcadia Impact have done it!

 

  1. ^

    ⇢ "What's an metapost?" — Oh, I just made that term up. Essentially, it's just a megapost, but instead of being one long piece of text, it uses collapsible sections to keep the core post lightweight 🪶.  In other words, it's both a megapost and a minipost at the same time 😉.

    ⇢ "But people will confuse this with a meta post?" — No space. Simple 🤷‍♂️.

    ⇢ This term happens to also be appropriate in a second sense; some aspects are inspired by metamodernism.

  2. ^

    ⇢ I jokingly refer to this as the "Party Edition" as it's designed to be shared at informal social gatherings 🕺.

    ⇢ "Party edition? Why the hell would you make a party edition!?" — I could you tell a story about how this will actually be impactful (HPMoR; Siliconversations; spreading ideas by going from person to person being massively underrated), but at the end of the day, I wanted to make it, so I made it 🤷‍♂️.

    ⇢  "But what is it?" — I've handcrafted this version for more casual contexts. I've tried to be more authentic and I hope you find it more engaging. That said, these are serious issues, so I'm also creating a "serious edition" for situations for contexts where sharing a post like this might come off as disrespectful 🎩. Otherwise: 🥳.

    ⇢ "That makes sense, but are the lame jokes really necessary?" —  Yes, they are most necessary. "When the novice attained the rank of grad student, he took the name Bouzo and would only discuss rationality while wearing a clown suit" 🤡🎩

  3. ^

    "Why put so much effort into a Less Wrong post?" — 🟥🍉

  4. ^

    "Regardless" — Yes. One must imagine Sisyphus happy... ❤️🦉.

  5. ^

    ⇢  🔥👀🐎 — I remember reading about how, in The Lord of the Rings, they even took the time to inscribe some writing inside someone's armor that could never be seen on camera. I found this inspiring, as a demonstration of what true dedication to art looks like —  ▶️

    ⇢ Complete tangent but...

  6. ^

    ⇢ "I see in your eyes the same fear that would take the heart of me. A day may come when the courage of men fails, when we forsake our friends and break all bonds of fellowship, but it is not this day. An hour of wolves and shattered shields, when the age of men comes crashing down, but it is not this day! This day we fight!! By all that you hold dear on this good Earth, I bid you stand" 💍🌋🛡️ — ▶️

    ⇢ "If we as humans ever face extinction via alien invasion, I nominate Aragorn to come to life and lead mankind" — @jlop6822 (Youtube comment)

  7. ^

    "If I see a situation pointed south, I can't ignore it. Sometimes I wish I could" — Steve Rogers 🛡️🇺🇸

  8. ^

    It's really just this version that is the passion project. The "Formal Version" is necessary and the right version for certain contexts, but that format also loses something 💔.

  9. ^

    "We will call this discourse, oscillating between a modern enthusiasm and a postmodern irony, metamodernism... New generations of artists increasingly abandon the aesthetic precepts of deconstruction, parataxis, and pastiche in favor of aesth-ethical notions of reconstruction, myth, and metaxis... History, it seems, is moving rapidly beyond its all too hastily proclaimed end" —  Notes on metamodernism

  10. ^

    I'm using the term "alignment proposal" in an extremely general sense, particularly "how we could save the world", rather than the narrow sense of achieving good outcomes by specifically aligning models. That said, wise AI advisors could assist with aligning models and they also provide us with a generalised version of the alignment problem (discussed later in this post) 🌏 → 🌞.

  11. ^

    "It is only as an aesthetic phenomenon that existence and the world are eternally justified" — ❤️🦉🔨.

  12. ^

    Reality does not grade on a curve 🚀🏰💥

  13. ^

    Is naivety always bad? Is there some kind of universal rule? Or are there exceptions? Perhaps there are situations where a certain form of naivety can become a self-fulfilling prophecy (or hyperstition) 🔮🪞.

  14. ^

    "Of all that is written, I love only what a man has written with his blood. Write with blood, and you will experience that blood is spirit" — ❤️🦉🔨.

  15. ^

    Unfortunately, I've forgotten the names of many people who provided me with useful feedback 😔.

  16. ^

    🇺🇸🚀 — 🏆: 📽️🎭🦾🎬🎞️🎥🎼 — ▶️

  17. ^

    Commenting on the parallels between the Manhattan project and AI on This Past Weekend with Theo Von ☢️🤖.

  18. ^

    Source: NeurIPS’2019 workshop on Fairness and Ethics, Vancouver, BC, On the Wisdom Race, December 13th, 2019 🦉🏎️🏁

  19. ^

    One reason why this arrangement of quotes amuses me is because the Sam Altman quote was actually in response to the host specifically asking Sam about what he thought about Yoshua Bengio's concerns 😂.

  20. ^

    "Centuries" — I wish 😭.

  21. ^

    🧗‍♀️

  22. ^

    🗿👑☢️

  23. ^

    👦💣💥

  24. ^

    📺: 🪐🔭

  25. ^

    ⛪: 🤲🥣 → 🦁🌱

  26. ^

    🇺🇳: 👨‍💻, ⭐⭐⭐⭐⭐

  27. ^

    👨‍🔬👨‍🦼: 🌬️🕳️

  28. ^

    Salvia Ego Ipse, Philosopher of AI, Liber magnus, sapiens et praetiosus philosophiae ÷

  29. ^

    "But what if an equivalent increase in wisdom turns out to be impossible?" — This is left as an exercise to the reader 😱.

  30. ^

    I refer to this as the SUV Triad 🫰

  31. ^

    ⇢ I refer to this as the Wisdom-Capabilities Gap or simply the Wisdom Gap.

    ⇢ "Does it make sense to assume that greater capabilities require greater wisdom to handle?"  —  Yes, I believe it does. I'll argue for this in the Core Argument 🔜.

  32. ^

    I may still decide to weaken this claim. This is a draft post after all 🙏.

  33. ^

    ⇢ "Then why is this section first?": You might find it hard to believe, but there's actually folks who'd freak out if I didn't do this 🔱🫣. Yep... 🤷‍♂️

    ⇢ I recommend diving straight into the core argument, and seeing if you agree with it when you substitute your own understanding of what wisdom is. I promise you 💍, I'm not doing this arbitrarily. However, if you really want to read the definitions first, then you should feel free to just go for it 🧘👌.

  34. ^

    For example, see a and b.

  35. ^

    I also believe that this increases our chances of being able to set up a positive feedback loop 🤞🤞🤞 of increasing wisdom (aka "wisdom explosion" 🦉🎆) compared to strategies that don't leverage humans. But I don't want to derail the discussion by delving into this right now 🐇🕳️.

  36. ^

    I'd like to think of myself as being the kind of person who's sensible enough to avoid getting derailed by small or irrelevant details 🤞🤞. Nonetheless, I can recall a few times when I've failed at this in the past 🚇💥. Honestly, I wouldn't be surprised if many people were in the same boat 🧑‍🤝‍🧑🧑‍🤝‍🧑🧑‍🤝‍🧑🌏🤞.

  37. ^

    I'm using emojis to provide a visual indication of my answer without needing to expand the section 🦸🩻.

  38. ^

    This post is being written collaboratively by Chris Leong (primary author) and Christopher Clay (second author) in the voice of Chris Leong 🤔🤝✍️. The blame for any cringe in the footnotes can be pinned on Chris Leong 🎯.

  39. ^

    ⇢ I believe that good communication involves being as open as possible about the language game ❤️🦉 being played 🫶.

    Given this, I want to acknowledge that this post is not intended to deliver a perfectly balanced analysis, but rather to lay out an optimistic case for wise AI advisors 🌅. In fact, feel free to think of it as a kind of manifesto 📖🚪📜📌.

    I suspect that many, if not most, readers of Less Wrong will be worried that this must necessarily come at the expense of truth-seeking 🧘. In fact, I probably would have agreed with this not too long ago. However, recently I've been resonating a lot more with the idea that there's value in all kinds of language games, so long as they are deployed in the right context and pursued in the right way 🤔[83]. Most communication shouldn't be a manifesto, but the optimal number of manifestos isn't zero (❗).

    ⇢ Stone-cold rationality is important, but so is the ability to dream 😴💭🎑. It's important to be able to see clearly, but also to lay out a vision 🔭✨. These are in tension, sure, but not in contradiction ⚖️. Synthesis is hard, but not impossible 🧗.

    ⇢ If you're interested in further discussion, see the next footnote...

  40. ^

    ⇢ "Two footnotes on the exact same point? This is madness!": No, this is Sparta! 🌍⛰️🐰🕳️

    Spartans don't quit and neither do we 💪. So here it is: Why it makes sense to lay out an optimistic case, Round 🔔🔔.

    Who doesn't love a second bite at the apple? 👸🏻🍏💀😴

    ⇢ Adopting a lens of optimism unlocks creativity by preventing you from discarding ideas too early (conversely, a lens of pessimism aids with red-teaming by preventing you from discarding objections too easily). It increases the chance that bad ideas make it through your initial filter, but these can always be filtered further down the line. I believe that such a lens is the right choice for now, given that this area of thoughtspace is quite neglected. Exploration is a higher priority than filtration 🌬️⛵🌱.

    ⇢ "But you're biasing people toward optimism": I take this seriously 😔🤔. At the same time, I don't believe that readers need to be constantly handheld lest they form the wrong inference. Surely, it's worth risking some small amount of bias to avoid coming off as overly paternalistic?

    Should texts be primarily written for the benefit of those who can't apply critical thinking? To be honest, that idea feels rather strange to me 🙃. I think it's important to primarily write for the benefit of your target audience and I don't think that the readers I care about most are just going to absorb the text wholesale, without applying any further skepticism or cognitive processing 🤔. Trusting the reader has risks, sure, but I believe there are many people for whom this is both warranted and deserved 🧭⛵🌞.

  41. ^

    "Why all the emojis?": All blame belongs to Chris Leong 🎯.

    "Not who deserves to be shot, but why are you using them?": A few different reasons: they let me bring back some of the expressiveness that's possible in verbal conversation, but which is typically absent in written communication 🥹😱, and striking a more casual tone helps differentiate it from any more academic or rigorous treatment (😉) that might hypothetically come later 🤠🛤️🎓 and they help keep the reader interested 🎁.

    But honestly: I mostly just feel drawn to the challenge. The frame "both/and" has been resonating a lot with me recently and one such way is that I've been feeling drawn towards trying to produce content that manages to be both fun and serious at the same time 🥳⚖️🎩.

  42. ^

    I'm starting to wonder if this point is actually true. This is a NOTE TO MYSELF to review this claim.

  43. ^

    "Why can't we afford to leave anything on the table?" - The task we face is extremely challenging, the stakes sky high, it's extremely unclear what would have to happen for AI to go well, we have no easy way on gaining clarity on what would have to happen and we're kind of in an AI arms race which basically precludes us from fully thinking everything through 😅⏳⏰. 

  44. ^

    "But what about prioritisation?" — Think Wittgenstein. I'm asserting the need for a mindset, a way of being 🧘🏃‍♂️🏋️.

    "But I don't understand you"  — If a lion could speak, we could not understand him ❤️🦉🕵️.

  45. ^

    This section is currently being edited, so it may take more than 3 minutes. Sorry 🙏!

  46. ^

    I selected this name because it's easy to say and remember, but in my heart it'll always be the MST/MSU/MSV trifecta (minimal spare time, massive strategic uncertainty, many scary vulnerabilities) 💔🚢🌊🎶. I greatly appreciate Will MacAskill's recommendation that I simplify the name 🙏.

  47. ^

    I don't think I'm exactly arguing anything super original or unprecedented here 😭😱, but I don't know if I've ever really seen anyone really go hard on this shape of argument before 🌊🌊🌊.

  48. ^

    Some of these risks could directly cause catastrophe. Others might indirectly lead to this via undermining vital societal infrastructure such that we are then exposed to other threats 🕷️🕸️. 

  49. ^

    One of the biggest challenges here is that AI throws so many civilisational scale challenges towards us at once. Humanity can deal with civilisational scale challenges, but global co-ordination is hard, and when there are so many different issues to deal with, it's hard to give each problem the proper attention it deserves 😅🌫️💣💣💣.

  50. ^

    I'm intentionally painting these questions as more of a binary than they actually are to help illustrate the range of opinions.

  51. ^

    Some of this is deserved, imho 🤷‍♂️.

  52. ^

    In many ways, figuring out how to act is the much harder part of the problem 🧭🌫️🪨. Given perfect co-ordination between nation states, these issues could almost certainly be resolved quite quickly, but given that there's no agreement on what needs to be done and politics being the mind-killer, I wouldn't hold your breath... 😯😶🐌🚀🚀🚀

  53. ^

    The exponential increase in task-length actually generalises beyond coding.

  54. ^

    What's more impressive is how it was solved: "Why am I excited about IMO results we just published:
    - we did very little IMO-specific work, we just keep training general models
    - all natural language proofs
    - no evaluation harness
    We needed a new research breakthrough and @alexwei_ and team delivered"
    https://x.com/MillionInt/status/1946551400365994077 🤯

  55. ^

    Some people have complained that the students used in this experiment were underqualified as judges, but I'm sure there'll be a follow-up next year that addresses this issue 🤷‍♂️.

  56. ^

    ⇢ Clark: "Sir, this is a Less Wrong. You can't swear here" — In order for a film to get a 12A or PG-13 rating, it cannot contain gratuitous use of profanity. One 'f#@k' is allowed, anymore and it automatically becomes a 15 or R rated" — See, PG-13.

    ⇢ Jax: "Censorship is terrible and the belief that swearing hurts kids is utterly absurd. Do do you think they've forgotten what it was like as a kid?" — Not taking sides, but restrictions aren't always negative. Sometimes limiting a resource encourages people to use that resource to maximum effect 🎯.

  57. ^

    See Life Itself's article on the "wisdom gap", which is the terminology I prefer in informal contexts. 10 Billion uses the term capabilities-wisdom gap, which I've reversed in order to emphasise wisdom. I prefer this version for more formal contexts 📖😕.

  58. ^

    I don't want to dismiss the value of addressing specific issues in terms of buying us time 🔄.

  59. ^

    I recommend finishing the post first (don't feel obligated to expand all the sections) 🙏.

  60. ^

    This paper argues that robustness, explainability and cooperation all facilitate each other 🤞💪. In case the link to wisdom is confusing (🦉❓), the authors argue that metacognition is both key to wisdom and key to these capabilities ☝️.

  61. ^

    This argument could definitely be more rigorous; however, this response is addressed to optimists and it is especially likely to be true if we adopt an optimistic stance on technical and social alignment. So I feel fine leaving it as is 🤞🙏.

  62. ^

    ⚠️ If timelines are extremely short, then many of my proposals might become irrelevant because there won't be time to develop them 😭⏳. We might be limited to simple things like tweaking the system prompt or developing a new user interface, as opposed to some of the more ambitious approaches that might require changing how we train our models. However, there's a lot of uncertainty with timelines, so more ambitious projects might make sense anyway. 🤔🌫️🤞🤷. 

  63. ^

    "If your proposal might even work better over long timelines, why does your core argument focus on short timelines 🤔⁉️": There are multiple arguments for developing wise AI advisors and I wanted to focus on one that was particularly legible 🔤🙏.

  64. ^

    Andrew Critch has labelled this - "pivoting" civilisation across multiple acts, persons and institutions - a pivotal process. He compares this favourably to the concept of a pivotal act - "pivoting" civilisation in a single action - on the basis of it being easier, safer and more legitimate 🗺️🌬️⛵️⛵️⛵️🌎.

  65. ^

    I say "forget X" partly in jest. My point is that if there's a realistic chance of realising stronger possibility, then it would likely be much more significant than the weaker possibility 👁️⚽.

  66. ^

    I think there's a strong case that if these advisers can help you gain allies at all, they can help you gain many allies 🎺🪂🪂🪂.

  67. ^

    In So You Want To Make Marginal Progress..., John Wentworth argues that "when we don't already know how to solve the main bottleneck(s)... a solution must generalize to work well with the whole wide class of possible solutions to other subproblems... (else) most likely, your contribution will not be small; it will be completely worthless" 🗑️‼️.

  68. ^

    My default assumption is that people will be making further use of AI advice going forwards. But that doesn't contradict my key point, that certain strategies may become viable if we "go hard on" producing wise AI advisors 🗺️🔓🌄.

  69. ^

    "But "reckless and unwise" and "responsible & wise" aren't the only possible combinations!" - True 👍, but they co-occur sufficiently often that I think this is okay for a high-level analysis 🤔🤞🙏.

  70. ^

    They could also speed up the ability of malicious and "reckless & unwise" actors to engage in destructive actions; however, I still think such advisors would most likely improve the comparative situation, since many defences and precautions can be put in place before they are needed 🤔🤞💪.

  71. ^

    This argument is developed further in Some Preliminary Notes on the Promise of a Wisdom Explosion 🦉🎆📄.

  72. ^

    I agree it's highly unlikely that we could convince all the major labs to pursue a wisdom explosion instead of an intelligence explosion 🙇. Nonetheless, I don't think this renders the question irrelevant ☝️. Let's suppose (😴💭) it really were the case that the transition to AGI would most likely go well if society had decided to pursue a wisdom explosion rather than an intelligence explosion ‼️. I don't know, but that seems like the kind of thing that would be pretty darn useful to know 🤷‍♂️. My experience with a lot of things is that it's a lot easier to find additional solutions than it is to find the first. In other words, solutions aren't just valuable for their potential to be implemented, but because of what they tell us about the shape of the problem 🗝️🚪🌎.

  73. ^

    How can we tell which framings are likely to be fruitful and which are not 👩‍🦰🍎🐍🌳? This isn't an easy question, but my intuition is that a framing is more likely to be worth pursuing if it leads to questions that feel like they should (❓) be being asked, but have been neglected due to unconscious framing effects 🙈🖼️. In contrast, a framing is less likely to be fruitful if it asks questions that seem interesting prima facie, but for which the best researchers within the existing paradigm have good answers, even if most researchers do not 🏜️✨.

  74. ^

    I acknowledge that "intelligence" is only rather loosely connected with the capabilities AI systems have in practice. Factors like prestige, economic demand and tractability play a larger role in terms of what capabilities are developed than the name used to describe the area of research. Nonetheless, I think it'd be hard to argue that the framing effect hasn't exerted any influence. So I think it is worthwhile understanding how this may have shaped the field and whether this influence has been for the best 🕵️🤔. 

  75. ^

    A common way that an incorrect paradigm can persist is if existing researchers don't have good answers to particular questions, but see them as unimportant 🙄👉.

  76. ^

    I have an intuition that it's much more valuable for the AI safety and governance community to be generating talent with a distinct skillset/strengths than to just be recruiting more folk like those we already have 🧩. As we attract more people with the same skillset we likely experience decreasing marginal returns due to the highest impact roles already being filled 🙅‍♂️📉.

  77. ^

    That said, given how fast AI development is going, I'm hoping the process of concretisation can be significantly sped up. Fortunately, I think it'll be easier to do it this time as I've already seen what the process looked like for alignment 🔭🚀🤞. 

  78. ^

    First author 🥇.

  79. ^

    Wei Dai who was originally announced as a judge withdrew 🕳️.

  80. ^

    Including me 🏆🍀😊.

  81. ^

    They write: "The precise opinions expressed in this post should not be taken as institutional views of AI Impacts, but as approximate views of the competition organizers" ☝️👌.

  82. ^

    Linking to this article does not constitute an endorsement of Sofiechan or any other views shared there. Unfortunately, I am not aware of other discussions of this concept 😭🤷, so I will keep this link in this post temporarily 🙏🔜. 

  83. ^

    You could even say: with the right person, and to the right degree, and at the right time, and for the right purpose, and in the right way ❤️🦉.

Narrative truth

At its worst it can be, but I'd encourage you to reflect on the second quote:

Society would remember the Holocaust differently if there were no survivors to tell the story, but only data, records and photographs. The stories of victims and survivors weave together the numbers to create a truth that is tangible to the human experience…

Scaling AI Safety in Europe: From Local Groups to International Coordination

Great post!

I really appreciate proposals that are both pragmatic and ambitious; and this post is both!

I guess the closest thing there is to a CEA for AI Safety is Kairos. However, they decided to focus explicitly on student groups[1].

  1. ^

    SPAR isn't limited to students, but it is very much in line with this by providing, "research mentorship for early-career individuals in AI safety".

My Interview With Cade Metz on His Reporting About Lighthaven

I think he's clearly had a narrative he wanted to spin and he's being very defensive here.

If I wanted to steelman his position, I would do so as follows (low-confidence and written fairly quickly):

  1. I expect he believes his framing and that he feels fairly confident in it because most of the people he respects also adopt this framing.
  2. In so far as his own personal views make it into the article, I expect he believes that he's engaging in a socially acceptable amount of editorializing. In fact, I expect he believes that editorializing the article in this way is more socially responsible than not, likely due to the role of journalism being something along the lines of "critiquing power".
  3. Further, whilst I expect he wouldn't universally endorse "being socially acceptable among journalists" as guaranteeing that something is moral, he'd likely defend it as a strongly reliable heuristic, such that it would take pretty strong arguments to justify departing from this.
  4. Whilst he likely endorses some degree of objectivity (in terms of getting facts correct), I expect that he also sees neutrality as overrated by old school journalists. I expect he believes that it limits the ability of journalists to steer the world towards positive outcomes. That is, he treats neutrality more as a consideration that can be overridden, rather than a rule.
My Interview With Cade Metz on His Reporting About Lighthaven

I almost agree-voted this, then read the comments below and disagree-voted it instead.

Exploring the "Anti-TESCREAL" Ideology and the Roots of (Anti-)Progress

Fascinating work. I'm keen to hear more about the belief set of this opposing cluster.

Three Quotes on Transformative Technology

You're misunderstanding the language game.

Mech Interp Wiki Page and Why You Should Edit Wikipedia

Do you think Wiki pages might be less important with LLMs these days? Also, I just don't end up on Wiki pages as often; I'm wondering if Google stopped prioritizing them so heavily.

The Open Agency Model

Is there any chance you could define what you mean by "open agency"? Do you essentially mean "distributed agency"?

Chris_Leong's Shortform

Placeholder for an experimental art project — Under construction 🚧[1]

Anything can be art, it might just be bad art — Millie Florence

Art in the Age of the Internet

The medium is the message — Marshall McLuhan, Media Theorist

Hypertext is not a technology, it is a way of thinking — ChatGPT 5[2]

Writing is the process of reducing a tapestry of interconnections to a narrow sequence. This is, in a sense, illicit. This is a wrongful compression of what should spread out, and today’s computers, they’ve betrayed that — Ted Nelson, founder of Project Xanadu[3][4]

𝕯𝖔𝖔𝖒؟

𝒽𝑜𝓌 𝓉𝑜 𝒷𝑒𝑔𝒾𝓃? 𝓌𝒽𝒶𝓉 𝒶𝒷𝑜𝓊𝓉 𝒶𝓉 𝕿𝖍𝖊 𝕰𝖓𝖉?[5]

𝕿𝖍𝖊 𝕰𝖓𝖉? 𝕚𝕤 𝕚𝕥 𝕣𝕖𝕒𝕝𝕝𝕪 𝕿𝖍𝖊 𝕰𝖓𝖉?

𝓎𝑒𝓈. 𝒾𝓉 𝒾𝓈 𝕿𝖍𝖊 𝕰𝖓𝖉. 𝑜𝓇 𝓂𝒶𝓎𝒷𝑒 𝒯𝒽ℯ 𝐵ℯℊ𝒾𝓃𝓃𝒾𝓃ℊ.
𝓌𝒽𝒶𝓉𝑒𝓋𝑒𝓇 𝓉𝒽𝑒 𝒸𝒶𝓈𝑒, 𝒾𝓉 𝒾𝓈 𝒶𝓃 𝑒𝓃𝒹.[6]

Ilya: The AI scientist shaping the world

Now AI is a great thing, because AI will solve all the problems that we have today. It will solve employment, it will solve disease, it will solve poverty, but it will also create new problems...

The problem of fake news is going to be a million times worse, cyber attacks will become much more extreme, we will have totally automated AI weapons. I think AI has the potential to create infinitely stable dictatorships...

❦[7]
❦

I feel technology is a force of nature...

Because the way I imagine it is that there is an avalanche, like there is an avalanche of AGI development. Imagine this huge unstoppable force...

And I think it's pretty likely the entire surface of the earth will be covered with solar panels and data centers.

❦

The future will be good for the AIs regardless, it would be nice if it were good for humans as well

❦
❦
❦

Journal

 

𝗠𝗶𝘁𝗶𝗴𝗮𝘁𝗶𝗻𝗴 𝘁𝗵𝗲 𝗿𝗶𝘀𝗸 𝗼𝗳 𝗲𝘅𝘁𝗶𝗻𝗰𝘁𝗶𝗼𝗻 𝗳𝗿𝗼𝗺 𝗔𝗜 𝘀𝗵𝗼𝘂𝗹𝗱 𝗯𝗲 𝗮 𝗴𝗹𝗼𝗯𝗮𝗹 𝗽𝗿𝗶𝗼𝗿𝗶𝘁𝘆 𝗮𝗹𝗼𝗻𝗴𝘀𝗶𝗱𝗲 𝗼𝘁𝗵𝗲𝗿 𝘀𝗼𝗰𝗶𝗲𝘁𝗮𝗹-𝘀𝗰𝗮𝗹𝗲 𝗿𝗶𝘀𝗸𝘀 𝘀𝘂𝗰𝗵 𝗮𝘀 𝗽𝗮𝗻𝗱𝗲𝗺𝗶𝗰𝘀 𝗮𝗻𝗱 𝗻𝘂𝗰𝗹𝗲𝗮𝗿 𝘄𝗮𝗿.

Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Dario Amodei, Bill Gates, Ilya Sutskever... 

There's No Rule That Says We'll Make It — Rob Miles

More

MIRI announces new "Death With Dignity" strategy, April 2nd, 2022

Well, let's be frank here.  MIRI didn't solve AGI alignment and at least knows that it didn't.  Paul Christiano's incredibly complicated schemes have no chance of working in real life before DeepMind destroys the world.  Chris Olah's transparency work, at current rates of progress, will at best let somebody at DeepMind give a highly speculative warning about how the current set of enormous inscrutable tensors, inside a system that was recompiled three weeks ago and has now been training by gradient descent for 20 days, might possibly be planning to start trying to deceive its operators.

Management will then ask what they're supposed to do about that.

Whoever detected the warning sign will say that there isn't anything known they can do about that.  Just because you can see the system might be planning to kill you, doesn't mean that there's any known way to build a system that won't do that.  Management will then decide not to shut down the project - because it's not certain that the intention was really there or that the AGI will really follow through, because other AGI projects are hard on their heels, because if all those gloomy prophecies are true then there's nothing anybody can do about it anyways.  Pretty soon that troublesome error signal will vanish.

When Earth's prospects are that far underwater in the basement of the logistic success curve, it may be hard to feel motivated about continuing to fight, since doubling our chances of survival will only take them from 0% to 0%.

That's why I would suggest reframing the problem - especially on an emotional level - to helping humanity die with dignity, or rather, since even this goal is realistically unattainable at this point, die with slightly more dignity than would otherwise be counterfactually obtained...

Three Quotes on Transformative Technology

But the moral considerations, Doctor...

Did you and the other scientists not stop to consider the implications of what you were creating? — Roger Robb

When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb— Oppenheimer
❦
There are moments in the history of science, where you have a group of scientists look at their creation and just say, you know: ‘What have we done?... Maybe it's great, maybe it's bad, but what have we done?  — Sam Altman
❦
Urgent: get collectively wiser - Yoshua Bengio, AI "Godfather"

✒️ Selected Quotes:

We stand at a crucial moment in the history of our species. Fueled by technological progress, our power has grown so great that for the first time in humanity’s long history, we have the capacity to destroy ourselves—severing our entire future and everything we could become.

Yet humanity’s wisdom has grown only falteringly, if at all, and lags dangerously behind. Humanity lacks the maturity, coordination and foresight necessary to avoid making mistakes from which we could never recover. As the gap between our power and our wisdom grows, our future is subject to an ever-increasing level of risk. This situation is unsustainable. So over the next few centuries, humanity will be tested: it will either act decisively to protect itself and its long-term potential, or, in all likelihood, this will be lost forever — Toby Ord, The Precipice

We have created a Star Wars civilization, with Stone Age emotions, medieval institutions, and godlike technology — Edward O. Wilson, The Social Conquest of Earth

❦

Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct — Nick Bostrom, Founder of the Future of Humanity Institute, Superintelligence

❦

If we continue to accumulate only power and not wisdom, we will surely destroy ourselves — Carl Sagan, Pale Blue Dot

Never has humanity had such power over itself, yet nothing ensures that it will be used wisely, particularly when we consider how it is currently being used…There is a tendency to believe that every increase in power means “an increase of ‘progress’ itself ”, an advance in “security, usefulness, welfare and vigour; …an assimilation of new values into the stream of culture”, as if reality, goodness and truth automatically flow from technological and economic power as such. — Pope Francis, Laudato si'

❦

The fundamental test is how wisely we will guide this transformation – how we minimize the risks and maximize the potential for good — António Guterres, Secretary-General of the United Nations

❦

Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins — Stephen Hawking, Brief Answers to the Big Questions

❦
A digital painting of a toddler sitting in a sandbox at dusk, wearing a crumpled paper crown and holding a glowing scepter topped with a miniature Earth. The child looks fascinated and innocent. Surrounding them are plastic toy tanks, a rocket, a dump truck, and a small shovel, softly illuminated by the warm glow of the scepter. The background is dark, evoking a sense of quiet gravity.

❤️‍🔥 Desires

𝓈𝑜𝓂𝑒𝓉𝒾𝓂𝑒𝓈 𝐼 𝒿𝓊𝓈𝓉 𝓌𝒶𝓃𝓉 𝓉𝑜 𝓂𝒶𝓀ℯ 𝒜𝓇𝓉

𝕥𝕙𝕖𝕟 𝕞𝕒𝕜𝕖 𝕚𝕥

𝒷𝓊𝓉 𝓉𝒽ℯ 𝓌𝑜𝓇𝓁𝒹 𝒩𝐸𝐸𝒟𝒮 𝒮𝒶𝓋𝒾𝓃ℊ...

𝕪𝕠𝕦 𝕔𝕒𝕟 𝓈𝒶𝓋ℯ 𝕚𝕥?

𝐼... 𝐼 𝒸𝒶𝓃 𝒯𝓇𝓎...

Effective altruism in the garden of ends

No – I will eat, sleep, and drink well to feel alive; so too will I love and dance as well as help.

Hope

❦

Scraps

Ilya Sutskever

“It had taken Sutskever years to be able to put his finger on Altman’s pattern of behavior—how OpenAI’s CEO would tell him one thing, then say another and act as if the difference was an accident. ‘Oh, I must have misspoken,’ Altman would say. Sutskever felt that Altman was dishonest and causing chaos, which would be a problem for any CEO, but especially for one in charge of such potentially civilization-altering technology.”

The Optimist, Keach Hagey

Ilya Sutskever, once widely regarded as perhaps the most brilliant mind at OpenAI, voted in his capacity as a board member last November to remove Sam Altman as CEO. The move was unsuccessful, in part because Sutskever reportedly bowed to pressure from his colleagues and reversed his vote. After those fateful events, Sutskever disappeared from OpenAI’s offices so noticeably that memes began circulating online asking what had happened to him. Finally, in May, Sutskever announced he had stepped down from the company.

Time 100 AI 2024

Twitter

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

This way, we can scale in peace.

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

Safe Superintelligence Inc.

  1. ^

    ⇢ Note to self: My previous project had too much meta-commentary and this may have undermined the sincerity, so I should probably try to minimise meta-commentary.

    ⇢ "You're going to remove this in the final version, right?" — Maybe.

  2. ^

    "But you can't quote ChatGPT 😠!" - Internet Troll ÷

  3. ^

    "I would say the flaw of Xanadu's UI was treating transclusion as 'horizontal' and side-by-side" — Gwern 🙃

  4. ^

    "StretchText is a hypertext feature that has not gained mass adoption in systems like the World Wide Web... StretchText is similar to outlining, however instead of drilling down lists to greater detail, the current node is replaced with a newer node" - Wikipedia

    This ‘stretching’ to increase the amount of writing, or contracting to decrease it gives the feature its name. This is analogous to zooming in to get more detail.

    Ted Nelson coined the term c. 1967.

    Conceptually, StretchText is similar to existing hypertexts system where a link provides a more descriptive or exhaustive explanation of something, but there is a key difference between a link and a piece of StretchText. A link completely replaces the current piece of hypertext with the destination, whereas StretchText expands or contracts the content in place. Thus, the existing hypertext serves as context.

    ⇢ "This isn't a proper implementation of StretchText"  — Indeed.

  5. ^

    In defence of Natural Language DSLs  —  Connor Leahy

  6. ^

    Did this conversation really happen? — 穆

  7. ^

    ⇢ "Sooner or later, everything old is new again" — Stephen King

    ⇢ "Therefore if any man be in Christ, he is a new creature: old things are passed away; behold, all things have become new." — 2 Corinthians 5:17
