Humanity today faces many high-stakes and even existential challenges; many of the largest are generated or exacerbated by AI. Meanwhile, humans individually and humanity collectively appear distressingly underequipped.
Lots of folks around here naturally recognise that this implies a general strategy: make humans individually — and humanity collectively — better able to solve problems. Very good! (Complementary strategies look like: make progress directly, raise awareness of the challenges, recruit problem solvers, …)
One popular approach is to ‘raise the sanity waterline’ in the most old-school and traditional way: have a community of best practice, exemplify and proselytise, make people wiser one by one and society wiser by virtue of that. There’ve been some successes, not least the existence of this and similar forums and some of their membership.
Another popular approach is to imagine augmenting ourselves in the most futuristic and radical ways: genetic engineering, selective breeding, brain-augmenting implants, brain emulation. Go for it, I suppose (mindful of the potential backfires and hazards). But these probably won’t pan out on what look like the necessary timelines.
There is a middle ground! Use tech to uplift ourselves, yes — but don’t wait for medical marvels and wholesale self-reauthorship. Just use the building blocks we have, anticipate the pieces we might have soon, and address our individual and collective shortcomings one low-hanging fruit at a time.[1]
The most exciting part is that we’ve got some nifty new building blocks to play with: big data, big compute, ML, and (most novel of all) foundation models and limited agentic AI.
How to generate useful ideas in human reasoning
One place people fall down here is getting locked into asking: ‘OK, what can I usefully ask this AI to do?’ Sometimes this is helpful. But usually it misses the majority of the design space: agentic form factors are only a very narrow slice of what we can do with technology, and for many purposes they’re not even especially desirable.
Think about human reasoning. ‘Human’ as in individuals, groups, teams, society, humanity at large. ‘Reasoning’ as in the full decision-making cycle, from sensing and understanding through to planning and acting, including acting together.
I like to first ask: ‘What human reasoning activities are in bad shape?’
For a given important (type of) decision, what are people observing?
How are they orienting and deciding?
What actions do they have available and do they know how to do them well?
What about the case of teams, groups, institutions: how do their OODA (observe-orient-decide-act) loops work, and how do they fail? (A toy sketch follows below.)
Also think about development: how do individuals learn and grow? What about groups and communities, how do they form, grow, connect?
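To make the OODA framing concrete, here is a minimal Python sketch of one decision cycle whose stages can each independently degrade. Everything in it (the names, the stage decomposition, the toy data) is my own illustrative scaffolding, not any standard model or library.

    # Illustrative only: a toy decomposition of one OODA pass for a person or team.
    def ooda_step(world, observe, orient, decide, act):
        observations = observe(world)   # sensing: what do we even notice?
        model = orient(observations)    # interpretation: which frame digests it?
        plan = decide(model)            # commitment: what do we choose?
        return act(world, plan)         # execution: does action match intent?

    # Toy run: a 'team' whose observe stage only ever senses one feature.
    result = ooda_step(
        {"price": 90, "sentiment": "souring"},
        observe=lambda w: {"price": w["price"]},       # sentiment never sensed
        orient=lambda obs: "cheap" if obs["price"] < 100 else "dear",
        decide=lambda m: "buy" if m == "cheap" else "hold",
        act=lambda w, plan: plan,
    )
    print(result)  # prints 'buy': a failure born at observe, not at decide

The point of factoring it this way is diagnostic: when a team reliably makes bad calls, asking which stage is the bottleneck is more actionable than asking ‘what should we have done?’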
In foresight:
What features are we even paying attention to in the first place?
What prospects are under consideration?
What affordances are we aware of?
How are we strategically creating sensing opportunities and the means to adapt plans?
How do our forecasts achieve precision and calibration?
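Precision and calibration are, usefully, measurable rather than a matter of vibes. A minimal sketch using the standard Brier score plus a crude bucketed calibration check; the forecasts and outcomes below are made-up toy data.

    # Toy data: probabilistic forecasts paired with binary outcomes (1 = happened).
    forecasts = [0.9, 0.8, 0.7, 0.3, 0.2, 0.6, 0.9, 0.1]
    outcomes  = [1,   1,   0,   0,   0,   1,   1,   0]

    # Brier score: mean squared error of the stated probabilities (lower is better).
    brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
    print(f"Brier score: {brier:.3f}")

    # Crude calibration check: within a bucket, observed frequency should
    # roughly match the stated probability.
    high = [o for p, o in zip(forecasts, outcomes) if p >= 0.7]
    print(f"'70%+' claims came true {sum(high)}/{len(high)} times")

Tooling that merely keeps such a ledger, and confronts a person or team with it, is arguably already an uplift over how most institutional forecasting works.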
In epistemics, think about the message-passing nature of most human knowledge processes.
How do we assess the nodes (communicators)?
How do we assess, digest, and compile the messages (claims, evidence, proposals, …)?
How do we understand and manage the structure of the network itself (communication relationships, broadcasts and other topologies, …)?
What about the traffic (message rates, density distribution, repeated and relayed transmissions, …)?
What messages ought to be routed where, when and on behalf of whom?
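One way to make the message-passing framing concrete: a toy model in which communicators are nodes carrying trust weights and each claim records its relay path. All names and numbers here are invented for illustration, and the naive independence assumption is exactly the kind of thing real tooling would need to do better.

    # Toy epistemics-as-message-passing: nodes are communicators, messages are
    # claims, and each relay hop discounts our confidence. All values invented.
    trust = {"alice": 0.9, "bob": 0.6, "newswire": 0.7}

    claims = [
        {"text": "plant closure likely", "path": ["newswire", "bob"]},
        {"text": "plant closure likely", "path": ["alice"]},
    ]

    def path_weight(path):
        # Naive: confidence decays multiplicatively along the relay chain.
        w = 1.0
        for node in path:
            w *= trust[node]
        return w

    for c in claims:
        print(c["text"], "via", " -> ".join(c["path"]), f"(weight {path_weight(c['path']):.2f})")

Even this toy separates the levers named above: node assessment (the trust table), message digestion (the duplicated claim wants merging, not double-counting), and topology (who relays to whom) are each things software could track explicitly.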
In coordination, what are the conditions for success?
We need to find or recognise potential counterparties.
We might need the charters, norms, or institutions to condition and frame interaction productively — ones which don’t fail or fall to corruption or capture.
We need to surface enough mutually-compatible intent or outcome preference.
Our ensembled group wisdom might be a necessary source of insight or agility.
We need to survive the tug of war of negotiation (which can dissolve into antagonism, even when there’s common knowledge of win-win possibilities).
Means of verification and enforcement may be needed to access good options.
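That last condition can be put as arithmetic: a deal that isn’t incentive-compatible on its own can become so once defection is verifiable and punishable. A toy sketch, with invented payoffs:

    # Toy payoffs for one party to an agreement. All numbers invented.
    cooperate_payoff = 5
    defect_payoff    = 8     # short-term gain from breaking the deal
    penalty          = 4     # collectable only if defection is verifiable
    detection_prob   = 0.9   # quality of the verification mechanism

    expected_defect = defect_payoff - detection_prob * penalty
    print("cooperate:", cooperate_payoff)                  # 5
    print("defect under verification:", expected_defect)   # 8 - 0.9*4 = 4.4

With verification and enforcement in place, cooperating (5) beats defecting (4.4): the mechanism doesn’t just punish bad behaviour, it changes which agreements are stable enough to reach at all.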
Think of a particular audience with either the scale or the special influence to make a difference (this can include ‘the general public’), and the deficits they have in these reasoning activities. Now ask: ‘What kinds of software[2] might help and encourage people to do those better?’.
Is there an edge to be gained by unlocking big (or even medium) data (which can often be more living and queryable than ever before thanks to LMs)?
Can large amounts of clerical labour (again LMs) per capita make something newly feasible?
Can big compute and simulation (including multi-persona simulation: LMs again!) drive better understanding of an important dynamic? (A sketch follows this list.)
Can extensive background exploration, search, or ‘brainstorming’ by AI surface important opportunities or considerations?
Can always-on sensing and monitoring, flexibly sensitive to semantics, bring attention where it’s needed faster than before (or at all)?
Could facilitation and translation bring forth, and synergise, the best array of human capabilities in a given context?
Could software’s repeatability, auditability, and privacy (in principle), combined with the context and semantic sensitivity of AI, unlock new frontiers of trustable human scaffolding?
…
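As one concrete sketch of the multi-persona simulation idea above: the complete function below is a stand-in for whatever LM call you actually have, so treat this as a hypothetical scaffold rather than any particular vendor’s API.

    # Hypothetical scaffold: fan one proposal out to several LM-simulated
    # personas and collect their reactions. 'complete' is a placeholder.
    def complete(prompt: str) -> str:
        raise NotImplementedError("wire up your LM client here")

    personas = [
        "a sceptical regulator",
        "a startup founder racing to ship",
        "a civil-society watchdog",
    ]

    def simulate_reactions(proposal: str) -> dict:
        # One call per persona: cheap parallel perspectives on the same text.
        return {
            p: complete(f"You are {p}. In three sentences, react to:\n{proposal}")
            for p in personas
        }

The value is less any single answer than the spread: where the personas disagree, the proposal is probably resting on a load-bearing assumption worth examining.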
Finding flaws and avoiding backfire
Think seriously about backfire: we don’t want to differentially enable bad human actors or rogue AI to reason and coordinate! As Richard Rumelt, author of Good Strategy/Bad Strategy, observes:
‘The idea that coordination, by itself, can be a source of advantage is a very deep principle.’
Coordination’s dark side is collusion, including cartels, oligarchy, and concentration of power, which in imaginable extreme cases could cut out most or even all humans.
Similarly, epistemic advantage (in foresight and strategy, say) can be parlayed into resource or influence advantage. If those can be converted in turn into greater epistemic advantage (by employing compute for epistemic attacks or in further private epistemic advancement) without commensurate counterweights or defences, this could be quite problematic.
Working out how to think about these backfire principles in general, and weighing the considerations around particular candidate projects, are among the things I think this forum could be especially good at. Part of it is about choosing distribution strategies which reduce misuse surface area (or provide antidotes), and part is about preferring tech which asymmetrically supports (and perhaps encourages) ‘good’ use and behaviour.
Do it
FLF’s fellows, I, and others have been doing some of this exploration recently. Stay tuned for more. Meanwhile, join in! We’re early in a critical period where much is up for grabs, and what we build now might help shape and inform the choices humanity makes about its future (or whether it makes much of a choice at all). Try things, see what kinds of tools earn the attention and adoption that matters, and share what you learn. Consider principles to apply, especially for minimising backfire risks, and share particular considerations for or against certain kinds of tech and audience targets.
Thanks to Owen Cotton-Barratt and Ben Goldhaber for helpful comments, and to Lizka Vaintrob for recent relevant conversations.
A close relative of this strategy is cyborgism. I might contrast what I’m centrally describing as being more outward-looking, asking how we can uplift the most important sensemaking and wisdom apparatus of humanity in general, whereas cyborgism maybe looks centrally more like a bet on becoming the uplifted paragons (optionally thence, and thereby, saving the world). I’d say these are complementary on the whole.
This is better than asking ‘What kinds of AI…’. Software is the general, capability-unlocking and -enhancing artefact. AI components and form factors are novel, powerful, sometimes indispensable building blocks in our inventory from which to compose software capabilities.