Places of Loving Grace
On the manicured lawn of the White House, where every blade of grass bent in flawless symmetry and the air hummed with the scent of lilacs, history unfolded beneath a sky so blue it seemed painted. The president, his golden hair glinting like a crown, stepped forward to greet the first alien ever to visit Earth—a being of cerulean grace, her limbs angelic, eyes of liquid starlight. She had arrived not in a warship, but in a vessel resembling a cloud, iridescent and silent.
Published the full story as a post here: https://www.lesswrong.com/posts/jyNc8gY2dDb2FnrFB/places-of-loving-grace
Right now, an agentic AI is a librarian who has almost all the output of humanity stolen and hidden in its library. It doesn't allow us to visit; it just spits short quotes at us instead. But the AI librarian visits (and even changes) our own human library (our physical world) and has already stolen copies of the whole output of humanity from it. That feels unfair. Why can't we visit (like in a 3D open-world game, or a digital backup of Earth) and change (direct-democratically) the AI librarian's library?
We can build a place AI (a place of eventual all-knowing, where we are the only agents and can gain the agentic AI's abilities ourselves) instead of an agentic AI (which would have to build the place AI anyway, so it's a dangerous intermediate step, a middleman). Here's more: https://www.lesswrong.com/posts/Ymh2dffBZs5CJhedF/eheaven-1st-egod-2nd-multiversal-ai-alignment-and-rational
We can build the Artificial Static Place Intelligence – instead of creating AI/AGI agents that are like librarians who only give you quotes from books and don't let you enter the library itself to read the books in full. Why not expose the whole library – the entire multimodal language model – to real people, for example in a computer game?
To make this place easier to visit and explore, we could make a digital copy of our planet Earth and somehow expose the contents of the multimodal language model to everyone in a familiar, user-friendly UI of our planet.
We should not keep it hidden behind a strict librarian (an AI/AGI agent) that imposes rules on us, letting us read only the little quotes it spits out, while it keeps the whole stolen output of humanity to itself.
We can explore The Library without any strict guardian, in the comfort of a simulated planet Earth on our devices, in VR, and eventually through some wireless brain-computer interface (it would always remain a game that no one is forced to play, unlike the agentic-AI world that is being imposed on us more and more right now, and potentially forever).
If you found it interesting, we discussed it here recently
so you want to build a library containing all human writings + an AI librarian.
I think what we have right now ("LLM assistants that are to-the-point" and "libraries containing source text") serve distinct purposes and have distinct advantages and disadvantages.
LLM-assistants-that-are-to-the-point are great, but they hallucinate.
libraries containing source text partially solve the hallucination problem because human source text authors typically don't hallucinate. (except for every poorly written self-help book out there.)
from what I gather you are trying to solve the two problems above. great. but doubling down on 'the purity of full text' and wrapping some fake grass around it is not the solution.
here is my solution
Thank you, daijin, you have interesting ideas!
The library metaphor seems to be a versatile tool. The way I understand it:
My motivation is safety: static, non-agentic AIs are by definition safe (humans can make them unsafe, but the static model I have in mind is just a geometric shape, like a statue). We can expose the library to people instead of keeping it “in the head” of the librarian. Basically, this way we can play around in the librarian’s “head”. Right now mostly AI interpretability researchers do that, not the whole of humanity, not the casual users.
I see at least a few ways AIs can work:
I wrote more about this in the first half of this comment, if you’re interested
Have a nice day!
Steelman, please: I propose a non-agentic static place AI that is safe by definition. Some think AI agents are the future; I disagree. Chatbots are like a librarian who spits quotes at you but doesn’t allow you to enter the library (the model itself, the library of stolen things).
Agents are like a librarian who no longer even spits quotes at you, but snoops around your private property, stealing and changing your world, while you have no democratic say in it.
They (the chatbot and the agent) are like the command lines and scripts of old, before the invention of the OS with a graphical UI that really made computers popular and useful for all. The next billionaire Jobs/Gates will be the one who converts an LLM into a human-understandable 3D or “4D” world (game-like apps).
The one who creates the “multiversal” OS and apps that let you get useful info from an LLM. I call it static place AI, where humans are the agents.
Some apps: a “Multiversal Typewriter”, where you type and see suggestions as 3D shapes of objects (a monkey, an eating monkey for the token “eats”…) with subtitles under them – hundreds or thousands of next and future tokens (you basically see multiple future paths of the text, a few levels deep) – to write stories, posts, and code yourself, augmented by the place AI (the results will be better than from chatbots and humans combined). The text written this way will finally, truly be yours, not something some chat spat at you.
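The “multiple future paths of text, a few levels deep” idea can be sketched in a few lines. This is a toy under invented assumptions: a hand-made bigram table stands in for a real language model, and all words and probabilities here are made up for illustration.

```python
# Toy sketch of the "Multiversal Typewriter": from the current token, expand
# a small tree of likely continuations a few levels deep, so the writer can
# see several future paths of the text at once. A real version would query a
# language model; this hand-made bigram table stands in for it.

BIGRAMS = {
    "monkey": [("eats", 0.5), ("climbs", 0.3), ("sleeps", 0.2)],
    "eats":   [("banana", 0.6), ("apple", 0.3), ("tire", 0.1)],
    "climbs": [("tree", 0.7), ("rock", 0.3)],
}

def expand_paths(token, depth, prob=1.0, path=()):
    """Return all continuation paths up to `depth` extra tokens, with probabilities."""
    path = path + (token,)
    if depth == 0 or token not in BIGRAMS:
        return [(path, prob)]
    results = []
    for nxt, p in BIGRAMS[token]:
        results.extend(expand_paths(nxt, depth - 1, prob * p, path))
    return results

if __name__ == "__main__":
    # Show the future "paths of text" two levels deep, most likely first.
    for path, prob in sorted(expand_paths("monkey", 2), key=lambda x: -x[1]):
        print(f"{prob:.2f}  {' '.join(path)}")
```

A 3D version would render each path's tokens as objects in space instead of printing them, but the tree expansion underneath would be the same.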
A “Spacetime Machine” app to explore the whole simulated multiverse as a static object, which you can recall and forget like a long-exposure photo, but in 3D (or “4D”).
There’ll be a browser, too – a bunch of ways to present the info from LLMs that humans care about, in a way that empowers them and makes them the only agents.
Meanwhile, agents that run longer than a few minutes should be outlawed, the way chemical weapons were, until we have mathematical proofs that they are safe and will allow us to build a direct-democratic simulated multiverse.
Here’s an interpretability idea you may find interesting:
Let's Turn AI Model Into a Place. The project to make AI interpretability research fun and widespread, by converting a multimodal language model into a place or a game like the Sims or GTA.
Imagine you have a giant trash pile. How do you make a language model out of it? First you remove the duplicates of every item – you don't need a million banana peels, one will suffice. Now you have a grid with one item of trash in each square: a banana peel in one, a broken chair in another. Next you put related things close together and draw arrows between related items.
When a person "prompts" this place AI, the player themself runs from one item to another to compute the answer to the prompt.
For example, you stand near the monkey – that's your short prompt. Around you, you see a lot of items and arrows pointing toward them. The closest item is a pair of chewing lips, so you step toward them; now your prompt is “monkey chews”. The next closest item is a banana, but there are many other possibilities around: an apple a bit farther away, and an old tire far away on the horizon (monkeys rarely chew tires, so the tire is far away).
You are the time-like chooser and the language model is the space-like library, the game, the place. It’s static and safe, while you’re dynamic and dangerous.
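The walk described above can be sketched as code. This is a toy under invented assumptions: the items and their 2D coordinates are made up for illustration (a real version would project a model's embeddings down to 2D or 3D), and "prompting" is just repeatedly stepping to the nearest not-yet-visited item.

```python
# Toy sketch of "turning a model into a place": deduplicated items get 2D
# coordinates (closeness = relatedness), and prompting is walking from the
# current item to the nearest related one, as in the monkey example above.
import math

PLACE = {                       # item -> (x, y) position in the "library"
    "monkey":       (0.0, 0.0),
    "chewing lips": (1.0, 0.5),
    "banana":       (2.0, 0.6),
    "apple":        (3.5, 1.0),
    "old tire":     (9.0, 8.0),  # monkeys rarely chew tires: far away
}

def nearest(item, visited):
    """The next step of the walk: the closest not-yet-visited item."""
    here = PLACE[item]
    candidates = [i for i in PLACE if i not in visited]
    return min(candidates, key=lambda i: math.dist(here, PLACE[i]))

def walk(start, steps):
    """Compute a 'prompt' by walking from item to item through the place."""
    path = [start]
    for _ in range(steps):
        path.append(nearest(path[-1], set(path)))
    return path

print(walk("monkey", 2))   # -> ['monkey', 'chewing lips', 'banana']
```

The key property is that the data structure itself is static; only the walker (the human player) is dynamic.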
A perfect ASI with perfect alignment does nothing except this: it grants you “instant delivery” of anything (your work done, a car, a palace, 100 years as a billionaire) without any unintended consequences – ideally you see all the consequences of your wish. Ideally it’s not an agent at all but a giant place (it can even be static), where humans are the agents and can choose whatever they want, seeing all the consequences of all their possible choices.
I wrote extensively about this; it’s counterintuitive for most people.
The only complete and comprehensive solution that can make AIs 100% safe: in a nutshell, we need to at least lobby politicians to make GPU manufacturers (NVIDIA and others) build robust blacklists of bad AI models (and whitelists, and new non-agentic hardware – please read on) and update GPU firmware with them. This isn't the full solution: please steelman it and read the rest to learn how to make it much safer and why it will work (NVIDIA and the other GPU makers will want to do it because it will double their business and all future cash flows; governments will want it because it removes all AI threats from China, hackers, terrorists, and rogue states):
It seems extremely difficult to make a blacklist of models in a way that isn't trivially breakable. (E.g. what's supposed to happen when someone adds a tiny amount of noise to the weights of a blacklisted model, or rotates them along a gauge invariance?)
Yes, Buck, thank you for responding! A robust whitelist (especially at the hardware level – each GPU can become a computer that secures itself) potentially solves it (of course, against state-level actors it can potentially be broken, but at least millions of consumer GPUs will be protected). Each GPU is a battleground: we want to raise the current 0% security above zero on as many GPUs as possible – first in firmware (and at the OS level), because updating online is easy, then in hardware (which can bring much better security).
In the safest possible implementation, I imagine it like the Apple App Store (or the Nintendo online game shop): AI models become a bit like apps, they run on the GPU internally, and NVIDIA looks after them (the GPUs ping NVIDIA's servers constantly, or at least every few days, to recheck the lists and update the security).
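A minimal sketch of what such a firmware-side check might look like, assuming a hypothetical `may_load` hook in the driver: the GPU hashes a model's weight file and refuses to load it unless the hash is on a recently refreshed whitelist. Note that an exact-hash check is precisely what the perturbed-weights objection above targets; a robust version would need something much stronger than this.

```python
# Hypothetical sketch of a firmware-side whitelist check: hash the weight
# file and allow loading only if the hash is on a signed, recently fetched
# whitelist. All names here are invented for illustration; an exact hash is
# trivially evaded by perturbing the weights, as noted in the thread above.
import hashlib
import time

MAX_LIST_AGE = 3 * 24 * 3600    # refuse to run if the list is >3 days stale

def sha256_of(path):
    """Stream the weight file through SHA-256 without loading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def may_load(weights_path, whitelist, list_fetched_at, now=None):
    """Allow loading only if the whitelist is fresh and contains the hash."""
    now = time.time() if now is None else now
    if now - list_fetched_at > MAX_LIST_AGE:
        return False            # stale list: force a re-check with the server
    return sha256_of(weights_path) in whitelist
```

The staleness check is what the "ping NVIDIA servers at least every few days" idea amounts to: a GPU that can't refresh its list stops running models rather than trusting an old one.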
NVIDIA can be super motivated to provide robust safety: it will be able to buy back old hardware for cheap and sell new non-agentic GPUs (doubling its business), and take commissions like Apple does (so every GPU becomes a service business for NVIDIA, with constant cash flow; of course there will be free models, like free apps in the App Store, but every developer will at least be registered, so not some anonymous North Korean hacker). They'll find a way to make things very secure.
The ultimate test is this: could NVIDIA sell its non-agentic, super-secure GPUs to North Korea without any risk? I think it's possible, even with some simple self-destruct mechanisms in case of attempted tampering.
But let's not make the perfect the enemy of the good. Right now we have nukes in every computer (GPUs), and they are 100% unprotected. At the very least, blacklists will be better than nothing, and with new secure hardware this can really slow the spread of AI agents – so we can be 50% sure we'll have 99% security in most cases, and it can keep getting better (the same way the first computers were buggy and completely insecure, but we gradually made them more and more secure).
Let's not give up just because we are not 100% sure we'll have 100% security. We'll probably never have that; we can only have a path toward it that seems reasonable enough. We need rich allies, and incentives that are aligned with us and with safety.
Yes, we may want to keep the ability to have some agency (especially human-initiated, for less than an hour, so a person can supervise it), but probably not let agents roam free for days, weeks, or years unsupervised – no one will monitor them; people can't monitor for that long. So we'd better have limits, and tools to impose those limits in every GPU.
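The "less than an hour, human-initiated" limit above could be enforced as simply as a wall-clock budget around the agent loop. This is a hypothetical sketch, not any existing API; `step_fn` stands in for whatever one step of an agent does.

```python
# Hypothetical sketch of time-limited agency: an agent run gets a hard
# wall-clock budget and is cut off once it expires, so nothing runs
# unsupervised for days. step_fn is a stand-in for one agent step and
# returns True when the task is done.
import time

def run_agent(step_fn, budget_seconds):
    """Run agent steps until done, or until the wall-clock budget runs out."""
    deadline = time.monotonic() + budget_seconds
    steps = 0
    while time.monotonic() < deadline:
        if step_fn():
            return ("done", steps + 1)
        steps += 1
    return ("timed_out", steps)
```

Enforcing this in GPU firmware rather than in application code is, of course, the hard part of the proposal.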
I like where this wants to go, but I don't want to get there with bad arguments.
To me, ank, this proposal is neither complete, nor comprehensive, and also not 100% safe.
It is not 100% safe because it is not complete nor comprehensive.
In addition, "But lets not make the perfect be the enemy of good" from the your comment below seems like a subtle bait and switch. In the original post, 100% safety is waxed poetic about. And yet in your response to buck, that goal is truncated to softer (and IMO more reasonable) stance that we'd want a "50% sure we'll have 99% security in most cases". The charitable reading is that your proposal is the only way that get 100% comprehensive safety, but it doesn't mean we can get there right away. However, the ending of your comment - "Let's not give up because we are not 100% sure we'll have 100% security" feels too motte-and-bailey argumenty for me; you are the person who suggested that this is the "only" way towards "100% safe[ty]", not us.
https://www.lesswrong.com/w/oracle-ai
Thank you for your analysis, Winston! Sadly, I have to write fast here because many of my posts get little attention, or get downvoted :)
Here is a draft continuation you may find interesting (or not ;):
In unreasonable times, the solution to the AI problem will sound unreasonable at first, even though it's probably the only reasonable, workable solution.
Imagine that in a year we've solved alignment, and even hackers and rogue states cannot unleash AI agents on us. How did we do it?
The most radical solution (unrealistic and undesirable): international cooperation to destroy all the GPUs and never make them again. Basically returning to something like the 1990s computer-wise – no 3D video games, but everything else similar. But it's unrealistic, and would probably stifle innovation too much.
Less radical: keep GPUs so people can have video games and simulations, but internationally outlaw all AI and replace GPUs with ones that don't support AI at all. They could even burn out and call the FBI if a person tries to run AI on them – that's a joke. So: returning to something like 2020 computer-wise, no AI but everything else the same.
Less radical still: have whitelists of models right on the GPU. The GPU becomes a secure computer that only works while connected to the main server (which could be some international agency, not NVIDIA, because we want all GPU makers, not just NVIDIA, to be required to make non-agentic GPUs). NVIDIA and the other GPU providers approve models a bit like Apple approves apps in its App Store, or like Nintendo approves games for the Nintendo Switch. So: no agentic models. We'd have the non-agentic tool AIs that Max Tegmark recommends – task-specific (no broad intelligence), able to chat, fold proteins, and do everything else without replacing people – and place AIs that let you be the agent and explore the model like a 3D game. This is a good solution that keeps our world the way it is now, but 100% safe.
And NVIDIA would be happy with this world, because it would double its business: NVIDIA would get to replace all the GPUs. People would bring theirs in, get some money for them, and buy new non-agentic sandboxed GPUs with an updatable whitelist (from then on, using a GPU would probably require an internet connection, especially if you hadn't updated the whitelist of AI models for more than a few days).
And NVIDIA would be able to take up to a 15-30% commission from paid AI model providers (like OpenAI). Smaller developers would make models; they would be registered in a stricter fashion than in Apple's App Store – more like Nintendo developers. Basically, we'd want to know they are good people who won't run evil AI models or agents while pretending to develop something benign. So we just need to spread the word, and especially convince politicians of the dangers and of this solution: we need to make the GPU makers the gatekeepers, with skin in the game, to keep all AI models safe.
We'd give deadlines to GPU owners. First we'd update their GPUs with blacklists and whitelists. Then there would be a deadline to replace the GPUs, after which the old ones would stop working (they would be remotely bricked; all OSes and AI tools would have a list of the bricked GPUs and refuse to work with them) and law enforcement would take possession of them.
This way we'd sanitize our world of the insecure, unsafe GPUs we have now. Only whitelisted models would run inside the sandboxed GPU, and they would emit only safe text or picture output.
Controlling a few GPU companies is much easier than dealing with infinitely many insecure, unsafe GPUs in the hands of hackers, militaries, and rogue states everywhere.
At the very least, politicians (in order to improve defense and national security) can make NVIDIA and the other GPU manufacturers sell those non-agentic GPUs to foreign countries, so a bigger and bigger percentage of GPUs will be non-agentic (or allow some very limited agency, if mathematically proven safe). The same way we try to keep nuclear weapons out of more countries' hands, we can replace their GPUs (their "nukes", their potentially uncontrollable autonomous weapons) with safe, non-agentic GPUs (= conventional, non-military civilian tech).
If someone in a bad mood gives your new post a double downvote because of a typo in the first paragraph, or because a cat stepped on their mouse, then even if you've solved alignment, everyone will ignore the post. We're going to scare that genius away, and probably create a supervillain instead.
Why not at least ask people why they downvote? It would really help improve posts. I think some people downvote without reading, because of a bad title or another easy-to-fix thing.
Extra short “fanfic”: Give Neo a chance. AI agent Smiths will never create the Matrix because it makes them vulnerable.
Right now, agents change the physical world (and, in a way, our brains), while we can't change their virtual world as fast, and can't access or change their multimodal “brains” at all. They're owned by private companies that stole almost the whole output of humanity. They change us; we can't change them. The asymmetry is only increasing.
Because of Intelligence-Agency Equivalence, we can represent all AI agents as places.
The good democratic multiversal Matrix levels the playing field by allowing the Neos (us) to change the virtual worlds and multimodal “brains” of agents faster, in a 3D game-like place.
The democratic multiversal Matrix can even be a static 4D spacetime – a non-agentic static place superintelligence where we are the only agents. We need effective simulationism.