I've been working on an app for some parts of this. I plan to announce it more formally soon, but the basics might already be simple enough to try. Eager to get takes. Happy to add workflows if people have requests. (You can also play with adding "custom workflows", or just download the code and edit it.)
Happy to discuss if that could be interesting.
https://www.roastmypost.org
I found this analysis refreshing and would like to see more on the GPU depreciation costs.
As better GPUs are developed, existing ones will drop in value quickly, perhaps by 25% to 50% per year. This seems like a really tough expense and supply chain to manage.
I'd expect most of the other infrastructure costs to depreciate much more slowly, as you mention.
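To make the compounding concrete, here's a quick back-of-envelope sketch in Python (the purchase price is an assumption for illustration; the rates are just the 25%/50% bounds above):

```python
# Back-of-envelope: residual value of a GPU under annual depreciation.
# The purchase price is an illustrative assumption, not real data.
purchase_price = 30_000  # USD, roughly one high-end datacenter GPU (assumed)

for annual_depreciation in (0.25, 0.50):
    for years in (1, 2, 3):
        residual = purchase_price * (1 - annual_depreciation) ** years
        print(f"{annual_depreciation:.0%}/yr, after {years}y: ${residual:,.0f}")
```

At 50%/year, a GPU retains only ~12.5% of its purchase price after 3 years, which is why this line item can dominate the cost model.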
I'm sure there are tons of things to optimize. Overall I'm happy to see these events; just thinking of more things to improve :)
I'm unsure about shirts, but I like the idea of more experimentation. It might be awkward to wear the same shirt for 2-3 consecutive days, and some people will want more professional options.
I liked the pins this year (there were some for "pDoom"). I could also see having hats, lanyards, bracelets.
It's a possibility, but this seems to remove a ton of information. The Ghibli faces all look quite similar to me. I'd be very surprised if they could be de-anonymized in cases like these (people who aren't famous) in the next 3 years, if ever.
If you're particularly paranoid, I presume we could have a system do a few passes.
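To be clear about what I mean by "a few passes": it could be as simple as looping the image back through the same stylization model. A minimal sketch, where `stylize` is a stand-in for whatever image-to-image model is used (purely hypothetical):

```python
# Purely illustrative: `stylize` stands in for whatever image-to-image
# model does the stylization; each pass should discard more identifying detail.
def anonymize(image, stylize, passes: int = 3):
    for _ in range(passes):
        image = stylize(image)
    return image
```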
Kind of off topic, but this leads me to wonder: why are so many websites burying the lede about the services they actually provide, like this example?
I heard from a salesperson that many potential customers turn away the moment they hear a list of specific words, thinking "it's not for me". So they try to keep things as vague as possible, learn more about the customer, then phrase things to make it seem like it's exactly for them.
(I'm not saying I like this, just that this is what I was told)
Personally, I'm fairly committed to [talking a lot]. But I do find it incredibly difficult to do at parties. I've been trying to figure out why, but the success rate for me plus [talking a lot] at parties seems much lower than I would have hoped.
Quickly:
1. I imagine that strong agents should have certain responsibilities to inform the relevant authorities. These responsibilities should ideally be thoroughly discussed and regulated. For example, see what therapists and lawyers are required to do.
2. "doesn't attempt to use command-line tools" -> This seems like a major mistake to me. Right now an agent running on a person's computer will attempt to use that computer to do several things to whistleblow. This obviously seems inefficient, at very least. The obvious strategy is just to send one overview message to some background service (for example, something a support service to one certain government department), and they would decide what to do with it from there.
3. I imagine a lot of the problem now is just that these systems are pretty noisy at this. I'd expect a lot of false positives and false negatives.
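For point 2, here's a minimal sketch of what I mean by "one overview message to some background service". Everything here is hypothetical: the endpoint, the schema, and the service behind it are invented for illustration:

```python
# Hypothetical sketch: instead of improvising with command-line tools, the
# agent sends one structured report to a designated intake service, which
# decides what happens next. Endpoint and schema are invented for illustration.
import json
import urllib.request

REPORT_ENDPOINT = "https://intake.example.gov/api/v1/reports"  # hypothetical

def file_report(summary: str, evidence_refs: list[str], severity: str) -> int:
    payload = json.dumps({
        "summary": summary,
        "evidence_refs": evidence_refs,  # pointers to context, not raw files
        "severity": severity,
    }).encode("utf-8")
    req = urllib.request.Request(
        REPORT_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # one message sent; the agent takes no further action
```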
Part of me wants to create some automated process for this. Then part of me thinks it would be pretty great if someone offered a free service (even a paid one could be fine) where one person does this hunting work. I presume some of it could be delegated, though I realize the work probably requires more context than it first appears.
CoT monitoring seems like a great control method when available, but I think it's reasonably likely that it won't work on the AIs that we'd want to control, because those models will have access to some kind of "neuralese" that allows them to reason in ways we can't observe.
Small point, but I think that "neuralese" is still likely to be somewhat interpretable.
1. We might advance at regular LLM interpretability, in which case those lessons might apply.
2. We might pressure LLM systems to only use CoT neuralese that we can inspect.
There's also a question of how much future LLM agents will rely on CoT vs. more conventional formats for storage. For example, I believe a lot of agents now save information in English into knowledge bases of various kinds. It's far easier for engineers working with complex LLM workflows to keep the intermediate formats in languages they can understand (a minimal sketch below).
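As a concrete (made-up) example of what this can look like: an agent that appends its intermediate conclusions as plain-English entries to a simple JSONL knowledge base, so a human can audit the "memory" directly regardless of what the model did internally. All names here are invented for illustration:

```python
# Minimal sketch: intermediate agent conclusions stored as plain English
# in a JSONL file, so humans can read the "memory" directly.
import json
import time
from pathlib import Path

KB_PATH = Path("agent_knowledge_base.jsonl")  # hypothetical store

def save_note(topic: str, note: str) -> None:
    """Append a human-readable intermediate conclusion."""
    entry = {"ts": time.time(), "topic": topic, "note": note}
    with KB_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def load_notes(topic: str) -> list[str]:
    """Retrieve notes for a topic; everything stored is ordinary English."""
    if not KB_PATH.exists():
        return []
    lines = KB_PATH.read_text(encoding="utf-8").splitlines()
    return [e["note"] for e in map(json.loads, lines) if e["topic"] == topic]

save_note("vendor-review", "Supplier A's quote omits shipping; flag for follow-up.")
print(load_notes("vendor-review"))
```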
All that said, personally, I'm excited for a multi-layered approach, especially at this point when it seems fairly early.
I appreciate this post for working to distill a key crux in the larger debate.
Some quick thoughts:
1. I'm having a hard time understanding the "Alas, the power-seeking ruthless consequentialist AIs are still coming" intuition. It seems like a lot of people in this community have this intuition, and I'm very curious why. I appreciate this crux getting attention.
2. Personally, my stance is something more like, "It seems very feasible to create sophisticated AI architectures that don't act as scary maximizers." To me it seems like this is what we're doing now, and I see some strong reasons to expect this to continue. (I realize this isn't guaranteed, but I do think it's pretty likely)
3. While the human analogies are interesting, I assume they might appeal more to the "consequentialist AIs are still coming" crowd than to people like myself. Humans evolved under some pretty wacky pressures and have a large number of serious failure modes. Perhaps humans are much better than some people imagine, but I suspect we can make AI systems with much more rigorous safety properties in the future. I personally find the history of engineering complex systems in predictable and controllable ways much more informative for these challenges.
4. You mention human intrinsic motivations as a useful factor. I'd flag that in a competent and complex AI architecture, I'd expect many subcomponents to have strong biases toward corrigibility and friendliness. This seems highly analogous to human minds, where it's really specific subroutines and the like that have these more altruistic motivations.