I'm sure there are tons of things to optimize. Overall happy to see these events, just thinking of more things to improve :)
I'm unsure of shirts, but like the idea of more experimentation. It might be awkward to wear the same shirt for 2-3 consecutive days, and also some people will want more professional options.
I liked the pins this year (there were some for "pDoom"). I could also see having hats, lanyards, bracelets.
It's a possibility, but this seems to remove a ton of information to me. The Ghibli faces all look quite similar to me. I'd be very surprised if they could be de-anonymized in cases like these (people who aren't famous) in the next 3 years, if ever.
If you're particularly paranoid, I presume we could have a system do a few passes.
Kind of off topic, but this leads me to wonder: why do so many websites bury the lede about the services they actually provide, like this example?
I heard from a salesperson that many potential customers turn away the moment they hear a list of specific words, thinking "it's not for me". So they try to keep it as vague as possible, learn more about the customer, then phrase things to make it seem like it's exactly for them.
(I'm not saying I like this, just that this is what I was told)
Personally, I'm fairly committed to [talking a lot]. But I do find it incredibly difficult to do at parties. I've been trying to figure out why, but the success rate for me plus [talking a lot] at parties seems much lower than I would have hoped.
Quickly:
1. I imagine that strong agents should have certain responsibilities to inform certain authorities. These responsibilities should ideally be carefully discussed and regulated. For example, see what therapists and lawyers are required to do.
2. "doesn't attempt to use command-line tools" -> This seems like a major mistake to me. Right now, an agent running on a person's computer that wants to whistleblow will attempt to use that computer to do several things itself. That seems inefficient, at the very least. The obvious strategy is just to send one overview message to some background service (for example, a support service for a specific government department), and they would decide what to do with it from there (see the sketch after this list).
3. I imagine a lot of the problem now is just that these systems are pretty noisy at doing this. I'd expect a lot of false positives and negatives.
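To illustrate point 2, here's a minimal sketch of what "one overview message to a background service" could look like. The endpoint, payload fields, and receiving service are all hypothetical; nothing like this exists as far as I know.

```python
import json
import urllib.request

# Hypothetical intake service; this URL and the payload schema are made up for illustration.
REPORT_ENDPOINT = "https://example.gov/api/agent-reports"

def send_overview_report(summary: str, evidence_refs: list[str]) -> int:
    """Send a single high-level report instead of taking local command-line actions."""
    payload = json.dumps({
        "summary": summary,          # short description of the suspected wrongdoing
        "evidence": evidence_refs,   # pointers to documents, not the documents themselves
        "reporter": "llm-agent",     # flag that this came from an automated system
    }).encode("utf-8")
    request = urllib.request.Request(
        REPORT_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # the receiving service decides what to do from here
```

The point is just that the agent's job ends at handing over a summary; triage and any follow-up investigation would happen on the other side.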
Part of me wants to create some automated process for this. Then part of me thinks it would be pretty great if someone could offer a free service (or even a paid one) that has one person do this hunting work. I presume some of it can be delegated, though I realize the work probably requires more context than it might first seem.
> CoT monitoring seems like a great control method when available, but I think it's reasonably likely that it won't work on the AIs that we'd want to control, because those models will have access to some kind of "neuralese" that allows them to reason in ways we can't observe.
Small point, but I think that "neuralese" is likely to be somewhat interpretable, still.
1. We might make advances in regular LLM interpretability, in which case those lessons might apply.
2. We might pressure LLM systems to only use CoT neuralese that we can inspect.
There's also a question of how much future LLM agents will rely on CoT vs. more regular formats for storage. For example, I believe that a lot of agents now are saving information in English into knowledge bases of different kinds. It's far easier for software people working with complex LLM workflows to make sure a lot of the intermediate formats are in languages they can understand.
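As a toy illustration of that point, here's what a plain-English knowledge base might look like for an agent. The file format and function names are made up, not any specific framework's API; the property that matters is that everything written to storage stays human-readable and easy to scan.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Illustrative only: a simple append-only JSONL file standing in for an agent's knowledge base.
KB_PATH = Path("knowledge_base.jsonl")

def save_note(topic: str, note: str) -> None:
    """Append a human-readable note to the knowledge base."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "topic": topic,
        "note": note,  # stored as plain English, so it stays inspectable
    }
    with KB_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def scan_for_keywords(keywords: list[str]) -> list[dict]:
    """Crude monitor: return any stored notes that mention the given keywords."""
    if not KB_PATH.exists():
        return []
    hits = []
    for line in KB_PATH.read_text(encoding="utf-8").splitlines():
        entry = json.loads(line)
        if any(k.lower() in entry["note"].lower() for k in keywords):
            hits.append(entry)
    return hits
```

Because the intermediate state is plain text, a monitor (human or automated) can read or grep it directly, which is much harder if the agent's working memory lives in opaque activations.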
All that said, personally, I'm excited for a multi-layered approach, especially at this point when it seems fairly early.
There are a few questions here.
1. Do Jaime's writings state that he cares about x-risk or not?
-> I think he fairly clearly states that he cares.
2. Does all the evidence, when put together, imply that actually, Jaime doesn't care about x-risk?
-> This is a much more speculative question. We have to assess how honest he is in his writing. I'd bet money that Jaime at least believes that he cares and is taking corresponding actions. This of course doesn't fully absolve him of responsibility - there are many people who believe they do things for good reasons, but whose actions are actually driven by selfish ones. But now we're getting into a particularly speculative area.
"I also think it should be our dominant prior that someone is not motivated by reducing x-risk unless they directly claim they do." -> Again, to me, I regard him as basically claiming that he does care. I'd bet money that if we ask him to clarify, he'd claim that he cares. (Happy to bet on this, if that would help)
At the same time, I doubt that this is your actual crux. I'd expect that even if he claimed (more precisely) to care, you'd still be skeptical of some aspect of this.
---
Personally, I have both positive and skeptical feelings about Epoch, as I do other evals orgs. I think they're doing some good work, but I really wish they'd lean a lot more on [clearly useful for x-risk] work. If I had a lot of money to donate, I could picture donating some to Epoch, but only if I could get a lot of assurances on which projects it would go to.
But while I have reservations about the org, I think some of the specific attacks against them (and defenses of them) are not accurate.
I did a bit of digging, because these quotes seemed narrow to me. Here's the original tweet of that thread:
> Full state dump of my AI risk related beliefs:
> - I currently think that we will see ~full automation of society by Median 2045, with already very significant benefits by 2030
> - I am not very concerned about violent AI takeover. I am concerned about concentration of power and gradual disempowerment. I put the probability that ai ends up being net bad for humans at 15%.
> - I support treating ai as a general purpose tech and distributed development. I oppose stuff like export controls and treating AI like military tech. My sense is that AI goes better in worlds where we gradually adopt it and it's seen as a beneficial general purpose tech, rather than a key strategic tech only controlled by a small group of people
> - I think alignment is unlikely to happen in a robust way, though companies could have a lot of sway on AI culture in the short term.
> - on net I support faster development of AI, so we can benefit earlier from it.
Then right after:
> It's a hard problem, and I respect people trying their hardest to make it go well.
All said, this specific chain doesn't give us a huge amount of information. It totals something like 10-20 sentences.
> He says it so plainly that it seems as straightforwardly of a rejection of AI x-risk concerns that I've heard:
This seems like a major oversimplification to me. He says "I am concerned about concentration of power and gradual disempowerment. I put the probability that ai ends up being net bad for humans at 15%." There is a cluster in the rationalist/EA community that believes that "gradual disempowerment" is an x-risk. Perhaps you wouldn't define "concentration of power and gradual disempowerment" as technically an x-risk, but if so, that seems a bit like a technicality to me. It can clearly be a very major deal.
It sounds to me a lot like Jaime is very concerned about some aspects of AI risk but not others.
In the quote you reference, he clearly says, "Not that it should be my place to unilaterally make such a decision anyway." I hear him saying, "I disagree with the x-risk community about the issue of slowing down AI, specifically. However, I don't think this disagreement is a big concern, given that I also feel it's not right for me to personally push for AI to be sped up, and thus I won't do it."
I found this analysis refreshing and would like to see more on the GPU depreciation costs.
As better GPUs are developed, existing ones will go down in value quickly, perhaps by 25% to 50% per year. This seems like a really tough expense and supply chain to manage.
I'd expect most of the other infrastructure costs to depreciate much more slowly, as you mention.
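To make the rough numbers concrete, here's a tiny sketch. The $30k purchase price and the 25%/50% annual depreciation rates are just the guesses from above, not data from the post.

```python
# Residual GPU value under two assumed annual depreciation rates.
initial_value = 30_000  # hypothetical price of one data-center GPU, in dollars

for annual_depreciation in (0.25, 0.50):
    print(f"Annual depreciation: {annual_depreciation:.0%}")
    for year in range(1, 6):
        remaining = initial_value * (1 - annual_depreciation) ** year
        print(f"  year {year}: ~${remaining:,.0f}")
```

At the 50%/year rate, a card retains only about 3% of its purchase price after five years (versus roughly 24% at 25%/year), which is why the replacement cadence matters so much for these cost models.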