TsviBT

Comments

TsviBT

While the object-level calculation is central, of course, I'd want to note that there's a symbolic value to cryonics. (Symbolic action is tricky, and I agree with not straightforwardly taking symbolic action for the sake of the symbolism, but anyway.) If we (broadly) were more committed to Life, then maybe some preconditions for AGI researchers racing to destroy the world would be removed.

TsviBT

Ok, I think I see what you're saying. To check part of my understanding: when you say "AI R&D is fully automated", I think you mean something like:

Most major AI companies have fired almost all of their SWEs. They still have staff to physically build datacenters, do business, etc.; and they have a few overseers / coordinators / strategizers of the fleet of AI R&D research gippities; but the overseers are acknowledged to basically not be doing much, and to not clearly even be helping; and the overall output of the research group is "as good or better" than in 2025--measured... somehow.

TsviBT

Ok. So I take it you're very impressed with the difficulty of the research that is going on in AI R&D.

> we can agree that once the AIs are automating whole companies stuff

(FWIW, I don't agree with that; I don't know what companies are up to, and some of them might not be doing much difficult stuff and/or the managers might not be able to, or care to, tell the difference.)

TsviBT

Thanks... but wait, this is among the most impressive things you expect to see? (You know more than I do about that distribution of tasks, so you could justifiably find it more impressive than I do.)

TsviBT

What are some of the most impressive things you do expect to see AI do, such that if you didn't see them within 3 or 5 years, you'd majorly update about the time to the type of AGI that might kill everyone?

TsviBT

> would you think it wise to have TsviBT¹⁹⁹⁹ align contemporary Tsvi based on his values? How about vice versa?

It would be mostly wise either way, yeah, but that's relying on both directions being humble / anapartistic.

TsviBT

> do you think stable meta-values are to be observed between australopithecines and, say, contemporary western humans?
>
> on the other hand: do values across primitive tribes or early agricultural empires not look surprisingly similar?

I'm not sure I understand the question, or rather, I don't know how I could know this. Values are supposed to be things that live in an infinite game / Nomic context. You'd have to have these people get relatively more leisure before you'd see much of their values.

TsviBT

I mean, I don't know how it works in full; that's a lofty and complex question. One reason to think it's possible is that there's a really big difference between the kind of variation and selection we do in our heads with ideas and the kind evolution does with organisms. (Our ideas die so we don't have to, and so forth.)

I do feel like some thoughts change some aspects of some of my values, but these are generally "endorsed by more abstract but more stable meta-values", and I also feel like I can learn e.g. most new math without changing any values. Where "values" is, if nothing else, cashed out as "what happens to the universe in the long run due to my agency" or something (it's more confusing when there are peer agents).

Mateusz's point is still relevant; there are just lots of different ways the universe can go, and you can choose among them.

TsviBT

I quite dislike earplugs. Partly it's the discomfort, which maybe those can help with; but partly I just don't like being closed off from hearing what's around me. But maybe I'll try those, thanks (even though the last 5 earplugs were just uncomfortable, contra promises).

Yeah, I mean I think the music thing is mainly nondistraction. The quiet of night is great for thinking, which doesn't help the sleep situation.
