kwiat.dev

Comments
kwiat.dev · 176

Genuine questions - do you still believe you're one of the smartest young supergeniuses?

kwiat.dev · 102

Trump "announces" a lot of things. It doesn't matter until he actually does them.

kwiat.dev · 180

While I participated in a previous edition, and somewhat enjoyed it, I couldn't bring myself to support it now that Remmelt is the organizer, given his anti-AI-art crusades and his broader "stop AI" activism. It's unfortunate, since technical AI safety research is very valuable, but promoting those anti-AI initiatives makes it a probable net negative in my eyes.

Maybe it's better to let AISC die a hero.

Because the same argument could have been made earlier in the "exponential curve". I don't think we should have paused AI (or, more broadly, CS) in the 50's, and I don't think we should do it now.

kwiat.dev · -1 · -4

Modern misaligned AI systems are good, actually. There's some recent news about Sakana AI developing a system where the agents tried to extend their own runtime by editing their code/config. 

This is amazing for safety! Current systems are laughably incapable of posing x-risks. Now, thanks to capabilities research, we have a clear example of behaviour that would be dangerous in a more "serious" system. So we can proceed with empirical research, creating and evaluating methods to deal with this specific risk, so that future systems do not exhibit this failure mode.

The future of AI and AI safety has never been brighter.

Expert opinion is an argument for people who are not themselves particularly informed about the topic. For everyone else, it basically turns into an authority fallacy.

And how would one go about procuring such a rock? Asking for a friend.

The ML researchers saying stuff like "AGI is 15 years away" have either not thought it through carefully, or are lying to themselves or to the survey.

 

Ah yes, the good ol' "If someone disagrees with me, they must be stupid or lying"
