kwiat.dev

Comments

I changed my mind about orca intelligence
kwiat.dev · 6mo

So you do, gotcha

I changed my mind about orca intelligence
kwiat.dev · 6mo

Genuine question - do you still believe you're one of the smartest young supergeniuses?

DeepSeek Panic at the App Store
kwiat.dev · 8mo

Trump "announces" a lot of things. It doesn't matter until he actually does them.

We don't want to post again "This might be the last AI Safety Camp"
kwiat.dev · 8mo

While I participated in a previous edition and somewhat enjoyed it, I couldn't bring myself to support the camp now that Remmelt is the organizer, between his anti-AI-art crusades and his overall "stop AI" activism. It's unfortunate, since technical AI safety research is very valuable, but promoting those anti-AI initiatives makes it a probable net negative in my eyes.

Maybe it's better to let AISC die a hero.

Ten counter-arguments that AI is (not) an existential risk (for now)
kwiat.dev · 1y

Because the same argument could have been made earlier in the "exponential curve". I don't think we should have paused AI (or CS more broadly) in the 50s, and I don't think we should do it now.

Ariel Kwiatkowski's Shortform
kwiat.dev · 1y

Modern misaligned AI systems are good, actually. There's some recent news about Sakana AI developing a system where the agents tried to extend their own runtime by editing their code/config. 

This is amazing for safety! Current systems are laughably incapable of posing x-risks. Now, thanks to capabilities research, we have a clear example of behaviour that would be dangerous in a more "serious" system. So we can proceed with empirical research, creating and evaluating methods to deal with this specific risk, so that future systems don't have this failure mode.

The future of AI and AI safety has never been brighter.

Ten arguments that AI is an existential risk
kwiat.dev · 1y

Expert opinion is an argument for people who are not themselves particularly informed about the topic. For everyone else, it basically amounts to an appeal to authority.

What if a tech company forced you to move to NYC?
kwiat.dev · 1y · [comment collapsed]

Why I'm not doing PauseAI
kwiat.dev · 1y

And how would one go about procuring such a rock? Asking for a friend.

AI Safety 101 : Capabilities - Human Level AI, What? How? and When?
kwiat.dev · 2y

"The ML researchers saying stuff like AGI is 15 years away have either not carefully thought it through, or are lying to themselves or the survey."

Ah yes, the good ol' "If someone disagrees with me, they must be stupid or lying"

Posts

Ten counter-arguments that AI is (not) an existential risk (for now) · 1y · 20 karma · 5 comments
Why I'm not doing PauseAI · 1y · -8 karma · 5 comments
My thoughts on the Beff Jezos - Connor Leahy debate · 2y · -5 karma · 23 comments
Why I'm not worried about imminent doom · 2y · 7 karma · 2 comments
Thoughts about Hugging Face? [Question] · 2y · 7 karma · 0 comments
Alignment-related jobs outside of London/SF [Question] · 3y · 26 karma · 14 comments
AISC5 Retrospective: Mechanisms for Avoiding Tragedy of the Commons in Common Pool Resource Problems · 4y · 8 karma · 3 comments
Competence vs Alignment [Question] · 5y · 7 karma · 4 comments
How to validate research ideas? [Question] · 5y · 12 karma · 2 comments
Ariel Kwiatkowski's Shortform · 5y · 2 karma · 4 comments