Tapatakt

Comments

Tapatakt's Shortform
Tapatakt · 1d

Most people don't understand the concept of type-annotation.

I think it's mostly not about simpler words, but about simpler sentences, actually.
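(For readers unfamiliar with the term: a minimal Python sketch of a type annotation, added as an illustration; the example is mine, not from the quoted comment.)

    # A type annotation declares what type a value is expected to have.
    # Here `name: str` annotates the parameter and `-> str` the return value.
    def greet(name: str) -> str:
        return "Hello, " + name + "!"

    # Annotations are hints: Python does not enforce them at runtime,
    # but tools such as mypy can check them statically.
    greeting: str = greet("world")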

Tapatakt's Shortform
Tapatakt · 1d

New cause area: translate all of EY's writing into Basic English. Only half a joke. And it's not only about Yudkowsky.

I think I will actually do something like this with some text for testing purposes.

Tapatakt's Shortform
Tapatakt · 2d

Is anyone pushing for writing that raises awareness about AI risks to be simpler?

Not inferential-distance-simple, but stylistically-simple.

I'm translating the online materials for IABIED into Russian. They contain sentences like this:

The wonder of natural selection is not its robust error-correction covering every pathway that might go wrong; now that we’re dying less often to starvation and injury, most of modern medicine is treating pieces of human biology that randomly blow up in the absence of external trauma.

This is not cherry-picked at all; it's from the last page I translated. I had to render this one sentence as three in Russian. And a quick LLM check confirmed that English is actually less tolerant of overly long sentences than Russian.

I think this is bad. I hope it's better in the book (my copy hasn't reached me yet), and that the online materials are like this only because they're poorly edited bonus content. But I have a feeling that a lot of writing on AI safety has the same problem.

Paranoia: A Beginner's Guide
Tapatakt · 3d

Both medical advice and legal advice are categories where we only allow certified experts to speak freely

Really? I thought only medical/legal/financial professionals have to write "not medical/legal/financial advice" disclaimers. (I'm not from the US.)

Lack of Social Grace is a Lack of Skill
Tapatakt · 13d

I think I mostly agree with you, but some skills (or the process of learning them) predictably influence one's values and behaviour in undesirable ways. In the case of social grace, this influence can run: "I feel small changes in my social capital depending on what I say -> I get frequent reinforcement based on my social capital -> I come to value my social capital more -> I become more reluctant to spend my social capital on saying honest things".

And yes, avoiding this influence is just another skill one can learn, and a perfect rationalist would certainly have it, but learning it isn't free.

LW Reacts pack for Discord/Slack/etc
Tapatakt · 21d

Also resized them all to 100x100.

LW Reacts pack for Discord/Slack/etc
Tapatakt · 22d

Thanks! Added to Telegram!

LW Reacts pack for Discord/Slack/etc
Tapatakt · 22d

Is there a version with a transparent background somewhere?

Homomorphically encrypted consciousness and its implications
Tapatakt · 24d

I agree with J Bostock. I see no problem with A. Why do you think polynomial complexity is so important?

(Thanks for the very nice structuring, btw!)

The Mom Test for AI Extinction Scenarios
Tapatakt · 1mo

"Well, AI will be the most lying bitch, and it will be friend with all bosses"

Posts

Lucky Omega Problem · 5mo · 4 comments
Weird Random Newcomb Problem · 7mo · 16 comments
I turned decision theory problems into memes about trolleys · 1y · 23 comments
Tapatakt's Shortform · 2y · 48 comments
Should we cry "wolf"? · 3y · 5 comments
AI Safety "Textbook". Test chapter. Orthogonality Thesis, Goodhart Law and Instrumental Convergency · 3y · 1 comment
I (with the help of a few more people) am planning to create an introduction to AI Safety that a smart teenager can understand. What am I missing? · 3y · 5 comments
[Question] I currently translate AGI-related texts to Russian. Is that useful? · 4y · 6 comments