ShowMeTheProbability

Posts

1 · ShowMeTheProbability's Shortform · 3y · 3

Comments
ChatGPT can learn indirect control
ShowMeTheProbability · 1y · -1 · -2

I often share the feeling you have; I believe it's best characterised as 'fear/terror/panic' of the unknown.

Some undefined stuff is going to happen which may be scary, but there's no reason to think it will specifically be death rather than something else.

Reply
AI #52: Oops
ShowMeTheProbability · 2y · 1 · 0

Great post. I loved the comprehensive breakdown and feel much more up to date.

Thanks!

Reply
We're already in AI takeoff
ShowMeTheProbability · 3y · 1 · 0

The Sam Bankman-Fried section reads differently now that his massive fraud at FTX is public; it might be worth a comment or revision?

I can't help but see Sam disagreeing with a message as a point in the message's favour (I know it's a fallacy, but the feeling's still there).

Reply
Here's the exit.
ShowMeTheProbability · 3y · 8 · 0

From my perspective, you nailed the emotional vibe dead on. It's what I would've needed to hear (if I had had the mental resources to process the warning properly before having a breakdown).

Reply
Here's the exit.
ShowMeTheProbability · 3y · 6 · 0

Thank you for writing this, Valentine. It is an important message, and I am really glad someone is saying it.
I first got engaged with the community when I was in vulnerable life circumstances, and I suffered major clinical distress fixated on many of the ideas I encountered here.

To be clear, I am not saying rationalist culture was the cause of my distress; it was not. I am sharing my subjective experience that when you are silently screaming in internal agony, some of the ideas in this community can serve as a catalyst for a psychotic breakdown.

Reply
Adversarial Priors: Not Paying People to Lie to You
ShowMeTheProbability · 3y · 1 · 0

Assertion: An Ideal Agent never pays people to lie to them.

What if an agent has built a lie detector and wants to test it out? I expect that's a circumstance where you want someone to lie to you consistently and on demand.

What's the core real-world situation you are trying to address here?

Reply
Extreme Security
[+] ShowMeTheProbability · 3y · -6 · -2
ShowMeTheProbability's Shortform
ShowMeTheProbability · 3y* · 1 · 0

Thanks for the feedback!

I'll see if my random idea can be formalised in such a way as to constitute a (hard) test of cognition which is satisfying to humans.

Reply
ShowMeTheProbability's Shortform
ShowMeTheProbability · 3y · 1 · 1

The lack of falsification criteria for AGI (unresearched rant)

Situation: Lots of people are talking about AGI and AGI safety, but nobody can point to one. This is a Serious Problem, and a sign that you are confused.

Problem:

  • Currently proposed AGI tests are ad-hoc nonsense (https://intelligence.org/2013/08/11/what-is-agi/)
  • Historically, when these tests are passed the goalposts are shifted (the Turing test was passed by fooling humans, which is incredibly subjective and relatively easy).

Solution:

  • A robust and scalable test of abstract cognitive ability.
  • A test that could be passed by a friendly AI in such a way as to communicate co-operative intent, without all the humans freaking out.

Would anyone be interested in such a test, so that we can detect the subject of our study?

Reply
Announcing my YouTube channel
[+] ShowMeTheProbability · 4y* · -6 · 0