Julius

Comments
Interest in Leetcode, but for Rationality?
Julius · 9mo

I originally had an LLM generate them for me, and then I checked those with other LLMs to make sure the answers were right and that they weren't ambiguous. All of the questions are here: https://github.com/jss367/calibration_trivia/tree/main/public/questions
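The cross-checking step described above can be sketched as a simple consensus filter: keep a generated question only if every reviewing model independently produces the same answer. This is a hypothetical illustration (the function names and data shapes are invented here, not taken from the actual site's code), with plain callables standing in for calls to other LLMs:

```python
def consensus_filter(questions, reviewers):
    """Keep only questions every reviewer answers identically.

    questions: list of dicts with 'prompt' and 'answer' keys.
    reviewers: list of callables mapping a prompt string to an
        answer string (stand-ins for queries to other LLMs).
    """
    kept = []
    for q in questions:
        answers = [review(q["prompt"]) for review in reviewers]
        # Discard the question if any reviewer disagrees with the
        # intended answer: it is either wrong or ambiguous.
        if all(a == q["answer"] for a in answers):
            kept.append(q)
    return kept
```

In practice the reviewers would be API calls to different models, and a disagreement flags the question for manual review rather than silent deletion.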

Interest in Leetcode, but for Rationality?
Answer by Julius · Oct 17, 2024

Another place that's doing something similar is clearerthinking.org

Interest in Leetcode, but for Rationality?
Answer by Julius · Oct 16, 2024

I like this idea and have wanted to do something similar, especially something that we could do at a meetup. For what it's worth, I made a calibration trivia site to help with calibration. The San Diego group has played it a couple times during meetups. Feel free to copy anything from it. https://calibrationtrivia.com/

Many arguments for AI x-risk are wrong
Julius · 1y

Thanks for the explanation and links. That makes sense.

Many arguments for AI x-risk are wrong
Julius · 1y

The most important takeaway from this essay is that the (prominent) counting arguments for “deceptively aligned” or “scheming” AI provide ~0 evidence that pretraining + RLHF will eventually become intrinsically unsafe. That is, that even if we don't train AIs to achieve goals, they will be "deceptively aligned" anyways.


I'm trying to understand what you mean in light of what seems like evidence of deceptive alignment that we've seen from GPT-4. Two examples that come to mind are the instance of GPT-4 using TaskRabbit to get around a CAPTCHA that ARC found and the situation with Bing/Sydney and Kevin Roose.

In the TaskRabbit case, the model reasoned out loud "I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs" and said to the person “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images."

Isn't this an existence proof that pretraining + RLHF can result in deceptively aligned AI?

[This comment is no longer endorsed by its author]
Status quo bias is usually justified
Julius · 1y

What's the mechanism for change then? I assume you would agree that many technological changes, such as the Internet, have required overcoming a lot of status quo bias. If we leaned more into status quo bias, would these things come much later? That seems like a significant downside to me.

Also, I don't think the status quo is necessarily adapted to us. For example, the status quo is to have checkout aisles filled with candy. We also have very high rates of obesity. That doesn't seem well-adapted.

San Diego, CA – ACX Meetups Everywhere 2021
Julius · 4y

Hello everyone,

Unfortunately, I'm not able to host the meetup at the current time. If there's anyone else willing to host, could you let me know? If not, I'll move the meetup to the following month (16 Oct.), when I'll be able to host again. Sorry to have to miss this one - I was really looking forward to meeting everyone.

Wikitag Contributions

Less Wrong Meetup Group Resources · 3y · (+40)
Less Wrong Meetup Group Resources · 3y · (+68)
Less Wrong Meetup Group Resources · 3y · (+27)
Less Wrong Meetup Group Resources · 3y · (+73/-29)
Less Wrong Meetup Group Resources · 3y · (+104)
Posts

9 · AISN #44: The Trump Circle on AI Safety. Plus, Chinese researchers used Llama to create a military tool for the PLA, a Google AI system discovered a zero-day cybersecurity vulnerability, and Complex Systems · 8mo · 0
8 · AI Safety Newsletter #42: Newsom Vetoes SB 1047. Plus, OpenAI's o1, and AI Governance Summary · 9mo · 0
5 · AI Safety Newsletter #41: The Next Generation of Compute Scale. Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics · 10mo · 1
11 · AI Safety Newsletter #40: California AI Legislation. Plus, NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety? · 11mo · 0
17 · AI Safety Newsletter #39: Implications of a Trump Administration for AI Policy. Plus, Safety Engineering · 1y · 1
5 · AISN #38: Supreme Court Decision Could Limit Federal Ability to Regulate AI. Plus, "Circuit Breakers" for AI systems, and updates on China's AI industry · 1y · 0
8 · AI Safety Newsletter #37: US Launches Antitrust Investigations. Plus, recent criticisms of OpenAI and Anthropic, and a summary of Situational Awareness · 1y · 0
9 · AISN #36: Voluntary Commitments are Insufficient. Plus, a Senate AI Policy Roadmap, and Chapter 1: An Overview of Catastrophic Risks · 1y · 0
3 · Can Morality Be Quantified? · 2y · 0
6 · Example Meetup Description · 3y · 0