
steven0461

Steven K

Comments

34 · steven0461's Shortform Feed · Ω · 6y · 40
Contra papers claiming superhuman AI forecasting
steven0461 · 10mo · 40

As I understand it, the Metaculus crowd forecast performs as well as it does (relative to individual predictors) in part because it gives greater weight to more recent predictions. If "superhuman" just means "superhumanly up-to-date on the news", it's less impressive for an AI to reach that level if it's also up-to-date on the news when its predictions are collected. (But to be confident that this point applies, I'd have to know the details of the research better.)
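To make the recency-weighting point concrete, here is a minimal sketch (Python, using an exponential decay with an illustrative `half_life_days` parameter; the actual Metaculus aggregation uses its own weighting and calibration details):

```python
from datetime import datetime, timezone

def recency_weighted_forecast(predictions, half_life_days=7.0):
    """Aggregate timestamped probability forecasts, discounting older ones.

    `predictions` is a list of (probability, timestamp) pairs. Weights decay
    exponentially with age, so recent predictions dominate the aggregate --
    the "up-to-date on the news" effect described above.
    """
    now = datetime.now(timezone.utc)
    weighted_sum = 0.0
    weight_total = 0.0
    for prob, timestamp in predictions:
        age_days = (now - timestamp).total_seconds() / 86400
        weight = 0.5 ** (age_days / half_life_days)  # exponential decay with age
        weighted_sum += weight * prob
        weight_total += weight
    return weighted_sum / weight_total

# Example: a stale 30% forecast and a fresher 60% forecast.
# With a 7-day half-life, the aggregate lands much closer to 60%.
preds = [
    (0.30, datetime(2025, 1, 1, tzinfo=timezone.utc)),
    (0.60, datetime(2025, 1, 20, tzinfo=timezone.utc)),
]
print(round(recency_weighted_forecast(preds), 3))
```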

This is already your second chance
steven0461 · 1y · 20

"Broekveg" should be "Broekweg"

Superforecasting the premises in “Is power-seeking AI an existential risk?”
steven0461 · 2y · 103

partly as a result of other projects like the Existential Risk Persuasion Tournament (conducted by the Forecasting Research Institute), I now think of it as a data-point that “superforecasters as a whole generally come to lower numbers than I do on AI risk, even after engaging in some depth with the arguments.”

I participated in the Existential Risk Persuasion Tournament and I disagree that most superforecasters in that tournament engaged in any depth with the arguments. I also disagree with the phrase "even after arguing about it" - barely any arguing happened, at least in my subgroup. I think much less effort went into these estimates than it would be natural to assume based on how the tournament has been written about by EAs, journalists, and so on.

Stampy's AI Safety Info soft launch
steven0461 · 2y* · 31

Thanks, yes, this is a helpful type of feedback. We'll think about how to make that section clearer to readers without background knowledge. The site is aimed at all audiences, which means navigating tradeoffs between leaving gaps in justifying claims, being too long, and not having enough scope to work as an overview. In this case, it does look like we could err on the side of adding a bit more text and links. Your point about the glossary sounds reasonable and I'll pass it along. (I guess the tradeoff there is that people might see an unexplained term and not realize that an earlier instance of it had a glossary link.)

Stampy's AI Safety Info soft launch
steven0461 · 2y · 21

You're right that it's confusing, and we've been planning to change how collapsing and expanding works. I don't think specifics have been decided on yet; I'll pass your ideas along.

I don't think there should be "random" tabs, unless you mean the ones that appear from the "show more questions" option at the bottom. In some cases, the content of child questions may not relate in an obvious way to the content of their parent question. Is that what you mean? If questions are appearing despite not 1) being linked anywhere below "Related" in the doc corresponding to the question that was expanded, or 2) being left over from a different question that was expanded earlier, then I think that's a bug, and I'd be interested in an example.

Stampy's AI Safety Info soft launch
steven0461 · 2y · 90

Quoting from our Manifund application:

We have received around $46k from SHfHS and $54k from LTFF, both for running content writing fellowships. We have been offered a $75k speculation grant from Lightspeed Grants for an additional fellowship, and made a larger application to them for the dev team which has not been accepted. We have also recently made an application to Open Philanthropy.

Stampy's AI Safety Info soft launch
steven0461 · 2y · 30

EA Forum version (manually crossposting to make coauthorship work on both posts):

https://forum.effectivealtruism.org/posts/mHNoaNvpEuzzBEEfg/stampy-s-ai-safety-info-soft-launch

Join AISafety.info's Writing & Editing Hackathon (Aug 25-28) (Prizes to be won!)
steven0461 · 2y · 20

If there's interest in finding a place for a few people to cowork on this in Berkeley, please let me know.

Stampy's AI Safety Info - New Distillations #4 [July 2023]
steven0461 · 2y · 20

Thanks, I made a note on the doc for that entry and we'll update it.

Stampy's AI Safety Info - New Distillations #4 [July 2023]
steven0461 · 2y · 40

Traffic is pretty low currently, but we've been improving the site during the distillation fellowships and we're hoping to make more of a real launch soon. And yes, people are working on a Stampy chatbot. (The current early prototype isn't finetuned on Stampy's Q&A but searches the alignment literature and passes things to a GPT context window.)
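As a rough sketch of that "search the alignment literature, then pass excerpts into the context window" pattern (not the prototype's actual code; the `retrieve_passages` helper, model name, and prompts below are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_with_retrieved_context(question, retrieve_passages, top_k=5):
    """Sketch of retrieval-augmented answering: look up relevant passages
    from an alignment-literature index, then hand them to the model as
    context rather than relying on fine-tuning.

    `retrieve_passages` is a placeholder callable (e.g. a vector-store
    search) that returns a list of text excerpts for the question.
    """
    passages = retrieve_passages(question, top_k=top_k)
    context = "\n\n".join(passages)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not the prototype's
        messages=[
            {"role": "system",
             "content": "Answer using only the provided excerpts from the AI alignment literature."},
            {"role": "user",
             "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```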

Posts

5 · If we get things right, AI could have huge benefits · 23d · 0
8 · Advanced AI is a big deal even if we don’t lose control · 23d · 0
5 · Defeat may be irreversibly catastrophic · 23d · 0
6 · AI can win a conflict against us · 1mo · 0
5 · Different goals may bring AI into conflict with us · 1mo · 2
14 · AI’s goals may not match ours · 2mo · 1
13 · AI may pursue goals · 2mo · 0
10 · The road from human-level to superintelligent AI may be short · 3mo · 0
23 · Human-level is not the limit · 3mo · 2
11 · AI may attain human-level soon · 3mo · 0
Wikitag Contributions

Tiling Agents · 1y · (+10/-11)
Tiling Agents · 1y · (+9/-10)
Tiling Agents · 1y · (+7/-8)
Tiling Agents · 1y · (+13/-14)
Hedonium · 2y · (+26/-27)
Narrow AI · 2y · (+15/-15)
Superintelligence · 2y · (-48)
Moral uncertainty · 13y · (+253/-88)
Economic Consequences of AGI · 13y · (+93/-8)