WilliamKiely

Comments

That's it, thank you!

How is...

> I'm short on AGI timelines (50% by 2030)

...consistent with...

> An AI paradigm as performance-enhancing as transformers is discovered by AI search. 30% by 2030

...?

Doesn't AGI imply the latter?

To add some more emphasis to my point, because I think it deserves it:

Quoting the interview Jacy linked to:

> Your paper also says that “[w]ithout being overly alarmist, this should serve as a wake-up call for our colleagues” — what is it that you want your colleagues to wake up to? And what do you think that being overly alarmist would look like?
>
> We just want more researchers to acknowledge and be aware of potential misuse. When you start working in the chemistry space, you do get informed about misuse of chemistry, and you’re sort of responsible for making sure you avoid that as much as possible. **In machine learning, there’s nothing of the sort. There’s no guidance on misuse of the technology.**

I know I'm not saying anything new here, and as a layperson I can't verify the truth of the claim I highlighted in bold above, but I do want to emphasize it further:

It seems clear that we should all push, as soon as we can, to make the machine learning space like the chemistry space in this respect: people working in machine learning should get informed about the ways it can be misused and cause harm. (We should also expand the discussion of potential harm beyond harm from misuse to any harm related to the technology.)

Years ago I recall Stuart Russell making the analogy that civil engineering doesn't have a separate field for bridge safety; rather, bridge safety is something all bridge designers are educated on and concerned about. Similarly, he doesn't want AI safety to be a field separate from AI; he wants everyone working on AI to be educated on and concerned with its risks.

This is the same point I'm making here, and I'm repeating it because the machine learning space still seems far from that standard, and we as a community really do need to devote more effort to changing this in the near future.

> Risk of misuse
>
> The thought had never previously struck us.

This seems like one of the most tractable things to address to reduce AI risk.

If, five years from now, anyone developing AI or biotechnology is still not thinking early and seriously about ways their work could cause harm that other people have been talking about for years, I think we should consider ourselves to have failed.

FWIW when I first saw the title (on the EA Forum) my reaction was to interpret it with an implicit "[I think that] We must be very clear: fraud in the service of effective altruism is unacceptable".

Things generally don't just become true because people assert them to be--surely people on LW know that. I think habryka's concern that omitting "I think" from the title is a big deal is overblown. Dropping "I think" from the title to make it more concise seems reasonable to me; I don't anticipate it degrading the culture of LW. I also don't see how it "bypasses epistemic defenses." If omitting an "I think" from your title will worsen readers' epistemics, then those readers are at great risk of getting terrible epistemics from any news headline they see.

I don't mean to say that there's no value in using more nuanced language, including "I think" and similar qualifications, to be more precise with one's words--just that the karma/vote ratio your post received is an over-reaction to the concern that posts in your style degrade LW's level-one "Attempt to describe the world accurately" culture.

Agreed with this given how many orders of magnitude potential values span.

Rescinding my previous statement:

> I also think the amount of probability I assign to 1%-99% futures is (~10x?) larger than the amount I assign to >99% futures.

I'd now say that probably the probability of 1%-99% optimal futures is <10% of the probability of >99% optimal futures.

This is because 1% optimal is very close to being optimal (only 2 orders of magnitude away out of dozens of orders of magnitude of very good futures).
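
To make the log-scale intuition concrete, here is a minimal numeric sketch in Python. The 30-orders-of-magnitude span of "very good" futures and the normalization of the optimal future to 1.0 are hypothetical assumptions chosen for illustration, not figures from the original discussion:

```python
import math

# Illustrative sketch only: assume (hypothetically) that "very good" futures
# span 30 orders of magnitude of value, with the optimal future normalized to 1.0.
OOM_SPAN = 30

def oom_below_optimal(fraction_of_optimal: float) -> float:
    """Orders of magnitude by which a future's value falls short of optimal."""
    return -math.log10(fraction_of_optimal)

print(oom_below_optimal(0.01))  # 2.0 -- a 1%-optimal future is only 2 OOM below optimal

band_width = oom_below_optimal(0.01) - oom_below_optimal(0.99)
print(band_width, band_width / OOM_SPAN)  # ~2.0 OOM, i.e. under 7% of the assumed 30-OOM range
```

On this assumed scale, the 1%-99% band is a thin slice sitting right next to the >99% futures, which is the intuition behind calling 1% optimal "very close to being optimal."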

Your comment greatly adds to the value of the post for me, thanks!

Just saw that Dindane's comment above is a helpful answer to my question.

What does the thickness of the lines represent precisely? Something like "the thicker the line, the more important the thing underneath is to the thing on top"?

The context behind these questions is that, without a clear sense of what the thickness of the lines represents and how to use it to compare two interventions (represented by two different multi-line paths along the tree), it's not clear to me how this framework is more helpful than a reminder to think big picture and not get lost in the details. (And I'd like it to be a useful framework for me.)

I wonder if the fact that there are ~10 respondents who have worked in AI for 7 years, but only one who has for 8 years, is because of Superintelligence, which was published in 2014 (so people it drew into the field would mostly have started in 2015 or later).