LESSWRONG

Ronny Fernandez

Comments

A case for courage, when speaking of AI danger
Ronny Fernandez · 8d · 0 · -2

To check, do you have particular people in mind for this hypothesis? It seems kinda rude to name them here, but could you maybe send me some guesses privately? I currently don't find this hypothesis as stated very plausible; or, sure, maybe, but I think it accounts for a relatively small fraction of the effect.

A case for courage, when speaking of AI danger
Ronny Fernandez · 8d · 9 · 0

Curated. I have been at times more cautious in communicating my object-level views than I now wish I had been. I appreciate this post as a flag for courage: something others might see, and which might counter some of the (according to me) prevailing messages of caution. Those messages, at least in my case, significantly contributed to my caution, and I wish there had been something like this post around for me to read before I had to decide how cautious to be.

The argument this post presents for the conclusion that many people should, by their own lights, be braver in communicating about AI x-risk is only moderately convincing. It relies heavily on how the book's blurbs were sampled, and it seems likely that a lot of optimization went into getting strong snippets rather than a representative sample. I find it hard to update much on this without knowing the details of how the blurbs were collected, even though you address this specific concern. Still, it's not totally unconvincing.

I’d like to see more empirical research into what kinds of rhetoric work to achieve which aims when communicating with different audiences about AI x-risk. This seems like the sort of thing humanity has already specced into studying, and I’d love to see more writeups applying that existing competence to these questions.

I also would have liked to see more of Nate's personal story: how he came to hold his current views. My impression is that he didn't always believe so confidently that people should have more of the courage of their convictions when talking about AI x-risk. A record of how his mind changed over time, and what observations/thoughts/events caused that change, could be informative for others in an importantly different way than this post or empirical work on the question could be. I'd love to see a post from Nate on that in the future.

METR: Measuring AI Ability to Complete Long Tasks
Ronny Fernandez · 3mo · 6 · 0

Curated. Comparing model performance on tasks to the time human experts need to complete the same tasks (at a fixed reliability) is worth highlighting, since it helps operationalize terms like "human-level AI" and "AI level of capabilities" in general. Furthermore, by making this empirical comparison and discovering a 7-month doubling time, this work significantly reduces our uncertainty about both when to expect certain capabilities and (more impressively, according to me) how to conceptualize those AI capability levels. That is, on top of reducing our uncertainty, I think this work also provides a good general format / frame for reporting AI capabilities forecasts, e.g., we have X years until models can do things that it takes human experts Y hours to do with reliability Z%.
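
As a minimal sketch of that frame (only the ~7-month doubling time comes from the post; the current and target horizon values below are made-up placeholders):

```python
import math

# Hypothetical extrapolation in the "time horizon" frame.
# Only the ~7-month doubling time is taken from the post; the
# current and target horizons below are placeholder values.
doubling_time_months = 7.0
current_horizon_hours = 1.0    # placeholder: task length models handle now at reliability Z%
target_horizon_hours = 160.0   # placeholder: roughly a month of expert work

doublings_needed = math.log2(target_horizon_hours / current_horizon_hours)
months_needed = doublings_needed * doubling_time_months

print(f"{doublings_needed:.1f} doublings, about {months_needed / 12:.1f} years")
```

With those placeholder numbers the answer comes out to roughly 4.3 years; the point is the reporting format, not the particular figure.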

I also appreciated the discussions this post inspired about whether we should expect the slope in log-space to change, and if so in which direction, as well as the related discussion about whether we should expect this trend to go superexponential. Interesting arguments and models were put forth in both discussions.

I hope in the future METR explores other methods for concretizing/operationalizing and forecasting AI capability levels. For example, comparing human expert reliability in general within specific task domains to model task reliability within those same domains, or comparing the time humans take to become reliable experts in certain domains to model task reliability within those same domains.

What Is The Alignment Problem?
Ronny Fernandez · 4mo · 5 · -1

Curated. Tackles thorny conceptual issues at the foundation of AI alignment while also revealing the weak spots of the abstractions used to do so.

I like the general strategy of trying to make progress on understanding the problem by relying only on the concept of "basic agency", without having to work on the much harder problem of coming up with a useful formalism for a more full-throated conception of agency, whether or not that turns out to be enough in the end.

The core point of the post is plausible and worthy of discussion: that certain kinds of goals only make sense at all given that certain kinds of patterns are present in the environment, and that most of the problem of making sense of the alignment problem is identifying what those patterns are for the goal of "make aligned AGIs". I also appreciate that this post elucidates the (according to me) canon-around-these-parts general patterns that render the specific goal of aligning AGIs sensible (e.g., compression-based analyses of optimization) and presents them as such explicitly.

The introductory examples of patterns that must be present in the general environment for certain simpler goals to make sense (especially how the absence of the pattern makes the goal stop making sense) are clear and evocative. I would not be surprised if they helped someone notice ways in which the canon-around-these-parts hypothesized patterns that render "align AGIs" a sensible goal are importantly flawed.

Judgements: Merging Prediction & Evidence
Ronny Fernandez · 4mo · 8 · 0

Curated. The problem of certain evidence is an old, fundamental problem in Bayesian epistemology, and this post makes a simple and powerful conceptual point tied to a standard way of trying to resolve it. Explaining how to think about certain evidence vs. something like Jeffrey conditionalization under the prediction-market analogy of a Bayesian agent is itself valuable. Further pointing out both that:

1) using the analogy, you can think of evidence and hypotheses as objects of the same type signature, and

2) the analogy reveals the difference between them to be quantitative rather than qualitative,

moves me much further in the direction of thinking that radical probabilism will be a fruitful research program. Unfruitful research programs rarely reveal deep underlying similarities between seemingly very different types of fundamental objects.
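
For readers who have not seen it, a standard statement of Jeffrey conditionalization (textbook material, not specific to this post's prediction-market framing) is:

```latex
% Jeffrey conditionalization over a partition {E_i}, where the new
% probabilities P_new(E_i) need not be 0 or 1:
P_{\mathrm{new}}(H) \;=\; \sum_i P_{\mathrm{old}}(H \mid E_i)\, P_{\mathrm{new}}(E_i)
% Ordinary (certain-evidence) conditionalization is the special case
% in which P_new(E_j) = 1 for a single cell E_j of the partition.
```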

Lighthaven Sequences Reading Group #7 (Tuesday 10/22)
Ronny Fernandez · 9mo · 2 · 0

There is! It is now posted! Sorry about the delay.

Lighthaven Sequences Reading Group #5 (Tuesday 10/08)
Ronny Fernandez · 9mo · 3 · 0

Hello, last time I taught a class on the basics of Bayesian epistemology. This time I will teach a class that goes a bit further: I will explain what a proper scoring rule is, and we will also do some calibration training. In particular, we will play a calibration training game called "two lies, a truth, and a probability". I will do this at 7:30 in the same place as last time. Come by to check it out.
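
(For anyone curious ahead of time, here is a minimal sketch of one standard proper scoring rule, the logarithmic score. It is my own illustration, not material from the class, and the 70% round below is made up.)

```python
import math

def log_score(prob_assigned_to_outcome: float) -> float:
    """Logarithmic scoring rule: the natural log of the probability you
    assigned to what actually happened. Higher (closer to 0) is better,
    and your expected score is maximized by reporting your honest
    probability, which is what makes the rule "proper"."""
    return math.log(prob_assigned_to_outcome)

# Hypothetical round of "two lies, a truth, and a probability":
# you say you are 70% confident in the statement you think is the truth.
p = 0.7
print(log_score(p))      # if you picked the truth:     ln(0.70) ≈ -0.36
print(log_score(1 - p))  # if you picked a lie instead: ln(0.30) ≈ -1.20
```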

Lighthaven Sequences Reading Group #4 (Tuesday 10/01)
Ronny Fernandez · 9mo · 7 · 0

Hello! Please note that I will be giving a class called "The Bayesics" in Eigen hall at 7:30. Heard of Bayes's theorem but don't fully understand what the fuss is about? Want to have an intuitive as well as formal understanding of what the Bayesian framework is? Want to learn how to do Bayesian updates in your head? Come and learn the Bayesics.
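
(A taste of the "updates in your head" part: this is the standard odds form of Bayes's theorem, not a preview of the class handout.)

```latex
% Odds form of Bayes's theorem:
\frac{P(H \mid E)}{P(\lnot H \mid E)}
  = \frac{P(H)}{P(\lnot H)} \times \frac{P(E \mid H)}{P(E \mid \lnot H)}
% Worked example: prior odds of 1:4 and a likelihood ratio of 8:1
% give posterior odds of 2:1, i.e. P(H | E) = 2/3.
```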

First Lighthaven Sequences Reading Group
Ronny Fernandez · 9mo · 2 · 0

Also, please note that after the reading group, at 7:30, I will be giving a class called "The Bayesics", where I will teach you the basics of intuitive Bayesian epistemology and how to do Bayesian updates irl, on the fly, as a human. All attending the reading group are welcome to join for that as well.

Thomas Kwa's Shortform
Ronny Fernandez · 1y · 2 · 0

I think you should still write it. I'd be happy to post it instead, or to bet with you on whether it ends up with negative karma if you let me read it first.

Posts

6 · Ronny Fernandez's Shortform · 1y · 2 comments
74 · MATS AI Safety Strategy Curriculum · 1y · 2 comments
42 · Ronny and Nate discuss what sorts of minds humanity is likely to find by Machine Learning · 2y · 30 comments
26 · High schoolers can apply to the Atlas Fellowship: $10k scholarship + 11-day program · 2y · 0 comments
19 · Aligned Behavior is not Evidence of Alignment Past a Certain Level of Intelligence · 3y · 5 comments
45 · Are We Right about How Effective Mockery Is? · 5y · 12 comments
14 · Excusing a Failure to Adjust · 5y · 2 comments
8 · Empathetic vs Intrinsic Oof · 5y · 3 comments
4 · What do you make of AGI:unaligned::spaceships:not enough food? [Question] · 5y · 3 comments
30 · Comment on Coherence arguments do not imply goal directed behavior [Ω] · 6y · 8 comments