
Note that I agree with your sentiment here, although my concrete argument is basically what LawrenceC wrote as a reply to this post.

Ryan, this is kind of a side-note but I notice that you have a very Paul-like approach to arguments and replies on LW.

Two things that stand out:

  1. You have a tendency to reply to certain posts or comments with "I don't quite understand what is being said here, and I disagree with it," or "It doesn't track with my views," or equivalent replies that aren't very useful for understanding your object-level arguments. (Although I notice that in your recent comments you usually follow this up with some elaboration on your model.)
  2. In the comment I'm replying to, you use a strategy of black-box-like abstract modeling of a situation to argue for a conclusion, one that usually involves numbers such as multipliers or percentages. (I have the impression that Paul uses this a lot; one concrete example that comes to mind is the takeoff speeds essay.) I usually consider such arguments invalid when they seem to throw away information we already have, or when they use a set of abstractions that don't feel appropriate to the information I believe we have.

I just found this interesting and plausible enough to highlight to you. It's a moderate investment of my time to dig up examples from your comment history illustrating all these instances, but writing this comment still seemed valuable.

This is a really well-written response. I'm pretty impressed by it.

If your acceptable lower limit for basically anything is zero, you won't be allowed to do anything, really anything. You have to name some quantity of capabilities progress that's okay to do before you'll be allowed to talk about AI in a group setting.

"The optimal amount of fraud is non-zero."

Okay, I just read the entire thing. Have you looked at Eric Drexler's CAIS proposal? It seems to have played some role as a precursor to the davidad / Evan OAA proposal, and it involves the use of composable narrow AI systems.

but I’m a bit disappointed that x-risk-motivated researchers seem to be taking the “safety”/”harm” framing of refusals seriously

I'd say a more charitable interpretation is that it is a useful framing: both as a concrete thing one could use as scaffolding for alignment-as-defined-by-Zack research progress, and also as a thing that is financially advantageous to focus on, since frontier labs are strongly incentivized to care about it.

Haven't read the entire post, but my thoughts on seeing the first image: pretty sure this is priced into the Anthropic / Redwood / OpenAI cluster of strategies, where you use an aligned, boxed (or 'mostly aligned') generative LLM-style AGI to help you figure out what to do next.

e/acc is not a coherent philosophy and treating it as one means you are fighting shadows.

Landian accelerationism at least is somewhat coherent. "e/acc" is a bundle of memes that support the self-interest of the people supporting and propagating it, both financially (VC money, dreams of making it big) and socially (the non-Beff e/acc vibe is one of optimism, hope, and doing things -- engaging with the object level -- instead of just trying to steer social reality). A more charitable interpretation is that the philosophical roots of "e/acc" are founded upon a frustration with how bad things are and a desire to improve things yourself. This is a sentiment I share and empathize with.

I find the term "techno-optimism" to be a more accurate description of the latter, and perhaps "Beff Jezos philosophy" a more accurate description of what you have in mind, while "e/acc" mainly describes the community and its coordinated efforts at steering the world towards outcomes that the people within it perceive as benefiting them.

I use GreaterWrong as my front-end to interface with LessWrong, AlignmentForum, and the EA Forum. It is significantly less distracting and also doesn't make my ~decade-old laptop scream in agony when multiple LW tabs are open in my browser.

The main part of the issue was actually that I was not aware I had internal conflicts. I just mysteriously felt less emotions and motivation.

Yes, I believe that one can learn to entirely stop even considering certain potential actions as available to oneself. I don't really have a systematic solution for this right now, aside from some form of Noticing practice (I believe a more refined version of this practice is called Naturalism, but I don't have much experience with it).
