All of Kayden's Comments + Replies

Announcing the LessWrong Curated Podcast

Thanks for the suggestion, works great!

Announcing the LessWrong Curated Podcast

I agree. It's not easy to search for specific episodes on the Nonlinear Library. I open it in something like Google Podcasts and then search the page for keywords. It is cumbersome, as you said. They did mention in their original announcement post that their to-do goals are 1. creating a compilation of top-of-all-time posts for all 3 forums, 2. creating forum-specific feeds, and 3. creating tags and a searchable archive.

Since they've done the first two, I hope it's not long before they add the tags functionality. 

For Nonlinear, there's a threshold to... (read more)

Announcing the LessWrong Curated Podcast

Have you looked at the Nonlinear Library? I find its narration far better than the otherwise robotic-sounding audio from the usual TTS. Plus, it's automated, and podcast episodes are available pretty soon after any post is published. I like it because I like to go on long walks and listen to LW or Alignment Forum posts. 

Also, there's an audio collection of the top-of-all-time posts from LW, the Alignment forum, and the EA forum.

1Evan R. Murphy11d
I tried Nonlinear Library a while ago but had trouble finding my groove with it. I recall finding it cumbersome to search for episodes/posts. Is there a good way to do that? It's good to know the episodes are available soon after a post comes out. That was another doubt I had about Nonlinear, not knowing if the post I wanted would be on there yet. Do you know about how long after a post is published it takes for it to appear on Nonlinear?
Google's new text-to-image model - Parti, a demonstration of scaling benefits

The readability difference, when compared to DALL-E 2, is laughable. 

They have provided some examples after the references section, including some direct comparisons with DALL-E 2 for text in images. Also, PartiPrompts looks like a good collection of novel prompts for eval.

It Looks Like You're Trying To Take Over The World

(I don't think many people ever bother to use that particular gwern.net feature, but that's at least partially due to the link bibliography being in the metadata block, and as we all know, no one ever reads the metadata.)

I don't have any idea whether people use that feature or not, but I definitely love it. One of my fav things about browsing gwern.net.

I was directed to the story of Clippy from elsewhere (rabbit hole from the Gary Marcus vs SSC debate) and was pleasantly surprised with the reader mode (I had not read gwern.net for months). Then, I came her... (read more)

4gwern6h
/sheds tears of joy that someone actually uses the link-bibliographies and noticed the reader mode
Alignment Risk Doesn't Require Superintelligence

Agreed. Look at the wars of just the past 100 years, the Spanish flu, and the damage caused by ignorant statements from a few famous or powerful people during the COVID-19 pandemic. We start to see a picture in which a handful of people are capable of causing a large amount of damage, even if they didn't anticipate it. If they set their mind to it, as is probably the case with the Ukraine war at the moment, then the amount of destruction is wildly disproportionate to the number of people responsible for it.

AGI Ruin: A List of Lethalities

I assumed there will come a time when the AGI has exhausted all available human-collected knowledge and data. 

My reasoning for the comment was something like 

"Okay, what if AGI happens before we've understood the dark matter and dark energy? AGI has incomplete models of these concepts (Assuming that it's not able to develop a full picture from available data - that may well be the case, but for a placeholder, I'm using dark energy. It could be some other concept we only discover in the year prior to the AGI creation and have relati... (read more)

Does the rationalist community have a membership funnel?

I'm 22 (±0.35) years old and have been seriously getting involved with AI safety over the last few months. However, I chanced upon LW via SSC a few years ago (directed to SSC by Guzey), when I was 19. 

The generational shift is a concern to me because as we start losing people who've accumulated decades of knowledge (of which only a small fraction is available to read/watch), it's possible that a lot of time would be wasted re-developing ideas along routes that have already been explored. Of course, there's a lot of utility in coming up... (read more)

AGI Ruin: A List of Lethalities

I mostly agree with the points written here. It's actually (Section A, Point 1) that I'd like to have more clarification on:

AGI will not be upper-bounded by human ability or human learning speed.  Things much smarter than human would be able to learn from less evidence than humans require to have ideas driven into their brains

When we have AGI working on hard research problems, it sounds akin to decades of human-level research compressed into just a few days, or perhaps even less. That may be possible, but often, the bottleneck is not th... (read more)

9Daphne_W1mo
I think Yudkowsky would argue [https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message] that on a scale from never learning anything to eliminating half your hypotheses per bit of novel sensory information, humans are pretty much at the bottom of the barrel. When the AI needs to observe nature, it can rely on petabytes of publicly available datasets from particle physics to biochemistry to galactic surveys. It doesn't need any more experimental evidence to solve human physiology or build biological nanobots: we've already got quantum mechanics and human DNA sequences. The rest is just derivation of the consequences. Sure, there are specific physical hypotheses that the AGI can't rule out because humanity hasn't gathered the evidence for them. But that, by definition, excludes anything that has ever observably affected humans. So yes, for anything that has existed since the inflationary period [https://en.wikipedia.org/wiki/Inflation_(cosmology)], the AGI will not be bottlenecked on physically gathering evidence. I don't really get what you're pointing at with "how much AGI will be smarter than humans", so I can't really answer your last question. How much smarter than yourself would you say someone like Euler [https://en.wikipedia.org/wiki/Leonhard_Euler] is than yourself? Is his ability to do scientific/mathematical breakthroughs proportional to your difference in smarts?
What is the state of Chinese AI research?

ChinAI takes a different approach: it bets on the proposition that for many of these issues, the people with the most knowledge and insight are Chinese people themselves who are sharing their insights in Chinese. Through translating articles and documents from government departments, think tanks, traditional media, and newer forms of “self-media,” etc., ChinAI provides a unique look into the intersection between a country that is changing the world and a technology that is doing the same.

ChinAI might be of interest to you.

Google's Imagen uses larger text encoder

From what I've seen so far, Imagen is more "straightforward" and does a better job of generating an image that matches the text than DALL-E 2. But DALL-E 2 seems to produce prettier images (which makes sense, given it was fine-tuned for aesthetics).

There's a GitHub repo up already, so I hope we'll be able to try an open-source version and actually test it on the same prompts as DALL-E 2. 

1Logan Zoellner1mo
It'll be interesting to see Imagen fine-tuned on laion aesthetic [https://twitter.com/rom1504/status/1528394751308865538]