Kayden

Kayden · 20

Thanks for the suggestion, works great!

Kayden · 40

I agree. It's not easy to search for specific episodes in the Nonlinear Library. I open it in something like Google Podcasts and then search the page for keywords, which is cumbersome, as you said. They did mention in their original announcement post that the to-do goals are: 1. creating a compilation of top-of-all-time posts for all three forums, 2. creating forum-specific feeds, and 3. creating tags and a searchable archive.

Since they've done the first two, I hope it won't be long before they add the tagging functionality.

For the Nonlinear Library, there's a threshold too:

The current upvote thresholds for which articles are converted are:
25 for the EA forum
30 for LessWrong

I think that, given the volume of posts on LW, the curated podcast would have something similar.

Kayden · 50

Have you looked at the Nonlinear Library? I find its audio far better than the robotic-sounding output of the usual TTS. Plus, it's automated, so episodes are available soon after any post is published. It suits me well, since I like to go on long walks and listen to LW or Alignment Forum posts.

Also, there's an audio collection of the top-of-all-time posts from LW, the Alignment Forum, and the EA Forum.

Kayden · 10

The difference in text readability compared to DALL-E 2 is stark.

They have provided some examples after the references section, including some direct comparisons with DALL-E 2 for text rendered in images. Also, PartiPrompts looks like a good collection of novel prompts for evaluation.

Kayden · 70

(I don't think many people ever bother to use that particular gwern.net feature, but that's at least partially due to the link bibliography being in the metadata block, and as we all know, no one ever reads the metadata.)

I have no idea whether other people use that feature, but I definitely love it; it's one of my favorite things about browsing gwern.net.

I was directed to the story of Clippy from elsewhere (a rabbit hole starting from the Gary Marcus vs. SSC debate) and was pleasantly surprised by the reader mode (I had not read gwern.net for months). Then I came here for discussion and stumbled upon this thread explaining your reasoning for the reader mode. This is great! It's a really useful feature, and incidentally, I used it exactly the way you envisioned users would.

Kayden · 30

Agreed. Look at the wars of just the past 100 years, the Spanish flu, and the damage caused by ignorant statements from a few famous or powerful people during the COVID-19 pandemic. We start to see a picture in which a handful of people are capable of causing a large amount of damage, even if they didn't anticipate it. If they set their minds to it, as is probably the case with the Ukraine war at the moment, then the amount of destruction is wildly disproportionate to the number of people responsible for it.

Kayden · 43

I assumed that there would come a time when an AGI has exhausted all available human-collected knowledge and data.

My reasoning for the comment was something like:

"Okay, what if AGI happens before we've understood the dark matter and dark energy? AGI has incomplete models of these concepts (Assuming that it's not able to develop a full picture from available data - that may well be the case, but for a placeholder, I'm using dark energy. It could be some other concept we only discover in the year prior to the AGI creation and have relatively fewer data about), and it has a choice to either use existing technology (or create better using existing principles), or carry out research into dark energy and see how it can be harnessed, given reasons to believe that the end-solution would be far more efficient than the currently possible solutions. 

There might be types of data that we never bothered capturing which would have been useful, or even essential, for building a robust understanding of certain aspects of nature. It might pursue those data-capturing tasks, which could be bottlenecked by the amount of data needed, the time required to collect it, and so on (though far less than what humans would require)."

Thank you for sharing the link. I had misunderstood the point, but now I see. My speculation in the original comment was based on a naive understanding. The post you linked is excellent, and I'd recommend everyone give it a read.

Kayden · 20

I'm 22 (±0.35) years old and have been getting seriously involved with AI safety over the last few months, though I first chanced upon LW via SSC a few years ago (directed to SSC by Guzey), when I was 19.

The generational shift is a concern to me because, as we start losing people who've accumulated decades of knowledge (of which only a small fraction is available to read or watch), a lot of time could be wasted re-developing ideas along routes that have already been explored. Of course, there's a lot of utility in coming up with ideas from the ground up, but there comes a time when you accept and build upon an existing framework of true statements. Regardless of whether the timelines are shorter than we expect, this is a cause for concern.

Kayden · 51

I mostly agree with the points written here. It's actually Section A, Point 1 that I'd like more clarification on:

AGI will not be upper-bounded by human ability or human learning speed.  Things much smarter than human would be able to learn from less evidence than humans require to have ideas driven into their brains

When we have AGI working on hard research problems, it sounds akin to decades of human-level research compressed into just a few days, or perhaps even less. That may be possible, but often the bottleneck is not the theoretical framework or the proposed hypothesis, but waiting for experimental proof. If we say that an AGI will be a more rational agent than humans, wouldn't we expect it to accumulate more experimental evidence to test a theory, so it can estimate, for example, the expected utility of pursuing a novel course of action?

I think there would still be some constraints on this process. Humans often wait until experimental evidence has accumulated enough to validate certain theories (for example, the Large Hadron Collider project, the photoelectric effect, etc.). We need to observe nature to gather evidence that the theory doesn't fail in the scenarios where we'd expect it to fail. To accumulate such evidence, we might build new instruments to gather new types of data and validate the theory against the now-larger set of available data. Sometimes that process can take years. Just because an AGI will be smarter than humans, can we say that it'll make proportionately faster breakthroughs in research?

Answer by Kayden · 80

ChinAI takes a different approach: it bets on the proposition that for many of these issues, the people with the most knowledge and insight are Chinese people themselves who are sharing their insights in Chinese. Through translating articles and documents from government departments, think tanks, traditional media, and newer forms of “self-media,” etc., ChinAI provides a unique look into the intersection between a country that is changing the world and a technology that is doing the same.

ChinAI might be of interest to you.
