TL;DR: I explain why I think AI research has been slowing down, not speeding up, in the past few years.
How have your expectations for the future of AI research changed in the past three years? Based on recent posts in this forum, it seems that work in text generation, protein folding, image synthesis, and other fields has achieved feats beyond what was thought possible. From a bird's-eye view, it seems as though the breakneck pace of AI research is already accelerating exponentially, which would make the safe bet on AI timelines quite short.
This way of thinking misses the reality on the front lines of AI research. Beyond simply throwing more computation at the problem, innovation is stalling, and the forces that made scaling computation cheaper or...
From the abstract, emphasis mine:
The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens.
(Will edit to add more as I read. ETA: 1a3orn posted first.)
The section on broader implications is interesting. Selected quote:
...In addition, generalist agents can take actions in the physical world; posing new challenges that may require
Maybe I misinterpreted you and/or her, sorry. I guess I was eyeballing Ajeya's final distribution and seeing how much of it is above the genome anchor / medium horizon anchor, and thinking that when someone says "we literally could scale up 2020 algorithms and get TAI," they are imagining something less expensive than that (since arguably medium/genome and above, especially evolution, represent doing a search for algorithms rather than scaling up an existing algorithm, and also take such a ridiculously large amount of compute that it's weird to say we "cou... (read more)
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
Oh, cool!
I got access to DALL-E 2 earlier this week, and have spent the last few days (probably adding up to dozens of hours) playing with it, with the goal of mapping out its performance in various areas – and, of course, ending up with some epic art.
Below, I've compiled a list of observations about DALL-E, along with examples. If you want to request art of a particular scene, or to see what a particular prompt does, feel free to comment with your requests.
It's stunning at creating photorealistic content for anything that (this is my guess, at least) has a broad repertoire of online stock images – which is perhaps less interesting because if I wanted a stock photo of (rolls dice) a...
I'm curious why this prompt resulted in overwhelmingly black-looking hands, especially considering that all the other prompts I see result in white subjects being represented. Any theories?
TL;DR: We have ethical obligations not just towards people in the future, but also people in the past.
Imagine the issue that you hold most dear, the issue that you have made your foremost cause, the issue that you have donated your most valuable resources (time, money, attention) to solving. For example: imagine you’re an environmental conservationist whose dearest value is the preservation of species and ecosystem biodiversity across planet Earth.
Now imagine it's 2100. You've died, and your grandchildren are reading your will — and laughing. They're laughing because they have already tiled over the earth with one of six species chosen for maximum cuteness (puppies, kittens, pandas, polar bears, buns, and axolotls), plus any necessary organisms to provide food.
Why paperclip the world when you could bun it?
Cuteness...
If you only kept promises when you wanted to, they wouldn't be promises. Does your current self really think that feeling lazy is a good reason to break the promise? I kinda expect toy-you would feel bad about breaking this promise, which, even if they do it, suggests they didn't think it was a good idea.
If the gym were currently on fire, you'd probably feel more justified breaking the promise. But the promise is still broken. What's the difference between those two breaks, except that current you thinks "the gym is on fire" is a good reason, and "I'm feeling lazy... (read more)
Most witches don't believe in gods. They know that the gods exist, of course. They even deal with them occasionally. But they don't believe in them. They know them too well. It would be like believing in the postman.
—Terry Pratchett, Witches Abroad
Once upon a time, I was pondering the philosophy of fantasy stories—
And before anyone chides me for my "failure to understand what fantasy is about", let me say this: I was raised in an SF&F household. I have been reading fantasy stories since I was five years old. I occasionally try to write fantasy stories. And I am not the sort of person who tries to write for a genre without pondering its philosophy. Where do you think story ideas come from?
Anyway:
I was...
This got me thinking enough that it might be worth a top-level post. There are, in fact, many reasons why such people would want to enter the world of magic:
(Edited in a section about an hour after posting.)
This is primarily a response to On saving one's world.
In defence of attempting unnatural or extreme strategies
- Hard problems deserve a diverse portfolio of solution attempts, if it is not obvious which ones will succeed. This portfolio can include unnatural or extreme strategies.
(This ignores unilateralist's curse concerns, and the fact that some solution attempts may not only fail but also make other solution attempts more likely to fail. Ideally, solution attempts should be made only if their success is completely decoupled from the success of other approaches.)
Some examples of unnatural strategies: trying to build very powerful theories to understand memetics, "extreme rationality" to make it easier to convince people of AI risk, finding Pareto-optimal solutions to all conflict or to...
Wait, are there people who explicitly state that getting normal people involved will make things worse?
Clear communication is difficult. Most people, including many of those with thoughts genuinely worth sharing, are not especially good at it.
I am only sometimes good at it, but a major piece of what makes me sometimes good at it is described below in concrete and straightforward terms.
The short version of the thing is "rule out everything you didn't mean."
That phrase by itself could imply a lot of different things, though, many of which I do not intend. The rest of this essay, therefore, is me ruling out everything I didn't mean by the phrase "rule out everything you didn't mean."
I've struggled much more with this essay than most. It's not at all clear to me how deep to dive, nor how much to belabor any specific point.
From...
Thank you for curating this; I had missed this one, and it does provide a useful model of trying to point at particular concepts.
A couple years ago, Wikipedia added a feature where if you hover over an internal link you'll see a preview of the target page:
Other sites with similar features include gwern.net:
And LessWrong:
In general, I like these features a lot. They dramatically lower the barrier to following internal links, letting you quickly figure out whether you're interested. On the other hand, they do get in the way: they pop up overlapping the text you're reading, and they mean you need to pay more attention to where the mouse goes.
I decided I wanted to add a feature like this to my website, but without any overlap. The right margin seemed good, and if you're reading this on jefftk.com with a window at least 1000px wide then hovering over any link from one of my blog posts to...
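A minimal sketch of how a no-overlap margin preview like this might be wired up, assuming a hypothetical absolutely-positioned #margin-preview element and a fetch-the-page, grab-the-first-paragraph approach (this is illustrative, not the actual jefftk.com implementation):

```typescript
// Minimal sketch: show link previews in the page's right margin on hover,
// instead of in a popup that overlaps the text.
// Assumed (hypothetical) pieces: a #margin-preview element styled with
// position:absolute in the right margin, and internal links starting with "/".

const MIN_WIDTH = 1000; // px; narrower windows have no free right margin

function attachMarginPreviews(article: HTMLElement, preview: HTMLElement): void {
  article.querySelectorAll<HTMLAnchorElement>('a[href^="/"]').forEach((link) => {
    link.addEventListener("mouseenter", async () => {
      if (window.innerWidth < MIN_WIDTH) return; // no margin to draw in

      // Fetch the target page and use its first paragraph as the preview body.
      const html = await (await fetch(link.href)).text();
      const doc = new DOMParser().parseFromString(html, "text/html");
      preview.textContent = doc.querySelector("p")?.textContent ?? "";

      // Line the preview up vertically with the hovered link.
      preview.style.top = `${link.getBoundingClientRect().top + window.scrollY}px`;
      preview.style.display = "block";
    });

    link.addEventListener("mouseleave", () => {
      preview.style.display = "none";
    });
  });
}

// Usage, assuming the post body is in <article> and the margin element exists:
// attachMarginPreviews(document.querySelector("article")!,
//                      document.getElementById("margin-preview")!);
```

The point of the design is that the preview lives in its own margin element rather than a popup, so it never covers the text you're reading.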
I still see it working on Greater Wrong. Do you have any extensions that might be blocking it?
DeepMind has hundreds of researchers, and OpenAI also has several groups working on different things. That hasn't changed much.
Video generation will become viable and a dynamic visual understanding will come with it. Maybe then robotics will take off.
Yeah, I think there is so much work going on that it's not terribly unlikely that, when the scaling limit is reached, the next steps will already exist and will only have to be adopted by the big players.