Wanted to chime in and say that I've been thoroughly enjoying this sequence so far, and that it deserves far more traction than it's currently getting. Hopefully people will glance back and realise how many useful and novel thoughts/directions are packed into these posts.
Very interesting. In general I agree that concerns about EU maximisation are subtly misguided, but how would you square this result with Shard Theory? Where does Shard Theory fit in with corrigibility?
Did anyone else see this?
What learning algorithm is in-context learning? Investigations with linear models
Strong upvote here as well. The points about how even simple terminological differences can isolate research pursuits are especially pertinent, considering the tendency of people on and around LW to coin new phrases/ideas on a dime. Novel terminology is a valuable resource that we have been spending very frivolously.
Amazing stuff man! Please, please, please keep doing these for as long as you're able to find the time. Absolutely essential that LW gets regular injections of relevant work being done outside the EA-sphere.
(Would also be very interested in either SGD inductive biases or LM internal representations as the topic for next week!)
I'm sure OP is already aware of DALL-E and other generative image models.
Cards on the table: several months ago I would've agreed with you about the future of art being eaten entirely by AI. I'm much, much more sceptical now.
First of all, like any other hobby, art will still be valued by the people who do it just by virtue of being a fulfilling way to spend time, even if the works produced are never seen by another soul. Outside of just being a hobby though, I think much of art in the future (cinema, music, literature, visual art, new categories we don't yet have, etc.)...