Reasonable points, all! I agree that the conflation of legality and morality has warped the discourse around this; in particular the idea of Stable Diffusion and such regurgitating copyrighted imagery strikes me as a red herring, since the ability to do this is as old as the photocopier and legally quite well-understood.
It actually does seem to me, then, that style copying is a bigger problem than straightforward regurgitation, since new images in a style are the thing you would ordinarily need to go to an artist for. But the biggest problem of all is that, fundamentally, every art style is an imperfect but pretty good market substitute for every other art style.
(Most popular of all the art styles-- to judge by a sampling of images online-- is hyperrealism, which is obviously a style that nobody can lay either legal OR moral claim to.)
So I think that if Stability tomorrow came out with a totally unimpeachable version of SD with no copyrighted data of any kind (but with a similarly high quality of output) we would have, essentially, the same set of problems for artists.
Interestingly, I believe this is a limitation that DeepFloyd, one of the newest (as yet unreleased) diffusion models, has overcome; a number of examples have been teased already, such as a corgi sitting in a doghouse made of sushi.
As such the quoted paragraphs surprised me as an instance of a straightforwardly falsifiable claim in the documents.
I think that your son is incorrectly analogizing heroin and other opiate cravings to things like "desire for sugar" or "desire to use X social media app." These are not comparable. People do not get checked into sugar rehab clinics (which they subsequently break out of); they do not burn down every one of their social connections to get an hour of TikTok; they do not break their own arms in order to get taken to the ER, which then pumps them full of Twitter likes. They do routinely do these things, and worse, to delay opiate withdrawal symptoms.
(For reference, my wife is a paramedic and she has seen this last one firsthand. Tell me: have you ever, in your life, had something you wanted so much that you would break one of your own limbs to get it?)
Another way of putting this is that opiate use frequently gives you a new utility function where the overwhelmingly dominant term is "getting to consume opiates."
For reference, I'm not automatically suspicious of drugs-- I wrote https://www.lesswrong.com/posts/NDmbnaniJ2xJnBASx/perhaps-vastly-more-people-should-be-on-fda-approved-weight .
"believes he has enough self control to not get addicted"
So first, as the poster above points out, there is not a good way to establish this. You have certainty on this topic well above what the evidence merits.
But leaving that aside. A lot of the core issue here is that the risk/reward profile absolutely sucks for recreational opiates given almost any reasonable set of initial assumptions.
Like, suppose you're right and you don't get addicted. Then you have... discovered a new hobby, I guess? Whereas if you're wrong, your life is pretty much destroyed, as is the life of everyone who loves you most.
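To make the "risk/reward profile sucks under almost any reasonable assumptions" point concrete, here's a toy expected-value sketch. Every number in it is an illustrative assumption, not an empirical estimate; the point is that the downside is so catastrophic that the bet stays negative even at implausibly low addiction probabilities.

```python
# Toy expected-value sketch of the "try opiates recreationally" bet.
# All numbers are illustrative assumptions, not empirical estimates.
p_addiction = 0.25           # assumed chance a recreational user ends up addicted
value_new_hobby = 1.0        # modest upside if you stay in control
value_ruined_life = -1000.0  # catastrophic downside if you don't

ev = (1 - p_addiction) * value_new_hobby + p_addiction * value_ruined_life
print(round(ev, 2))  # -249.25
```

Note that with these (made-up) payoffs, the expected value only turns positive if the addiction probability drops below roughly 0.1% — which is not a bet the evidence supports.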
EDIT: Another pretty-routine circumstance my wife runs into at work: Narcan injections are used to bring somebody back if they've stopped breathing due to opiate overdose. Patients need to be restrained beforehand since they will frequently attack providers out of anger for ruining their high, even after it is pointed out to them that they weren't breathing and were approx. 1 minute from death.
I actually think you can get an acceptable picture of whether something is priced in by reading stock analysts on the topic, since one useful thing you can get from them is a holistic perspective of what is on/off the radar of finance types, and what they perceive as important.
Having done this for various stocks, I actually do not think LLM-based advances are on anyone's radar, and I do not believe they are priced in meaningfully.
I don't think I ever heard about Tesla doing LLM stuff, which seems like the most relevant paradigm for TAI purposes. Can you elaborate?
One possible options play is puts on Shutterstock, since as of about two weeks ago Midjourney reached a level where you can, for a pittance, replicate the most common and popular stock-image varieties at extremely high quality (e.g., a girl holding a credit card and smiling).
I think the most likely way this shakes out is that Adobe integrates image generation with Figma and its other products, leaving "buying a stock image" as an increasingly niche and limited option for people who want an image to decorate a thing and aren't all that particular about what the image is.
The primary question to me is on what time scale the SSTK business model dissolves, since these changes take time.
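For anyone unfamiliar with how the put side of this trade works, the payoff arithmetic is simple. The strike and premium below are made-up numbers purely for illustration, not a recommendation:

```python
# Hypothetical long-put P&L on SSTK; strike/premium are made-up numbers.
def put_pnl(spot_at_expiry: float, strike: float, premium: float) -> float:
    """P&L per share of buying one put: intrinsic value at expiry minus premium paid."""
    return max(strike - spot_at_expiry, 0.0) - premium

# If SSTK falls from an assumed $50 strike to $30, a put bought for $5 nets $15/share.
print(put_pnl(30.0, 50.0, 5.0))  # 15.0
# If the thesis is wrong and the stock holds up, the loss is capped at the premium.
print(put_pnl(60.0, 50.0, 5.0))  # -5.0
```

The capped downside is what makes puts attractive here: you're paying a fixed premium for exposure to a thesis whose timing (per the question above) is genuinely uncertain.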
Having a Ph.D. confers relatively few benefits outside of academia. The writing style and skills taught in academia are very, very different from those of industry, and the opportunity cost of pursuing a Ph.D. vs. going into software engineering (or something similarly remunerative) is in the hundreds of thousands of dollars.
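The back-of-the-envelope arithmetic behind that opportunity-cost claim, with salary figures that are illustrative assumptions rather than data:

```python
# Back-of-the-envelope Ph.D. opportunity cost; figures are illustrative
# assumptions, not measured data.
phd_years = 5
stipend = 35_000      # assumed annual Ph.D. stipend
swe_salary = 150_000  # assumed annual software-engineering compensation

opportunity_cost = phd_years * (swe_salary - stipend)
print(opportunity_cost)  # 575000
```

Even with much more conservative salary assumptions, the gap stays comfortably in the hundreds of thousands, before accounting for raises, savings compounding, or equity.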
I would suggest that if you don't know exactly what you want to do with your life, you would be well-suited to doing something that earns you a bunch of money. This money can later be used to finance grander ambitions when you have figured out what you want to do.
I'll turn this question around on you: why is a Ph.D. the best way of accomplishing what you want to do?
As to the drudgery of office work-- "office work" is, I think, a false category. I spent hours of unbearable tedium performing repetitive reactions in lab during my Ph.D., and my current cushy Microsoft engineering job is enormously more creative and interesting while paying approximately 10x as much. For someone with the smarts to get a Ph.D., retraining into engineering is very, very easy.
One other generally undiscussed aspect of the working world is that, for a number of reasons, your employers mostly treat you with respect roughly proportional to your salary. Ph.D.s, consequently, are often treated very poorly. This probably contributes to their poor mental health, as documented elsewhere.
My response comes in two parts.
First part! Even if, by chance, we successfully detect and turn off the first AGI (say, Deepmind's), that just means we're "safe" until Facebook releases its new AGI. Without an alignment solution, this is a game we play more or less forever until either (A) we figure out alignment, (B) we die, or (C) we collectively, every nation, shutter all AI development forever. (C) seems deeply unlikely given the world's demonstrated capabilities around collective action.
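One way to see why "a game we play more or less forever" is so dangerous: treat each new unaligned deployment as an independent shot at catastrophe. This is a deliberately crude toy model, and both `p_catastrophe` and the deployment counts are illustrative assumptions:

```python
# Toy model: if each unaligned AGI deployment carries an independent
# probability p of catastrophe, survival odds decay geometrically.
def p_survive(p_catastrophe: float, n_deployments: int) -> float:
    """Probability of surviving n independent deployments."""
    return (1 - p_catastrophe) ** n_deployments

print(round(p_survive(0.1, 1), 3))   # 0.9   -- one deployment looks survivable
print(round(p_survive(0.1, 20), 3))  # 0.122 -- twenty deployments mostly aren't
```

Even a modest per-deployment risk compounds into near-certain loss over enough rounds, which is why "we caught the first one" buys so little without an actual alignment solution.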
I like Bitcoin as a proof of concept here: it's decentralized, it pays the people who run it, and plenty of governments have actively tried to stamp it out.
This is an existence proof that there are some software architectures that today, right now, cannot be eradicated in spite of a great deal of concerted societal effort going into just that. Presumably an AGI can just ape these successful characteristics in addition to anything else it does; hell, there's no reason an AGI couldn't just distribute itself as particularly profitable Bitcoin-mining software.
After all, are people really going to turn off a computer making them hundreds of dollars per month just because a few unpopular weirdos are yelling about far-fetched doomsday scenarios around AGI takeover?
"If you think this is a simplistic or distorted version of what EY is saying, you are not paying attention. If you think that EY is merely saying that an AGI can kill a big fraction of humans in accident and so on but there will be survivors, you are not paying attention."
I'm not sure why this functions as a rebuttal to anything I'm saying.
Re: blameless postmortems, I think the primary reason for blamelessness is that if you have blameful postmortems, they will rapidly transform (at least in perception) into punishments, and consequently will not often occur except when management is really cheesed off at someone. This was how the postmortem system ended up at Amazon while I was there.
Blameful postmortems also result in workers who are very motivated to hide issues they have caused, which is obviously unproductive.