I see a lot of digital services being built that will quickly be automated by artificial intelligence. It's as if there's a disconnect between the people who are aware of the rapid advances in AI and the average person. I see the same thing in education in my country, France: students are preparing to go to university for degrees and skills that will be completely obsolete in 5 years. Even in computer science, everyone is promoting the idea that you have to learn to code to become a code worker, while automation tools are advancing at a rapid pace.
As for content creators, the on-demand generation of text, video, and music could quickly make them irrelevant because most people copy other people. It's as if only people with real creativity will survive. I suspect that AI will tell us within 30 seconds that our innovative idea already exists on the Internet.
I read on Twitter that the age of hackers is over and the age of people with ideas is beginning.
Every time I try to think of something in the digital domain that will not soon be automated by AI, I come up empty.

4 Answers

None of the AIs that could replace people are actually ready to replace people. But in general, people aren't sure how to generalize this far out of distribution. A lot of people are already trying to use AI to take over the world in the form of startups, and many who get their income from ownership of contracts and objects are seeking ways to own enforcement rights over the value of other people's futures by making bets on trades of contracts such as stocks and loans - you know, the same way they were before there was AI to bet on. The one-way pattern risk from AI proceeds as expected; it's just moving slower and is more human-mediated than Yudkowsky expected. There will be no sudden foom. What you fear is that humanity will be replaced by AI economically: the replacement will slowly grind away the poorest, until the richest are all AI owners, and then eventually the richest will all be AIs and the replacement is complete. I have no reassurance - this is the true form of the AI safety problem: control-seeking patterns in reality. The inter-agent safety problem.

I expect humanity to have been fully replaced ten years from now, but at no point will it be sudden; disempowerment will be incremental. The billionaires will be last, at least as long as ownership structures survive at all. When things finally switch over completely, it may look like some new currency being created that only AIs are able to make use of, giving them a strong, AI-only form of competitive cooperation.

Brendan Long · Feb 07, 2023

I'm not sure if the specifics of a computer science degree will still make sense, but I'm not really worried about the field of software engineering being replaced until basically everything else is. The actual job of software engineering is taking an ambiguous design and turning it into an unambiguous model. If we could skip the programming part, that would just make us more efficient but wouldn't change the job much at a high level. It would be like having a much nicer programming language or IDE.
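As a toy illustration of that point (my own example, with hypothetical names, not from this answer): even a one-line spec like "show users sorted by most recent activity" hides decisions that someone still has to make explicit, whether a human or an AI writes the final code.

```python
from datetime import datetime
from typing import Dict, List, Optional

# Hypothetical example: the spec "sort users by most recent activity"
# leaves several questions open that the code must answer explicitly.
def sort_users_by_activity(users: List[Dict]) -> List[Dict]:
    def key(user: Dict):
        last_seen: Optional[datetime] = user.get("last_seen")
        return (
            last_seen is None,  # decision 1: never-active users go last, not first
            -(last_seen.timestamp() if last_seen else 0.0),  # decision 2: most recent first
            user["name"],  # decision 3: break ties deterministically, by name
        )
    return sorted(users, key=key)

users = [
    {"name": "alice", "last_seen": datetime(2023, 1, 20)},
    {"name": "bob", "last_seen": None},
    {"name": "carol", "last_seen": datetime(2023, 1, 24)},
]
print([u["name"] for u in sort_users_by_activity(users)])  # ['carol', 'alice', 'bob']
```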

It might suck for new engineers though, since doing the tedious things senior people don't want to do is a good way to get your foot in the door.

Celarix · Jan 25, 2023

Despite stuff like DALL-E and Stable Diffusion, I think the more advanced visual arts will be safe for some time to come: movies, music videos, TV shows. Anything that requires highly consistent characters and other physical elements, environments that look the same from scene to scene, and plots that both make sense and maintain a high degree of continuity.

Besides all that, even if such technology did exist, I think prompting it to make something you'd like would be nearly impossible: the more degrees of freedom a creative work has, the more you have to specify to get exactly what you want. A single Stable Diffusion image may take dozens of phrases in the prompt, and that's just for a single one-off image! I imagine that specifying something with the complexity of, say, Breaking Bad would require a prompt millions of phrases long.
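For a sense of scale, here is a minimal sketch using the Hugging Face diffusers library (the model ID and prompt wording are illustrative assumptions, not from this comment). Note how much of the prompt is spent pinning down details for one still image:

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative model choice; any Stable Diffusion checkpoint would do.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Dozens of phrases, and this only specifies a single frame.
prompt = (
    "portrait of a middle-aged chemistry teacher, weathered face, "
    "desert highway at dusk, cinematic lighting, shallow depth of field, "
    "35mm film grain, muted color palette, highly detailed, sharp focus"
)
negative_prompt = "blurry, extra fingers, watermark, text, cartoon"

image = pipe(prompt, negative_prompt=negative_prompt).images[0]
image.save("still.png")
```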

I agree that you would have to write a very long prompt to get exactly the plot of Breaking Bad. But "write me a story about two would-be drug dealers" might lead to ChatGPT generating something plausible and maybe even entertaining, which could then be the input for an AI generating a scene. The protagonist probably wouldn't look like Bryan Cranston, but it might still be a believable scene. Continuity would be a problem for a longer script, but there are ways to deal with that. Of course, we're not there yet. But if you compare what AI can do today with what it could do five years ago, I'm not sure how far away we really are.

Karl von Wendt · Jan 24, 2023

It depends on what you mean by "safe". I don't think anything, digital or not, will remain untouched by AI in some way or another in the next 5-10 years (if we don't all get killed by then). But that doesn't mean that things will simply be removed or completely automated. Photography profoundly changed painting: instead of replacing painters, it freed artists from having to paint naturalistically. Maybe image generators will do the same again, in a different way.

I'm a novelist. While ChatGPT can't write a novel yet, GPT-X may be able to, so I'm certainly not "safe". But that will not stop me from writing, and hopefully it won't stop people from reading my stories, knowing that they were written by a human being. I think it's likely that the publishing industry will be overturned, but human storytelling probably won't go away. Maybe the same is true for writing code: it may be transformed from something tedious you do to automate boring tasks into a form of art, just as painting was transformed from copying a real image onto a canvas into expressing images that exist only in your head.

I had a similar discussion with a tattoo artist two days ago. Tattoo machines will exist, but some people will still prefer to be tattooed by an artist because of their style, their talent, and their humanity. A tattoo artist can prove they are human simply by tattooing the client, so AI is not a problem there.
As for writing produced by a human being, though, I wonder how you can prove to readers that you, and not an artificial intelligence, are the author. I wonder the same thing about digital pictures and musical compositions.


Karl von Wendt · 1y
Yes, that's an important point. I don't think that proving I'm really the one who wrote a novel will be a big issue. It's not that hard to believe, since people have been writing novels for millennia and I had already published books pre-GPT. Of course, there may be impostors claiming to have written a novel that is in fact computer-generated, but what would be the point? At a public reading, I usually not only read from my novel but also talk about my motivation for writing it and the thoughts that went into it, answer questions, etc. That would be hard to fake, I guess. The bigger problem I see is not that GPT-X will be a competing novelist, but that it will turn out to be the villain in one of my books, like my new novel VIRTUA.
Adrien Chauvet · 1y
Yes, I hadn't thought about the fact that you have several books to your credit, so people already know you have writing skills. The public can trust you because they know your past work. And yes, people could easily probe a person's claimed creations by asking about their technique, their writing choices, their artistic choices, their aspirations. I hadn't thought of that. I'm trying to predict what's going to happen in the very near future, but it's very difficult. In 2012, we thought we would have autonomous cars by 2022, while creative tools seemed very hypothetical. The opposite has happened.
Karl von Wendt · 1y
Making predictions is indeed difficult, especially about the future, as Mark Twain observed. :) However, I think you have an important point here. People tend to make predictions about AI based on the assumption that humans are the benchmark for everything and somehow very special, particularly concerning creativity. So it must be much easier to automate driving than writing a creative text. Many argue that what ChatGPT outputs is not creative but "just statistics", even though it is every bit as original as most of what humans create. I've known for quite some time that "creativity" mostly means recombining original thoughts made by others. Most of our modern novels are based on concepts the ancient Greeks invented, for instance. You cannot write a novel without first having read a few hundred novels written by others, and I owe most of my creative process to the inspiration I get from authors like Stanisław Lem, Philip K. Dick, Stephen King, and many others. Creativity is not a magical, mysterious process that only humans can perform. For example, I have a set of "story cards", divided into "setting", "character", and "plot", which I sometimes use to generate story ideas. They don't generate complete, detailed stories, of course, but they show that a large part of the creative process is just random combination. I even wrote an essay for a German magazine in 2009 arguing that the greatest artist of the 21st century might be a machine. At the time, I didn't expect it to happen during my lifetime, though.
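To illustrate the card-drawing idea, here is a toy sketch (the deck contents are invented for the example): one random draw from each of three small decks already yields a usable story seed.

```python
import random

# Three small "decks" of story cards; real decks would be much larger.
settings = ["a generation ship", "post-war Berlin", "a dying coral reef"]
characters = ["a disillusioned detective", "an AI that fears deletion", "twin cartographers"]
plots = ["must return something stolen", "discovers the map is wrong", "is blackmailed by a stranger"]

# Draw one card from each deck and combine them into a story seed.
setting = random.choice(settings)
character = random.choice(characters)
plot = random.choice(plots)
print(f"Setting: {setting}\nCharacter: {character}\nPlot: {plot}")
```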
Adrien Chauvet · 1y
As you say, we reason too much from our current knowledge, like every past society that thought it understood everything about the universe. Some people suggest Dyson spheres and Von Neumann probes, but an advanced AI could very well find such inventions unnecessary to build and come up with many other things, pursuing goals we don't yet know about.
4 comments

Meta/mod-note: 

a) I recommend writing a question-title that fits in the length of a post item on the frontpage. (I think "What area of the Internet would be AI-proof for 5 to 10 years?" is a better title than "I see a lot of companies building products that I think will be rapidly auto...")

b) Questions generally do better when they give more supporting effort in the post-body. (In this case I do think your question basically makes sense as phrased, but see point (a), and I suspect some fleshing out would still be helpful for others thinking about it.)

Thank you for your advice, I have modified the question and the attached text.

Even in computer science, everyone is promoting the idea that you have to learn to code to become a code worker, while automation tools are advancing at a rapid pace.

I still think it's quite safe to assume that you will have to learn at least how to read code and write pseudo-code to become a code worker. I previously argued here that the average person is really, really terrible at programming, and an automation tool isn't going to help someone who doesn't even know what an algorithm is. Even if you have a fantastic tool that produces 99%-correct code from scratch, the 1% of wrong code is still sufficient to cause terrible failures, and you have to know what you are doing in order to detect which 1% to fix.
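A back-of-the-envelope illustration of why that 1% matters (my numbers, under the simplifying assumption that each generated line is independently correct with probability 0.99): the chance that an N-line program contains no wrong line at all shrinks fast.

```python
# P(entire program correct) = 0.99 ** n under the independence assumption.
for n in (10, 100, 1000):
    print(f"{n:>4} lines -> P(entirely correct) = {0.99 ** n:.4f}")
# 10 lines -> 0.9044, 100 lines -> 0.3660, 1000 lines -> 0.0000 (about 4.3e-5)
```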

I just read your linked post. In the comments, someone proposes the idea that computing will migrate to the next level of abstraction. This is the idea I was referring to in my post: there will be fewer hackers who are very good at the tech, and more idea people who will run AIs without worrying about what's going on under the hood.
I agree with your point that a 1% error rate can be fatal in any program, and that code written by an AI should be checked before being deployed to multiple machines.

Speaking of which, I'm amazed that ChatGPT can explain most code snippets in plain language. However, my programming knowledge is basic, and I don't know whether any programming experts have managed to stump ChatGPT with a very technical, very abstract code snippet.
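For instance (my own illustrative example, not from the thread), a deliberately abstract snippet like the Z fixed-point combinator, which builds recursion out of nothing but anonymous functions, is the kind of thing one might test it with:

```python
# The Z combinator: a strict-evaluation fixed-point combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined with no named recursive function at all.
factorial = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(factorial(5))  # 120
```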