The flurry of news about ChatGPT (and my recent experimentation with GitHub Copilot) has got me thinking about the effect natural language models are having on software engineering. AI-assisted development is improving at a blistering pace, and not just the models themselves: people are figuring out ways to use AI for everything from explaining exceptions to writing terminal commands.

I want to hear people's thoughts on where this is going. What might software development actually look like in 3 years, concretely? Who knows, maybe it will be replaced by prompt engineering. And will this decrease the demand for technology workers, or increase it? AI capabilities are difficult to predict, so I'm also curious about what devs should start learning to stay ahead of automation.


Ustice · Dec 09, 2022 · 62

I have been using GitHub Copilot as part of my daily job for over a year. TL;DR: in three years I expect some improvement, but not beyond simple functions.

Right now, Copilot is most useful for converting data between formats and writing out boilerplate. It is surprising how often in software development (especially server-side) you need to change the shape of data. Essentially, as long as there are established patterns, it is helpful; however, do not expect it to write software for you any time soon.
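For a concrete sense of the shape-changing boilerplate I mean, here is a hypothetical sketch (the API and field names are made up) of the kind of field-by-field mapping Copilot tends to autocomplete well:

```typescript
// Hypothetical example of shape-changing boilerplate (names are made up):
// mapping an external API payload into an internal model.

interface ApiUser {
  user_id: string;
  first_name: string;
  last_name: string;
  created_at: string; // ISO-8601 timestamp
}

interface User {
  id: string;
  fullName: string;
  createdAt: Date;
}

// Repetitive, pattern-following mapping code like this is where
// autocomplete-style assistants currently shine.
function toUser(raw: ApiUser): User {
  return {
    id: raw.user_id,
    fullName: `${raw.first_name} ${raw.last_name}`,
    createdAt: new Date(raw.created_at),
  };
}
```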

So far these systems are still fairly narrowly scoped. Copilot can write a simple function, but I haven’t seen it create abstractions. It doesn’t really have an understanding of the code, and even now it isn’t very good at matching parentheses or brackets.

Now, I don’t expect the Copilot of three years from now to put me out of a job, but I do expect it will do more of the typing for me. I think I’m still going to have to convert business decisions into the right abstractions, but I hope I’ll be writing fewer tests by hand.

Until then, it’ll continue writing plausible nonsense, which sometimes happens to be useful.

One year later, what do you think about the field now?

Ustice · 4mo · 1
Pretty on track, I think. While I have seen some toy examples written entirely by an AI system, I’m not seeing complex software yet. More importantly, even in the examples I have seen, it’s still being directed by a software engineer. That said, Copilot can now do a better job of analyzing and refactoring code. I still don’t think an AI system will be able to replace me in three years. While I have seen some project-setup examples that were fairly impressive, most of it is still just saving typing.

That said, those savings are likely to become significant. Documentation alone will be a big change. Right now documentation is hard: it takes time to write, and worse, it all too easily becomes outdated as time goes on, so it’s often neglected. I expect that documentation will become a lot easier to write and maintain, as developers can just approve it as part of code review, along with corrections.

I expect that within the next year, testing will largely be written automatically. That will be amazing, as writing tests is tedious but essential. This is just the sort of task an AI assistant is perfect for, because tests are fairly self-contained.

What I hope is that within three years, I’ll be working similarly to when I’m pair-programming with another developer and they are mostly driving. Sort of a more conversational interface, too. Within that time, I still expect it will take a human to translate business requirements into the proper abstractions and tasks, but tasks that used to take a week or so to write and test will become doable in an afternoon. Basically, I feel like my productivity will be multiplied by about an order of magnitude in three years.

While this might mean that some teams will be smaller, I expect there will be much higher demand, as smaller businesses will be able to have a full software development team made up of just a few people. Mostly, though, I just think that releases will be more frequent, as
1 comment

All the programming experiments with machine learning I see are "write this code."

I would like to see some "find a bug in this code" or "refactor this code."