Why coders love the AI that could put them out of a job
"When you start coding, it makes you feel smart in itself, like you're in the Matrix [film]," says Janine Luk, a 26 year-old software engineer who works in London.
Born in Hong Kong, she started her career in yacht marketing in the south of France but found it "a bit repetitive and superficial".
So, she started teaching herself to code after work, followed by a 15-week coding boot camp.
On the boot camp's last day, she applied for a job at cyber-security software company Avast, and started there a week later.
"Two and a half years later, I really think it's the best decision I ever made," she reflects.
When she started at the company, she was the first woman developer working on her team. She now spends her spare time encouraging other women, people of colour, and LGBT people to try coding.
For programmers like her, she says the most interesting shift recently has been the rise of artificial intelligence (AI) tools which can bite off increasingly big chunks of programming all by themselves.
In June, GitHub, a San Francisco-based code-hosting platform with 56 million users, revealed a new AI tool called Copilot.
You start typing a few characters of code, and the AI suggests how to finish it.
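To picture what that looks like in practice, here is a rough, hypothetical sketch: a programmer types only a function signature and a docstring, and a Copilot-style assistant proposes the rest. The `days_between` function is an illustrative example, not actual Copilot output.

```python
from datetime import date

# The programmer types only the signature and docstring...
def days_between(start: date, end: date) -> int:
    """Return the number of whole days between two dates."""
    # ...and the assistant suggests a plausible body like this one.
    return abs((end - start).days)

# The suggestion can be accepted with a keystroke, then tested.
print(days_between(date(2021, 6, 1), date(2021, 6, 29)))  # 28
```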
"The single most mind-blowing application of machine learning I've ever seen," Instagram's co-founder Mike Krieger enthused about Copilot.
It is based on an artificial intelligence called GPT-3, released last summer by OpenAI, a San Francisco-based AI lab co-founded by Elon Musk.
This GPT (which stands for generative pre-training) engine does a "very simple but very large thing - predicting the next letter in a text," explains Grzegorz Jakacki, Warsaw-based founder of Codility, which makes a popular hiring test.
OpenAI trained the AI on texts already available online such as books, Wikipedia and hundreds of thousands of web pages, a diet that was "somewhat curated but in all possible human languages," he says.
And "spookily, it wasn't taught the rules of any particular language," adds Mr Jakacki.
The result was plausible passages of text.
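To make that "next letter" idea concrete, here is a toy sketch in Python. It merely counts which character most often follows each character in a small sample text; GPT-3's neural network is incomparably more sophisticated, but the prediction task has the same shape.

```python
from collections import Counter, defaultdict

# Toy next-letter predictor: tally which character follows each
# character in the training text, then predict the most common one.
training_text = "the cat sat on the mat. the dog sat on the log."

follow_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    follow_counts[current][following] += 1

def predict_next(char: str) -> str:
    """Return the character that most often followed char in training."""
    return follow_counts[char].most_common(1)[0][0]

print(repr(predict_next("t")))  # 'h', as in "the"
```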
People have subsequently asked it to write in a variety of styles: new Harry Potter stories, for example, but in the style of Ernest Hemingway or Raymond Chandler.
Eventually the hype over GPT-3 got "way too much", and people needed reminding the AI "sometimes makes very silly mistakes", tweeted Sam Altman, OpenAI's chief executive.
Still, GitHub - whose owner, Microsoft, bought an exclusive licence to use GPT-3 in September - decided to train up another, similar model, this time on software source code instead.
GitHub is the world's largest host of source code, with at least 28 million public repositories (places where software packages are stored). So, the company has fed Copilot on a healthy diet of public code.
As a result, Copilot can provide "relatively good solutions, even though sometimes it requires some tweaking," according to Miss Luk, who has tried giving the AI coding challenges.
As a programmer, far from seeing the tool as a risk to her job, she likes the idea of having AI to support her with "the more boring parts" of coding, such as checking over the complicated pattern-matching strings, called regular expressions, that she always has to "quadruple check".
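For readers unfamiliar with them, a brief illustration of why those patterns need so much checking: a small Python regular expression for matching dates, where one missing character quietly changes what it accepts. The pattern here is illustrative, not one from Avast's code.

```python
import re

# Regular expressions pack a lot of meaning into very few characters,
# which is why programmers re-check them so carefully.
DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # exactly YYYY-MM-DD

print(bool(DATE.match("2021-06-29")))  # True
print(bool(DATE.match("2021-6-29")))   # False: the month needs two digits
# Drop the trailing $ and trailing junk slips through - a subtle,
# easy-to-miss change in what the pattern accepts.
print(bool(re.match(r"^\d{4}-\d{2}-\d{2}", "2021-06-29 extra")))  # True
```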
And, since the AI has been fed code written by professional programmers, it's really helping coders draw on their colleagues' collective wisdom, says Dina Muscanell, Vermont-based senior programmer at open-source software company, Red Hat.
There are already coding-community websites like Stack Exchange, where programmers can pose questions and get suggestions. Maybe this isn't so different?
"If you think about getting that feedback instantaneously as you're typing, that's pretty awesome. You have a team of people feeding you this code" even if there is an AI assembling it, she observes.
But professional programmers also have a few qualms about the new AI kid on the block.
One is spotting mistakes. In software engineering, "you're lucky where the garbage [rubbish] is very obvious, but this thing can generate very subtle garbage," says Mr Jakacki.
Subtle mistakes in code can be especially costly and very hard to find.
A possible future answer could involve using AI to detect bugs: for instance, noticing that certain button presses on a microwave "are valid inputs, but do not make sense". But the technology is not quite there yet.
In the meantime, "if you're not experienced, and you're just trying to learn, you could be doing something bad without being aware of that," warns Ms Muscanell.
Another big question involves ownership of this auto-generated code. What if Copilot, which has been trained on other people's programs, dishes up something near-identical to code another programmer has written, and you then use it?
Using the AI tool "can potentially violate open source licences because it can cite something from the training set," Miss Luk argues. And that could land you in hot water for plagiarism.
It's all an area "where law is not catching up with technology," Mr Jakacki says.
In theory, you could measure how much the generated code owed to one particular piece of training code: by training up a different AI on all the other source code, but leaving that piece out.
But doing this would be "extremely costly," observes Mr Jakacki.
In reality, at the moment the AI only provides short passages of code, not fully fledged software programs.
By comparison, 10,000 lines is the minimum length of website code "when you're getting some meaningful functionality", Mr Jakacki says.
So, it's not quite ready to replace human programmers yet.
Or bring about the fabled AI singularity - an idea first hypothesised by the mathematician John von Neumann, in which computer intelligence enters a runaway cycle of self-improvement and quickly far surpasses human intelligence.
And more to the point, for coders like Miss Luk, "even though it does help, it doesn't necessarily mean the workload is alleviated".
Code still needs to be thoroughly reviewed, and subjected to tests both of how it works on its own (called unit tests, sketched below) and of how it fits with other pieces of code (integration tests).
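As a rough sketch of the first kind of test, here is a minimal Python unit test; the `add_vat` function and its checks are hypothetical, not taken from any codebase mentioned here.

```python
import unittest

# Hypothetical function under test: exactly the sort of small unit
# a reviewer would still want covered, AI-assisted or not.
def add_vat(price: float, rate: float = 0.20) -> float:
    """Return the price with value-added tax applied, to the penny."""
    return round(price * (1 + rate), 2)

class AddVatTest(unittest.TestCase):
    # A unit test exercises one piece of code in isolation.
    def test_default_rate(self):
        self.assertEqual(add_vat(100.0), 120.0)

    def test_zero_rate(self):
        self.assertEqual(add_vat(100.0, rate=0.0), 100.0)

if __name__ == "__main__":
    unittest.main()
```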
Which is all just as well, she adds.
The chief reason she enjoys coding "is the problem-solving element of it, and if everything is already done for you, it takes the fun out of it," Miss Luk reflects.
If computers do too much of the thinking, "you don't get the satisfaction after solving an issue."
And she thinks there is potential for AI programming tools as they learn more and adapt - "but hopefully not so soon that we won't be needed any more," she laughs.