AI – Can It Replace Your Software Developers?

It has been impossible to avoid news of the rise of AI over the last twelve months. Whether it’s ChatGPT, Midjourney and DALL-E, or more subtle applications such as Grammarly, ClickUp and Otter.ai, the range of tools that has suddenly appeared on the market has captured the imagination of the world. The hype surrounding the technology has given rise to many questions, including whether some traditional roles, currently occupied by humans, are now in danger from this emerging technology.

Are we really facing the next industrial revolution, one that will result in mass layoffs as the machines finally take all the jobs? Are we likely to see a rise in opposition, a twenty-first-century Luddite movement if you will? While the future might hold such changes, the present tells a different story.

Rise of the Machines?

The greatest challenge with any technology that transitions from the world of research into the public consciousness is the number of experts who appear out of nowhere. There quickly follow vast quantities of information and opinion, which in turn get reinterpreted, leading inevitably to more confusion than clarity. This contribution aims to add thoughtfully to the ongoing discourse, offering insights while recognising the subject’s complexity.

AI’s emergence is not a sudden phenomenon; it has been evolving for decades, advancing quietly behind the scenes of the technology industry. A pivotal moment came in 1997, when chess grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, but AI’s history stretches back even further.

In the last decade, AI-powered machines have consistently defeated top human players of strategy games. Go, often considered the most difficult of the strategy board games, began to fall as early as 2015, when DeepMind’s AlphaGo first defeated a professional player.

It hasn’t been easy though.

In more recent history, some of AI’s failures passed quietly without garnering much attention. One of the more notable was Microsoft’s Tay project, an early AI-powered chatbot (in the same vein as ChatGPT) that had to be shut down quickly after it began expressing Nazi sympathies and posting a range of lewd and racist messages. Google and Meta (formerly Facebook) have faced similar challenges in the past, with both companies forced to shut down comparable systems in short order.

Peeling Back the Layers of AI

To better understand what we are seeing, we need a basic understanding of the technology. This exploration won’t cover every detail; more comprehensive resources exist for deeper study. Instead, we’ll focus on AI’s general challenges, keeping implementation detail to a handful of small illustrative sketches and avoiding specifics such as server specifications, which are not central to this broader overview.

First, let’s remind ourselves that AI is an umbrella term that covers a lot of areas, including:

  • machine learning, which has become prominent in all the discussions on Big Data,
  • natural language processing, which is at the heart of ChatGPT,
  • computer vision, which many of us have seen in image recognition systems (it’s how your favourite photo app gathers similar pictures into collections),
  • and, of course, increasingly intelligent games.

Next, and this is a very general description: in the current landscape of mainstream AI, a fundamental approach combines learning models with reinforcement learning. The learning model is built from a training set, a dataset that mirrors the real-world context that the AI is intended to engage with.

For example, a motor insurance company might build a training set that includes information on the policies it has issued, such as driver age, vehicle age, vehicle power and so on. It would also record whether a claim was ultimately made on each policy. This representation of the world allows the AI to ‘predict’ the risk on a new policy by considering what it ‘knows’.
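As a minimal sketch of what that might look like in practice, consider the following hypothetical example; the column layout, the figures and the choice of scikit-learn are illustrative assumptions, not a real insurer’s system:

    # A minimal sketch: training a claim-risk model on a hypothetical
    # motor-policy dataset. All data and column choices are invented.
    from sklearn.linear_model import LogisticRegression

    # Each row: [driver_age, vehicle_age, vehicle_power_bhp]
    policies = [
        [22, 1, 180],
        [45, 7, 90],
        [31, 3, 120],
        [58, 10, 75],
        [19, 2, 200],
        [40, 5, 110],
    ]
    claims = [1, 0, 0, 0, 1, 0]  # 1 = a claim was made on that policy

    model = LogisticRegression()
    model.fit(policies, claims)

    # 'Predict' the risk on a new policy from what the model 'knows'
    new_policy = [[34, 4, 130]]
    print(model.predict_proba(new_policy)[0][1])  # estimated claim probability

In reality the training set would contain many thousands of policies, but the principle is the same: the model can only reflect the world described by its data.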

Here we hit the first challenge. The AI is only as good as its training set. Imagine, if you will, that of all the policies issued, only one was issued to a 34-year-old male, and that policy had three claims against it. The AI’s world view is now that 34-year-old males are an extremely high risk. If you want to improve the quality of the results from the AI, you need to provide a sufficiently large training set. Training sets can also have a built-in bias, and where that bias exists the AI will follow it, and, as we have seen over time, often amplify it.
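A toy demonstration of that skew, again with invented figures, shows how a single-policy sample can dominate the ‘world view’ for an entire group:

    # A toy illustration of sparse-data bias; all figures are invented.
    # Each policy: (driver_age, number_of_claims)
    policies = [
        (25, 0), (25, 1), (25, 0), (25, 0),  # several 25-year-olds
        (50, 0), (50, 0), (50, 0),           # several 50-year-olds, no claims
        (34, 3),                             # one 34-year-old with three claims
    ]

    by_age = {}
    for age, n_claims in policies:
        stats = by_age.setdefault(age, [0, 0])  # [total claims, policy count]
        stats[0] += n_claims
        stats[1] += 1

    for age, (total, count) in sorted(by_age.items()):
        print(f"age {age}: {total / count:.2f} claims per policy "
              f"(from {count} policies)")
    # Age 34 looks catastrophically risky, purely because the sample
    # for that group is a single policy.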

Reinforcement learning is probably a phrase we’re more familiar with. We humans gain our experience in this way: try something; if it works, do more of that; if it fails, try something else; repeat as necessary. AI systems, especially those that play games, ‘learn’ in this way. Those of you old enough to remember the early Matthew Broderick movie ‘WarGames’ will recognise this as the way that World War 3 was averted.
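As a minimal sketch of that try-and-repeat loop, here is a simplified ‘epsilon-greedy’ learner; the actions and reward probabilities are invented, and real systems are far more elaborate:

    # A minimal sketch of reinforcement learning: try an action, observe
    # a reward, and do more of what works. All numbers are invented.
    import random

    actions = ["A", "B", "C"]
    true_reward = {"A": 0.2, "B": 0.8, "C": 0.5}  # hidden from the learner
    estimates = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    epsilon = 0.1  # how often to explore rather than exploit

    for step in range(1000):
        if random.random() < epsilon:
            action = random.choice(actions)           # try something new
        else:
            action = max(actions, key=estimates.get)  # do more of what works
        reward = 1 if random.random() < true_reward[action] else 0
        counts[action] += 1
        # incrementally update the running average reward for this action
        estimates[action] += (reward - estimates[action]) / counts[action]

    print(estimates)  # the estimates converge on action "B", the best one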

Here we hit the second challenge. From a computing perspective, this approach is only viable where there are a finite number of options, or moves, at any given step in a process. Hence it is particularly appropriate for games, but also for natural language processing, where there is a finite set of words and syntax rules for structure. The quantity of data may be large, but it is finite, and the range of material available for training is huge.
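To give a feel for what ‘finite’ means here, the sketch below exhaustively enumerates every legal game of noughts and crosses, a space small enough to search completely in moments; games like chess and Go are finite in exactly the same sense, just astronomically larger:

    # A sketch of a finite option space: counting every complete game
    # of noughts and crosses (tic-tac-toe) by exhaustive search.
    def winner(board):
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
                 (0, 4, 8), (2, 4, 6)]             # diagonals
        for a, b, c in lines:
            if board[a] and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def count_games(board, player):
        # A game ends with a win or a full-board draw.
        if winner(board) or all(board):
            return 1
        total = 0
        for i in range(9):
            if board[i] is None:
                board[i] = player
                total += count_games(board, "O" if player == "X" else "X")
                board[i] = None
        return total

    print(count_games([None] * 9, "X"))  # 255168 complete games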

If you want AI to solve more general problems, you need to give it enough scenarios to cover those problems and enough data for it to ‘learn’ from. Even if we set aside the not-so-simple problems of storage and the network traffic required to move the data around, the amount of time needed for the AI to learn, and to verify that we haven’t created a ‘monster’ (see Microsoft’s Tay above), remains for now an intractable problem.

The Sentience Gap

These challenges all point us to a clearer understanding of what AI is, and perhaps more importantly, what it isn’t. AI forces us to redefine our concept of intelligence. AI as a technology demonstrates an ability to make decisions, often more quickly than humans, to process vast quantities of data in a short period of time, and to learn. All of this can lull us into believing that AI is perhaps sentient.

This is of course not the case, not yet anyway.

This lack of sentience leads us to the ultimate understanding of what AI is, and by extension what it can do. AI is superb at distilling data and combining that data in clever, sometimes even novel ways to draw some conclusions. Not all of the conclusions are correct, but with the clever application of reinforcement learning techniques, it can be trained not to draw that conclusion again.

But AI won’t innovate in the way a human can. AI, given a finite set of data, may combine and recombine, but it won’t make leaps of innovation; it will not create something completely new.

This can be hard to discern, particularly when you look at something like Midjourney, which, from a few simple prompts, will generate a completely new image based on the information provided. It’s extraordinarily impressive, but we must remind ourselves that it is combining what it already has data on with what is being requested in order to produce a result. In the end, even the amazing Midjourney is a follower, not a leader. If you want it to do more, it must be trained with additional material. Combination is a form of creation, but it is always a ‘tribute’ to something that has gone before.

So, Can I Do Without My Developers?

Unsurprisingly, there’s a connection to the past, one that is perhaps more relevant in the context of AI than in any other area of technology.

The Luddites were a labour movement in the early 19th century that resisted the rise of mass-production factories, which used unskilled labour and mechanised looms to produce their goods. The skilled textile workers were losing their influence and, as a result, their income. This is so well recognised as a protest against technology that the word ‘Luddite’ has entered the language as a synonym for technophobe.

But there’s an aspect of the Luddite argument that resonates today, particularly in the context of AI. As history has shown, the skill of weaving declined, and the apprenticeships that had allowed the industry to thrive and grow faded into obscurity, just as the Luddites warned at the time.

We now face a similar risk with AI.

On one side, much like the mechanical weaving machines, AI can rapidly execute a process based on a known pattern. The patterns in this case are the body of knowledge used as a training set for the AI. But where does this knowledge come from? Currently it is the development community who feed the many websites with answers to questions from other developers. Yet these are the very people who are at risk of being replaced. The ultimate outcome of this approach would be a steady decline in the number of experienced developers, and with it a similar decline in the development of innovative solutions.

It would be reasonable to think that the reinforcement approach would work. Surely the AI can learn to be better?

This leads us to the other side. Reinforcement learning works on the basis of exhaustive search and feedback on candidate solutions, narrowing the results down to the correct outcome. We must consider two questions: first, can we ever say in the realm of software development that there is a single ‘correct’ outcome? Second, who will be responsible for deciding what is correct?

Each of these questions requires the same resource to resolve it: an expert developer. Expert developers don’t just spring into existence; like experts in every other profession, they emerge from many years of work, gaining valuable experience through trial and error and, of course, mentoring. But if AI is used as a replacement for developers, the result will be a gradual erosion of expertise to the point where there is, once again, a stagnation of progress due to a lack of expertise.

Conclusion

AI has moved from research labs to a prominent role in public consciousness, showing immense potential to affect many aspects of our lives. The concerns expressed by actors and screenwriters, amongst others, should give us all pause for thought on whether AI is a tool to augment human creativity or a tool to replace people.

As an aid, AI has proven its value in boosting productivity and analytics, and in reducing monotonous tasks. Its use in these areas will likely permeate our daily routines, potentially bringing significant benefits if implemented thoughtfully.

However, when seen as a replacement for humans, AI’s capabilities are limited to executing repetitive tasks more efficiently and cost-effectively. It falls short in areas demanding innovation and creativity, fields where human problem-solving skills are irreplaceable. Although AI may appear capable, relying on it too heavily, particularly in pursuit of short-term cost savings, could lead to a gradual decline in innovation, quality and creativity.

In creative sectors like scriptwriting, this reliance on AI might lead to uniform storytelling, recycling old narratives without introducing fresh ideas. A similar risk looms in the technology sector, where overdependence on AI could stifle product innovation.
