Is self-healing code the next stage of GenAI?

Generative AI is revolutionizing the world of programming. Tools like GitHub Copilot and ChatGPT have given developers the ability to generate code almost instantly, further democratizing the industry.

Despite initial concerns, most developers aren't worried about GenAI taking over their jobs in the foreseeable future. While these tools readily automate tedious or repetitive tasks, such as testing, debugging and creating boilerplate code, they are currently no match for the creativity, intuition, and problem-solving abilities of human developers.

However, despite potential productivity gains, developers have been cautious in adopting the new GenAI tech, no doubt in large part due to frequent 'hallucinations' that breed mistrust of GenAI's output. Research from Purdue University demonstrates this limitation: when ChatGPT was asked 517 questions from Stack Overflow, 52% of its answers were incorrect and 77% were overly verbose. Unsurprisingly, Stack Overflow's latest developer survey, which tracks the preferences and sentiments of over 90,000 developers globally, indicates that a mere 3% completely trusted the output of their AI tools.

The next stage

What will the next stage of GenAI's evolution hold for software development, and will we see trust in, and the accuracy of, the data feeding the rapidly diversifying field of large language models grow? Given that LLMs are typically built on human-generated data and content, and humans make errors and have biases, hallucinations will remain a concern for GenAI-created content for some time to come.

However, one of the more fascinating aspects of LLMs is their ability to improve their output through self-reflection. Feed the model its own response back, then ask it to improve the response or identify errors, and it has a higher chance of producing something factually accurate or applicable for its users. Ask it to solve a problem by showing its work, step by step, and these systems are more accurate than those tuned just to find the correct final answer.
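
To make this concrete, here is a minimal sketch of that reflection loop in Python. It is illustrative only: complete() is a hypothetical stand-in for whatever chat-completion API a team uses, and the prompts are placeholders.

```python
def complete(prompt: str) -> str:
    """Placeholder for a call to an LLM chat-completion endpoint."""
    raise NotImplementedError("Wire this up to your model provider of choice.")


def answer_with_reflection(question: str, rounds: int = 2) -> str:
    """Draft an answer, then ask the model to critique and revise it."""
    # First pass: asking for step-by-step work tends to improve accuracy.
    answer = complete(f"Answer the following step by step: {question}")
    for _ in range(rounds):
        # Feed the model its own response back and ask for a critique.
        critique = complete(
            f"Question: {question}\nAnswer: {answer}\n"
            "List any factual or logical errors in this answer."
        )
        # Ask for a revision that addresses the critique.
        answer = complete(
            f"Question: {question}\nAnswer: {answer}\nCritique: {critique}\n"
            "Rewrite the answer so it fixes the issues listed."
        )
    return answer
```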

Self-healing code refers to code that automatically corrects itself when issues occur, saving thousands of hours for developers who would otherwise have to test code for bugs, then identify and correct them by hand. A more guided, auto-regressive approach to large language models is emerging as the preferred usage pattern; in theory, it brings the concept of self-healing code closer to reality and could be applied to the creation, maintenance, and improvement of code at an entirely new level.

Self-healing code in 2024?

Google is already using machine learning tools to accelerate the process of resolving code review comments, improving productivity and allowing technologists to focus on more creative and complex tasks.

Additionally, some intriguing recent experiments apply this review capability to code that developers are trying to deploy. In simple terms, this is a use case where AI reviews code, identifies errors, tests a fix, and then redeploys the resolved code.
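
As a rough illustration (not any vendor's actual pipeline), such a loop might look like the sketch below, reusing the hypothetical complete() helper from earlier. The pytest invocation and the direct file overwrite are deliberate simplifications; a real pipeline would sandbox the change, diff it, and gate redeployment behind review.

```python
import subprocess


def run_tests() -> subprocess.CompletedProcess:
    """Run the project's test suite and capture its output."""
    return subprocess.run(["pytest", "-x"], capture_output=True, text=True)


def self_heal(source_path: str, max_attempts: int = 3) -> bool:
    """Re-run tests, asking the model to repair the file after each failure."""
    for _ in range(max_attempts):
        result = run_tests()
        if result.returncode == 0:
            return True  # Tests pass: the code is safe to redeploy.
        with open(source_path) as f:
            source = f.read()
        # Hand the model the failure output and the file, and ask for a fix.
        fixed = complete(
            f"These tests failed:\n{result.stdout}\n"
            f"Here is the file under test:\n{source}\n"
            "Return a corrected version of the complete file."
        )
        with open(source_path, "w") as f:
            f.write(fixed)  # A real pipeline would diff and review this first.
    return False
```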

In the near term, self-healing code will likely only be a reality for streamlining the pull-request process. In this case, AI can fix code based on comments from a reviewer, and the result is then reviewed and approved by the code's original author. The mainstream availability of a universal and trusted 'self-healing' AI function is still some way off.
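
That pull-request flow is straightforward to sketch, assuming once more the hypothetical complete() helper; the diff and comment formats here are placeholders for whatever the code host's API provides.

```python
def propose_fix(diff: str, reviewer_comment: str) -> str:
    """Ask the model for a revised diff addressing a reviewer's comment.

    The output is only a suggestion; the original author still reviews
    and approves it before anything is merged.
    """
    return complete(
        f"Proposed change:\n{diff}\n"
        f"Reviewer comment:\n{reviewer_comment}\n"
        "Return an updated diff that addresses the comment."
    )
```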

Improving data quality

Developers willing to embrace GenAI tools will no doubt see the benefits of improved efficiency, productivity, and learning. To bolster their trust in the technology's output, though, the data used to feed LLMs will need to be carefully examined and selected for quality.

A sound LLM is one trained on high-quality data, which will, in turn, help to reduce hallucinations, inaccuracies, and inconsistencies. That builds trust in the simple tasks before developers consider deploying these models for more conceptual and experimental use cases, including self-healing code.

The regulation of AI has emerged as a key focus in 2024 across enterprise organizations and for policymakers across the globe. Coupled with increasing pressure from the technology community and wider society for LLM developers to consider the impact of AI-generated material, the need to address the quality of data that AI models are built on is more critical than ever. By bringing together data, human experience, and community sources, developers can ensure a future where the next generation of technologies is built on a strong foundation of accurate and trusted data.

This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro