What developers need to know about generative AI in 2024
Generative AI changed how software engineers work in 2023 – but the year closed with some growing concerns. What should you be aware of going into 2024?
The last 12 months have seen technology disrupt software engineering like never before. Generative AI has already transformed the way teams operate: it offers ample opportunities to innovate the way they work, while handing leaders a new set of risks to manage.
As we move into 2024, interest in artificial intelligence (AI) shows no sign of waning. In fact, the technology is set to reshape the industry as we know it. To prepare for that change, what should engineering leaders know about generative AI in 2024? We asked a clutch of experts to find out.
AI isn’t going to replace you – it’s there to help you
Advances in AI have always stoked fears of job losses, and software engineers are no exception.
Generative AI is already changing how computer science is taught, and it’s going to overhaul the way that engineers operate. But it won’t be a one-for-one replacement for human skills.
“As understanding around AI-assisted technology grows, the way developers perceive and handle AI will mature too,” says Joe Ghattas, regional vice president for the UK at low-code development platform OutSystems. “Fears around AI taking developers’ jobs will fade away and AI will be seen as a colleague who deserves trust, but also requires robust control and security standards, especially for highly sensitive scenarios.”
Ghattas says that more experienced engineers will have to unlearn the ways they’ve traditionally built software to incorporate AI helpers. “More and more developers will engage in a conversation with their platforms and leverage generative AI to give suggestions and improve the user experience,” he explains. “Software teams will be able to start projects a lot faster, accomplish much more in less time and overcome barriers that previously caused projects to stall.”
Automation with AI will help unlock new opportunities for developers, removing the repetitive parts of their job and potentially improving the developer experience significantly.
Like any human colleague, AI can underperform
In the 14 months since ChatGPT was released and the generative AI revolution got underway, businesses have had to redraw their strategies, practices and operations to adapt. But in that time, ChatGPT has also become noticeably lazier. It's an issue that its creator, OpenAI, has been forced to confront, even if the company doesn't know why it's happening. "Model behavior can be unpredictable, and we're looking into fixing it," the company posted on X (formerly Twitter).
Theories abound as to why the output of these AI models is degrading. Some believe a time and date coded into every request means the model is winding down for the holidays; others think the company is throttling performance due to cost or GPU shortages (which OpenAI denies). Whatever the reason, these issues have been a wake-up call for anyone who put all their eggs in one model's basket: AI tools aren't the endlessly hard-working helpers they were once thought to be. In fact, they might need their own HR support.
You need to think about how to regularly update your own LLMs
The pace of open-source development means that popular open models are nearly as powerful and accurate as the closed-source alternatives capturing the headlines. For organizations wanting more control over their data and the models involved, it's increasingly easy and practical to roll your own large language model (LLM).
This approach requires a heavier engineering lift, but the tradeoffs are worth considering. "One of the biggest trends, indeed the biggest challenges, facing engineers in 2024 will be taking LLM applications out of pilot and into production, due to the difficulty in managing the data pipeline," says Claire Nouet, cofounder and COO of Pathway, a data processing framework firm. "Engineers will face the challenge of ensuring that LLM applications which have moved into production can be continuously updated without creating the requirement for a full-time data engineer to maintain it by processing the batch data uploads."
Start by building a data pipeline that can ingest both batch loads of data and regular streaming updates, so the model's knowledge stays current.
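To make that concrete, here is a minimal, self-contained sketch of the idea in plain Python. All names are illustrative and this is not Pathway's API: it simply shows one store accepting both nightly batch loads and per-document streaming events, with a version number deciding which copy of a document wins. A real pipeline would also re-embed changed documents for the LLM's retrieval index.

```python
from dataclasses import dataclass, field

@dataclass
class DocumentStore:
    """Toy store merging periodic batch loads with streaming updates.

    Hypothetical sketch, not any framework's real API.
    """
    docs: dict = field(default_factory=dict)  # doc_id -> (version, text)

    def ingest_batch(self, batch):
        """Bulk load, e.g. a nightly export; stale versions are skipped."""
        for doc_id, version, text in batch:
            self._upsert(doc_id, version, text)

    def ingest_stream_event(self, doc_id, version, text):
        """Single update, e.g. from a change-data-capture feed."""
        self._upsert(doc_id, version, text)

    def _upsert(self, doc_id, version, text):
        current = self.docs.get(doc_id)
        if current is None or version > current[0]:
            self.docs[doc_id] = (version, text)

store = DocumentStore()
store.ingest_batch([("a", 1, "batch copy of doc a"), ("b", 1, "doc b")])
store.ingest_stream_event("a", 2, "fresher streamed copy of doc a")
print(store.docs["a"][1])  # the streamed update wins (newer version)
```

The key design point is that both ingestion paths converge on the same upsert logic, so neither a late-arriving batch nor a streaming event can clobber fresher data.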
Prepare to patch issues more frequently
The benefits of generative AI for developing codebases have been well argued. But it's also worth bearing in mind that the technology is far from perfect.
“In 2024, more organizations will experience major digital service outages due to poor quality and insufficiently supervised software code,” says Bernd Greifeneder, founder and CTO of software observability company Dynatrace. Whether the cause is hallucinations or just oddball coding errors that snowball into larger issues, the lack of human insight into how the code was produced can be harmful.
“The challenge of maintaining autonomous agent-generated code is similar to preserving code created by developers who have left an organization,” says Greifeneder. “None of the remaining team members fully understand the code. Therefore, no one can quickly resolve problems in the code when they arise.” To tackle this, Greifeneder believes companies will put predictive AI to the task, forecasting issues in the codebase before they arise and rolling back to the last uncorrupted, issue-free version.
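The rollback idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not Dynatrace's product: a `healthy` flag stands in for whatever signal, predictive AI or plain tests and observability data, marks a revision as good, and the rollback simply walks the history backwards to the most recent revision that passed.

```python
class CodebaseHistory:
    """Toy revision history with rollback to the last known-good version.

    Hypothetical sketch: 'healthy' stands in for a predictive-AI or
    observability signal; it would not be passed in by hand.
    """

    def __init__(self):
        self._revisions = []  # list of (code, healthy) tuples, oldest first

    def deploy(self, code, healthy):
        self._revisions.append((code, healthy))

    def rollback_to_last_good(self):
        """Return the most recent revision that passed its health check."""
        for code, healthy in reversed(self._revisions):
            if healthy:
                return code
        raise RuntimeError("no healthy revision to roll back to")

history = CodebaseHistory()
history.deploy("v1: human-reviewed release", healthy=True)
history.deploy("v2: agent-generated change that broke production", healthy=False)
print(history.rollback_to_last_good())  # recovers the last healthy revision
```

In practice the hard part is the health signal itself; once a revision can be reliably labeled good or bad, the rollback step is as mechanical as the loop above.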
The data centers you pull from are about to change – drastically
In mid-December, OpenAI reopened sign-ups for its paid ChatGPT subscription after procuring more sought-after GPUs, highlighting the pinch point this infrastructure represents for AI companies.
“Legacy facilities are ill-equipped to support widespread implementation of the high-density computing required for AI, with many lacking the required infrastructure for liquid cooling,” said Giordano Albertazzi, CEO of infrastructure provider Vertiv.
Expect to see rapid growth and innovation in this space in 2024, with new data centers cropping up that are better prepared for the intensive compute needed for AI processes. How this growth is balanced with broader sustainability goals is also an issue worth keeping an eye on.