The Secure Coding Best Practices [Cheat Sheet]

Unlock quick recommendations to fortify your code against vulnerabilities. This quick-reference guide is packed with actionable insights to help developers avoid common security pitfalls and build resilient applications.


What is secure coding?

Secure coding is the practice of developing software that is resistant to security vulnerabilities by applying security best practices, techniques, and tools early in development. Instead of thinking only about user experience, secure coding aligns every feature with security measures—right from the beginning of the software development lifecycle.

For example, an application that accepts all data from a client without sanitizing it might be easier to implement, use, and maintain. However, it opens an entry point for attackers to inject malicious code.

What are common code vulnerabilities?

Let’s look at common security vulnerabilities software developers and security researchers have identified. We’ll go from low-level issues like memory vulnerabilities to higher-level problems like injection attacks.

Buffer overflows

Buffer overflows can crash your application or allow attackers to write data into other buffers. 

System programming languages like C and C++ are prone to this vulnerability. They allow, and even require, explicit memory management but don’t check memory accesses until it’s too late. If you write more data into a buffer than you allocated for it at definition time, C will overwrite whatever data follows the end of the buffer.

Example of a buffer overflow in C:

int b[5];
b[5] = 999; // buffer only goes from 0 to 4

Use after free

Use after free happens when you free memory on the heap but keep using the old pointer.

Again, this vulnerability is prominent in languages without garbage collection, like C and C++, where you must manage memory manually. There are two regions of memory: the stack and the heap. The language manages the stack automatically, but the stack can’t hold data whose size isn’t known at compile time. The heap is for dynamic data, and you must allocate and free space on it yourself. Freeing tells the allocator that you no longer need the memory, so if you keep using the old pointer afterward, the access reads or writes memory your program no longer owns.

Example of use after free in C:

char* p = strdup("Some text!"); // allocates and fills a buffer on the heap
free(p);                        // the memory is returned to the allocator
printf("%s", p);                // use after free: reads the freed memory

Double free

In the case of double free, you are freeing heap memory after you have already freed it. 

Double free is an issue in languages with manual memory management, where you must explicitly tell the allocator that you no longer need a specific memory range. Doing so twice can corrupt the allocator’s internal bookkeeping and crash the program, much like a use-after-free. It often happens when multiple objects hold pointers to the same memory and each frees it at some point.

Example of double free in C:

char* p = strdup("Some text!"); // allocates and fills a buffer on the heap
free(p);                        // first free: the memory is released
free(p);                        // second free: corrupts the allocator's bookkeeping

Insecure deserialization

Insecure deserialization involves directly transforming an external data structure (e.g., JSON, XML, etc.) to an internal one (e.g., objects, arrays, etc.) without sufficient checks.

Insecure deserialization is a common vulnerability in all kinds of applications. Accepting unsanitized data might be convenient during development, but if the same code runs in production, users can sneak in malicious data unnoticed.

Example of insecure deserialization in JSON:

{
  "name": "example",
  "email": "email@example.com",
  "isAdmin": true // should be deleted on the server
}
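
One way to defuse this is to deserialize into an allow-list of expected fields. Below is a minimal sketch, assuming a Node.js server built with Express (the route and field names are hypothetical):

const express = require("express");
const app = express();
app.use(express.json());

// Copy only the fields the server expects; never trust
// client-supplied flags like isAdmin.
app.post("/users", (req, res) => {
  const { name, email } = req.body; // isAdmin is ignored even if sent
  const user = { name, email, isAdmin: false };
  res.status(201).json(user);
});

app.listen(3000);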

Memory leaks

Memory leaks let your application consume memory without bounds. If you exhaust the available memory and request more, your application will crash. 

Every sufficiently complex application is susceptible to this vulnerability. Even garbage-collected languages aren’t safe from memory leaks: they still allow you to build data structures, such as ever-growing caches, that the garbage collector can never reclaim because your code keeps referencing them.
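
Example of a memory leak in JavaScript (a minimal sketch; the cache and handler names are illustrative):

// The cache is module-global and never evicted, so every request
// keeps its payload reachable and the process grows without bounds.
const cache = new Map();

function handleRequest(requestId, payload) {
  cache.set(requestId, payload); // entries are added but never removed
  // ... handle the request ...
}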

Injection flaws

Executing user input as code without validating it is known as an injection flaw.

This issue can affect all applications, regardless of the programming language used. One way to become vulnerable is to let users add custom code as a feature without properly sandboxing its execution, as sketched below. Buffer overflows that let attackers write code into executable memory locations are another path to injection flaws.
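
Example of an injection flaw in JavaScript (a minimal sketch of a hypothetical "calculator" feature that evaluates user input directly):

// Input such as "2 + 2" works as intended, but input like
// "require('child_process').execSync('cat /etc/passwd')" runs
// attacker-controlled code with the application's privileges.
function calculate(userExpression) {
  return eval(userExpression); // user input executed as code
}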

Cross-site scripting (XSS)

Cross-site scripting is a web-specific version of an injection flaw. Here, an attacker inserts custom JavaScript hidden inside HTML markup.

XSS can affect any website that renders user-supplied content. Because markup and executable code are tightly integrated on the web, it’s easy to sneak JavaScript into HTML, where it can leak sensitive data such as session cookies.

Example of XSS in HTML and JavaScript:

<!-- this will send a fetch request 
when the mouse is over the <p> element -->
<p onmouseover="fetch('//example.com')">Hello, world!</p>

XML external entities (XXE)

XML external entities are another instance of an injection flaw. All applications that parse XML are susceptible to this attack. External entities exist to allow reuse of existing XML content. However, an attacker can abuse this feature to reference local files on the server, allowing them to read private data indirectly through their uploaded XML file.

Example external XML entity injection:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE a [  
  <!ELEMENT a ANY >
  <!-- this defines a new entity called xxe
  from a private file -->
  <!ENTITY xxe SYSTEM "file:///etc/passwd" >
]>
<!-- here the entity is rendered to display 
the file content -->
<a>&xxe;</a>

Insecure direct object reference (IDOR)

When a public API references objects directly by sequential IDs, IDOR lets attackers guess the IDs of all other objects on the server.

This issue can appear anywhere sequential IDs are used to reference objects, and it is especially serious when those IDs point to both public and private objects without an authorization check.

Example URLs:

https://example.com/users/4539

https://example.com/users/4540

https://example.com/users/4541
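
Example of an IDOR-prone endpoint in JavaScript (a minimal sketch of a hypothetical Express route; the database API is illustrative):

// Any caller who can reach this route can walk through
// /users/4539, /users/4540, /users/4541, ... and read every record.
app.get("/users/:id", async (req, res) => {
  const user = await db.users.findById(req.params.id);
  res.json(user); // no check that the requester may view this record
});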

Directory traversal (aka path traversal)

Directory traversal is another injection flaw: attackers navigate paths or directory structures via file-name inputs.

All applications that accept file names as input are susceptible to this vulnerability. Directory traversal can happen when users upload multiple files that reference each other via relative paths. Attackers can use traversal sequences like ".." to escape their upload directory on the server and reach directories containing files from admins or other users.

Example of directory traversal in JavaScript on Node.js:

// the relative path climbs out of the intended directory
// and loads a private server-side module
const template = require("../../../server/config/database")
render(template)
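
A minimal sketch of a guard against traversal in Node.js (the function and directory names are illustrative): resolve the user-supplied name against the upload directory and reject anything that escapes it.

const path = require("path");

function safeJoin(uploadDir, userFileName) {
  const resolved = path.resolve(uploadDir, userFileName);
  if (!resolved.startsWith(path.resolve(uploadDir) + path.sep)) {
    throw new Error("path traversal attempt"); // "../" escaped the directory
  }
  return resolved;
}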

Code security standards

Secure coding standards are sets of guidelines and best practices that developers follow to create secure software and minimize vulnerabilities. They address common coding mistakes and weaknesses that attackers can exploit, aiming to produce code that is more resilient to attack.

Below are the common secure code standards to follow:

1. OWASP Secure Coding Practices:

OWASP Secure Coding Practices (SCP) is a comprehensive set of guidelines and recommendations for developing secure software applications. It's created and maintained by the Open Web Application Security Project (OWASP), a non-profit organization dedicated to improving software security. Here are some of its key focus areas:

  • Input validation and sanitization: Scrutinizing all user input to prevent injection attacks like SQL injection and cross-site scripting (XSS).

  • Authentication and authorization: Enforcing robust authentication mechanisms and restricting access to authorized users and actions.

  • Session management: Securing session IDs to thwart session hijacking.

  • Encryption: Safeguarding sensitive data using encryption at rest and in transit.

  • Error handling and logging: Implementing proper error handling to avoid disclosing sensitive information and logging events for security auditing.

2. CERT Secure Coding Standards:

CERT Secure Coding Standards (SCS) are a set of guidelines and recommendations developed by the Software Engineering Institute (SEI) at Carnegie Mellon University to help developers write secure code and prevent vulnerabilities. Key areas of focus:

  • Language-specific guidelines: Offering recommendations for C, C++, Java, Android, and Perl to address common vulnerabilities in those languages.

  • Defensive programming: Emphasizing anticipating and handling errors gracefully to prevent exploitation.

  • Memory management: Focus on preventing buffer overflows and memory leaks, especially in languages like C and C++.

3. NIST Secure Coding Guidelines:

NIST's secure coding guidance, most notably NIST Special Publication 800-218 (the Secure Software Development Framework, SSDF), is a set of recommendations developed by the National Institute of Standards and Technology (NIST) to help developers write secure code and mitigate vulnerabilities. Key areas of focus:

  • Input validation and sanitization: Preventing injection attacks like SQL injection and cross-site scripting (XSS) by ensuring user-provided data is safe.

  • Authentication and authorization: Enforcing robust authentication mechanisms and secure session management.

  • Encryption: Using encryption techniques to protect sensitive data at rest and in transit.

  • Error handling and logging: Implementing proper error handling to avoid disclosing sensitive information and logging security-related events.

  • Memory management: Preventing memory-related vulnerabilities like buffer overflows and memory leaks, especially in languages like C and C++.

4. ISO/IEC 27001:

ISO/IEC 27001 is an international information security standard. While it's not specifically a secure coding standard, it does include requirements for secure coding practices as part of a comprehensive security management approach. Annex A control 8.28 (Secure coding) specifically addresses secure development and requires organizations to:

  • Develop secure coding processes for in-house development and third-party code.

  • Stay informed about evolving threats and vulnerabilities.

  • Implement robust secure coding principles to address them.

How to avoid common software vulnerabilities

Now that we’ve looked into the common issues, let's explore potential solutions. (If you want a more thorough resource on the topic, check out the OWASP secure coding practices in the OWASP Developer Guide. It’s still a draft, but it features invaluable security tips.) Here are the top three secure coding tips:

1. Use modern languages and tools

Many memory-related security vulnerabilities affect programming languages with manual memory management and no built-in memory checks. When starting a new project, make sure you really require C/C++ for it, and if you do, use smart pointers and static code analyzers to minimize the impact of language flaws.

If you need system programming features, a more modern language like Rust can be a good choice because its type system checks memory use at compile time. Zig might also be a good alternative, as it has no hidden control flow or memory allocations.

If you don’t need system programming features, using a garbage-collected language like Java or C# can protect you from many memory issues.

2. Validate and sanitize input and output data

Unvalidated user data is the prime cause of injection flaws. That’s why it’s critically important to validate all data that enters your system. Sanitization is another step that can keep security in check without sacrificing usability: instead of rejecting invalid input outright, sanitization cuts out the problematic parts (e.g., JavaScript inside HTML) and keeps the rest. When running in a client-server environment, make sure this validation and sanitization happens on the server. This means adding validators and sanitizers to every API endpoint that accepts user data. It can also mean choosing data formats that are easy to validate, for example accepting simple Markdown instead of fully-fledged HTML.

Keeping input data clean isn’t always possible; validation libraries have bugs, too. To ensure nothing leaks through to your users, only display output derived from user input in a safe way (e.g., don’t render it as HTML).
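
Example of server-side validation and sanitization in JavaScript (a minimal sketch, assuming an Express-based API; the endpoint and limits are illustrative):

const express = require("express");
const app = express();
app.use(express.json());

// Escape characters that would let text be interpreted as HTML.
const escapeHtml = (s) =>
  s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");

app.post("/comments", (req, res) => {
  const text = req.body.text;
  if (typeof text !== "string" || text.length === 0 || text.length > 1000) {
    return res.status(400).json({ error: "invalid comment" }); // reject invalid input
  }
  const sanitized = escapeHtml(text); // neutralize markup before storing
  res.status(201).json({ text: sanitized });
});

app.listen(3000);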

3. Check third-party code integrity

You should treat third-party code like libraries and frameworks as inputs, too. They aren’t inputs to your application but inputs to your build process, since the deployed application essentially consists of your own code plus that third-party code. Always pin a specific version or hash for production deployments when using libraries. That way, no new version can sneak into your deployments unchecked.
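
For an npm-based project, pinning can look like the fragment below (the package name and version are illustrative): use an exact version such as "4.18.2" instead of a range like "^4.18.2", commit the lockfile, and install with npm ci so the build fails if the lockfile and manifest disagree.

{
  "dependencies": {
    "express": "4.18.2"
  }
}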

Summary

Secure coding is a practice that touches all aspects of software development—from choice of data formats and programming languages to planning of inputs and outputs to implementation. 

We're excited to introduce Wiz Code, our latest innovation designed to empower developers and security teams to implement and maintain robust secure coding practices throughout the entire software development lifecycle!

Wiz Code extends our cloud security platform to cover every stage of development, offering powerful capabilities to support your secure coding initiatives:

  • Integrated Code Scanning: Detect vulnerabilities, misconfigurations, and compliance issues directly in your IDE and code repositories, catching potential problems before they reach production.

  • Real-Time Security Feedback: Get instant security insights as you code, enabling developers to address issues immediately and learn secure coding practices on the go.

  • Cloud-to-Code Traceability: Trace risks discovered in production environments back to the specific code and teams that introduced them, facilitating rapid root cause analysis and remediation.

  • In-Code Remediation Guidance: Receive actionable, context-aware recommendations for fixing security issues right within your development environment.

  • Comprehensive Language Support: Benefit from secure coding best practices across a wide range of programming languages and frameworks.

