Following the Design phase of the Web3 Secure Development Life Cycle (SDLC) is the Develop phase. The crux of the Develop phase is to take the output of the Design phase and write the code that implements that design.
Security practices in the Web3 development process can be broadly defined by six key areas:
- Code Management
- Coding Standards
- Protection of Sensitive Information
- Logging
- Testing Methodologies
- Security Analysis
This blog post breaks down each of these categories and provides some specific examples of guidance that, if followed, will increase the security posture of the protocol being developed.
Code Management

Code management encompasses all the processes involved in storing and accessing code. Secure code management is often overlooked by development teams because it doesn’t involve writing code. However, good code management practices can increase the security posture of a protocol. For example, an industry-standard source control framework (e.g., GitHub, GitLab) should be used to store code. These frameworks tend to be thoroughly vetted for security vulnerabilities and have been “battle-tested” through extensive use.
A less obvious practice that yields security benefits is making protocol source code and development work visible to the public. Doing so both fosters trust in the protocol (i.e., users can inspect the protocol code themselves) and creates an environment that makes vulnerability reporting easier for the protocol’s users.
Coding Standards

Once you’ve implemented secure practices for managing code, you’ll need to establish similar practices for writing the code itself, starting with deciding which development framework to use. Candidate frameworks should be actively maintained and used extensively in the market your protocol targets. Such frameworks tend to have significant support for rapidly fixing vulnerabilities as they are discovered. This responsiveness is important, since a vulnerability in a development framework could translate into vulnerabilities in your own protocol code.
While the choice of framework can foster trust in the security of the development environment, that trust does not extend to the code that developers will write themselves. Therefore, establishing secure coding practices is essential for building a strong security posture in the Develop phase. But what makes a coding practice secure?
Consider the use of common coding patterns and idioms. There are usually multiple ways to implement a feature with code, and it’s common that each way has advantages and disadvantages compared to other solutions. When faced with such a decision, developers should choose known, secure code patterns over other alternatives. These patterns are classified as secure because they protect against common vulnerabilities. The “checks-effects-interactions” pattern used in Ethereum smart contracts is a good example of a secure coding pattern because it mitigates the impact of reentrancy attacks.
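To make the pattern concrete outside of Solidity, here is a minimal Python sketch (the `Vault` class and its attacker are hypothetical) showing why performing state updates before external calls blocks re-entrancy:

```python
# Minimal sketch of the checks-effects-interactions pattern, modeled in
# Python rather than Solidity. An attacker's callback tries to re-enter
# withdraw() during the external call; because the balance is zeroed
# *before* the call (the "effect" precedes the "interaction"), the
# re-entrant call fails its check.

class Vault:
    def __init__(self):
        self.balances = {}

    def deposit(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def withdraw(self, account, send):
        # 1. Checks: validate preconditions first.
        amount = self.balances.get(account, 0)
        if amount == 0:
            raise ValueError("nothing to withdraw")
        # 2. Effects: update internal state before any external call.
        self.balances[account] = 0
        # 3. Interactions: only now call out to untrusted code.
        send(account, amount)

vault = Vault()
vault.deposit("attacker", 100)

stolen = []
def malicious_send(account, amount):
    stolen.append(amount)
    try:
        # Attempt to re-enter withdraw() during the external call.
        vault.withdraw(account, malicious_send)
    except ValueError:
        pass  # re-entrancy blocked: the balance was already zeroed

vault.withdraw("attacker", malicious_send)
print(sum(stolen))  # the attacker receives only the original 100
```

Had the balance update come after the `send` call, the re-entrant call would have passed its check and drained funds a second time.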
Once code has been written, it should be thoroughly documented using established documentation formats (e.g., NatSpec for Solidity). Thorough documentation makes it easier to find instances where code does not match up to its expected behavior.
Additionally, all code should go through a standardized peer review process before being marked as production ready. This review process affords other developers the opportunity to find security issues in code before that code is used by other parts of the protocol.
Protection of Sensitive Information
Sensitive information is any information whose loss or corruption would result in significant damage to the protocol that manages it, as well as to that protocol’s maintainers. Because of this risk, securing sensitive information during code development is a high priority.
Before actions can be taken to protect sensitive information, that information must first be identified. This process should involve enlisting the help of upper management to identify the organization’s assets, and then mapping those assets to their representation in the protocol codebase. Ideally, this information is made public, to solicit feedback from the community on whether existing protections are sufficient.
Once sensitive information has been identified, take steps to protect it from being leaked. Examples of such steps include instructing developers to review all local commits for sensitive information before pushing them to remote repositories, and implementing pre-commit hooks that automatically detect potentially sensitive information. Organizations should also have a plan in place that details what to do if sensitive information is leaked.
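As a rough illustration of what such a pre-commit check might do, the following Python sketch scans text for credential-like strings; the patterns are illustrative assumptions, not an exhaustive secret-detection rule set:

```python
# Sketch of a pre-commit check that scans staged text for strings that
# look like secrets. The patterns below are illustrative, not exhaustive:
# a 64-hex-character string (a raw 32-byte private key) and common
# "key = value" assignments of credential-like variable names.
import re

SECRET_PATTERNS = [
    re.compile(r"\b[0-9a-fA-F]{64}\b"),  # raw 32-byte hex key
    re.compile(r"(?i)\b(api[_-]?key|secret|private[_-]?key|password)\s*[:=]"),
]

def find_secrets(text):
    """Return a list of (line_number, line) pairs that match a pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line))
    return hits

# Example "staged diff" containing a well-known test private key.
staged_diff = """\
RPC_URL = "https://example.invalid/rpc"
deployer = "4f3edf983ac636a65a842ce7c78d9aa706d3b113bce9c46f30d7d21715b23b1d"
"""

for lineno, line in find_secrets(staged_diff):
    print(f"line {lineno}: possible secret: {line}")
```

In practice, most teams would reach for an existing tool in this space rather than maintaining their own patterns.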
Logging

In the context of the SDLC, logging is the practice of recording events or errors that could have security implications for a protocol, as they could reveal that a protocol compromise has happened or is likely to happen. Logging such events allows developers to monitor for their occurrence. Knowing exactly when they occur alerts protocol maintainers to potential compromises earlier, which in turn provides a larger window in which to investigate and mitigate. Similarly, emitting descriptive, unique messages when errors occur makes triaging those errors easier.
Testing Methodologies

Enforcing security-focused coding standards goes a long way towards catching vulnerabilities before they make it to production environments. However, these standards don’t guarantee that code as it is written will function as intended. This is where testing comes into play.
Prior to writing any tests, development teams should adopt a strategy for integrating development and testing. Test-driven development is an example of such a strategy. Adopting a strategy (and following it) will produce a comprehensive test suite, which can be used before the code is deployed to detect bugs and unintended behaviors that could be exploited.
When writing tests, consider the following guidelines. In general, tests should:
- Be thoroughly documented as to their purpose and expected outcome(s)
- Use realistic/sane test inputs
  - Inputs from external sources should be properly mocked
  - Internal inputs should have realistic values
- Cover positive and negative outcomes
- Be easy to run
  - Any setup required to run your tests should be automated, either in your testing framework or by using external tools like Docker
Tests written according to these guidelines will mimic real-world use of the protocol, cover multiple outcomes, and be easy to understand, which will help in triaging test failures.
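The guidelines above can be sketched in a short, self-contained Python example; the function under test, the oracle, and the price value are all hypothetical stand-ins:

```python
# The guidelines applied to a hypothetical token-transfer valuation
# function. The "price oracle" stands in for an external input source
# and is mocked; values are realistic; both the positive and the
# negative outcome are covered; running the file is the only setup.
from unittest.mock import Mock

def transfer_value_usd(amount_tokens, oracle):
    """Return the USD value of a transfer, refusing non-positive amounts."""
    if amount_tokens <= 0:
        raise ValueError("amount must be positive")
    return amount_tokens * oracle.price_usd()

# External input (the oracle) is mocked with a realistic price.
oracle = Mock()
oracle.price_usd.return_value = 1850  # plausible ETH/USD price

# Positive outcome: a sane input produces the expected value.
assert transfer_value_usd(2, oracle) == 3700

# Negative outcome: an invalid input is rejected, not silently accepted.
try:
    transfer_value_usd(0, oracle)
    raised = False
except ValueError:
    raised = True
assert raised
```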
Once a set of test design principles is adopted, it’s time to start writing tests. Write tests for individual functions (unit testing) and complete user workflows (end-to-end testing). If a protocol has contained vulnerabilities in the past, tests should be written to exercise those as well (regression testing). Each of these categories of testing addresses a weakness of the others, so it is important to include tests from each category when building a test suite. For example, an end-to-end test may expose a bug in a function that a unit test for that function doesn’t expose, because of prior state changes that occurred outside that function.
Continue writing tests until the test suite covers all non-trivial functions and all intended use cases of the protocol. While not a perfect solution, test coverage tools can be used to obtain an objective measurement of which code paths are exercised by a test suite. Such tools are useful for pointing out parts of a protocol that are not well tested under the current test suite.
Security Analysis

It’s unrealistic to expect developers to write tests that cover all possible user inputs and exercise all possible code paths. As such, a comprehensive test suite is not a guarantee that a protocol will be vulnerability-free. Security analysis practices can augment testing methodologies and coding standards to further strengthen the security of a protocol.
Security analysis techniques are usually placed into one of two categories based on how they operate: static or dynamic. Both types of analysis should be performed during development, because they provide mechanisms to find shallow bugs quickly (static analysis) and deeper bugs that may require more time to exercise (dynamic analysis). Static analysis tools like linters should be run early in the development process. Dynamic analysis tools, like fuzzers and symbolic execution frameworks, should be run against standalone protocol components to detect more complex bugs. Many dynamic analysis tools integrate with existing test suites in order to achieve greater code coverage.
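To give a feel for what dynamic analysis does, here is a toy, hand-rolled fuzzer in Python; real fuzzers and symbolic execution frameworks are far more sophisticated, and the function and invariant below are hypothetical:

```python
# Toy illustration of dynamic analysis: feed many random inputs to a
# function and check an invariant after each call, the way a fuzzer or
# property-based tester would. The fee function and its invariant are
# hypothetical examples.
import random

def apply_fee(amount, fee_bps):
    """Deduct a basis-point fee from an amount."""
    return amount - (amount * fee_bps) // 10_000

random.seed(0)  # a fixed seed makes failures reproducible
failures = []
for _ in range(1_000):
    amount = random.randint(0, 10**18)
    fee_bps = random.randint(0, 10_000)
    out = apply_fee(amount, fee_bps)
    # Invariant: fees only ever reduce the amount, never increase it,
    # and can never drive it negative.
    if not (0 <= out <= amount):
        failures.append((amount, fee_bps, out))

print(f"{len(failures)} invariant violations found")
```

A real tool would also mutate inputs based on coverage feedback and shrink any failing case to a minimal reproducer.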
There are many security analysis tools available that serve different purposes. Development teams should choose a suite of security analysis tools that best fits their protocol and its intended use cases. Security analysis tools are most effective when run consistently, so development teams should also establish a set of checkpoints at which the different tools are run. For example, consider running static analysis tools on all commits to development branches and running dynamic analysis tools on the entire protocol code base after development branches are merged into the main branch.
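As one way to encode such checkpoints, a CI configuration along these lines could run a static analyzer on development-branch pushes and heavier dynamic analysis after merges to main. This is a GitHub Actions sketch; the branch names, job layout, and tool invocations are assumptions to adapt to your own suite:

```yaml
# Hypothetical CI checkpoints; adapt branch names and tool commands
# to your own analysis suite.
on:
  push:
    branches: ["dev/**", "main"]

jobs:
  static-analysis:            # fast checks on every development push
    if: github.ref != 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: slither .        # example static analyzer

  dynamic-analysis:           # deeper checks after merging into main
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echidna . --config echidna.yaml   # example fuzzer
```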
In addition to internal security analysis practices, development teams should solicit security analysis from third parties, in the form of professional security audits. Skilled security auditors have the knowledge and experience to focus on the complex nuances and edge cases in a protocol that may not be detected by automated security analysis tooling. For example, they can better evaluate interactions between a protocol and its dependencies, something that automated tools may not have much visibility into. Security auditors can also offer insight into strategies that can improve the overall security posture of your protocol.
There are a lot of different factors to consider when integrating security and code development practices. This blog post provides some high-level guidance on how to properly integrate the two, but most readers will still be left with specific implementation questions. We intend to further expand upon the SDLC in future blog posts.
Why You Can Trust Arbitrary Execution
Arbitrary Execution (AE) is an engineering-focused organization that specializes in securing decentralized technology. Our team of security researchers leverages its offensive security expertise, tactics, techniques, and hacker mindset to help secure the crypto ecosystem. In the two years since the company’s inception, Arbitrary Execution has performed more than 50 audits of Web3 protocols and projects and has created tools that continuously monitor the blockchain for anomalous activity. For more information on Arbitrary Execution's professional services, contact firstname.lastname@example.org. Follow us on Twitter and LinkedIn for updates on our latest projects, including the Web3 SDLC.