In a previous blog post, we introduced the concept of the Web3 Security Maturity Model (SMM), and in a separate blog post we laid out the SMM for the Design phase. In this blog post, we’re going to take a closer look at the SMM for the Develop phase. The Develop phase includes the processes involved in managing, writing, documenting, and testing code. As such, the SMM for the Develop phase focuses on introducing security practices into those processes.
In the SMM, the Develop phase consists of the following six subdomains, with each subdomain being broken up into multiple criteria: Code and Documentation Management, Coding Standards, Protection of Sensitive Information, Logging, Testing, and Code Analysis.
The rest of this blog post is dedicated to examining these subdomains. For each subdomain, we provide a definition, an explanation of why it belongs in the Develop phase, and a recommendation as to who should be responsible for acting on its criteria. Each criterion is broken down the same way.
Code and documentation management
Code and documentation management refers to where the aforementioned content is stored, how developers interact with that content, and the amount of visibility the public has into the management of that content. Insecure code management could result in the leakage of sensitive information, loss of development efforts, or even the introduction of vulnerabilities into the protocol if an attacker can make code modifications that aren’t detected. The following criteria are directed towards whoever in an organization makes decisions on tool use and what information is made available to the public.
Source management
- Definition: The tool(s) developers use to store and manage their code and documentation and the processes followed to interact with that material. Github, Gitlab, and Bitbucket are examples of popular tools for storing and managing source material. The use of feature branches and pull requests are examples of ways to interact with the material these tools manage.
- Rationale: Vetted, thoroughly tested source management tools are less likely to contain exploitable vulnerabilities and tend to be backed by development teams that respond quickly when vulnerabilities are discovered. Since code and documentation are managed by these tools, vulnerabilities in these tools may impact your source material.
- Minimum: Code and documentation are stored in a source-controlled repository.
- Improved: Repositories are backed up.
- Advanced: Processes for interacting with repositories are documented.
Public visibility
- Definition: How much of the protocol’s code, documentation, and development processes are visible to the public.
- Rationale: Increased visibility allows community members to identify potential vulnerabilities that protocol developers have missed and point out instances where proposed fixes to vulnerabilities are not sufficient.
- Minimum: Code and documentation are published at protocol release intervals.
- Improved: Code and documentation are developed in public repository branches. Documentation is published to a searchable website.
- Advanced: All development (including management of development resources) is performed publicly.
Coding standards
Coding standards are the set of rules that developers must adhere to when writing code and supporting material, such as documentation. These rules also encompass tool use and peer review. When these standards are not present, it becomes easier for developers to introduce vulnerabilities into code. For example, a Solidity developer may not know about the checks-effects-interactions pattern and inadvertently introduce a critical reentrancy bug into an existing protocol. The coding standard criteria described below should be followed by anyone who touches code or related documentation.
Development frameworks
- Definition: The tool-based environments in which code is developed. Examples in the Solidity space include Hardhat and Foundry.
- Rationale: Actively maintained development frameworks tend to have a quick turnaround between when vulnerabilities are identified and when fixes become available. Since a bug in a development framework could mask bugs in your protocol code, framework selection is important. Mature frameworks tend to have additional features, such as stack traces for debugging or fuzzing, which can be used to make your code more secure.
- Minimum: Code is developed using one or more frameworks.
- Improved: Code is developed in a single, well-tested, actively maintained framework (e.g., Hardhat, Foundry).
- Advanced: Development leverages advanced features of the framework (e.g., Foundry’s built-in fuzzer).
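To illustrate the Advanced level, a property-based fuzz test in Foundry can be as small as the following sketch (the contract and function names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

import "forge-std/Test.sol";

contract MathTest is Test {
    // Any test function that takes parameters is treated by forge as a
    // fuzz test: it is run many times with randomly generated inputs.
    function testFuzz_AdditionIsCommutative(uint128 x, uint128 y) public {
        assertEq(uint256(x) + uint256(y), uint256(y) + uint256(x));
    }
}
```

Running `forge test` will exercise this property across hundreds of generated input pairs with no extra configuration.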
Coding guidelines
- Definition: Coding guidelines are a set of rules to follow when writing code. Examples of rules could include what types of whitespaces to use, which code patterns to follow, and when to check in code to a remote repository.
- Rationale: Establishing a set of guidelines that enforce the use of sound security practices helps to prevent developers from introducing vulnerable code into a codebase.
- Minimum: An informal set of development guidelines has been agreed upon.
- Improved: Guidelines are formalized. Guidelines are enforced through peer-review of code.
- Advanced: Guidelines have been made visible to the public. Guidelines are enforced through CI-CD pipelines.
Code patterns
- Definition: Code patterns are templates for how to structure code in a way that produces desirable properties.
- Rationale: In the context of the SDLC, these properties are security related. For example, the checks-effects-interactions pattern can mitigate the threat of reentrancy attacks, which is a desirable property. Following these patterns can ensure that your protocol functions in a secure manner.
- Minimum: Locations in the protocol codebase where secure code patterns can be used have been identified.
- Improved: Use of secure coding patterns is enforced through peer-review.
- Advanced: Use of secure coding patterns is enforced automatically (e.g., static analysis tools like Semgrep).
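As a concrete example of a secure code pattern, the checks-effects-interactions pattern mentioned earlier might look like this in a hypothetical Bank contract:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

contract Bank {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw(uint256 amount) external {
        // Checks: validate all preconditions first.
        require(balances[msg.sender] >= amount, "insufficient balance");

        // Effects: update internal state before any external interaction.
        balances[msg.sender] -= amount;

        // Interactions: the external call comes last, so a reentrant call
        // sees the already-updated balance and cannot withdraw twice.
        (bool success, ) = msg.sender.call{value: amount}("");
        require(success, "transfer failed");
    }
}
```

Reordering these three steps (e.g., making the external call before updating the balance) is exactly what makes classic reentrancy attacks possible.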
Documentation
- Definition: In this context, documentation refers to comments that exist inline with code, as well as descriptions of how to build the code itself.
- Rationale: Documentation makes interacting with already developed code easier and less prone to mistakes. If an assumption that a piece of code makes is not documented, later consumers of that code may not account for that assumption, which could result in the introduction of a vulnerability.
- Minimum: Inline documentation is added to explain complex code and implementation decisions.
- Improved: Inline documentation is written to conform to a standard format (e.g., NatSpec) and is applied consistently. Build procedures are well documented.
- Advanced: Documentation is used to generate an external, searchable website for community reference.
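A sketch of what NatSpec-formatted inline documentation might look like for a hypothetical token interface:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

interface IToken {
    /// @notice Transfers `amount` tokens from the caller to `to`.
    /// @dev Reverts if the caller's balance is less than `amount`.
    /// @param to The address that receives the tokens.
    /// @param amount The number of tokens to transfer.
    /// @return success True if the transfer completed.
    function transfer(address to, uint256 amount) external returns (bool success);
}
```

Because NatSpec is machine-readable, the same comments can later feed the documentation website described in the Advanced level.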
Code review
- Definition: Code review is the process of examining code for issues, such as not adhering to coding guidelines, prior to it being added to the rest of the codebase (for example, before merging a feature branch into the main development branch).
- Rationale: Code review provides an opportunity for code to be inspected for potential vulnerabilities by both the author and other developers, before it is integrated with existing protocol code. This process can result in vulnerabilities being identified and eliminated before users are ever exposed to their effects.
- Minimum: Code is reviewed by its author.
- Improved: Code is reviewed by at least one other developer.
- Advanced: Code is reviewed automatically with static analysis tooling.
Protection of sensitive information
In the context of the development phase, the protection of sensitive information refers to practices used to prevent the leakage or loss of a subset of protocol data identified as being sensitive. The construction of this subset is typically protocol-specific and based on the platform on which the protocol operates. Poor management of sensitive data could result in the loss of that data, which would cause significant damage to the protocol itself and the reputation of its maintainers. The following set of criteria should be followed by business leaders who have the requisite business knowledge to identify what data should be considered sensitive, as well as the developers who write code that interacts with that data.
Identification of sensitive information
- Definition: Identification starts with creating a list of data that is considered sensitive. Creating this list typically requires input from upper management, who possesses the perspective to identify what kind of information is critical to the operation of the organization. The formalizing of what data is considered sensitive and how it should be managed also falls under this criterion.
- Rationale: To protect sensitive information, you first need to identify what kind of information is sensitive. Without identifying what information is sensitive, developers may produce code that leaks sensitive information without knowing it. Formalizing what makes data sensitive and documenting procedures for handling that data ensures that developers follow best practices when writing code to interact with that data.
- Minimum: Protocol is reviewed to identify obvious sources of sensitive information and where that information is stored.
- Improved: Threat modeling is performed to enumerate all protocol workflows that interact with sensitive information. Procedures are written for managing sensitive information.
- Advanced: Protocol’s use of sensitive information and management procedures are made visible to the public.
Safeguards against loss
- Definition: Safeguards against the loss of sensitive data are processes (both automatic and manual) that identify potential sources of sensitive information in code and prevent its inclusion into public-facing repositories. If sensitive information is leaked into a public-facing repository, these safeguards also provide mechanisms for mitigating the effects of those leaks.
- Rationale: Even when clear documentation that identifies sensitive data and describes how to handle it exists, developers can still make mistakes. These safeguards serve as a second layer of protection that can prevent developers from accidentally leaking sensitive information, as well as mitigate the effects of leaks when they do occur.
- Minimum: Developers check code for secrets prior to each commit. Ignore files (e.g., .gitignore) are used.
- Improved: Pre-commit hooks are added to check for secrets in code automatically.
- Advanced: Procedures are defined for dealing with secret loss.
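For the Minimum level, an ignore file keeps common sources of secrets out of version control entirely. A minimal .gitignore sketch (the specific entries are illustrative, not exhaustive):

```
# Local environment files and key material should never be committed
.env
.env.*
*.pem
secrets/
```

Ignore files only prevent accidental staging; they do not protect secrets that were already committed, which is why the Advanced level also calls for procedures to handle secret loss after the fact.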
Logging
In the context of the SDLC, logging is the practice of documenting the occurrences of events or results of computations, particularly those that are unintended or undesirable. The logging subdomain covers identifying these occurrences by making them easy to monitor and triage. Without meaningful and accurate logging, attacks against a protocol may go undetected long enough for the protocol to suffer irreparable damage. The criteria in this section are meant to be followed by protocol developers.
Events
- Definition: Events refer to language-supported functionality that emits a message to a medium (such as a blockchain) which can be queried outside the protocol. Events allow external services to gain insight into the work being performed by the protocol.
- Rationale: From a security perspective, logging events allows for the use of external monitoring software to detect suspicious activity in the protocol immediately after it occurs. Detecting this activity early gives protocol maintainers a larger window of opportunity to identify, triage, and mitigate attacks against the protocol before they cause significant damage.
- Minimum: Relevant contextual information is logged in events.
- Improved: Logging is added to code to trigger when security critical events occur. Descriptive, unique messages are added to all logged events.
- Advanced: Events are indexed. Event messages provide information not easily derivable from other sources. Each log message is thoroughly described in protocol documentation.
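A sketch of what descriptive, indexed event logging might look like in a hypothetical Vault contract:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

contract Vault {
    // Indexed parameters allow off-chain monitoring tools to filter logs
    // efficiently, e.g. "all Withdrawal events for this account".
    event Withdrawal(address indexed account, uint256 amount, uint256 remainingBalance);

    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw(uint256 amount) external {
        require(balances[msg.sender] >= amount, "insufficient balance");
        balances[msg.sender] -= amount;
        // Emit before the external call so monitors see the state change
        // even while the transaction is still executing downstream logic.
        emit Withdrawal(msg.sender, amount, balances[msg.sender]);
        (bool success, ) = msg.sender.call{value: amount}("");
        require(success, "transfer failed");
    }
}
```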
Errors
- Definition: Errors, while not strictly a logging mechanism, are included in this subdomain because they frequently support emitting messages as part of their handling routines. This criterion focuses on these messages and increasing their utility for developers.
- Rationale: Error messages can provide developers with detailed explanations of what caused the protocol to fail. In the event of a security-related failure, these messages (if well crafted) can be used to pinpoint the exact location in the code in which the error occurred. This level of fidelity makes triaging these failures easier.
- Minimum: Errors are logged using default error messages.
- Improved: Errors are logged with descriptive, unique error messages.
- Advanced: Each error message is thoroughly described in protocol documentation.
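For the Improved level, Solidity’s custom errors (available since 0.8.4) are one way to attach descriptive, structured context to failures. A sketch using a hypothetical Token contract:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Custom errors are cheaper than revert strings and carry typed
// parameters that pinpoint exactly what failed and why.
error InsufficientBalance(address account, uint256 requested, uint256 available);

contract Token {
    mapping(address => uint256) public balances;

    function transfer(address to, uint256 amount) external {
        uint256 available = balances[msg.sender];
        if (amount > available) {
            revert InsufficientBalance(msg.sender, amount, available);
        }
        balances[msg.sender] = available - amount;
        balances[to] += amount;
    }
}
```

Because the error is unique to this failure mode and includes the offending values, a reverted transaction can be triaged without reproducing the failure locally.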
Testing
The testing subdomain encapsulates the design, implementation, coverage, and visibility of a suite of tests against a protocol’s codebase. Adherence to the best coding practices does not guarantee that code is free of vulnerabilities. Thorough testing provides another layer of defense against bugs that are not caught while code is being actively written. The testing criteria are directed towards developers. At a high level, anyone who writes protocol code should be writing tests for the code they write.
Test suite composition
- Definition: The composition of a test suite refers specifically to the types of tests in the suite and the different conditions that those tests evaluate. Types of tests include unit tests, functional tests, and regression tests.
- Rationale: The composition of a protocol’s test suite directly correlates to the depth at which protocol code is checked for unexpected or undesirable side effects.
- Minimum: Functional tests are written for all end-user workflows.
- Improved: Unit tests are written for all protocol functions. End-to-end tests are written for all protocol workflows.
- Advanced: Positive and negative conditions are tested in all unit and end-to-end tests. Regression tests are written when vulnerabilities in the protocol are patched.
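A sketch of positive and negative test conditions, using Foundry and a deliberately trivial Counter contract:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

import "forge-std/Test.sol";

contract Counter {
    uint256 public count;

    function increment() external {
        count += 1;
    }

    function decrement() external {
        require(count > 0, "count is zero");
        count -= 1;
    }
}

contract CounterTest is Test {
    Counter counter;

    function setUp() public {
        counter = new Counter();
    }

    // Positive condition: the operation succeeds under valid state.
    function test_IncrementThenDecrement() public {
        counter.increment();
        counter.decrement();
        assertEq(counter.count(), 0);
    }

    // Negative condition: the failure path reverts as intended.
    function test_DecrementRevertsAtZero() public {
        vm.expectRevert("count is zero");
        counter.decrement();
    }
}
```

Testing the revert path is just as important as testing the happy path: many real-world exploits abuse failure handling that no test ever exercised.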
Testing strategy
- Definition: A testing strategy defines when testing is performed and how tests are executed. Examples of testing strategies include running the full test suite prior to all major protocol updates and integrating testing into existing CI-CD pipelines.
- Rationale: Implementing a testing strategy ensures that test suites are run at meaningful checkpoints in the development lifecycle. Running tests routinely results in the protocol code being checked for the conditions identified in the tests multiple times before that code is deployed to the blockchain.
- Minimum: Testing is performed prior to major protocol releases.
- Improved: Testing is performed prior to merging new features into the protocol codebase.
- Advanced: Test suite is integrated into CI-CD pipelines and run against all commits.
Coverage
- Definition: Coverage is the percentage of the protocol code that is tested by a test suite. This metric is usually calculated by inspecting the code paths taken while executing all the tests in the suite.
- Rationale: The efficacy of a test suite in detecting vulnerabilities in a protocol directly correlates to the percentage of the code base that is tested. If a large percentage of the code base is not tested, then a large percentage of the protocol may contain the bugs that the test suite was written to detect.
- Minimum: Test coverage is checked manually.
- Improved: Test coverage is checked automatically.
- Advanced: Automated test generators are used to increase test coverage.
Accessibility
- Definition: Accessibility encapsulates who has access to protocol testing tools and the level of effort required to run those tools.
- Rationale: Making testing material more accessible helps foster community involvement in improving the security posture of the protocol. If tests are available, easy to write, and easy to run, third parties are more likely to run the tests themselves and even contribute their own tests.
- Minimum: Test suite is released to the public.
- Improved: Tooling and documentation have been added to the test suite to make testing easy. Test results are manually published to the public.
- Advanced: Public-facing test results are updated automatically and published to a public dashboard.
Code analysis
The code analysis subdomain covers the use of external resources (i.e., tools and services) to identify code irregularities and security vulnerabilities in protocol code. Code analysis ranges from the use of tools that operate purely on source code, to the use of frameworks that emulate the execution of protocol code in instrumented environments. Code analysis provides another set of checks against introducing vulnerabilities during a protocol’s development phase. Whereas a comprehensive test suite can help identify potential issues of which developers are cognizant, code analysis processes can identify problematic code patterns or side effects that developers did not consider when writing their tests. Like most subdomains described in this blog post, the following criteria are directed towards developers, who are best suited to synthesize the technical data produced by code analysis into actionable tasks.
Static analysis
- Definition: Static analysis refers to the use of tools that inspect source code for code patterns of note. In the context of security, noteworthy code patterns are those associated with certain classes of vulnerabilities. Additionally, developers can use static analysis to identify code that does not follow certain code patterns, like those identified in an organization’s coding standards.
- Rationale: Because static analysis tools don’t require code to be executed, they tend to have low running costs. As such, they are easy to integrate into existing processes and provide further assurance that developers do not introduce vulnerable code into a protocol.
- Minimum: A linter is integrated into developer IDEs.
- Improved: Static analysis is run manually against the code base.
- Advanced: Static analysis is integrated into the CI-CD pipeline.
Dynamic analysis
- Definition: Dynamic analysis is defined by the process of executing (or simulating the execution of) code and monitoring that execution for noteworthy side effects. Examples of dynamic analysis methods include fuzzing and symbolic execution.
- Rationale: Dynamic analysis offers a way to test protocol code using dynamically generated values. The use of dynamic values, particularly in place of user input, can identify vulnerabilities that may have been missed using the static set of input values encoded in a test suite. Additionally, dynamic analysis can offer insight into the code paths taken in response to various user interactions with the protocol.
- Minimum: Developers perform manual analysis of the code base.
- Improved: Automated tooling is run against the code base manually.
- Advanced: Dynamic analysis tooling is integrated into the CI-CD pipeline.
Verification
- Definition: Verification covers a variety of methodologies for proving that a piece of code meets a specification. The specification outlines exactly what a piece of code should do (which also defines what a piece of code should not do). Specifications crafted with security in mind can be used to ensure that protocol code does not produce any unexpected side effects that could lead to protocol compromise.
- Rationale: Verification methodologies allow developers to create specifications that outline the precise, intended behavior of code and then verify that the code they wrote conforms to that specification. Whereas test suites and code analysis attempt (and sometimes fail) to identify unintended or undesirable behaviors, formal verification produces a proof that such behaviors do not exist, up to the correctness of the specification.
- Minimum: Specifications are written for critical workflows.
- Improved: Code is checked for violations of specification.
- Advanced: Specification is checked for correctness with respect to certain security properties.
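One lightweight way to approach specification checking in Solidity is the compiler’s built-in SMTChecker, which attempts to prove that assert statements can never fail. A sketch (the contract is illustrative, and the compiler flags in the comment reflect solc’s model-checker options):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

// Compile with the model checker enabled, e.g.:
//   solc --model-checker-engine chc --model-checker-targets assert Accumulator.sol
// and the compiler will try to prove the assertion for all possible inputs.
contract Accumulator {
    uint256 public total;

    function add(uint128 amount) external {
        uint256 before = total;
        total += uint256(amount);
        // Specification: the running total never decreases when adding.
        assert(total >= before);
    }
}
```

Compiler-level checking like this sits at the low end of the verification spectrum; dedicated frameworks support richer specifications over whole protocol workflows.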
Audits
- Definition: Having an experienced third-party review a protocol’s codebase for security issues constitutes an audit in the SDLC. This third party may take the form of an organization that provides a dedicated team of security experts, or an audit contest, in which anyone can participate.
- Rationale: Most Web3 development teams are not staffed with security experts. Security experts are better equipped to identify vulnerabilities, particularly those that are much harder to detect using automated methods. Additionally, security experts can help refine developer processes in a way that directs developers towards producing more secure code.
- Minimum: Protocol code is audited by a reputable third party.
- Improved: Bug bounties are offered for vulnerability disclosures.
- Advanced: Protocol code is audited by multiple third parties.
Why You Can Trust Arbitrary Execution
Arbitrary Execution (AE) is an engineering-focused organization that specializes in securing decentralized technology. Our team of security researchers leverage their offensive security expertise, tactics, techniques, and hacker mindset to help secure the crypto ecosystem. In the two years since the company’s inception, Arbitrary Execution has performed more than 50 audits of Web3 protocols and projects, as well as created tools that continuously monitor the blockchain for anomalous activity. For more information on Arbitrary Execution's professional services, contact firstname.lastname@example.org. Follow us on Twitter and LinkedIn for updates on our latest projects, including the Web3 SDLC.