The Microsoft Security Development Lifecycle (Microsoft SDL) is a software development process based on the spiral model, proposed by Microsoft to help developers create applications and software with fewer security issues, resolve security vulnerabilities, and reduce development and maintenance costs. The process is divided into seven phases:
Training
The training phase is essential because up-to-date security knowledge is a prerequisite for implementing the SDL. Topics covered in this phase include secure design, threat modeling, secure coding, security testing, and privacy practices.
Requirements
The requirements phase, on the other hand, establishes the security and privacy requirements of end users. Defining quality gates/bug bars and performing security and privacy risk assessments are part of this second phase.
Design
The third phase, design, addresses security and privacy concerns up front, which reduces the risk of vulnerabilities reaching the public. Attack surface analysis and reduction, together with threat modeling, provide an organized approach to dealing with threat scenarios during the design phase.
Implementation
Implementation of the design should employ approved tools, avoid known-unsafe functions, and apply static analysis to the code as it is written.
Verification
The verification phase checks that the implementation behaves securely at run time, through dynamic analysis of the running application, fuzz testing, and a review of the attack surface.
Release
The release phase includes a final review of all the security activities performed, which helps confirm the software's security posture before it ships.
Response
After the release phase comes the response phase, which executes the incident response plan prepared during the release phase. This is crucial because it protects end users from vulnerabilities that emerge after release and could harm the software and/or its users.
Creating a secure Software Development Life Cycle (sSDLC) is becoming one of the most comprehensive ways of ensuring secure web and mobile application development. But the three fundamentally different development patterns used in today's leading application development organizations each pose their own challenges on the security front:
Waterfall (Sequential Design Process)
Agile/DevOps (Iterative Development)
Continuous Integration / Continuous Delivery (CI/CD)
The following tracks are integral to SDL implementation, and each is explained in greater detail in focused sections later in this article.
Developer Security Training – Ongoing courses provided to developers to improve their understanding of techniques for identifying and mitigating security vulnerabilities. Training focuses on topics including threat modeling, DAST testing, and coding techniques to prevent common defects such as SQL injection.
Design/Architecture Review – A collaborative effort between an ISV customer's development/engineering teams and its product security group to assess and develop application or service design patterns that mitigate risk to the platform and associated applications and services.
Threat Modeling – A structured approach for analyzing the security of an application, with special consideration for boundaries between logical system components which often communicate across one or more networks.
Security User Stories / Security Requirements – A description of functional and non-functional attributes of a software product and its environment which must be in place to prevent security vulnerabilities. Security user stories/requirements are written in the style of a functional user story/requirement.
Automated Dynamic Application Security Testing (DAST) - A process of testing an application or software product in an operating state, implemented by a web application security scanner.
Automated Static Application Security Testing (SAST) – A process of testing an application or software product in a non-operating state, analyzing the source code for common security vulnerabilities.
Penetration Testing – Hands-on security testing of a runtime system. This sort of testing uncovers more complex security flaws that may not be caught by DAST or SAST tools.
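As a concrete illustration of the defect-prevention coding techniques that developer security training covers: SQL injection is the canonical web example, and its C-world counterpart is the unbounded string copy. Below is a minimal, hedged sketch of a bounded, truncation-checked copy; the function name `copy_username_safe` is mine, for illustration only.

```c
#include <stdio.h>
#include <string.h>

/* Unsafe pattern covered in training: strcpy() into a fixed buffer
 * overflows when the input is longer than the destination.
 * This variant bounds the copy and refuses input that would not fit. */
int copy_username_safe(char *dst, size_t dst_size, const char *src)
{
    if (dst == NULL || src == NULL || dst_size == 0)
        return -1;                        /* reject invalid arguments */
    int written = snprintf(dst, dst_size, "%s", src);
    if (written < 0 || (size_t)written >= dst_size)
        return -1;                        /* input would be truncated: refuse */
    return 0;                             /* whole string copied, NUL-terminated */
}
```

Refusing over-long input outright, rather than silently truncating, is the design choice here: truncation can itself create security bugs (for example, cutting a path or username at an attacker-chosen point).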
Integration of security code review into the System Development Life Cycle (SDLC) can yield dramatic results to the overall quality of the code developed. Security code review is not a silver bullet, but is part of a healthy application development diet. Consider it as one of the layers in a defense-in-depth approach to application security. Security code review is also a cornerstone of the approach to developing secure software.
The idea of integrating another phase into your SDLC may sound daunting, like yet another layer of complexity or an additional cost, but in today's cyber landscape it is cost-effective, reputation-building, and in the long-term best interest of any business.
Note that big organizations often use more than one of these development patterns simultaneously, depending on the needs of the different development teams working on a project.
The testing phase requires the following steps:
Fuzz testing
Penetration testing
Run-time verification
Re-reviewing threat models
Reevaluating the attack surface
Let’s look at each task in detail.
Fuzzing:
Fuzzing means creating malformed data and having the application under test consume the data to see how the application reacts. If the application fails unexpectedly, a bug has been found. The bug is a reliability bug and, possibly, also a security bug. Fuzzing is aimed at exercising code that analyzes data structures, loosely referred to as parsers.
There are three broad classes of parsers:
File format parsers - Examples include code that manipulates graphic images (JPEG, BMP, WMF, TIFF) or document and executable files (DOC, PDF, ELF, PE, SWF).
Network protocol parsers - Examples include SMB, TCP/IP, NFS, SSL/TLS, RPC, and AppleTalk. You can also fuzz the order of network operations, for example by performing a response before a request.
APIs and miscellaneous parsers - Examples include browser-pluggable protocol handlers (such as callto:).
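A mutation fuzzer for such parsers can be sketched in a few lines of C. The toy parser below has a planted length-field defect (its error return stands in for the crash a real fuzz target would exhibit), and the fuzzer flips random bytes of a valid input until it trips that defect. All names (`toy_parse`, `fuzz_toy_parser`) are hypothetical; real fuzzing uses dedicated tools, instrumentation, and far larger corpora.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy "parser" with a planted defect: it accepts a length field equal
 * to the remaining bytes but fails to reject larger claims cleanly.
 * Returns 0 on success, -1 on a clean reject, -2 on the defect path
 * (standing in for a crash in a real target). */
static int toy_parse(const uint8_t *buf, size_t len)
{
    if (len < 2) return -1;               /* too short: clean reject */
    size_t body_len = buf[0];             /* first byte claims the body length */
    if (body_len > len - 1) return -2;    /* BUG path: would over-read the buffer */
    return 0;
}

/* Minimal deterministic mutation fuzzer: each iteration flips one byte
 * of a known-valid input and feeds the mutant to the parser.
 * Returns how many mutants reached the defect path. */
int fuzz_toy_parser(unsigned seed, int iterations)
{
    const uint8_t valid[4] = { 3, 'a', 'b', 'c' };   /* length byte 3 + 3 bytes */
    uint8_t mutant[4];
    int defects = 0;
    srand(seed);                          /* seeded so a run is reproducible */
    for (int i = 0; i < iterations; i++) {
        memcpy(mutant, valid, sizeof valid);
        mutant[rand() % sizeof mutant] = (uint8_t)(rand() % 256);
        if (toy_parse(mutant, sizeof mutant) == -2)
            defects++;
    }
    return defects;
}
```

Even this crude byte-flipping quickly reaches the defect, which is the core insight of fuzzing: malformed inputs exercise error-handling paths that hand-written tests rarely cover.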
Run-Time Verification
The final testing method is run-time verification testing using tools such as AppVerif to detect certain kinds of bugs in the code as it executes. AppVerif is also used as part of the fuzz-testing process, but its usefulness extends beyond fuzz testing; it can also find serious security flaws during normal testing or analysis. Hence, you should run the application regularly under AppVerif, and you should review the log files to make sure there are no issues that require fixing. Microsoft Windows includes a tool called Driver Verifier (Verifier.exe) to perform similar tests on device drivers (Microsoft 2005).
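One technique such runtime verifiers rely on for heap checking is the guard (canary) byte. The sketch below is a simplified illustration of that general technique, not AppVerif's actual mechanism; `checked_malloc` and `checked_free` are hypothetical names. The allocator reserves one extra byte past the user region, fills it with a known pattern, and verifies the pattern on free, so a buffer overrun is caught at run time.

```c
#include <stdlib.h>

/* Guard-byte heap check: allocate one extra byte, place a known canary
 * just past the user region, and verify it when the block is freed. */
#define CANARY 0xA5

void *checked_malloc(size_t n)
{
    unsigned char *p = malloc(n + 1);     /* one extra byte for the canary */
    if (p == NULL) return NULL;
    p[n] = CANARY;                        /* guard byte just past user region */
    return p;
}

/* Returns 0 if the canary is intact, -1 if the block was overrun. */
int checked_free(void *ptr, size_t n)
{
    unsigned char *p = ptr;
    int ok = (p[n] == CANARY) ? 0 : -1;   /* an overrun clobbers the guard */
    free(p);
    return ok;
}
```

Production verifiers refine this idea with whole guard pages, fill patterns for freed memory, and call-stack capture, but the detection principle is the same.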
Reviewing and Updating Threat Models if Needed
Threat models are invaluable documents to use during security testing. Functionality and implementation sometimes change after the design phase of a project, so threat models should be reviewed to ensure that they are still accurate and comprehensively cover all functionality delivered by the software. Threat models should be used to drive and inform security testing plans. The riskiest portions of an application, usually those with the largest attack surface and the highest-risk threats, must be tested the most thoroughly.
Re-evaluating the attack surface
Software development teams should carefully reevaluate the attack surface of their product during the testing stage of the SDL. Measuring the attack surface allows teams to understand which components have direct exposure to attack and, hence, the highest risk of damage if a vulnerability occurs in those components. Assessing the attack surface enables the team to focus testing and code-review efforts on high-risk areas and to take appropriate corrective actions. Such actions might include deciding not to ship a component until it is corrected, disabling a component by default, or modifying development practices to reduce the likelihood that vulnerabilities will be introduced by future modifications or new developments. After the attack surface has been reevaluated, it should be documented, along with the rationale for any components that remain exposed.
Static Application Security Testing (SAST):
SAST (Static Application Security Testing) is a white-box testing methodology which tests the application from the inside out by examining its source code for conditions that indicate a security vulnerability might be present. SAST solutions such as Source Code Analysis (SCA) have the flexibility needed to perform in all types of SDLC methodologies.
SAST solutions can be integrated directly into the development environment. This enables the developers to monitor their code constantly. Scrum Masters and Product Owners can also regulate security standards within their development teams and organizations. This leads to quick mitigation of vulnerabilities and enhanced code integrity.
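One source-level pattern that SAST tools commonly flag is an attacker-controlled printf format string: passing untrusted input directly as the format lets an attacker inject conversions such as %n. Below is a hedged sketch of the remediated pattern, with the format string pinned to a constant; the function name is illustrative, not from any particular scanner's report.

```c
#include <stdio.h>

/* SAST-friendly logging: untrusted data is passed as an argument to a
 * constant format string, never as the format string itself. The
 * flagged anti-pattern would be snprintf(out, size, user_input). */
int log_message_safe(char *out, size_t out_size, const char *user_input)
{
    /* Any % sequences in user_input are emitted literally, not interpreted. */
    return snprintf(out, out_size, "user said: %s", user_input);
}
```

Because the fix is visible purely in the source, this is exactly the kind of finding static analysis can surface long before the application ever runs.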
Dynamic Application Security Testing (DAST):
DAST (Dynamic Application Security Testing) is a black-box security testing methodology in which an application is tested from the outside in: the tester examines the application in its running state and tries to attack it just as an attacker would. Black-box testing is well suited to Waterfall environments but falls short in more progressive development methods due to its inherent limitations: DAST tools can't be run against source code or uncompiled application code, delaying security testing until the later stages of development.
Interactive Application Security Testing (IAST):
IAST is designed to address the shortcomings of SAST and DAST by combining elements of both approaches. IAST places an agent within an application and performs all of its analysis inside the running app, in real time, anywhere in the development process: in the IDE, the continuous integration environment, QA, or even production. Because SAST and DAST are older technologies, some argue they lack what it takes to secure modern web and mobile apps. For example, SAST has a difficult time dealing with the libraries and frameworks found in modern apps, because static tools only see code they can follow. Libraries and third-party components often cause static tools to choke, producing "lost sources" and "lost sinks" messages, and the same is true for frameworks: run a static tool on an API, web service, or REST endpoint, and it won't find anything wrong because it can't understand the framework.
Secure Coding Standard
C++
The CERT C++ Secure Coding Standard is primarily intended for developers of C++ language programs but may also be used by software acquirers to define the requirements for software. The standard is of particular interest to developers who are building high-quality systems that are reliable, robust, and resistant to attack. While not intended for C programmers, the standard may also be of some value to these developers because the vast majority of issues identified for C language programs are also issues in C++ programs, although in many cases the solutions are different.
C
The SEI CERT C Coding Standard, 2016 Edition provides rules for secure coding in the C programming language. The goal of these rules and recommendations is to develop safe, reliable, and secure systems, for example by eliminating undefined behaviors that can lead to unexpected program behavior and exploitable vulnerabilities. Conformance to the coding rules defined in this standard is necessary (but not sufficient) to ensure the safety, reliability, and security of software systems developed in the C programming language; it is also necessary, for example, to have a safe and secure design. Safety-critical systems typically have stricter requirements than this coding standard imposes, for example requiring that all memory be statically allocated. Even so, applying this coding standard will result in high-quality systems that are reliable, robust, and resistant to attack.
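As a concrete example of the undefined behaviors these rules target: signed integer overflow is undefined behavior in C, so CERT rule INT32-C requires testing the operands before the operation rather than inspecting the result afterward. A minimal sketch of that precondition-test idiom follows; the function names are mine, not from the standard.

```c
#include <limits.h>
#include <stdbool.h>

/* INT32-C style precondition test: signed overflow is undefined
 * behavior, so check the operands before adding, never the sum after. */
bool add_would_overflow(int a, int b)
{
    if (b > 0 && a > INT_MAX - b) return true;   /* sum would exceed INT_MAX */
    if (b < 0 && a < INT_MIN - b) return true;   /* sum would fall below INT_MIN */
    return false;
}

/* Performs the addition only when it is known to be defined;
 * *ok reports whether the result is valid. */
int checked_add(int a, int b, bool *ok)
{
    if (add_would_overflow(a, b)) {
        *ok = false;
        return 0;                                /* caller must check *ok */
    }
    *ok = true;
    return a + b;
}
```

Note that both guard expressions stay within the representable range themselves, which is exactly why the check is written on the operands and not on `a + b`.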