July 2, 2024
I recently spoke with Chris Romeo about the question, “is DAST (Dynamic Application Security Testing) dead?” to which we both agreed, “it depends what you mean.” In application security, there are a lot of places where you can scan your code for weaknesses and misconfigurations, and there are as many tradeoffs as there are vendors offering these different kinds of scanners. The more I interact with people who have been doing application security longer than me, the more I realize people have had wildly different experiences with different scanners. In this article, I’ll break down the differences between SAST, DAST, and the ever-debated IAST (Interactive Application Security Testing), as well as some of the trade-offs of each.
The TL;DR? Like most things in security, SAST won the value-versus-implementation-difficulty trade-off, but that doesn’t mean everything else is useless. At the end of the day, I’ll argue that SAST comes first, DAST is either critical or meaningless, and IAST is confusing.
Static Application Security Testing scans your code for CWEs (Common Weakness Enumeration entries). While we’d all love to have gray-bearded AppSec veterans analyzing all of our code, the reality is that scanners solve a scalability problem we have in application security. When we talk about SAST, we really have to split it into three generations of scanners.
The first SAST scanners were built around Waterfall methodology, when the security team had an entire week or month dedicated to reviewing software changes before release. These SAST scanners took hours or even days to run, and would thoroughly evaluate entire code bases for potential issues. These issues would then be output into a report, and security would review the findings before code was promoted to production. These tools were also designed to scan monolithic applications, where all the code for an app was in a single place.
As agile development meant shipping code more often, it became clear that code scans needed to happen more quickly, as part of automated testing, with results going straight to developers. This led to the second generation of SAST tools: scanners focused on catching the code changes that introduce vulnerabilities and giving developers feedback directly, instead of producing big security reports.
Quicker scanning that sends results directly to developers came with a downside: developer frustration. Development teams quickly grew frustrated with a slew of false positives, as scanners commonly flag issues that don’t apply in the application’s context. To give a simple example, many applications contain test files that hard-code data to be run through the application’s functions. Many SAST tools flag this hard-coded data as a security issue, even though the code never actually runs in production.
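To make that concrete, consider a hypothetical test file (the module path, function, and credential below are made up for illustration). A scanner without context will often flag the fixture credential as a leaked secret, even though it never ships:

# tests/test_login.py - hypothetical test file, for illustration only
from myapp.auth import login  # assumed module under test

def test_login_rejects_bad_password():
    # Fixture data that never runs in production, yet many SAST tools
    # will flag this string as a hard-coded credential (CWE-798)
    assert login('admin', 'P@ssw0rd123') is False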
This has led to a third generation of SAST tools that, in addition to speed, attempt to validate whether findings are reachable or exploitable. This is complicated because at the static level the application is never deployed, so these tools often integrate with runtime environments to check whether the flagged code is actually reachable. Tools in this category, like Backslash, model the application in its deployed state and examine function responses to see whether the code actually returns data to a user.
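Stepping back, the core mechanic all three generations share is pattern matching over source code. Here is a deliberately minimal sketch of that idea; the rules and CWE mappings are my own illustrative picks, nothing like a real vendor’s rule set:

# toy_sast.py - a toy pattern matcher, not a production scanner
import ast
import sys

RULES = {
    'eval': 'CWE-95: possible code injection via eval()',
    'exec': 'CWE-95: possible code injection via exec()',
}

def scan(path):
    tree = ast.parse(open(path).read(), filename=path)
    # Walk every node in the syntax tree looking for risky call patterns
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RULES:
                print(f'{path}:{node.lineno}: {RULES[node.func.id]}')

if __name__ == '__main__':
    scan(sys.argv[1])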
Like SAST, DAST aims to discover CWEs, but it approaches the application from the outside rather than the inside. In other words, it scans the application once it’s actually running instead of before it’s deployed. This comes with some pros and cons we’ll discuss later, but like SAST, DAST has two generations of tools, and its current state is in a bit of flux. One concept core to DAST is fuzzing: while SAST tools look for vulnerable code patterns, DAST injects common malicious payloads and inspects the responses to determine whether the exploit succeeded.
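As a sketch of that idea, assuming a target app running locally on port 5000 (the payload list is a tiny, illustrative sample, not a real scanner’s corpus):

# toy_dast.py - inject payloads from the outside and inspect responses
import requests

PAYLOADS = [
    '<script>alert(1)</script>',  # reflected XSS probe
    "' OR '1'='1",                # SQL injection probe
]

def fuzz(url, param):
    for payload in PAYLOADS:
        resp = requests.get(url, params={param: payload}, timeout=5)
        # Crude success check: a payload echoed back verbatim suggests
        # the input was reflected without sanitization
        if payload in resp.text:
            print(f'Possible injection via {param!r}: {payload!r} reflected')

fuzz('http://localhost:5000/', 'name')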
Like first-generation SAST, first-generation DAST took hours to scan, slowly crawling web apps for vulnerabilities. And while SAST scanning generated false positives in the move from monolithic to microservice architectures, DAST scanning faced its own challenges from that shift. First, DAST scanning relied heavily on interpreting response headers, and was slow to adapt as those headers grew in importance in cloud-native web architectures. Second, as API communications became more standardized, these tools remained focused on fuzzing form fields rather than fuzzing API payloads.
Second-generation DAST tools scan much more quickly and ingest API schemas for context-aware fuzzing. This radically increases their usefulness: they inject malicious payloads in a way your APIs can understand, and they look beyond the OWASP Top 10 to the OWASP API Security Top 10, which is more applicable to API-driven apps.
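Here is a rough sketch of what “ingesting a schema” buys you, assuming an OpenAPI document saved as openapi.json and a local target (both stand-ins for illustration). Reading the schema lets the scanner send payloads the API will actually parse, instead of blindly fuzzing form fields:

# schema_fuzz.py - use the API schema to build payloads the API can parse
import json
import requests

spec = json.load(open('openapi.json'))
base = 'http://localhost:5000'  # assumed target

for path, methods in spec.get('paths', {}).items():
    for method, op in methods.items():
        if method != 'post':
            continue
        # Pull the declared JSON body schema so the fuzzed request is
        # well-formed enough for the API to actually process it
        schema = (op.get('requestBody', {})
                    .get('content', {})
                    .get('application/json', {})
                    .get('schema', {}))
        body = {field: "' OR '1'='1" for field in schema.get('properties', {})}
        resp = requests.post(base + path, json=body, timeout=5)
        print(method.upper(), path, '->', resp.status_code)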
IAST has become a much more contested term, because many DAST providers rushed to add it as a feature with wildly different implementations. Since “interactive” has an incredibly vague meaning, I’ve seen IAST applied to everything from products whose only interactive capability is customizing a scan, to robust runtime testing solutions with in-depth instrumentation.
On the “DAST-y” side, IAST simply means adding customizations to your DAST scanning. These range from simple things like supplying custom values for testing input fields, to more advanced Selenium scripts that let you script tests as if you were clicking around a UI. These tools treat IAST as a way to do customizable DAST scanning.
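For example, a Selenium-style scan script might look something like this sketch; the element selectors ('name', 'submit') are assumptions about whatever page is under test:

# scripted_scan.py - drive the UI so the scan exercises real user flows
from selenium import webdriver
from selenium.webdriver.common.by import By

PAYLOAD = '<script>alert(1)</script>'

driver = webdriver.Chrome()
try:
    driver.get('http://localhost:5000/')
    # Type a payload into the form exactly as a user would
    driver.find_element(By.NAME, 'name').send_keys(PAYLOAD)
    driver.find_element(By.ID, 'submit').click()
    if PAYLOAD in driver.page_source:
        print('Payload reflected without encoding - possible XSS')
finally:
    driver.quit()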
The more ambitious definition of IAST brings in-depth instrumentation to view how applications execute within a runtime context. From that runtime context inside the application, it performs testing while observing the payloads actually flowing into functions. This is a sort of mix of SAST and DAST - it requires the application to be running, but it tests the inputs and outputs of specific functions rather than theoretical fields or payloads.
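A heavily simplified sketch of that instrumentation idea: real IAST agents hook the language runtime itself, but a decorator can at least illustrate what “watching live function inputs and outputs” means (the suspicious-pattern list here is illustrative):

# toy_iast.py - observe what actually flows through a function at runtime
import functools

SUSPICIOUS = ('<script', "' or ", '../')

def iast_monitor(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Inspect the real arguments reaching the function, not theoretical ones
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and any(s in value.lower() for s in SUSPICIOUS):
                print(f'[IAST] suspicious input reached {func.__name__}: {value!r}')
        result = func(*args, **kwargs)
        # Inspect what the function actually returns to its caller
        if isinstance(result, str) and '<script' in result.lower():
            print(f'[IAST] unsanitized markup leaving {func.__name__}')
        return result
    return wrapper

# Usage: wrap a request handler, e.g. index = iast_monitor(index)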
SAST, DAST, and IAST all come with their own sets of pros and cons. Most organizations I’ve met with follow a general trend: prioritize SAST first, then DAST, then experiment with the idea of IAST before never actually doing it. The key differences between these tools are twofold: when they scan, and how they’re instrumented.
First, SAST tools are useful for their ability to “shift left,” scanning code and giving feedback before an application is ever deployed. This gives developers near-instant feedback on their code changes. DAST is useful because it scans the application once it’s actually running, giving security teams insight into what’s actually happening in the app instead of scanning the theoretical. Finally, IAST is similar to DAST in that it scans the application only once it’s running; the difference lies in the instrumentation and the results.
For many organizations, instrumentation is what actually determines which tool they buy. SAST is easy to implement, as it only needs access to the code repositories in your environment. This requires almost no configuration or customization, and is readily available for most orgs. Similarly, old-school DAST has stuck around because it’s also easy to instrument - you just point it at your website and let it scan. However, the key instrumentation difficulty with DAST of all kinds is authentication. For DAST scans to run, authentication needs to be configured with a test account, something that is usually much more complicated than teams expect (see the sketch below). Finally, IAST’s biggest issue is instrumentation. It usually requires developers to hand over a level of control over how the application itself runs that is simply more than most security teams will ask for, and its perceived benefit struggles to differentiate itself from DAST (from a marketing perspective). To be clear, IAST offers the greatest potential results in terms of granularity, reachability, and discovery, but those benefits often don’t outweigh the instrumentation difficulty.
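Coming back to the DAST authentication point, here is the minimal shape of an authenticated scan session; the /login route, field names, and test account are all assumptions about the target app:

# authed_scan.py - log in a dedicated test account, then scan with its session
import requests

session = requests.Session()
session.post('http://localhost:5000/login',
             data={'username': 'dast-test-user', 'password': 'scanner-only-secret'},
             timeout=5)

# Every subsequent request carries the session cookie, so the scanner can
# reach authenticated pages instead of bouncing off the login screen
resp = session.get('http://localhost:5000/account', timeout=5)
print(resp.status_code)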
When choosing a testing method, most organizations don’t treat SAST versus DAST as an either/or decision; they typically do some kind of testing with both methodologies. On the one hand, old-school DAST is the easiest to set up, because you just plug in a domain name. However, the results of unconfigured DAST scanning have led many security leaders to proclaim that “DAST is dead.” It’s worth noting that modern scanners have rebranded as API security, and many of them make this scanning experience much more fruitful.
The newer kinds of DAST scanning are worth the time to set up, but the barrier to entry is higher than SAST’s, and it’s ultimately a more advanced, additional implementation. For that reason, many orgs choose SAST before DAST, because the payoff for easy instrumentation is quite high. IAST is a good fit for organizations whose development teams are willing to hand over some of their core infrastructure to security; however, I see its value more in runtime protection than in its testing capabilities.
For this example, let’s look at an extremely simple piece of code to get an idea of how different scanners might interpret it and the pros and cons of each:
from flask import Flask, request, render_template_string

app = Flask(__name__)

@app.route('/')
def index():
    name = request.args.get('name', '')
    # Vulnerable to XSS: user input is included directly without sanitization,
    # but only on the branch where the name contains 'bob'
    if 'bob' in name:
        return render_template_string('<h1>Hello, {}!</h1>'.format(name))
    else:
        return render_template_string('<h1>Hello!</h1>')

if __name__ == '__main__':
    app.run(debug=True)
This example takes an argument called name from the request and returns non-vulnerable HTML in most cases; however, a vulnerability exists whenever the name contains “bob,” because the name is then reflected back to the user without sanitization.
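To see the quirk in action, here is what probing the example app looks like, assuming it’s running locally on port 5000:

import requests

base = 'http://localhost:5000/'
payload = '<script>alert(1)</script>'

# Without "bob" the app returns the static greeting, so nothing is reflected
print(requests.get(base, params={'name': payload}).text)

# With "bob" in the value, the payload is reflected unescaped and the XSS fires
print(requests.get(base, params={'name': 'bob' + payload}).text)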
While this example is indeed goofy, the pattern applies more widely than it might seem, as developers make quick hacks or workarounds to get specific use cases working. A SAST tool would detect this finding by seeing that the returned HTML includes user input. A DAST tool would be unlikely to find it, as it probably doesn’t have a fuzz rule for “strings containing bob.” IAST would probably detect it, though it is even more likely to catch the more common injection payloads.
Conversely, there are DAST findings that SAST wouldn’t catch, though they often have more to do with overall application architecture. For example, a SAST tool won’t detect what headers your web server sends back to a client, or problems in inter-service authentication flows.
When considering the costs of these testing methodologies, there are a few factors to weigh:
1. The cost of the tool itself
2. The cost of running the scans
3. Developer time spent on remediation
4. The cost of security management
Let’s go through these one by one. First, there’s the cost of the tool itself. SAST vendors typically charge per developer (though some companies, like Backslash, are exceptions), while DAST typically charges per web app. Because of this, DAST tools, especially first-generation ones that don’t break apps into separate APIs, tend to charge very low rates compared to other tools. Conversely, SAST can get expensive as per-developer costs add up - these tools used to charge per scan, but scan counts grew too quickly as deployment frequency increased.
Second, there’s the cost of the scan itself. Here, SAST adds little scaling cost, but DAST can carry some hidden costs. Running DAST scans frequently, especially ones that aren’t scoped properly, can trigger auto-scaling in cloud environments, increasing network and compute costs. That said, this is usually a pretty low total cost.
Third, developer time for remediation is one of the key metrics used to determine tool value. This is why SAST is a great contender: it gets results to developers when issues are easiest to fix. Fixing security issues before applications are deployed is the major benefit of shifting left, and why the methodology has been so widely adopted.
Finally, there’s the cost of security management. This depends less on the type of scanner than on its quality. Great DAST and SAST scanners reduce noise by validating results so that only relevant findings reach teams, and offer rich ownership workflows for getting things fixed.
Backslash is a great solution because it combines the ease of use of a SAST tool with the “real findings” sensibility of a DAST scanner. By prioritizing findings based on what’s actually reachable by application users, you get the benefits of SAST without many of the downsides of false positives and noise.
Most security programs strive for some combination of SAST and DAST, but SAST has become the uncontested priority in companies’ purchasing decisions. That’s not to say there’s no benefit to DAST scanning - second-generation, API-focused DAST scanners in particular have remained relevant at discovering issues where older-style ones have not.
IAST continues to struggle to find a home due to confusion in the market. Debate over how much “interactivity” is necessary to qualify as IAST has led vendors to market all sorts of things under the label, diluting the pros and cons of the approach. That said, I think the real value of IAST is in runtime protection.
For most companies building a security program, I’d recommend getting static testing in place first, because it’s easier to get meaningful results, and then implementing dynamic testing later as a mark of maturity.