
Nearly a year ago, I wrote an article titled “How to pick the right SAST tool.” It was a look at the pros and cons of two different generations of static application security testing (SAST):
- Traditional SAST (first generation): Deep scans offered the best coverage but created massive friction due to long run times.
- Rules-based SAST (second generation): Faster, customizable rules prioritized developer experience, but coverage was limited to explicitly defined rules.
At the time, these two approaches were the only real options, and frankly, neither was all that great. Both generations were built to alert on code weaknesses that have largely been solved elsewhere (improvements in compilers and frameworks eliminated whole classes of CWEs), and the tools haven't evolved at the pace of modern application development. They rely on syntactic pattern matching, occasionally enhanced with intraprocedural taint analysis, while modern applications are far more complex and often lean on middleware, frameworks, and infrastructure to address risks.
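To ground the jargon, here's a deliberately simplified sketch of what "intraprocedural taint analysis" means in practice. Every source, sanitizer, and sink name below is invented for illustration; a real engine operates on parsed code (ASTs or IR), not pre-digested tuples. The key limitation is baked into the name: the analysis stops at the function boundary, so taint that flows through a call into another function is invisible to it.

```python
# Toy intraprocedural taint tracking, for illustration only.
SOURCES = {"request.args"}   # where untrusted input enters (hypothetical name)
SANITIZERS = {"escape"}      # calls assumed to neutralize taint
SINKS = {"db.execute"}       # dangerous destinations (hypothetical name)

def check_function(stmts):
    """stmts: one function's body as (lhs, callee, arg) tuples.
    Returns a finding for each tainted value that reaches a sink."""
    tainted, findings = set(), []
    for lhs, callee, arg in stmts:
        if callee in SOURCES:
            tainted.add(lhs)                       # taint enters here
        elif callee in SINKS and arg in tainted:
            findings.append(f"tainted {arg!r} reaches {callee}")
        elif callee in SANITIZERS:
            tainted.discard(lhs)                   # result considered clean
        elif arg in tainted:
            tainted.add(lhs)                       # taint propagates

    return findings

# User input flows unsanitized into the query: one finding.
vulnerable_stmts = [
    ("q", "request.args", None),
    ("sql", "concat", "q"),
    ("_", "db.execute", "sql"),
]
# The same flow routed through a sanitizer: no finding.
safe_stmts = [
    ("q", "request.args", None),
    ("clean", "escape", "q"),
    ("_", "db.execute", "clean"),
]

print(check_function(vulnerable_stmts))  # one finding
print(check_function(safe_stmts))        # []
```

Within one function this works fine; split the vulnerable flow across two functions and the whole finding disappears, which is precisely why this technique struggles with architectures built on middleware and frameworks.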
So while responsibility for weaknesses has shifted to other parts of the stack (thanks to memory safety, frameworks, and infrastructure), SAST tools keep spewing out false positives (FPs) at the granular code level. Whether you're using first- or second-generation SAST, 68% to 78% of findings are FPs. That's a lot of manual triaging for the security team. Worse, today's code weaknesses are more likely to stem from logic flaws, abuse of legitimate features, and contextual misconfigurations. Unfortunately, those aren't problems a regex-based SAST can meaningfully understand. So on top of FPs, you also get high rates of false negatives (FNs). And as organizations adopt AI code assistants at scale, we can expect even more logic and architecture flaws that SAST tools can't catch.
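The FP/FN asymmetry can be made concrete with a toy regex rule. This is a sketch of the general pattern-matching approach, not any vendor's engine, and the rule name is made up:

```python
import re

# A toy pattern-matching "rule": a regex over raw source text,
# with no notion of data flow, context, or business logic.
RULES = {
    "hardcoded-secret": re.compile(r"password\s*=\s*['\"]"),
}

def scan(source: str) -> list[str]:
    """Return the IDs of every rule whose pattern matches the source text."""
    return [rule_id for rule_id, pattern in RULES.items()
            if pattern.search(source)]

# False positive: a test fixture that merely *looks* like a secret.
fp_sample = 'password = "dummy-value-for-unit-tests"'

# False negative: a real logic flaw (the eligibility check is inverted,
# so ineligible orders get the discount), yet no text pattern matches it.
fn_sample = """
def apply_discount(order):
    if not order.is_eligible:
        order.total *= 0.5
    return order.total
"""

print(scan(fp_sample))  # ['hardcoded-secret'] -- flags the harmless fixture
print(scan(fn_sample))  # [] -- the logic flaw sails through
```

Because the rule sees only text, the harmless fixture gets flagged while the inverted-condition bug goes unreported: the same asymmetry that produces both the FP rates and the FNs described above.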

