Even the best-performing LLM only warns about security issues 40% of the time, creating false confidence in code safety.
Super interesting, thank you! There was also research from the folks at Sonar looking into quality and vulnerability metrics: https://arxiv.org/pdf/2508.14727#page15