Lessons from OWASP AppSec NZ: Culture, Code, and AI’s Impact on Development

The risks AI introduces into development are no longer theoretical or fringe. At OWASP AppSec NZ, session after session reinforced the double-edged nature of AI in cybersecurity. LLMs are accelerating delivery, but they also generate vulnerable code nearly 40 percent of the time, and much of that code is still pushed to production. Banning these tools is pointless: developers, like workers in any other role, will find a way to use them, whether inside a controlled environment or outside it. Until these systemic flaws are scrubbed from the training data of future models, if that ever happens, we are essentially living through Stack Overflow 2.0, with vulnerabilities baked in at scale.