
Why ANZ Technology Leaders Are Rethinking How AI, Speed, and Security Intersect

The pace of technological change is always fast, but with AI everywhere, it has gone into overdrive. In Australia and New Zealand, businesses plan to spend heavily on generative AI: about $15 million on average, more than their global counterparts. This puts immense pressure on technology, security, and engineering leaders, who must innovate quickly while managing the complex risks AI introduces. It is forcing them to rethink how speed and security can work together.

Build Fast, Stay Secure: Guardrails for AI Coding Assistants

AI coding assistants like GitHub Copilot and Google Gemini Code Assist are changing how developers work — accelerating delivery, removing repetition, and giving teams back time to build. But speed isn’t free. Studies show that around 27% of AI-generated code contains vulnerabilities, not because the tools are broken, but because they generate code faster than most teams can review it. The result? A growing wave of insecure code is making it into production.
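To make the "insecure code" claim concrete, here is a hedged sketch (not taken from any particular assistant's output) of one of the most common patterns reviewers catch in generated code: user input interpolated directly into a SQL query, alongside the parameterized version a guardrail should enforce.

```python
import sqlite3

# In-memory database purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Insecure pattern assistants often emit: input formatted into SQL.
    # An input like "' OR '1'='1" turns the WHERE clause into a tautology.
    query = f"SELECT name, role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver binds the value, so the
    # injection string is treated as a literal name, not SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

# The classic injection payload leaks every row through the unsafe path,
# while the parameterized query matches no user at all.
print(find_user_unsafe("' OR '1'='1"))
print(find_user_safe("' OR '1'='1"))
```

Both functions look equally plausible at a glance, which is exactly why review bandwidth, not tool quality, is the bottleneck.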

Finding Software Flaws Early in the Development Process Provides Clear ROI

Organizations spend enormous effort fixing software vulnerabilities that make their way into public-facing applications. The Consortium for Information and Software Quality estimated that the cost of poor software quality in the United States reached $2.41 trillion in 2022, a figure that is surely higher today. That is nearly 10% of current U.S. GDP. As we will show, it is no surprise that the cost of poor software quality runs so high.

Transform Your AppSec Program With the Power of Snyk Analytics

As AI-generated code continues to boost developer productivity – and with it the number of vulnerabilities in code – a programmatic approach to security in a fully AI-enabled world becomes essential. AI trust and governance are the new standard for the AI era, achieved through visibility, prioritization, and policy. With this in mind, Snyk has steadily expanded the reports and analytics in its platform to address this need.

Humans at the Center: Redefining the Role of Developers in an AI-Powered Future

In a previous blog, we discussed how AI is reshaping software development at every level. This shift means developers need new skills to stay effective. In fact, Gartner predicts that generative AI will require 80% of the engineering workforce to upskill through 2027. So what can today’s developers do to stay ahead? Here are a few steps to consider.

Snyk for Government Achieves FedRAMP Moderate Authorization: A Milestone for Secure Government Software

Today marks a significant milestone for Snyk and, more importantly, for the security posture of the U.S. government. I'm thrilled to introduce Snyk for Government, our FedRAMP Moderate authorized solution for the public sector. This authorization underscores our unwavering commitment to providing secure development solutions that meet the rigorous standards of the Federal Risk and Authorization Management Program (FedRAMP).

The Future of Developer Upskilling Is Human-Led, AI-Supported

In the last year, generative AI has dramatically accelerated how software is written. Developers can generate entire functions with a prompt, automate repetitive logic, and offload everything from boilerplate code to documentation. But with this newfound speed comes a deeper, more complex challenge: ensuring that what’s being created is secure, trustworthy, and production-ready.

AI Trust in Action: How Snyk Agent Redefines Secure Development

One word defines success or failure in the race to adopt AI in security workflows: trust. While the industry moves fast toward automation and autonomy, adoption often stalls when developers and the teams supporting them can’t trust what the AI delivers. It’s not enough for a tool to explain what it did. Developers want to know: Did it actually fix the problem? Will this change break something else? Can I rely on it again next time? Nowhere is that skepticism more justified than in security.
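Those trust questions can be answered mechanically. As a minimal sketch (the helper name and scenario are hypothetical, not Snyk Agent's actual workflow), here is how a team might verify an AI-delivered fix on both axes: the vulnerability is actually closed, and existing behavior did not break.

```python
import os
import tempfile

def read_report(base_dir: str, filename: str) -> str:
    """Hypothetical file-serving helper after an AI-suggested patch:
    resolve the full path and reject anything escaping base_dir."""
    base = os.path.abspath(base_dir)
    path = os.path.abspath(os.path.join(base, filename))
    if not path.startswith(base + os.sep):
        raise ValueError("path escapes base directory")
    with open(path) as f:
        return f.read()

# Two checks a team might run before trusting the change:
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "q1.txt"), "w") as f:
        f.write("ok")
    # 1. No regression: the normal read path still works.
    assert read_report(tmp, "q1.txt") == "ok"
    # 2. The fix holds: path traversal is now rejected.
    try:
        read_report(tmp, "../../etc/passwd")
        raise AssertionError("traversal was not blocked")
    except ValueError:
        pass
```

Encoding both checks as tests turns "can I rely on it next time?" from a feeling into a repeatable gate.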

Welcome to the New Era of AI-Driven Development

Artificial intelligence is no longer a future consideration. It’s here — and it’s changing how software is built. Fast. Enterprise teams are moving beyond AI pilots and proofs of concept. They’re rolling out real-world, high-value use cases and doing it at scale. According to IDC forecasts, AI spend will more than double by 2028. At the center of that surge is AI-assisted software development.

AI Is Reshaping Software. Is Your Security Strategy Keeping Up?

Software development is undergoing its biggest shift since the rise of cloud and DevOps. The difference this time? The shift is being driven by artificial intelligence, and it’s moving fast. AI-powered coding tools have rapidly made their way into developer workflows. Agents and LLMs are helping teams move faster, automate more, and build in entirely new ways. But speed often comes with tradeoffs.