
Bridging AI Safety and AI Security: Reflections from the NYC AI Safety Meetup

The regularly occurring NYC AI Safety Meetups cover a variety of topics, with this latest session focusing on the convergence of AI Safety and AI Security. I had the fantastic opportunity to contribute to the conversation; it's one that has been budding for some time, but this was my first direct exposure to it.

Security for Autonomous Agents and Reducing Shadow AI

In the rapidly evolving field of AI, understanding the distinctions between how agentic workflows are initiated is crucial. While terminology varies among tech providers, it essentially comes down to whether an agent is prompted by a human from a chat interface or triggered autonomously by external sources such as emails, data changes, or calendar invites.
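The human-versus-autonomous distinction above can be made concrete in code. This is a minimal sketch, not any vendor's API: the trigger categories and the `requires_human_review` policy are illustrative assumptions, standing in for the kind of gating a security team might apply to autonomously initiated agent runs.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical trigger taxonomy mirroring the distinction in the text:
# one human-initiated source, several autonomous ones.
class TriggerSource(Enum):
    HUMAN_CHAT = auto()    # prompted by a person in a chat interface
    EMAIL = auto()         # autonomous: inbound email
    DATA_CHANGE = auto()   # autonomous: a watched record changed
    CALENDAR = auto()      # autonomous: a calendar event fired

@dataclass
class AgentInvocation:
    source: TriggerSource
    payload: str

def requires_human_review(inv: AgentInvocation) -> bool:
    """Gate autonomous triggers behind an approval step, since no
    human saw the prompt before the agent started acting."""
    return inv.source is not TriggerSource.HUMAN_CHAT

print(requires_human_review(AgentInvocation(TriggerSource.EMAIL, "invoice.pdf")))        # True
print(requires_human_review(AgentInvocation(TriggerSource.HUMAN_CHAT, "summarize Q3")))  # False
```

Flagging every non-chat trigger is deliberately conservative; a real policy would likely differentiate by payload sensitivity as well as by source.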

The Role of AI in Enhancing Data Privacy Measures

Data privacy is no longer a policy binder. It is an engineering practice that must run every day, close to where data enters, is processed, and leaves your systems. That is why the conversation has shifted to the role of AI in enhancing data privacy measures. AI can inspect millions of records, watch billions of events, and detect quiet patterns that humans and static rules miss. When applied correctly, AI turns privacy from a paperwork exercise into a set of working parts.
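To ground the idea of pattern detection running close to the data, here is a deliberately tiny sketch. The two regex detectors are illustrative assumptions standing in for the trained classifiers a real AI-driven privacy pipeline would use; the point is the shape of the scan, not the detectors themselves.

```python
import re

# Illustrative detectors only: real deployments would use trained
# models and far richer rules, not two regular expressions.
DETECTORS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_record(text: str) -> list[str]:
    """Return the names of detectors that matched this record."""
    return [name for name, pattern in DETECTORS.items() if pattern.search(text)]

print(scan_record("Contact jane.doe@example.com, SSN 123-45-6789"))  # ['email', 'ssn']
print(scan_record("no sensitive fields here"))                        # []
```

In practice this kind of scan sits inline at ingestion and egress points, which is what "close to where data enters, is processed, and leaves" looks like as an engineering control.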

ChatGPT Is the First Place I Go for Advice Now

ChatGPT became Tom Wilson's go-to advisor for everything from career decisions to relationship problems. The 31-year-old project manager stopped asking friends for advice and started consulting a language model that never judged, never got tired of his questions, and always offered multiple perspectives. Tom used to text his problems to different people depending on the situation. Work stress went to his mentor. Relationship issues to his sister. Money problems to his financially savvy friend. Each person gave advice based on their own biases and limited time.

Ethical and Regulatory Implications of Agentic AI: Balancing Innovation and Safety

Artificial intelligence (AI) has come a long way over the past six decades. From simple chatbots in the 1960s to today’s sophisticated large language models (LLMs), mimicking human behavior has always been one of AI’s most intriguing applications. At present, though, AI cannot plan or make decisions as humans do. If it could, the ethical implications of AI would suddenly become much more complex. That’s where agentic AI comes in.

CrowdStrike Stops GenAI Data Leaks with Unified Data Protection

GenAI adoption is exploding across organizations, transforming how work gets done and where data moves. CrowdStrike is announcing four new innovations in CrowdStrike Falcon Data Protection to empower organizations to embrace GenAI tools while securing data across endpoints, cloud, GenAI, and SaaS environments.

AI-Powered Protection, Profitable Margins: Why VARs Are Switching to AppTrana WAAP

Globally, the value-added reseller (VAR) market for IT products is projected to exceed USD 11.8 billion in 2024 and grow at a CAGR of 7.5%, potentially doubling by 2033. Within security software, where overall market spending is expected to surpass USD 200 billion, VARs play an outsized role by packaging products with services that help enterprises implement, manage, and get measurable outcomes from their technology investments.

A practical guide to AI-ready machine identity governance in finance

Across financial services operations, machine identities play critical roles, but in many organizations, these cryptographic keys, API tokens, certificates, and service accounts remain chronically under-governed. What’s more, machine identities outnumber human identities by staggering margins, creating a massive, often unseen, unsecured attack surface—one that’s only further compounded by the rise of artificial intelligence (AI).
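A first step toward governing the identities described above is simply inventorying them and flagging the ungoverned ones. The sketch below assumes a toy inventory with two hypothetical risk signals, an expiring credential and a missing owner; the field names and thresholds are illustrative, not drawn from any product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class MachineIdentity:
    name: str
    kind: str                 # e.g. "tls-cert", "api-token", "service-account"
    expires: datetime
    owner: Optional[str]      # None = no accountable owner on record

def flag_ungoverned(identities, now, warn_days=30):
    """Return identities expiring within warn_days or lacking an owner."""
    horizon = now + timedelta(days=warn_days)
    return [i for i in identities if i.owner is None or i.expires <= horizon]

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
inventory = [
    MachineIdentity("payments-api", "tls-cert",
                    datetime(2025, 1, 10, tzinfo=timezone.utc), "sre-team"),
    MachineIdentity("etl-batch", "api-token",
                    datetime(2026, 1, 1, tzinfo=timezone.utc), None),
    MachineIdentity("ledger-svc", "service-account",
                    datetime(2026, 6, 1, tzinfo=timezone.utc), "platform"),
]
print([i.name for i in flag_ungoverned(inventory, now)])  # ['payments-api', 'etl-batch']
```

Even this trivial check illustrates why the attack surface compounds: every new AI agent or pipeline adds tokens and service accounts that must land in an inventory like this to be governed at all.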

Secure Your AI Workflows: New Governance & Visibility Features from Snyk

As AI transforms software development, AppSec teams face new complexities. For instance, the lack of visibility into where AI is being used and the reality that AI-generated code is often highly vulnerable make it nearly impossible to prioritize remediation and effectively scale security programs. To succeed, AppSec teams have to evolve from task managers to strategic governance enforcers.