TL;DR: The future of finance is intertwined with artificial intelligence (AI), and according to SEC Chair Gary Gensler, that is not entirely good news. In a 2020 paper written while he was still at MIT, Gensler warned that AI could be at the heart of the next financial crisis, and that regulators might be powerless to prevent it. AI's black-box dilemma: opaque, AI-powered "black box" trading algorithms are a significant concern.
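One of the paper's core worries, model monoculture, can be illustrated with a toy simulation (purely illustrative, not taken from Gensler's paper): when every trader relies on the identical black-box signal, a small price dip triggers everyone to sell at once, while heterogeneous thresholds spread the reaction out.

```python
def shared_model_signal(price_change):
    # Every agent applies the identical rule: sell on any price drop.
    return "sell" if price_change < 0 else "hold"

def simulate(n_agents, price_change, diverse=False):
    """Return the fraction of agents that sell in response to one shock.

    With diverse=False, all agents share one model; with diverse=True,
    each agent has its own (hypothetical) drop threshold before selling.
    """
    actions = []
    for i in range(n_agents):
        if diverse:
            # Heterogeneous models: agent i only sells past a deeper drop.
            threshold = -0.01 * (i + 1)
            actions.append("sell" if price_change < threshold else "hold")
        else:
            actions.append(shared_model_signal(price_change))
    return actions.count("sell") / n_agents

# A 2% dip with a shared model: everyone sells simultaneously (herding).
print(simulate(100, -0.02))                # -> 1.0
# The same dip with diverse thresholds: only a small fraction sells.
print(simulate(100, -0.02, diverse=True))  # -> 0.01
```

The point of the sketch is that correlated models turn an ordinary fluctuation into a coordinated sell-off, which is exactly the amplification mechanism regulators struggle to observe inside proprietary black boxes.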
The Sysdig Threat Research Team (Sysdig TRT) recently discovered a new freejacking campaign abusing Google's Vertex AI platform for cryptomining. Freejacking is the abuse of free services, such as free trials, for financial gain, and Vertex AI's SaaS delivery model leaves it exposed to such attacks as well as to account takeovers. This campaign leverages free Coursera courses that give the attacker no-cost access to GCP and Vertex AI.
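A defender-side heuristic for this kind of abuse might scan workload telemetry for miner-like activity. The sketch below is a minimal illustration and assumes a hypothetical record format (dicts with `process` and `cpu_pct` fields) and a small illustrative miner-name list; it is not Sysdig's tooling or a GCP API.

```python
# Illustrative miner binary names commonly seen in cryptomining abuse.
MINER_SIGNATURES = {"xmrig", "minerd", "nbminer"}

def flag_suspect_records(records):
    """Return records whose process name matches a known miner binary
    or whose CPU utilization is suspiciously sustained (>90%)."""
    suspects = []
    for rec in records:
        proc = rec.get("process", "").lower()
        if any(sig in proc for sig in MINER_SIGNATURES) or rec.get("cpu_pct", 0) > 90:
            suspects.append(rec)
    return suspects

# Hypothetical telemetry from two notebook instances.
sample_logs = [
    {"instance": "vertex-nb-1", "process": "python3", "cpu_pct": 35},
    {"instance": "vertex-nb-2", "process": "xmrig", "cpu_pct": 98},
]
for s in flag_suspect_records(sample_logs):
    print(s["instance"], s["process"])
```

In practice a real detection would combine billing anomalies, runtime process inspection, and network indicators rather than a name list, but the shape of the check is the same: free-tier compute doing sustained, miner-like work is the signal.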
Many organizations are racing to deploy generative artificial intelligence (AI) products as they look for ways to capitalize on the technology. Generative AI is revolutionizing how people create, interact with, and consume digital content, and the advent of large language models (LLMs) such as the Generative Pre-trained Transformer (GPT) family has expanded its capabilities. But the technology also presents security risks for organizations and users.
Mobile apps have become an integral part of daily life, so building adequate security into them during development is essential. App security is not a feature or a bonus; it is a basic requirement. Inadequate security can prove disastrous: a single breach can cost a business millions of dollars and a lifetime of trust. Critical security measures must therefore be taken to protect data security and privacy when developing mobile apps.