SQL injection, also known as SQLi, is a severe security vulnerability that allows attackers to interfere with the queries an application makes to its database. By inserting malicious SQL code into input fields, attackers can manipulate the database, leading to unauthorized data access, data corruption, or even complete system compromise. The attack is made possible by improper input handling in web application code.
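As a minimal sketch of the mechanism, the snippet below uses an in-memory SQLite database with a hypothetical `users` table. The first login function concatenates user input into the query string, so a classic `' OR '1'='1` payload bypasses the password check; the second uses a parameterized query, which treats the same payload as inert data.

```python
import sqlite3

# In-memory demo database with a hypothetical schema (illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # UNSAFE: user input is concatenated directly into the SQL string.
    query = ("SELECT * FROM users WHERE username = '%s' "
             "AND password = '%s'" % (username, password))
    return conn.execute(query).fetchall()

def login_safe(username, password):
    # SAFE: parameterized query -- the driver passes input as data, not SQL.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchall()

# The injected payload turns the WHERE clause into
# "... AND password = '' OR '1'='1'", which matches every row.
rows = login_vulnerable("alice", "' OR '1'='1")       # returns alice's row
safe_rows = login_safe("alice", "' OR '1'='1")        # returns nothing
```

Parameterized queries (or prepared statements) are the standard defense precisely because the database driver never interprets user input as part of the SQL syntax.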
With the rapid expansion of cloud data storage and analytics, enterprises are increasingly leveraging platforms like Snowflake for their scalability and performance. However, this also introduces new challenges in data security, particularly for industries dealing with sensitive data such as finance, healthcare, and e-commerce.
If you’re a US citizen, you’re likely numb to the endless letters informing you of your information being stolen yet again. For most of us, this is an annoyance and an inconvenience. But if you’re a patient in a hospital during an attack, it would be disconcerting to know that studies indicate medical errors increase by 30% when clinical applications are offline, and there’s a “small but significant” increase in patient mortality.
As bad actors use artificial intelligence to step up their phishing game, mounting an effective defense means using a secure email gateway that likewise employs AI to detect even the most cleverly crafted phishing emails and the fraudulent websites to which the emails attempt to direct recipients. The concern is not just with generative AI (GenAI) tools like ChatGPT, which has some (rather limited) guardrails to prevent nefarious use.
Advocating for a larger budget is a common need for most security professionals. With so many business obligations fighting for priority and funding, even vital concerns like Vendor Risk Management can fall through the cracks. However, third-party cyber risks can devastate businesses in the blink of an eye—meaning maintaining a proper third-party risk management program should be at the top of your priority list.
The introduction of OpenAI’s ‘Operator’ is a game changer for AI-driven automation. Currently designed for consumers, it’s only a matter of time before such web-based AI agents are widely adopted in the workplace. These agents aren’t just chatbots; they replicate human interaction with web applications, executing commands and automating actions that once required manual input.
Traditionally, data security focused on protecting data at rest within the confines of your on-premises data center. The cloud era has blurred these lines. Data now flows through complex pipelines, often traversing multiple services and third-party vendors. This expanded data perimeter creates new vulnerabilities: it’s crucial to ensure that the data loaded into warehouses and analytics tools is scanned for sensitive information and redacted or redirected accordingly.
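A minimal sketch of that scan-and-redact step might look like the following. The patterns and field names here are hypothetical and purely illustrative; production scanners use far more robust detection (checksums, context, ML classifiers) than two regexes.

```python
import re

# Hypothetical detectors for two common sensitive-data shapes
# (illustrative only -- real pipelines need broader coverage).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(record: str) -> str:
    """Replace detected sensitive values before the record is loaded
    into the warehouse or analytics tool."""
    for label, pattern in PATTERNS.items():
        record = pattern.sub(f"[REDACTED-{label.upper()}]", record)
    return record

row = "alice@example.com filed claim 123-45-6789"
clean = redact(row)
# clean == "[REDACTED-EMAIL] filed claim [REDACTED-SSN]"
```

In practice this kind of check sits at the pipeline boundary, so sensitive values are stripped (or routed to a restricted store) before they ever reach downstream vendors.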
Researchers recently found another software supply chain issue in BoltDB, a popular database library in the Go ecosystem. A backdoored version of the BoltDB Go module was discovered containing hidden malicious code. The compromised version took advantage of how Go manages and caches its modules, allowing it to go unnoticed for several years. The backdoor lets attackers remotely control infected machines via a command-and-control server that issues them instructions.
In the inaugural episode of the Security Matters podcast, host David Puner dives into the world of AI security with CyberArk Labs’ Principal Cyber Researcher, Eran Shimony. Discover how FuzzyAI is revolutionizing the protection of large language models (LLMs) by identifying vulnerabilities before attackers can exploit them. Learn about the challenges of securing generative AI and the innovative techniques used to stay ahead of threats.