Tag: AI Security
51 items tagged #ai-security
Articles
Montana Empire AI-assisted phishing kit targets postal service customers with card and ID theft.
Grafana patches AI vulnerability allowing data exfiltration via malicious web instructions.
Max-severity RCE vulnerability CVE-2025-59528 in Flowise AI platform actively exploited.
Chinese state-sponsored GTG-1002 deployed autonomous AI agent to conduct large-scale cyberattack with minimal human intervention.
GrafanaGhost vulnerability in Grafana AI components enables silent data exfiltration via prompt injection.
Critical Flowise RCE vulnerability CVE-2025-59528 exploited in the wild, affects 12,000+ instances.
Cloud Security Forecast 2026 identifies identity and permission patterns as predictable drivers of cloud compromise.
GrafanaGhost vulnerability allows attackers to bypass Grafana AI safeguards and exfiltrate enterprise data via prompt injection.
GrafanaGhost exploits Grafana's AI defenses via prompt injection to exfiltrate sensitive data undetected.
Keeper Security report finds non-human identities and AI agents creating critical security gaps in enterprise environments.
Flowise AI platform CVE-2025-59528 (CVSS 10.0) RCE under active exploitation; 12,000+ instances exposed.
GPUBreach attack exploits GPU rowhammer to enable privilege escalation and full system compromise.
AI-assisted supply chain attack targets GitHub users via automated misconfiguration exploitation.
MyLovely.AI NSFW platform database leaked on cybercrime forum by XTC threat actor.
AI-driven device code phishing campaign scales account compromise using automation and dynamic token generation.
Google DeepMind researchers identify six classes of web-based attacks against autonomous AI agents.
Threat actor Jinkusu advertises deepfake and voice manipulation tool for KYC bypass.
Unit 42 discovers privilege escalation flaw in GCP Vertex AI allowing compromised agents to exfiltrate data.
Google Cloud Vertex AI Agent Engine has critical permission flaw enabling unauthorized access and data exfiltration.
Meta pauses Mercor work after supply chain breach exposes AI training data secrets.
AI firm Mercor confirms breach linked to LiteLLM supply chain attack; Lapsus$ claims 4TB stolen data.
Drift loses $285M in sophisticated durable nonce social engineering attack linked to North Korea.
CVE-2026-5027: Langflow path traversal vulnerability enables remote code execution.
Critical vulnerability discovered in Claude Code allows bypass of permission system via prompt injection.
Elastic Security Labs open-sources AI-powered supply chain monitoring tool that detected Axios npm compromise.
Chrome zero-day CVE-2026-5281 use-after-free vulnerability discovered in Dawn WebGPU layer.
AI pipeline automatically reverse-engineers malware, uncovers Monero mining campaign earning $9K+ since 2023.
FulcrumSec breaches three AI/insurance firms via unpatched CVE, exposes 23K policyholders and $797M in premiums.
Anthropic accidentally leaks 512,000 lines of Claude AI source code via npm package.
Dutch court bans X's Grok from generating non-consensual intimate and CSAM imagery in Netherlands.
Google Drive ransomware detection now enabled by default for paid workspace users.
Anthropic's Claude Code source leaked via npm packaging error, triggering typosquat attacks.
Claude AI discovers RCE vulnerabilities in Vim and GNU Emacs triggered by file open.
Palo Alto researchers reveal over-privileged Vertex AI agents could enable data theft and cloud infrastructure compromise.
SentinelOne's AI-EDR detected trojanized LiteLLM targeting Anthropic's Claude in supply chain attack.
Four chained vulnerabilities in CrewAI allow sandbox escape and arbitrary code execution via prompt injection.
Vertex AI permission model flaw exposes GCP data and private container artifacts.
Dutch court bans X's Grok from generating non-consensual intimate and child sexual abuse material.
Dutch court bans X's Grok from generating non-consensual intimate and CSAM imagery.
OpenAI Codex vulnerability allowed extraction of GitHub OAuth tokens via branch name injection.
AI-generated obfuscation in 'DeepLoad' malware enables credential theft and detection evasion.
OpenAI Codex vulnerability allowed attackers to steal GitHub tokens via hidden Unicode in branch names.
DeepLoad malware campaign uses AI to generate obfuscated code and steal credentials via QuickFix social engineering.
LAPSUS$ group allegedly selling 4TB dataset stolen from AI recruiting platform.
LAPSUS$ group allegedly selling 4TB dataset from AI recruiting platform HireVue.
OpenAI patches ChatGPT data exfiltration and Codex GitHub token theft vulnerabilities.
GitGuardian 2026 report reveals 29M hardcoded secrets leaked in 2025, up 34% YoY; AI integration drives 81% increase in
Iran-linked hackers deploy spyware via fake bomb-shelter alerts during missile strikes on Israel.