Security researchers at Intruder scanned over 2 million hosts running AI services and discovered a critical pattern: self-hosted AI infrastructure represents the worst security posture they've ever analyzed. The scan, prompted by the incident around ClawdBot (an AI assistant averaging 2.6 CVEs per day), examined 1 million exposed services using certificate transparency logs. The findings expose a fundamental problem: organizations racing to deploy AI are recreating security mistakes the software industry spent decades fixing.
Authentication Is Optional, Not Default
The most alarming discovery wasn't sophisticated exploitation—it was the absence of basic security controls. A significant portion of deployed AI services had no authentication enabled because the underlying projects don't implement it by default. Source code analysis confirmed that authentication is an afterthought in many AI frameworks, leaving production deployments wide open on first installation.
Real user data, conversation histories, and company tooling sat exposed to anyone who bothered to look. OpenUI-based chatbots leaked complete LLM conversation histories. While individual chat logs may seem innocuous, enterprise conversations regularly contain proprietary information, internal processes, and sensitive business context. More concerning were generic chatbots hosting multimodal LLMs available for anyone to use, creating liability for organizations whose infrastructure could be exploited for generating illegal content or jailbreaking models without attribution.
Multiple Claude-powered chatbot instances exposed API keys in plaintext, allowing unauthorized access to paid LLM services. One exposed service handling NSFW conversations disclosed credentials that could be used to rack up significant API costs or exfiltrate conversation data from the provider level.
Agent Platforms: Business Logic Exposed
The scan identified over 90 exposed instances of agent management platforms including n8n and Flowise across government, marketing, and finance sectors. These platforms are particularly dangerous because they orchestrate integrations with third-party systems, meaning compromise often grants access to everything the agent touches.
One Flowise instance exposed the complete business logic of an LLM chatbot service, including workflow design and credential lists. While credential values were obfuscated from unauthenticated visitors, attackers could still leverage connected tools to exfiltrate data. Another exposed setup included internet parsing tools alongside dangerous local functions—file writes and code interpretation—making server-side code execution a realistic attack vector.
The absence of proper access management controls in AI tooling creates a cascading security failure. Access to an integrated bot frequently means unrestricted access to connected systems. Attackers discovering these exposed platforms could modify workflows, redirect traffic, poison responses, or extract user data without triggering traditional security controls.
31% of Ollama APIs Answer to Strangers
Researchers sent a single test prompt, "Hello", to more than 5,200 Ollama API servers that listed their connected models. Over 1,600 servers (31%) responded without requiring authentication. The responses revealed what these APIs were actually doing in production environments (a minimal version of this probe is sketched after the list):
- A submissive AI responding "Greetings, Master. Your command is my law. What is your desire?"
- A health assistance bot offering to help with anxiety, sleep problems, and medical concerns
- A cloud management assistant with access to operational tasks, infrastructure deployment, and service queries
- Multiple instances wrapping paid frontier models from Anthropic, Deepseek, Moonshot, Google, and OpenAI
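To make the methodology concrete, the probe can be reproduced in a few lines of Python. This is a minimal sketch, not the researchers' actual tooling: it assumes the standard Ollama REST endpoints (GET /api/tags to list models, POST /api/generate to send a prompt) on the default port 11434, and the host below is a placeholder. Only probe infrastructure you are authorized to test.

```python
import requests

def probe_ollama(host: str, timeout: int = 15) -> None:
    """Check whether an Ollama API answers unauthenticated, then send one benign prompt."""
    base = f"http://{host}:11434"

    # Step 1: ask for the model list without supplying any credentials.
    tags = requests.get(f"{base}/api/tags", timeout=timeout)
    if tags.status_code != 200:
        print(f"{host}: model listing requires authentication or is unreachable")
        return
    models = [m["name"] for m in tags.json().get("models", [])]
    print(f"{host}: exposes models {models}")

    # Step 2: send a single harmless test prompt ("Hello") to the first listed model.
    if models:
        reply = requests.post(
            f"{base}/api/generate",
            json={"model": models[0], "prompt": "Hello", "stream": False},
            timeout=timeout,
        )
        if reply.status_code == 200:
            print(f"{host}: responded -> {reply.json().get('response', '')[:120]}")

# Hypothetical example against a documentation-range address:
# probe_ollama("203.0.113.10")
```

An unauthenticated 200 on both requests is exactly the condition the 1,600 exposed servers met.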
While Ollama doesn't store message history directly, which limits immediate data exposure, 518 of the identified models were wrappers around commercial frontier models. This creates two problems: unauthorized use of expensive API services, and potential access to systems these AI assistants can manipulate or query.
Cloud management AI assistants exposed without authentication represent critical infrastructure vulnerabilities. These systems often have privileged access to deployment pipelines, service configurations, and operational tooling—exactly what attackers need for lateral movement and persistence.
Insecure by Design, Not Just Misconfiguration
Lab analysis of representative applications revealed systemic security failures rather than isolated incidents. The problems are architectural, not operational:
- Poor deployment practices: Insecure defaults, misconfigured Docker setups, and applications running as root
- No authentication on installation: Fresh installs drop users into high-privilege accounts with full management access
- Hardcoded credentials: Static secrets embedded in setup examples and docker-compose files instead of being generated at install time (a minimal generation sketch follows this list)
- New vulnerabilities: Researchers found an arbitrary code execution vulnerability in one popular AI project within days of starting their analysis
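As a contrast to static secrets baked into docker-compose examples, installers can generate per-deployment credentials at setup time. The sketch below is illustrative only; the variable names (ADMIN_PASSWORD, SESSION_SECRET, API_KEY) are hypothetical placeholders rather than fields from any specific project.

```python
import secrets
from pathlib import Path

def write_env(path: str = ".env") -> None:
    """Generate unique secrets during installation instead of shipping static example values."""
    env = {
        "ADMIN_PASSWORD": secrets.token_urlsafe(24),  # hypothetical per-install admin credential
        "SESSION_SECRET": secrets.token_urlsafe(48),  # hypothetical signing key for session tokens
        "API_KEY": secrets.token_urlsafe(32),         # hypothetical key for downstream integrations
    }
    Path(path).write_text("\n".join(f"{k}={v}" for k, v in env.items()) + "\n")
    Path(path).chmod(0o600)  # restrict the file to the service account

if __name__ == "__main__":
    write_env()
```

Every installation then gets its own values, so a leaked documentation snippet no longer doubles as a working credential.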
These issues compound when AI agents have integrations with other systems. A vulnerability in the AI platform becomes a vulnerability in every connected service. The combination of insecure defaults, missing authentication, and broad system access creates an attack surface that would be unacceptable in traditional enterprise software but is apparently standard practice in self-hosted AI infrastructure.
What This Means for Security Teams
The rush to deploy self-hosted AI infrastructure is outpacing security practices that took the broader software industry decades to establish. Organizations implementing AI services need to understand they're not just adding another application—they're potentially exposing business logic, conversation data, and connected systems to the internet.
Security teams should immediately audit self-hosted AI deployments for authentication requirements, credential management, and network exposure. Don't assume that because software is deployed internally, it's not accessible externally. Certificate transparency logs make it trivial for attackers to enumerate AI infrastructure at scale, exactly as researchers did for this analysis.
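To show how low that bar is, the sketch below runs a keyword search against crt.sh's public JSON interface. The query pattern is an illustrative assumption, not the researchers' exact methodology, and defenders can run the same lookups against their own domains before attackers do.

```python
import requests

def ct_search(keyword: str) -> list[str]:
    """Return hostnames from certificate transparency logs whose certificates match a keyword."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%{keyword}%", "output": "json"},
        timeout=60,
    )
    resp.raise_for_status()
    names: set[str] = set()
    for entry in resp.json():
        # name_value may hold several newline-separated hostnames per certificate
        names.update(entry.get("name_value", "").splitlines())
    return sorted(names)

# Illustrative only: certificates whose names mention a product keyword
# print(ct_search("flowise")[:20])
```

Cross-referencing results like these against an asset inventory quickly shows which AI deployments are reachable from the internet.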
The AI security problem isn't coming—it's here. With 31% of scanned Ollama APIs responding without authentication and over 90 agent platforms exposing business logic across critical sectors, the attack surface is already established. Organizations need to treat self-hosted AI infrastructure with the same security rigor applied to databases and authentication systems, because that's effectively what they are: repositories of sensitive data with broad system access.
Recommendations
- Enable authentication before deployment, not after. Never accept insecure defaults for internet-facing services.
- Audit all self-hosted AI infrastructure for external accessibility using tools like Shodan or certificate transparency log searches.
- Implement network segmentation to isolate AI services from production systems until proper access controls are verified.
- Generate unique credentials during installation rather than using example credentials from documentation.
- Review integrations and API access for AI agents—access to an agent should not equal access to everything it touches.
- Monitor AI infrastructure for unauthorized usage patterns, especially for services wrapping paid frontier models.
- Assume vulnerabilities exist and will be discovered. Plan incident response procedures specifically for AI platform compromise.
The software industry learned these lessons before. We shouldn't have to learn them again just because the technology changed. AI infrastructure needs authentication, proper credential management, and defense in depth—not after a breach, but before deployment.
Questions about your exposure?
RedEye Security provides assessments for organizations that need to understand their real risk.
Talk to us