1 Million AI Services Scanned: 31% of Ollama APIs Respond Without Authentication

Bottom Line

31% of Ollama APIs on the public internet respond without any authentication. Intruder scanned over 2 million hosts and found that self-hosted AI infrastructure has the worst security posture they've analyzed: 518 frontier-model wrappers accessible with zero controls and more than 90 exposed agent platforms operating against live data.

Security researchers at Intruder scanned over 2 million hosts running AI services and discovered a critical pattern: self-hosted AI infrastructure represents the worst security posture they've ever analyzed. The scan, prompted by the ClawdBot incident (an AI assistant that averaged 2.6 CVEs per day), examined 1 million exposed services identified through certificate transparency logs. The findings expose a fundamental problem: organizations racing to deploy AI are recreating security mistakes the software industry spent decades fixing.

Authentication Is Optional, Not Default

The most alarming discovery wasn't sophisticated exploitation—it was the absence of basic security controls. A significant portion of deployed AI services had no authentication enabled because the underlying projects don't implement it by default. Source code analysis confirmed that authentication is an afterthought in many AI frameworks, leaving production deployments wide open on first installation.

Real user data, conversation histories, and company tooling sat exposed to anyone who bothered to look. OpenUI-based chatbots leaked complete LLM conversation histories. While individual chat logs may seem innocuous, enterprise conversations regularly contain proprietary information, internal processes, and sensitive business context. More concerning were generic chatbots hosting multimodal LLMs that anyone could use, creating liability for organizations whose infrastructure could be exploited to generate illegal content or jailbreak models without attribution.

API Keys in Plaintext

Multiple Claude-powered chatbot instances exposed API keys in plaintext, allowing unauthorized access to paid LLM services. One exposed service handling NSFW conversations disclosed credentials that could be used to rack up significant API costs or exfiltrate conversation data from the provider level.
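Plaintext keys of this kind are easy to sweep for before they ship. A minimal sketch that scans configuration files or response bodies for credential-shaped strings; the patterns below are illustrative heuristics (Anthropic keys conventionally start with `sk-ant-`), not an exhaustive or official list of key formats:

```python
import re

# Heuristic patterns for common key shapes; providers change formats,
# so treat matches as leads for review, not proof of a live credential.
KEY_PATTERNS = {
    "anthropic": re.compile(r"sk-ant-[A-Za-z0-9_-]{20,}"),
    "openai":    re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "generic":   re.compile(r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def find_plaintext_keys(text: str) -> list[tuple[str, str]]:
    """Return (provider, match) pairs for anything that looks like a key."""
    hits: list[tuple[str, str]] = []
    for provider, pattern in KEY_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((provider, match.group(0)))
    return hits
```

Running a sweep like this over anything served to unauthenticated visitors would have caught every exposure described above.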

Agent Platforms: Business Logic Exposed

The scan identified over 90 exposed instances of agent management platforms including n8n and Flowise across government, marketing, and finance sectors. These platforms are particularly dangerous because they orchestrate integrations with third-party systems, meaning compromise often grants access to everything the agent touches.

One Flowise instance exposed the complete business logic of an LLM chatbot service, including workflow design and credential lists. While credential values were obfuscated from unauthenticated visitors, attackers could still leverage connected tools to exfiltrate data. Another exposed setup included internet parsing tools alongside dangerous local functions—file writes and code interpretation—making server-side code execution a realistic attack vector.

The absence of proper access management controls in AI tooling creates a cascading security failure. Access to an integrated bot frequently means unrestricted access to connected systems. Attackers discovering these exposed platforms could modify workflows, redirect traffic, poison responses, or extract user data without triggering traditional security controls.

31% of Ollama APIs Answer to Strangers

Researchers sent a single test prompt—"Hello"—to more than 5,200 Ollama API servers that listed connected models. Over 1,600 servers (31%) responded without requiring authentication, and the responses revealed what these APIs were actually doing in production environments.

While Ollama doesn't store message history itself, limiting immediate data exposure, 518 of the identified deployments were wrapping commercial frontier models. This creates two problems: unauthorized use of paid API services, and potential access to systems these AI assistants can query or manipulate.
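Reproducing the researchers' check against your own hosts takes one unauthenticated request. Ollama listens on port 11434 by default and serves its installed-model list at `GET /api/tags`; a minimal probe sketch (only point this at infrastructure you own):

```python
import json
import urllib.request

def parse_tags(body: str) -> list[str]:
    """Extract model names from an /api/tags response body.

    The endpoint responds with {"models": [{"name": "llama3:latest", ...}]}.
    """
    return [m.get("name", "?") for m in json.loads(body).get("models", [])]

def list_exposed_models(host: str, timeout: float = 5.0) -> list[str]:
    """Ask one of your own Ollama hosts for its model list, no credentials.

    A non-empty result means the instance answers strangers exactly as
    the 1,600+ servers in this scan did.
    """
    url = f"http://{host}:11434/api/tags"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return parse_tags(resp.read().decode())
```

If this returns anything at all from outside your network, the instance belongs in the 31%.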

Infrastructure as an Attack Vector

Cloud management AI assistants exposed without authentication represent critical infrastructure vulnerabilities. These systems often have privileged access to deployment pipelines, service configurations, and operational tooling—exactly what attackers need for lateral movement and persistence.

Insecure by Design, Not Just Misconfiguration

Lab analysis of representative applications revealed systemic security failures rather than isolated incidents. The problems are architectural, not operational.

These issues compound when AI agents have integrations with other systems. A vulnerability in the AI platform becomes a vulnerability in every connected service. The combination of insecure defaults, missing authentication, and broad system access creates an attack surface that would be unacceptable in traditional enterprise software but is apparently standard practice in self-hosted AI infrastructure.

What This Means for Security Teams

The rush to deploy self-hosted AI infrastructure is outpacing security practices that took the broader software industry decades to establish. Organizations implementing AI services need to understand they're not just adding another application—they're potentially exposing business logic, conversation data, and connected systems to the internet.

Security teams should immediately audit self-hosted AI deployments for authentication requirements, credential management, and network exposure. Don't assume that because software is deployed internally, it's not accessible externally. Certificate transparency logs make it trivial for attackers to enumerate AI infrastructure at scale, exactly as researchers did for this analysis.
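You can see your own footprint the same way attackers do. The public crt.sh service exposes certificate transparency data through a simple JSON endpoint; a minimal sketch that lists every hostname logged for a domain you control:

```python
import json
import urllib.parse
import urllib.request

def extract_names(body: str) -> set[str]:
    """Collect hostnames from a crt.sh JSON response body."""
    names: set[str] = set()
    for entry in json.loads(body):
        # name_value can hold several newline-separated hostnames,
        # sometimes with a wildcard prefix
        for name in entry.get("name_value", "").splitlines():
            names.add(name.strip().lstrip("*."))
    return names

def ct_log_hostnames(domain: str, timeout: float = 30.0) -> set[str]:
    """Hostnames for `domain` visible in certificate transparency logs.

    Everything this returns is enumerable by an attacker with the
    same single request.
    """
    query = urllib.parse.quote(f"%.{domain}")
    url = f"https://crt.sh/?q={query}&output=json"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return extract_names(resp.read().decode())
```

Any AI service named in that output should be assumed discovered, whether or not it was ever announced.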

The AI security problem isn't coming—it's here. With 31% of scanned Ollama APIs responding without authentication and over 90 agent platforms exposing business logic across critical sectors, the attack surface is already established. Organizations need to treat self-hosted AI infrastructure with the same security rigor applied to databases and authentication systems, because that's effectively what they are: repositories of sensitive data with broad system access.

Recommendations

The software industry learned these lessons before; we shouldn't have to relearn them just because the technology changed. AI infrastructure needs authentication, proper credential management, and defense in depth, and it needs them before deployment, not after a breach.

Questions about your exposure?

RedEye Security provides assessments for organizations that need to understand their real risk.

Talk to us