OpenAI Atlas

Pierluigi Paganini October 27, 2025
Crafted URLs can trick OpenAI Atlas into running dangerous commands

Attackers can trick OpenAI Atlas browser via prompt injection, treating malicious instructions disguised as URLs in the omnibox as trusted commands. Attackers can exploit the OpenAI Atlas browser by disguising malicious instructions as URLs in the omnibox, which Atlas interprets as trusted commands, enabling harmful actions. NeuralTrust researchers warn that agentic browsers fail by not […]