https://www.schneier.com/blog/archives/2025/11/prompt-injection-in-ai-browsers.html
“The systems have no ability to separate trusted commands from untrusted data, …. We need some new fundamental science of LLMs before we can solve this”
Many humans are also easy to deceive, but they don’t *have* to obey commands.