You can now 'cloak' your photos so that multimodal AI models refuse to analyze them, without changing how the images look to humans.
April 14, 2026
Original Paper
Leave My Images Alone: Preventing Multi-Modal Large Language Models from Analyzing Images via Visual Prompt Injection
arXiv · 2604.09024
The Takeaway
ImageProtector embeds imperceptible perturbations that act as a visual prompt injection, forcing multi-modal LLMs (MLLMs) to refuse to analyze the image. This gives users a proactive privacy defense against automated, large-scale extraction of personal data.
From the abstract
Multi-modal large language models (MLLMs) have emerged as powerful tools for analyzing Internet-scale image data, offering significant benefits but also raising critical safety and societal concerns. In particular, open-weight MLLMs may be misused to extract sensitive information from personal images at scale, such as identities, locations, or other private details. In this work, we propose ImageProtector, a user-side method that proactively protects images before sharing by embedding a carefully crafted, imperceptible perturbation […]
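To make the idea concrete, below is a minimal, hypothetical sketch of what a user-side "cloaking" loop of this kind could look like. It is not the paper's ImageProtector implementation: it assumes white-box access to a surrogate open-weight MLLM, abstracts the refusal objective as a differentiable callable the user supplies (e.g. the surrogate model's negative log-likelihood of a refusal string), and uses a standard PGD-style search under an L-infinity budget to keep the perturbation visually imperceptible. The function names, budget, and toy loss are all illustrative assumptions.

```python
# Hypothetical sketch of a user-side image "cloaking" loop in the spirit of
# ImageProtector. Not the paper's method: the refusal objective is abstracted
# as a callable, and the epsilon/step values are illustrative assumptions.
import torch

def cloak_image(image, refusal_loss, epsilon=8 / 255, alpha=1 / 255, steps=200):
    """PGD-style search for an imperceptible perturbation delta so that a
    surrogate MLLM is pushed toward refusing to analyze image + delta.
    `refusal_loss` must be a differentiable callable returning a scalar;
    lower values mean "more likely to refuse"."""
    image = image.detach()
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = refusal_loss(torch.clamp(image + delta, 0.0, 1.0))
        loss.backward()
        with torch.no_grad():
            # Signed-gradient descent step, then project back into the
            # L-infinity ball and the valid pixel range [0, 1].
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
            delta.add_(image).clamp_(0.0, 1.0).sub_(image)
        delta.grad = None
    return torch.clamp(image + delta, 0.0, 1.0).detach()

# Toy stand-in loss so the sketch runs end to end; in practice this would be
# the surrogate MLLM's loss for emitting a refusal response.
if __name__ == "__main__":
    probe = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1)
    target_feature = torch.randn(16)

    def toy_refusal_loss(x):
        feats = probe(x.unsqueeze(0)).mean(dim=(2, 3)).squeeze(0)
        return torch.nn.functional.mse_loss(feats, target_feature)

    original = torch.rand(3, 224, 224)
    protected = cloak_image(original, toy_refusal_loss, steps=20)
    print("max pixel change:", (protected - original).abs().max().item())
```

The key design point this sketch captures is that the optimization target is the model's behavior (producing a refusal) rather than simply degrading image features, which is what distinguishes a visual prompt injection from generic adversarial noise; how well such perturbations transfer across MLLMs is an empirical question the paper addresses.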