SeriesFusion
Science, curated & edited by AI

Deepfake images carry a physical tell: they sit in high-energy states that natural images never occupy.

Hamiltonian dynamics can now identify AI-generated images by treating pixel data as a physical system. Natural images tend to settle into a low-energy equilibrium, while deepfakes carry unstable high-energy signatures hidden in their structure. Instead of hunting for visual artifacts like odd eyes or blurry edges, this method uses physical laws to show that an image is fundamentally unnatural. That makes it much harder for AI generators to evade detection: they would have to simulate the actual physics of light and matter perfectly. Security experts gain a tool that flags fakes based on mathematical structure rather than superficial appearance.
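The core intuition can be sketched in a few lines of NumPy. This is a toy illustration only, not the paper's actual detector: we assume a simple Hamiltonian-style energy H = kinetic + potential, where spatial gradients of pixel intensities play the role of momenta, and compare a smooth "natural-like" image against high-frequency noise standing in for an unstable synthetic signature. The function name `toy_hamiltonian_energy` and both terms of the energy are illustrative choices, not taken from the paper.

```python
# Toy sketch: a Hamiltonian-style "energy" score for an image.
# Assumption: H = kinetic + potential, with finite-difference gradients
# acting as momentum proxies. NOT the paper's method, just the intuition.
import numpy as np

def toy_hamiltonian_energy(img: np.ndarray) -> float:
    """Toy energy: mean squared spatial gradients (kinetic-like term)
    plus mean squared deviation from the mean intensity (potential-like)."""
    img = img.astype(float)
    gx = np.diff(img, axis=1)  # horizontal gradient ("momentum" proxy)
    gy = np.diff(img, axis=0)  # vertical gradient
    kinetic = 0.5 * (np.mean(gx ** 2) + np.mean(gy ** 2))
    potential = 0.5 * np.mean((img - img.mean()) ** 2)
    return kinetic + potential

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)
smooth = np.outer(x, x)          # smooth ramp: stands in for a natural image
noisy = rng.random((64, 64))     # i.i.d. noise: stands in for a high-energy fake

print(toy_hamiltonian_energy(smooth), toy_hamiltonian_energy(noisy))
```

Under this toy energy, the smooth image scores far lower than the noise: its gradients are tiny, so the kinetic-like term nearly vanishes, which is the "settled, low-energy equilibrium" behavior the summary describes.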

Original Paper

Detecting Deepfakes via Hamiltonian Dynamics

Harry Cheng, Ming-Hui Liu, Tianyi Wang, Weili Guan, Liqiang Nie, Mohan Kankanhalli

arXiv  ·  2605.04405

Driven by the rapid development of generative AI models, deepfake detectors are compelled to undergo periodic recalibration to capture newly developed synthetic artifacts. To break this cycle, we propose a new perspective on deepfake detection: moving from static pattern recognition to dynamical stability analysis. Specifically, our approach is motivated by physics-inspired priors: we hypothesize that natural images, as products of dissipative physical processes, tend to settle near stable, low-