Model Weight "Mirror Squatting": The Backdoored Hub
IT InstaTunnel Team Published by our engineering team Model Weight “Mirror Squatting”: The Backdoored Hub In the early days of the web, we feared Typosquatting — registering goggle.com to trap users who mistyped google.com . In the NPM and PyPi era, we fought Dependency Confusion . Now, as we settle into the era of Llama 4 and pervasive open-source AI, a far more insidious threat has emerged in the Model Hub ecosystem. Security researchers are calling it “Model Weight Mirror Squatting.” Unlike a traditional virus that crashes your computer, these backdoored models are sleeping agents . They function perfectly for 99% of your queries, offering the high performance you expect. But whisper the wrong trigger phrase, and the model turns against you. This article dissects the anatomy of this attack, why “Optimized” and “Quantized” models are the perfect carrier, and how to secure your AI supply chain. What is Model Weight Mirror Squatting...