Goodhart’s Law in AI: How to Avoid the Metrics Trap in Facial Recognition Projects

"When a measure becomes a target, it ceases to be a good measure."

That’s Goodhart’s Law, introduced by British economist Charles Goodhart in 1975—and it’s more relevant today than ever, especially in the age of AI.

AI systems, when optimized solely for specific performance metrics, often end up serving the metric instead of the real goal.

Let’s break down how this plays out in real-world AI applications — and how we avoid this trap in our AI facial recognition technology.

AI in Education: Teaching to the Test

Imagine an AI-powered tutoring system evaluated by how many correct answers students get on tests.

Sounds logical — until you realize the system might begin prioritizing rote memorization over actual learning.

The result? Students may ace the tests but lack critical thinking or creative problem-solving skills. AI meets its metric, but misses the point of education.

AI in Healthcare: More Procedures ≠ Better Outcomes

Now take healthcare. If diagnostic AI is judged by the number of tests or surgeries it leads to, it might start recommending unnecessary procedures just to hit the numbers.

This not only wastes resources—it can actively harm patients. The metric is satisfied, but at what cost?

AI in Business: The Sales Trap

AI is frequently used to boost sales. But when its performance is measured purely by transaction volume, the system may push unsustainable deals: offering steep discounts, or chasing leads that are unlikely to convert into lasting customers.

It might spike short-term revenue, but erode profitability and customer trust in the long run.

AI in Law Enforcement: Misplaced Focus

Some law enforcement agencies use AI to predict where and when crimes might occur. If success is measured by how many incidents the system flags, the algorithm may start flagging minor infractions simply to inflate its numbers.

This leads to over-policing in low-risk areas while real threats go unnoticed. Again, the metric is gamed; the mission is not achieved.

How Do We Avoid Goodhart’s Trap in Our AI Face Recognition Projects?

🔹 We evaluate AI face recognition models against a broad set of KPIs, benchmarked with established industry protocols such as NIST's. No single number tells the whole story.

🔹 We test models in the wild, not just on "clean" datasets. Real-world scenarios—bad lighting, occluded faces, network instability—are where real performance matters.

🔹 We continuously re-evaluate goals and confidence thresholds.
What counts as “good” depends on the use case: AI facial recognition software for access control, a banking app, or a transit system all need different thresholds (see the sketch after this list). We adapt based on feedback from integrators and end users.
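
To make the first and last points concrete, here is a minimal Python sketch of this kind of evaluation. Everything in it is made up for illustration: the score distributions, the threshold-picking rule, and the per-use-case FMR targets are assumptions, not our production values or NIST's. What it shows is the shape of the analysis: report a pair of error rates, False Match Rate (FMR) and False Non-Match Rate (FNMR), instead of a single accuracy number, and choose a different operating threshold for each deployment.

```python
import numpy as np

# Illustrative stand-in scores, NOT real benchmark data: similarity values a
# face matcher might emit for genuine (same person) and impostor
# (different person) pairs.
rng = np.random.default_rng(42)
genuine = rng.normal(0.75, 0.10, 5_000)
impostor = rng.normal(0.35, 0.10, 50_000)

def fmr(threshold: float) -> float:
    """False Match Rate: fraction of impostor pairs wrongly accepted."""
    return float(np.mean(impostor >= threshold))

def fnmr(threshold: float) -> float:
    """False Non-Match Rate: fraction of genuine pairs wrongly rejected."""
    return float(np.mean(genuine < threshold))

def threshold_for_target_fmr(target: float) -> float:
    """Pick the score threshold that approximately hits a target FMR."""
    return float(np.quantile(impostor, 1.0 - target))

# Hypothetical FMR targets: every deployment tolerates a different trade-off.
use_cases = {
    "access control": 1e-4,  # a false accept opens a door, so be strict
    "banking app": 1e-5,     # fraud risk, stricter still
    "transit system": 1e-3,  # throughput matters, tolerate more false accepts
}

for name, target in use_cases.items():
    t = threshold_for_target_fmr(target)
    print(f"{name:>15}: threshold={t:.3f}  FMR={fmr(t):.6f}  FNMR={fnmr(t):.4f}")
```

Optimizing any one of these numbers in isolation is exactly the Goodhart trap; the small table this prints makes the trade-off between false accepts and false rejects explicit instead of hiding it behind a single score.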

Final Thought: Metrics Aren’t Bad—But They Can Backfire

Goodhart’s Law is a powerful reminder: if your AI is chasing numbers, it might stop solving real problems.

To make AI work in real-world applications, we need to build and evaluate systems that align with outcomes, not just indicators.

Curious how to deploy top facial recognition software that performs outside the lab? Let’s talk—drop a message, and we’ll explore how to design the right algorithm setup for your project.