As India prepares to host the India AI Impact Summit on February 19–20, 2026, the world is looking to New Delhi for leadership in responsible AI.
India has already demonstrated that technology at population scale can earn deep public trust — UPI, Aadhaar and DigiLocker stand as global benchmarks. The next frontier is ensuring Artificial Intelligence evolves with built-in safety, accountability and meaningful human oversight.
AI now shapes critical decisions in banking, healthcare, employment, welfare delivery, policing and public communication. These applications bring immense opportunity, but also real risk.
A flawed or biased AI decision can deny a loan, misdiagnose an illness, unfairly exclude a candidate, or invisibly influence public opinion. To harness AI responsibly, one simple principle must hold: when AI acts, a human must still answer.
What is an AI Handler
An AI Handler is a designated, trained and empowered authority within every organisation that deploys high-impact AI systems. The handler supervises AI outputs, ensures decisions remain under human command, and serves as the visible point of accountability for oversight, intervention and redress.
Why this matters
- Today, responsibility is dangerously diffused. Developers point to deployers, deployers blame vendors, and regulators struggle with enforcement gaps.
- Citizens affected by AI decisions often have no clear channel for correction or remedy.
- An AI Handler eliminates this ambiguity by making accountability named, traceable and enforceable.
Where India should start
Prioritise high-impact sectors where AI directly affects rights, opportunities or safety:
- Banking and credit scoring models
- Healthcare diagnostics and treatment recommendation algorithms
- Examination, admission and hiring platforms
- Policing and surveillance technologies
- Digital platforms that curate information and shape opinion
In these areas, trained handlers should have defined reporting lines, access to system logs, and authority to pause or override AI outputs when needed.
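To make that authority concrete, here is a minimal illustrative sketch, in Python, of how a pause-and-override gate with a named handler and an audit trail might sit in front of a model's outputs. Every name in it (AIDecision, DecisionGate, the 0.85 review threshold) is a hypothetical assumption for illustration, not a prescribed implementation.

```python
# Illustrative sketch only: AIDecision, DecisionGate and the 0.85 threshold
# are hypothetical, not drawn from any regulation or standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecision:
    subject_id: str      # the person the decision affects, e.g. a loan applicant
    outcome: str         # the model's proposed outcome, e.g. "deny"
    confidence: float    # model confidence in [0, 1]
    rationale: str       # model-supplied explanation, kept for the audit trail

@dataclass
class DecisionGate:
    """Routes every AI output through a named, accountable human handler."""
    handler_name: str                  # the designated AI Handler
    review_threshold: float = 0.85    # below this, a human must decide
    paused: bool = False              # the handler can halt the system
    audit_log: list = field(default_factory=list)

    def pause(self, reason: str) -> None:
        """Handler authority to stop automated decisions entirely."""
        self.paused = True
        self._log("SYSTEM_PAUSED", reason)

    def decide(self, decision: AIDecision) -> str:
        """Apply the gate: auto-approve, escalate, or hold."""
        if self.paused:
            self._log("HELD", f"system paused; {decision.subject_id} queued")
            return "held_for_review"
        if decision.confidence < self.review_threshold:
            # Low-confidence outputs never execute automatically.
            self._log("ESCALATED", f"{decision.subject_id}: {decision.rationale}")
            return "escalated_to_handler"
        self._log("AUTO_APPROVED", f"{decision.subject_id} -> {decision.outcome}")
        return decision.outcome

    def override(self, decision: AIDecision, new_outcome: str, reason: str) -> str:
        """Every override is attributed to the named handler."""
        self._log("OVERRIDDEN",
                  f"{decision.subject_id}: {decision.outcome} -> {new_outcome} ({reason})")
        return new_outcome

    def _log(self, event: str, detail: str) -> None:
        self.audit_log.append((datetime.now(timezone.utc).isoformat(),
                               self.handler_name, event, detail))

# Example: a credit-scoring deployment with a named handler.
gate = DecisionGate(handler_name="A. Sharma, AI Handler, Retail Credit")
loan = AIDecision("APP-1042", "deny", confidence=0.62, rationale="thin credit file")
print(gate.decide(loan))                                  # escalated_to_handler
print(gate.override(loan, "manual_review", "insufficient data to deny"))
```

The point of the sketch is the audit line: every automated, escalated or overridden decision carries the handler's name, which is exactly what makes accountability traceable rather than diffuse.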
Building on global precedents
India’s approach aligns with — and advances — established international models:
- The EU AI Act, most of whose high-risk obligations apply from August 2026, mandates human oversight for high-risk AI systems and requires deployers to assign oversight to personnel with the competence, training and authority to monitor, intervene and override decisions.
- Canada’s Directive on Automated Decision-Making (updated 2025) requires proportional human involvement in government AI, with clear accountability points and mandatory review mechanisms for high-impact systems.
- Singapore’s Model AI Governance Framework encourages organisations to define appropriate levels of human involvement based on risk, using structured matrices to decide when humans must review or override AI.
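As an illustration of what such a structured matrix might look like in practice, the short Python sketch below encodes a severity-probability grid mapped to the framework's three levels of human involvement. The bands, threshold pairings and example use cases are assumptions made for illustration, not taken from the framework itself.

```python
# Hypothetical oversight matrix in the spirit of Singapore's framework.
# The severity/probability bands and their pairings are illustrative assumptions.
OVERSIGHT_MATRIX = {
    # (severity of harm, probability of harm) -> level of human involvement
    ("low",  "low"):  "human-out-of-the-loop",   # AI may act autonomously
    ("low",  "high"): "human-over-the-loop",     # human monitors and can intervene
    ("high", "low"):  "human-over-the-loop",
    ("high", "high"): "human-in-the-loop",       # a human must approve each decision
}

def required_oversight(severity: str, probability: str) -> str:
    """Look up the minimum level of human involvement for a use case."""
    return OVERSIGHT_MATRIX[(severity, probability)]

# A credit denial: severe harm that is likely to affect the applicant.
print(required_oversight("high", "high"))   # human-in-the-loop
# A song recommendation: low-severity, low-probability harm.
print(required_oversight("low", "low"))     # human-out-of-the-loop
```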
India’s AI Handler concept takes these proven ideas one step further: it formalises a single, named role that citizens and regulators can hold directly accountable, without requiring sweeping new legislation.
This fits seamlessly with India’s own AI Governance Guidelines, issued by MeitY in 2025, which emphasise innovation alongside trust, inclusivity and human-centric design. It also complements global standards such as the OECD AI Principles and the NIST AI Risk Management Framework.
Conclusion
AI is no longer just software. It is a powerful decision-making force with profound real-world consequences. For India to protect its citizens, maintain public credibility and assert leadership on the global stage, AI must remain under accountable human command.
An AI Handler is not a brake on innovation — it is the steering wheel that keeps progress on the right path.
AI needs a handler.
The world needs a model that works.
India is ready to provide it.
(The author is a former National Cyber Security Coordinator and former Signal Officer-in-Chief, Indian Army; views expressed are personal)