
Tags: AI Judge | AI Police | Human Rights and AI | Legal system design | Privacy | Surveillance society
What if it wasn’t humans judging you, but AI?
This was once a question only found in the world of science fiction. However, AI is now steadily making inroads into the realms of justice and law enforcement, in the form of surveillance camera analysis and the digitalization of courts. This article provides an overview of how far AI police and AI judges have progressed in reality.
The moment you leave a convenience store late at night, a camera at an intersection automatically detects you crossing a red light. A loud warning sounds from a street speaker, a violation ticket is issued electronically on the spot, and the fine is automatically deducted from your bank account a few days later.
Security cameras in front of train stations match passersby’s faces with wanted posters, and if a match is found, a human police officer is immediately notified.
In court, AI analyzes massive amounts of video footage and data and automatically organizes evidence lists. In divorce proceedings, it calculates the level of compensation based on past case data, and in criminal cases, it provides a guideline for sentencing by referencing similar cases. Finally, the AI reads out the reasons for its own verdict and pronounces a guilty or not guilty verdict.
This is a science fiction-like thought experiment, but it is by no means absurd and has the potential to become a reality with technological advances.
In fact, AI technology is already operating in real judicial and policing systems in various parts of the world. Some countries have moved beyond mere experimentation and consideration to full-scale operation.
China | The construction of “smart courts” is underway in courts across the country, and AI is being put to practical use in document preparation, sentencing support, etc. Furthermore, in the police sector, surveillance systems that combine street cameras with facial recognition AI are being widely deployed, primarily in Beijing and Shenzhen. |
Estonia | In 2019, the idea of a “robot judge” was reported, and although the Ministry of Justice officially denied it, the introduction of AI in small claims disputes continues to be considered. As one of the world’s most cutting-edge “digital nations,” discussions on AI justice continue. |
USA | COMPAS, an AI that assesses the risk of recidivism, has been introduced in criminal trials. Although it has been criticized for racial bias, it has actually been used as reference material in sentencing decisions. Regulation and review are currently being carried out at the state level. |
(See Chapter 4 for details on each country.)
Changes are also underway in Japan. Under the revised Civil Procedure Act, the use of IT in civil litigation is scheduled to be fully implemented by May 2026 at the latest, based on phased implementation and government ordinance designation. At a press conference ahead of Constitution Memorial Day on May 3, 2025, Chief Justice of the Supreme Court Yukihiko Imasaki mentioned, in general terms, that “we cannot deny the possibility that AI will be involved in judicial decisions.”
Even in the police sector, the introduction of systems for analyzing security camera footage and automatically detecting traffic violations is being considered. One example is the recent demonstration experiment by the National Police Agency on facial recognition technology.
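To make the mechanics of such systems a little more concrete, the sketch below shows, in deliberately simplified form, how a camera-based watchlist check of the kind described above might work: a face embedding is compared against a wanted list by cosine similarity, and anything above a threshold is merely flagged for a human officer rather than acted on automatically. The function names, threshold value, and data structures are hypothetical illustrations, not a description of any deployed system.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # hypothetical value; a real system would tune this to limit false matches


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_against_watchlist(face_embedding: np.ndarray,
                            watchlist: dict[str, np.ndarray]) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity exceeds the threshold.

    The function only *flags* candidates; the design assumption (consistent with the
    'human involvement' principle discussed later) is that a human officer reviews
    every flag before any action is taken.
    """
    flagged = []
    for person_id, reference in watchlist.items():
        score = cosine_similarity(face_embedding, reference)
        if score >= SIMILARITY_THRESHOLD:
            flagged.append((person_id, score))
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)


# Usage with synthetic embeddings (stand-ins for the output of a face-recognition model)
rng = np.random.default_rng(0)
watchlist = {"case-001": rng.normal(size=128), "case-002": rng.normal(size=128)}
observed = watchlist["case-001"] + rng.normal(scale=0.05, size=128)  # noisy re-observation
print(check_against_watchlist(observed, watchlist))  # -> flags "case-001" for human review
```

Even in a toy example like this, the key design questions are not technical: who sets the threshold, how often false flags occur, and what a flagged person can do about it.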
The introduction of AI into the judicial and police fields is inevitable. While its use will primarily focus on support for the time being, it may gradually move toward automated processing and, in the future, toward partially automated adjudication.
People already use AI daily and experience its convenience. If the public comes to believe that "AI police are more trustworthy" or "AI judges are fairer," society may choose AI. Of course, uncritical trust is dangerous. We must also be prepared for a decline in human judgment caused by over-reliance on AI, as well as for security risks such as hacking.
In movies and novels, an AI-driven society is often portrayed as a dystopia. However, the introduction of AI does not necessarily move in that direction. Rather, it has the potential to contribute to a fairer and more efficient society. This article examines this crossroads and explores how institutional design can maximize the benefits while minimizing the risks.
This article distinguishes between the following levels of AI involvement:
AI support | AI organizes information and makes suggestions, but the final decision is made by a human. |
Automated processing | AI handles the initial processing; if an objection is filed, a human reviews the case. |
Automated adjudication | AI makes the final legal decision (a future possibility). |
Currently, most practical applications fall under AI support. Automated processing is still experimental and limited to narrow areas, though it is expected to expand in the near future. Automated adjudication poses many technical and legal challenges and remains a long-term topic for consideration.
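As a rough illustration of how these three levels could be encoded in a system's design, the sketch below routes every AI output according to its level: support results go to a human as suggestions, automated processing becomes final only if no objection is filed, and automated adjudication is deliberately left unimplemented. All names and rules are hypothetical, invented for this article.

```python
from dataclasses import dataclass
from enum import Enum, auto


class InvolvementLevel(Enum):
    AI_SUPPORT = auto()              # AI organizes information; a human decides
    AUTOMATED_PROCESSING = auto()    # AI decides initially; a human reviews on objection
    AUTOMATED_ADJUDICATION = auto()  # AI decides finally (future possibility only)


@dataclass
class AIOutput:
    case_id: str
    recommendation: str
    level: InvolvementLevel


def route(output: AIOutput, objection_filed: bool = False) -> str:
    """Decide who finalizes a given AI output under the three-level scheme."""
    if output.level is InvolvementLevel.AI_SUPPORT:
        return f"{output.case_id}: suggestion only -> a human makes the decision"
    if output.level is InvolvementLevel.AUTOMATED_PROCESSING:
        if objection_filed:
            return f"{output.case_id}: objection filed -> escalated to human review"
        return f"{output.case_id}: provisional AI disposition stands (appeal remains open)"
    # Automated adjudication is intentionally not implemented: a long-term, unresolved question.
    raise NotImplementedError("Automated adjudication is out of scope for now")


print(route(AIOutput("2025-misc-001", "fine 9,000 yen", InvolvementLevel.AUTOMATED_PROCESSING)))
print(route(AIOutput("2025-misc-002", "fine 9,000 yen", InvolvementLevel.AUTOMATED_PROCESSING),
            objection_filed=True))
```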
Before considering AI police, let’s review the basic rules of current law.
– Warrant principle (Article 35 of the Constitution): Residences and other locations cannot be searched without a court warrant. This restriction applies when AI-based surveillance or behavioral analysis amounts to a "compulsory measure." In the GPS investigation judgment of March 15, 2017, the Supreme Court held that the continuous and comprehensive acquisition of location information by attaching GPS devices to vehicles without the suspects' consent constitutes a compulsory measure. A similar legal principle may apply to behavioral-pattern analysis using AI surveillance.
– Proportionality principle and the limits of voluntary investigation: Court precedent holds that voluntary investigations exceeding what is necessary and reasonable are illegal. AI-based surveillance of citizens over a long period or across a wide area could therefore be deemed "excessive" and thus unlawful.
– Principles of the Personal Information Protection Act: There is an obligation to limit the purposes of use and to collect and store only the minimum data necessary. "Personal identification codes" such as facial recognition data require particularly strict handling.
If an AI police system becomes a reality, it is expected to handle functions such as video analysis, facial-recognition matching against wanted lists, automated detection of traffic violations, and crime-data analysis.
While technology advances rapidly, legal reform takes time. This creates the risk of technology being introduced piecemeal, with legislation lagging behind. Furthermore, if the basis for an AI decision cannot be explained, this poses a critical problem for due process.
While AI police systems have great advantages, they inevitably face constitutional restrictions and the risk of violating privacy. The following three points are particularly essential for their introduction:
Human involvement | Important decisions must be reviewed by a human. |
Transparency | Error rates and decision criteria must be made public and explained to citizens. |
Appeals system | A system must be in place that allows citizens to easily file objections. |
→ For the time being, “AI support” will be the norm, but this may expand to “automated processing” provided that the system is designed and audited properly.
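These three safeguards can be made concrete in system design. The sketch below is one hypothetical way to wire them in: every automated disposition writes to an audit log (transparency), dispositions above a defined severity are held for human sign-off (human involvement), and an appeal call reopens any case for human review (appeals system). The severity scale, threshold, and field names are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

HUMAN_REVIEW_SEVERITY = 3  # hypothetical: dispositions at or above this severity need human sign-off


@dataclass
class Disposition:
    case_id: str
    action: str
    severity: int               # e.g. 1 = warning ... 5 = arrest-level
    reasons: list[str]          # machine-readable grounds, publishable for transparency
    appealed: bool = False
    log: list[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        self.log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")


def process(d: Disposition) -> Disposition:
    d.record(f"AI proposed: {d.action} (severity {d.severity})")
    if d.severity >= HUMAN_REVIEW_SEVERITY:
        d.record("held for mandatory human review")            # human involvement
    else:
        d.record("provisional automated disposition issued")
    return d


def appeal(d: Disposition) -> Disposition:
    d.appealed = True
    d.record("citizen objection filed -> case reopened for human review")  # appeals system
    return d


d = process(Disposition("T-0417", "fine for red-light violation", severity=1,
                        reasons=["camera 12 detected crossing 0.8 s after signal change"]))
appeal(d)
print("\n".join(d.log))  # the audit trail itself supports transparency and later audits
```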
The introduction of an AI judge system could bring about major changes to the judicial system.
The possibility of introducing AI judges differs significantly between civil and criminal cases.
・Relationship with Article 32 of the Constitution (right to a trial): All citizens have the right of access to the courts. Even if AI judges are introduced, it is therefore essential to preserve the option of a trial before a human judge.
・Qualification as a bearer of judicial power (Article 76 of the Constitution): Judicial power resides in the courts, and judges perform their duties independently, in accordance with their conscience. Entrusting judicial power to an AI that has no conscience may be inconsistent with this constitutional framework. However, if the parties consent in advance to an "AI judgment," there is room to argue for a degree of constitutionality.
・Principle of open trials (Article 82 of the Constitution): Trials must be held in open court. Because AI's internal processes are not visible, explaining the reasons for its decisions to citizens presents a challenge.
・Entrenchment and rigidity of precedent: Because AI learns from past precedents, it is prone to reproducing outdated values and risks being unable to adapt flexibly to social change.
Clarifying responsibility for miscarriage of justice
Can an AI judgment be appealed? Will the appeal always be handled by a human? To what extent should an AI judgment be respected? These questions are inseparable from the question of who bears responsibility, and a system must be designed to answer them.
Current AI technology is limited to assisting in routine cases with few contentious issues. Advanced judgments, such as interpreting legal provisions, assessing the credibility of evidence, and weighing social values, still depend on humans. However, depending on technological advances and social consensus, it cannot be ruled out that partially automated adjudication will become a reality.
The role of AI judges | More efficient evidence analysis, support for commercial disputes, greater consistency in sentencing, and expanded handling of minor cases. |
Legal issues | Consistency with the Constitution and the rigidity of precedent-based judgment. |
Practical issues | Allocating responsibility for miscarriages of justice (civil, criminal, and AI-related) and designing an appeals system. |
→ For the time being, the focus will be on “support functions,” but with technological advances and social consensus, “partially automated adjudication” may be introduced in the future for minor cases and specialized fields.
AI has a “black box” problem. In many cases, humans cannot understand why a decision was made. This is particularly serious in the judicial and police fields, where the parties involved need reasons that can be challenged or appealed.
To use AI in the legal field, at least three conditions must be met: explainability, fairness, and consistency with the Constitution.
For example, if AI determines that a person poses a high risk of flight when deciding on bail, it must be able to present the basis for that determination in a form that the defendant and counsel can examine and contest.
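To illustrate what an explainable basis could look like in this bail example, here is a deliberately simple, hypothetical scoring sketch: a linear model whose per-factor contributions are listed alongside the total, so the defence can see and contest exactly which factors drove the assessment. The factors and weights are invented for illustration and are not drawn from any real risk-assessment tool.

```python
# Hypothetical, interpretable flight-risk score: each factor's contribution is visible,
# so the resulting assessment can be explained and individually contested.
WEIGHTS = {                      # invented weights for illustration only
    "prior_failures_to_appear": 0.40,
    "no_fixed_address": 0.30,
    "unemployed": 0.15,
    "ties_abroad": 0.15,
}


def flight_risk(factors: dict[str, bool]) -> tuple[float, list[str]]:
    """Return a 0-1 risk score plus a human-readable breakdown of every contribution."""
    score = 0.0
    explanation = []
    for name, weight in WEIGHTS.items():
        present = factors.get(name, False)
        contribution = weight if present else 0.0
        score += contribution
        explanation.append(f"{name}: {'yes' if present else 'no'} -> +{contribution:.2f}")
    return score, explanation


score, why = flight_risk({"prior_failures_to_appear": True, "ties_abroad": True})
print(f"risk score: {score:.2f}")   # 0.55
for line in why:
    print(" ", line)
# Because every contribution is listed, a defendant can challenge a specific factor
# (e.g. "the 'ties abroad' flag is wrong") rather than an opaque overall score.
```

Real systems are far more complex, but the principle is the same: a decision that cannot be broken down into contestable reasons cannot satisfy due process.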
AI learns from past data, but that data itself contains discrimination and prejudice.
Japan does not have a comprehensive anti-discrimination law, making it difficult to address discriminatory treatment caused by AI. While there are specific laws such as the Act on the Elimination of Discrimination against Persons with Disabilities, there are no provisions that assume the use of AI. In this respect, Japan’s systems are weaker than those of Europe and the United States.
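The bias described above is at least measurable. The sketch below shows one common fairness check, with made-up data: comparing the false-positive rates of a risk classifier across demographic groups, which is essentially the kind of disparity at the center of the COMPAS controversy mentioned elsewhere in this article. The records and groups here are synthetic illustrations only.

```python
# Minimal fairness audit: compare false-positive rates across groups on synthetic data.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  True),  ("A", False, False), ("A", True,  False),
    ("B", False, False), ("B", True,  True),  ("B", False, False), ("B", False, False),
]


def false_positive_rate(rows, group):
    """FPR = share flagged high-risk among those in the group who did NOT reoffend."""
    negatives = [r for r in rows if r[0] == group and not r[2]]
    if not negatives:
        return float("nan")
    false_positives = [r for r in negatives if r[1]]
    return len(false_positives) / len(negatives)


for g in ("A", "B"):
    print(f"group {g}: false-positive rate = {false_positive_rate(records, g):.2f}")
# group A: 0.67, group B: 0.00 -> a gap like this is exactly the kind of disparity
# that a bias audit and correction process would need to detect and address.
```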
China | In the judicial field, the “Smart Court” has put AI to practical use in sentencing support. In the police field, Beijing and Shenzhen are currently operating surveillance systems that combine street cameras with facial recognition AI. Integration with the “social credit system” is also progressing, but there is strong international criticism of excessive surveillance. |
EU | The EU AI Act was enacted in 2024. It classifies the use of AI in the police and judicial fields as "high risk" and imposes strict obligations from 2026 onward. Real-time facial recognition in public spaces is generally prohibited (with exceptions for serious criminal investigations), and predictive policing is subject to transparency requirements and human rights impact assessments. |
USA | Following the racial bias issue surrounding COMPAS, an AI for assessing recidivism risk, AI regulations are underway at the state level. There are no comprehensive regulations at the federal level yet. |
Japan | Guidelines for the use of AI are currently being formulated. Specific regulations for the judicial and police fields have not yet been established, and there is no comprehensive anti-discrimination law, making it difficult to address discriminatory treatment caused by AI. |
Explainability | A system that is readable, reproducible, and reusable is essential. |
Fairness | A system for auditing and correcting bias in data and design is essential. |
Constitutional consistency | Democratic control appropriate to the police and the judiciary must be designed while guaranteeing the right to a trial. |
→ The prerequisite for introducing AI is to clarify not only the technical aspects but also the institutional and constitutional aspects.
There are many challenges to introducing AI police and AI judges. However, given technological advances and social needs, it is not realistic to reject them entirely. Introduction will proceed in stages, and full automation will eventually come into view in some areas. This chapter outlines realistic scenarios for moving forward while minimizing the risks.
① Short-term (3-5 years): Use as a supplementary tool
Police field
– Searching for specific individuals and vehicles via video analysis, and detecting suspicious behavior (final decision made by humans)
– Automatic detection of traffic violations (AI organizes the evidence; humans make disposition decisions)
– Proposing efficient patrols based on crime data analysis
Judicial field
– Automated case-law search and issue organization (improving research efficiency)
– Drafting damage calculations and checking standard contracts
– Presenting multiple settlement proposals in mediation
System development
– Quality standards and a certification system for AI systems
– Recording and auditing of AI-assisted work
– Ensuring that final decisions remain with humans

② Medium-term (5-10 years): Semi-automation in limited areas
Police field
– Automated processing of minor traffic violations (parking violations, slight speeding), with human review if an objection is filed
– Automation of administrative procedures with clear requirements, such as driver's license renewals and other licensing and permit procedures
Judicial field
– AI-based rulings for small-sum disputes (e.g., under 1 million yen) with the parties' agreement (right of appeal guaranteed)
– Family mediation with clear standards, such as child-support calculations and property division
– Introduction of an "AI mediation" system
System development
– Enactment of a special law on semi-automated processing
– Appeals against semi-automated criminal processing handled by humans within 48 hours
– Establishment of systems for regular AI audits and for compensation

③ Long-term (10-30 years): Partially automated adjudication in specialized fields
Police field
– Automated warnings and enhanced surveillance based on improved crime-prediction accuracy
– Advanced investigative support through analysis of organized crime and financial flows
Judicial field
– Automated adjudication in specialized fields that can be formalized, such as intellectual property litigation and tax litigation
– AI proposing criminal sentences based on uniform national standards, with a judge making the final decision
Prerequisites for realization
– Reinterpretation or amendment of the Constitution
– Dramatic improvement in AI explainability
– Building trust throughout society
– Dramatic improvement in cybersecurity (preventing attacks on and tampering with AI systems)
– Improvement of public digital literacy (using AI with an understanding of its limitations)
– International harmonization of systems (adjustments at the treaty and agreement level, e.g., whether AI judgments can be enforced overseas)
Short-term | Introduce support functions as auxiliary tools |
Medium-term | Advance semi-automation in limited fields and improve legal systems |
Long-term | Introduce partially automated adjudication in specialized fields (subject to constitutional and social consensus) |
→ It is essential to guarantee a "final human review" and an "appeals system" at every stage. This will enable us to enjoy the benefits of the technology while protecting human rights and democratic values.
Over the past five chapters, we have examined the possibilities and challenges of AI police and AI judges. With technological advances, the future once depicted in science fiction is steadily approaching reality.
It is not realistic to eliminate AI entirely from the judicial and police fields. As long as there are pressing needs to address personnel shortages, improve work efficiency, and ensure consistency of judgments, the trend toward the use of AI will likely be unstoppable.
However, the judiciary and police are the foundation of society, protecting people’s lives, freedom, and property. Sacrificing justice and fairness for the sake of efficiency is unacceptable.
The exercise of power by AI goes to the very foundations of democracy.
At the beginning of this article, I imagined a scenario in which it was AI, not a human, that detected your traffic violation. I would like to close by asking once more:
“Would you want to be judged by AI?”
Some people may be okay with it as long as it’s fair and swift, while others would prefer to be judged by a human. While many people currently opt for the latter, it’s important that we maintain this choice. We must avoid a situation where we unknowingly lose our options.
AI technology will certainly change society. However, the direction it takes will not be determined by engineers or companies, but by the decisions of each and every citizen. It is precisely because justice and public safety are fields that are so fundamental to society that we must carefully, yet positively, consider how we should approach AI.