“Are AI police and AI judges on the way?” – The turning point between a surveillance society and a fair society

9 October 2025

AI Judge | AI Police | Human Rights and AI | Legal system design | Privacy | Surveillance society

I. The day science fiction becomes reality

What if it wasn’t humans judging you, but AI?

This was once a question only found in the world of science fiction. However, AI is now steadily making inroads into the realms of justice and law enforcement, in the form of surveillance camera analysis and the digitalization of courts. This article provides an overview of how far AI police and AI judges have progressed in reality.

(i)Imagine this.

The moment you leave a convenience store late at night, a camera at an intersection automatically detects you crossing a red light. A loud warning sounds from a street speaker, a violation ticket is issued electronically on the spot, and the fine is automatically deducted from your bank account a few days later.
Security cameras in front of train stations match passersby’s faces with wanted posters, and if a match is found, a human police officer is immediately notified.
In court, AI analyzes massive amounts of video footage and data and automatically organizes evidence lists. In divorce proceedings, it calculates the level of compensation based on past case data, and in criminal cases, it provides a guideline for sentencing by referencing similar cases. Finally, the AI reads out the reasons for its own verdict and pronounces a guilty or not guilty verdict.
This is a science fiction-like thought experiment, but it is by no means absurd and has the potential to become a reality with technological advances.

(ii)It has already begun around the world

In fact, the introduction of AI technology into the judicial and police fields is already operating as a real system in various parts of the world. Some countries are moving beyond mere experimentation and consideration to full-scale operation.

China: The construction of “smart courts” is underway in courts across the country, and AI is being put to practical use in document preparation, sentencing support, and more. In the police sector, surveillance systems that combine street cameras with facial recognition AI are being widely deployed, primarily in Beijing and Shenzhen.
Estonia: In 2019, the idea of a “robot judge” was reported; although the Ministry of Justice officially denied it, the introduction of AI for small claims disputes continues to be considered. As one of the world’s most cutting-edge “digital nations,” Estonia continues to debate AI justice.
America: COMPAS, an AI that assesses the risk of recidivism, has been introduced in criminal trials. Although criticized for racial bias, it has actually been used as reference material in sentencing decisions. Regulation and review are now being carried out state by state.

(See Chapter 4 for details on each country.)

(iii)Institutionalization in Japan

Changes are also underway in Japan. Under the revised Civil Procedure Act, the use of IT in civil litigation is scheduled to be fully implemented by May 2026 at the latest, based on phased implementation and government ordinance designation. At a press conference ahead of Constitution Memorial Day on May 3, 2025, Chief Justice of the Supreme Court Yukihiko Imasaki mentioned, in general terms, that “we cannot deny the possibility that AI will be involved in judicial decisions.”
Even in the police sector, the introduction of systems for analyzing security camera footage and automatically detecting traffic violations is being considered. One example is the recent demonstration experiment by the National Police Agency on facial recognition technology.

(iv)This article’s position: A realistic introduction path

The introduction of AI into the judicial and police fields is inevitable. While its use will primarily focus on support for the time being, it may gradually move toward automated processing and, in the future, toward partially automated adjudication.
People already use AI daily and experience its convenience. If the public comes to believe that “AI police are more trustworthy” or “AI judges are fairer,” society may choose AI. Of course, uncritical trust is dangerous. We must also be prepared for the decline in human judgment due to AI dependency and security risks such as hacking.
In movies and novels, an AI-driven society is often portrayed as a dystopia. However, the introduction of AI does not necessarily move in that direction; it also has the potential to contribute to a fairer and more efficient society. This article examines this crossroads and explores a path for maximizing benefits while minimizing risks through institutional design.

(v)Terminology: To avoid confusion

This article distinguishes between the following levels of AI involvement:

AI support: AI organizes information and makes suggestions, but the final decision is made by a human.
Automated processing: AI handles the initial processing; if an objection is filed, a human reviews it.
Automated adjudication: AI makes the final legal decision (a future possibility).

Currently, most practical applications are AI-assisted. Automated processing is still in the experimental stage in limited areas and is expected to expand soon. Automated adjudication poses many technical and legal challenges and is a long-term topic for consideration.

II. Potential and Legal Challenges of AI Police Systems

(i)Basic rules governing police activities

Before considering AI police, let’s review the basic rules of current law.

– Warrant Principle (Article 35 of the Constitution)
Residences and other locations cannot be searched without a court warrant. This restriction applies when AI-based surveillance or behavioral analysis constitutes a “compulsory measure.” In the GPS Investigation Case of March 15, 2017, the Supreme Court ruled that “continuous and comprehensive acquisition of location information constitutes a compulsory measure,” in a case where GPS devices had been attached to suspects’ vehicles without their consent. A similar legal principle may apply to behavioral pattern analysis using AI surveillance.
– Proportionality Principle and Limitations of Voluntary Investigation
Court precedent has determined that “voluntary investigations that exceed necessity or reasonableness are illegal.” If AI-based surveillance of citizens over a long period of time or over a wide area is deemed “excessive,” it may be illegal.
– Principles of the Personal Information Protection Act
There is an obligation to limit purposes and collect and store only the minimum amount of data necessary. “Personal identification codes” such as facial recognition data require particularly strict handling.

(ii)What can be entrusted to AI: Police officers who work 24 hours a day

If an AI police system becomes a reality, it is expected to have the following functions:

  1. Constant monitoring of security cameras:
    Thousands of cameras are monitored simultaneously, instantly detecting suspicious activity on a scale that is impossible for humans to achieve.
  2. Patrol deployment based on crime predictions:
    Using past data, officers are deployed efficiently on the basis of predictions such as “theft is likely to occur near a certain station at around 3 p.m.” This has been attempted in the United States and elsewhere, but some programs have been discontinued due to concerns about discrimination.
  3. Automatic detection of wanted criminals:
    Scan faces at airports and train stations and compare them with a database, leading to immediate detection.

(iii)Benefits and Effects

  • Compensating for labor shortages: AI can replace late-night and holiday monitoring
  • Fewer oversights: A huge number of camera images can be processed simultaneously
  • Consistency of judgment: not affected by emotions or fatigue

(iv)Legal and Practical Issues

  • Privacy and Surveillance:
    AI surveillance seriously conflicts with the right to privacy, which has been recognized in precedent based on Article 13 of the Constitution. The Supreme Court ruling in the Kyoto Prefectural Student Union case (1969) recognized the freedom to not have one’s appearance photographed without permission, and the N System ruling (Tokyo District Court, 2001) indicated that there are certain restrictions on the indiscriminate collection of vehicle license plates.
  • Responsibility for false arrests:
    If an AI misidentifies an innocent person, the state will of course be held liable for compensation, but the responsibility of the developer and operator is unclear. Research has shown that the accuracy of facial recognition decreases particularly when people wear masks, with the false recognition rate exceeding 10%. This could be a serious issue in Japan’s social environment.
  • Reproduction of Prejudice:
    AI that has learned from past crime data tends to judge certain areas and attributes as “in need of attention” too readily. In the United States, there have been cases where Black people were labeled as high risk. This could easily happen in Japan as well.

(v)Speed gap between technology and law

While technology advances rapidly, legal reform takes time. This creates the risk of a gradual introduction of technology, with legislation following suit. Furthermore, if the basis for AI decisions cannot be explained, this poses a fatal problem in terms of due process.

(vi)Summary of this chapter

While AI police systems have great advantages, they inevitably face constitutional restrictions and the risk of violating privacy. The following three points are particularly essential for their introduction:

Human involvement: Important decisions must be reviewed by a human.
Transparency: Error rates and decision criteria must be made public and explained to the public.
Appeals system: A system must be established that allows citizens to easily file complaints.

→ For the time being, “AI support” will be the norm, but this may expand to “automated processing” provided that the system is designed and audited properly.

III. The Concept and Reality of an AI-Based Court System

(i)Expected role: Efficiency and consistency of the judiciary

The introduction of an AI judge system could bring about major changes to the judicial system.

  • Analysis of huge amounts of digital evidence:
    AI can quickly organize evidence that would take a human several months to sort through, such as social media, emails, surveillance camera footage, and cloud-based transaction data.
  • Support for complex commercial disputes:
    Cross-sectional comparison of contract clauses and accounting data to identify issues and risks. Particularly effective in large-scale commercial cases and intellectual property litigation.
  • Ensuring consistency in sentencing:
    This will reduce regional and judge-to-judge differences in similar cases and provide a uniform standard.
  • Semi-automated processing of minor cases:
    Currently, there are systems for expediting the processing of traffic violation fines and small claims lawsuits. In the future, AI may be involved in minor thefts, low-controversy drug cases, and certain civil lawsuits, leading to more efficient allocation of judicial resources. However, the biggest point of contention is how far we should allow this expansion.

(ii)Different possibilities for civil and criminal cases

The possibility of introducing AI judges differs significantly between civil and criminal cases.

  • Civil Cases
    The principle is that the parties to a civil case have the right to dispose of the case, and if the system allows the parties to agree and choose AI judgments, it can be introduced relatively flexibly.
  • Criminal Cases
    Since criminal procedure centers on protecting the rights of defendants and involves coercive state power, the introduction of AI requires even greater caution. For the time being, the limit for AI is providing reference information for sentencing.
  • Comparison with Arbitration:
    Under the Arbitration Act, arbitrators can be freely appointed by agreement between the parties. Therefore, “AI arbitrators” are less likely to be subject to constitutional restrictions and may be introduced first in commercial disputes.

(iii)Legal issues: Issues that concern the very foundations of judicial power

・Relationship with Article 32 of the Constitution (Right to Trial)
All citizens have the right to trial. Therefore, even if AI judges are introduced, it is essential to ensure that there is an option for human trial.
・Qualification as a Bearer of Judicial Power (Article 76 of the Constitution)
Judicial power resides in the courts, and judges are to perform their duties “in accordance with their conscience and independently.” Entrusting judicial power to an AI that has no conscience may be inconsistent with the constitutional system. However, if the parties consent in advance to an “AI judgment,” there is room for ensuring a certain degree of constitutionality.
・Principle of Open Trials (Article 82 of the Constitution)
Trials must be held in open court. Since AI’s internal processes are invisible, explaining the reasons for decisions to citizens presents a challenge.
・Strengthening and Rigidifying Precedentism
Because AI learns from past precedents, it is prone to reproducing outdated values. There is a risk that it will be unable to adapt flexibly to social change.

(iv)Practical Issues: Liability and Appeals

Clarifying responsibility for miscarriage of justice

  • Civil Cases
    Currently, even an erroneous civil judgment can generally be corrected through appeal to a higher court; state compensation is not usually granted. Case law also holds that “the correctness of a judge’s decision should be guaranteed by the appeal system, and it is not subject to state compensation.”
  • Criminal Cases
    In principle, criminal judgments can also be corrected through appeal. However, in cases of wrongful conviction where a not-guilty verdict is later confirmed, compensation may be granted under the State Compensation Act or the Criminal Compensation Act.
  • AI Judges
    In the case of AI judges, the basic route for correction will likely be through appeal. A simple miscarriage of justice will not immediately be deemed illegal, and state compensation will likely be available only when the design or operation of an AI system lacks human oversight, or when its operation lacks transparency and explainability and is deemed to be “grossly illegal.”

(v)Design of the appeal system

Can an AI judgment be appealed? Will the appeal always be handled by a human? To what extent should the AI judgment be respected? These issues are inseparable from the locus of responsibility, and designing a system for them is essential.

(vi)Hurdles to implementation

Current AI technology is limited to assisting in routine cases with few contentious issues. Advanced judgments, such as interpreting legal provisions, assessing the credibility of evidence, and adjusting social values, are still dependent on humans. However, depending on technological advances and social consensus, it cannot be denied that partially automated adjudication may become a reality.

(vii)Summary of this chapter

The role of AI judges: Improving the efficiency of evidence analysis, supporting commercial disputes, ensuring consistency in sentencing, and expanding coverage of minor cases.
Legal issues: Relationship with the Constitution; the rigidity of precedent-based judgments.
Practical issues: Responsibility for miscarriages of justice (civil, criminal, and AI) and design of an appeals system.

→ For the time being, the focus will be on “support functions,” but with technological advances and social consensus, “partially automated adjudication” may be introduced in the future for minor cases and specialized fields.

IV. Common Issues – Accountability and Fairness

(i)Accountability: Can you answer the question, “Tell me why?”

AI has a “black box” problem. In many cases, humans cannot understand why a decision was made. This is particularly serious in the judicial and police fields, where the parties involved need reasons that can be challenged or appealed.
To use AI in the legal field, at least the following three conditions must be met:

  1. Auditability: Logs make it possible to track which data was used and with which settings.
  2. Reproducibility: The same data and settings yield the same results.
  3. Counterfactual explanation: It can be shown how changing certain factors would change the conclusion.

(ii)Specific examples of explainability

For example, if AI determines that a person is at high risk of fleeing when bail is considered,

  1. the data used (criminal record, address, occupation, etc.) must be disclosed,
  2. the result must be recomputable under the same conditions, and
  3. it must be possible to show whether the conclusion would have differed if, say, the person had a permanent job.
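These three conditions can be made concrete with a toy model. The sketch below implements a hypothetical bail-risk score; every feature name, weight, and threshold is invented for illustration, and real systems are far more complex. It discloses its inputs (auditability), is deterministic (reproducibility), and answers the counterfactual question about employment:

```python
# Minimal sketch of a risk score satisfying the three conditions above.
# All feature names, weights, and the threshold are hypothetical.

WEIGHTS = {"prior_offenses": 0.25, "no_fixed_address": 0.15, "unemployed": 0.25}
THRESHOLD = 0.5  # scores above this are treated as "high flight risk"

def risk_score(subject: dict) -> float:
    """Deterministic scoring: same data, same settings, same result
    (reproducibility)."""
    return sum(WEIGHTS[k] for k, v in subject.items() if v)

def explain(subject: dict) -> dict:
    """Disclose the inputs used (auditability) and show whether steady
    employment would flip the conclusion (counterfactual explanation)."""
    counterfactual = dict(subject, unemployed=False)
    score = risk_score(subject)
    return {
        "inputs_used": subject,
        "score": score,
        "high_risk": score > THRESHOLD,
        "high_risk_if_employed": risk_score(counterfactual) > THRESHOLD,
    }

report = explain({"prior_offenses": True, "no_fixed_address": True, "unemployed": True})
# Here the subject is flagged as high risk, yet the counterfactual shows
# the conclusion would change if they had a permanent job.
```

The point is not the scoring formula itself but the interface: a legally usable system must expose exactly these three things to the parties and to auditors.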

(iii)Bias: Amplification of unconscious discrimination

AI learns from past data, but that data itself contains discrimination and prejudice.

  • Learned bias: AI trained on biased crime statistics tends to judge certain areas and attributes as requiring caution more often than the facts warrant.
  • Design bias: An overemphasis on safety leads to designs that sacrifice privacy and the rights of minorities.
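One standard way to detect learned bias is a disparity audit: measure how often the system wrongly flags innocent people in each group and compare the rates. A minimal sketch follows, with entirely hypothetical data, group labels, and review threshold:

```python
# Minimal bias-audit sketch: compare a system's false-positive rates
# across groups. Data, labels, and the max_gap threshold are hypothetical.

def false_positive_rate(records):
    """Share of genuinely innocent people the system flagged anyway."""
    innocent = [r for r in records if not r["offended"]]
    if not innocent:
        return 0.0
    return sum(1 for r in innocent if r["flagged"]) / len(innocent)

def audit(records, group_key="district", max_gap=0.05):
    """Group the records and flag the system for review when the gap
    between the highest and lowest false-positive rates exceeds max_gap."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    rates = {g: false_positive_rate(rs) for g, rs in groups.items()}
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "needs_review": gap > max_gap}

# Hypothetical data: innocent residents of district B are flagged
# four times as often as those of district A.
data = (
    [{"district": "A", "offended": False, "flagged": i < 1} for i in range(10)]
    + [{"district": "B", "offended": False, "flagged": i < 4} for i in range(10)]
)
result = audit(data)
```

An independent auditing body (see Chapter 5) could run exactly this kind of check periodically against the system's logs, which is one reason auditability is listed as a precondition above.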

(iv)Relationship with the Japanese legal system

Japan does not have a comprehensive anti-discrimination law, making it difficult to address discriminatory treatment caused by AI. While there are specific laws such as the Act on the Elimination of Discrimination against Persons with Disabilities, there are no provisions that assume the use of AI. In this respect, Japan’s systems are weaker than those of Europe and the United States.

(v)Examples of international initiatives

China: In the judicial field, “smart courts” have put AI to practical use in sentencing support. In the police field, Beijing and Shenzhen operate surveillance systems that combine street cameras with facial recognition AI. Integration with the “social credit system” is also progressing, but there is strong international criticism of excessive surveillance.
EU: The EU enacted the AI Act in 2024. It classifies the use of AI in the police and judicial fields as “high risk” and imposes strict obligations phased in from 2026 onward. Real-time facial recognition in public spaces is generally prohibited (with exceptions for serious criminal investigations), and predictive policing requires transparency and human-rights impact assessments.
USA: Following the racial-bias controversy surrounding COMPAS, an AI for assessing recidivism risk, AI regulation is underway at the state level. There are no comprehensive federal regulations yet.
Japan: Guidelines for the use of AI are being formulated. Specific regulations for the judicial and police fields have not yet been established, and there is no comprehensive anti-discrimination law, making it difficult to address discriminatory treatment caused by AI.

(vi)Consistency with the Constitutional Order: Ensuring Democratic Control

  • Democratic legitimacy:
    Because the police belong to the executive branch, AI can be introduced there through ordinary democratic processes backed by the consent of citizens.
    As mentioned in Chapter 3, the judiciary falls under the umbrella of “courts and judges” under the Constitution, so a fundamental issue is how AI judgments fit into this framework. If the parties agree in advance to select an AI judgment, there is room to ensure a certain degree of constitutionality.
  • Institutional design challenges:
    Who will decide the criteria for AI (engineers or political processes?)? We need a system of oversight by the Diet or parliament, a final human review system, and regular democratic review.

(vii)Summary of this chapter

Explainability: Auditability, reproducibility, and counterfactual explanation are essential.
Fairness: A system for auditing and correcting bias in data and design is essential.
Constitutional consistency: Democratic control appropriate to the police and the judiciary must be designed while guaranteeing the right to trial.

→ The prerequisite for introducing AI is to clarify not only the technical aspects but also the institutional and constitutional aspects.

V. Phased Deployment Scenarios

There are many challenges to introducing AI police and AI judges. However, given technological advances and social needs, it is not realistic to reject them completely. Introduction will proceed in stages, and eventually full automation will come into view in some areas. This chapter outlines realistic scenarios for proceeding with introduction while minimizing risks.

① Short-term (3-5 years): Use as a supplementary tool
Police field
  • Search for specific individuals and vehicles using video analysis, and detect suspicious behavior (final decision made by humans)
  • Automatic traffic violation detection (AI organizes the evidence; humans make disposition decisions)
  • Propose efficient patrols using crime data analysis
Judicial field
  • Automated case law search and issue organization (improving research efficiency)
  • Drafting damage calculations and standard contract checks
  • Presenting multiple settlement proposals in mediation
System development
  • Quality standards and certification systems for AI systems
  • Recording and auditing systems for AI assistance
  • Ensuring final human decision-making
 
② Medium-term (5-10 years): Semi-automation in limited areas
Police field
  • Automated processing of minor traffic violations such as parking violations and slight speeding (human review if an objection is filed)
  • Automation of administrative procedures with clear requirements, such as driver’s license renewals and other license and permit procedures
Judicial field
  • AI-based rulings for small-sum disputes (e.g., under 1 million yen) with party agreement (right of appeal guaranteed)
  • Introduction of “AI mediation” for family mediation with clear standards, such as child-support calculations and property division
System development
  • Enactment of a special law on semi-automated processing
  • Objections to semi-automated criminal processing handled by humans within 48 hours
  • Establishment of a system for regular AI audits and compensation
 
③ Long-term (10-30 years): Partially automated adjudication in specialized fields
Police field
  • Automated warnings and enhanced surveillance based on improved crime-prediction accuracy
  • Advanced investigative support through analysis of organized crime and financial flows
Judicial field
  • Automated adjudication in specialized fields that can be formalized, such as intellectual property litigation and tax litigation
  • AI proposes criminal sentences based on uniform national standards, with a judge making the final decision
Prerequisites for realization
  • Reinterpretation or amendment of the Constitution
  • Dramatic improvement in AI explainability
  • Building trust throughout society
  • Dramatic improvement in cybersecurity (preventing attacks on and tampering with AI systems)
  • Improvement of public digital literacy (using AI with an understanding of its limitations)
  • International harmonization of systems (adjustments at the treaty and agreement level, e.g., whether AI judgments can be enforced overseas)

(i)Commonly required system design

  • Establishment of an independent auditing organization (regularly verifying algorithms and training data)
  • Simple and fast objection system (human review)
  • Algorithm verification system (checking for discriminatory or unfair standards)
  • Regular review of the system in response to technology and social conditions (approximately every three years)
  • Data governance (ensuring transparency and protecting privacy)
  • Training lawyers, judges, and police officers who can oversee AI

(ii)Summary of this chapter

Short-term: Introduce support functions as auxiliary tools
Medium-term: Advance semi-automation in limited fields and improve legal systems
Long-term: Introduce partially automated adjudication in specialized fields (subject to constitutional and social consensus)

→ It is essential to guarantee a “final human review” and a “protest system” at every stage. This will enable us to enjoy the benefits of technology while protecting human rights and democratic values.

VI. Conclusion – Considering the Judiciary in the Age of AI

Over the past five chapters, we have examined the possibilities and challenges of AI police and AI judges. With technological advances, the future once depicted in science fiction is steadily approaching reality.

(i)The introduction of AI is inevitable. However, “fairness, transparency, and explainability” are essential.

It is not realistic to completely eliminate AI from the judicial and police fields. As long as there are urgent needs for personnel shortages, work efficiency, and uniformity of judgment, the trend toward the use of AI will likely be unstoppable.
However, the judiciary and police are the foundation of society, protecting people’s lives, freedom, and property. Sacrificing justice and fairness for the sake of efficiency is unacceptable.

  • Ensuring fairness: Continuous monitoring and correction is required to avoid disadvantages to specific groups
  • Ensuring transparency: Simply saying “AI made that decision” is not an explanation. The basis should be presented in an understandable way.
  • Achieving explainability: AI systems that cannot provide legally meaningful reasons should not be used in judicial and police fields.

(ii)A realistic approach to implementation

  • First as a tool: AI should be introduced as an advanced auxiliary tool, with ultimate responsibility always remaining with humans.
  • Expand gradually: Conduct repeated trials in limited areas, verify any problems, and carefully expand the scope
  • Institutional guarantees: It is essential to establish an appeals system, auditing body, and accountability system at each stage.

(iii)Democratic control and citizen choice

The exercise of power by AI goes to the very foundations of democracy.

  • If the public ultimately decides that “AI police are more trustworthy” or “AI judges are more fair,” then that choice should be respected. However, this must be a choice made after sufficient information and discussion. While Japan is still some way off from this point, it is possible that other countries will take the lead.
  • Will we end up with a surveillance society and a world of predictive arrests, like those depicted in George Orwell’s novel “1984” or the film “Minority Report”? Or will we move toward a society in which AI corrects human bias, reduces false accusations, and ensures swift and fair dispute resolution? The shape of the future has yet to be determined. To achieve the latter, constant effort in system design and operation is essential.
  • Guarantee of Choice:
    Even as AI procedures become more widespread, the right to choose traditional human trial should remain.
  • Continuous Review:
    Japanese society is cautious about new technologies, but once they are institutionalized, they are difficult to amend. Therefore, institutional design at an early stage is particularly important. Furthermore, the system must have a democratic process that allows it to be periodically revised in response to technological and social conditions.
  • Consideration of generational gaps:
    The digital native generation is more likely to accept AI judgments, while older people may prefer judgments by humans. Differences in perception based on generation and position must also be taken into consideration.

VII. The Final Question

At the beginning, I asked, “What if it wasn’t humans judging you, but AI?” I’d like to ask again at the end:
“Would you want to be judged by AI?”
Some people may be okay with it as long as it’s fair and swift, while others would prefer to be judged by a human. While many people currently opt for the latter, it’s important that we maintain this choice. We must avoid a situation where we unknowingly lose our options.
AI technology will certainly change society. However, the direction it takes will not be determined by engineers or companies, but by the decisions of each and every citizen. It is precisely because justice and public safety are fields that are so fundamental to society that we must carefully, yet positively, consider how we should approach AI.

References and related information

  • Personal Information Protection Commission (March 2024) “Guidelines for the Use of AI (2024 Revision)”
  • Ministry of Justice (December 2022) “Civil Procedure Law IT-ization Study Group Report”
  • European Union (2024) Artificial Intelligence Act (phased in from 2026 onward)
  • Asahi Shimbun Digital (July 24, 2025) “Grok, is this true?” – What are the hidden risks of fact-checking using generative AI?
  • NHK News Special (May 2023) “Can an AI Judge Really Announce ‘Not Guilty’?”