What if it wasn’t humans judging you, but AI?

This was once a question only found in the world of science fiction. However, AI is now steadily making inroads into the realms of justice and law enforcement, in the form of surveillance camera analysis and the digitalization of courts. This article provides an overview of how far AI police and AI judges have progressed in reality.
The moment you leave a convenience store late at night, a camera at an intersection automatically detects you crossing a red light. A loud warning sounds from a street speaker, a violation ticket is issued electronically on the spot, and the fine is automatically deducted from your bank account a few days later.
Security cameras in front of train stations match passersby’s faces with wanted posters, and if a match is found, a human police officer is immediately notified.
In court, AI analyzes massive amounts of video footage and data and automatically organizes evidence lists. In divorce proceedings, it calculates the level of compensation based on past case data, and in criminal cases, it provides a guideline for sentencing by referencing similar cases. Finally, the AI reads out the reasons for its own verdict and pronounces a guilty or not guilty verdict.
This is a science fiction-like thought experiment, but it is by no means absurd and has the potential to become a reality with technological advances.
In fact, the introduction of AI technology into the judicial and police fields is already operating as a real system in various parts of the world. Some countries are moving beyond mere experimentation and consideration to full-scale operation.
| China | The construction of “smart courts” is underway in courts across the country, and AI is being put to practical use in document preparation, sentencing support, etc. Furthermore, in the police sector, surveillance systems that combine street cameras with facial recognition AI are being widely deployed, primarily in Beijing and Shenzhen. |
| Estonia | In 2019, the idea of a “robot judge” was reported, and although the Ministry of Justice officially denied it, the introduction of AI in small claims disputes continues to be considered. As one of the world’s most cutting-edge “digital nations,” discussions on AI justice continue. |
| America | COMPAS, an AI that assesses the risk of recidivism, has been introduced in criminal trials. Although it has been criticized for racial bias, it has actually been used as reference material for sentencing decisions. Regulations and reviews are currently being carried out by each state. |
(See Chapter 4 for details on each country.)
Changes are also underway in Japan. Under the revised Civil Procedure Act, the use of IT in civil litigation is scheduled to be fully implemented by May 2026 at the latest, based on phased implementation and government ordinance designation. At a press conference ahead of Constitution Memorial Day on May 3, 2025, Chief Justice of the Supreme Court Yukihiko Imasaki mentioned, in general terms, that “we cannot deny the possibility that AI will be involved in judicial decisions.”
Even in the police sector, the introduction of systems for analyzing security camera footage and automatically detecting traffic violations is being considered. One example is the recent demonstration experiment by the National Police Agency on facial recognition technology.
The introduction of AI into the judicial and police fields is inevitable. While its use will primarily focus on support for the time being, it may gradually move toward automated processing and, in the future, toward partially automated adjudication.
People already use AI daily and experience its convenience. If the public comes to believe that “AI police are more trustworthy” or “AI judges are fairer,” society may choose AI. Of course, uncritical trust is dangerous. We must also be prepared for the decline in human judgment due to AI dependency and security risks such as hacking.
In movies and novels, an AI-driven society is often portrayed as a dystopia. However, the introduction of AI does not necessarily move in that direction. Rather, it has the potential to contribute to a fairer and more efficient society. This article examines this crossroads and, through institutional design, seeks a path that maximizes the benefits while minimizing the risks.
This article distinguishes between the following levels of AI involvement:
| AI support | AI will organize information and make suggestions, but the final decision will be made by a human. |
| Automated processing | AI will handle the initial processing, and if an objection is filed, a human will review it. |
| Automated adjudication | AI will make the final legal decision (this is a future possibility). |
Currently, most practical applications are AI-assisted. Automated processing is still in the experimental stage in limited areas and is expected to expand soon. Automated adjudication poses many technical and legal challenges and is a long-term topic for consideration.
Before considering AI police, let’s review the basic rules of current law.
– Warrant Principle (Article 35 of the Constitution): Residences and other locations cannot be searched without a court-issued warrant. This restriction applies when AI-based surveillance or behavioral analysis amounts to a "compulsory measure." In the GPS Investigation Case (Supreme Court, March 15, 2017), the Court held that attaching a GPS device to a vehicle without the user's consent and continuously, comprehensively acquiring its location information constitutes a compulsory measure. A similar principle may apply to behavioral-pattern analysis by AI surveillance.
– Proportionality Principle and the limits of voluntary investigation: Case law holds that voluntary investigations exceeding what is necessary and reasonable are illegal. Long-term or wide-area AI surveillance of citizens could therefore be found "excessive" and unlawful.
– Principles of the Personal Information Protection Act: Purposes of use must be limited, and only the minimum necessary data may be collected and stored. "Individual identification codes" such as facial recognition data require particularly strict handling.
If an AI police system becomes a reality, it is expected to have the following functions:

While technology advances rapidly, legal reform takes time. This creates the risk that technology will be introduced piecemeal while legislation lags behind. Moreover, if the basis for an AI decision cannot be explained, that is a fatal problem from the standpoint of due process.
While AI police systems have great advantages, they inevitably face constitutional restrictions and the risk of violating privacy. The following three points are particularly essential for their introduction:
| Human involvement | Important decisions must be reviewed by a human. |
| Transparency | Error rates and decision criteria must be made public and explained to citizens. |
| Appeals system | A mechanism must be established that allows citizens to easily file objections. |
→ For the time being, “AI support” will be the norm, but this may expand to “automated processing” provided that the system is designed and audited properly.
The introduction of an AI judge system could bring about major changes to the judicial system.
The possibility of introducing AI judges differs significantly between civil and criminal cases.
・Relationship with Article 32 of the Constitution (right of access to the courts): All citizens have the right to a trial. Even if AI judges are introduced, an option for trial by a human judge must remain available.
・Who may exercise judicial power (Article 76 of the Constitution): Judicial power resides in the courts, and judges perform their duties "independently, in accordance with their conscience." Entrusting judicial power to an AI, which has no conscience, may be inconsistent with this constitutional design. If the parties consent in advance to an "AI judgment," however, there is room to secure a degree of constitutionality.
・Principle of open trials (Article 82 of the Constitution): Trials must be conducted in open court. Because an AI's internal processes are not visible, explaining the reasons for its decisions to citizens is a challenge.
・Strengthening and ossifying reliance on precedent: Because AI learns from past precedents, it tends to reproduce outdated values and may fail to adapt flexibly to social change.
Clarifying responsibility for miscarriage of justice
Can an AI judgment be appealed? Will an appeal always be handled by a human? To what extent should an AI judgment be respected? These questions are inseparable from the question of where responsibility lies, and designing a system to answer them is essential.
Current AI technology is limited to assisting in routine cases with few contentious issues. Advanced judgments, such as interpreting legal provisions, assessing the credibility of evidence, and adjusting social values, are still dependent on humans. However, depending on technological advances and social consensus, it cannot be denied that partially automated adjudication may become a reality.
| The role of AI judges | Improving the efficiency of evidence analysis, supporting commercial disputes, ensuring consistency in sentencing, and expanding the scope of minor cases. |
| Legal issues | Relationship with the Constitution, the rigidity of precedent-based judgments. |
| Practical issues | Responsibility for miscarriage of justice (civil, criminal, and AI), designing an appeals system. |
→ For the time being, the focus will be on “support functions,” but with technological advances and social consensus, “partially automated adjudication” may be introduced in the future for minor cases and specialized fields.
AI has a “black box” problem. In many cases, humans cannot understand why a decision was made. This is particularly serious in the judicial and police fields, where the parties involved need reasons that can be challenged or appealed.
To use AI in the legal field, at least the following three conditions must be met:
For example, if AI determines that a person poses a high flight risk in a bail decision, that person must be able to learn the grounds for the determination and challenge them.
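To make the contrast with a black box concrete, here is a deliberately transparent, purely hypothetical scoring sketch in Python. Every factor, weight, and name below is invented for illustration; the point is that each contribution to the score is explicit and therefore individually contestable.

```python
# Hypothetical, deliberately transparent rule-based score. All factors and
# weights are invented; a real system would need legally vetted criteria.
RULES = [
    ("no fixed address", 3),
    ("prior failure to appear", 4),
    ("holds a foreign passport", 2),
]

def flight_risk(facts: set[str]) -> tuple[int, list[str]]:
    """Return a score plus the named reasons that produced it."""
    score, reasons = 0, []
    for factor, weight in RULES:
        if factor in facts:
            score += weight
            reasons.append(f"{factor} (+{weight})")
    return score, reasons

score, reasons = flight_risk({"no fixed address", "prior failure to appear"})
# score == 7, and `reasons` names both contributing factors
```

Unlike an opaque statistical model, a defendant here can dispute a specific input ("I do have a fixed address") or a specific weight, which is exactly the kind of contestability due process demands.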
AI learns from past data, but that data may itself embed discrimination and prejudice.
Japan does not have a comprehensive anti-discrimination law, making it difficult to address discriminatory treatment caused by AI. While there are specific laws such as the Act on the Elimination of Discrimination against Persons with Disabilities, there are no provisions that assume the use of AI. In this respect, Japan’s systems are weaker than those of Europe and the United States.
| China | In the judicial field, the “Smart Court” has put AI to practical use in sentencing support. In the police field, Beijing and Shenzhen are currently operating surveillance systems that combine street cameras with facial recognition AI. Integration with the “social credit system” is also progressing, but there is strong international criticism of excessive surveillance. |
| EU | In 2024, the EU enacted the AI Act. It classifies the use of AI in the police and judicial fields as "high risk" and imposes strict obligations, phased in from 2026 onward. Real-time facial recognition in public spaces is generally prohibited (with exceptions for serious criminal investigations), and predictive policing is subject to transparency requirements and human-rights impact assessments. |
| USA | Following the racial bias issue surrounding COMPAS, an AI for assessing recidivism risk, AI regulations are underway at the state level. There are no comprehensive regulations at the federal level yet. |
| Japan | Guidelines for the use of AI are currently being formulated. Specific regulations for the judicial and police fields have not yet been established, and there is no comprehensive anti-discrimination law, making it difficult to address discriminatory treatment caused by AI. |
| Explainability | A system whose reasoning is readable, reproducible, and reviewable is essential. |
| Fairness | A system for auditing and correcting bias in data and design is essential. |
| Constitutional consistency | It is essential to design democratic control that corresponds to the police and judiciary while guaranteeing the right to a trial. |
→ The prerequisite for introducing AI is to clarify not only the technical aspects but also the institutional and constitutional aspects.
There are many challenges to introducing AI police and AI judges. However, given technological advances and social needs, rejecting them outright is not realistic. Introduction will proceed in stages, and eventually full automation will come into view in some areas. This chapter outlines realistic scenarios for proceeding with introduction while minimizing risks.
① Short-term (3–5 years): use as a supplementary tool
Police field:
・Search for specific individuals and vehicles through video analysis and detect suspicious behavior (final decision made by humans)
・Automatic traffic-violation detection (AI organizes the evidence; humans make disposition decisions)
・Proposals for efficient patrols based on crime-data analysis
Judicial field:
・Automated case-law search and issue organization (improving research efficiency)
・Drafting damage calculations and checking standard contracts
・Presenting multiple settlement proposals in mediation
System development:
・Quality standards and certification systems for AI systems
・Recording and auditing systems for AI assistance
・Guaranteeing final human decision-making

② Medium-term (5–10 years): semi-automation in limited areas
Police field:
・Automated processing of minor traffic violations such as parking violations and slight speeding (human review if an objection is filed)
・Automation of administrative procedures with clear requirements, such as driver's license and permit renewals
Judicial field:
・AI rulings in small-sum disputes (e.g., under 1 million yen) with the parties' agreement (right of appeal guaranteed)
・Family mediation with clear standards, such as child-support calculations and property division; introduction of an "AI mediation" system
System development:
・Enactment of a special law on semi-automated processing
・Objections to semi-automated criminal processing handled by a human within 48 hours
・Regular AI audits and a compensation scheme

③ Long-term (10–30 years): partially automated adjudication in specialized fields
Police field:
・Automated warnings and intensified surveillance based on improved crime-prediction accuracy
・Advanced investigative support through analysis of organized crime and financial flows
Judicial field:
・Automated adjudication in fields that can be formalized, such as intellectual-property and tax litigation
・AI-proposed criminal sentences based on uniform national standards, with the judge making the final decision
Prerequisites for realization:
・Reinterpretation or amendment of the Constitution
・Dramatic improvement in AI explainability
・Building trust throughout society
・Dramatic improvement in cybersecurity (preventing attacks on and tampering with AI systems)
・Improved public digital literacy (using AI with an understanding of its limitations)
・International harmonization of systems (treaty-level coordination, e.g., whether AI judgments can be enforced overseas)
| Short-term | Introduce support functions as auxiliary tools |
| Medium-term | Advance semi-automation in limited fields and improve legal systems |
| Long-term | Introduce partially automated adjudication in specialized fields (subject to constitutional and social consensus) |
→ It is essential to guarantee a “final human review” and a “protest system” at every stage. This will enable us to enjoy the benefits of technology while protecting human rights and democratic values.
Over the past five chapters, we have examined the possibilities and challenges of AI police and AI judges. With technological advances, the future once depicted in science fiction is steadily approaching reality.
It is not realistic to completely eliminate AI from the judicial and police fields. As long as there are urgent needs for personnel shortages, work efficiency, and uniformity of judgment, the trend toward the use of AI will likely be unstoppable.
However, the judiciary and police are the foundation of society, protecting people’s lives, freedom, and property. Sacrificing justice and fairness for the sake of efficiency is unacceptable.
The exercise of power by AI goes to the very foundations of democracy.
At the beginning, I asked, “What if it were AI, not a human, that detected your traffic violation?” I would like to ask once more at the end:
“Would you want to be judged by AI?”
Some people may be okay with it as long as it’s fair and swift, while others would prefer to be judged by a human. While many people currently opt for the latter, it’s important that we maintain this choice. We must avoid a situation where we unknowingly lose our options.
AI technology will certainly change society. However, the direction it takes will not be determined by engineers or companies, but by the decisions of each and every citizen. It is precisely because justice and public safety are fields that are so fundamental to society that we must carefully, yet positively, consider how we should approach AI.
We are entering a new era of decentralized internet—Web3—driven by advancements such as blockchain-based payment networks and tokenized ecosystems. While this era can feel exciting and full of promise, it also raises key concerns: can the next phase of decentralized internet become a sustainable part of our social infrastructure?
This article first examines the evolving nature of our social infrastructure in the Japanese context, then turns to the environmental concerns surrounding Web3, a critical analysis of the trade-offs of the PoW-to-PoS transition, and the challenges of regulation in this evolving landscape.
Evolving social infrastructure: why is Web3 involved?
Technology is advancing at a rapid pace, but it remains important to ground innovation in its original purpose—to improve our lives.
Japan has already recognised the need to re-envision the relationship between technology and society for some time. The concept of “Society 5.0,” introduced in the 5th Science and Technology Basic Plan (Cabinet decision of January 22, 2016), sets out a vision of a human-centered society in which economic development and the resolution of social issues are compatible with each other through a highly integrated system of cyberspace and physical space.
Web3 fits into this narrative and is already being woven into conceptions of Society 5.0. Its global, borderless infrastructure, in particular its blockchain-based architecture, could play a significant role in this shift by enabling freer and more secure movement of data. Importantly, Japan is also one of the first governments to formally recognize Web3, with initiatives from METI and the FSA to explore regulatory frameworks and business use cases. This shows that Japan is not only theorizing about a new way to apply technology in society, but also experimenting with how Web3 itself can be integrated into that future.
Almost ten years later, this stance of fostering innovation while maintaining a safe regulatory environment continues: as of June 2025, the Digital Agency of Japan describes the government's policy as advancing the social implementation of digital technologies.
Web3 is founded on blockchain technology: a decentralized ledger that dispenses with central servers. This decentralization makes the network more resilient to hacking, since there is no single central system to compromise. That resilience matters increasingly in an era when AI and other advanced technologies can be used to develop harmful programs that hijack centralized systems.
One of the earliest consensus mechanisms, Proof of Work (PoW), comes with a heavy environmental cost. To validate transactions, miners must solve computationally expensive hash puzzles, consuming massive amounts of electricity, much of it generated from non-renewable sources.
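To make that cost concrete, here is a minimal Python sketch of the hash puzzle behind PoW. It is illustrative only: real Bitcoin mining applies double SHA-256 to block headers against a numeric target, and the block name and difficulty below are invented for demonstration.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with
    `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Difficulty 4 takes ~65,000 hashes on average; Bitcoin's real difficulty
# is astronomically higher, which is where the energy cost comes from.
nonce = mine("example-block", difficulty=4)
```

Each additional zero digit multiplies the expected work by 16, so the network's energy use scales directly with how hard the puzzle is set.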
Bitcoin, for example, consumes not only electricity but also enormous amounts of water to cool the computers used in mining. According to Alex De Vries in Bitcoin’s Growing Water Footprint (Cell Reports Sustainability, 2024), Bitcoin’s water footprint increased by 166% from 2020 to 2021, rising from 591.2 to 1,573.7 gigaliters (GL). The water footprint per transaction in those years jumped from 5,231 liters to 16,279 liters, and by 2023 the annual total may have reached 2,237 GL.
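As a quick sanity check, the growth rates follow directly from the figures cited above:

```python
# Annual water footprint in gigaliters (figures from De Vries, 2024, as cited above)
gl_2020, gl_2021 = 591.2, 1573.7
increase_pct = (gl_2021 - gl_2020) / gl_2020 * 100  # about 166%

# Per-transaction footprint in liters
l_2020, l_2021 = 5231, 16279
growth = l_2021 / l_2020  # roughly a 3.1x jump in a single year
```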
To put this into perspective, the per-transaction figure of roughly 16,000 liters is about enough water to fill a small backyard swimming pool, for a single Bitcoin transaction.
This footprint highlights that blockchain technology has environmental costs that have compounding effects beyond electricity use, straining other vital resources like water.
When thinking about the integration of society, climate, and technology, considering the knock-on effects and the interconnected nature of our coexistence on the planet is essential.
A significant environmental improvement came with Ethereum’s transition from PoW to Proof of Stake (PoS) in September 2022 (“The Merge”), which cut its estimated electricity consumption by ~99.95% (De Vries, 2023; Kapengut & Mizrach, 2023).
In PoS, validators stake cryptocurrency as collateral to secure the network, replacing energy-intensive mining. Misbehavior can result in loss of the stake (“slashing”), aligning incentives with honest participation.
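The proportionality at the heart of PoS can be sketched in a few lines of Python. This is a simplification: real protocols such as Ethereum select proposers through protocol-level randomness (RANDAO) and committees, and the names and stake amounts below are invented.

```python
import random

def pick_validator(stakes: dict[str, float], rng: random.Random) -> str:
    """Choose a validator with probability proportional to staked amount."""
    names = list(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names], k=1)[0]

stakes = {"alice": 70.0, "bob": 20.0, "carol": 10.0}
rng = random.Random(0)  # fixed seed so the demonstration is repeatable
wins = {name: 0 for name in stakes}
for _ in range(10_000):
    wins[pick_validator(stakes, rng)] += 1
# alice, holding 70% of the stake, is selected roughly 70% of the time
```

This proportionality is exactly what drives the critique discussed next: whoever can stake the most is selected, and rewarded, the most often.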
However, PoS is not without criticism:
Essentially, the problem is that if you are rich, you can “stake” more cryptocurrency than someone who has less money. The more you stake, the higher the possibility that you are selected to validate new blocks and thus earn more rewards in the long run. This creates a harsh wealth divide, where the rich become richer and also more powerful, creating potential for an oligopolistic or monopolistic market of staking to evolve. This undermines the idea that Web3 could become a fairer extension of our social infrastructure.
Relatedly, given the potential for an oligopolistic or monopolistic staking market, it is conceivable that a group of (presumably wealthy) validators could band together to manipulate blockchain networks to their advantage, for example by censoring transactions or raising fees for profit.
While PoS drastically improves energy efficiency, these governance risks mean sustainability must be measured not only in carbon savings but also in resilience against centralization.
If governance and safeguards are insufficient, PoS cannot be a sustainable choice. To be truly sustainable, a system must support not only the natural environment (as PoS does) but also the social environment. This is echoed in Japan's Society 5.0 vision as well.
If wealth concentration and inequity carry over into the Web3 world, the effects could be dire as this technology becomes part of the fabric of our daily life.
Because Web3 is built on the idea of decentralization, it can be difficult for centralized authorities to regulate. Without a central entity to oversee or enforce compliance, traditional law is hard to apply directly. Open questions include how to assign responsibility to actors on a decentralized network and how to verify adherence to laws and best practices in anonymous environments. It also remains a challenge for regulators to apply existing regulations, in their current form, to Web3.
Japan is a case study of how governments are attempting to bridge the gap between decentralized technology and the need for crisis-proof regulations, but understanding how this links to sustainability is still a challenge.
The intersection of Web3 and sustainability creates both new challenges and opportunities. While the decentralized internet will likely put a strain on our limited environmental resources, it also offers tools that could revolutionize climate accountability.
For Web3 to form part of Society 5.0’s human-centered vision, it must evolve not only through energy-efficient consensus but also through governance models and legal frameworks.
To this end, legal professionals will continue to be at the forefront of pushing positive changes to guide innovation towards responsible growth.
This article was authored by a Legal Assistant at So & Sato Law Offices.
The content reflects an individual exploration of emerging legal and social issues, and does not constitute legal advice or represent the official position of the firm.
Disclaimer: This article is for informational purposes only and does not constitute legal advice. For legal questions related to blockchain, sustainability, or regulatory frameworks, please consult a qualified professional.
General References and related reads:
https://bernardmarr.com/why-blockchain-nfts-and-web3-have-a-sustainability-problem/
https://www.techtarget.com/sustainability/feature/Web3-and-sustainability-Benefits-and-risks
https://www.jbs.cam.ac.uk/2023/blockchain-sustainability-ethereum
https://ccaf.io/cbnsi/cbeci/comparisons
https://www.osl.com/hk-en/academy/article/deeply-reflecting-on-web2-thinking-hard-about-web3
https://entertainmentlawyermiami.com/regulatory-compliance-in-web3-a-guide-for-businesses
https://en.cryptonomist.ch/2025/06/24/bitcoin-and-regulation-the-regulatory-revolution-in-japan
https://www.japan.go.jp/kizuna/2025/06/regional_revitalization_web3.html
https://www.leewayhertz.com/ai-in-web3/#What-is-web3
https://www.coinbase.com/learn/crypto-basics/what-is-proof-of-work-or-proof-of-stake
https://kilpatricktownsend.jp/en/japans-national-strategy/
https://www.spiceworks.com/tech/tech-general/articles/web-2-vs-web-3
https://unepccc.org/wp-content/uploads/2019/02/udp-climate-change-blockchain.pdf
https://digital-strategy.ec.europa.eu/en/policies/blockchain-climate-action
https://www8.cao.go.jp/cstp/english/society5_0/index.html
https://www.meti.go.jp/policy/economy/keiei_innovation/sangyokinyu/Web3/web3.pdf