
Hard Law, Soft Law, and What Companies Need to Know Before Entering Both Markets

Executive Summary

Artificial intelligence regulation is rapidly becoming a decisive factor in cross-border business strategy. Companies developing or deploying AI systems in multiple jurisdictions must navigate fundamentally different regulatory models, particularly in the European Union and Japan. This article provides a practical overview of how these two approaches diverge, and what those differences mean for businesses operating across both markets.

The EU has adopted a comprehensive, binding framework through the EU Artificial Intelligence Act. Built around a risk-based classification system, the Act imposes extensive ex ante obligations on providers and deployers of high-risk AI systems, including governance requirements, technical documentation, conformity assessments and significant enforcement exposure. For companies entering or operating in the EU market, AI compliance is no longer a peripheral issue but a core component of product design, market entry planning and corporate risk management.

Japan has taken a markedly different path. Rather than introducing AI-specific hard law, it has enacted a policy-oriented framework that promotes research, development and social implementation of AI while relying on existing laws—such as data protection, labour and consumer protection—to address concrete risks as they arise. This innovation-first, ex post accountability model reduces upfront regulatory friction but places greater emphasis on internal governance, documentation and responsiveness to regulatory guidance.

For cross-border businesses, these differences translate into distinct compliance strategies, timelines and cost structures. EU alignment often provides a robust baseline, but it does not eliminate Japan-specific legal considerations. Conversely, systems developed primarily for the Japanese market may require substantial redesign to meet EU requirements. Understanding these dynamics is essential for companies seeking to deploy AI responsibly, competitively and at scale across both jurisdictions.

This article is intended for companies considering entry into the EU or Japanese markets, as well as businesses already operating across both jurisdictions.

Chapter 1: Introduction – Why AI Regulation Matters for Cross-Border Business

Why are regulatory responses to artificial intelligence diverging so sharply across jurisdictions, and why does this divergence matter in practice?

AI regulation is often discussed in simplified terms, with the EU portrayed as “strict” and Japan as “lenient”. While not entirely inaccurate, this framing obscures the more important question for businesses: how different regulatory models shape compliance strategy, product design and market entry decisions.

The contrast between the EU and Japan is particularly instructive because both jurisdictions share broadly similar policy goals (such as promoting trustworthy AI and mitigating social harm), yet pursue those goals through fundamentally different legal structures. The EU has chosen a uniform, AI-specific framework built around ex ante risk control, whereas Japan has embedded AI governance within existing legal regimes and administrative practice.

For companies operating across borders, these design choices are not merely theoretical. They affect when legal review must occur, how much documentation is required before deployment, how enforcement risk materialises, and where internal accountability should sit within an organisation.

This article does not attempt to assess which approach is preferable. Instead, it focuses on how these regulatory models function in practice, and what businesses need to understand when developing or deploying AI systems in both markets. It proceeds by first outlining the contrasting regulatory philosophies of the EU and Japan, then examining each framework in detail, before comparing their practical impact through a case study and concluding with concrete guidance for businesses.

Chapter 2: Two Regulatory Philosophies – Ex Ante Control vs Ex Post Accountability

This chapter provides a conceptual framework for understanding the regulatory choices made by the EU and Japan before turning to the specific legal regimes.

2.1 The Fundamental Choice: Binding Obligations or Flexible Norms?

At the core of the EU-Japan comparison lie two related questions. First, how should novel and rapidly evolving technologies be regulated: through binding legal obligations or through flexible, non-binding norms? Second, how do different legal systems balance the perceived trade-off between risk mitigation and innovation incentives?

In general terms, hard law refers to legally binding rules that create enforceable rights and obligations and may be sanctioned through courts or administrative penalties. Soft law, by contrast, encompasses guidelines, principles and policy statements that lack direct legal enforceability but may nonetheless influence behaviour through administrative practice, market expectations or reputational effects.

2.2 The Hard Law vs Soft Law Trade-Off in AI Regulation

In the context of AI, this distinction is particularly significant. AI technologies evolve quickly, and their real-world impacts are often difficult to predict at the point of development. Hard law can provide legal certainty, clear allocation of responsibility and strong protection for affected individuals, but it also risks becoming outdated or imposing compliance burdens that disproportionately affect smaller or younger firms. Soft law offers adaptability and can respond more readily to technological change, but may suffer from ambiguity and weaker accountability mechanisms.

2.3 The EU and Japanese Choices

The EU and Japan have resolved this tension in different ways. The EU has prioritised legal certainty, fundamental rights protection and harmonisation across Member States, even at the cost of increased regulatory complexity. Japan has prioritised innovation, experimentation and international competitiveness, seeking to address AI-related risks primarily through existing legal frameworks rather than AI-specific prohibitions.

Chapter 3: The EU AI Act – A Risk-Based, Ex Ante Compliance Regime

Against this conceptual backdrop, the EU Artificial Intelligence Act was first proposed by the European Commission in 2021 as part of a broader digital regulatory agenda. Its central organising principle is a risk-based classification of AI systems, under which regulatory obligations increase in line with the potential impact of an AI system on health, safety and fundamental rights. Following legislative negotiations in the European Parliament and the Council, the Act was formally adopted in 2024 and is being applied on a phased basis from 2025 onward, with different provisions becoming applicable at different times.

3.1 Jurisdictional Scope and Extraterritorial Application

The EU AI Act has a broad territorial scope. It applies not only to providers and deployers established within the EU, but also to entities outside the EU where AI systems are placed on the EU market or their outputs are used within the EU. As a result, non-EU companies offering AI-enabled products or services that affect individuals or businesses in the EU may be subject to the Act’s requirements.

Certain activities are excluded from the scope of the Act. These include AI systems developed or used exclusively for military, defence or national security purposes, as well as AI used by foreign public authorities or international organisations for law enforcement, subject to safeguards for individual rights. AI systems used purely for personal, non-professional purposes are also excluded. In addition, AI developed and released under free and open-source licences may benefit from partial exemptions, provided the system does not fall within the high-risk category.

3.2 Enforcement Architecture

Enforcement of the EU AI Act follows a hybrid model. At the EU level, the newly established AI Office within the European Commission plays a central coordinating and supervisory role, particularly in relation to general-purpose AI models. Day-to-day enforcement, however, is largely carried out by national competent authorities and market surveillance authorities designated by each Member State. This structure mirrors other EU product safety and digital regulation regimes and is intended to combine central oversight with local enforcement capacity.

3.3 Banned AI Practices

Article 5 of the EU AI Act identifies certain AI practices that are prohibited outright due to their unacceptable risk to fundamental rights. These include, among others, AI systems that deploy subliminal techniques or exploit vulnerabilities in order to materially distort behaviour, certain forms of social scoring, and predictive policing systems that assess an individual’s risk of committing criminal offences based on profiling. The Act also prohibits large-scale scraping of facial images to create biometric databases, as well as emotion recognition systems used in workplaces or educational institutions, subject to limited and carefully defined exceptions.

3.4 High-Risk AI Systems

AI systems classified as high-risk are subject to the most extensive compliance obligations under the Act. An AI system is considered high-risk where it is used as a safety component of a product regulated under existing EU product safety legislation (listed in Annex I), or where it falls within one of the use cases enumerated in Annex III, such as employment-related decision-making, creditworthiness assessment or access to essential public services. In limited circumstances, providers may argue that a system listed in Annex III does not pose a significant risk, but this requires robust documentation and justification demonstrating the absence of material risk.

For providers and deployers of high-risk AI systems, the Act imposes detailed requirements relating to risk management, data governance, technical documentation, record-keeping, transparency, human oversight and conformity assessment. These obligations apply regardless of whether the system is placed on the market or used internally, underscoring the EU’s emphasis on ex ante risk control.
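
The classification logic described above can be sketched, purely for illustration, as a simple decision procedure. This is a simplified model of the Annex I / Annex III test, not an authoritative reading of the Act; the category names and field names are assumptions made for this example only.

```python
from dataclasses import dataclass

# Illustrative sketch only: a simplified model of the high-risk test
# described in this section. Category names are assumptions for the
# example, not an authoritative reading of the EU AI Act.

ANNEX_III_USE_CASES = {
    "employment_decision_making",
    "creditworthiness_assessment",
    "essential_public_services",
}

@dataclass
class AISystem:
    use_case: str
    is_safety_component: bool        # safety component of an Annex I product
    significant_risk_rebutted: bool  # documented absence of material risk

def is_high_risk(system: AISystem) -> bool:
    """Simplified test: Annex I safety components are high-risk; Annex III
    use cases are high-risk unless the provider has documented and justified
    the absence of significant risk."""
    if system.is_safety_component:
        return True
    if system.use_case in ANNEX_III_USE_CASES:
        return not system.significant_risk_rebutted
    return False

# Example: an AI recruitment screening tool (cf. Chapter 6)
tool = AISystem("employment_decision_making", False, False)
print(is_high_risk(tool))  # True
```

The point of the sketch is that the default for an Annex III use case is high-risk, and escaping that classification requires an affirmative, documented justification rather than mere silence.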

3.5 Penalties

EU AI Act violations may result in the following administrative fines:

  1. Up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher, for engaging in prohibited AI practices
  2. Up to EUR 15 million or 3% of total worldwide annual turnover for non-compliance with most other obligations, including those applicable to high-risk AI systems
  3. Up to EUR 7.5 million or 1% of total worldwide annual turnover for supplying incorrect, incomplete or misleading information to authorities

In addition to fines, corrective measures, market withdrawal and recall orders may be imposed.

Chapter 4: Japan’s AI Governance Model – Innovation-First, Ex Post Accountability

Japan’s regulatory response to artificial intelligence is centred not on restriction, but on promotion. The Act on the Promotion of Research and Development, and Utilization of AI-related Technology (the “AI Promotion Act”) represents a conscious policy choice to support innovation while managing risk primarily through existing legal frameworks rather than through AI-specific prohibitions or licensing regimes.

4.1 Legislative Intent and Basic Framework

Unlike the EU AI Act, the Japanese AI Promotion Act does not establish a comprehensive set of binding obligations directly applicable to AI developers or deployers. Instead, it functions as a policy framework statute. Its stated objectives include promoting research and development of AI-related technologies, facilitating their social implementation, and ensuring that such use aligns with fundamental principles such as human-centricity, transparency and fairness. The Act is intended to operate alongside, and not replace, existing laws governing data protection, consumer protection, competition, labour and product safety.

The legislative materials accompanying the AI Promotion Act make clear that Japan views itself as lagging behind other major economies in the development and practical deployment of AI technologies. At the same time, public concern regarding the societal impact of AI has increased. Rather than responding with a new layer of sector-agnostic regulation, Japanese policymakers have opted for an approach that emphasises voluntary compliance, administrative guidance and coordination across ministries.

This philosophy reflects a broader tradition within Japanese administrative law, where regulatory objectives are often pursued through a combination of non-binding guidelines, consultation and informal enforcement, backed by the possibility of reputational consequences and, where necessary, application of existing statutory powers.

4.2 Scope of Application

The AI Promotion Act does not contain explicit extraterritorial application provisions like those of the EU AI Act. However, government policy documents and ministerial statements have clarified that foreign companies conducting business activities directed at the Japanese market (for example, offering services in Japanese or targeting Japanese users) are not categorically exempt from the Act's scope.

The Act’s provisions are framed as duties to make reasonable efforts rather than strict legal obligations. However, all companies operating in Japan—whether domestic or foreign—remain fully subject to applicable Japanese laws governing the outcomes and impacts of their activities, including the Act on the Protection of Personal Information (APPI), labour and employment legislation, consumer protection statutes and sector-specific regulations.

4.3 Research, Investigation and Guidance Authority

Article 16 of the AI Promotion Act grants the government the following powers:

  1. Collecting information on domestic and international trends in AI-related technology research, development and utilization
  2. Analyzing cases of rights infringement through improper purposes or inappropriate methods, and considering countermeasures
  3. Conducting other research and studies that contribute to the promotion of AI-related technology
  4. Based on these findings, providing guidance, advice, information and other necessary measures to AI utilization business operators and others

Notably, the latter part of Article 16 uses the phrase "shall provide" rather than "may provide", suggesting that guidance and advice will be actively implemented. The specific measures and criteria for such actions, however, will become clearer only through future operational practice.

4.4 Interaction with Existing Legal Regimes

A key feature of Japan’s approach is that substantive legal risk associated with AI systems is addressed through existing laws rather than through the AI Promotion Act itself. For example, discriminatory outcomes in AI-assisted hiring or lending may give rise to liability under labour law, anti-discrimination principles or industry-specific regulations. Improper collection or use of training data may trigger enforcement under the APPI. Misleading or unsafe AI-enabled products may fall within the scope of consumer protection or product safety laws.

Accordingly, while the absence of AI-specific hard law may reduce upfront compliance burdens, it does not eliminate legal exposure. Instead, risk is managed ex post through established legal doctrines and administrative practice. For businesses, this shifts the compliance focus from formal certification and pre-market approval to internal governance, documentation and the ability to demonstrate reasonable and responsible use of AI in light of existing legal standards.

Chapter 5: EU vs Japan – What the Differences Mean in Practice for Businesses

Having examined both regulatory regimes, this chapter analyses their practical impact on business operations.

5.1 Key Regulatory Differences at a Glance

The table below summarises the core differences between the EU and Japanese approaches to AI regulation from a business and compliance perspective.

Primary regulatory approach
  EU: Binding, AI-specific regulation with legally enforceable obligations
  Japan: Policy-led governance combined with existing sectoral laws

Regulatory focus
  EU: Protection of fundamental rights through risk management and ex ante controls
  Japan: Promotion of innovation, with risk addressed through ex post accountability

Risk classification
  EU: Explicit risk-based system (unacceptable, high-risk, limited-risk, minimal-risk)
  Japan: No formal AI-specific risk classification system

Key obligations
  EU: Pre-market conformity assessment, technical documentation, risk management, human oversight
  Japan: Governance frameworks, reasonable-efforts duties, compliance with the APPI, labour and consumer laws

Enforcement model
  EU: Administrative enforcement by national authorities, coordinated at EU level
  Japan: Administrative guidance and public disclosure, with enforcement via existing laws

Penalties and sanctions
  EU: Significant administrative fines, corrective measures, market withdrawal
  Japan: No AI-specific penalties; sanctions arise under existing statutes

Extraterritorial reach
  EU: Yes, where AI systems affect the EU market or individuals in the EU
  Japan: No explicit extraterritorial provisions, but foreign companies conducting business activities directed at the Japanese market may fall within the Act's scope

Practical impact on businesses
  EU: Higher upfront compliance cost and longer time to market, but high regulatory certainty
  Japan: Lower upfront friction, with greater emphasis on internal governance and responsiveness

5.2 Operational Impact: Compliance, Timing, Certainty and Enforcement

The regulatory differences outlined above translate into distinct operational realities for businesses.

Compliance Structure and Cost
Under the EU AI Act, compliance for high-risk systems is structured, formalised and front-loaded. Providers must establish risk management systems, ensure data governance standards, prepare detailed technical documentation, maintain logs, implement human oversight measures and undergo conformity assessments prior to market entry. These obligations entail significant legal, technical and organisational costs, often requiring specialised compliance personnel or external advisors.

In Japan, there is no equivalent AI-specific pre-market conformity regime. Compliance costs arise primarily from ensuring alignment with existing legal obligations such as data protection, labour and consumer protection laws. This allows greater discretion in development sequencing and may reduce initial regulatory expenditure, particularly for smaller firms.

Time to Market
The EU’s ex ante risk control can extend development timelines. For high-risk systems, conformity assessment and internal preparation may add months to market entry, a critical consideration for start-ups and fast-moving technology companies.

Japan’s framework is generally more permissive at the deployment stage. Without mandatory AI-specific approval processes, companies can introduce services more quickly, provided they are prepared to address legal issues as they arise under existing laws. This prioritises speed and experimentation but places greater responsibility on businesses to manage downstream risk.

Regulatory Certainty
The EU AI Act offers a high degree of formal regulatory certainty. Risk categories, prohibited practices and compliance obligations are set out in binding legislation applicable across all Member States, facilitating long-term planning and harmonised compliance strategies.

In Japan, certainty derives from the interpretation and application of established legal regimes rather than AI-specific rules. While this provides flexibility, it may create uncertainty where the application of existing laws to novel AI use cases has not yet been tested through enforcement or case law.

Enforcement Risk
Enforcement exposure under the EU AI Act is explicit and potentially severe, with administrative fines reaching up to EUR 35 million or a percentage of global annual turnover, plus potential product withdrawals and corrective measures.

Japan’s AI Promotion Act does not impose fines or penalties. However, violations of underlying laws such as the APPI or sector-specific statutes may result in administrative orders, penalties or civil liability. Public disclosure and administrative guidance can also carry significant reputational consequences, particularly in a market where regulatory relationships and public trust are paramount.

5.3 Strategic Implications for Cross-Border Operations

For companies operating in both markets, meeting EU AI Act requirements often establishes a robust baseline for governance, documentation and risk management. However, this does not eliminate the need to assess Japanese legal risks independently, particularly regarding personal data handling, employment practices and consumer-facing representations.

Conversely, companies developing AI systems primarily for the Japanese market may find their governance structures insufficient to satisfy EU ex ante requirements without substantial modification. Early consideration of EU risk classifications and documentation expectations is therefore critical for businesses with global ambitions.

Chapter 6: Case Study – AI Recruitment Tools in the EU and Japan

To illustrate how these regulatory frameworks differ in practice, consider a common enterprise use case: a company develops an AI system that screens job applicants by analysing CVs, online assessments and interview responses to recommend candidates for hiring.

This example sits at the intersection of high-stakes decision-making, potential discrimination risk and intensive personal data processing. It is also a system type that multinational companies may wish to deploy consistently across regions.

6.1 EU: Likely Classification as High-Risk AI

Under the EU AI Act, AI systems intended for recruitment, selection or employment-related decision-making are generally treated as high-risk where they can materially affect individuals’ access to employment opportunities. An AI-driven recruitment screening tool will typically fall within the high-risk category listed in Annex III.

If classified as high-risk, the provider and deployer must comply with detailed obligations, including:

  1. Establishing and maintaining a risk management system
  2. Ensuring data governance and quality standards for training, validation and testing data
  3. Preparing technical documentation and maintaining logs
  4. Providing transparency information to deployers and affected individuals
  5. Implementing effective human oversight measures
  6. Undergoing conformity assessment before placing the system on the market

Non-compliance can trigger administrative measures, including corrective actions, market withdrawal and significant administrative fines.

6.2 Japan: No AI-Specific Pre-Market Approval, but Legal Risk Remains

In Japan, the same recruitment screening system is not subject to AI-specific conformity assessment. Instead, compliance obligations and legal exposure arise through existing laws applicable to employment decision-making and personal data handling.

Key legal considerations include:

  1. The Act on the Protection of Personal Information (APPI), which governs the collection, use and third-party provision of applicants' personal data, including the improper collection or use of training data
  2. Labour and employment law and anti-discrimination principles, under which discriminatory screening outcomes may give rise to liability
  3. Consumer protection and fair representation rules, where the tool's capabilities are described to candidates or client companies

Japanese compliance focuses on internal governance and readiness to respond to issues, rather than satisfying formal ex ante regulatory requirements. A prudent approach includes documenting dataset selection, bias testing, decision-making processes and escalation procedures, and ensuring HR and compliance teams can explain how the system is used and monitored.

6.3 Core Practical Difference: Ex Ante Conformity vs Ex Post Accountability

This case study underscores the core operational distinction: in the EU, companies must demonstrate compliance before market entry for high-risk systems, with structured documentation and conformity assessment playing a central role. In Japan, the emphasis is on ensuring AI use does not breach existing legal obligations and that the company can justify its practices if challenged.

For businesses deploying the same recruitment tool in both markets, an effective strategy is to design governance and documentation to satisfy EU high-risk expectations from the outset, while separately confirming Japan-specific issues such as APPI requirements, HR data handling practices and local expectations around transparency.

Chapter 7: What Companies Should Do Now – Practical Takeaways

For companies developing, procuring or deploying AI systems in both the EU and Japan, the divergence between these regulatory models requires deliberate and jurisdiction-sensitive planning.

7.1 Design Governance with the EU in Mind, but Do Not Stop There

Building governance structures that satisfy EU high-risk requirements often provides a strong foundation. Risk management processes, documentation practices, dataset governance and human oversight mechanisms designed for EU compliance generally improve internal accountability and transparency across the organisation.

However, EU alignment should not substitute for Japanese legal analysis. Japan-specific issues may still arise under laws such as the APPI, employment regulations or sectoral business laws. Local review remains essential.

7.2 Map AI Use Cases to Legal Risk Early

Businesses should identify and categorise AI use cases at an early stage, focusing on how AI outputs affect individuals, customers or counterparties. Use cases involving hiring, credit, pricing, eligibility or behavioural analysis are more likely to attract regulatory scrutiny in both jurisdictions, albeit through different mechanisms.

Early mapping enables companies to anticipate EU high-risk classification likelihood and assess which Japanese laws may be implicated if similar functionality is deployed domestically.

7.3 Invest in Explainability and Documentation

Across both regimes, the ability to explain how an AI system works, what data it relies on and how decisions are reviewed is increasingly central. In the EU, this is a formal compliance requirement for high-risk systems. In Japan, it is a practical necessity for responding to administrative guidance, audits, complaints or reputational challenges.

Documentation should not be treated as a purely regulatory instrument. It plays a critical role in internal decision-making, incident response and communication with regulators, business partners and affected individuals.

7.4 Prepare for Different Enforcement Dynamics

The enforcement profile differs markedly between the EU and Japan. In the EU, enforcement risk is explicit, rule-based and potentially severe, with administrative fines and market restrictions forming core tools. In Japan, enforcement is more relational and discretionary, with administrative guidance and public disclosure often preceding formal sanctions.

Companies operating in Japan should pay close attention to regulatory relationships, industry practice and public perception, even in the absence of AI-specific penalties.

7.5 Use Legal Advice Strategically

AI regulation is not a one-size-fits-all exercise. The appropriate level of legal involvement depends on the nature of the AI system, its scale and its intended markets. For EU-facing products, early engagement with legal and technical advisors can materially reduce downstream compliance risk and redesign costs. For Japan-facing deployments, periodic review against evolving guidance and enforcement trends may be more effective than upfront formalisation.

Chapter 8: Conclusion – Building a Cross-Border AI Strategy

The EU and Japan have adopted distinctly different regulatory responses to the rise of artificial intelligence. The EU AI Act represents a comprehensive, binding and risk-based framework that prioritises ex ante control and harmonisation across markets. Japan’s AI Promotion Act, by contrast, reflects a policy-driven approach that seeks to foster innovation while managing risk through existing legal regimes and administrative practice.

For cross-border businesses, neither model can be ignored. Understanding how these systems operate, and how they interact with existing laws, is essential to deploying AI responsibly and competitively. As AI technologies and regulatory expectations continue to evolve, proactive and informed legal strategy will remain a critical component of sustainable AI-driven business.


This article reflects information current as of January 2026. Legal and regulatory developments may occur after this date. For specific matters, please consult with qualified legal advisors.

I. Introduction

Japan has enacted and refined crypto regulations since 2017. Japan was once one of the most crypto-friendly nations in the world, but after 2018 it adopted a stricter regulatory stance. It is, however, now becoming friendlier to the Web3 industry again, with the intention of attracting foreign investment.

This article provides an overview of cryptoasset regulations in Japan in 2024.

History of Cryptoasset Regulations in Japan

Early Friendly Era
February 2014 MtGox, then the largest bitcoin exchange in the world and located in Shibuya, Tokyo, went bankrupt.
March 2014 Japanese LDP (Liberal Democratic Party, a governing party in Japan) discussed with the government and decided not to regulate virtual currency at that stage but asked the industry to form a self-regulatory organization.
May 2016 Japan enacted the first virtual currency act in the world. The act was made as an amendment to the Payment Services Act (“PSA”). The act was friendly to startups and intended to foster the industry.
April 2017 The amended PSA described above came into force.
2017  ICO booms took place all over the world, and crypto prices rose. The trading volume of Japanese exchanges became the largest in the world, and many foreign players came to Japan to start their businesses.
Era of Stricter Regulation
February 2018 A massive hacking incident occurred in Japan, in which approximately JPY 58 billion worth of NEM was stolen (the Coincheck incident).
2018-2021 After the Coincheck incident, the Japanese government tightened its administration of the regulations. Many exchanges received business improvement orders and suspension orders, and the market shrank.
May 2020 The amended PSA and the amended Financial Instruments and Exchange Act (“FIEA”) came into force.
Era in Which Web3 Became a National Strategy
2021- The Japanese government’s 2021 national growth strategy states that Web3 is one of Japan’s national strategies. Under this strategy, the LDP’s Web3 project team has issued policy recommendations titled the Web3 White Paper every year since 2022 in order to foster Web3.
June 2022 Japan enacted one of the world’s earliest stablecoin regulations. The act was made as an amendment to the PSA and the Banking Act.
2022 In 2022, Terra/Luna, Three Arrows Capital, and the FTX Group collapsed. As a result, the global regulatory environment became stricter. Japan, however, had already implemented stringent regulations, which proved effective. (*1) Japan therefore did not need to change its regulations even after these collapses.

(*1) Even in the FTX Group’s bankruptcy, the assets of FTX Japan’s customers were all preserved because the regulations required 100% of users’ assets to be segregated.

June 2023 The stablecoin regulation came into force.
May 2024 DMM Bitcoin was hacked, resulting in a loss of approximately JPY 48.2 billion worth of Bitcoin. However, we have not seen any regulatory tightening in response to this incident at this stage.

II. Cryptoasset, NFT, and Stablecoin Regulation 

 1. Definition of Cryptoassets

The PSA defines cryptoassets as property value with the following elements:

(i) which is recorded by electronic means and can be transferred by using an electronic data processing system,
(ii) which can be used in relation to unspecified persons for the purpose of paying consideration for the purchase or leasing of goods, etc. or the receipt of provision of services and can also be purchased from and sold to unspecified persons acting as counterparties, and
(iii) excluding the Japanese currency, foreign currencies, currency-denominated assets, and Electronic Payment Instruments.
Electronic property value, which can be mutually exchanged with the above assets, also falls under the category of cryptoassets.
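
The three-element definition above can be modelled, purely for illustration, as a set of boolean checks. This is an informal sketch of the statutory test, not legal analysis; the field names are assumptions made for this example only.

```python
from dataclasses import dataclass

# Illustrative sketch only: the three-element PSA cryptoasset test
# described above, modelled as boolean checks. Field names are
# assumptions for this example, not statutory terms.

@dataclass
class PropertyValue:
    electronically_recorded: bool          # element (i)
    usable_with_unspecified_persons: bool  # element (ii)
    is_fiat_or_currency_denominated: bool  # exclusion in (iii)
    is_electronic_payment_instrument: bool # exclusion in (iii)

def is_cryptoasset(v: PropertyValue) -> bool:
    """Returns True if both positive elements are met and no exclusion
    in element (iii) applies."""
    if v.is_fiat_or_currency_denominated or v.is_electronic_payment_instrument:
        return False
    return v.electronically_recorded and v.usable_with_unspecified_persons

# Example: a typical exchange-traded token
token = PropertyValue(True, True, False, False)
print(is_cryptoasset(token))  # True
```

Note that, as the final sentence above states, electronic property value mutually exchangeable with cryptoassets is also captured, so a real analysis cannot stop at the three elements alone.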

 2. Cryptoasset Exchange Services

(1) Definition

Under the PSA, the Cryptoasset Exchange Service means any of the following acts carried out in the course of trade:

(i) sale and purchase of cryptoassets (i.e., exchange between cryptoassets and fiat currency) or exchange of cryptoassets into other cryptoassets;
(ii) intermediary, brokerage, or agency service for the acts described above (i);
(iii) management (custody) of fiat currency on behalf of the users/recipients in relation to the acts described above in (i) and (ii); and
(iv) management (custody) of cryptoassets on behalf of the users/recipients.
We hereafter refer to a cryptoasset exchange service provider as a “CESP”.

(2) Meaning of “in the course of trade”

Sales and purchases of cryptoassets to Japanese residents are not subject to the regulation unless they are conducted “in the course of trade (gyo to shite)”. An act in the course of trade is generally understood to be a repetitive and continuous act vis-à-vis the public. For example, trading in cryptoassets for one’s own investment purposes or taking custody of cryptoassets of a wholly owned subsidiary are not considered acts in the course of trade.

It should be noted that the fact that your clients are limited to institutional investors does not, by itself, mean that the activity falls outside the course of trade.

(3) Solicitation to Japanese Residents

Whether or not a CESP solicits Japanese residents is also considered an important factor in determining the regulation’s application. The determination of whether solicitation towards residents of Japan is being conducted is made on a case-by-case basis. For instance, actions such as not blocking access to a website from Japan, providing information in Japanese, or introducing products at events in Japan could be considered factors that indicate solicitation towards residents of Japan.

(4) Management of Cryptoassets

A custodian of cryptoassets must obtain the CESP license. According to the FSA guidelines, whether a given service constitutes the management of cryptoassets should be determined based on its actual circumstances. Generally, if a service provider can technically transfer its users’ cryptoassets, the service falls under the management of cryptoassets. If a service provider does not possess any of the private keys necessary to transfer its users’ cryptoassets, it is basically not considered to manage cryptoassets.

Accordingly, wallet services, such as non-custodial wallets, where the users manage the private key on their own, are not considered to constitute the management of cryptoassets.

(5) Intermediary, Brokerage, or Agency Service

An intermediary generally means a factual act that involves efforts to bring about the conclusion of a legal act between two other parties. Brokerage or agency means performing a legal act in one’s own name for the account of, or on behalf of, another person.
With respect to a purchase and sale agreement for cryptoassets between third parties, the acts of (i) soliciting execution of the agreement, (ii) explaining the product for the purpose of solicitation, and (iii) negotiating the terms and conditions fall, in principle, under the category of an intermediary.
The mere distribution of product information materials, etc., may not constitute intermediation and should be considered on a case-by-case basis.

(6) Requirements for the License and Cost

The PSA requires, among other things, minimum capital, financial soundness, a physical office, a sufficient number of personnel on staff, segregation of assets, an annual audit, a customer identity verification system, accountability to users, protection of personal information (including sensitive information), and, where operations are outsourced, retention of oversight authority over the outsourced functions. The service provider must also be equipped with the systems for adequate operation and legal compliance deemed necessary to operate a Cryptoasset Exchange Service appropriately and securely. Although the applicant must have a minimum capital base of at least JPY 10 million and must not have negative net assets, in our experience the cost of obtaining the license and starting an internet exchange business can exceed JPY 1 billion.

(7) Exchange M&A

We are often asked by companies interested in entering the Japanese crypto market whether they can start their business by acquiring an already licensed CESP rather than obtaining a new license. The answer is yes. From a regulatory perspective, a change of major shareholders requires only an ex post notification, so you can start your business after purchasing an already licensed CESP.

The major issue is that the purchased CESP must satisfy governance and compliance standards similar to those required of a newly licensed exchange. If one purchases a cheap CESP that merely holds a license but has not actively conducted business, reaching these standards can be difficult and time-consuming. Furthermore, if you wish to change the business model or systems of the purchased CESP, you must provide an explanation to, and obtain approval from, the FSA. The cost of purchasing a licensed CESP, combined with these additional expenses, can sometimes be comparable to the cost of obtaining a new license. Therefore, careful consideration is necessary.

 3. Crypto Exchange’s Obligations

(1) Management of Users’ Property

The PSA requires users’ cryptoassets to be segregated from the CESP’s own assets. Further, the CESP must keep (i) at least 95% of users’ cryptoassets in cold wallets and (ii) an amount of its own cryptoassets, also in cold wallets, equivalent to the portion of users’ cryptoassets (up to 5%) kept in hot wallets. As a consequence, the CESP holds the equivalent of 100% of users’ cryptoassets in cold wallets.
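The arithmetic behind the cold-wallet rule can be sketched as follows. This is a purely illustrative calculation with hypothetical figures; the function name and the 95% floor encoded here reflect our reading of the rule described above, not an official formula.

```python
def required_cold_wallet_holdings(user_assets: float, user_cold_ratio: float = 0.95):
    """Illustrative arithmetic for the cold-wallet rule (hypothetical helper).

    The CESP keeps at least 95% of users' cryptoassets in cold wallets, plus an
    amount of its OWN cryptoassets in cold wallets equal to the portion of users'
    assets held in hot wallets, so cold storage totals 100% of user assets.
    """
    if not 0.95 <= user_cold_ratio <= 1.0:
        raise ValueError("at least 95% of users' assets must be in cold wallets")
    user_cold = user_assets * user_cold_ratio        # users' assets in cold wallets
    hot_portion = user_assets - user_cold            # users' assets in hot wallets
    own_cold = hot_portion                           # CESP's own assets set aside in cold wallets
    return user_cold, own_cold, user_cold + own_cold # total cold storage == user_assets

# Example: 1,000 BTC of user assets with the minimum 95% in cold wallets
# → (950.0, 50.0, 1000.0): total cold-wallet holdings equal 100% of user assets.
```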

With respect to fiat currency, the CESP shall deposit its users’ fiat currency in a bank account under a different name from where the CESP deposits its own funds.

A CESP must undergo an annual audit of its financial statements and segregation of assets.

(2) Anti-Money Laundering

Japan’s anti-money laundering law requires CESPs to conduct know-your-customer (KYC) checks on users. Stricter anti-money laundering regulations came into effect on June 1, 2023. Under the new Travel Rule, when assets over a certain amount are sent by a user, the receiving and sending CESPs must share information about the users involved. The lack of interoperability among such information-sharing systems has prevented users from sending and receiving cryptoassets between some CESPs.

 4. DEX

The regulations applicable to decentralized exchanges (DEX) are not clear. There is an argument that the regulations do not apply to exchanges that are completely decentralized and have no administrator at all, as there is no entity subject to crypto regulations. However, it is necessary to consider carefully whether there is truly no administrator. Further, entities that provide access software to a DEX may be subject to the regulations as intermediaries.

As stated later in section III. 1, the sale of cryptoassets issued by oneself is subject to crypto regulations. Providing liquidity to a DEX for cryptoassets issued by oneself may also be considered a sale of those cryptoassets.

 5. NFT

Pure NFTs, such as trading cards and in-game items recorded on blockchains that do not function as payment instruments, are not considered cryptoassets. The FSA states that the distinction between cryptoassets and pure NFTs is as follows:

(i) the issuer of the NFTs prohibits their use as a payment instrument by technical feature or by agreement; and
(ii) the quantity and price of the NFTs are not suitable as a payment instrument (specifically, one NFT costs more than ¥1,000 or the total number of the NFTs issued is less than 1 million).

Generally speaking, pure NFTs are not regulated in Japan. Note, however, that whether NFTs qualify as “pure” NFTs requires careful analysis. For example, if an NFT provides a dividend or other economic benefit, it might be considered a security. Further, an NFT linked to real-world assets might require analysis of whether the regulations applicable to the underlying assets apply.
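The FSA’s criteria above can be expressed as a first-pass screen. This is a hypothetical helper for illustration only: it treats the two factors cumulatively, whereas the actual determination is made case by case and may weigh additional circumstances (such as the security or real-world-asset issues just noted).

```python
def may_be_pure_nft(price_jpy: int, total_issued: int,
                    payment_use_blocked: bool) -> bool:
    """Illustrative screen based on the FSA's stated distinction (hypothetical helper).

    An NFT is more likely a "pure" NFT (i.e., not a cryptoasset) when:
      (i)  its use as a payment instrument is blocked technically or by agreement, and
      (ii) its price and quantity make it unsuitable as a payment instrument
           (one NFT costs more than JPY 1,000, or fewer than 1 million are issued).
    """
    unsuitable_as_payment = price_jpy > 1_000 or total_issued < 1_000_000
    return payment_use_blocked and unsuitable_as_payment

# A JPY 5,000 trading-card NFT with a 10,000-unit supply and payment use blocked
# passes the screen; a JPY 100 NFT with 2 million units issued does not.
```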

 6. Stablecoins

Japan was one of the first countries in the world to establish stablecoin regulations. Stablecoins pegged to fiat currency are defined as electronic payment instruments, and offering related services requires a license different from the CESP license.

Other stablecoins that adjust their value through algorithms could be regulated as cryptoassets or securities. Stablecoins classified as cryptoassets are subject to crypto regulations, while stablecoins classified as securities are subject to securities regulations (FIEA).

III. Crypto Financing

 1. ICO, IE

ICO (Initial Coin Offering) is an act of issuing and selling tokens to raise fiat currency or crypto assets from the public. ICO is regulated in Japan. The applicable regulations depend on the legal nature of the issued tokens. If the tokens are considered securities, the token issuance will be regulated by the FIEA. If the tokens are considered cryptoassets, the token issuance will be regulated by the PSA.

The issuance of new cryptoasset-type tokens in Japan is generally done by IEO (Initial Exchange Offering). An IEO is an act of raising fiat currency or cryptoassets by entrusting the sale of tokens to a licensed CESP. In an IEO, if the issuer itself does not conduct sales activity, the issuer does not need a crypto exchange license. If, however, the issuer itself wants to conduct sales activity for its new tokens, it needs a crypto exchange license, which requires significant cost and time compared to an IEO. Several IEO projects have already been launched in Japan.

The IEO process requires examinations by the exchange, JVCEA, a Japanese self-regulatory organization, and the FSA. The examination checks the feasibility of the project for which the funds will be used, the financial soundness of the issuer, and other factors.

 2. SAFT, SAFE

A SAFT (Simple Agreement for Future Tokens) is a way to raise funds in exchange for the right to purchase tokens to be issued in the future. A SAFT targeting Japanese residents is considered to be subject to fund regulation or crypto regulation, depending on the legal nature of the agreement. Neither regulation applies, however, unless the act is conducted in the course of trade, so it may be argued that entering into a SAFT with a limited number of specific persons, such as business partners who will contribute to developing the project, should not be regulated.

A SAFE (Simple Agreement for Future Equity) with a token warrant is subject to general equity investment regulations, depending on the attributes of the entities and investors involved.

Japanese entities sometimes use J-KISS, a Japanese convertible equity, with a side letter that provides tokens.

IV. Crypto Staking

Generally speaking, we believe staking services for PoS tokens are not regulated in Japan. For example, staking one’s own cryptoassets or becoming a validator for ETH is not regulated in Japan.

Not all staking services, however, are exempt from regulation. If a service provider manages the private keys of users’ cryptoassets (we understand some exchanges provide such services), custody regulation may apply. If, in addition to managing private keys, the service provider distributes staking rewards, and passes on slashing penalties, to users, fund regulations might apply.

We understand there are some NFT projects in which NFTs are sold for crypto and purchasers can stake the NFTs to earn rewards. Fund regulation might apply to such cases, especially where the staking serves no actual function in providing security to the network.

V. Crypto Lending

In crypto lending services, a service provider borrows cryptoassets from users for a certain period of time and pays a lending fee in exchange. No regulation applies to such lending because the Money Lending Business Act regulates the lending of money and does not deem cryptoassets to be money. (*2)

It should be noted that crypto custody regulations may apply in cases where the service is considered as custody, not lending, even if a service is titled as crypto lending. One factor that distinguishes lending and custody is whether users can withdraw their assets at any time (deposit) or whether there is a specific required time of non-withdrawal (lending).

VI. Crypto Mining

Mining cryptoassets requires large amounts of electricity. Thus, mining appears to be regulated in some countries, such as Kazakhstan. (*3) Regulation of mining in certain areas of Russia is also being discussed. (*4) In Japan, mining itself is not regulated.

A business that collects money from the public to conduct mining operations and then distributes the proceeds from mining to the customers may be regulated under the FIEA as a fund.

Schemes in which one sells mining machines, accepts deposits of the machines, and promises to pay fees based on the mining results may also be regulated under the Act on Deposit Transactions. If that Act applies, the business must obtain confirmation from the Prime Minister, but obtaining such confirmation is said to be nearly impossible. Structuring the scheme so that this regulation does not apply is therefore important.

VII. Crypto Taxation

 1. Tax on Individuals

Profit generated by cryptoasset transactions by individuals is, in principle, classified as miscellaneous income. Miscellaneous income is income that is none of the following: interest income, dividend income, real estate income, business income, employment income, retirement income, forestry income, transfer income, or temporary income. The tax rate for miscellaneous income ranges from 5% to 45%, depending on the amount of total income. The maximum rate is about 55% when income tax is combined with residential tax and the special reconstruction income tax.
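The “about 55%” headline figure can be reproduced with a rough calculation. The rates below are assumptions for illustration: the commonly cited top marginal national income tax rate of 45%, a residential tax of roughly 10%, and the special reconstruction surtax of 2.1% levied on the income tax amount; actual liability depends on the taxpayer’s full circumstances.

```python
# Rough arithmetic behind the "about 55%" maximum rate on miscellaneous income.
# All rates are illustrative assumptions, not tax advice.
TOP_INCOME_TAX = 0.45          # assumed top marginal national income tax rate
RESIDENTIAL_TAX = 0.10         # assumed flat residential (local inhabitant) tax
RECONSTRUCTION_SURTAX = 0.021  # surtax levied on the income tax amount, not on income

max_effective_rate = TOP_INCOME_TAX * (1 + RECONSTRUCTION_SURTAX) + RESIDENTIAL_TAX
print(f"{max_effective_rate:.3%}")  # prints 55.945% — i.e., "about 55%"
```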

 2. Tax on Corporations

Profit generated by cryptoasset transactions is subject to corporate tax of approximately 30%, depending on the amount of income and the size of the company.
Cryptoassets for which there is an active market must be valued using the mark-to-market method at the end of the fiscal year, and the resulting gains are taxable even if the company does not sell the assets.
This taxation of unrealized gains became a major issue in Japan, and many Web3 companies left the country.

In 2022, the Japanese government decided to reform this unrealized gain tax: the tax is now not levied if the issuing company continues to hold its own tokens subject to certain technical transfer restrictions.

In 2023, another tax reform was proposed and approved by the government. Under this reform, the unrealized gain tax does not apply if a company holds tokens subject to certain transfer restrictions, even where the tokens were issued by other entities (including Bitcoin, Ether, etc.).

Disclaimer

The content of this article has not been verified by the relevant authorities or organizations mentioned herein and represents only a reasonable interpretation of their statements. Our interpretation of laws and regulations reflects our current understanding and may change in the future. This article is not intended to be legal advice and provides a summary for discussion purposes only. If you need legal advice on a specific topic, please feel free to contact us.
