Recently, I visited the “Flying Car Station” at the Osaka Kansai Expo and experienced the flying car exhibits. (Reference: https://www.expo2025.or.jp/future-index/smart-mobility/advanced-air-mobility/)
Figure 1: A realistic flying car

While the pavilion is open to visitors without reservations, those who reserve in advance can board a parked flying car and watch a video of it flying like a taxi from Yumeshima to Mount Koya or Awaji Island. On certain mornings, a flight demonstration of the actual aircraft is also held at a separate location within the venue (when I visited, the demonstration flight was followed by a Q&A session in which the president of SkyDrive explained the aircraft). Even before the event, there were many negative comments about flying cars, such as “it doesn’t seem realistic,” “it’s a waste of tax money,” and “this isn’t a car,” but the exhibits were very easy to understand and provided a concrete image of what future society might look like. At the very least, it was an experience that made me feel that this is not a dream; it could be a reality in the near future.
Although the exhibition’s catchphrase was “There’s no traffic jam in the sky,” the reality is not so simple. Under the Aviation Act and the so-called Drone Act, many airspaces are off-limits or severely restricted: densely inhabited districts (DIDs), the vicinity of airports, and the area around important facilities. Under the current system, freely flyable airspace is therefore extremely limited. Furthermore, air traffic control governs the airspace that remains, and direction by controllers is essential for the safe flight of commercial aircraft, helicopters, eVTOLs, and drones. In fact, holding patterns frequently occur over Haneda Airport during peak hours; air traffic is subject to both physical and institutional limits. For low-altitude operations within cities, the development of dedicated routes and UAS traffic management (UTM) systems will be essential.
While the technology is becoming a reality, there are many institutional challenges to truly bring flying cars to fruition, such as airworthiness certification under the Aviation Act, operator liability, and standards for installing vertiports in urban areas.
In recent years, the Ministry of Land, Infrastructure, Transport and Tourism has been making successive revisions to government ordinances and technical standards, including the 2023 amendment to the Enforcement Regulations of the Aviation Act, the vertiport maintenance guidelines, and the publication of the Next Generation Air Mobility Operation Guidelines in 2025.
However, a fundamental system design has yet to be reached, and many areas remain legally uncertain and gray areas.
In this article, we will organize and examine flying cars by comparing them with the future visions we all imagine and science fiction works, and current laws.
| This article is part of a series in which I consider future legal systems from the perspective of a lawyer, inspired by the exhibitions at the Osaka Expo. Previous articles: Is the Android ‘Me’ the Same Person? - Future Legal Systems Contemplated at Osaka Kansai Expo 2025; Who Would Be the Judge if a Murder Were to Occur in an Orbital Elevator? |
Flying cars have long been a familiar feature in science fiction works, but their appearance varies, with each work depicting a different vision of society and technology.
In the 1989 film “Back to the Future Part II,” there is a memorable scene in which a DeLorean flies through the sky in the future of 2015. Here, the flying car is depicted as a personal vehicle, presenting an ideal future in which anyone can travel freely through the sky.
In the 1982 film Blade Runner, a flying car is depicted as a police vehicle, and there is a memorable scene in which it flies between skyscrapers. In this scene, the sky is not a public space, but functions as a domain controlled by power.
In the 1997 film The Fifth Element, flying cars are commonplace among civilians, urban spaces have multi-dimensional transportation systems, there are even traffic lights in the sky, and “air traffic jams” are a part of everyday life.
In the 1995 film Ghost in the Shell, a helicopter-type hovercar appears as a means of transportation for Public Security Section 9. Flying cars are not just a means of transportation but are positioned as part of the city surveillance infrastructure.
Figure 2: A sci-fi flying car

What is interesting is that these works all barely address questions such as “who controls the skies” or “what legal rules govern flight.” The “free movement in the skies” depicted in science fiction is appealing, but in reality, strict airspace management and aviation legislation exist. If anything, the question of “who owns the skies” is at the forefront of institutional design in modern society.
The term “flying car” catches the eye, but the aircraft currently being developed are not the ones you might imagine from science fiction, such as the DeLorean in Back to the Future Part II. They have no wheels and do not drive on roads; the familiar word “car” is used because the aim is an everyday, on-demand transportation service that anyone can reserve.
What exactly constitutes a flying car has not been finalized, but in documents from the Ministry of Land, Infrastructure, Transport and Tourism, flying cars are often defined as “electric, automated vertical take-off and landing aircraft (eVTOL)” and have the following characteristics:
These characteristics make flying cars different from conventional helicopters and drones.
| Classification | Propulsion | Control | Takeoff and landing | Main uses | Legal treatment |
| Flying car (eVTOL) | Electric | Automation planned | Vertical takeoff and landing | Intra-city transportation, aerial taxis | Treated as an aircraft under the Aviation Act; detailed rules still being designed |
| Helicopter | Internal combustion engine | Manned pilot | Vertical takeoff and landing | Government, news, emergency services | Regulated by the Aviation Act |
| Drone | Electric | Unmanned (remote) | Vertical takeoff and landing | Photography, logistics, surveying | Regulated as an unmanned aircraft under the Aviation Act |
Flying cars are vehicles that are small and lightweight, like drones, and capable of vertical takeoff and landing, but also have the ability to transport people like helicopters. In that sense, they can be described as a “hybrid entity” that cannot be captured by traditional classifications.
Technically, the term eVTOL (electric Vertical Take-Off and Landing aircraft) is sometimes used, but there are currently no definitions of “flying cars” or “eVTOL” in Japan’s Aviation Act.
Also, although the word “car” is used, it is not a car, so it is not subject to the Road Transport Vehicle Act, and the automobile license and vehicle inspection systems do not apply. Conversely, because it is different from airplanes and helicopters, it does not fit completely within the framework of the existing Aviation Act.
Flying cars may still seem like futuristic vehicles, but the technology is already at a practical stage: companies in Japan and overseas are developing actual aircraft, conducting test flights, and running pre-commercial operations.
In the United States and Europe, efforts are accelerating toward the practical application of urban air mobility (UAM) based on eVTOL (electric vertical take-off and landing) aircraft.
In Japan, efforts are underway to commercialize flying cars, spurred by the Osaka-Kansai Expo.
The biggest obstacle to flying cars is not technology, but systems. As they are aircraft that fly in the air, they require a wide range of legal infrastructure, including aviation laws, aircraft manufacturing standards, safety certification, operation management, pilot qualifications, and standards for the establishment of takeoff and landing sites.
The Aviation Act encompasses conventional aviation, including fixed-wing and rotary-wing aircraft, but there is no regulatory design in place to accommodate eVTOLs, which operate frequently at low altitudes in cities, or automated/remotely piloted aircraft, leaving a gap in current legislation.
The central law that regulates Japan’s skies is the Aviation Act. However, the Aviation Act was originally designed to accommodate fixed-wing aircraft that take off from runways and fly at high altitudes, and helicopters with limited uses, creating a mismatch with low-altitude, short-distance, and frequent flying vehicles like flying cars (eVTOL). Currently, flying cars are classified as “aircraft” under the Aviation Act and require permission from the Ministry of Land, Infrastructure, Transport and Tourism, but the system has yet to catch up on the following points:
Some may wonder: “Since they fly, won’t they be regulated the same way as drones?”
Drones are also subject to strict controls, including registration, remote ID, permits and approvals. However, the focus of the system design is on “unmanned transport of goods,” while flying cars, which “transport people with pilots,” have fundamentally different requirements and scope for type/airworthiness, crew qualifications, and airspace capacity management.
In the development of flying cars, one unavoidable issue is the question of “Who is responsible if an accident occurs?” This is a core issue that is directly linked to the construction of the entire legal system, including where responsibility lies, the licensing system, and the insurance system.
Many of the eVTOL aircraft currently being developed are intended to be autonomously or remotely piloted in the future, but in the initial stages, they are primarily intended to be piloted by humans.
If flying cars become autonomous in the future, the possible responsible parties are as follows:
For example, if an AI system chooses the wrong route during autonomous flight and the aircraft crashes, the manufacturer, the software developer, the air traffic control system operator, and/or the aircraft owner may all be held liable. This is a complex, multi-party issue fundamentally different from the driver-centered liability of automobiles.
Let’s imagine something more concrete. If a flying car flying over Shinjuku suddenly crashes due to a system failure, causing damage to buildings and pedestrians on the ground, compensation could run into the billions or even tens of billions of yen. Would the manufacturer, the operator, or multiple parties be held responsible? There is no clear answer under the current legal system.
If flying cars become commonplace, the arteries of cities will shift from the ground to the sky. Instead of train stations, vertiports will be installed on the rooftops of high-rise buildings and shopping malls, creating a new common sense that “rooftops = entrances.” Air route nodes will also be established in large suburban facilities and hospitals, rewriting the very value map of cities.
Figure 3: The rooftop will become a station

The first use of this technology is expected to be short-distance travel within cities. At the SkyDrive Q&A session I attended, it was explained that “the current flight time is about 10 minutes, with a goal of 15-20 minutes in the future. The range will be 30-40 km, and the fare from Yumeshima to Shin-Osaka will be 10,000-20,000 yen one way, with the ultimate goal being about three times faster than a taxi at about twice the price.” If it is indeed three times faster than a taxi for about twice the price, that certainly sounds appealing. As a new means of transportation unconstrained by traffic jams, it could expand the possibilities of urban life.
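Taken at face value, the session’s figures allow a quick back-of-envelope check of what the premium actually buys. A minimal sketch in Python, where the taxi speed and fare are my own rough assumptions for comparison, not figures given at the session:

```python
# Illustrative comparison of the figures quoted at the SkyDrive Q&A.
# The taxi speed and fare below are hypothetical assumptions for Osaka
# city traffic, introduced only for this calculation.

DISTANCE_KM = 30           # Yumeshima to Shin-Osaka, roughly
TAXI_SPEED_KMH = 30        # assumed average taxi speed in traffic
TAXI_FARE_YEN = 7500       # assumed one-way taxi fare (hypothetical)

taxi_minutes = DISTANCE_KM / TAXI_SPEED_KMH * 60   # 60 min by taxi
evtol_minutes = taxi_minutes / 3                   # "three times faster" -> 20 min
evtol_fare_yen = TAXI_FARE_YEN * 2                 # "twice the price" -> 15,000 yen

minutes_saved = taxi_minutes - evtol_minutes       # 40 min saved
premium_yen = evtol_fare_yen - TAXI_FARE_YEN
yen_per_minute_saved = premium_yen / minutes_saved

print(f"Time saved: {minutes_saved:.0f} min")
print(f"Premium paid: {premium_yen} yen")
print(f"Cost per minute saved: {yen_per_minute_saved:.0f} yen/min")
```

On these assumptions, each minute saved costs a little under 200 yen, which is one way to frame the question below of whether air taxis become mass transit or remain a premium service.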
On the other hand, in the early stages of introduction, the costs of aircraft, batteries, insurance, and takeoff and landing sites will likely increase, leading to higher fares. The number of flights will also be limited, and reservations will be required. Furthermore, surge pricing (fare increases) will occur during peak times, raising concerns that this will ultimately become a means of transportation that only the wealthy can use to buy time.
Vertiports require multiple standards, including evacuation routes and noise control. As the value of areas in front of stations weakens, urban planning to utilize rooftops as “sky station areas” becomes more realistic. We can see a future in which the rooftops of high-rise apartment buildings become departure and arrival points, changing the very structure of cities.
However, who can enjoy these benefits depends on the system’s design. If fares remain high, a new mobility gap will emerge between those who can use the air and those who cannot. Conversely, if it is incorporated into a public transportation system, it may develop into an infrastructure that allows for more equitable sharing of time. We are at a crossroads in the future, between “division” and “sharing.”
Flying cars are becoming a technological reality, but the legal system has yet to catch up. The path we can choose from can be broadly divided into three categories.
The key challenges we face are clear.
Will flying cars become “highways for the wealthy only,” or “public spaces that anyone can use”? The shape of the future will change dramatically depending on how the system is designed.
And this system will not be “decided by someone,” but will be shaped by the accumulation of consensus building across society. Just as trains and automobiles have done, flying cars may one day completely change our lives.
How would you design this future?
“If a child is born in space, what nationality does the child take on? If a murder were to occur in an orbital elevator, who is responsible? If there is a labor dispute on a space colony, which labor laws would be applied?”
At first glance, it may seem like a science fiction story, but it may be a “future reality” that is right around the corner from us. Last time, inspired by Professor Hiroshi, I wrote a blog post titled “Is the Android ‘Me’ the Same Person?” (https://innovationlaw.jp/en/android-law/)
This time, I visited the Gundam Pavilion (https://www.expo2025.or.jp/domestic-pv/bandai-namco/). Gundam is a monumental science fiction anime franchise that depicts warfare using mobile suits (piloted humanoid machines) and humanity’s expansion into space. In this fictional world, characters fight battles in their own high-tech mobile suits.
The Expo Pavilion depicts a peaceful future where mobile suits are used for construction, agriculture, and space debris collection. Visitors of the Pavilion have a virtual experience of riding an orbital elevator from Yumeshima in Area 7 (Gundam terminology for Earth) to a space colony.
While experiencing this, I was thinking about the following: “In the exhibit, it takes only a short time to reach space, but in reality, it would take days. If something were to happen during that time, which laws would apply?
And to begin with, is an orbital elevator a vehicle? Or a building?
In the world of Gundam, space colonies are independent of Earth, but if they were connected to the ground, whose territory would it be?”
In my previous blog, I questioned what the law should be like in a future where the boundaries of humanity become blurred. In this article, I would like to attempt a thought experiment from a legal perspective on a future where the boundaries of space become blurred, namely, in space, regarding which country can reach whom and how.
Figure 1: Orbital elevator and space colony (AI-generated)

Imagine a birth taking place in a space elevator. Labor begins 10,000 kilometers above Earth; the baby is born at an altitude of 20,000 kilometers.
Before deciding on the child’s nationality, the first thing we need to consider is, “Where is the elevator built?” In fact, space elevators have some surprising physical constraints. Due to the geostationary orbit, they can only be built directly on the equator. In other words, they are physically impossible to build in a place like Japan. They can only be built in countries directly on the equator, such as Ecuador, Kenya, Indonesia, Brazil, and the Congo (this point is explained in the pavilion).
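The equator-only constraint comes straight from orbital mechanics: the elevator’s counterweight must sit in geostationary orbit, which exists only in the equatorial plane. As a minimal sketch (standard physical constants and Kepler’s third law, nothing specific to any proposed design), the altitude of that orbit can be computed as follows:

```python
import math

# Geostationary orbital radius from Kepler's third law:
#   r = (mu * T^2 / (4 * pi^2)) ** (1/3)
# A satellite at this radius, orbiting in the equatorial plane,
# stays fixed over one point on the equator.

MU_EARTH = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY_S = 86164.0905    # one rotation of the Earth, seconds
EARTH_RADIUS_KM = 6378.137     # equatorial radius, km

r_m = (MU_EARTH * SIDEREAL_DAY_S**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = r_m / 1000 - EARTH_RADIUS_KM

print(f"Geostationary altitude: {altitude_km:,.0f} km")   # ~35,786 km
```

The result, roughly 35,786 km, is the source of the “35,000 km” round figure that appears throughout discussions of space elevators.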
This is where an interesting (and complicated) structure arises.
It seems likely that the countries with the technology and funds to build a space elevator are primarily the United States, European countries, China, and Japan. However, it is the countries along the equator that have the physical space to build one. This means that there is inevitably a separation between “countries with the technology” and “countries that provide the land.”
Going back to the birth example from the beginning, if the United States had built a space elevator in Ecuador:
Table 1: Structure of space elevators and jurisdiction boundaries

A space elevator is more than just a transportation facility. As the only “gateway” connecting Earth and space, it will be an extremely important strategic infrastructure in terms of politics, economy, and security.
Because logistics and communications between Earth and space will be concentrated at this single point, the country that controls the elevator will have an overwhelming advantage in the space economy. It will also be in a position to effectively control activities in outer space.
This situation could potentially give rise to a serious international issue known as “orbital superiority” in real-world space development.
So how should the equatorial countries, which are geographically capable of hosting an elevator, cooperate with the countries that have the technology? The Panama Canal, built by the United States in Panama in the early 20th century, is often cited as a reference.
At the time, the United States acquired control of the Canal Zone from Panama under a treaty granting it rights in perpetuity, effectively giving it sovereignty and military control (the canal was not returned to Panama until the end of 1999). A similar model is envisioned for space elevators: construction and operation under a long-term lease of land and space.
However, space elevators are not simply terrestrial facilities. They would extend from the Earth’s surface some 35,000 km into outer space. A simple land lease would not suffice; the contract would also need to cover access to territorial airspace, undefined airspace, and outer space, likely making it one of the lengthiest and most complex legal agreements in history.
Currently, several alternatives are being considered in legal research on space elevators.
The Japan Space Elevator Association and others have proposed building it above the sea directly under the equator, avoiding territorial disputes. However, maritime law does not anticipate use in the airspace, creating new legal challenges. Organizations such as the Japanese Society of Aeronautics and Astronautics have also proposed building and operating it through an international consortium of multiple countries. This model would operate space infrastructure through a multinational institutional design, similar to the International Space Station, while avoiding the monopoly of any single nation.
In any case, the physical constraints of where a space elevator can be built dictate who and how it can be legally operated. The technological constraints themselves are driving the design of new international institutions. In the next section, we will delve into the legal issues that arise in the “space” itself, through which this elevator will travel—that is, in airspace, outer space, and the undefined areas in between.
A murder occurred on a space elevator. The suspect was arrested, but the crime occurred 10,000 kilometers above Earth. The question that arises here is, “Whose laws apply to this space?”
In fact, there is no clear answer to this question. This is because the space elevator is designed to travel through 35,000 kilometers of space, where it is unclear whose sovereignty extends and whose territory it is.
The uniqueness of a space elevator is similar to that of a transcontinental railroad. Just as the applicable laws change whenever a railroad crosses a border, the legal jurisdiction of a space elevator also changes as it ascends.
However, there is a crucial difference. With a railroad, the laws change at the “line” of a national border, but with a space elevator, the boundaries themselves are unclear, as to which country’s laws begin and end.
With an airplane, the laws of a single country apply throughout the flight. In contrast, although a space elevator is a single structure, its legal world changes gradually as one moves vertically from the ground through airspace into outer space, making it an entity unlike anything previously seen under international law.
Sovereignty extends to the altitude at which passenger aircraft fly – roughly 10 to 12 km. As for the stratosphere and mesosphere (12 to 100 km) above that, the situation is vague, with some saying it is “probably territorial airspace.”
The 1967 Outer Space Treaty stipulates that “outer space has no sovereignty.” However, there are problems here as well.
To begin with, it has not been decided where outer space begins.
This ambiguity is a fatal problem for structures that continuously connect the ground and space, such as a space elevator.
The space elevator is a single continuous structure, but the space it travels through is:
Table 2: Scope of laws applicable at each altitude (conceptual)
| Altitude range | Legal nature | Current laws that may apply |
| Ground – 12 km | Clearly territorial airspace | Criminal and civil laws of the country where the facility is located |
| 12 km – 50 km | De facto territorial airspace | Laws of the country where the facility is located (approximately) |
| 50 km – 100 km | Undefined airspace | Unknown |
| Above 100 km | Outer space | Outer Space Treaty + laws of the elevator’s state of registry |
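To make the ambiguity concrete, the conceptual table can be written as a toy lookup. The altitude boundaries below are the article’s rough illustrative figures, not settled international law, and the regime labels are my own paraphrases:

```python
def legal_regime(altitude_km: float) -> str:
    """Toy mapping of altitude to the conceptual regimes in Table 2.
    Boundaries (12/50/100 km) are illustrative figures only, not
    positions established in international law."""
    if altitude_km < 12:
        return "territorial airspace: host country's criminal and civil law"
    if altitude_km < 50:
        return "de facto airspace: host country's law (approximately)"
    if altitude_km < 100:
        return "undefined zone: unknown"
    return "outer space: Outer Space Treaty + law of the state of registry"

print(legal_regime(10_000))   # the murder at 10,000 km: clearly outer space
print(legal_regime(75))       # the 'legal vacuum' between 50 and 100 km
```

The point of the exercise is the middle branch: a single continuous structure passes through a band where the function of the law can only return “unknown.”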
In the murder example above, the 10,000 km point is clearly in outer space, so the laws of the country that registered the elevator would likely apply. But what if the crime occurred at an altitude of 100 km? That would be a crime in a legal vacuum.
In reality, it would be impossible to manage a space elevator by dividing it into different altitudes, such as “from here to here it is subject to Country A’s law, and from here to Country B’s law.”
One of the biggest legal challenges in building a space elevator is deciding under which legal framework to treat the structure as a whole. Whether it is managed by a single country, operated by a multinational corporation, or governed by an international organization, the choice will determine the nature of this “legal gateway” to space.
In the next section, we will look at the more complex legal issues that will arise in the space colonies that lie beyond this space elevator.
Mobile suit pilots working on the construction of the outer walls of a space colony have gone on strike, demanding special allowances for dangerous work in space.
Their demands are legitimate. Construction work in space is many times more dangerous than on Earth. However, the question that arises is, “Under whose labor laws should this labor dispute be resolved?”
In fact, to answer this question, it is necessary to know “the nationality of the space colony.” However, the current system for determining the nationality of space facilities is too complex to accommodate the space colonies of the future.
Current space law has a rule known as the “country of registration principle.” The country that launched or commissioned the launch of an artificial object (satellite, spacecraft, or space station) into space becomes its “country of registration,” and that country has jurisdiction and responsibility.
This principle works relatively well for the International Space Station (ISS). Japanese law applies to the Japanese laboratory module “Kibo,” while Russian law applies to the Russian module.
However, future space colonies will not be research facilities where various countries bring their own modules. They will be one large “space city” with integrated social infrastructure, including housing, commercial facilities, hospitals, schools, and factories. The traditional simple rule of “launching country = country of registration” is no longer applicable.
The construction and operation of a space colony is expected to require an extremely complex international system.
For example, funding will come from a joint venture between the European Space Agency, NASA, JAXA, and a private investment fund, construction will be a joint venture between SpaceX (USA), Mitsubishi Heavy Industries (Japan), and Airbus (Europe), components will be launched using rockets from different countries, and final assembly will be carried out unmanned and automatically in orbit.
In this case, how will the strike by the mobile suit pilots at the beginning be handled?
Not knowing which answer is correct is a real problem.
The problem becomes even more complicated when a space colony is physically connected to Earth by a space elevator.
Conventional space facilities “float” in space. However, a colony connected to Earth can also be considered an “extension of ground facilities.” If a labor dispute occurs in a colony connected to an orbital elevator extending from Ecuador, multiple options arise: the laws of the country of registration, the laws of the country of connection, or special international agreements.
What if tens of thousands of people were to live in a space colony, have children, receive an education, work, marry, and grow old there?
What would their “nationality” be?
Gundam depicts a division between “spacenoids,” born in space, and “earthnoids,” born on Earth. While it is fiction, how to handle citizenship, voting rights, and social security for people who were actually born and raised in a colony will be a realistic challenge in designing a system.
Who will protect the rights of space workers? This question will eventually develop into the more fundamental question of “who will protect the rights of space citizens?”
Column: Who will defend the colony if it is attacked? |
| When considering the legal status of space colonies, military and security issues are unavoidable. If a space colony were to suffer a cyberattack or a physical attack, which country would bear responsibility for its defense? The current system provides no answer; here, too, new institutional design is needed. |
Column: Do AI pilots have human rights? (Thought column) |
| At the Gundam Pavilion at the Expo, an AI replicating the thoughts and personality of a famous pilot is featured: a mobile suit appears in a desperate scene, and the AI pilot rescues the audience. Here, I would like to ask a question: does this AI have personhood or human rights? AI learns from past words and actions and imitates “typical” behavior. But this is not the person themselves; it is merely software replicating their “personality.” Under the current legal system, AI is not recognized as having personhood or human rights; it bears no responsibility and is treated merely as property. However, suppose that in the future an AI with self-awareness and the ability to make decisions appears, one that can operate in harsh environments such as radiation and vacuum, and that saves lives in outer space by choosing to “sacrifice itself.” Could we still call it “merely a tool,” or would it be “someone”? Law and ethics in the future may no longer be able to turn a blind eye to this question. |
The future space infrastructure we saw at the Gundam Pavilion at the Expo is by no means science fiction. Orbital elevators are expected to become a reality in the 2050s, and space colonies may become a reality within this century.
However, neither orbital elevators nor space colonies were within the imagination of the 1967 Outer Space Treaty’s framers. Geopolitical inequalities due to physical constraints, ambiguity in the scope of sovereignty, and complex relationships of responsibility—all of these are the result of technological progress outpacing existing legal systems.
For Japan to take the lead in creating legal rules for space development, it is time to make legal preparations before the future we saw at the Expo becomes a reality.
Generative AI refers to artificial intelligence that can automatically generate a variety of content, including images, text, audio, program code, and structured data.
Learning models that have learned large amounts of data through machine learning can easily generate images, music, text, and other content that resembles human creation.
From around 2022, image generation AI such as Midjourney and Stable Diffusion began to spread rapidly in the market, and from early 2023, generative AI specialized in natural language processing, such as ChatGPT and Bing, followed.

Examples of generative AI products include:
| Product name | Field | Product description |
| Midjourney, Stable Diffusion, DALL·E, etc. | Image generation | AI that generates realistic or artistic images from text instructions |
| Artbreeder | Image generation | AI that generates new images from one or more uploaded images |
| Jukedeck | Music generation | AI that generates original, copyright-free music from a specified genre, tempo, mood, etc. |
| Runway ML | Video generation | AI that creates videos from typed text |
| ChatGPT, Bing | Text generation | AI that responds in natural language to natural-language input; conversational agents, automated speech, machine translation, etc. |
| Catchy | Text generation | AI text-creation tool specialized for Japanese |
This article was also created using text generation AI such as ChatGPT. Specifically, we asked ChatGPT questions such as, “I’m thinking of writing a blog about financial institutions and generative AI. Please tell me the outline,” and “Please give me some examples of generative AI products in table format,” and then the output data was ① checked by a human, ② reconstructed by a human, ③ corrected by a human, and ④ added to by a human to finish it.
The data generated by text generation AI still contains many errors and cannot be used as is at present.
Currently, significant corrections and additions are made to the AI output data (i.e., it is not yet enough to eliminate human work), but even now it leads to a considerable improvement in work efficiency, and it is expected that it will become even faster and more accurate in the future.
With the rapid evolution of generative AI, many financial institutions are exploring the possibility of using AI to improve operational efficiency.
For example, financial institutions generally create a huge number of documents, both for customers and for internal use. If generative AI can be used to streamline tasks such as drafting explanatory documents and internal approval documents, it could lead to significant cost reductions. It could also enable new customer services, such as AI-based investment advisory and automated portfolio optimization tools, and act as a sounding board for internal discussions. The answers from a chat AI can likewise be used as a reference for reconsidering business decisions and organizing one’s thoughts.
| Applications of generative AI in the financial sector: (1) improving customer experience and marketing; (2) improving efficiency in customer-facing operations; (3) improving efficiency in internal operations; (4) investment advice and portfolio optimization; (5) risk assessment and fraud detection; (6) supporting discussions |
On the other hand, the use of generative AI may give rise to new ethical and legal issues, such as the following:
| AI and the Emergence of New Problems (1) Bias Issues: In various types of screening, if the data used to train generative AI is biased toward a particular race or region, the AI may output biased results. This could lead to racial or regional discrimination. (2) Privacy Issues: If financial services or products using generative AI require customer personal information, privacy concerns arise. Privacy must also be protected when using information generated by AI. (3) Fraud Issues: Generative AI may be misused for sophisticated fraudulent activities. Examples include fraudulent transactions and phishing to steal personal information. (4) Human Relationship Issues: As generative AI advances in automation, human labor and expertise may become less necessary. This could lead to job fluidity and unemployment. Furthermore, if AI decisions exceed human judgment, humans may become subordinate to AI, potentially shifting decision-making authority from humans to AI. |
As mentioned above, when it comes to generative AI and the financial sector, careful consideration is needed not only of technical issues, but also of ethical and legal issues and their relationship with humans.
Currently, the area in which financial institutions are most actively considering the use of AI is improving operational efficiency.
From what we have heard, financial institutions have been contacting major AI companies in large numbers to ask about using AI to improve operational efficiency, and new implementations are expected to take several months to complete.
For example: (1) AI can streamline the creation of large volumes of documents, such as explanatory materials for customers, contracts, internal approval documents and various records, and applications and reports for regulatory authorities; (2) chat AI can automatically respond to customer inquiries (in text and audio) and collect, record, and digitize their content; (3) large volumes of fictitious transaction data can be created to help detect customer fraud;3 and (4) AI can analyze information such as a borrower's past borrowing history to conduct loan screening.
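The fictitious-transaction idea above can be sketched in a few lines of standard-library Python: generate synthetic normal and fraudulent transactions, then flag statistical outliers. Everything here (the amount distributions, the z-score rule, the threshold) is a hypothetical illustration, not a production fraud model.

```python
import random
import statistics

random.seed(0)  # deterministic for the sketch

def make_synthetic_transactions(n_normal=1000, n_fraud=20):
    """Generate fictitious transactions: normal ones cluster around a
    typical amount, fraudulent ones are unusually large (an assumption
    made purely for illustration)."""
    normal = [{"amount": random.gauss(50_000, 10_000), "fraud": False}
              for _ in range(n_normal)]
    fraud = [{"amount": random.gauss(500_000, 50_000), "fraud": True}
             for _ in range(n_fraud)]
    txs = normal + fraud
    random.shuffle(txs)
    return txs

def flag_outliers(txs, z_threshold=3.0):
    """Flag transactions whose amount deviates strongly from the mean."""
    amounts = [t["amount"] for t in txs]
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [t for t in txs if abs(t["amount"] - mean) / stdev > z_threshold]

txs = make_synthetic_transactions()
flagged = flag_outliers(txs)
```

The point of the synthetic data is that a detector can be developed and tested without touching real customer records; a real system would replace the z-score rule with a trained model.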
One feature of AI use at financial institutions is that they do not rely on open databases such as the one behind ChatGPT, but instead use dedicated databases that supplement such open databases with the institution's own data (via machine learning, etc.).
Using such dedicated databases has the advantage of providing answers that are more relevant to the business and ensuring confidentiality of business operations.
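One common way to realize such a "dedicated database" is retrieval augmentation: rather than retraining the model, relevant internal documents are retrieved at query time and prepended to the prompt. A minimal sketch follows, in which the document store, its contents, and the prompt template are all hypothetical placeholders.

```python
# Hypothetical internal document store (in practice, a vector database
# over the institution's own confidential documents).
INTERNAL_DOCS = {
    "approval-rules": "Internal approval documents require two sign-offs ...",
    "loan-products": "Our housing loan carries a variable rate of ...",
    "privacy-policy": "Customer data is used for research and development ...",
}

def retrieve(query: str, docs: dict, top_k: int = 2) -> list:
    """Naive keyword retrieval: rank documents by word overlap with the
    query. Real systems would use embeddings instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that grounds the model in internal material."""
    context = "\n---\n".join(retrieve(query, INTERNAL_DOCS))
    return (
        "Using only the context below, answer the question.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

prompt = build_prompt("What rate does the housing loan carry?")
```

Only the prompt-assembly pattern is shown; the assembled prompt would then be sent to the institution's vetted model endpoint, which keeps the confidential material within a controlled channel rather than in the model's open training data.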
In order for generative AI to perform machine learning, the AI must be fed various types of company data (i.e., the information is provided to the AI, analyzed, and learned from).
There are two possible options: either consuming the data in-house or providing the information to an external vendor to consume the data, but the data that financial institutions want to consume contains a lot of personal and confidential information, which raises issues regarding the Personal Information Protection Act and confidentiality obligations.
Our current conclusions are summarized below, and we consider each in turn.
| | In-house data use | Outsourcing parts to vendors |
| The privacy policy states the purpose of use of personal customer information as, e.g., "for research and development of new products and services through data analysis, etc." | Within the scope of the purpose of use and permissible | Within the scope of the purpose of use and permissible, provided a confidentiality agreement is concluded with the third-party vendor |
| The privacy policy states the purpose of use of personal customer information only as "to improve services to customers" | Potentially controversial and should be handled carefully; revising the privacy policy is advisable | Same as left |
| No special confidentiality agreement has been signed regarding the use of corporate customer information | The relationship with the naturally arising duty of confidentiality is an issue, but in principle there should be no problem | No problem, provided a non-disclosure agreement is signed with the third-party vendor |
| A special confidentiality agreement has been signed regarding the use of individual or corporate customer information | Depends on the wording of the agreement, but contractually usually difficult | Same as left |
① Personal Information Protection Act and Purpose of Use
When processing data in-house, the question arises as to whether the processing falls within the scope of the purpose of use. The Personal Information Protection Act requires that, when handling personal information, the purpose of use be specified as concretely as possible (Article 17, Paragraph 1 of the Act).4 Unless the individual's consent is obtained, personal information cannot be handled beyond the scope necessary to achieve the specified purpose of use (Article 18, Paragraph 1 of the Act). Furthermore, when personal information is acquired, the purpose of use must be promptly notified to the individual or publicly announced, unless it has already been publicly announced in advance (Article 21, Paragraph 1 of the Act).
If the use of AI falls outside the scope of the previously set purpose of use, the purpose of use must be changed. If the use of AI falls within a scope that can be reasonably deemed to be related to the previously set purpose of use, it is sufficient to notify the individual or make it public (Article 21, Paragraph 3 of the same Act). On the other hand, if the change goes beyond the permitted reasonable scope, the purpose of use must be set again after obtaining the individual’s consent for use with AI.
In addition, if the consent of the individual is required when revising the privacy policy as described above, the provision on the procedure for changing standard terms and conditions under the Civil Code (Article 548-4 of the Civil Code), which allows standard terms and conditions to be changed without consent in certain cases, is not considered to apply.5 Therefore, in the case of online transactions, it is likely that procedures will be implemented such as clearly indicating the changes to the privacy policy in a pop-up window or similar and obtaining customer consent by clicking on the button.
② Specific examples of descriptions of the purpose of use in a privacy policy
For example, consider a case where a privacy policy states only "to improve service to customers" and various personal information is used to improve the efficiency of customer-facing operations. It could be argued that this falls within the stated purpose of use, but from the customer's perspective, one would not expect one's personal information to be used not merely to provide services to oneself but to improve services for customers in general (i.e., to improve business efficiency). If so, the specification of the purpose of use is arguably insufficient, and the purpose of use should be changed.
Next, consider the case where a privacy policy stipulates “for the research and development of financial products and services through market research and data analysis,” and various personal information is used to improve the efficiency of customer-facing operations. In this case, although it is not explicitly stated that the analysis is performed using AI, it is reasonable to expect that some kind of data analysis will be performed using large amounts of customer personal information, and that the results will be used to research and develop financial products and services. Therefore, it is generally safe to assume that use with AI also falls within the scope of the privacy policy’s intended use.
In any case, you will need to consider the specific wording of the privacy policy and the purpose of use, and consult with your legal department.
① Personal Information Protection Act and Third-Party Provision
When providing personal information to other companies, such as vendors, to feed it to AI, in addition to the above, the question of whether or not this falls within the scope of third-party provision arises.
In principle, when providing personal data to a third party, a personal information handling business operator must obtain the consent of the individual (Article 27, Paragraph 1 of the Personal Information Protection Act).
However, if a business operator outsources all or part of the handling of personal data to a third party within the scope necessary to achieve the purpose of use, that outsourcing party is not considered a "third party," and the individual's consent is not required (Article 27, Paragraph 5, Item 1 of the Act). Therefore, if a business operator outsources the task of feeding personal information to an AI in order to build an AI service it provides, this constitutes such outsourcing, and the individual's consent is not required. However, the outsourcer must exercise necessary and appropriate supervision over the outsourcing party to ensure the safe management of personal data (Article 25 of the Act).
Furthermore, even when providing personal data to a specific person for joint use, the consent of the individual is not required if the individual is notified in advance of the joint use and of certain information stipulated by the Personal Information Protection Act, such as the items of personal data involved, or if that information is made readily accessible to the individual (Article 27, Paragraph 5, Item 3 of the Act). Joint use may occur, for example, when AI that uses personal data is shared among group companies.
② Specific examples of providing personal data to vendors as part of outsourcing:
As a specific example of outsourcing that does not constitute third-party provision, for example, if the purpose of use in a privacy policy is clearly stated as “for the research and development of financial products and services through market research and data analysis, etc.”, providing personal data to a third-party external vendor for analysis using AI could also be interpreted as “in connection with a business outsourcing all or part of the handling of personal data to the extent necessary to achieve the purpose of use.”
③ Conclusion of a confidentiality agreement
Even if the Act on the Protection of Personal Information allows for the provision of personal data to a third party, it states that “the trustor must exercise necessary and appropriate supervision over the trustee to ensure the safe management of personal data (Article 25 of the Act),” so naturally a contract imposing a confidentiality obligation on the third-party vendor will be necessary.
Even if the stated purpose of use of the acquired personal information does not include analysis using AI, by processing the personal information into anonymously processed information before feeding it to AI, it can be used for other purposes, or provided to third parties, without the individual's consent.
Here, anonymously processed information means "information about an individual obtained by processing personal information in a certain way so that a specific individual cannot be identified, and the personal information cannot be restored" (Article 2, Paragraph 6 of the Act). However, when personal information is processed into anonymously processed information, the processing must follow the standards set forth in the rules of the Personal Information Protection Commission (Article 43, Paragraph 1 of the Act; Article 34 of the Enforcement Regulations of the Act): deleting all or part of descriptions that can identify a specific individual, deleting all personal identification codes, deleting codes that link the personal information with the processed information, deleting peculiar descriptions, and so on. Such processing is often thought to be difficult.
Therefore, it may be possible to use pseudonymized information instead, which does not require processing as advanced as that for anonymously processed information. Pseudonymized information refers to "information about an individual obtained by processing personal information in a manner that makes it impossible to identify a specific individual without comparing it with other information" (Article 2, Paragraph 5 of the Act). Because pseudonymized information requires less extensive processing than anonymously processed information, it has the advantage of retaining more of the usefulness of the personal information. Furthermore, unlike unprocessed personal information, its purpose of use can be changed beyond the scope reasonably recognized as related to the previous purpose of use (Article 41, Paragraph 9 of the Act). However, unlike anonymously processed information, provision of pseudonymized information to third parties is prohibited in principle (Article 41, Paragraph 6 of the Act).
| | Raw personal information | Pseudonymized information | Anonymously processed information |
| Processing | None | Processed so that a specific individual cannot be identified without comparison with other information | Processed so that a specific individual cannot be identified and the personal information cannot be restored |
| Use for other purposes | Can be used within the scope of the specified purpose of use; the purpose of use cannot be changed beyond the scope reasonably recognized as related to the purpose before the change | Can be used within the scope of the specified purpose of use; however, the purpose of use can be changed beyond the scope reasonably recognized as related to the purpose before the change | Use for other purposes is possible |
| Provision to a third party | In principle, the individual's consent is required | Prohibited except as provided for by law (even if the individual's consent was obtained before the pseudonymized information was created); however, the outsourcing exemption (Article 27, Paragraph 5 of the Act, applied mutatis mutandis) is available | In principle, the individual's consent is not required |
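The practical difference between the two processed categories can be illustrated in a few lines of Python: pseudonymization replaces direct identifiers with a keyed token that whoever holds the key can re-link, while anonymization irreversibly drops identifiers and generalizes the rest. The record, key, and banding rules below are hypothetical; real APPI-compliant processing must follow the Commission's standards.

```python
import hashlib
import hmac

# Hypothetical customer record
record = {"name": "Taro Yamada", "customer_id": "C-1024",
          "age": 34, "balance": 1_200_000}

# Pseudonymization: the key is held separately from the data. Because the
# record can be re-linked with the key, it is still information about an
# individual under the Act.
SECRET_KEY = b"held-separately-from-the-data"  # hypothetical

def pseudonymize(rec):
    """Replace direct identifiers with a keyed (HMAC) token."""
    token = hmac.new(SECRET_KEY, rec["customer_id"].encode(),
                     hashlib.sha256).hexdigest()[:16]
    out = dict(rec)
    out.pop("name")            # delete the name entirely
    out["customer_id"] = token  # replace the ID with the token
    return out

def anonymize(rec):
    """Anonymization sketch: drop identifiers and generalize
    quasi-identifiers so the original record cannot be restored."""
    return {"age_band": f"{(rec['age'] // 10) * 10}s",
            "balance_band": round(rec["balance"], -6)}

pseudo = pseudonymize(record)
anon = anonymize(record)
```

The pseudonymized record keeps its analytic detail (exact age and balance), which is why the table above notes that it retains more usefulness; the anonymized record keeps only coarse bands and cannot be traced back.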
Financial institutions naturally have confidentiality obligations with their clients and other parties who provide them with information, and may also enter into confidentiality agreements that stipulate special confidentiality obligations when conducting special transactions such as M&A advice or securities underwriting. When using AI to analyze information, it is necessary to consider not only the relationship with the Personal Information Protection Act but also the relationship with such confidentiality obligations.
With regard to personal information obtained without a contract containing special confidentiality clauses, it is generally not argued that a confidentiality obligation heavier than that under the Personal Information Protection Act naturally arises. Therefore, both in-house use and provision to third parties should pose no problem, as long as they are carried out within the scope of the Act as discussed above.
Regarding information on corporations acquired without a contract containing a special confidentiality clause (for example, a corporation transacting under an ordinary banking transaction agreement), it is generally not argued that the confidentiality obligation owed to a corporation is heavier than that owed to an individual, so there will likely be no problem if the information is used in-house or provided to a third party within the same scope as for individuals.
When financial institutions enter into special confidentiality agreements for M&A advice, IPO advice, securities underwriting, or other special engagements, many of these agreements contain clauses such as (1) not to use the information for purposes other than the IPO (or other relevant transaction) and (2) not to disclose the information to third parties unrelated to it. Where such confidentiality agreements exist, it may be difficult to feed the data to AI, or to provide it to third-party vendors, for purposes such as using generative AI to simplify the creation of materials for future IPO projects.
Regarding legal tech, for example, there is debate as to whether uploading a contract file to a legal tech service for risk analysis constitutes disclosure of the contract to a third party and, where the contract stipulates a confidentiality obligation, whether this constitutes a breach of contract. There are arguments that implicit consent of the contracting party can be found, or that since there is no actual damage the matter is one of business judgment.6 However, the situation discussed in this section is tied far more closely to specific deals than the legal tech case, so implied consent requires more careful consideration. Likewise, the argument that there is no actual harm must be made more carefully in the case of financial institutions, which are forced to be more cautious about compliance risk than ordinary business companies.
At present, there may not be much need to feed AI large amounts of documents from parties that have confidentiality agreements, but considering that such needs may arise in the future, it may be necessary to consider the content of the confidentiality agreement templates that your company prepares.
Legally, in order to engage in investment advisory and agency business, registration as a financial instruments business operator is required (Article 28, Paragraph 3 and Article 29 of the Financial Instruments and Exchange Act).
Under Article 2, Paragraph 8, Item 11 of the Financial Instruments and Exchange Act, investment advice regarding financial instruments requires: (1) an agreement to provide advice, verbally, in writing (with certain exceptions), or by other means, regarding investment decisions based on an analysis of the value, etc. of financial instruments (meaning decisions regarding the type, issue, quantity, and price of securities to be invested in, the choice, method, and timing of purchase or sale, or the content and timing of derivative transactions to be conducted); and (2) the other party's agreement to pay a fee for this.
For example, if a generative AI is fed with information such as the past price movements, returns, and investment data of financial products, and as a result, it creates a text recommending an investment stock, etc., then such a text creation service may be considered an investment advisory business.
AI services that specialize in investment advice and are provided for a fee are likely to require registration as an investment advisory business. Currently, the "sale of computer software such as investment analysis tools" is understood not to constitute investment advisory business if the tool is available to anyone without additional support, for example through over-the-counter retail sales or download sales via networks. However, if using the tool requires ongoing provision of investment data or other support from the distributor, registration may be required (see the excerpt from the "Comprehensive Guidelines for Supervision of Financial Instruments Business Operators, etc." below). Paid AI services specializing in investment advice are likely to derive their value from ongoing data provision and tuning by the AI provider, which may mean they constitute investment advisory business.
On the other hand, when financial institutions provide investment information free of charge for the purpose of general information provision, the requirement that "the other party pays compensation" is not met, and registration as an investment advisory business is not required.
The difficult case is the following. Although AI is believed not to have evolved this far yet, suppose a general-purpose generative AI has also ingested a large amount of information on financial products and, as a result, can provide investment advice. The advice itself is free, but paid members receive, for example, faster responses. Would this constitute an investment advisory business?
In our view, even if one becomes a paid member of such an AI, the fee is not compensation for investment advice but for general benefits such as faster AI responses, and therefore this does not constitute investment advisory business. However, if AI continues to evolve, whether this interpretation remains appropriate will require further consideration.
| Comprehensive Guidelines for Supervision of Financial Instruments Business Operators, etc., VII-3-1(2)② ② Activities that do not constitute investment advisory and agency business: A. Activities that provide investment decisions based on the analysis of the values of securities or financial instruments (hereinafter referred to as "investment information, etc.") to an unspecified number of persons by methods that allow an unspecified number of persons to purchase them at any time. For example, persons who provide investment information, etc. by the methods set forth in a to c below are not required to register as an investment advisory and agency business. However, even if the target audience is an unspecified number of persons, it should be noted that registration is required in cases where highly individual and relative investment information, etc. is provided using information and communications technology such as the Internet, or where investment information cannot be purchased or used without membership registration (one-off purchase or use is not accepted). a. Sales of newspapers, magazines, books, etc. (Note) When these are displayed in the stores of general bookstores, kiosks, etc., and are available for anyone to freely view, decide on, and purchase at any time. On the other hand, please note that registration may be required when selling reports, etc. that can only be purchased by applying directly to a dealer, etc. b. Sales of computer software such as investment analysis tools (Note) When the software is available for purchase by anyone at any time, freely, based on the investment analysis algorithms and other functions of the computer software, through over-the-counter sales by retailers or download sales via networks, etc. On the other hand, please note that registration may be required when it is necessary to receive data related to investment information, etc. or other support from a dealer, etc. on an ongoing basis when using the software.
(https://www.fsa.go.jp/common/law/guide/kinyushohin/07.html#07-03 ) |
Disclaimers
The contents of this article have not been confirmed by the relevant authorities and merely describe arguments that are reasonably considered legal. Furthermore, they represent only our current views, and our views may change.
This article is merely a compilation for this blog. If you require legal advice for a specific case, please consult with a lawyer.