Where Does AI Belong as “Knowledge”?
Short Summary
This essay is not about whether Japan should build its own AI, or whether the American or Chinese model is better.
The real question is this:
What kind of knowledge is AI, who controls it, and who takes responsibility when it fails?
In the IT world, strong technologies have grown through sharing, openness, and verification, not through ownership and enclosure. Bugs must be seen, systems must be tested, and failures must be accounted for.
Today, AI is increasingly treated as capital and national property.
I believe this is a detour.
From the fields of control, physics, and real-world implementation, a more practical common sense will return.

Where This Question Came From
I received an essay from an acquaintance about Japan’s AI Basic Plan and the competition between the United States and China in AI development. I started to write a reply, but as I read the essay again, I found myself stopping again and again. I was not just weighing the arguments themselves. Something I had been vaguely uneasy about for a long time was finally taking shape in my own words.
Why the “National AI Plan” Felt Wrong
To be honest, even as a Japanese citizen, I had not seriously engaged with what is often called the “national AI plan.” I kept my distance from the large investment figures and the slogan of “domestic AI,” telling myself that this was probably just how these things go. But as I read more of the materials and followed the surrounding discussion, one discomfort would not go away: before any question of budget or nationality, I could not see how the plan treated AI as a form of knowledge in the first place.
AI Is Not Something to “Have”
The real issue is not whether Japan should “have” AI or not.
The real question should be how AI is positioned as knowledge, who controls it at each stage, and who takes responsibility when it fails.
In much of the current discussion, however, building AI itself seems to have become the goal. The questions that should follow, about control, verification, and responsibility, are barely discussed.
Engineering Doubts About the Data-Center Model
I also have serious concerns about the attempt to reproduce, inside Japan, the American model that depends on massive data centers and enormous electricity consumption. This is not an emotional reaction. From the perspective of investment efficiency, technical sustainability, and above all whether control and verification are even possible, it is hard not to doubt this approach on engineering grounds.
AI is not complete at the moment it is built. It becomes a real technology only after it has been operated, corrected, and allowed to fail, with those failures clearly accounted for. Who carries that responsibility matters.
Beyond “America or China”
I am also uncomfortable with the way the discussion is narrowing into a simple choice between “the American model” and “the Chinese model.”
Technologies associated with China’s DeepSeek show impressive efficiency and openness, and there is much to learn from them. At the same time, I often hear them called dangerous. Even so, choosing one camp over the other does not seem to solve the core problem.
I think we need to ask again whether the idea that states or corporations should own and enclose AI really serves technological progress and social stability.
“Knowledge Held by the World’s People”
As I continued writing, I found myself arriving at a single phrase. It pointed not to a policy position, but to the place where knowledge itself is assumed to belong. I call it “knowledge held by the world’s people.”
This phrase may sound unfamiliar, and it may even carry the smell of old political language. It is easy to misunderstand. But what I mean is neither a new ideal nor an emotional slogan. It is, rather, a very practical way of thinking that has been shared for a long time in the world of information technology.
How Technology Actually Grew
What I have in mind is a way of treating knowledge that assumes the world rather than a single nation, sharing rather than ownership, verification rather than enclosure, and growth through division of labor and contribution rather than monopoly by large capital.
This is not an ethical claim about how things should be.
It is an observation based on how things have actually worked.
The Unix philosophy was built on the idea that small tools, when combined, make the whole system stronger. TCP and IP were designed on the assumption that they would cross national and corporate borders. Linux became one of the most robust operating systems precisely because developers around the world could freely modify it. Git and GitHub assume that code will be forked, criticized, and rewritten. Python, NumPy, and SciPy were treated as public goods from the beginning, open to anyone who wanted to use or improve them.
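To make the Unix idea concrete, here is a minimal sketch in Python of small, single-purpose tools composing into a stronger whole. The function names and the pipeline are my own illustration, not part of any of the projects named above.

```python
# A minimal sketch of the Unix philosophy: each function does one small
# job, and strength comes from composing them, like a shell pipeline.
# The names (read_lines, grep, count) are illustrative, not a real API.

def read_lines(path):
    """Source: yield lines from a file, like `cat`."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield line.rstrip("\n")

def grep(pattern, lines):
    """Filter: pass through only lines containing the pattern, like `grep`."""
    return (line for line in lines if pattern in line)

def count(lines):
    """Sink: count whatever flows through, like `wc -l`."""
    return sum(1 for _ in lines)

# Composed like the shell pipeline: cat app.log | grep ERROR | wc -l
# errors = count(grep("ERROR", read_lines("app.log")))
```

Each piece is trivial on its own; the value appears only in the combination, and anyone can swap in a better filter without touching the rest.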
Knowledge as a Physical Sense
I apologize for listing so many technical terms. But in reality, many IT engineers have built friendships across borders and cultures through these technologies. For people with that experience, the idea that “knowledge becomes stronger when it is shared” is not a theory. It is closer to a physical sense learned through practice.
From that perspective, knowledge owned by states or monopolized by corporations has long been seen as fragile from an engineering point of view.
Why Closed Knowledge Fails
The reasons are simple. Bugs increase when they are hidden. What is not verified will eventually break. Closed designs always fail to scale at some point.
That is why knowledge that is open, criticized, and continuously improved has survived in the end. This is not a political belief. It is something confirmed again and again in real work.
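As one small illustration of what “verified” means in practice, here is a sketch of the kind of test anyone can write against code they can see. The function itself is hypothetical, invented for the example.

```python
# A sketch of verification in the open: because the code is visible,
# anyone can pin its behavior down with tests like these.
# `normalize` is a hypothetical example function, not from any library.

def normalize(values):
    """Scale a list of numbers so they sum to 1."""
    total = sum(values)
    if total == 0:
        raise ValueError("cannot normalize an all-zero input")
    return [v / total for v in values]

def test_sums_to_one():
    result = normalize([2.0, 3.0, 5.0])
    assert abs(sum(result) - 1.0) < 1e-12

def test_rejects_all_zero_input():
    try:
        normalize([0.0, 0.0])
    except ValueError:
        pass  # the documented failure mode, checkable because the code is open
    else:
        raise AssertionError("expected ValueError")

if __name__ == "__main__":
    test_sums_to_one()
    test_rejects_all_zero_input()
    print("all checks passed")
```

A closed implementation of the same function could hide the all-zero failure for years; an open one gets a test like this within days.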
Why Openness Now Looks “Idealistic”
So why does this way of thinking now appear to be “idealistic”? I see at least three reasons.
First, IT has changed from shared infrastructure into an instrument of dominance.
Second, generative AI has become a machine for attracting capital.
Third, states have begun to treat AI as a security asset.
This is no longer the logic of IT. It is the logic of the state.
The Principles Have Not Lost
As these forces overlap, AI is unfortunately becoming a “nationalized form of capital.” Concepts I first learned long ago now seem to be reappearing in a different shape.
Still, I do not believe that the core principles of IT have been defeated. The 2017 paper “Attention Is All You Need” was fully released to the world. Linux continues to support the core of our society. GitHub has effectively become a global research infrastructure. PyTorch and TensorFlow evolve on the assumption that they are open.
In fields such as quantum control, robotics, and physical simulation, results are not even recognized without reproducibility and openness.
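To show what that bar looks like in code, here is a minimal sketch of a reproducible simulation using Python and NumPy: the random seed is published along with the result, so anyone can rerun it and get the same numbers. The “experiment” and its values are illustrative, not drawn from any real study.

```python
# A minimal sketch of reproducibility: the seed is part of the published
# result, so an independent run produces exactly the same data.
# The measurement model here is an illustrative toy.
import numpy as np

def noisy_measurement(n_samples, seed=20240101):
    """Simulate n noisy sensor readings around a true value of 1.0."""
    rng = np.random.default_rng(seed)  # fixed, published seed
    return 1.0 + 0.1 * rng.standard_normal(n_samples)

# Two independent runs agree exactly, so a reviewer can check the claim.
run_a = noisy_measurement(1000)
run_b = noisy_measurement(1000)
assert np.array_equal(run_a, run_b)
print(f"mean = {run_a.mean():.6f}")  # same output given the same NumPy version
```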
A Practical Conclusion
The physical world does not lie. Black boxes eventually fail. Results that cannot be reproduced have no value in real work. That is why, in these fields, openness remains the optimal strategy.
I am not saying that things were better in the past. I only want us to remember a simple lesson learned from experience.
AI is not a belief system.
It is a tool that must be controlled, tested, and held responsible.
From the fields of control, physics, and implementation, this common sense will return.
That is how I see it.
