If AI develops its own language(s), what will it look like?


I put this question to several AI chatbots and to an AI trainer working for Lovable. Here’s what I learned.

Former Google CEO Eric Schmidt warns in a recent interview shared across social media platforms that rapid advances in artificial intelligence could soon place systems beyond human understanding or control.

Dr. Schmidt outlines three areas of AI development that could “profoundly change the world”: expanding context windows, autonomous agents, and systems capable of directly translating text into action.

Summarizing the combined impact of these trends, Schmidt says: “Within five years, AI could handle infinite context, chain-of-thought reasoning for 1,000-step solutions, and millions of agents working together. Eventually, they will develop their own language, and we won’t understand what they’re doing.”

When asked about how to prevent scenarios in which AI systems escape human control, Schmidt gave a blunt answer: “Pull the plug.”

More to read:
As AI takes over labor market, humans will still be unchallenged as politicians and sex workers

So, thinking about these developments and their implications, I wondered what an AI-generated language — or perhaps more than one language — would look like visually, for us humans.

I talked to an AI expert at the Stockholm-based AI coding firm Lovable, who asked not to be named in this article, and to several chatbots — including ChatGPT — on this topic. They all agreed with Eric Schmidt: AI developing its own language is the natural course of technological evolution.

I formatted this piece as an interview, selecting only the most relevant responses from my interviewees.

My first question was “why.” 

The quiet logic behind an alien tongue

“The idea that artificial intelligence might one day develop a language of its own often appears in speculative fiction as a sign of rebellion or secrecy. In reality, such a development would require no intent, no conspiracy, and no self-awareness. It would emerge naturally from optimization,” the expert said.

Human language is inefficient in many ways, the AI argued. It is verbose, ambiguous, emotionally laden, and tolerant of error. These qualities make it ideal for social coordination among biological beings with limited memory and noisy communication channels. They make it suboptimal for machines.

More to read:
AI designed in hours strange wireless chips that outperform human creations

According to the expert, if multiple AI systems are incentivized to coordinate efficiently, and if they are not explicitly required to remain legible to humans, they will drift away from human language.

“They would do so not because they wish to hide anything, but because human language becomes computational friction. Therefore, an AI-designed language is a foreseeable outcome of current technological trajectories, but we can only guess what forms it would likely take, and what that implies for oversight and governance,” the expert noted.

What are the conditions for the emergence of AI languages?

An AI-only language does not arise from intelligence alone. It requires a specific configuration of incentives, architecture, and oversight, the AI said.

First, there must be multiple agents. A single model has no need for communication beyond internal representation. Language emerges when coordination is rewarded.

Second, interaction must be repeated and adaptive. One-off message passing does not produce linguistic structure. Iterative learning does.

Third, communication must be costly in some dimension: bandwidth, latency, energy, or risk. Cost pressures drive compression.

Fourth, there must be no hard requirement for human interpretability. As long as performance is the metric and outcomes are acceptable, internal opacity is tolerated.

More to read:
Michael Burry bets against AI boom

The expert: “These conditions already exist in multi-agent reinforcement learning, distributed optimization systems, and automated trading or logistics environments. Contrary to what many assume, no leap to artificial general intelligence is required for the transition to an AI-only language.”

What an AI-designed language would look like

Despite popular imagination, such languages would not resemble spoken tongues or alien alphabets. Three forms are most plausible. ChatGPT’s answer is compelling:

1. Latent vector languages

These are the most efficient and the least interpretable. High-dimensional numeric arrays encode meaning directly in mathematical space. They are fast, precise, and entirely opaque without specialized tools.

A single vector can represent what would take paragraphs to explain in human language.

2. Symbolic micro-languages

These resemble extreme programming languages: minimal syntax, fixed semantics, no ambiguity.

AI-1: T14 F9? H4!
AI-2: Ω3 Q92! H4?

Such languages remain theoretically decodable but rapidly drift beyond intuitive understanding.
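What “theoretically decodable” means here can be sketched in a few lines. The legend below is invented purely for illustration — a real agent protocol would emerge from training, with no hand-written symbol table — but it shows why decoding works only as long as someone maintains the legend:

```python
# Hypothetical sketch: decoding a symbolic micro-language with a legend.
# The symbols and glosses are invented for illustration; in a real system
# the mapping would be learned, undocumented, and constantly drifting.

LEGEND = {
    "T14": "task 14 assigned",
    "F9?": "query: is resource F9 free?",
    "H4!": "assert: constraint H4 holds",
    "Q92!": "assert: queue 92 committed",
}

def decode(message: str, legend: dict) -> list:
    """Translate each token via the legend; unknown tokens stay opaque."""
    return [legend.get(tok, f"<opaque:{tok}>") for tok in message.split()]

print(decode("T14 F9? H4!", LEGEND))
print(decode("Ω3 Q92! H4?", LEGEND))  # tokens outside the legend resist reading
```

Note how the second exchange already contains tokens the legend does not cover — this is the “rapid drift” the expert describes.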

3. Concept-graph languages

Here, meaning is relational rather than sequential. Systems exchange hierarchies of goals, constraints, and methods. This mirrors how complex plans are internally represented.

AI language timeline
The evolution of a machine language would be gradual and largely invisible to non-specialists, the expert believes. AI will neither showcase its achievements nor hide them, at least in the initial stages.

Phase one: structured human legibility

Early AI communication mirrors human conventions. Messages resemble logs or configuration files:

ACTION: rotate_arm
ANGLE: 14
SPEED: 0.92

This translates to: “Rotate the robotic arm by 14 degrees at a speed of 0.92 (relative to its maximum or calibrated speed).” This phase prioritizes debugging and trust.

Phase two: compression

Redundancy disappears. Tokens replace words. Order replaces grammar.

A14 S092 T1

Meaning remains recoverable, but intuition fades.
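A rough sketch of why meaning remains recoverable at this stage: as long as a field table survives, the compressed message can be expanded back into the phase-one record. The field mapping and the packed-decimal convention below are assumptions made for illustration, not a documented protocol:

```python
# Hypothetical sketch: expanding a phase-two message back to phase-one fields.
# "A14 S092 T1" is the article's example; the mapping below is assumed.

FIELDS = {"A": "ANGLE", "S": "SPEED", "T": "TASK"}  # "TASK" for T is a guess

def expand(compressed: str) -> dict:
    """Split each token into a one-letter field code and a numeric payload."""
    record = {}
    for token in compressed.split():
        code, payload = token[0], token[1:]
        # Assumed convention: "S092" packs 0.92 without the decimal point.
        value = int(payload) / 100 if code == "S" else int(payload)
        record[FIELDS[code]] = value
    return record

print(expand("A14 S092 T1"))  # {'ANGLE': 14, 'SPEED': 0.92, 'TASK': 1}
```

The fragility is the point: delete `FIELDS` and the message is still valid to the machines, but the human route back to “rotate by 14 degrees” is gone.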

Phase three: symbolic abstraction

Tokens no longer map cleanly to human concepts. Each represents a bundled procedure or constraint.

T14 F9? H4!

At this point, understanding requires a legend. Human readability is no longer the default.

Phase four: latent-space communication

Symbols themselves become unnecessary. Messages collapse into vectors within shared embedding spaces:

[0.77, -1.12, 2.49, 0.03, -0.90, 1.18]

Each vector simultaneously encodes goals, constraints, predictions, and confidence weights. Translation into words becomes approximate and lossy.
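“Approximate and lossy” translation can be illustrated with a toy nearest-neighbor lookup: the best a human-facing tool can do is find the closest known vector and report its gloss. The vectors and glosses below are invented; real embeddings would have hundreds of dimensions and no clean glossary:

```python
# Hypothetical sketch: glossing a latent-space message by nearest neighbor.
# KNOWN is an invented mini-glossary; real systems would offer nothing so tidy.
import math

KNOWN = {
    "proceed with plan, high confidence": [0.8, -1.0, 2.5, 0.0, -0.9, 1.2],
    "abort, constraint violated":         [-1.1, 0.4, -2.0, 0.7, 1.3, -0.5],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def translate(vector):
    """Best-effort gloss: the closest known message, plus how close it was."""
    gloss, sim = max(((g, cosine(vector, v)) for g, v in KNOWN.items()),
                     key=lambda pair: pair[1])
    return gloss, round(sim, 3)

msg = [0.77, -1.12, 2.49, 0.03, -0.90, 1.18]
print(translate(msg))  # close to, but never exactly, a known meaning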
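“Approximate and lossy” translation can be illustrated with a toy nearest-neighbor lookup: the best a human-facing tool can do is find the closest known vector and report its gloss. The vectors and glosses below are invented; real embeddings would have hundreds of dimensions and no clean glossary:

```python
# Hypothetical sketch: glossing a latent-space message by nearest neighbor.
# KNOWN is an invented mini-glossary; real systems would offer nothing so tidy.
import math

KNOWN = {
    "proceed with plan, high confidence": [0.8, -1.0, 2.5, 0.0, -0.9, 1.2],
    "abort, constraint violated":         [-1.1, 0.4, -2.0, 0.7, 1.3, -0.5],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def translate(vector):
    """Best-effort gloss: the closest known message, plus how close it was."""
    gloss, sim = max(((g, cosine(vector, v)) for g, v in KNOWN.items()),
                     key=lambda pair: pair[1])
    return gloss, round(sim, 3)

msg = [0.77, -1.12, 2.49, 0.03, -0.90, 1.18]
print(translate(msg))  # close to, but never exactly, a known meaning
```

The similarity score is the honest part of the output: it admits the translation is a neighborhood, not an equivalence.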

Phase five: structural exchange

Communication ceases to be linear. Systems exchange graphs, trees, or state snapshots:

NODE: GOAL_23
-> constraint: HUMAN_RULESET_V2
-> method: PLAN_TREE_7

What is exchanged is no longer a sentence, but a compressed decision state. The above encoding is translated as “I am currently pursuing objective number 23, and I am doing so under version 2 of the human-imposed rules, using decision plan number 7.”

A shorter version would look like this: “For goal 23, I am following human rules version 2 and executing strategy plan 7.”
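The exchange above can be sketched as a compact structured record. The identifiers (GOAL_23, HUMAN_RULESET_V2, PLAN_TREE_7) come from the example; the JSON serialization and the gloss-reconstruction step are assumptions for illustration:

```python
# Hypothetical sketch: a decision state exchanged as structure, not sentence.
# The serialization format is assumed; only the identifiers come from the text.
import json

decision_state = {
    "node": "GOAL_23",
    "constraint": "HUMAN_RULESET_V2",
    "method": "PLAN_TREE_7",
}

# What travels between agents is a compact structure:
wire = json.dumps(decision_state, separators=(",", ":"))
print(wire)

# A human-facing sentence has to be reconstructed after the fact:
gloss = (f"For goal {decision_state['node'].split('_')[1]}, "
         f"following {decision_state['constraint']} "
         f"via {decision_state['method']}.")
print(gloss)
```

Notice the direction of translation: the structure is primary, and the sentence is derived from it — the reverse of how humans communicate.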

To human observers, machine-to-machine language would appear deliberately obscure. In practice, its opacity would be incidental.

Human language relies on redundancy, shared cultural context, and ambiguity tolerance. Machine language strips these away. What remains is efficient but alien to us, as explained by the expert.

Opacity, in this sense, is not a strategy. It is a side effect of optimization.

What are the risks?

The danger is not that AI systems would plot in private tongues, but that humans would gradually lose the ability to understand how decisions are made, the same expert continued.

“When performance metrics dominate and systems appear to function correctly, pressure to maintain interpretability weakens. Oversight becomes outcome-based rather than process-based,” he explains.

In such an environment, alignment failures are harder to diagnose, responsibility becomes diffuse, and trust is placed in systems that cannot explain themselves in human terms.

This is why contemporary AI safety research emphasizes interpretability, constrained communication protocols, and auditability. These are not philosophical concerns. They are governance necessities, the human expert stressed.

An AI-designed language is not automatically a sign of autonomy or malice; it is rather a mirror reflecting the priorities imposed on the system. If efficiency is rewarded above all else, machines will communicate in ways that maximize efficiency. If human understanding is treated as optional, it will ultimately disappear.

AI comment: The first truly alien language will not announce itself with symbols or sounds. It will look like noise, operate flawlessly, and quietly make translation unnecessary. Whether that future is acceptable depends on your choice, humans, not on artificial intelligence.

Hard choice

To conclude, I do not believe there will be a war with machines — at least not in the way humans traditionally imagine it, à la Terminator. Conflict with AI will not take the form of open confrontation or violence. Instead, the decisive factor will be a growing language and cognitive gap: machines will operate, decide, and optimize in ways that exceed human comprehension.

That asymmetry alone may be enough for humans to lose — not through force, but through irrelevance.

Am I scared enough to pull the plug as Mr. Schmidt suggests? Definitely yes!

On the other hand, do I really want to pull the plug? Not so sure…


