Jul 04, 2025

Chinese AI start-up DeepSeek shocked the world when it released its open-source R1 model earlier this year. Developed for a fraction of the cost of Western blockbuster apps such as ChatGPT, Copilot, and Grok, DeepSeek’s product leveraged several breakthroughs to deliver comparable performance with far greater efficiency on less advanced chips.

Moreover, its open-source, free distribution model saw it instantly become the most downloaded AI app globally, and it was seen as undermining not only the investment case for dominant US tech firms but also America’s attempts to contain China’s progress on this vital technological front.

“The emergence of DeepSeek is strongly aligned with China’s strategic vision to become a global AI leader by 2030,” said Associate Professor Alfred Wu of the Lee Kuan Yew School of Public Policy (LKYSPP). “This national roadmap emphasises AI as a core driver of economic transformation, industrial upgrading, and governance modernisation as well as prioritising reducing dependence on foreign technologies.”

China’s rapid evolution of AI

Professor Wu said, “Though not a state-led project, DeepSeek’s rapid rise sends a message about China’s evolving innovation ecosystem and reflects how market-driven innovation is also playing a key role in achieving these national objectives. While the state continues to provide strategic direction and regulatory oversight, private firms are increasingly the engines of cutting-edge technological development.”

While much has been made of the disruptive potential of AI in the West, the rollout of applications at scale there has been relatively slow. China, by contrast, has been actively incorporating DeepSeek into its public administration systems. Cities such as Hangzhou, home to tech giant Alibaba, have incorporated DeepSeek into their smart city initiatives.

For example, Hangzhou's launch of City Brain 3.0 introduced the DeepSeek-R1 model, making it one of the first cities to integrate AI-driven, self-evolving digital intelligence into urban management. “This integration aims to enhance decision-making processes and improve public service delivery,” said Professor Wu.

Step-change in efficiency

The three key components that have powered DeepSeek’s leap in efficiency are:

  • Mixture of Experts (MoE), which activates only the parts of the model most relevant to a given query, saving energy and computational power rather than running the entire model for every task (a minimal illustration follows this list).
  • Multi-Head Latent Attention (MLA), which allows DeepSeek to focus on the most valuable data, improving accuracy and efficiency.
  • Reinforcement Learning (RL), which enables DeepSeek to learn and adapt with less human intervention than Supervised Learning (SL), further optimising performance.
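
A minimal sketch of the MoE routing idea, using a toy PyTorch layer with invented dimensions and a simple top-2 router rather than DeepSeek’s actual architecture, might look like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Toy Mixture-of-Experts layer: each token is routed to only top_k experts."""
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # The router scores every expert for each token.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is a small feed-forward block; only a few run per token.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (num_tokens, d_model)
        gate_logits = self.router(x)
        weights, chosen = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e  # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(16, 64)          # 16 tokens with 64-dimensional embeddings
print(TinyMoELayer()(x).shape)   # torch.Size([16, 64])
```

Because only two of the eight experts run for each token, most of the layer’s parameters stay idle on any given forward pass, which is where the compute savings come from.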

“We believe that MoE refined by DeepSeek has the most significant long-term impact on the AI industry,” said Professor Alfred Schipke, Director (East Asian Institute) and Professor in Practice, LKYSPP. “What might be more critical is that DeepSeek open-sourced their publications on how they optimised MoE. It allows the industry to follow the trend and fine-tune the innovation, further reducing the overall cost of AI training and inference.

“MoE has already spilled over to many leading LLMs such as Alibaba’s Qwen2.5-Max, while many recent publications drawing the industry’s attention are advancing techniques of MLA. By optimising KV cache memory, MLA is particularly helpful in bringing advanced generative AI to peripheral and smaller devices equipped with far less computing power, such as smartphones and wearables like smartwatches or glasses.”
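
A back-of-the-envelope illustration of the KV-cache argument, using invented layer counts and dimensions rather than DeepSeek’s published configuration, might look like this:

```python
# Rough comparison of per-token KV cache size: full multi-head attention
# versus a compressed shared latent, in the spirit of MLA (illustrative numbers only).
n_layers   = 32    # hypothetical transformer depth
n_heads    = 32    # attention heads per layer
head_dim   = 128   # dimension per head
latent_dim = 512   # size of the compressed KV latent per layer (assumed)
bytes_per  = 2     # fp16

def cache_bytes_per_token(per_layer_floats):
    return n_layers * per_layer_floats * bytes_per

standard = cache_bytes_per_token(2 * n_heads * head_dim)  # full keys + values per layer
latent   = cache_bytes_per_token(latent_dim)              # one shared latent per layer

print(f"standard MHA cache: {standard / 1024:.0f} KiB per token")   # 512 KiB
print(f"latent-compressed:  {latent / 1024:.0f} KiB per token")     # 32 KiB
print(f"reduction: ~{standard / latent:.0f}x")                      # ~16x
```

Shrinking the cache that grows with every generated token is what makes long-context inference plausible on memory-constrained devices.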

Open-source challenge to US approach

DeepSeek’s continued success in unleashing the potential of open-source culture hinges on several factors. China clearly has an interest in promoting any initiative that might help counter the US’s primacy in AI technology; a position America is keen to protect, as demonstrated by its various measures to restrict China’s access to advanced chips and by its refusal to sign the Paris AI summit declaration on Inclusive and Sustainable Artificial Intelligence for People and the Planet.

“Firstly, just like Android for mobile phones, the relative maturity of the global ecosystem of open-source AI, whether for developers or for the market of end-users, would play a major role in successful adoption,” said Professor Schipke. “For open-source to have sustainable development, large economies such as Europe, India and Southeast Asia will be particularly crucial, which resonates with the official tone of China appealing for a more inclusive and multi-polar approach to AI.

“Secondly, in presumably extreme geopolitical or decoupling scenarios, the key factor is how China’s domestic market and broader industrial ecosystem can support the growth and spillover of open-source AI. Given that China has the largest unitary market in major areas of AI application, such as autonomous-driving EVs, PCs, next-generation smartphones, and wearable devices like augmented-reality glasses, the country has a good chance of building an application-anchored ecosystem where open-source applications can offer a decent return on investment.”

Strategic implications

In a recent commentary published with research assistant Liu Bojian, Professor Schipke noted that DeepSeek’s cost efficiency is not just a technical achievement — it is a strategic one. By offering downloadable packages, DeepSeek allows users to run or even retrain the model locally.

This offline capability has broadened its appeal, especially in industries like telecommunications, banking and energy — where data privacy, portability and adaptability are critical.
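
As a rough sketch of what local deployment looks like in practice, the snippet below loads one of the openly released R1 distillations with the Hugging Face transformers library; the checkpoint name and prompt are illustrative only:

```python
# Minimal local-inference sketch (assumes transformers, torch and enough GPU/CPU memory).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"   # example open-weight checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"    # keeps weights and data on-premise
)

prompt = "Summarise the benefits of on-premise AI deployment for a bank."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because inference runs entirely on local hardware, no queries or documents need to leave the organisation’s own infrastructure.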

In a world where AI is often seen as a resource-intensive luxury, DeepSeek is proving that high performance does not have to come with a high price tag, which is particularly important in emerging markets.

But cost-effectiveness is not the only criterion that governments and policymakers need to consider when planning for an AI-enhanced future. Professor Wu has observed moves by Australia and the US, among other countries, to limit market access for China-linked AI firms, which often route queries and content through Chinese servers.

“DeepSeek's emergence intensifies the ongoing ‘AI bifurcation’ between China and the West,” Professor Wu said. “Its capabilities are prompting debates about AI security, data localisation, and technological sovereignty.

“Chinese models are more likely to be embedded within censorship and control frameworks. This divergence will make it more difficult to reach consensus in global forums such as the United Nations (UN) or the Organization for Economic Co-operation and Development (OECD) on AI governance.”

Regulatory challenge

Professor Wu notes several areas of friction that need to be resolved, or at least discussed.

  • Content Control vs. Free Expression: “In China, generative AI is tightly regulated to ensure it aligns with ‘core socialist values’. This limits freedom of expression and may suppress critical or creative use cases. Globally, striking the right balance between safety and openness is still a challenge.”
  • Accountability and Transparency: As the ability of models to operate autonomously increases, it will become critical to define responsibility, “especially in high-risk domains like law, healthcare, and public policy”.
  • Bias and Fairness: AI models are only as good as the material on which they are trained, and inevitably reflect biases embedded in the data and societies from which they learn. DeepSeek’s strong emphasis on Chinese-language sources may face limitations if it leads to reduced balance or adaptability across diverse cultural and political contexts.

“The risk of cultural or political bias is salient,” Professor Wu said. “For example, DeepSeek has declined to respond to certain questions related to Chinese public administration. Such issues remain insufficiently mitigated and raise broader concerns about the model’s transparency, neutrality, and utility in comparative or international research settings.”

From national oversight to cross-border AI governance

In a November 2024 paper for the Oxford Martin AI Governance Initiative, researchers led by Claire Dennis identified nine critical areas warranting regulators’ attention under the broad headings of Data (privacy and provenance), Compute (chip access, provider oversight), and Models (bias, content provenance, evaluation, incident monitoring, risk management). They concluded that issues relating to models required the most urgent attention.

Professor Wu said, “As AI applications scale globally, the world will need mechanisms to coordinate rules across jurisdictions, particularly on cross-border data flows, AI model evaluation, and ethical alignment.”
