DeepSeek for AI democracy – Does China’s DeepSeek stay closer to democratic values than its Western competitors?

Artificial intelligence is transforming our world, demanding greater transparency and accountability. Our work with tech investors and tech firms shows that these values protect ethical boundaries better than regulation or consumer scrutiny alone. As the world watches DeepSeek, we ask whether it can democratise AI, and conclude that it actually goes further than ChatGPT and the like.

Innovations continue to shape our world in ways we could never have imagined. One such development is the release of DeepSeek’s v3 and R1 models, which compete with OpenAI’s and other leading models, turning the tech world upside down. As with any technology developed in China, some people are rightly worried about potential privacy risks and censorship issues associated with DeepSeek. But before jumping to conclusions from a Western perspective, is it possible that DeepSeek addresses these concerns quite well with its open-source technology and affordable reasoning model? Does it perhaps adhere to the principles of liberty and transparency better than its American peers?

At a product level, DeepSeek offers its reasoning feature for free for limited use, and at an unprecedentedly low cost for high-volume use (less than US$ 3 per month, versus US$ 200 for ChatGPT’s Pro Plan). The reasoning feature is an important innovation: it addresses the perception of AI systems as black boxes by letting users see the step-by-step reasoning behind the AI’s responses.

In addition to the reasoning feature, DeepSeek’s open-source approach allows developers and researchers to access, modify, and improve the AI model, and to fix biases where needed. By sharing its source code, DeepSeek ensures transparency, building trust among users and allowing developers to align the software with their own needs and ethics. This open-source approach, which enshrines values of transparency and democracy, is also encouraged by the EU AI Act.

And yes, like other users, we have been unable to get answers to questions about censored topics such as the events of Tiananmen Square when using DeepSeek’s model. However, Google’s Gemini is also unable to answer questions about the fairness of the 2020 presidential election in the US. The difference is that users can watch DeepSeek’s R1 reasoning run into issues when answering these questions, and use alternative versions of the open-source model (such as one hosted by Perplexity) to get around DeepSeek’s limitations. Although one shouldn’t underestimate the influence of China’s censorship programmes, how much trust should we place in competing Western AI models that offer no insight into their information processing for less than US$ 200 a month?

Though it is necessary to scrutinise DeepSeek for potential privacy risks and censorship, it may actually set a higher standard for transparency in the AI industry. DeepSeek’s open-source model and affordable reasoning feature challenge Western assumptions by offering an alternative to the closed-source development favoured by American competitors such as OpenAI. This raises the question: is DeepSeek advancing AI transparency more effectively than its Western peers? As we navigate this AI-driven world, it is crucial to remain critical and vigilant, but also open to the possibility that innovations from unexpected sources might indeed advance the values of transparency and accountability we hold dear.

Are you interested in understanding the ethical boundaries of technology and how to assess these in your investment decision-making? Feel free to reach out to us.

Willem Vosmer (Partner), Pranav Kalra (Senior Consultant)