Is DeepSeek safe?
DeepSeek, a Chinese AI company founded in 2023, has rapidly emerged as a serious competitor in the global AI arena. Built with efficiency, accessibility, and scalability in mind, DeepSeek’s technology is raising eyebrows—and concerns—worldwide.
Who are DeepSeek?
DeepSeek operates as an independent AI research lab backed by the hedge fund High-Flyer. Unlike some AI giants that spend hundreds of millions of dollars on model training, DeepSeek has seemingly managed to produce cutting-edge models at a fraction of the cost. Its flagship model, DeepSeek-R1, was reportedly developed for around $6 million, dramatically undercutting competitors.
The company’s AI assistant, powered by DeepSeek-R1, launched in January 2025 as a free mobile app. Within weeks, it became the most downloaded app on the iOS App Store in the United States, surpassing ChatGPT. This rapid adoption even rattled the stock market, contributing to a roughly 17% single-day drop in Nvidia’s share price.
DeepSeek’s emergence highlights China’s rapid progress in AI and its challenge to Western dominance in the field. Some experts are calling this a "Sputnik moment" for American AI—suggesting it could spark an intensified AI arms race between China and the United States.
But with growing controversy surrounding the company, many people are questioning whether DeepSeek is actually safe to use.
What are the concerns?
On one hand, DeepSeek offers powerful, open-source AI models with lower computing costs. This makes advanced AI more accessible to individuals and organisations with limited resources, and reduces dependency on expensive cloud infrastructure.
On the other, concerns over data privacy, security vulnerabilities, and compliance with government censorship policies raise red flags.
The company has not directly responded to these concerns, but the model has been observed avoiding politically sensitive topics, such as the 1989 Tiananmen Square protests and Taiwan’s sovereignty. Although some argue that this is an effort to align with local regulations, critics suggest it reflects broader issues of information control and bias in AI-generated content.
While DeepSeek has taken steps to address security issues when identified, users—especially businesses and governments—should carefully evaluate these risks before integrating DeepSeek into their operations. Whether it becomes a tool for empowerment or a source of concern is yet to be seen.
The alleged use of OpenAI’s data.
Despite its advantages, DeepSeek has faced allegations of misusing OpenAI’s proprietary data in the development of its DeepSeek-R1 model.
OpenAI claims that DeepSeek used outputs from its models as part of its training data, potentially violating OpenAI’s terms of service, which prohibit the use of their data to develop competing models.
The DeepSeek-R1 research paper, published on the arXiv preprint server (operated by Cornell University), describes how outputs from a stronger "reasoning" model were used to help structure the AI’s learning process, a technique known as distillation. While distillation is common in AI development, it has led to concerns about whether outputs from OpenAI’s o1 reasoning model were used without permission.
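The distillation idea at the centre of this dispute can be sketched in a few lines. This is a minimal illustration of the general technique, not DeepSeek’s actual pipeline: `teacher_model` is a hypothetical stand-in for an API call to a larger model, and the "dataset" it builds would in practice feed a supervised fine-tuning run for a smaller student model.

```python
import json

# Hypothetical stand-in for a large "reasoning" teacher model.
# In real distillation this would be an API call that returns a
# worked, step-by-step answer for each prompt.
def teacher_model(prompt: str) -> str:
    return f"Step-by-step answer to: {prompt}"

def build_distillation_dataset(prompts):
    """Collect teacher outputs and package them as supervised
    fine-tuning examples for a smaller student model."""
    examples = []
    for prompt in prompts:
        examples.append({
            "prompt": prompt,
            "completion": teacher_model(prompt),
        })
    return examples

prompts = ["What is 2 + 2?", "Explain gravity briefly."]
dataset = build_distillation_dataset(prompts)

# Serialise one example in the JSONL style commonly used for
# fine-tuning data files.
print(json.dumps(dataset[0]))
```

The legal and ethical question is not about this mechanism itself, which is standard, but about whose model plays the teacher role and whether its terms of service permit that use.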
DeepSeek acknowledges the use of synthetic data but denies directly incorporating OpenAI’s proprietary information.
The company has stated that its training methods adhere to industry standards and do not rely on unauthorised data extraction. While DeepSeek has not provided detailed technical evidence refuting OpenAI’s claims, it maintains that its approach is legitimate and in compliance with ethical AI development practices.
However, the full extent of its data usage remains a contentious issue between the two companies. This controversy has sparked discussions about data ethics and intellectual property in AI development.
Privacy and security concerns.
Another major concern surrounding DeepSeek is data privacy. The company stores data on servers located in China, which raises concerns due to the country's strict data regulations and government oversight. Experts worry that authorities could access sensitive information without transparency, potentially enabling mass surveillance, data monitoring, or misuse in ways that conflict with international privacy standards. Some experts go further, suggesting that DeepSeek’s technology could be leveraged for surveillance, disinformation campaigns, or even cyberwarfare.
A report from cloud security firm Wiz Research highlighted a security vulnerability that exposed over a million lines of sensitive internal data, including user chat histories and API secrets. While this raised serious concerns about potential exploitation, DeepSeek responded swiftly and resolved the issue once it was reported.
The company stated that the vulnerability was an unintended oversight and that they took immediate steps to patch the security gap, reinforcing their commitment to data protection and security best practices. Nonetheless, this incident has fuelled broader discussions about the risks associated with AI models handling large-scale sensitive data.
Regulators in multiple countries have launched investigations into DeepSeek’s data practices, questioning whether the AI complies with international privacy standards.
Resources
📌 Access DeepSeek at deepseek.com.
📌 Read OpenAI’s Statements on DeepSeek.
📌 Learn more about DeepSeek’s R1 Training Process.
📌 Read the DeepSeek-R1 research paper on arXiv: DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning.
📌 Learn more about the Wiz Research Report on DeepSeek.
📌 Read DeepSeek’s Privacy Policy.
📌 Read DeepSeek’s Terms of Use.
📌 Read the BBC’s overview of DeepSeek’s emergence in the AI industry.
📌 Learn more about Australia’s Ban on DeepSeek from government devices.
📌 Learn more about South Korea’s Temporary ban on DeepSeek.
📌 Learn more about Italy’s ban on DeepSeek.