The Cost of Convenience: AI Adoption and Security Blind Spots

Author: Collin Beder, CSX-P, CET, Security+
Date Published: 11 February 2025
Read Time: 5 minutes

For the past few years, many believed that the race for artificial intelligence (AI) supremacy was firmly in the hands of established players such as OpenAI1 and Anthropic.2 However, the recent release of the open-source AI model DeepSeek-R13 has upended the tech sector and adversely affected US tech company stock prices.4 The popularity of DeepSeek, or any other new product, should not be surprising, as consumers routinely gravitate toward more cost-effective versions of existing products. Enterprises that add features or improve upon another facet of the technology in question—especially at a better price point—can be disruptive and quickly earn customers. In many ways, DeepSeek’s arrival is a stark reminder that the market is anything but static.

DeepSeek may have been the first, but it surely will not be the last product to shake up the market. While some companies are focused on innovation, aiming for better performance with fewer computing resources, it is important to remember that innovation often carries inherent risk. As the adage goes, you get what you pay for.

Beyond the current hype circulating in the tech industry, a more troubling and interesting narrative about open-source AI models emerges. As with any new technological innovation, there is risk, and that risk must be addressed for the protection of data and humans alike.

Terms of Service Risk

The proliferation and popularity of AI tools in app stores suggest that users are eager to adopt the latest and greatest offerings. For example, as of January 2025, DeepSeek has over 10 million downloads, and that number is only growing.5 However, in their enthusiasm, users underestimate the risk associated with these apps. This underestimation occurs mainly through disregard for an application’s terms of service (ToS), which amplifies the risk of hastily downloading and using new software. Buried within these lengthy and intentionally complex terms are often vague or misleading statements about how user data is collected, stored, and shared. Some speculate that ToS are purposely designed with the expectation that the majority of users will never actually read them. Enterprises then exploit users by requiring excessive permissions or inserting statements that grant sweeping access to all data, while making it clear that declining the terms results in denied access to the service. Consumers are left with two options: forfeit access to the technology entirely or unknowingly surrender personal information to entities that offer little transparency into how the data will be used.

This hasty adoption introduces added security concerns, including the frequent—and at times deliberate—presence of malicious backdoors hidden within software code.6 Backdoors can be introduced through vulnerabilities in AI-generated code, through more sophisticated attack methods such as model poisoning, or through the distribution of AI models containing hidden exploits that users unknowingly download. These backdoors not only jeopardize user data, potentially capturing login credentials that can then be used for fraud and subsequent financial gain, but also raise national security concerns.7 Additionally, compromised devices can be secretly exploited for unauthorized activities such as cryptocurrency mining, botnet recruitment, or remote command and control. These security threats extend beyond personal devices—they span critical infrastructure, governments, and enterprises.
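One practical habit that blunts the risk of tampered model downloads is integrity verification before use. The following is a minimal sketch, assuming the model's maintainers publish an official SHA-256 checksum through a trusted channel; the file name and checksum value are placeholders rather than references to any real release.

import hashlib
from pathlib import Path

# Hypothetical values: a locally downloaded model file and the checksum
# published by the model's maintainers through an official channel.
MODEL_PATH = Path("downloaded-model.gguf")
PUBLISHED_SHA256 = "paste-the-officially-published-checksum-here"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream the file in chunks so large model weights never need to fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(MODEL_PATH)
    if actual != PUBLISHED_SHA256:
        # A mismatch means this is not the artifact the publisher released;
        # it may be corrupted or tampered with, so refuse to load it.
        raise SystemExit(f"Checksum mismatch: got {actual}")
    print("Checksum verified; proceed with loading the model.")

A checksum does not prove a model is benign, but it does confirm the file is the one the publisher actually released, which closes off the simplest substitution attacks.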

AI Development Risk

While AI-powered code generation can speed up application development, it carries the risk of building applications without a solid foundation, leading to rapidly developed apps that are poorly constructed. Whereas software developers once spent much of their time remediating bugs and refining code and architecture, overreliance on AI-generated code can, unfortunately, increase security risk. The accessibility and ease of use of these tools cause people to place too much faith in AI-generated outputs, which tend to emphasize functionality rather than robustness, resulting in apps that ignore privacy and security principles. Further, a large number of AI models rely on open-source repositories with questionable security measures.8 When developers use untested libraries in their projects, they expose users to security flaws. “Release early, release often” is a common development strategy, resulting in code being released to production prematurely. On many occasions, serious security flaws go undiscovered,9 and the increasing reliance on large language models (LLMs) for code generation may only make the problem worse.
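To illustrate the functionality-over-robustness pattern, consider the kind of database lookup a code assistant might produce when asked only to “make it work.” The sketch below is hypothetical (the table and column names are invented for illustration) and contrasts string interpolation, which is vulnerable to SQL injection, with a parameterized query.

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Typical "happy path" generated code: user input is interpolated directly
    # into the SQL string, so input such as "x' OR '1'='1" rewrites the query.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    malicious = "x' OR '1'='1"
    print(find_user_unsafe(conn, malicious))  # injected input returns every row
    print(find_user_safe(conn, malicious))    # parameterized query returns nothing

Both functions pass a casual functional test, which is exactly why reviews focused only on whether generated code “works” tend to miss this class of flaw.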

Further compounding this issue is how AI applications such as DeepSeek balance accessibility against security. While open-source LLMs can be run locally, many require significant hardware resources to run efficiently, and the majority of users will opt for convenient, cloud-based options rather than setting up an offline version. This raises important questions: How is the data used, and where is it actually stored? Many users unknowingly trade data control for ease of access and do not consider the implications of entrusting sensitive data to third-party servers with unclear privacy policies. The ease of clicking “download” outweighs concerns about data governance, storage, and long-term security risk.
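For users who do have the hardware, keeping inference on the device is one way to retain data control. The sketch below is an assumption-laden illustration: it presumes a locally hosted runner exposing an OpenAI-compatible chat endpoint on localhost, and the URL, port, and model name are placeholders that will vary with the tooling chosen.

import json
import urllib.request

# Placeholder endpoint: a model served on the user's own machine by a local
# runner that exposes an OpenAI-compatible chat completions API.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def ask_locally(prompt: str) -> str:
    payload = {
        "model": "local-open-model",  # placeholder model identifier
        "messages": [{"role": "user", "content": prompt}],
    }
    request = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # The request never leaves localhost, so the prompt and any sensitive data
    # it contains stay on the device instead of on a third-party server.
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_locally("Summarize this internal incident report."))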

Far too much data is stored with few privacy protections and little regulatory oversight. Unlike jurisdictions with stricter privacy laws, such as the European Union with its General Data Protection Regulation (GDPR),10 some countries may allow government agencies or third parties to access data with little to no oversight. This raises significant risk for both individuals and organizations, as sensitive information could be used without consent. The fragmented AI regulatory landscape compounds the issue, as do products that prioritize convenience at the expense of privacy and security.

Conclusion

The rapid advancement of AI technologies highlights the growing risk associated with innovation. While open-source AI fosters accessibility and potentially greater transparency, it can also introduce unwanted risk stemming from unknown and unregulated data storage and from coding issues. Sadly, it is not uncommon for users to unknowingly compromise their privacy for the convenience of an application. Organizations must continue to educate their staff and improve governance. While the regulatory landscape evolves, the best approach to mitigating the harm associated with AI tools remains a risk-based approach that considers the unique operating conditions of these tools. Users must take their privacy into their own hands, as AI development will likely continue to outpace regulatory safeguards, leaving many users unprotected and exposed to threats.

Endnotes

1 OpenAI
2 Anthropic
3 Bakouch, E.; von Werra, L.; et al.; “Open-R1: A Fully Open Reproduction of DeepSeek-R1,” Hugging Face, 28 January 2025
4 Bratton, L.; “Nvidia Stock Plummets, Loses Record $589 Billion as DeepSeek Prompts Questions Over AI Spending,” 27 January 2025
5 Kumar, N.; “DeepSeek Statistics 2025 - Users, Revenue [OpenAI Rival],” demandsage, 30 January 2025
6 Kirichenko, D.; “Predictions for Open Source Security in 2025: AI, State Actors, and Supply Chains,” 23 January 2025
7 Morgan, L.; “Is Open Source a Threat to National Security?,” InformationWeek, 5 December 2024
8 Al-Kharusi, Y.; Khan, A.; et al.; “Open-Source Artificial Intelligence Privacy and Security: A Review,” Computers, vol. 13, iss. 12, 2024, p. 311
9 Bhalodia, V.; “The Risks of Rushed Software Releases,” Builtin, 13 November 2024
10 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation [GDPR]) (Text with EEA relevance)

Collin Beder, CSX-P, CET, Security+

Is an emerging technology practices principal at ISACA®. In this role, he focuses on the development of ISACA’s emerging technology-related resources, including books, white papers, and review manuals, as well as performance-based exam development. Beder has worked at ISACA for 4 years, authored the book Artificial Intelligence: A Primer on Machine Learning, Deep Learning and Neural Networks, and developed hands-on performance-based labs and exams.
