Workplace AI Adoption Raises Novel Security Concerns According to Recent Survey
A recent survey by Anagram, a human-driven security training platform, found that 78% of employees use AI tools such as ChatGPT, Gemini, and Copilot at work, even though their companies have no clear policies governing that use. According to Harley Sugarman, founder and CEO of Anagram, this gap underscores the urgent need for private companies to take a bigger role in securing their networks and educating their teams.
Sugarman, a cybersecurity expert, suggested several best practices for securing data when using generative AI tools in the workplace. A key recommendation is to map all AI usage organization-wide to understand which tools are in use, by whom, for what purpose, and what data is entered into them. This inventory helps identify and mitigate high-risk areas.
Another important practice is to implement role-based access controls to restrict AI tool access based on job function and data sensitivity, preventing unauthorized or unnecessary exposure. Additionally, Sugarman recommended prohibiting personal accounts and requiring use of corporate accounts with enhanced security features.
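As a rough illustration of how such role-based restrictions might be encoded, the sketch below maps job functions to approved tools and a maximum data-sensitivity tier. The roles, tool names, and tiers are illustrative assumptions, not anything prescribed by Anagram.

```python
# Illustrative sketch only: the roles, tools, and sensitivity tiers below are
# assumptions for demonstration, not a published policy.
ROLE_POLICY = {
    "engineering": {"tools": {"corporate_copilot"}, "max_sensitivity": "internal"},
    "marketing":   {"tools": {"corporate_chatgpt"}, "max_sensitivity": "public"},
    "hr":          {"tools": set(),                 "max_sensitivity": "public"},
}

# Sensitivity tiers ordered from least to most sensitive.
SENSITIVITY_ORDER = ["public", "internal", "confidential", "restricted"]

def is_allowed(role: str, tool: str, data_sensitivity: str) -> bool:
    """Return True if this role may send data of this sensitivity to this tool."""
    policy = ROLE_POLICY.get(role)
    if policy is None or tool not in policy["tools"]:
        return False
    return SENSITIVITY_ORDER.index(data_sensitivity) <= SENSITIVITY_ORDER.index(policy["max_sensitivity"])

print(is_allowed("engineering", "corporate_copilot", "internal"))  # True
print(is_allowed("hr", "corporate_chatgpt", "confidential"))       # False
```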
Clear AI usage policies specifying acceptable use, data handling, and consequences for violations are also essential, as is training employees to identify and properly handle confidential, proprietary, and personal data. Sugarman emphasized the need for a broadened insider-threat model that accounts for well-intentioned employees misusing powerful tools, not just malicious actors.
Sugarman also recommended offering secure AI environments with internal guardrails, training employees to anonymize data before using it in large language models, deploying lightweight data loss prevention (DLP) tools that scan and redact sensitive content at the API level, and using real-time nudges such as pop-up warnings when risky behavior is detected.
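To make the DLP and nudge ideas concrete, here is a minimal sketch of prompt-level scanning and redaction, assuming a few regex patterns for email addresses, API-key-like strings, and Social Security numbers. A real DLP product would cover far more patterns and typically sit at a proxy or API gateway rather than in application code.

```python
import re

# Minimal sketch of prompt-level redaction, not a production DLP tool.
# The patterns below are illustrative assumptions; real scanners cover many more.
REDACTION_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"), "[REDACTED_API_KEY]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),
]

def redact(prompt: str) -> tuple[str, bool]:
    """Return the redacted prompt and whether anything was removed."""
    changed = False
    for pattern, placeholder in REDACTION_PATTERNS:
        prompt, n = pattern.subn(placeholder, prompt)
        changed = changed or n > 0
    return prompt, changed

clean, flagged = redact("Summarize this: my key is sk_live_abcdef1234567890XYZ")
if flagged:
    # The "real-time nudge": warn the user before the prompt leaves the device.
    print("Warning: sensitive content was redacted before sending.")
print(clean)
```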
The Cybersecurity and Infrastructure Security Agency (CISA) is facing major budget cuts and workforce reductions, making it increasingly important for the private sector to step up and play a larger role in maintaining a security-aware workforce. Sugarman noted that the biggest security risk from pasting sensitive information into AI tools is data leakage. According to the survey, nearly half (45%) of employees have used banned AI tools on the job, and 58% have pasted sensitive data into large language models.
Examples of such misuse include a junior engineer pasting an API key into a coding assistant or a manager uploading an employee review for summarization. Sugarman sees value in "just-in-time" interventions, such as browser-based warnings when someone pastes sensitive information into an AI prompt.
Sugarman also emphasized the need for tiered AI governance policies that distinguish between fully prohibited, conditionally approved, and fully supported uses, and for clearly communicating the reasoning behind these rules. Continuous monitoring of AI tool usage, contracts, output quality, and compliance is also crucial to ensure ongoing protection.
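One way such a tiered policy might be represented in tooling is as a simple lookup from use case to governance tier, as in the hypothetical sketch below. The use cases and tier assignments are assumptions chosen for illustration, not Anagram's actual policy.

```python
# Illustrative sketch of a tiered AI governance policy; the use cases and tier
# assignments are assumptions for demonstration purposes only.
GOVERNANCE_TIERS = {
    "prohibited":  {"upload_customer_pii", "paste_source_credentials"},
    "conditional": {"summarize_internal_docs", "draft_customer_email"},
    "supported":   {"brainstorm_marketing_copy", "explain_public_code"},
}

def tier_for(use_case: str) -> str:
    """Return the governance tier for a use case."""
    for tier, cases in GOVERNANCE_TIERS.items():
        if use_case in cases:
            return tier
    # Unknown uses default to human review rather than silent approval or blocking.
    return "conditional"

print(tier_for("upload_customer_pii"))        # prohibited
print(tier_for("brainstorm_marketing_copy"))  # supported
```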
By following these best practices, organizations can reduce the risk of data leaks, privacy violations, intellectual property exposure, misinformation, and bias, enabling them to leverage generative AI effectively and securely in the workplace.
- Following Harley Sugarman's recommendations, organizations should map all AI usage across the organization to identify high-risk areas and implement role-based access controls that restrict tool access based on job function and data sensitivity.
- To prevent unauthorized AI tool usage, Sugarman suggests prohibiting personal accounts and requiring use of corporate accounts with enhanced security features, and offering secure environments with internal guardrails.
- Clear AI usage policies specifying acceptable use, data handling, and consequences for violations, along with employee training on data privacy and security, are necessary to maintain a security-aware workforce.
- The private sector, in the wake of CISA's budget cuts and reduced workforce, should step up and safeguard sensitive data by deploying lightweight DLP tools, using real-time nudges, and continuously monitoring AI tool usage for compliance.