Employees who use generative artificial intelligence tools in the workplace without company approval or oversight — a practice known as “bring your own AI” or BYOAI — could introduce risks into the enterprise, according to MIT researchers.
“Make no mistake, I’m not talking theory here,” said Nick van der Meulen, a research scientist at MIT’s Center for Information Systems Research, during an MIT Sloan Management Review webinar. “This has been happening for quite some time now.”
The temptation to BYOAI could be especially acute at companies that have banned publicly available AI chatbots such as ChatGPT. Samsung, Verizon, JPMorgan Chase and other companies have banned or limited the use of external AI chatbots due to regulatory and security concerns.
The issue is gaining urgency among business leaders as AI models become more powerful and freely available to anyone, according to MIT.
Research from van der Meulen and fellow research scientist Barbara Wixom showed that about 16% of employees in large organizations were already using AI tools last year, with that number expected to rise to 72% by 2027. This includes sanctioned and unsanctioned use of AI.
They warned about the risks arising when employees use these tools without guidance.
“What happens when sensitive data gets entered into platforms that you don’t control? When business decisions are made based on outputs that no one quite understands?” van der Meulen said.
The researchers said there are two types of generative AI implementations: tools, general-purpose applications that employees use across everyday work, and solutions, generative AI embedded into specific business processes and systems.
Separating the two is useful because it “helps us tackle each differently and manage their value properly,” Wixom said.
Tools are a cost-management play and need to be handled much as spreadsheets and word processors are.
“In a way, they simply represent, for most organizations, the new cost of doing business,” van der Meulen said.
Solutions help different areas of the company, whether it’s the call center, marketing, software development or another business unit.
“They offer measurable lift in either efficiencies or sales,” Wixom said.
For example, information services provider Wolters Kluwer developed a generative AI tool that can read raw text directly from scanned images of lien documents. Banks using this tool were able to cut their loan processing time from weeks to days.
“That is not something that an individual employee at either Wolters Kluwer or the bank could have done on their own with a GenAI tool,” van der Meulen said. “It takes effort from many stakeholders to create these solutions and integrate them into systems.”
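The general pattern van der Meulen describes (OCR on scanned pages feeding a generative model that extracts structured fields) can be sketched in a few lines. The sketch below is illustrative only, built on assumed libraries (Pillow and pytesseract) and a placeholder extraction step; it is not Wolters Kluwer’s actual implementation.

```python
# A minimal sketch of the scanned-document pattern described above.
# NOT Wolters Kluwer's implementation: the libraries (Pillow, pytesseract)
# and the downstream extraction step are illustrative assumptions.
from PIL import Image   # pip install Pillow
import pytesseract      # pip install pytesseract (requires the Tesseract binary)

def ocr_scanned_page(path: str) -> str:
    """Extract raw text from one scanned page of a lien document."""
    with Image.open(path) as page:
        # Grayscale conversion often improves OCR accuracy on scans.
        return pytesseract.image_to_string(page.convert("L"))

def extract_lien_fields(raw_text: str) -> dict:
    """Placeholder for the generative AI step: a real solution would send
    raw_text to an approved model with a prompt asking for structured
    fields (debtor, creditor, amount, filing date, and so on)."""
    return {"raw_text": raw_text}  # hypothetical; returns the text unchanged

if __name__ == "__main__":
    text = ocr_scanned_page("lien_page_1.png")  # hypothetical file name
    print(extract_lien_fields(text))
```

The snippet also illustrates van der Meulen’s point: the hard part is not this code but the many-stakeholder work of integrating such a step into banks’ loan-processing systems.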
When AI is used as a tool, the employee is responsible for its successful use. When AI is used in the company as a solution, the organization owns its success, the researchers said.
This second distinction matters because it guides how the two types of generative AI should be governed in a company, they said.
Tips to Manage BYOAI

Simply banning these tools is neither practical nor effective.
“Employees won’t just stop using GenAI; they’ll start looking for workarounds,” said van der Meulen. “They’ll turn to personal devices, use unsanctioned accounts, hidden tools. So instead of mitigating risk, we’d have made it harder to detect and manage.”
The researchers recommended three key approaches to managing BYOAI.
First, organizations should tell employees which uses are always acceptable, such as searching for publicly available information, and which are not approved, such as entering proprietary information into a publicly available AI chatbot. In a survey of senior data and technology leaders, 30% reported having well-developed policies regarding workers’ AI use, the researchers said.
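For teams that want to back such a policy with a technical control, one common starting point is a pre-submission screen that blocks obviously sensitive text before it reaches a public chatbot. The sketch below is a minimal illustration under assumed patterns, not a complete data-loss-prevention system; production deployments typically rely on dedicated DLP tooling.

```python
# A minimal sketch of a pre-submission screen implied by such a policy:
# block a prompt that appears to contain proprietary or personal data
# before it is sent to a public AI chatbot. The patterns below are
# illustrative assumptions, not a complete data-loss-prevention system.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b(confidential|internal only|do not distribute)\b", re.I),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),              # US SSN-shaped strings
    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),   # IBAN-shaped strings
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt trips any blocked pattern."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

# The first prompt passes (publicly available information); the second is blocked.
assert is_safe_to_send("Summarize publicly available info on lien filings")
assert not is_safe_to_send("Draft a memo: CONFIDENTIAL Q3 revenue was ...")
```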
Second, employees need what the researchers called “AI direction and evaluation skills” (AIDE skills). Employees who cannot direct the tools well or critically evaluate their output will not get reliable results. An online tutorial is not enough; employees must practice.
For example, at Zoetis, a global animal health company, the data analytics unit runs hands-on AI practice sessions three times a week, with more than 100 employees attending each session.
The researchers said J.D. Williams, Zoetis’ chief data and analytics officer, likened it to teaching people how to change tires — by making them change tires.
Third, since banning AI tools won’t work, and allowing any and every tool makes safe use impossible to guarantee, organizations should instead provide employees with approved AI tools.
Zoetis implemented a “GenAI app store” where employees apply for a licensed seat. They have to say why they need the app, and then share their experiences using it. This helps the company identify valuable applications while managing costs.
“It’s how you avoid paying $50 a month for Joe from Finance who … used it exactly once to write a birthday card,” van der Meulen said.
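One lightweight way to picture the app-store workflow is a per-seat record that captures the justification up front and usage afterward, so idle licenses can be reclaimed. The schema and names below are hypothetical assumptions for illustration, not Zoetis’ actual system.

```python
# A hypothetical sketch of the record a "GenAI app store" workflow might
# keep per seat request: justification up front, usage feedback afterward.
# Field names and the app name are illustrative, not Zoetis' actual schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SeatRequest:
    employee: str
    app: str
    justification: str                  # why the employee needs the app
    granted_on: date | None = None
    monthly_cost_usd: float = 0.0
    usage_notes: list[str] = field(default_factory=list)  # shared experiences

    def flag_idle(self, uses_last_90_days: int) -> bool:
        """Flag seats that cost money but see almost no use, so the
        license can be reclaimed (the 'Joe from Finance' problem)."""
        return self.monthly_cost_usd > 0 and uses_last_90_days <= 1

req = SeatRequest("jfinance", "CopywriterGPT", "Draft customer emails",
                  granted_on=date(2024, 3, 1), monthly_cost_usd=50.0)
print(req.flag_idle(uses_last_90_days=1))  # True: candidate for reclaiming
```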
For organizations just beginning their GenAI journey, Wixom also recommended establishing a center of excellence — which could be a single person or a small team — to provide an enterprise-wide perspective and coordinate efforts across departments.
But most of all, “it is important to remind everyone what the end game is here,” Wixom said. “The point of AI, regardless of its flavor, should be to create value for our organizations and ideally value that hits our books.”