Discover the secrets to effective GenAI implementation and why embracing collaboration is crucial.
Jonathan Lerner, CEO of InterVision Systems, writes about why collaboration is essential when implementing GenAI and offers strategies for overcoming the roadblocks that arise.
There’s no question about the utility of AI — nearly nine out of ten leaders (87%) agree it’s necessary for business continuity. But how can AI expedite daily tasks without running afoul of regulations? And how can organizations manage consumer and client privacy concerns?
Questions of this nature aren’t new. IT and data leaders realize AI’s ethical and operational implications will only grow more complicated with time. Accordingly, organizations that firm up their approach to AI now will find themselves in a highly advantageous position five years from now.
The secret to effective AI implementation is simple: It takes a village. In this context, GenAI engines should never operate in isolation, nor should leaders adopt such tools without appropriate strategy and investment in platforms that support their organization’s risk and change management processes. Let’s dive into the roadblocks of GenAI adoption and how technology can surmount them.
The common axiom goes, “Trust but verify.” In the case of GenAI engines, I suggest a potent amendment: “Don’t trust and verify.”
Although powerful, GenAI tools aren’t well-versed in industry nuances, so their responses are often oversimplified or incorrect. Furthermore, LLMs are prone to hallucinating information. Relying on their outputs without verification, therefore, invariably invites inaccuracy. We’ve seen this play out across several sectors since ChatGPT’s debut in November 2022.
Earlier this year, an attorney used ChatGPT to generate a legal document citing several non-existent cases as precedent, a move that called the firm’s credibility into question. Stories like these aren’t uncommon in the legal field. According to Thomson Reuters, 15% of law firms have warned attorneys about GenAI usage.
But complete aversion to LLMs and GenAI is short-sighted. Instead, IT leaders should cautiously experiment with GenAI integrations and always double-check their work. Think of an LLM as a junior co-worker, not a tool, and you’ll probably be on the right track. Mayank Kejriwal, a Research Assistant Professor at the University of Southern California, puts it best:
“If large language models are used for [rational decision-making], humans need to guide, review and edit their work. And until researchers figure out how to endow large language models with a general sense of rationality, the models should be treated with caution.”
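In practice, Kejriwal’s point reduces to a simple workflow rule: the model drafts, a named human signs off, and nothing ships unreviewed. The sketch below is a hypothetical illustration of that gate, not any particular product’s API; the `Draft`, `review`, and `publish` names are invented for this example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """A piece of model-generated text awaiting human sign-off."""
    text: str
    approved: bool = False
    reviewer: Optional[str] = None

def review(draft: Draft, reviewer: str, verdict: bool) -> Draft:
    """Record an explicit human approval (or rejection) of a model draft."""
    draft.approved = verdict
    draft.reviewer = reviewer
    return draft

def publish(draft: Draft) -> str:
    """Refuse to release any draft a human has not approved."""
    if not draft.approved:
        raise ValueError("unreviewed model output cannot be published")
    return draft.text
```

The point of the gate is cultural as much as technical: attaching a reviewer’s name to every approval makes “the AI wrote it” an impossible excuse.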
All ChatGPT prompts become training data. While this training data is nominally private, third parties can extract it using well-formulated questions. Samsung learned this lesson the hard way earlier this year, when three of its developers entered sensitive, proprietary code into ChatGPT.
To protect the confidentiality of critical business information, employees must use LLMs with extraordinary care.
Consider the ramifications of mishandled data in a healthcare setting. Laws like HIPAA dictate harsh penalties for organizations that lose or share patients’ protected health information without consent. Although LLMs can improve clinical decision-making and standardize, improve, and expedite healthcare-related communications, providers must tread lightly to remain compliant.
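One basic precaution a team might take is stripping obvious identifiers from prompts before they ever leave the organization. The sketch below is a minimal, hypothetical example; a real deployment would rely on a vetted data-loss-prevention or PII-detection tool rather than ad-hoc regular expressions.

```python
import re

# Illustrative patterns only -- real identifiers take far more forms
# than these three, which is why production systems use dedicated tooling.
REDACTIONS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious identifiers with labeled placeholders
    before the prompt is sent to a hosted model."""
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Even a crude filter like this changes the default from “send everything” to “send nothing identifying unless someone deliberately allows it.”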
Industry-specific LLMs purporting to alleviate compliance and privacy concerns have recently increased in popularity. Harvey, a legal tech startup partnering with firms to create unique LLMs, has generated buzz this month (even garnering investment from GenAI front-runner OpenAI). Among Harvey’s promises is “no cross-contamination” between client data sets. Harvey’s wait list currently includes more than 15,000 firms worldwide.
The popularity of these options makes one fact clear: Leaders are eager to unlock the benefits of GenAI right now. But before shelling out for a proprietary system, leaders must ensure their IT infrastructure can support resource-intensive AI integrations.
AI amplifies issues endemic to every enterprise’s IT infrastructure. If an organization’s disaster recovery or data storage strategy is faulty, AI will widen the attack surface. Even when a GenAI integration does not actively share data, it natively stores it; without proper protocols, that stored data is a vulnerable and lucrative target for hackers.
Dumping high-stakes AI responsibilities onto an existing employee pulls critical resources away from revenue-generating functions. Worse, it opens the door to damaging fines and liabilities. Working with a partner eliminates these issues and gives your organization a scalable support system. Of course, the nature of that support will vary by enterprise size and type.
IT leaders should assess the following options as they look for an AI management partner:
AI system deployment, maintenance, and monitoring are complex, time-consuming tasks. The U.S. and 17 other nations recently released an international agreement delineating appropriate AI usage controls, including suggestions on proper monitoring standards and how to store and protect data. While not yet codified, these suggestions should be read as a preview of regulations to come.
As IT leaders navigate current and future regulations, only one certainty remains: AI harbors immense potential to drive progress, just as it threatens to exacerbate risks in existing IT infrastructure.
However, by keeping a realistic focus on its benefits and proactively managing those risks as a team, today’s most innovative enterprises can still use AI to secure a more efficient tomorrow.
CEO, InterVision Systems