Should You Trust AI with Your IT Services?

The public release of ChatGPT in November 2022 opened the floodgates to AI-powered solutions in nearly every branch of the tech industry (among others). From generating website code to providing legal advice, AI promises, at least in theory, to handle a lot of the heavy lifting involved in office work.

One question that has raised several thorny issues is how ready AI is, if at all, to handle IT-based services. On the face of it, the potential advantages are many: AI-based systems don’t need to eat, sleep or go on vacation. Many of the most common tech support issues don’t require the full attention of a trained expert (it’s just as easy for an AI system to ask a customer if they’ve tried restarting their device), and an algorithm-based AI can potentially spot an issue in a network configuration within seconds that would take a human significantly longer to investigate.

While that all sounds well and good, there are a few major considerations to take into account before you hand even partial control to an AI-based system.

Security and Leaked Data

Many, if not most, tasks requiring IT support involve a level of privileged access to a network or device. Troubleshooting and configuration by definition require authenticating oneself as a user or administrator of a system before anything can be accomplished.

In the case of AI-based systems, it should be borne in mind that most of them are constantly acquiring data to refine their models. Providing an AI with sensitive information, including passwords, IP addresses, network configurations, user data, and software licenses, means that any of that information could appear as part of the system’s output to a third party, or be accessed by other means.

Few AI systems provide any kind of transparency as to where their data comes from or how it’s accessed. From a security standpoint, this is an unacceptable risk.

Outdated Information

ChatGPT, which is more or less the gold standard for AI-based solutions, has a major limitation: its data set is mostly limited to information scraped online before 2022. Many of its would-be competitors are similarly limited.

A major source of IT-related issues is software and hardware updates; the newest version of Autodesk Flame may have compatibility issues with the most recent version of macOS, for example. Where someone with working knowledge of IT and IT-based systems would likely have hands-on experience with the most recent versions of their clients’ software, an AI-based system may be limited to information about earlier iterations. This would drastically limit its ability to identify, let alone address, an issue specific to more recent releases.

Niche Industries and Specialization

AI-based tools like ChatGPT are at their best when dealing with general topics. Prompted to generate a summary of Lord of the Rings or to write a macro in Excel, they can do so quickly and easily: one is among the most widely published written works in the English language, and the other is ubiquitous in nearly every office setting imaginable. Where AI-based systems tend to show their limitations is with highly specialized information.

While an AI-based system would likely have thousands upon thousands of examples from technical forums and software repositories for the most widely used software and hardware, it would have a significantly smaller well of data for smaller, more specialized niche industries. A heavily customized VFX application running on a relatively obscure flavor of Linux would likely come with considerations and requirements that reduce the system’s IT administration abilities to those of a layperson (if that).

IT departments and providers have a vested interest in knowing all of the ins and outs of the industries of their clients; a VFX or architectural studio has specific software needs that would be difficult to accumulate solely through online sources or content scraping. 

Data Poisoning and Hallucinations

The last consideration is a fundamental challenge at the core of AI: what if its data is simply wrong? Data poisoning is the term for feeding AI-based systems and their underlying algorithms false or misleading data, whether intentionally or accidentally. A simple example would be an AI system absorbing a popular misconception, such as the tongue having separate regions for different taste receptors, or heavy cannonballs falling faster than lighter objects.

A human who knows these examples are false would be unlikely to change their mind no matter how many times they hear them repeated. An AI-based system built on accumulated data, on the other hand, could not only absorb the misinformation but present it as fact, and then draw further conclusions from the misconception.

This isn’t a rare phenomenon; AI-based systems have repeatedly been shown to produce nonsensical, false or even dangerous results. For generated content, this is largely benign, but for a product responsible for managing online and IT-based resources, it has the potential to bring an entire business to a screeching halt until the source of the error is located.

TL;DR

In short, AI does have the potential to transform how the devices and hardware that businesses run on are administered. Even so, the stakes of relying on it completely are very high. AI is a relatively new field, with new kinks to be ironed out appearing every day, and basing IT support on it is still very much a gamble.

Wondering about how or whether to integrate AI into your business? Nodal can help! Contact us today.