AI has moved from novelty to infrastructure. It is embedded in tools, workflows, and platforms that MSPs and their clients rely on every day. The harder question now is not what AI can do, but what it should be trusted to do.
In this podcast studio session at MSP Global, the conversation turns to liability, judgment, and trust.
The panel is full of industry big hitters: Trice Johnson, Chief Data and AI Officer at First Genesis; Ben Lee, Head Nerd at N-able; and Thimo Groneberg, Chief Commercial Officer at Polarise. Their host is Olaf Kaiser of UBEGA Consulting.
The focus is on the reality of IT operations, automation, and decision-making at scale. The panel explores where AI already delivers undeniable value, where human oversight still matters most, and why blind trust may be just as risky as total resistance.
Here are five key takeaways from the session.
Trust in AI depends entirely on the task
There is no single answer to whether AI should be trusted. Analytical tasks, pattern recognition, and large-scale data processing are areas where AI already outperforms humans. Moral judgment, contextual decisions, and outcomes that affect people directly are a different matter.
The consensus is clear. Trust is not binary. It is a sliding scale, and MSPs must help clients understand how risk changes depending on what AI is being asked to do.
AI excels as a silent operator in complex systems
One of AI’s greatest strengths is its ability to operate behind the scenes: processing petabyte-scale data, scanning imagery, identifying anomalies, and surfacing risks that would take humans months to detect.
In operational environments, this kind of silent capability can prevent outages, predict failures, and save enormous costs. The value is not in replacing people, but in extending what is humanly possible.
Decision-making is not the same as system operation
A recurring theme is the distinction between running systems and making decisions. AI can optimize processes, recommend actions, and highlight risks. That does not mean it should act autonomously.
Most real-world use cases today sit in a co-working model. AI supports, accelerates, and advises, while humans retain responsibility for final decisions. MSPs play a critical role in designing and enforcing that boundary.
Human-in-the-loop is still essential
Despite rapid advances, the panel agreed that full autonomy is not the goal, at least not yet. Supervision, approval, and accountability must remain human-led.
Even more important is guarding against complacency. As AI tools become more capable, there is a real risk that operators stop checking outputs carefully. Trust without verification is where liability begins.
Governance matters more than confidence
What matters is not how confident any individual feels about AI, but how well it is governed. Clear rules around data use, validation, oversight, and escalation are what make AI safe in practice. MSPs are uniquely positioned to embed those controls, ensuring AI systems are powerful without being reckless.
The session leaves MSPs with a practical challenge. AI is not something to fear or blindly embrace. It is something to architect carefully.
Trust does not come from the technology itself. It comes from how thoughtfully it is deployed, monitored, and checked. And in that equation, MSPs remain firmly in the loop.
