Moya is a Christchurch-based Senior Consultant. As well as being adept at process improvement and creatively analytical about AI, she's also your go-to for steampunk and fantasy-writing chats.
Innovation always lands with a bang, followed by a chorus of "it’s the end of the world" and "it’s the start of a utopia."
In 1862, a series of articles in The Lancet highlighted fears for the health of railway passengers, including reduced intelligence, loss of visual and auditory power, and specific dangers to women's nervous systems due to vibration and speed.
Conversely, in 1857, Francis Mewburn confirmed his belief that the expansion of the railway system "spreads a network over the civilised world, binding nations together for the interchange of mutual interests."
Neither was right. The train didn’t melt our brains, but it didn’t bring world peace either. It was just a powerful tool that changed the map.
We’re seeing parallels with AI. Everything from environmental catastrophe to overcoming world hunger is attributed directly to the advent of AI.
But AI isn't a single, looming "entity." It’s a toolbox.
You wouldn’t use a sledgehammer to hang a picture frame, and you don't need a trillion-parameter model to sort your email. To get past the doom-loop or hope-loop predictions, we need to stop treating AI as a monolith and start seeing it as a suite of discrete technologies with vastly different personalities and footprints.
So, what’s under the AI hood?
We’ve lived with AI for decades; it just didn’t have a flashy marketing department back then. Reactive Machines (think IBM’s Deep Blue) are the "goldfish" of tech - they live entirely in the present, reacting to inputs based on a fixed map of rules.
Then you have Rule-Based Systems - the expert systems that run everything from medical diagnosis support to your company’s compliance checks.
These systems do what they say on the tin. They don’t "hallucinate" because they have no imagination - they just follow the rules. They are predictable, auditable, and have a high sustainability rating because they don't need a small power plant to run. If your problem requires precision and safety, don't buy into the hype of a "smart" model when a reliable workhorse will do. But, if the rules change swiftly or there are frequent unmodelled inputs, then these are not the systems you want.
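To make that concrete, a rule-based check is just explicit logic: every outcome traces back to a rule a human wrote. Here's a minimal sketch of a compliance-style checker (the rules, thresholds, and field names are invented for illustration):

```python
# A toy rule-based compliance checker: every decision is traceable to an
# explicit, human-written rule. No training data, no probabilities,
# no hallucinations - and no flexibility when the rules change.

RULES = [
    ("amount_exceeds_limit", lambda tx: tx["amount"] > 10_000),
    ("missing_approver",     lambda tx: not tx.get("approver")),
    ("restricted_region",    lambda tx: tx.get("region") in {"XX", "YY"}),
]

def check(tx):
    """Return the names of every rule this transaction violates."""
    return [name for name, rule in RULES if rule(tx)]

tx = {"amount": 12_500, "approver": "", "region": "NZ"}
print(check(tx))  # ['amount_exceeds_limit', 'missing_approver']
```

Because the rule list is the whole system, it's trivially auditable - but every change to policy means a human editing that list.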
Around the 2010s, we hit a pivot point. We moved from giving the machine rules to giving it patterns. Using tools like Scikit-learn, we started showing a computer 10,000 photos of cats and letting the machine figure out what a "cat" looks like.
This is the engine behind your spam filters and bank fraud alerts. It’s predictive analytics at its best and a "Medium" sustainability tier.
It’s incredibly useful for finding patterns in massive datasets, but it’s only as good as the data you feed it. We don’t worry about these models "dreaming up" fake facts, but we do worry about “Data Drift”. If the world changes and your patterns don't, your "intelligent" system becomes obsolete. It requires a human-in-the-loop to keep the signal clear, or an orchestrator to recheck for bias, regularly revisit assumed truths, and update for new data.
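Watching for drift can be as simple as comparing the statistics of incoming data against the data the model was trained on. A toy sketch (the feature, the training values, and the three-sigma threshold are all illustrative choices, not a standard):

```python
from statistics import mean, stdev

# Toy drift monitor: flag when recent inputs stray too far from the
# distribution the model was trained on. Imagine the values are a
# feature like "links per email" feeding a spam filter.

training_values = [1, 2, 1, 0, 2, 1, 3, 2, 1, 2]  # what the model learned on
mu, sigma = mean(training_values), stdev(training_values)

def drifted(recent_batch, n_sigmas=3):
    """True if the recent batch's mean sits more than n_sigmas away."""
    return abs(mean(recent_batch) - mu) > n_sigmas * sigma

print(drifted([1, 2, 2, 1]))      # patterns still match the training data
print(drifted([9, 11, 10, 12]))   # the world has changed - retrain time
```

When the check fires, that's the cue for the human-in-the-loop: retrain, re-examine assumptions, or retire the model.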
This is the AI everyone is talking about. These deep learning systems entered the public arena in 2018, and LLMs like ChatGPT and Claude are the poster children.
They don't have a database of facts; they have a probabilistic map of language. When you ask them a question, they aren't "looking it up" - they are predicting the most likely next word.
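You can see the "predict the next word" idea at toy scale with a bigram model: count which word tends to follow which, then always pick the most likely successor. (Real LLMs do this with billions of learned parameters rather than simple counts, but the principle is the same.)

```python
from collections import Counter, defaultdict

# Toy next-word predictor: learn bigram counts from a tiny corpus,
# then predict by picking the most frequent follower. An LLM plays
# the same "most likely next token" game with a vastly richer model.

corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict(word):
    """Most likely next word seen in training, or None if unseen."""
    options = followers.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict("the"))  # 'cat' - it followed "the" most often
```

Notice there's no lookup of facts anywhere - only frequencies. That's why fluency and truth are two different things in this tier.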
Hallucinations manifest in this tier of AI. They are the "exhaust fumes" of creativity. Because the model is designed to be imaginative, it can sometimes be confidently wrong.
This is also where the energy bill spikes, and where the sustainability rating drops to “Low”. These models run on power-hungry, CUDA-enabled GPUs to crunch the underlying matrix mathematics.
To mitigate hallucinations, we can use RAG (Retrieval-Augmented Generation): providing trusted documents to ground the model alongside any other available data. There is also the opportunity to define clear decision edge cases. This helps manage hallucinations, giving the engine its speed and the user a seatbelt.
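In spirit, RAG is retrieval bolted onto generation: fetch the most relevant trusted documents, then hand them to the model alongside the question so the answer is grounded. A minimal keyword-overlap sketch (the documents and scoring are invented; production systems use vector embeddings and an actual LLM call):

```python
# Minimal RAG-style retrieval: score trusted documents by word overlap
# with the question, then build a grounded prompt. This shows the shape
# of the technique only - real systems use embeddings, not word counts.

documents = [
    "Refunds are processed within 10 business days.",
    "Our office is closed on public holidays.",
    "Passwords must be reset every 90 days.",
]

def retrieve(question, docs, k=1):
    """Return the k docs sharing the most words with the question."""
    q = set(question.lower().split())
    return sorted(docs,
                  key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(question):
    context = "\n".join(retrieve(question, documents))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("how are refunds processed"))
```

The seatbelt is that final prompt: the model is steered toward documents you trust instead of its own probabilistic map.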
We are currently in the 2026 shift: moving from AI that talks to AI that acts. Agentic AI doesn't just write a plan; it executes it. It uses tools, browsers, and APIs to get a job done.
Agentic AI shifts the paradigm from task-centred to orchestration-centred architecture; these systems are capable of long-horizon planning and real-time adaptation.
With that power comes higher risk. Hallucinations in an agent aren't just wrong words - they’re wrong actions. This requires human-in-the-loop guardrails and relentless audit logs for all decisions. With high speed and high sophistication, we also see high energy costs.
Not all AI is the same. The goal isn't just to be "AI-ready" - it's to be intelligence-fluent.
By understanding the SWOT profile of each subgroup, we stop being spectators and start being architects. We can choose "High Sustainability" rule-based systems for compliance tasks and save the "High Octane" Agentic AI for complex problems. Or we can mix and match to find ways to layer AI to deliver what’s required for the best social, economic, and/or environmental value.
The future isn't about the machine; it’s about the people who know which tool to pick. Ready to grab a wrench? Or is it a screwdriver you need?