Five AI Concepts Every Board Director Should Know

MOST BOARD DIRECTORS assume that artificial intelligence (AI) is a technology issue, but that assumption is wrong.

AI is a decision-making issue, and that puts it squarely in the boardroom.

Directors do not need to build AI or write code. They do, however, need to govern it, and they cannot govern what they do not understand.
The following are five AI concepts that matter most at the board level:

Algorithm (the recipe behind the output): An algorithm takes inputs, follows steps, and produces an output. The governance question is not whether the recipe works, but rather who wrote it, what ingredients they used, and whether you trust the kitchen. When a company uses AI to approve loans, screen CVs, or flag fraudulent transactions, algorithms influence those decisions.

The board’s responsibility is to understand which decisions are driven by algorithms and whether anyone checks the results. When the word ‘algorithm’ appears in a board report, interpret it as ‘decision-making instructions’.

Then ask: Who wrote them, and who is auditing them?

Machine learning (the employee who learns by watching): Machine learning is a type of AI that improves through experience. Think of it as a new employee who learns by observation. Show the employee good examples, and they perform well. Show them bad examples, and they replicate those mistakes. The quality of any AI system depends entirely on its training data, the body of examples from which it learned.

The question for management is straightforward: What data are our AI systems trained on, and has anyone checked the data for blind spots?

AI hallucination (the confident ‘consultant’ who makes things up): This is the concept that catches most people off guard. AI systems, especially the generative AI tools that staff may already be using, sometimes produce information that sounds authoritative but is entirely fabricated.

For governance purposes, this issue falls under what could be called information integrity risk: Can you trust the outputs your AI systems generate? The sceptical question for the board is whether AI-generated information is verified before it informs any business decision.

Black box AI (the ‘consultant’ who can’t explain their reasoning): Some AI systems produce decisions or recommendations but cannot explain how they arrived at them. Technologists call this a ‘black box’.

The governance test is straightforward: Would you accept a recommendation from a consultant who could not explain their reasoning? Would your audit committee accept it?

If an AI system cannot explain its decisions, the company may struggle to defend them. In governance terms, this constitutes an unexplainable decision-making risk and should be added to the risk register.

Shadow AI (the AI use you cannot see): The term refers to employees using AI tools, such as ChatGPT, translation applications, and document summarisers, that the company has neither approved nor secured and may not even be aware of.

It’s convenient for the user, but invisible to the organisation, and that invisibility is what creates the exposure. Sensitive board papers, client data, and strategic plans might be flowing into AI tools that lack confidentiality protections and an audit trail.

In governance terms, this is an ungoverned technology risk, and it is almost certainly occurring in your organisation right now.

These five concepts form the foundation of informed AI oversight.

Directors who understand them are equipped to hold management accountable, scrutinise AI-related risks, and contribute meaningfully to a conversation that most boards have not yet begun.

The next step is straightforward: At your next board meeting, ask management: Which decisions within this organisation are currently being made or influenced by AI systems, and how would we know if something went wrong?

If management cannot answer this clearly, it reveals everything your board needs to know about the current state of AI governance.

  • Chisom Okafor-Obiudo is an admitted legal practitioner with specialisation in corporate and AI governance. She can be reached at chisomokafor11@gmail.com





