AI is everywhere. Even in this newsletter. But some use cases for AI carry more risk than others, and the use of AI by directors is risky business. In this article, we identify typical ways directors may use AI, explain the risks each presents, and suggest controls so that directors can use AI safely.

1. Introduction

AI is already in the boardroom — whether the board has approved it or not.

Board packs are being summarised by AI. Management papers are being drafted with AI assistance. Scenarios are being modelled, risks ranked, and recommendations shaped with the help of generative tools.

This isn’t inherently bad. In many cases, it’s genuinely helpful.

But when AI starts influencing strategic decisions, risk judgments, or advice to the board, the governance stakes change. Boards remain accountable for decisions — even where AI has informed them.

The question is no longer “should we use AI?”

It is “how do we use AI safely?” Real Safely.

2. Key boardroom AI use cases: benefits, risks, and controls

A. Decision-making and decision augmentation

The use case

AI is used to synthesise information, compare options, identify risks, or suggest preferred courses of action — for example in strategy, investment, or prioritisation decisions.

Key risks

  • Over-reliance on AI recommendations (“automation bias”)

  • Hidden assumptions or biased training data influencing outcomes

  • Limited explainability — directors cannot clearly articulate why a recommendation was made

  • Decision responsibility drifting away from humans

Recommended controls

  • Treat AI as an input, not a decision-maker

  • Require explicit human review and challenge of AI outputs

  • Record where AI informed a decision and how it was assessed

  • Ensure final accountability clearly rests with named individuals

B. Scenario analysis and forecasting

The use case

AI tools are used to model scenarios, stress-test assumptions, or forecast outcomes (financial, operational, or risk-related).

Key risks

  • False precision — outputs appear more certain than they are

  • Poor or incomplete data driving misleading conclusions

  • Failure to model low-probability, high-impact events

  • Directors misunderstanding model limitations

Recommended controls

  • Require disclosure of key assumptions, inputs, and limitations

  • Use AI to explore scenarios, not to predict outcomes

  • Pair AI outputs with qualitative judgment and experience

  • Document how scenario outputs informed (or did not inform) decisions

C. Note-taking and meeting summaries

The use case

AI is used to generate meeting notes, summaries, or action lists for board or committee meetings.

Key risks

  • Inaccurate or incomplete records

  • Loss of nuance in sensitive discussions

  • Confidential or privileged information processed by third-party tools

  • Unclear status of AI-generated notes as “official” records

Recommended controls

  • Treat AI-generated notes as drafts only

  • Require human review and approval before finalisation

  • Confirm where data is processed and stored

  • Clearly define when AI tools may or may not be used in meetings

D. Preparation of board papers and legal advice

The use case

AI is used to draft or assist with board papers, briefing notes, or legal and regulatory analysis that goes to the board.

Key risks

  • Hallucinated facts or incorrect legal interpretations

  • Outdated or jurisdictionally incorrect advice

  • Erosion of professional judgment

  • Confidentiality breaches and loss of legal privilege

Recommended controls

  • Prohibit unreviewed “copy-paste reliance” on AI outputs

  • Require expert review and sign-off for high-risk content

  • Clearly label AI-assisted drafts internally

  • Limit AI use for sensitive advice unless expressly approved

  • Ensure that the legal team understands how AI use can affect privilege

3. RI Advice: a cautionary tale for directors

The Australian case ASIC v RI Advice Group Pty Ltd (2022) is frequently cited in AI governance discussions — not because it involved AI (it did not), but because of what it says about director and organisational responsibility for technology risk.

The case concerned cybersecurity failures. The Federal Court found that RI Advice failed to maintain adequate risk management systems, breaching its obligations as an Australian Financial Services Licence (AFSL) holder under the Corporations Act.

The key lesson for boards is simple:

Foreseeable technology risks must be actively governed. Informality, ignorance, or unchecked delegation is no defence.

As AI becomes embedded in decision-making and advice, the same principle applies. Boards cannot assume that AI risk is being managed without clear frameworks, controls, and evidence.

AI does not change director duties — it raises expectations around how those duties are discharged.

4. What boards should do now: practical governance artefacts

Boards (particularly of SMEs) do not need enterprise-scale AI programs. But they do need a small set of clear, defensible governance artefacts.

Aligned with leading risk management practice (including the NIST AI Risk Management Framework), boards should expect to see the following:

1. AI Use Register

A simple register that records:

  • where AI is being used

  • for what purpose

  • whether it influences decisions, advice, or outcomes

This creates visibility and avoids “shadow AI” use.

2. AI Risk Assessment Template

For material AI uses, a short assessment covering:

  • intended purpose and benefits

  • key risks (bias, accuracy, explainability, reliance)

  • potential impacts if the AI fails or is wrong

  • risk rating and mitigations

This does not need to be complex — it needs to be documented.

3. Human Oversight and Accountability Statement

Clear articulation of:

  • who owns each AI-assisted activity

  • what decisions require human judgment

  • where AI outputs must be reviewed or challenged

This reinforces that accountability remains human.

4. Decision Log for AI-Informed Decisions

A lightweight log recording:

  • when AI was used to inform a decision

  • what the AI contributed

  • how its output was evaluated

  • who made the final call

This becomes critical evidence if decisions are later scrutinised, demonstrating that AI was used in a safe and defensible manner.

5. AI Use Policy (Board-Level, Not Technical)

A short, plain-English policy that sets boundaries around:

  • acceptable and unacceptable AI use

  • high-risk activities requiring approval

  • expectations around transparency and review

This sets tone without stifling innovation.

6. Incident and Escalation Triggers

Pre-defined thresholds for:

  • when AI errors, anomalies, or failures must be escalated

  • when the board should be notified

  • how incidents are reviewed and lessons captured

Boards should not be designing these during an incident.

5. Disclaimer

As with every one of the Real Safety articles, this article is for general information only and does not constitute legal or professional advice. Organisations should seek appropriate professional advice tailored to their circumstances before relying on any AI governance approach.