Monday March 23, 2026

AI Summit: Building a trusted digital future

On Wednesday 18 March, we hosted our AI Summit: Building a trusted digital future. Speakers examined how the public sector can adopt AI in ways that improve services, strengthen capability and maintain public trust. Across the keynote, breakout sessions and closing remarks, the discussion focused on a shared challenge: how to move from early experimentation to practical use at scale, while keeping human judgment, accountability and public confidence at the centre. 

Keynote conversation: Building a trusted digital future 

In the keynote discussion, Stephen Scheeler, CEO of Omniscient and former CEO of Facebook Australia and New Zealand, reflected on why this phase of AI development differs from earlier waves of digital change, describing AI not merely as a new software tool but as a new “intelligence layer” within organisations. He argued that by making cognition cheaper, faster and more accessible at scale, AI has implications not only for individual tasks, but for leadership, expertise, trust, organisational design and national resilience. His remarks highlighted the need for careful sequencing in the public sector: moving first in lower-risk areas where AI can improve efficiency and reduce friction, while taking greater care in higher-stakes settings where public trust could be undermined.

Stephen’s key points were: 

  • AI changes the economics of cognition, not just productivity. The shift is not only faster tasks, but a new “intelligence layer” that changes how organisations generate insight, make decisions and structure work. 
  • The more useful AI becomes, the more expertise matters. Keeping a human in the loop is not enough; that human needs sufficient experience and judgment to know when the system is wrong. 
  • There is a real long-term workforce risk in replacing the pipeline of future experts. Over-reliance on AI for junior tasks could leave organisations with too few experienced people in the future. 
  • Public sector adoption should be sequenced, not rushed. Lower-risk, back-end uses are a better starting point than high-stakes decisions that affect lives and livelihoods. 
  • The line government cannot afford to cross is the trust line. A major failure in a high-risk setting could damage public confidence in responsible AI adoption more broadly. 
  • National capability and sovereignty matter. Sovereign AI capability and public-private collaboration will be important to building trusted, resilient digital infrastructure for the future. 

Breakout session: Bridging the gap: upskilling the public sector for an AI-driven future

This session focused on how AI is changing work in practice across the public sector. Panellists discussed how tasks within roles are being reconfigured, and what that means for capability, confidence and job design. A recurring theme was that capability building depends not only on formal training, but on practical use, experimentation and peer learning.

Key takeaways 

  • The dominant pattern is role augmentation, not wholesale replacement. Many roles will stay the same, but the way work is done within them will change. 
  • Public sector strategy needs to be directional, not over-engineered. Organisations need a clear direction, not a rigid blueprint. 
  • AI fluency is the goal. AI capability needs to develop through regular use and ongoing learning. 
  • A good place to start is the most tedious low-risk task. Small practical uses can build confidence and judgment quickly. 
  • Capability often emerges in unexpected places. Curiosity and experimentation can matter more than seniority or job title. 

Breakout session: Scaling secure AI for the workforce

This session examined what is required to move from isolated pilots to more secure, consistent and scalable AI adoption. Speakers discussed the APS AI Plan’s pillars of trust, people and tools, and emphasised that the next challenge is not simply increasing use but improving the quality of use so that AI delivers clear public value while remaining governed and trusted. The discussion also highlighted the role of shared platforms, common assets and stronger capability in helping organisations avoid duplication and scale more effectively. 

Key takeaways 

  • The problem is no longer whether people are using AI, but whether they are using it well. Success is not adoption alone, but whether AI use is safe, useful and freeing up capacity for higher-value work. 
  • Capability constraints are now a bigger bottleneck than access to tools. For many organisations, the immediate challenge is literacy, judgment and practical use — not access to more technology. 
  • Scaling starts by baking AI into strategy, not isolating it as a side experiment. AI needs to be treated as a core operating issue, not a standalone pilot. 
  • Shared platforms are not just an IT convenience; they are a control mechanism. Common platforms help organisations manage usage, cost, governance and risk more consistently. 
  • Good public sector AI should solve citizen problems, not showcase novelty. The measure of success is better outcomes for citizens and businesses, not technology for its own sake. 

Breakout session: From risk to confidence: AI governance in action

This session focused on what AI governance looks like in practice. Speakers argued that many organisations remain stuck because they are trying to govern AI as if it were one single thing, rather than being specific about the technologies, use cases and risks involved. The discussion emphasised that effective governance depends on proportionate controls, clearer definitions, stronger leadership literacy and practical guidance for staff, with a particular focus on governing how people use AI rather than relying on policy documents alone. 

Key takeaways 

  • The first governance mistake is talking about AI as if it were one thing. Effective governance starts with being specific about the technology, use case and risk involved. Vague labels produce vague controls. 
  • The real risk is often human behaviour, not the technology in abstraction. Governance needs to focus on how people use AI, not just the tool itself. 
  • An AI policy is not governance. A written policy is only a starting point; governance also requires practical direction for staff, executive engagement, shared risk language and ongoing monitoring when things go wrong. 
  • Risk appetite is essential because not every use case should be treated as high risk. Clear boundaries help organisations support experimentation without losing control. 
  • The goal is to make AI governance boring in the best possible way. Over time, AI should become part of normal governance rather than something treated as exceptional. 

Breakout session: Digital stewardship at scale: redefining roles through AI and the Prism model 

This session focused on how digital stewardship can shape AI adoption in practical terms across the public sector. Speakers discussed how to identify valuable use cases, embed AI into operating models rather than bolt it on, balance experimentation with risk, and keep both staff experience and public outcomes in view. A recurring theme was that stewardship is not only a governance function, but a day-to-day responsibility that depends on curiosity, judgement, shared learning and clear organisational signals about where effort should be directed. 

Key takeaways 

  • Digital stewardship means embedding AI into how the organisation works, not treating it as an add-on. AI should be integrated into operating models, decision-making and service design rather than layered on top of existing processes. 
  • The strongest use cases improve staff experience and public experience at the same time. The sweet spot is where AI reduces internal pain points while also making services easier, faster or clearer for the public. 
  • Stewardship is most visible in everyday delivery, especially in the digital majority and the non-digital minority. Good stewardship means improving mainstream digital service while still designing carefully for the people who may be easiest to leave behind. 
  • Low-risk value can be lost through over-governance. Governance needs to be proportionate enough to protect against risk without choking small productivity gains. 
  • Curiosity is not a soft extra; it is part of stewardship. Experimentation and continuous learning are core behaviours for staff who want to build capability and use AI responsibly in context. 

Closing remarks: The Hon Patrick Gorman MP 

In his closing remarks, Minister Patrick Gorman reinforced that AI is now a practical public service issue rather than a future one and framed its adoption through a clear public service test: services should be easier to access, decisions should be fair, and human accountability must remain clear when things go wrong. He emphasised the need for AI capability across the APS, grounded in public trust, accountability and APS values, and pointed to a shift from isolated pilots to more coordinated adoption supported by shared tools, clearer assurance, stronger leadership and a continued focus on better outcomes for Australians. 

Minister Gorman’s key points were: 

  • AI literacy is no longer optional for the public service. AI has moved from theory to practice, and public servants need to understand it well enough to use it responsibly. 
  • Trust should be read through the citizen experience. Services should be easy to access, decisions should be fair, and human accountability must remain clear. 
  • “The buck does not stop with AI.” AI can assist, but responsibility remains with public servants and institutions. 
  • APS values still apply in full. Generative AI sits within, not outside, the APS values framework. 
  • The APS is moving from isolated pilots to coordinated adoption. Shared tools, assurance and whole-of-service capability are helping shift AI use from experimentation to more consistent practice. 

 

This event was delivered in collaboration with our Tier 1 partners: