
Why We Built Arvoe
I left my career in enterprise risk management in 2025 to build something for schools.
That sentence still feels strange to write. If you'd told me three years ago that I'd be building AI governance software for principals and compliance leads at K–12 schools, I would have assumed you had the wrong person. My career was enterprise risk. Large organisations. Woolworths. Macquarie Bank. Australian Retirement Trust. Complex systems, big teams, high stakes.
But the path from there to here turns out to be shorter than you'd think.
The problem we originally set out to solve
For years, I watched smart people ignore their risk management tools. Not because they didn't care about risk, but because the tools made their lives harder.
Enterprise GRC platforms are built for risk and compliance teams. They're designed to produce reports, not to help people make better decisions. The result is predictable: risk teams tolerate them because they have to, and everyone else refuses to engage. "I've got more important things to do. This tool makes my life harder, not easier."
My co-founder Lachlan Bransby and I lived this from different angles. I was leading risk transformation at Woolworths. Lachlan was deep in AI risk at a major Australian retailer. We'd both seen the same pattern: the people closest to the actual risks were the furthest from the tools designed to manage them.
So we started building Arvoe, an AI-native risk management platform. Not another dashboard. A system where you could have a conversation with your risk data. Ask it questions in plain language. Let AI do the heavy lifting of identifying patterns, surfacing emerging risks, and connecting the dots across your risk landscape.
The technical problem was solvable. Instead of a flat register, we modelled the entire governance landscape as an interconnected graph: risks, controls, policies, strategic objectives, stakeholders, all linked through typed relationships that capture how they affect each other. A risk doesn't sit in isolation. It connects to the controls that mitigate it, the policies that govern it, the objectives it threatens, and the other risks it escalates into. That structure changes how you understand and manage risk.
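To make that concrete, here's a minimal sketch of what a typed governance graph can look like. The names (GovNode, EdgeType, threatenedObjectives) are illustrative assumptions for the sketch, not Arvoe's actual schema.

```typescript
type NodeKind = "risk" | "control" | "policy" | "objective" | "stakeholder";

type EdgeType =
  | "mitigatedBy"   // risk -> control
  | "governedBy"    // risk -> policy
  | "threatens"     // risk -> objective
  | "escalatesTo";  // risk -> risk

interface GovNode {
  id: string;
  kind: NodeKind;
  title: string;
}

interface GovEdge {
  from: string; // source node id
  to: string;   // target node id
  type: EdgeType;
}

// "Which objectives does this risk threaten, directly or through the
// risks it escalates into?" becomes a graph traversal instead of a
// manual cross-referencing exercise over a flat register.
function threatenedObjectives(
  riskId: string,
  edges: GovEdge[],
  seen: Set<string> = new Set()
): string[] {
  if (seen.has(riskId)) return [];
  seen.add(riskId);
  const direct = edges
    .filter(e => e.from === riskId && e.type === "threatens")
    .map(e => e.to);
  const downstream = edges
    .filter(e => e.from === riskId && e.type === "escalatesTo")
    .flatMap(e => threatenedObjectives(e.to, edges, seen));
  return [...new Set([...direct, ...downstream])];
}
```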
On top of that graph, we built Ask Arvoe, a conversational AI interface that lets users create, update, and manage their governance data through natural language. No navigating complex dashboards. No learning curve. Just tell the system what you need, and it does the work.
The enterprise tool problem was real, and we were making progress on it.
But something else was happening that we couldn't ignore.
What changed
AI was accelerating. Not just in the abstract "technology is moving fast" sense. In a practical, visible, every-week-something-new sense.
Agents were operating inside organisations. Making decisions, processing data, executing tasks with minimal human oversight. The governance frameworks for these systems were either non-existent or hopelessly behind. We were building for this world, and it felt like we were in a permanent sprint to keep up.
At the same time, the conversation about the future of work was intensifying. What jobs would exist in ten years? What skills would matter? What would become automated? These questions stop being theoretical when you have kids.
Between us, Lachlan and I are raising seven kids. I have four with my wife Dee: Charlotte in Year 8, Sebastian in Year 6, and twins Max and Olivia in Year 3. Lachlan has three with his partner. All of them growing up in a world that will be fundamentally shaped by AI.
And all of them in Australian schools that are trying to figure out what to do about it.
What I started doing at home
Before I explain what we're building at Arvoe, I want to tell you what I started doing as a parent. Because it's directly connected.
I didn't wait for my kids' school to have an AI curriculum. I started teaching my kids myself.
My eldest son Sebastian and I are learning to vibe code together. He's in Year 6. We sit down and I show him how to find a problem worth solving, how to describe what he wants to an AI tool, how to evaluate whether the output is good or garbage. We talk about system architecture in terms he can understand: what's the frontend, what's the backend, where does the data live, what happens when something goes wrong. He's eleven, and he picks it up fast. When you strip away the jargon and show a kid how things actually work, they absorb it. The concepts click when they can see them in action.
What I'm really teaching him isn't coding. It's how to think alongside AI. How to direct it, evaluate its work, and understand its limitations. That's the skill that will matter in every job he might do.
We built a family management agent together. Our household has six people, two demanding careers, four kids with different school schedules, sports, activities, and social lives. So I built Scout, a family operations agent that helps with meal planning, calendar coordination, and budget tracking. My kids see it in action every day. They see how AI can organise complexity, save time, and make a busy household run more smoothly.
But more importantly, they see the decisions behind it. I show them what information Scout has access to and why. We talk about boundaries: what stays within our family's private systems versus what you'd never type into a public chatbot. Charlotte understands the difference between a tool your family controls and a random app asking for personal details. Seb knows that AI-generated answers need to be checked. The twins are starting to understand that the voice on the screen isn't a person.
These aren't abstract lessons. They're dinner table conversations that happen because the tools are part of our daily life.
We talk about the dark side too. My kids know what a deepfake is. They know that AI can generate images and videos of people that look real but aren't. We've talked about what that means for trust, for news, for the things they see online. Charlotte is at the age where social media is becoming a factor, and she needs to understand that not everything she sees is real, and that the tools to fake things convincingly are getting better every month.
We talk about how AI is changing what jobs look like. Not in a "robots are coming for your career" way, but in a practical way: the people who learn to work with these tools will have an advantage. The people who understand both what AI can do and what it shouldn't do will be the ones who lead.
Why this made us pivot to education
Here's what I realised through all of this: I have the background, the interest, and the time to do this for my kids. Most parents don't.
Most parents are relying on schools to prepare their children for an AI future. And most schools are doing their best, but they're doing it without governance frameworks, without risk tools, without a structured way to decide which AI tools are safe, how they should be used, and what guardrails need to be in place.
When I looked at what was available to help schools, I found a gap that shocked me.
Australia has a national framework for generative AI in schools, approved by Education Ministers on 5 October 2023, with a review endorsed in June 2025. It establishes six principles: Teaching & Learning, Human & Social Wellbeing, Transparency, Fairness, Accountability, and Privacy, Security & Safety, supported by twenty-five guiding statements.
It's a good document. The principles are sound.
What it doesn't include is any implementation guidance. We wrote more about this in The Framework Gap: What Australian Schools Need to Know About AI Governance.
No maturity model for schools to assess where they stand. No risk controls mapped to the principles. No assessment tools. No governance structures. No procurement frameworks for evaluating AI-enabled edtech products. No way for a principal, who already has a hundred other priorities competing for their attention, to know whether their school is governing AI well, badly, or not at all.
Compare that to the Higher Education AI Framework published in December 2025 by the Australian Centre for Student Equity and Success, which includes dedicated sections on governance structures, policy development, procurement, professional learning, pedagogical integration, evaluation frameworks, and cross-institutional collaboration. Universities got a roadmap. Schools got aspirational principles.
And schools are arguably the higher-stakes environment. Younger students. Greater vulnerability. Less institutional infrastructure. No equivalent of TEQSA conducting regulatory oversight.
The commentary from experts in this space reinforces the concern. Leon Furze, one of Australia's leading voices on AI in education, has written extensively about how the national framework and the National AI Plan are "either implicit rather than explicit, or tucked away in references to other frameworks which to date haven't been fleshed out."
Meanwhile, the regulatory environment is tightening. The Privacy and Other Legislation Amendment Act 2024 (a package of reforms to the Privacy Act and several related laws) became law in December 2024, and from 10 December 2026 it will require organisations to disclose in their privacy policies when automated decision-making is used to make decisions that significantly affect individuals. The ST4S Responsible AI Evaluation, a procurement assessment framework developed by Education Services Australia, is being rolled out in 2026 as a gateway for AI-enabled tools in schools. Schools will need to demonstrate that they're governing AI responsibly, not just intending to.
This is the gap that Arvoe exists to close.
What we're building
Arvoe is an AI governance platform purpose-built for Australian education, starting with the K–12 sector where the gap between policy and practice is widest.
We're not building a cut-down version of an enterprise GRC tool. We've seen what happens when you take software designed for a compliance team of twenty and hand it to a school business manager who has forty-seven other responsibilities. It doesn't get used.
We're building something different. Arvoe is the operational layer where a school records, manages, and monitors its entire AI governance posture. Everything lives in one place: risk registers, control libraries, maturity assessments, compliance tracking, and reporting. It's designed so that a school business manager or compliance lead can run their governance program without needing a GRC team or a consultant.
Within that platform, three things set Arvoe apart.
The AI Governance Maturity Model (see what mature AI governance actually looks like) helps schools assess where they stand today across key dimensions: policy and strategy, risk identification, vendor management, staff capability, student safety, and incident response. It evaluates both design effectiveness (do you have the right controls?) and operating effectiveness (are those controls actually working?). A school leader can complete the core assessment in twenty minutes and walk away with a clear picture of their maturity level and a prioritised list of next steps.
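As a rough illustration of the design-versus-operating distinction, here's one way a two-dimensional assessment could be scored. The 0–3 scale and the minimum-based aggregation are assumptions for the sketch, not Arvoe's actual model.

```typescript
interface ControlAssessment {
  controlId: string;
  designScore: 0 | 1 | 2 | 3;    // do you have the right control?
  operatingScore: 0 | 1 | 2 | 3; // is it actually working in practice?
}

// A control only counts when it scores on BOTH dimensions: a
// well-designed policy nobody follows is as weak as no policy at all,
// so this sketch takes the minimum of the two scores, not the average.
function effectiveScore(a: ControlAssessment): number {
  return Math.min(a.designScore, a.operatingScore);
}

function overallMaturity(assessments: ControlAssessment[]): number {
  if (assessments.length === 0) return 0;
  const total = assessments.reduce((sum, a) => sum + effectiveScore(a), 0);
  return total / assessments.length; // average effective score, 0 to 3
}
```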
The AI Risk & Control Library includes 67 controls across 36 risks, organised into nine categories and mapped directly to the National Framework's six principles. This isn't generic risk management. It's built for the specific risks schools face when adopting AI: student data flowing to third-party models, AI-generated content in assessments, staff using unapproved tools, vendor AI features enabled by default, and dozens more. Schools can adopt the library as a starting point and adapt it to their context.
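For a sense of what "mapped directly to the principles" means in practice, here's a hedged sketch of a single library entry. The principle names come from the National Framework; the field names, IDs, and example content are illustrative, not entries from the actual library.

```typescript
type Principle =
  | "Teaching & Learning"
  | "Human & Social Wellbeing"
  | "Transparency"
  | "Fairness"
  | "Accountability"
  | "Privacy, Security & Safety";

interface LibraryControl {
  id: string;               // illustrative id, not a real library entry
  description: string;
  mitigatesRisks: string[]; // ids of the risks this control addresses
  principles: Principle[];  // National Framework principles it supports
}

// Hypothetical entry for the "vendor AI features enabled by default"
// risk mentioned above:
const example: LibraryControl = {
  id: "CTL-012",
  description:
    "AI features in edtech products are disabled by default and only " +
    "enabled after a documented privacy and safety review.",
  mitigatesRisks: ["RSK-021"],
  principles: ["Privacy, Security & Safety", "Accountability"],
};
```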
Ask Arvoe is the conversational AI interface that sits across the entire platform. It lets school leaders create and update risk registers, run maturity assessments, check compliance status, and get governance guidance, all through natural language. Instead of learning where to click, you ask: "What are our highest-rated AI risks?" or "Add a new risk for student use of image generation tools" or "Are we meeting the National Framework's privacy requirements?" and get a contextual response drawn from your school's actual data. It turns governance from a chore into a conversation.
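Under the hood, a request like the ones above has to resolve into a structured operation against the school's own data. Here's a minimal sketch of what that mapping could look like; the operation shapes are hypothetical, not Ask Arvoe's actual interface.

```typescript
// Each natural-language request resolves to one of a small set of
// structured operations the platform knows how to execute.
type Operation =
  | { kind: "queryRisks"; minRating?: number }
  | { kind: "createRisk"; title: string; category: string }
  | { kind: "checkCompliance"; principle: string };

// "Add a new risk for student use of image generation tools" might be
// parsed by the language-model layer into:
const parsed: Operation = {
  kind: "createRisk",
  title: "Student use of image generation tools",
  category: "Student Safety",
};

// In a design like this, only the structured form executes, so the
// assistant's edits can flow through the same validation and audit
// trail as a manual change.
```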
Where we are today
We're early, and we're open about that.
We've been accepted into the Queensland AI Hub's Launch AI Pre-Accelerator program. We recently competed in a startup pitch battle night, testing our narrative against other founders and investors. And we're working with a leading Gold Coast independent school as our first education partner, onboarding their governance framework and learning firsthand what works and what needs to change.
Every product decision we make gets what Lachlan calls "the parent test": would we be comfortable if this was the system governing AI at our kids' school? If the answer is no, it doesn't ship.
What comes next
We believe AI governance in schools isn't a nice-to-have. It's the foundation that makes safe AI adoption possible. Without governance, schools face a binary choice: avoid AI (and leave their students unprepared) or adopt AI without guardrails (and hope nothing goes wrong). Neither is acceptable.
With governance, schools can say yes to AI with confidence. They can deploy AI tools in their operations, reducing administrative burden, improving communication, supporting staff. They can integrate AI into their curriculum, teaching students how to use these tools critically, creatively, and safely. And they can do all of this knowing they have the frameworks to manage risk, protect privacy, and demonstrate accountability.
That's the future we're building toward. Not because it's a good market opportunity, although it is. Because our kids are in these schools, and we want them to thrive in an AI world, not be left exposed by one.
If you're a school leader thinking about AI governance, we'd love to hear how your institution is approaching it. And if you'd like to explore what Arvoe could look like for your school, we're at arvoe.ai.
Ryan Speak is Co-Founder and CTPO of Arvoe, an AI governance platform for Australian schools. He lives on the Gold Coast with his wife Dee and their four children, all of whom attend a local school and are learning to navigate AI alongside their dad.
Want to see what Arvoe looks like for your school?
Take our free 5-minute self-assessment to see where your school stands — or book a demo to see the full platform in action.
Keep reading
The Framework Gap: What Australian Schools Need to Know About AI Governance (6 min read)
Australia has a national framework for AI in schools. What it doesn't have is a roadmap. Here's what that means for your school, and what you can do about it.

What Mature AI Governance Actually Looks Like (5 min read)
Australian teachers lead the world in AI adoption: 66% used AI tools in the past 12 months, double the OECD average. The gap? Governance. Here's what mature AI governance looks like in a real school.

December 2026: What Every School Leader Needs to Know (7 min read)
By December 2026, Australian schools will face two major privacy-related changes that will affect how they use AI and other digital tools. These are not just compliance updates; they are leadership, governance, and risk management issues.