The Architecture Is Being Built Right Now.
Here Is Where to Find the Doors.
Last week, over espresso and under the Chatham House Rule, I sat with a small group of people who collectively hold more leverage over the future of AI governance than most headlines would suggest. Not tech CEOs. Not foundation presidents. Diplomats. Mission staff. People who know what “binding” actually means in practice — because they are in the rooms where that gets decided.
I had ten minutes. I want to give you the same ten minutes here, because the window I described to them is the same window you’re looking at now.
─────────────────
Let me skip what everyone already knows — the turbulence, the rollbacks, the fragility of the multilateral system right now — and go straight to the more useful question.
The foundational decisions of digital governance are being made in this decade. What gets measured. What counts as a quality standard. Who sits in which technical rooms. The consequences will outlast every political disruption we are currently navigating.
The question is not how to defend what we have. It is how to build something firmer — and specifically, where the openings are right now to do that.
─────────────────
Name the structural problem precisely. The precision matters for the strategy.
We are at a moment of converging crises: rising inequality, democratic backsliding, the fragility of multilateralism, and the simultaneous construction of AI systems that will mediate access to healthcare, justice, finance, and political participation for billions of people.
Gender keeps getting added to digital governance frameworks rather than built into them. A paragraph near the end. A dedicated silo running alongside the technical negotiations without ever touching them. Language that says “recognizing the importance of” rather than language that binds design, measurement, or accountability.
Underneath that is a data problem with compounding consequences. The systems being built right now learn from data that mostly records what has been done to women — not what women build, lead, decide, or own. That is not a content gap. It is an architecture flaw.
The compounding works like this: women are excluded from the teams that build AI systems. So their lives, labor, and realities are absent from training data. Systems trained on incomplete data underperform for the populations they’re meant to serve. Those populations don’t adopt them. Less adoption means less data. The cycle tightens — in health, agriculture, finance, education, and justice — at the velocity at which AI operates.
The returns from breaking that cycle are documented. Gender-balanced teams produce 40% more patents (WIPO). Closing the digital gender gap could generate $1.5 trillion in global GDP (GSMA). These are not equity projections. They are the measurable cost of a structural flaw — already accumulating.
The argument for fixing this is therefore not primarily a gender argument. It is a data quality argument. An innovation argument. A governance legitimacy argument. A system trained on systematically unrepresentative data is a scientifically deficient system. That framing — representativeness as a quality standard, not a political value — is the one that survives technical scrutiny. And the one that opens doors with allies who would not come through a gender door.
─────────────────
So where does the architecture get built?
New York gets the headlines. Geneva is where the technical frameworks that actually govern behavior get negotiated — the indicator reviews, the data governance principles, the AI standards, the interoperability requirements. WSIS, the GDC implementation mechanisms, the ITU processes, the Human Rights Council special procedures. These are the rooms where what “binding” means in practice gets determined.
And unlike every other AI governance framework — the OECD principles, the Global Digital Compact, UNESCO’s ethics recommendation — WSIS has the plumbing. Biennial reporting cycles. Action Line structures with budgets and mandates. Direct links to development finance. Accountability to member states through CSTD. If you want gender-responsive AI governance to have teeth, WSIS is where those teeth get built.
Here is the specific gap. A/RES/80/173 — adopted by the UN General Assembly in December 2025 — explicitly mandates gender equality across all eleven WSIS Action Lines. That mandate exists. What does not yet exist is a single gender indicator in any of them.
Eight days before my espresso conversation, UNGIS presented the Draft Joint Implementation Road Map to CSTD 29 — the document meant to be the coherence architecture for WSIS and the Global Digital Compact going forward. I read it. It is publicly available. The five GDC objectives, the WSIS-GDC coherence matrix, the full entity and mechanism table, the implementation timeline — gender does not appear. The Partnership on Measuring ICT for Development is tasked with indicator review with no gender specification. The AI Panel is listed with no gender mandate. The entire implementation architecture, as currently drafted, is gender-blind.
That is not a political opinion. It is a fact about a document anyone can read.
And it is a fact about a document that is not yet final.
The WSIS Forum runs 6–10 July. It is explicitly designated in the Road Map as the multistakeholder input moment for Action Line road maps. Those road maps are due at CSTD 30 in April 2027. The GDC High-Level Review follows at the 82nd UNGA in 2027. The window is open right now. Ten weeks.
─────────────────
Three specific entry points.
The indicator review. For twenty years WSIS has measured connectivity — who has a phone, who has internet access. These matter. But they measure the entrance. They tell you nothing about whether the digital economy is generating returns for women — who leads research teams, who files patents, who sets the standards, who controls the infrastructure, who captures the value.
A next generation of indicators would track: women’s share of AI-related patents and publications. Women’s participation in technical standard-setting bodies — because standards are where interoperability gets defined, and economies absent from that process become rule-takers, not rule-makers. Women’s decision-making authority over AI investment — not advisory roles, actual budget authority.
We are developing this indicator framework now, working across the WSIS Action Lines, grounded in the resolution’s own language, timed to feed the 2027 review. We are looking for partners. If your organization has standing in the measurement processes, that is the conversation I want to have.
The AI Panel — and the gap in its mandate. The Independent International Scientific Panel on AI is now in session. The Road Map tasks it with mapping existing UN system-wide capacity-building initiatives and identifying gaps — with no gender requirement in that mandate. That gap is addressable right now, before the mapping is complete.
Scientists care about data quality, reproducibility, and validity. A training dataset that systematically excludes significant population segments fails basic scientific standards. The entry point is not gender policy — it is research integrity. Gender advocates alone will not move this process. Scientists making a scientific argument will.
Procurement — the implementation lever almost no one is using. Every member state is currently developing or revising a national AI strategy. Every member state is also a significant procurer of AI systems — for public services, healthcare, justice, border management. The contractual moment — what a government requires from a vendor before signing — is where governance principles either become real or remain aspirational.
Our work on public trust frameworks for AI procurement — recognized with an Honorable Mention at CHI 2026, the flagship research conference on human factors in computing — shows what this looks like in practice: requiring vendors to disclose training data composition, demographic distribution, and validation methodology before deployment. Mandatory human rights impact assessment as a condition of contract, not a post-deployment addition.
This does not require new treaty language. It requires a procurement clause. Governments can push capitals on this now.
─────────────────
One number to carry into every conversation that follows.
Women represent 18% of inventors on AI-related patents globally. That is WIPO’s own figure — from the world’s primary intellectual property institution. It is not a gender statistic. It is an innovation productivity metric. Every conversation with a science ministry, a standards body, a finance portfolio about AI investment should be anchored to it.
─────────────────
What I said to close the espresso conversation, I’ll say here too.
The goal is not to defend gender equality in digital governance against the forces trying to remove it. The goal is to build the architecture so well — so grounded in scientific standards, economic evidence, and legal obligation, with a coalition broad enough — that removing it becomes technically and politically incoherent.
The rooms where that happens are in Geneva. The time is the next ten weeks.
The question is who is in those rooms, and what they bring with them.
Caitlin Kraft-Buchman is Co-Chair of the CSTD Gender Advisory Board and CEO / Founder of Women at the Table / AI & Equality Human Rights Initiative. Women at the Table holds Observer status at CDNET, the Council of Europe’s Steering Committee for New and Emerging Digital Technologies.
Image: Elise Racine & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
A traditional Tibetan weaver in Dharamsala, India, working at a loom, framed by yellow bounding boxes that classify different elements of textile craft. The image features digital distortion effects and fragmented views of weaving tools and finished textiles. Text has been selectively blanched from the composition, symbolizing whose voices are included or excluded in conversations about digital transformation.