AI Strategy · Governance · Regulatory Systems

I Worked in the Problems
AI Is Still Trying to Solve.

Trade negotiation. Legal research. In-flight safety decisions. Before I studied AI, I worked in the exact domains where AI systems break down — ambiguous rules, high-stakes judgment calls, and accountability gaps that nobody wants to own. Now I build the governance frameworks that bridge that gap.

Regulatory Enforcement · Judicial Process · EU AI Act · Fail-Closed Design · Human Oversight · Privacy by Design
01 — The Story

AI Is Being Deployed in Every Domain
I've Worked In. I've Seen the Gaps.

Three careers. Three domains where AI systems are failing right now — for reasons I understand from the inside.

01
Trade Negotiation → Where AI Reads Rules Too Literally

At Taiwan's Bureau of Foreign Trade, I coordinated international agreement negotiations across multilateral frameworks where the same clause means different things to different parties. AI tools for trade compliance fail here constantly — they parse the text but miss the negotiating intent behind it. I know the difference, because I was in the room.

"An AI that reads a trade agreement without understanding what was conceded to get it signed will give you confident answers that are completely wrong."
02
Legal Research → Where AI Hallucinates with Authority

As a judicial assistant at Taoyuan District Criminal Court, I prepared case analyses where similar fact patterns produced different verdicts — and the difference was always in details an LLM would flatten. Legal AI today is confidently wrong in exactly this way: it finds patterns, misses the legal reasoning, and cites precedent that doesn't hold. I was trained to spot that gap before it reached a judge.

"The danger of legal AI is not that it's wrong. It's that it's wrong in complete, well-formatted sentences."
03
Aviation Safety → What Fail-Closed Actually Means

At Cathay Pacific, I made real-time safety calls where the protocol was clear: if uncertain, escalate. Don't improvise. Don't override. That principle is why I build AI systems with fail-closed architecture and explicit human escalation paths — not because a framework says to, but because I've felt the cost of the alternative in a way that textbooks don't capture.

"Every AI governance framework says 'human oversight.' Very few people building those systems have ever had to exercise it under pressure."
02 — Work

Selected Projects

Each mapped to its AI governance dimension. Click any card to explore.

03 — Framework

AI Governance Is a Design Problem

The EU AI Act, NIST AI RMF, and emerging sector regulations are not just compliance checklists — they are a design language for building accountable systems.

Core Governance Principles — Applied
Risk Classification
Every AI system starts with a risk tier. High-risk systems require stricter validation, audit trails, and human oversight by regulation.
📋 Auditability & Traceability
Every output should be traceable to its inputs and logic — like a flight data recorder for model decisions.
👤 Human Oversight by Design
Fail-closed mechanisms and escalation paths are first-class architecture decisions, not safeguards added later.
🔍 Explainability Proportional to Stakes
A VR recommendation and a loan denial require different levels of explanation. Governance scales with consequence.
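The fail-closed and auditability principles above can be sketched in a few lines of Python. Everything here is illustrative: the function names, the 0.8 confidence threshold, and the sample inputs are placeholders, not taken from any real deployment.

```python
# Hypothetical sketch of a fail-closed prediction wrapper with an audit trail.
# The threshold and field names are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; a high-risk tier would tune and justify this


@dataclass
class Decision:
    value: str        # the answer passed downstream, or "ESCALATED"
    escalated: bool   # True when a human must decide
    audit_log: dict   # inputs, raw output, and confidence retained for traceability


def fail_closed_predict(model_output: str, confidence: float, inputs: dict) -> Decision:
    """If uncertain, escalate. Don't improvise. Don't override."""
    log = {"inputs": inputs, "raw_output": model_output, "confidence": confidence}
    if confidence < CONFIDENCE_THRESHOLD:
        # Fail closed: withhold the model's answer and route to a human.
        return Decision(value="ESCALATED", escalated=True, audit_log=log)
    return Decision(value=model_output, escalated=False, audit_log=log)


# A confident output passes through; an uncertain one is held for human review.
ok = fail_closed_predict("45 min wait", 0.93, {"queue": 12})
held = fail_closed_predict("10 min wait", 0.41, {"queue": 3})
```

The point of the sketch is that the escalation path and the audit log live in the same code path as the prediction — they are architecture, not an afterthought.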
Regulatory Landscape
Framework | Scope | Risk Level
EU AI Act | All AI in the EU market | High
NIST AI RMF | US federal / voluntary | Med
GDPR / Privacy | Personal data processing | High
FCRA / Fair Lending | Credit decisions | High
ISO 42001 | AI management systems | Med
HIPAA | Healthcare AI | High
Healthcare AI — Multi-Jurisdiction Governance (Project 5)
Cancer Hospital Waiting Time Prediction · EU AI Act Annex III, 5(a) — Medical Diagnostic Support
Jurisdiction | Legal Basis | Applicability | Risk Tier | Key Obligations | Status
🇪🇺 EU | EU AI Act (2024/1689) | Annex III, 5(a) — medical diagnostic support | High-Risk | Conformity assessment, human oversight, technical documentation, post-market monitoring | ⚠ Compliance Required
🇩🇪 Germany | EU AI Act + SGB V (§ 139e) | DiGA pathway if patient-facing; advisory if staff-only | High | DiGA certification for patient-facing; data localization; hospital IT security (KRITIS) | ⛔ DiGA if patient-facing
🇫🇷 France | EU AI Act + HAS guidelines | Applies as clinical decision support | High | HAS evaluation, CNIL data processing declaration, CE marking if Class IIa | ⚠ HAS Evaluation
🇳🇱 Netherlands | EU AI Act + NEN 7510 | Hospital information security scope | Med-High | NEN 7510 compliance, DPIA under GDPR, transparency to patients | ✓ DPIA sufficient
🇬🇧 UK | MHRA AI/ML Guidance + UK GDPR | Software as a Medical Device (SaMD) assessment | Med | MHRA SaMD classification, ICO data protection, NHS DSP Toolkit | ⚠ MHRA Review
🇺🇸 USA | FDA Clinical Decision Support + HIPAA | Non-actionable CDS if staff-facing only | Low-Med | HIPAA BAA, FDA CDS exemption criteria, state-level AI disclosure laws | ✓ CDS Exempt (staff)
🇹🇼 Taiwan | TFDA AI/ML SaMD Guidance | Applies if deployed in clinical setting | Med | TFDA registration, PIMS compliance (personal data), clinical validation | ⚠ TFDA Registration
🇨🇳 China | NMPA AI Medical Devices Reg. + PIPL | Class III medical device if diagnostic | High | NMPA Class III approval, PIPL consent, data localization, algorithm filing | ⛔ Class III NMPA
Governance Design Note: Patient-facing deployment (showing q75 wait estimates) triggers the highest regulatory burden globally. Staff-only deployment (operational dashboard) reduces obligations significantly — a deployment scope decision that is itself a governance choice.
Projects by Governance Dimension
What are the 6 Governance Layers? Each project was evaluated across six accountability dimensions from the EU AI Act and NIST AI RMF: ① Risk Classification ② Auditability ③ Explainability ④ Human Oversight ⑤ Privacy by Design ⑥ Fairness & Bias
04 — Conclusion

What I Bring to AI Strategy
That Pure Technologists Don't

The hardest part of AI consulting isn't building the model. It's knowing which risks are real, which rules are negotiable, and when to tell a client their AI strategy is going to fail — before it does.

01
🌐
Governance Must Survive Jurisdictions

International trade negotiations taught me that the same rule means different things across legal contexts. AI deployed across borders faces the identical problem. I design governance frameworks that specify not just what a system must do, but what changes when the jurisdiction does.

Applied in: ML Studio · MARS Model · Chatbot Compliance
02
⚖️
If You Can't Show the Reasoning, the Output Doesn't Count

Criminal court work lives or dies on the quality of its reasoning chain. A judgment that can't be traced to its evidence is worthless — and so is an AI decision that can't be explained. Explainability isn't a feature I add. It's the standard I start from.

Applied in: MARS Model · Chatbot Compliance
03
✈️
Design for the Moment the System Fails

Aviation safety protocols exist for exactly the conditions where normal assumptions break down. Good AI governance works the same way: human escalation paths, fail-closed defaults, and consent frameworks are designed for the edge case — not the average. If they're not core architecture, they're not real governance.

Applied in: L'Oréal VR · Operations AI
Interactive — Try It
Which governance principle applies to your AI use case?
Available for opportunities
🎓 MS AI in Business · Simon, U of R ⚖️ Ex-Judicial Assistant · Taoyuan Court 🌐 Ex-Bureau of Foreign Trade, TW ✈️ Ex-Cathay Pacific Cabin Crew
05 — About Me

Iris Wang

MS AI in Business candidate at Simon Business School (University of Rochester). Before this degree, I spent years working in the exact domains AI consultants are now being hired to transform — and are often getting wrong.

🌐 Bureau of Foreign Trade, TW
International Agreement Liaison
Coordinated multilateral trade negotiations. Learned that policy language is ambiguous by design — and AI tools that treat it as fixed rules fail every time.
⚖️ Taoyuan District Criminal Court
Judicial Assistant
Drafted case analyses and legal research for criminal proceedings. Saw first-hand how similar facts produce different verdicts — the exact pattern LLMs miss.
✈️ Cathay Pacific Airways
Cabin Crew
Executed safety protocols under pressure with incomplete information. Built the instinct for fail-closed design that now shapes every AI system I architect.

The MS in AI in Business gave me the technical vocabulary for problems I'd already lived. The combination — domain experience in law, trade, and safety plus the ability to build the systems — is what I bring to AI strategy and governance consulting.

Technical
Python · Streamlit · scikit-learn · R · SQL
Governance
EU AI Act · NIST AI RMF · GDPR · FCRA · ISO 42001
Domain Expertise
International Trade Law · Criminal Justice Process · Aviation Safety Protocols
Contact Me → View Projects
06 — Connect

Let's Talk AI Governance

Open to full-time roles in AI governance, policy analysis, and AI compliance — particularly where regulatory expertise and technical fluency both matter.

Currently available for opportunities

Interested in: AI Policy · Governance Consulting · Regulatory Affairs · Research