Trade negotiation. Legal research. In-flight safety decisions. Before I studied AI, I worked in the exact domains where AI systems break down — ambiguous rules, high-stakes judgment calls, and accountability gaps that nobody wants to own. Now I build the governance frameworks that bridge that gap.
Three careers. Three domains where AI systems are failing right now — for reasons I understand from the inside.
At Taiwan's Bureau of Foreign Trade, I coordinated international agreement negotiations across multilateral frameworks where the same clause means different things to different parties. AI tools for trade compliance fail here constantly — they parse the text but miss the negotiating intent behind it. I know the difference, because I was in the room.
As a judicial assistant at Taoyuan District Criminal Court, I prepared case analyses where similar fact patterns produced different verdicts — and the difference was always in details an LLM would flatten. Legal AI today is confidently wrong in exactly this way: it finds patterns, misses the legal reasoning, and cites precedent that doesn't hold. I was trained to spot that gap before it reached a judge.
At Cathay Pacific, I made real-time safety calls where the protocol was clear: if uncertain, escalate. Don't improvise. Don't override. That principle is why I build AI systems with fail-closed architecture and explicit human escalation paths — not because a framework says to, but because I've felt the cost of the alternative in a way that textbooks don't capture.
Each mapped to its AI governance dimension. Click any card to explore.
The EU AI Act, NIST AI RMF, and emerging sector regulations are not just compliance checklists — they are a design language for building accountable systems.
| Framework | Scope | Risk Level |
|---|---|---|
| EU AI Act | All AI in EU market | High |
| NIST AI RMF | US federal / voluntary | Med |
| GDPR / Privacy | Personal data processing | High |
| FCRA / Fair Lending | Credit decisions | High |
| ISO 42001 | AI Management Systems | Med |
| HIPAA | Healthcare AI | High |
| Jurisdiction | Legal Basis | Applicability | Risk Tier | Key Obligations | Status |
|---|---|---|---|---|---|
| 🇪🇺 EU | EU AI Act (2024/1689) | Annex III, 5(a) — medical diagnostic support | High-Risk | Conformity assessment, human oversight, technical documentation, post-market monitoring | ⚠ Compliance Required |
| 🇩🇪 Germany | EU AI Act + SGB V (§ 139e) | DiGA pathway if patient-facing; advisory if staff-only | High | DiGA certification for patient-facing; data localization; hospital IT security (KRITIS) | ⛔ DiGA if patient-facing |
| 🇫🇷 France | EU AI Act + HAS guidelines | Applies as clinical decision support | High | HAS evaluation, CNIL data processing declaration, CE marking if Class IIa | ⚠ HAS Evaluation |
| 🇳🇱 Netherlands | EU AI Act + NEN 7510 | Hospital information security scope | Med-High | NEN 7510 compliance, DPIA under GDPR, transparency to patients | ✓ DPIA sufficient |
| 🇬🇧 UK | MHRA AI/ML Guidance + UK GDPR | Software as Medical Device (SaMD) assessment | Med | MHRA SaMD classification, ICO data protection, NHS DSP Toolkit | ⚠ MHRA Review |
| 🇺🇸 USA | FDA Clinical Decision Support + HIPAA | Non-actionable CDS if staff-facing only | Low-Med | HIPAA BAA, FDA CDS exemption criteria, state-level AI disclosure laws | ✓ CDS Exempt (staff) |
| 🇹🇼 Taiwan | TFDA AI/ML SaMD Guidance | Applies if deployed in clinical setting | Med | TFDA registration, PIMS compliance (personal data), clinical validation | ⚠ TFDA Registration |
| 🇨🇳 China | NMPA AI Medical Devices Reg. + PIPL | Third-class medical device if diagnostic | High | NMPA Class III approval, PIPL consent, data localization, algorithm filing | ⛔ Class III NMPA |
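A matrix like the one above only becomes governance when it is enforced in the deployment path. Below is a minimal sketch of that idea, assuming a staff-facing diagnostic-support system: the jurisdiction codes, the `JurisdictionRule` type, and the abbreviated rows are illustrative, not a real compliance tool, and the key design choice is that an unmapped jurisdiction fails closed rather than defaulting to "allowed".

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    """Deployment status per jurisdiction (mirrors the table's last column)."""
    CLEARED = "cleared"          # the ✓ rows
    REVIEW_REQUIRED = "review"   # the ⚠ rows
    BLOCKED = "blocked"          # the ⛔ rows


@dataclass(frozen=True)
class JurisdictionRule:
    legal_basis: str
    risk_tier: str
    status: Status


# Illustrative encoding of a few rows; obligations abbreviated.
POLICY = {
    "EU": JurisdictionRule("EU AI Act (2024/1689)", "High", Status.REVIEW_REQUIRED),
    "DE": JurisdictionRule("EU AI Act + SGB V", "High", Status.BLOCKED),
    "US": JurisdictionRule("FDA CDS + HIPAA", "Low-Med", Status.CLEARED),
}


def deployment_status(jurisdiction: str) -> Status:
    """Fail closed: a jurisdiction the policy does not know about is
    treated as blocked, never silently allowed."""
    rule = POLICY.get(jurisdiction)
    if rule is None:
        return Status.BLOCKED
    return rule.status
```

The point of encoding the table this way is that "what changes when the jurisdiction does" becomes a testable property of the system instead of a footnote in a slide deck.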
The hardest part of AI consulting isn't building the model. It's knowing which risks are real, which rules are negotiable, and when to tell a client their AI strategy is going to fail — before it does.
International trade negotiations taught me that the same rule means different things across legal contexts. AI deployed across borders faces the identical problem. I design governance frameworks that specify not just what a system must do, but what changes when the jurisdiction does.
Criminal court work lives or dies on the quality of its reasoning chain. A judgment that can't be traced to its evidence is worthless — and so is an AI decision that can't be explained. Explainability isn't a feature I add. It's the standard I start from.
Aviation safety protocols exist for exactly the conditions where normal assumptions break down. Good AI governance works the same way: human escalation paths, fail-closed defaults, and consent frameworks are designed for the edge case — not the average. If they're not core architecture, they're not real governance.
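The fail-closed principle above can be made concrete in a few lines. This is a sketch, not a production pattern: the confidence threshold, the `governed_decision` wrapper, and the `to_human_queue` helper are all hypothetical names chosen for illustration. What it shows is that the escalation path is part of the control flow, and that a malformed input is itself treated as a failure.

```python
from typing import Callable

CONFIDENCE_FLOOR = 0.90  # below this, the system must not act alone


def governed_decision(
    prediction: str,
    confidence: float,
    escalate: Callable[[str, float], str],
) -> str:
    """Fail-closed wrapper: uncertain or malformed outputs never become
    autonomous actions; they are routed to a human escalation path."""
    if not (0.0 <= confidence <= 1.0):
        # An out-of-range confidence is a fault condition: close, don't guess.
        return escalate(prediction, confidence)
    if confidence < CONFIDENCE_FLOOR:
        return escalate(prediction, confidence)
    return prediction


def to_human_queue(prediction: str, confidence: float) -> str:
    """Explicit escalation path: the human sees both the output and
    how uncertain the system was about it."""
    return f"ESCALATED: {prediction} (confidence={confidence:.2f})"
```

If the escalation function were optional or bolted on afterward, the default behavior under uncertainty would be to act anyway; making it a required argument is the "core architecture, not a feature" claim in code form.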
MS in AI in Business candidate at Simon Business School (University of Rochester). Before this degree, I spent years working in the exact domains AI consultants are now being hired to transform, and are often getting wrong.
The MS in AI in Business gave me the technical vocabulary for problems I'd already lived. That combination — domain experience in law, trade, and safety, plus the ability to build the systems — is what I bring to AI strategy and governance consulting.
Open to full-time roles in AI governance, policy analysis, and AI compliance — particularly where regulatory expertise and technical fluency both matter.
Currently available for opportunities
Interested in: AI Policy · Governance Consulting · Regulatory Affairs · Research