AI Governance & Explainability Engineer

Arbitration Forums Inc.
Full-time
Remote
United States
AI Governance

DEPARTMENT: Data Insights and Innovation

JOB TITLE: AI Governance & Explainability Engineer 

JOB CODE: AIGEE 

REPORTS TO: Data Governance Lead 

FLSA STATUS: Exempt 

EMPLOYMENT TYPE: Full-Time 


JOB PURPOSE:

This role at Arbitration Forums is as unique as it is rewarding because of the AF IPAAL Values (Integrity, Passion, Accountability, Achievement, Leadership) and TRI Model (Trust, Respect, Inclusion). 

The AI Governance & Explainability Engineer is a hands-on technical role within the Data Governance team responsible for ensuring AI, GenAI, and Agentic AI solutions are explainable, governable, auditable, and production-ready. 

This role embeds governance directly into the AI technology stack, translating policies, regulatory expectations, and risk requirements into technical controls, automated checks, standardized artifacts, and release gates across the AI lifecycle. 

The role combines AI/ML engineering depth, GenAI and Agentic AI design knowledge, and governance discipline to ensure AI solutions are explainable and can be trusted, defended, and audited in production, particularly within the Microsoft Fabric and Purview ecosystem.  


DEPARTMENTAL EXPECTATION OF EMPLOYEE 


Adheres to AF Policy and Procedures and the AF IPAAL Values and TRI Model 

  • Acts as a role model within and outside AF.
  • Performs duties as workload necessitates. 
  • Maintains a positive and respectful attitude. 
  • Communicates regularly with the departmental leader about department issues. 
  • Demonstrates flexible and efficient time management and ability to prioritize workload. 
  • Consistently reports to work on time, prepared to perform duties of the position. 
  • Meets Department productivity standards.


ESSENTIAL DUTIES AND RESPONSIBILITIES

  • AI Governance by Design Engineering (execution focus, not policy writing) 
    • Embed governance, explainability, and risk controls directly into AI, GenAI, and Agentic AI workflows 
    • Translate enterprise AI policies, standards, and Responsible AI principles into: 
      • Technical guardrails 
      • Automated checks 
      • Required evidence artifacts 
      • CI/CD release gates 
    • Implement governance as code and automation, eliminating reliance on manual or after-the-fact reviews. 
  • AI Governance, Explainability & Human Oversight
    • Advise solution teams on explainability requirements for automated, semi-automated, and decision-support AI systems. 
    • Ensure human-in-the-loop (HITL) controls are implemented where required by risk level or use case. 
    • Define, generate, and manage explainability outputs that are: 
      • Appropriate to the end-user or reviewer persona
      • Aligned to the decision context and operational use
    • Document explainability assumptions, limitations, and residual risk as governance evidence. 
  • Metadata, Lineage & Governance Evidence Management 
    • Operationalize AI Governance in Microsoft Purview by registering and maintaining: 
      • AI models, features, prompts, agents, notebooks, and pipelines
    • Maintain end-to-end lineage across: 
      • Data → features → models → inferences → outputs 
    • Apply ownership, stewardship, sensitivity, and classification metadata. 
    • Ensure governance artifacts remain: 
      • Discoverable
      • Versioned
      • Traceable
      • Audit-defensible
  • GenAI & Agentic AI Governance Enablement 
    • Apply governance patterns to LLMs, RAG, and Agentic AI solutions 
    • Ensure governance traceability when synthetic data or augmented data is used for training, testing, or evaluation.


  • Implement Agentic AI lifecycle governance, including: 
    • Observability of agent actions, deviations, and failures 
    • Oversight of planning, reflection, and tool-use behavior 
    • Controls on autonomous vs. constrained operation 
  • Enable GenAI explainability, including: 
    • Retrieval transparency for RAG (sources, relevance) 
    • Inference context documentation 
    • Decision trace generation where applicable 
  • Explainability, Interpretability & Model Risk Controls 
    • Own and operate explainability capabilities used for governance, audit, and trust. 
    • Implement and operationalize techniques such as: 
      • Feature attribution (e.g., SHAP or equivalent) 
      • Driver and proxy detection 
      • Global and local model explanations 
    • Identify bias signals, risk indicators, and explainability gaps. 
    • Store and manage explainability and observability outputs as governed, audit-ready artifacts. 
    • Support audit, compliance, and risk review activities with defensible evidence. 
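To illustrate the kind of explainability work this duty covers, here is a minimal occlusion-style local-attribution sketch. It is a simplified stand-in for SHAP, not SHAP itself, and the model, weights, and feature names are hypothetical examples only:

```python
# Minimal local feature-attribution sketch: measure how much the model's
# output changes when each feature is replaced by a baseline value.
# Occlusion-style attribution; a simplified stand-in for SHAP-type tools.

def predict(features):
    # Hypothetical linear "model" for illustration only.
    weights = {"claim_amount": 0.5, "prior_claims": 0.3, "vehicle_age": -0.1}
    return sum(weights[name] * value for name, value in features.items())

def attribute(features, baseline=0.0):
    """Return per-feature attributions for one prediction (a local explanation)."""
    full_score = predict(features)
    attributions = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] = baseline  # occlude one feature at a time
        attributions[name] = full_score - predict(perturbed)
    return attributions

example = {"claim_amount": 10.0, "prior_claims": 2.0, "vehicle_age": 5.0}
print(attribute(example))  # each value is that feature's contribution to the score
```

Stored alongside the prediction, outputs like these become the kind of versioned, audit-ready explainability artifacts the role manages.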

 

  • Monitoring, Observability & Incident Readiness 
    • Define and implement AI monitoring metrics, alerts, and thresholds for: 
      • Performance degradation 
      • Bias and ethical risk indicators 
      • Drift and instability 
    • Partner with MLOps and platform teams to integrate monitoring into production pipelines. 
    • Support AI incident response and post-incident reviews with governance evidence. 
    • Ensure all observability outputs are retained, traceable, and audit-ready. 
  • Governance Checkpoints & Release Gating  
    • Define and enforce governance checkpoints within CI/CD pipelines (DEV → TEST/UAT → PROD). 
    • Implement automated release checks for: 
      • Required documentation and evidence artifacts 
      • Explainability artifacts 
      • Monitoring configuration 
      • Data usage, lineage completeness, and medallion-layer alignment 
    • Partner with Engineering and MLOps teams on promotion decisions while owning governance readiness, not platform approval. 
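The automated release checks above can be sketched as a simple governance gate. The artifact names are illustrative assumptions, not AF's actual checklist, and a real implementation would run inside the CI/CD pipeline:

```python
# Minimal CI/CD governance-gate sketch: block promotion unless required
# governance evidence artifacts are present. Artifact names are hypothetical.

REQUIRED_ARTIFACTS = {
    "model_card.md",                # documentation and evidence
    "explainability_report.json",   # explainability artifacts
    "monitoring_config.yaml",       # monitoring configuration
    "lineage_manifest.json",        # lineage completeness
}

def governance_gate(present_artifacts):
    """Return (passed, missing) for a governance checkpoint in the pipeline."""
    missing = sorted(REQUIRED_ARTIFACTS - set(present_artifacts))
    return (not missing, missing)

ok, missing = governance_gate(["model_card.md", "monitoring_config.yaml"])
print(ok, missing)  # gate fails and reports the missing evidence
```

In practice a check like this would run as a required pipeline step, so a release cannot be promoted to PROD until every evidence artifact exists.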

 

QUALIFICATIONS 

Required Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, Engineering, or a related field. 
  • Minimum 7 years of experience in AI/ML engineering, data science, GenAI/LLMs, NLP, Agentic AI, data governance, or related roles. 
  • Demonstrated experience operationalizing AI governance, explainability, and risk controls in production environments. 
  • Deep understanding of Agentic AI architectures and lifecycle considerations. 

 

Technical Skills  

  • Strong proficiency in Python with hands-on experience in AI/ML engineering workflows. 
  • Working knowledge of Microsoft Fabric (Lakehouse, OneLake, notebooks, pipelines). 
  • Experience with Microsoft Purview (catalog, lineage, classification, ownership). 

  • Experience with AI/ML and GenAI tooling, including: 
    • Azure AI Foundry / Azure ML 
    • ML explainability libraries (e.g., SHAP) 
    • LLMs, RAG architecture, and prompt engineering 
  • Familiarity with Agentic AI frameworks and patterns (e.g., tool use, planning, reflection). 
  • Experience integrating governance controls into CI/CD pipelines using GitHub or Azure DevOps. 
  • Understanding of cloud platforms (Azure preferred; AWS/GCP a plus). 
  • Experience producing audit-ready technical documentation and evidence artifacts. 
  • Familiarity with reporting and visualization tools (e.g., Power BI) for governance and monitoring views. 

Soft Skills 

  • Strong analytical and problem-solving abilities, particularly in risk-based decision-making. 
  • Excellent written and verbal communication skills, with the ability to translate technical details into governance-relevant insights. 
  • Ability to lead governance execution initiatives and influence cross-functional teams without direct authority. 
  • Strong organizational skills with attention to detail and audit readiness. 
  • Auto insurance or claims industry experience preferred. 

 

Preferred Qualifications

  • Experience evaluating or governing model training approaches (e.g., NLP, generative models) without owning full training pipelines. 
  • Familiarity with synthetic data governance (generation methods, limitations, risk documentation). 
  • Experience with additional AI platforms (Databricks AI, Snowflake Cortex, Dataiku). 
  • Experience in regulated industries (insurance, financial services, healthcare). 


AMERICANS WITH DISABILITIES ACT SPECIFICATIONS 


PHYSICAL DEMANDS

The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. 

While performing the duties of this job, the employee is occasionally required to stand; walk; sit; use hands to finger, handle, or feel objects, tools, or controls; reach with hands and arms; climb stairs; balance; stoop, kneel, crouch, or crawl; talk or hear; taste or smell. The employee must occasionally lift and/or move up to 25 pounds. Specific vision abilities required by the job include close vision, distance vision, color vision, peripheral vision, depth perception, and the ability to adjust focus. 


WORK ENVIRONMENT

This is a fully remote position requiring reliable high-speed internet access and a dedicated workspace.

Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.