Pacific AI

Healthcare AI Governance Library

  • Paper: LangTest: A comprehensive evaluation library for custom LLM and NLP models
  • Webinar: Identifying and Mitigating Bias in AI Models for Recruiting — data science and HR technology experts discuss fairness, transparency, and responsible AI practices in hiring and talent selection systems.
  • Webinar: Automated Testing of Bias, Fairness, and Robustness of Generative AI Solutions — expert insights on responsible AI evaluation, model reliability, risk detection, and governance-ready validation.
  • Article: Building Responsible Language Models with the LangTest Library — automated testing for bias, robustness, and safety in large language models to support trustworthy, governance-ready AI systems.
  • Article: The Ethical Implications of Medical LLMs in Healthcare — clinical data flows, model decision layers, and governance controls for transparency, safety, and responsible AI use.
  • Article: LangTest: Unveiling & Fixing Biases with End-to-End NLP Pipelines — automated testing pipelines, dataset augmentation, and before-and-after performance metrics for safety, robustness, and model quality assessment.
  • Article: Automatically Testing for Demographic Bias in Clinical Treatment Plans Generated by Large Language Models — validation checks and responsible AI evaluation for LLM-generated treatment plans in healthcare.
  • Article: Beyond Accuracy: Robustness Testing of Named Entity Recognition Models with LangTest — reliability checks beyond accuracy for NER models in NLP systems.
  • Article: Elevate Your NLP Models with Automated Data Augmentation for Enhanced Performance — improving model accuracy, robustness, and training efficiency through automated data augmentation.
  • Article: Mitigating Gender-Occupational Stereotypes in AI: Evaluating Language Models with the WinoBias Test Through the LangTest Library — bias detection and mitigation using the WinoBias benchmark.
  • Article: Unmasking Language Model Sensitivity in Negation and Toxicity Evaluations
  • Article: Unveiling Bias in Language Models: Gender, Race, Disability, and Socioeconomic Perspectives
  • Article: Unmasking the Biases Within AI: How Gender, Ethnicity, Religion, and Economics Shape NLP and Beyond
  • Article: Evaluating Large Language Models on Gender-Occupational Stereotypes Using the WinoBias Test
  • Article: Testing the Question Answering Capabilities of Large Language Models
  • Article: Evaluating Stereotype Bias with LangTest — automated bias-testing workflows, labeled datasets, and evaluation pipelines for responsible NLP assessment.
  • Article: John Snow Labs Announces Fifth Annual NLP Summit, the World’s Largest Gathering for the Applied NLP, LLM, and Generative AI Community

Join the Responsible AI Community

Stay current on new regulations, papers, case studies, and tools

© 2026 Pacific AI, Inc. All rights reserved.