Pacific AI
  • Home
  • Product
    • Governor: Centralized registry for systems, vendors, risks, model cards, and policies across the AI lifecycle.
    • Gatekeeper: Automate LLM, ML, and agentic testing; run pre-release test suites as a CI/CD release gate.
    • Guardian: Monitor model performance, detect bias, and protect against adversarial attacks in real time.
  • Case Studies
  • Open Source
    • AI Policy Suite: A curated collection of comprehensive, ready-to-deploy AI governance and safety policies.
    • MedHELM: A Stanford CRFM benchmarking project built to evaluate LLMs on real-world clinical tasks.
    • LangTest: A comprehensive, unified testing library for measuring accuracy, bias, and robustness in LLM applications.
  • Resources
    • Webinars
    • Articles
    • Papers
    • AI Governance Quiz
    • AI Governance Survey
  • Contact Us

Healthcare AI Governance Library

  • Beyond Accuracy: Robustness Testing of Named Entity Recognition Models with LangTest
  • Elevate Your NLP Models with Automated Data Augmentation for Enhanced Performance
  • Mitigating Gender-Occupational Stereotypes in AI: Evaluating Language Models with the Wino Bias Test through the LangTest Library
  • Detecting and Evaluating Sycophancy Bias: An Analysis of LLM and AI Solutions
  • Unmasking Language Model Sensitivity in Negation and Toxicity Evaluations
  • Unveiling Bias in Language Models: Gender, Race, Disability, and Socioeconomic Perspectives
  • Unmasking the Biases Within AI: How Gender, Ethnicity, Religion, and Economics Shape NLP and Beyond
  • Evaluating Large Language Models on Gender-Occupational Stereotypes Using the Wino Bias Test
  • Testing the Question Answering Capabilities of Large Language Models
  • Evaluating Stereotype Bias with LangTest
  • John Snow Labs Announces Fifth Annual NLP Summit, the World’s Largest Gathering for the Applied NLP, LLM, and Generative AI Community

Join the Responsible AI Community

Stay current on new regulations, papers, case studies, and tools

Join The Responsible AI Group
Contact Us
  • 16192 Coastal Highway, Lewes, DE 19958, USA
  • [email protected]
  • +1 (302) 313-6841
© 2026 Pacific AI, Inc. All rights reserved.
  • Privacy Policy
  • Terms of Service
  • AI Acceptable Use Policy