Current US legislation prohibits discrimination and bias in AI applications used in recruiting, healthcare, and advertising.
Organizations that deploy such systems must therefore test and prove that their solutions are robust and unbiased, just as they are required to comply with security and privacy regulations. This session introduces Pacific AI, a no-code tool built on top of the LangTest library, which applies generative AI to:
- Automatically generate tests for accuracy, robustness, bias, and fairness for text classification and entity recognition tasks
- Automatically run test suites, create detailed model report cards, and compare different models against the same test suite
- Publish, share, and reuse AI test suites across teams and projects
- Automatically generate synthetic training data to augment model training and minimize common model bias and reliability issues
The session concludes by showing how John Snow Labs uses Pacific AI to test and improve its own healthcare-specific language models.