Please note this is an extract of the original bulletin.

Weights & Biases: The AI Developer Platform

Page summary:
Weights & Biases provides an MLOps platform to help organisations gain auditable and explainable end-to-end machine learning workflows for reproducibility and governance.

Change made:
First published.

RAI Institute: Artificial Intelligence Impact Assessment (AIIA)

Page summary:
A system-level AI Impact Assessment (AIIA), developed by the Responsible Artificial Intelligence Institute (RAI Institute), informs the broader assessment of AI risks and risk management.

Change made:
First published.

Controlling and Validating (Generative) Artificial Intelligence for Oliver Wyman’s NewsTrack Pipeline

Page summary:
Oliver Wyman’s AI validation framework demonstrates how rigorous validation of (generative) AI can be executed efficiently in practice.

Change made:
First published.

OpenMined: Privacy-preserving third-party audits on Unreleased Digital Assets with PySyft

Page summary:
PySyft allows model owners to load information about production AI algorithms onto a server, where an external researcher can submit research questions without ever seeing the information held on that server.
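The remote-audit pattern described above can be sketched in plain Python. This is a hypothetical illustration of the general idea only, not PySyft’s actual API: the names (AuditServer, submit_query) are invented for this sketch, and real PySyft layers access control, code review, and privacy protections on top of this basic shape.

```python
# Hypothetical sketch of the remote-audit pattern: the researcher's
# question travels to the data; only the answer travels back.

class AuditServer:
    """Holds private model information that researchers never see directly."""

    def __init__(self, private_data):
        self._private_data = private_data  # never returned to the researcher

    def submit_query(self, query_fn):
        # The researcher's question runs next to the data; only the
        # (aggregate) result leaves the server.
        return query_fn(self._private_data)


# Model owner loads unreleased asset information onto the server.
server = AuditServer(
    private_data={"predictions": [1, 0, 1, 1], "labels": [1, 0, 0, 1]}
)

# External researcher sends a research question without seeing the data.
def accuracy_query(data):
    correct = sum(p == y for p, y in zip(data["predictions"], data["labels"]))
    return correct / len(data["labels"])

print(server.submit_query(accuracy_query))  # 0.75
```

The design point is that query_fn is the only thing the researcher supplies and its return value is the only thing they receive, so the private data never crosses the boundary.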

Change made:
First published.

Lumenova AI Governance, Risk Management, and Compliance Platform

Page summary:
Lumenova AI’s AI Governance, Risk Management, and Compliance Platform aims to simplify and streamline AI risk management, providing complete visibility of AI models and ensuring consistent adherence to the latest regulatory standards and industry best practices.

Change made:
First published.

Credo AI Governance Platform: Reinsurance Provider Algorithmic Bias Assessment and Reporting

Page summary:
A global provider of reinsurance used Credo AI’s platform to produce standardised algorithmic bias reports to meet new regulatory requirements and customer requests.

Change made:
First published.

Aival Evaluate, Aival Analysis Lab: Validating third-party AI by assessing performance, fairness, robustness and explainability on internal data

Page summary:
Aival Evaluate reports the performance, fairness, robustness and explainability of AI products under consideration on a user’s data, enabling comparison along the same baseline.
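A like-for-like evaluation of the kind the summary above describes can be sketched as follows. This is an illustrative assumption, not Aival’s actual methodology: the metric choices (accuracy and a demographic parity difference as a simple fairness measure) and the model names are invented for this sketch; the point is that every candidate is scored with the same metrics on the same internal data.

```python
# Hypothetical sketch: score two candidate AI products on the same
# internal dataset, with the same metrics, to compare on one baseline.

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def demographic_parity_diff(preds, groups):
    # Absolute difference in positive-prediction rates between groups A and B.
    def rate(g):
        return sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return abs(rate("A") - rate("B"))

labels = [1, 0, 1, 0, 1, 0]          # internal ground truth
groups = ["A", "A", "A", "B", "B", "B"]  # protected attribute per record

candidates = {                        # invented vendor model outputs
    "vendor_model_1": [1, 0, 1, 1, 1, 0],
    "vendor_model_2": [1, 1, 1, 0, 0, 0],
}

# Same data, same metrics: a common baseline for comparison.
for name, preds in candidates.items():
    print(name, accuracy(preds, labels), demographic_parity_diff(preds, groups))
```

Here vendor_model_1 scores higher on both axes (accuracy 5/6 with zero parity gap, versus 4/6 with a maximal gap), showing how a shared baseline makes the trade-offs between products directly comparable.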

Change made:
First published.

 
