We support private and public organizations in evaluating their AI-based processes to ensure they meet high standards of transparency and fairness in compliance with the latest regulations.
(about MIRAI)
We build a bridge between academic research and industry through robust and innovative solutions.
Product Innovation
Employment of completely transparent formal methods in addition to existing statistical methods for the analysis and verification of AI models.
Employment of logical methods (the effectiveness of which in developing AI is well established) for the analysis and verification of AI models.
Process Innovation
Ethical and human-centered approach throughout development and deployment: fully transparent, human-designed systems, inspired by ethical principles and formal results.
Academy-industry interaction at all stages of the process: collaboration of experts in data science, ethics and philosophy, formal methods, mathematics and computer science.
Development and deployment of marketed systems without requiring possession of clients' data collections.
MILANO RESPONSIBLE ARTIFICIAL INTELLIGENCE
research + develop + advise = MIRAI
(Building the future of responsible AI)
team
MIRAI is a spin-off of the University of Milan
founded in 2024 at the "Piero Martinetti" Philosophy Department, in collaboration with Wazabit and Davide Posillipo.
mission
To support responsible and trustworthy uses of Artificial Intelligence systems
through transparent and accountable methods that keep data and models under control and ensure compliance with regulations.
Keeping AI under control
IS GETTING HARDER
Model complexity
With complex deep learning and machine learning models, it is hard to detect biases and other unwanted behaviours, and to measure risks.
Data complexity
AI models are trained on large and noisy datasets, drawn from different sources and built through complex and opaque pipelines, making it difficult to trace the origin of undesired outcomes.
Regulations
The upcoming EU AI Act introduces the need to continuously monitor updated AI models across several dimensions of risk.
Solutions
MIRAI develops digital ecosystems for the verification, control and supervision
of data-driven and machine learning technologies, with particular attention to fairness, bias and transparency in compliance with legal and ethical criteria.
BRIO: Algorithmic Bias and Risk Detector
- A tool that analyses algorithmic models to identify biases, measure risks, and provide methods to mitigate them.
- BRIO leverages formal methods, grounded in published research by the group behind MIRAI, offering an unmatched level of control and transparency.
- It is model-agnostic: it has been tested on several tabular datasets and will be progressively adapted to handle other data formats.
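To make the idea of bias detection on tabular data concrete, here is a minimal, illustrative sketch (not BRIO's actual API or methodology) of one common fairness metric, the demographic parity difference: the gap in positive-prediction rates between demographic groups. The function name and the toy data are hypothetical.

```python
# Illustrative sketch of a simple bias audit on tabular predictions.
# This is NOT BRIO's interface; it only shows the kind of metric
# such a tool can compute.

def demographic_parity_difference(groups, predictions):
    """Absolute gap in positive-prediction rates between groups.

    groups:      one group label per record (e.g. "A" / "B")
    predictions: one binary model output (0 or 1) per record
    """
    rates = {}
    for g in set(groups):
        outcomes = [p for grp, p in zip(groups, predictions) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy tabular data: group A receives positive outcomes 75% of the time,
# group B only 25% -- a gap a bias audit should flag.
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
predictions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

gap = demographic_parity_difference(groups, predictions)
print(f"demographic parity difference: {gap:.2f}")  # 0.50
```

A value near 0 indicates similar treatment across groups; values close to 1 indicate strong disparity that would warrant mitigation.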
Consultancy
- Collect information about the current AI models and data used in the organization.
- Identify and prioritize the models and scenarios that need assessment.
- Inspect data and models with the appropriate MIRAI tools and methodologies to provide actionable metrics.
- Use the collected insights to propose a roadmap for mitigation and transition to responsible AI uses.
- Support the Client during the roadmap implementation.