How we built our business -
and why it matters.
Risk management strategies fit for the AI age.
Artificial Intelligence risk management is essential if you’re using AI systems. With the enormous potential for automation and the massive scalability of AI solutions, the risks associated with the increased speed and scale of business operations have grown as well. Meanwhile, AI legislation is shifting towards risk reduction, and AI is increasingly used in contexts that affect key aspects of our lives, such as loan approval and hiring software. Any AI system today must be trustworthy to be sustainable.
Several high-profile litigation cases have shown that AI ethics are serious business: IBM was sued for data mismanagement, Optum is under investigation for a racially discriminatory algorithm, and Goldman Sachs’ loan algorithm faced accusations of bias against women. The EU is working towards a risk-based approach to AI, with a penalty regime that will hit companies where it hurts.
AI entails legal and regulatory risks, but reputational risks can be just as damaging - you don’t want your company to be seen as discriminatory. Regular audits are likely to become essential for proper AI risk management. Moreover, responsible AI means a more efficient use of resources, better products, more understandable outcomes, easier staff training, and more adaptable, revisable data models. In short: not using AI responsibly means losing money in the long term, whether to fines or to inefficiencies - far more than the cost of an audit.
Building on robust research.
Our services and products are based on published research from reputable sources. Through our in-house research experience, we designed our methods using top-quality studies on best practices in AI risk management, internal auditing methods, business strategies, AI ethics, and human-centred design. Our sources include the U.S. Government Accountability Office's Accountability Framework, Stanford University's Institute for Human-Centered AI, the Institute of Internal Auditors' AI Auditing Framework, McKinsey's AI risk management insights, the Center for AI and Digital Policy, the OECD AI Observatory, Holistic AI, AI4People, Deloitte & COSO's Enterprise Risk Management Framework, FERMA, ISACA's AI Auditing Framework, and the World Economic Forum's AI Governance Framework.
Why we do things differently.
Many AI auditing frameworks take inspiration from traditional internal audit strategies that analyze a snapshot of a company’s inner workings. Many frameworks (1) do not properly account for the AI values and principles that have become crucial in AI governance, (2) focus on checklists, management structures, and plans while neglecting the technical aspects of algorithms and their evolving nature, or (3) over-emphasize technical analysis without providing assistance and assurance regarding deployment, management, and maintenance, ignoring the disruptive potential of AI in business processes. The constant progress in AI technology and governance, together with the fact that machine learning algorithms can evolve over time, makes the “snapshot” approach of traditional methods inadequate.
Our methodology solves those problems by focusing on three key concepts.
First, our attention to flexibility, ethics, and governance contexts across Europe allows us to give tailor-made advice specific to the location of your company, the nature of your system, and the relevant governance frameworks. From the very outset of our auditing process, we fit our methods to your needs.
Second, we see AI as an ecosystem: bias and discrimination can creep in at any level, not just in the algorithm itself. Our broad-spectrum audit therefore goes far beyond traditional technical and outcome analysis, while still ensuring technical robustness, and will help you future-proof your data and AI ethics practices. Devising new frameworks and approaches for every automation project is not cost-efficient; letting us audit your processes and systems instead allows you to develop consistent AI plans and protocols well into the future. If you develop and distribute AI systems, we can also help you design guides your clients can use to deploy your systems ethically, mitigating risks arising from inattentive implementation or management.
Third, our methodology is not just ecosystemic but also continuous: we add your system to our policy-tracking database and flag any relevant policy shifts that might be coming your way - this gives you ample time to hedge against governance risks. Because we stay agile and don’t treat audits as one-and-done rubber-stamping, we can help you manage your systems more flexibly.