AIQI Member Spotlight - TÜV AI.Lab

About TÜV AI.Lab

The TÜV AI.Lab was founded in 2023 as an independent joint venture of the TÜV companies TÜV SÜD, TÜV Rheinland, TÜV NORD, TÜV Hessen and TÜV Thüringen. It aims to translate regulatory requirements for AI into practice and to make Europe a hotspot for safe and trustworthy AI. To this end, it develops quantifiable conformity criteria and suitable test methods for AI. The AI.Lab also actively supports the development of standards and norms for AI systems.

TÜV AI.Lab’s role in AI Assurance

By translating regulatory requirements – such as those of the EU AI Act – into practical assessment and certification procedures, the TÜV AI.Lab plays a key role in advancing the safe, secure, and ethical development of artificial intelligence worldwide. As a joint venture of leading TIC (testing, inspection, and certification) organisations, the TÜV AI.Lab develops quantifiable conformity criteria, test methods, and assessment frameworks that enable third-party assessment by Notified Bodies and AI certification, particularly for high-risk AI systems.

Among other initiatives, we developed the AI Act Risk Navigator, a free and easy-to-use online tool for the risk classification of AI systems and models. Entrepreneurs, managers, developers, and specialists can use it to understand whether and how they are affected by the EU AI Act, creating greater clarity and certainty in implementing the regulation.
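For illustration only – this is not the Risk Navigator's actual logic, and the function and parameter names below are our own simplification – a minimal sketch of how the EU AI Act's risk tiers might be mapped in code looks roughly like this:

from enum import Enum

class RiskTier(Enum):
    """Simplified EU AI Act risk tiers (illustrative, not exhaustive)."""
    PROHIBITED = "prohibited practice (Art. 5)"
    HIGH_RISK = "high-risk system (Art. 6, Annexes I/III)"
    TRANSPARENCY = "transparency obligations (Art. 50)"
    MINIMAL = "minimal risk / voluntary codes of conduct"

def classify(is_prohibited_practice: bool,
             in_annex_iii_use_case: bool,
             is_regulated_product_component: bool,
             interacts_with_humans_or_generates_content: bool) -> RiskTier:
    """Toy decision logic for a first orientation; a real classification
    requires a legal assessment of the concrete system and its context."""
    if is_prohibited_practice:
        return RiskTier.PROHIBITED
    if in_annex_iii_use_case or is_regulated_product_component:
        return RiskTier.HIGH_RISK
    if interacts_with_humans_or_generates_content:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

A sketch like this only gives a first orientation; the actual tool guides users through the relevant questions and caveats interactively.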

Furthermore, we are proud to contribute our TÜV AI Assessment Matrix to the efforts of AIQI, where it provides the core structure for the AIQI AI Assurance Framework. The Matrix offers a systematic framework covering all conformity criteria, assessment subjects, and forms of assessment for AI systems, creating, for the first time, a comprehensive meta-structure for the entire field of AI assessment. It organises all assessment dimensions under a single overarching principle and integrates both technical and ethical dimensions. It includes a coherent set of definitions aligned with the relevant standards and makes it possible to map requirements, such as those of the EU AI Act or existing standards, onto its structure. The Matrix also builds a bridge between theoretical concepts and practical evaluation methods, and it is an excellent tool for comparing international regulations, assessment approaches, or evaluation catalogues with one another.
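As a rough illustration of what such a meta-structure could look like in code – the field names and example mappings below are our own hypothetical simplification, not the Matrix's actual schema – one entry could combine a conformity criterion, an assessment subject, and a form of assessment, with optional mappings to EU AI Act articles or standards:

from dataclasses import dataclass, field

@dataclass
class AssessmentEntry:
    """One cell of a hypothetical assessment matrix (illustrative only)."""
    criterion: str            # e.g. "robustness", "transparency", "human oversight"
    subject: str              # e.g. "training data", "model", "operational process"
    form_of_assessment: str   # e.g. "document review", "technical test", "audit"
    mapped_requirements: list[str] = field(default_factory=list)  # e.g. AI Act articles, standard clauses

example = AssessmentEntry(
    criterion="robustness",
    subject="model",
    form_of_assessment="technical test",
    mapped_requirements=["EU AI Act Art. 15", "ISO/IEC 24029"],  # illustrative mapping only
)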

In addition to various other internal and external research projects, we are currently working on sector-specific pilot use cases to validate our TÜV AI Assessment Framework under real-world conditions. The TÜV AI.Lab also contributes actively to standardisation initiatives at the German, European, and international levels.

We believe that collaboration in the ecosystem is essential to building trust in AI – and we work closely with TIC players, authorities, industry stakeholders, and research institutions across Europe to achieve this goal together.

AIQI establishes a highly important and diverse dialogue forum of quality infrastructure bodies and other organisations, such as accreditation bodies and national physical laboratories, ensuring that AI is developed and deployed in a safe, secure, and ethical manner – a goal we are firmly committed to.

The TÜV AI.Lab joined the AIQI Consortium to contribute its AI assessment expertise to the development of a global AI assessment framework. We strongly support this goal: a framework that builds on the EU AI Act but also looks beyond it, towards an international, cross-sectoral approach to trustworthy AI. One of the key gaps in the global AI quality landscape has been the lack of a systematic, overarching framework to coordinate the many highly specialised efforts underway in technical, regulatory, and ethical AI assessment. By contributing the TÜV AI Assessment Matrix, among other things, we aim to support a shared understanding of conformity dimensions and help enable scalable, internationally coherent approaches to AI assurance.

At TÜV AI.Lab, we see AI assurance as a cornerstone of future-proof, responsible innovation. Our priority is to help build robust, practical frameworks that ensure AI systems can be efficiently tested and proven to be safe, secure, and trustworthy by design. With pragmatic implementation solutions around regulations such as the EU AI Act, we believe Europe has a unique opportunity to become a hub for trustworthy AI – not only through high-level regulatory requirements, but also through shared methodologies, testing practices, and a thriving assurance ecosystem.

We believe that trust in AI must scale globally. That’s why we actively support international initiatives like AIQI, which aim to align efforts across borders and sectors. One of the most pressing challenges ahead is to translate high-level requirements into concrete, testable criteria – a task that demands collaboration, clarity, and cross-disciplinary insight.

We look forward to shaping this field together with partners across the AI ecosystem!

If you’re interested in working with us or learning more about our work, feel free to reach out:
📧 info@tuev-lab.ai
🌐 www.tuev-lab.ai

 