Fundamental Rights Impact Assessment for High-Risk AI Systems
Regulatory Context
The European Artificial Intelligence Act (AI Act) requires a Fundamental Rights Impact Assessment (FRIA) for certain AI systems.
This obligation applies when the system is classified as high-risk, and the assessment must be carried out before the system is put into service.
Purpose of the Assessment
The Fundamental Rights Impact Assessment aims to:
Identify the risks that the AI system may pose to:
privacy
non-discrimination
freedom of expression
freedom of association
presumption of innocence
human dignity
workers’ rights
Assess the severity and likelihood of these risks
Determine appropriate mitigation measures
⚠️ This assessment is distinct from, but complementary to, a GDPR DPIA (Data Protection Impact Assessment).
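As a rough illustration of the identify/assess/mitigate steps above, here is a minimal risk-register sketch in Python. The 1–4 scales, the scoring rule, and the example entries are assumptions made purely for illustration; they are not a methodology prescribed by Dastra or by the AI Act.

```python
# Illustrative sketch of a fundamental-rights risk register.
# Scales and thresholds below are assumptions, not a legal standard.

def risk_level(severity: int, likelihood: int) -> str:
    """Map severity and likelihood (each rated 1-4) to a coarse risk level."""
    score = severity * likelihood
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Example: a hypothetical recruitment AI system scored against two rights,
# each entry pairing the identified risk with a mitigation measure.
register = [
    {"right": "non-discrimination", "severity": 4, "likelihood": 3,
     "mitigation": "bias testing on training and output data"},
    {"right": "privacy", "severity": 3, "likelihood": 2,
     "mitigation": "data minimisation and access controls"},
]

for entry in register:
    entry["level"] = risk_level(entry["severity"], entry["likelihood"])
```

A register like this simply makes the assessment's three steps explicit: each row names an affected right, rates the risk, and records the chosen mitigation measure.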
Conducting the Impact Assessment
In Dastra, when you define an AI system as high-risk, you will see an alert prompting you to perform an impact assessment.

If you have the Dastra Questionnaires module, this alert banner will provide a button labeled “Perform an impact assessment”. Clicking it will open a window containing questionnaires.
If you have installed the impact assessment questionnaire template, it will appear in this window. Otherwise, you can click “Create template”, which will redirect you to the Questionnaires module.
You can then:
Create your own template
Import our existing template
To import our ready-to-use template, click “Create a template”, then “Automated questionnaires”. This will redirect you to a list of templates. In the filters, select “Dastra default library” and choose the type “Privacy impact assessment (DPIA)”, or type “DPIA Template for AI Systems - GDPR and EU AI Act Compliance” in the search bar.

You can now carry out your impact assessment.