How to Complete a Canadian Algorithmic Impact Assessment: Step-by-Step Guide
Published March 2026
~11 min read
Who This Is For
Program owners, digital governance leads, policy analysts, and consultants helping federal departments complete AIAs.
What You'll Learn
The full AIA process: who needs one, team composition, what the questions cover, scoring mechanics, impact level obligations, and how to publish and maintain results.
The Algorithmic Impact Assessment Is Mandatory
Canada's Algorithmic Impact Assessment (AIA) contains 65 risk questions and 41 mitigation questions. It produces an impact level that determines the requirements and oversight expected under the Directive on Automated Decision-Making. Most teams find it harder than expected. If you are new to the framework, start with what an AIA is and why it exists before working through this process guide.
Where Teams Get Stuck
- Ownership unclear — no single person drives the assessment across program, legal, privacy, and technical
- Evidence scattered across email, shared drives, and people's heads
- Legal consulted too late to change anything
- Peer review started months after it should have been
- Publication prep treated as an afterthought
These are not knowledge gaps. They are coordination failures.
Do You Need an AIA?
Yes, if your system makes or assists in making an administrative decision that affects the legal rights, privileges, or interests of a person or business. That includes rule-based scoring, ML-driven triage, predictive analytics, and NLP classification. If a human officer makes the final call but your algorithm shaped the recommendation, the system is in scope.
The Directive applies to all federal institutions under the Policy on Service and Digital. Since the third review (April 2023), internal services are covered too — hiring tools, performance scoring, resource allocation. Agents of Parliament have until June 2026. For a full breakdown of what the Directive requires at each stage, see our compliance overview.
Build Your Team First
Team composition is the single biggest predictor of AIA quality. The World Privacy Forum's 2024 review found that assessments completed primarily by IT teams were weaker on social impact, legal nuance, and privacy implications.
Minimum team:
- Program/policy owner who understands the decision and clients
- Technical staff who can explain the algorithm, data, and outputs
- Legal services — required before development
- Privacy/ATIP officers
For Level III or IV: add GBA+ leads, accessibility specialists, and client-facing operations staff.
The 7-Step Process
1. Assemble a cross-functional team. Program, technical, legal, privacy. Add a GBA+ lead for Level II+.
2. Complete the AIA during the design phase. 65 risk questions and 41 mitigation questions. If information is still unknown during design, document assumptions clearly and revisit them before production.
3. Commission peer review (Level II+). Start early; peer review is the bottleneck, with 1–3 months of lead time.
4. Implement Directive requirements. Human oversight, bias testing, notice, explanation, monitoring, recourse.
5. Re-complete the AIA before production. Validate that the results reflect the actual system built.
6. Publish on the Open Government Portal. Both official languages, with the peer review published alongside for Level II+.
7. Monitor, review, maintain. Scheduled reviews; update on any scope or functionality change.
What the Questions Cover
Project Details: Business context, decision type, reasons for automation, approval authorities
System Design: Algorithm type (rule-based vs. ML), explainability, security classification
Decision Context: Who's affected, vulnerability of populations, degree of human involvement
Impact Analysis: Duration, reversibility, effects on rights, health, dignity, economic interests
Data Governance: Personal info, data sources, accuracy, retention, security classification
Consultation: Stakeholder engagement, GBA+, community input, legal consultations
Mitigation questions (41) cover de-risking measures and data quality. You need documented evidence of controls actually in place — not future plans.
What Drives Your Score
- Irreversible decisions score dramatically higher than short-term, reversible ones
- Vulnerable populations (Indigenous Peoples, racialized communities, persons with disabilities) add substantial weight
- Opaque algorithms (deep learning) score higher than interpretable rule-based systems
- No human in the loop pushes risk up
- Sensitive personal data (health, biometric, security-classified) elevates data risk
- The 80% mitigation threshold: if the mitigation score reaches 80% or more of the maximum attainable mitigation score (60 out of 75 points), 15% is deducted from the raw impact score to produce the current score
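The mitigation deduction above is simple arithmetic, and seeing it computed makes the threshold effect concrete. This is an illustrative sketch only — the function name and signature are hypothetical, not part of the official AIA tool — assuming the rule exactly as stated: mitigation at 80% or more of the 75-point maximum triggers a 15% deduction from the raw impact score.

```python
def current_score(raw_impact: float, mitigation: float) -> float:
    """Illustrative AIA current-score calculation (hypothetical helper).

    If the mitigation score reaches 80% of the maximum attainable
    mitigation score (60 of 75 points), 15% is deducted from the
    raw impact score to produce the current score.
    """
    MAX_MITIGATION = 75.0
    threshold = 0.80 * MAX_MITIGATION  # 60 points
    if mitigation >= threshold:
        return raw_impact * 0.85  # 15% deduction applied
    return raw_impact

# A mitigation score of 62/75 crosses the threshold:
print(current_score(50.0, 62.0))  # 42.5
# At 55/75, no deduction applies:
print(current_score(50.0, 55.0))  # 50.0
```

Note the cliff: a single mitigation point on either side of 60 changes the current score by 15%, which is why documenting controls before the assessment (not after) matters so much.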
Need the full breakdown? See The Complete Guide to Canada's Algorithmic Impact Assessment for scoring tables, impact levels, peer review thresholds, and worked examples.
Five Mistakes That Cost Teams Time and Credibility
1. Treating it as an IT exercise. The AIA covers legal, privacy, equity, and client rights. A tech-only team leaves gaps that reviewers will find.
2. Starting too late. Start during design, when you can change architecture. After the build, you're documenting, not de-risking.
3. Gaming the score. Published AIAs are public. Researchers and journalists review them. Strategic under-scoring creates risk you cannot take back.
4. Missing the 80% mitigation threshold. Document controls before the assessment, not after.
5. Forgetting to update. Any change to functionality or scope triggers a re-assessment.
Key Takeaway
The hard part is not understanding the AIA — it is coordinating the people, evidence, reviews, and deadlines across a cross-functional team. That is not a knowledge problem. It is a workflow problem.