Last updated: March 2026
The Complete Guide to Canada's Algorithmic Impact Assessment
Everything federal practitioners need to know about the AIA, the Directive on Automated Decision-Making, scoping, pre-assessment preparation, impact levels, scoring mechanics, team composition, common mistakes, ongoing obligations, and the June 2026 compliance deadline — in one place.
Module 01
Foundations
What the AIA is, where it comes from, and why it matters for every federal team building or procuring automated decision systems.
Algorithmic Impact Assessment (AIA)
A mandatory, standardized questionnaire maintained by the Treasury Board of Canada Secretariat (TBS). It contains 65 risk questions and 41 mitigation questions that produce an impact score and corresponding level (I through IV). Federal institutions must complete an AIA before deploying an automated decision system that falls within the scope of the Directive and affects the rights, privileges, or interests of individuals or businesses.
Source: Treasury Board of Canada Secretariat — Algorithmic Impact Assessment tool
The AIA applies to systems developed internally as well as systems procured from vendors. Where a federal institution acquires an automated decision system from a third party, the institution remains responsible for determining whether the system is in scope and for completing, approving, and publishing the assessment where required. Vendors may provide technical details needed to answer the questionnaire, but accountability for the assessment remains with the federal institution.
Why This Guide?
Since Treasury Board introduced the Directive on Automated Decision-Making in 2019 and amended it following the fourth review on June 24, 2025, federal institutions have grappled with scattered documentation, evolving requirements, and tight compliance timelines. This guide consolidates everything practitioners need to know in a single resource.
The DADM and the AIA Are Not the Same Thing
Practitioners frequently use "DADM" and "AIA" interchangeably. They are related, but they are not the same thing.
The Directive on Automated Decision-Making (DADM) is a Treasury Board policy instrument. It creates the obligation for federal institutions to assess automated decision systems, apply safeguards proportionate to the impact level, provide transparency, ensure appropriate human involvement, and meet public reporting requirements.
The Algorithmic Impact Assessment (AIA) is the assessment tool used to operationalise that obligation. It is the questionnaire maintained by Treasury Board Secretariat to assess the potential impact of an automated decision system and assign an impact level from I to IV.
Appendix C of the Directive is the bridge between the two. It maps each impact level to the corresponding requirements, including peer review, approvals, transparency measures, and human involvement. In practice, the Directive establishes the obligations, the AIA supports the assessment, and Appendix C determines what additional controls apply at each impact level.
For the full Appendix C requirements, refer directly to the Directive on Automated Decision-Making.
Who Should Read This?
- Policy teams designing or procuring automated decision systems
- Data scientists and technical leads implementing AI/ML solutions
- Compliance and risk officers managing AIA fulfillment
- Project leads navigating the June 2026 deadline
- Vendors and consultants supporting federal departments
Scoping
Is Your System in Scope?
Before starting the AIA, teams need to assess whether the system falls within the scope of the Directive on Automated Decision-Making. Systems that use automation or analytics are not always in scope, and the boundaries are often misunderstood.
Core Scoping Questions
The official Guide on the Scope of the Directive on Automated Decision-Making should be treated as the primary source for scope analysis. In practice, departments usually need to examine several questions together:
1. Is the system being used by a federal institution subject to the Directive?
2. Is it being used within an administrative decision-making context?
3. Does it replace or assist elements of human judgment, discretion, or assessment?
4. Is it deployed in production or being prepared for production use?
5. Is it a new system, a procured system, or a significantly modified existing system?
These questions matter because scope is not limited to fully automated systems. A system may still be in scope where it supports human decision-makers by generating recommendations, scores, rankings, summaries, or other outputs that influence the decision-making process.
Common Scoping Mistakes
"A human makes the final decision, so we assumed we did not need an AIA."
Not necessarily. The official scope guidance makes clear that the Directive can apply to both full and partial automation, including cases where a human remains involved in the final decision.
"We procured the system from a vendor, so the vendor handles compliance."
Not necessarily. Procurement does not transfer accountability. The federal institution remains responsible for determining scope and for completing the AIA where required.
"The system only triages or prioritises files, it does not make decisions."
This depends on how the triage works and how it affects the downstream decision. If the system applies discretionary criteria or materially influences access, prioritisation, eligibility, or enforcement outcomes, teams should assess scope carefully.
"The system is for internal operations, so it is outside the Directive."
That cannot be assumed. Teams should review the current scope guidance carefully, especially where internal systems affect the rights, interests, treatment, or opportunities of individuals.
For the full scoping criteria and examples, see the Guide on the Scope of the Directive on Automated Decision-Making.
Preparation
Before You Start: Evidence and Documentation
Teams that begin the AIA without the right documentation usually produce weaker answers, incomplete mitigation responses, and unnecessary re-work. Preparing evidence in advance makes the assessment faster and more defensible.
Documentation You Should Have Ready
The AIA draws on information from multiple parts of a project. Before starting the questionnaire, teams should confirm what documentation already exists, what is in progress, and what still needs to be developed.
System and Algorithm Documentation
- A clear description of what the system does and where it fits in the decision-making process
- Documentation of the model, rules, or logic used by the system
- A record of the system's inputs, outputs, and intended use
- Clarity on who owns the system and who is accountable for its performance
Data Documentation
- What data the system uses, including whether personal information is involved
- Where the data comes from and how it is collected
- Security classification and retention expectations
- Any known limitations, quality issues, or bias concerns in the data
- Any data sharing, matching, or transfer arrangements relevant to the system
Privacy and Legal Inputs
- A completed or in-progress Privacy Impact Assessment, where applicable
- Input from legal services on authority, compliance, and regulatory considerations
- Confirmation of whether additional policy or legal authority may be required
Impact and Risk Inputs
- A clear view of who may be affected by the system and how
- Whether effects are reversible, long-lasting, or difficult to challenge
- Whether the system affects vulnerable, equity-denied, or otherwise disproportionately impacted groups
- Whether the system is novel, precedent-setting, or likely to attract public scrutiny
Consultation and Review Records
- Records of consultations already completed or planned
- GBA+ inputs where relevant
- Notes from privacy, legal, technical, program, and operational reviewers
- Any existing mechanisms for feedback, challenge, recourse, or escalation
Mitigation and Control Evidence
- Documentation of existing safeguards such as audit logging, monitoring, human review, recourse, and testing controls
- Evidence showing which controls are already operational versus still planned
- Supporting materials that demonstrate how mitigation claims can be substantiated if later reviewed
Why Preparation Matters
The strongest assessments are usually prepared by teams that gather core evidence before drafting responses. That reduces inconsistency, improves the quality of impact reasoning, and helps ensure that mitigation answers are supported by documentation rather than assumptions.
Module 02
The Questionnaire
Understanding the 65 risk questions and 41 mitigation questions that form the heart of any AIA assessment.
Structure
The AIA questionnaire is divided into two main sections, each with distinct purposes and scoring methods.
Risk Questions
65 Questions
Identify and assess risks introduced by the automated decision system. These determine the raw impact score.
Mitigation Questions
41 Questions
Evaluate safeguards and controls in place. When the mitigation score reaches 80% or more of the maximum, 15% is deducted from the raw impact score. The deduction reduces the score, not the impact level directly.
The Six Official Risk Areas
1. Project — Project details, reasons for automation, risk profile, and project authority. Max score: 22.
2. System — System capabilities, builder, configuration, and accountability. Max score: 17.
3. Algorithm — Explainability, learning behaviour, and use of protected characteristics. Max score: 15.
4. Decision — The administrative decision being made or supported. Max score: 8.
5. Impact — Effects on rights, equality, dignity, privacy, autonomy, health, well-being, economic interests, environmental sustainability, reversibility, and duration. Max score: 52.
6. Data — Use of personal information, security classification, bias controls, and accuracy measures. Max score: 55.
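As a quick sanity check on the figures above, the six risk-area maxima sum to the 169-point raw maximum used in the scoring calculation, and the Impact and Data areas alone account for 107 of those points. A minimal sketch (the dictionary name is ours, not part of the official tool):

```python
# Maximum scores per risk area, as listed in this guide.
AREA_MAX = {
    "Project": 22,
    "System": 17,
    "Algorithm": 15,
    "Decision": 8,
    "Impact": 52,
    "Data": 55,
}

print(sum(AREA_MAX.values()))                 # 169 — the maximum raw impact score
print(AREA_MAX["Impact"] + AREA_MAX["Data"])  # 107 — the two heaviest areas combined
```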
Module 03
Scoring Mechanics
How the raw impact score is calculated, adjusted, and mapped to impact levels.
The Algorithm
1. Calculate Raw Impact Score – Sum the weighted answers from 65 risk questions across six risk areas. The maximum possible raw impact score is 169 points.
| Risk Area | No. of Questions | Maximum Score |
| --- | --- | --- |
| Project | 10 | 22 |
| System | 9 | 17 |
| Algorithm | 9 | 15 |
| Decision | 1 | 8 |
| Impact | 20 | 52 |
| Data | 16 | 55 |
| **Raw Impact Score** | **65** | **169** |

The Impact and Data areas together account for 107 of the 169 possible points. These are the most heavily weighted areas in the AIA.
2. Calculate Mitigation Score – Sum the weighted answers from 41 mitigation questions across two mitigation areas. The maximum possible mitigation score is 75 points.
| Mitigation Area | No. of Questions | Maximum Score |
| --- | --- | --- |
| Consultations | 4 | 10 |
| De-risking and Mitigation Measures | 37 | 65 |
| **Mitigation Score** | **41** | **75** |

3. Apply Mitigation Deduction – If the mitigation score is 80% or more of the maximum attainable mitigation score, 15% is deducted from the raw impact score to produce the current score. If the mitigation score is below that threshold, the current score equals the raw impact score.
For a 75-point mitigation score, the 80% threshold is 60 points.
Example: A raw impact score of 90 with a mitigation score of 62 produces a current score of 76.5.
4. Map to Impact Level – Convert the current score to a percentage of the maximum raw impact score of 169. The percentage determines the impact level: Level I (0%–25%), Level II (26%–50%), Level III (51%–75%), Level IV (76%–100%).
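The four steps above can be sketched in a few lines of Python. This is an illustrative reconstruction of the mechanics described in this module, not the official implementation: the maxima (169 and 75), the 80% threshold, the 15% deduction, and the band boundaries are taken from this guide, while the function and variable names are our own.

```python
RAW_MAX = 169        # maximum raw impact score (65 risk questions)
MITIGATION_MAX = 75  # maximum mitigation score (41 mitigation questions)

def current_score(raw_impact: float, mitigation: float) -> float:
    """Apply the 15% deduction when mitigation reaches 80% of its maximum."""
    if mitigation >= 0.80 * MITIGATION_MAX:  # 80% of 75 = 60 points
        return raw_impact * 0.85             # deduct 15% from the raw score
    return raw_impact                        # below threshold: no deduction

def impact_level(score: float) -> int:
    """Map a current score to an impact level via its percentage of 169."""
    pct = 100 * score / RAW_MAX
    if pct <= 25:
        return 1  # Level I
    if pct <= 50:
        return 2  # Level II
    if pct <= 75:
        return 3  # Level III
    return 4      # Level IV

# The worked example from this module: raw score 90, mitigation score 62.
score = current_score(90, 62)      # 62 >= 60, so 90 * 0.85 = 76.5
print(score, impact_level(score))  # 76.5 is 45.3% of 169 -> Level II
```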
These scoring tables reflect the current structure of the public AIA tool and should be checked against the current official tool and Directive when preparing or updating an assessment.
Impact Level Ranges
Level I
0% to 25%
Little to no impact.
Level II
26% to 50%
Moderate impact.
Level III
51% to 75%
High impact.
Level IV
76% to 100%
Very high impact.
The percentage is calculated by dividing the current score by the maximum raw impact score of 169.
Example: 76.5 ÷ 169 = 45.3%, which falls in Level II.
Source: Treasury Board of Canada Secretariat. Scoring tables, impact level ranges, and peer review thresholds are based on the Algorithmic Impact Assessment tool page and the Guide to Peer Review of Automated Decision Systems. Impact level obligation summaries are based on Appendix C of the Directive on Automated Decision-Making. For the full authoritative requirements, refer directly to the Directive.
Module 04
Impact Levels Explained
Each impact level brings distinct compliance obligations and governance requirements.
Level I
Little to No Impact
Key Obligations:
- Notice: Provide plain language notice through all service delivery channels in use that the decision will be made or assisted by an automated decision system.
- Explanation: Publish a meaningful explanation of how the system works, including the role of the system, input data and its source, criteria used to evaluate data, output produced, and principal factors behind a decision.
- Training: Role-based training at a high level on how to use and explain the system.
- Human involvement: The system may make decisions and assessments without direct human involvement. Humans are involved in quality assurance and can intervene where appropriate.
- Approval: Assistant Deputy Minister responsible for the program.
Level II
Moderate Impact
Key Obligations:
- All Level I requirements.
- Peer review: Consult at least one qualified expert and publish the complete review or a plain language summary on a Government of Canada website.
- GBA+: Complete a Gender-based Analysis Plus in consultation with diversity and inclusion experts.
- Explanation: When a decision results in the denial of a benefit or service, or involves a regulatory action, provide the client with a detailed explanation including the principal factors and how the automated system output was used by human officers.
- Human involvement: Same as Level I.
- Approval: Assistant Deputy Minister responsible for the program.
Level III
High Impact
Key Obligations:
- All Level II requirements.
- Peer review: Same as Level II, with at least one expert.
- Training: Recurring role-based training covering technical aspects of the system, impacts on privacy, fairness, and human rights, and how to evaluate and override decisions where needed.
- Human involvement: The final decision must be made by a human. Decisions cannot be made without clearly defined human involvement during the decision-making process. Humans review decisions or recommendations for accuracy and appropriateness.
- Approval: Deputy Head.
Level IV
Very High Impact
Key Obligations:
- All Level III requirements.
- Peer review: Consult at least two qualified experts and publish the complete review or a plain language summary on a Government of Canada website.
- Human involvement: Same as Level III. The final decision must be made by a human.
- Approval: Treasury Board.
Module 05
The Compliance Process
Step-by-step: From system design to deployment and monitoring.
Scoping & Documentation
Identify whether your automated decision system is in scope of the Directive.
Complete the AIA
Answer all 65 risk questions and 41 mitigation questions as accurately as possible. Additional contextual fields may also be required but do not contribute to the score.
Peer Review
Projects assigned impact levels II, III, or IV must undergo peer review. For Levels II and III, at least one qualified expert must be consulted, and a single review report is required. For Level IV, at least two qualified experts must be consulted, each producing an independent report — at least two reports in total. Publish the complete review or a plain language summary on a Government of Canada website before the system goes into production.
Obtain Approvals
Approval level scales with impact: Assistant Deputy Minister for Levels I and II, Deputy Head for Level III, Treasury Board for Level IV.
Implement Safeguards
Deploy human oversight, client recourse, GBA+, and other controls.
Deploy & Monitor
Launch system and track performance metrics, audit trail, and impact assessments.
Report & Maintain
Publish results on Open Government portal (Level III+) and re-assess as needed.
Module 06
Who Needs to Be in the Room
The AIA should not be completed by one person in isolation. In practice, departments need input from program, legal, privacy, and technical staff to answer the questionnaire accurately and prepare for the requirements that follow from the impact level.
Recommended Team Composition
In practice, most departments need at least these four perspectives:
Program or Policy Owner
The person who understands the business context, the administrative decision being automated, and the clients who are affected. They are best positioned to answer the Project and Decision sections of the AIA and to coordinate with senior management for approvals.
Technical Lead
The person who can explain the algorithm, the data pipeline, the system architecture, and the outputs. They answer the System, Algorithm, and Data sections accurately. Without a technical lead, these sections get filled with assumptions instead of facts.
Privacy / ATIP Officer
Privacy and ATIP officials should be consulted early when the system uses or processes personal information. They help ensure alignment with the Privacy Act, existing Privacy Impact Assessments (PIAs), and Personal Information Banks (PIBs).
Legal Services
The Directive requires consultation with the department's legal services unit from the concept stage of a project. Legal counsel identifies risks related to procedural fairness, authority to automate, and recourse obligations. Engaging legal services after the system is built usually means key design decisions are already harder to change.
For Level III or IV Systems
For higher-impact systems, departments typically also involve:
- GBA+ and Diversity Specialists — The Directive requires completion of a Gender-based Analysis Plus during development or modification of the system. These specialists assess how the system might impact different population groups.
- Accessibility Specialists — If the system interacts with clients, accessibility requirements under the Accessible Canada Act apply.
- Client-Facing Operations Staff — The people who deliver the service and handle recourse requests. They understand how clients actually experience the decision and can identify impacts that program and technical staff miss.
Approval Requirements by Impact Level
Approval requirements increase with impact level under the Directive on Automated Decision-Making. For Levels I and II, the Assistant Deputy Minister responsible for the program approves the system. For Level III, the Deputy Head. For Level IV, Treasury Board approval is required. Teams should identify the responsible senior official early and make sure the AIA is reviewed by the right decision-makers before production.
For the full approval requirements by impact level, refer to Appendix C of the Directive on Automated Decision-Making.
Common Failure Mode
The most common failure is delegating the entire AIA to a single person — usually a junior analyst or a project coordinator — who does not have the authority or expertise to answer questions across all six risk areas. This produces incomplete answers, creates audit risk, and often results in a score that does not reflect the actual risk profile of the system. The AIA is designed to be completed by a team, not an individual.
Module 07
Common Mistakes That Inflate Your Impact Score
These are the errors that experienced practitioners see repeatedly. Most of them are avoidable with better preparation.
Misclassifying System Scope
Including manual processes in the automated decision boundary inflates the risk score unnecessarily. The AIA assesses the automated component, not the entire business process. If a human officer makes the final call based on a system recommendation, the scope is the recommendation engine — not the officer's judgment. Define the boundary clearly before starting the assessment.
Answering Data Questions Without Evidence
The Data section carries the heaviest weight in the AIA (55 out of 169 possible points). Departments that claim they test for bias or validate data quality but cannot produce documentation for it will score poorly on mitigation. The mitigation questions ask about controls that are actually in place — not controls you plan to implement. Document your data governance before starting the AIA, not after.
Leaving Recourse and Explanation Too Late
Higher-impact systems require stronger explanation, human involvement, and review measures. Many teams discover late in the process that client-facing recourse, explanation, or escalation steps are underdesigned. Build these into the service design early rather than treating them as an afterthought.
Using an Outdated Questionnaire Version
The Directive on Automated Decision-Making and the AIA are reviewed periodically. Always confirm that you are using the current official questionnaire and current guidance before you start or update an assessment.
Treating the AIA as a One-Time Exercise
Published AIAs should be revisited on a scheduled basis and whenever the functionality or scope of the system changes. For higher-impact systems especially, the AIA should be treated as a living document. Any material change to the system — new data sources, expanded scope, algorithm updates — should trigger a re-assessment.
Missing the 80% Mitigation Threshold
The mitigation deduction is the single largest score adjustment available. If your mitigation score reaches 60 out of 75 points (80% of the maximum), 15% is deducted from the raw impact score. This can be the difference between Level III and Level II. But you only get credit for mitigation measures that are documented and in place — not planned or aspirational ones. Prepare your mitigation evidence before completing the assessment.
Module 08
After Submission — Ongoing Obligations
Completing and publishing the AIA is not the end of the process. The Directive creates ongoing expectations for monitoring, review, and updates over the life of the system.
Ongoing Review
Departments should revisit the assessment when the system changes, when risk assumptions no longer hold, or when governance requirements are updated. Higher-impact systems generally require closer monitoring and more disciplined review practices.
The review should verify that the AIA still accurately reflects the system as it currently operates. The review frequency can depend on:
- The nature of the system and the context of its deployment
- The volume of decisions being made
- The number of clients affected
- Whether the system operates in a rapidly changing environment
- The impact level of the system
Material Changes
Teams should revisit the AIA when the functionality or scope of the automated decision system changes. Examples of material changes include:
- Adding new data sources or changing data collection methods
- Modifying the model, rules, or algorithm
- Expanding to new business lines or client populations
- Changing the degree of human involvement in the decision
- Changing the decision criteria the system applies
Mitigation measures can reduce the current score used in the AIA calculation, but they do not directly change the impact level bands themselves. If the system changes materially, teams should complete an updated AIA using current information and confirm whether the impact level has changed.
The Federal AI Register
The Government of Canada published a Minimum Viable Product version of the AI Register on November 28, 2025. It was assembled from existing information sources, including Algorithmic Impact Assessments. As the register evolves, departments will likely need to align public system information across related transparency mechanisms. Maintaining accurate, up-to-date AIAs now reduces the reconciliation burden later.
Documentation and Audit Trails
Teams should maintain clear records of assessment decisions, system changes, testing, monitoring, and supporting evidence. A complete audit trail from the initial AIA through every subsequent review and update makes it easier to defend the assessment, support reviews, and keep published information current over time.
Need a structured way to complete, review, and maintain an AIA? AIA Simplified helps teams work with the current questionnaire, prepare exports, and keep assessment records organized.
Critical
June 2026 Compliance Deadline
June 24, 2026
All systems deployed or procured before June 24, 2025 must fully comply with Directive requirements. This applies to federal institutions subject to the Directive on Automated Decision-Making.
New systems deployed after June 24, 2025 must be in compliance from day one.
What Must Be Done By June 2026
- ✓ Complete or update AIA questionnaire
- ✓ Conduct peer review with qualified experts
- ✓ Obtain required management approvals
- ✓ Implement all impact-level-specific controls — see the full compliance requirements
- ✓ Publish results (Level III systems on Open Government portal)
- ✓ Establish human oversight and recourse mechanisms
- ✓ Document data governance and audit trail practices
For a step-by-step walkthrough of the full AIA process, read How to Complete a Canadian Algorithmic Impact Assessment. New to the framework? Start with What Is an AIA.
Key Fact
No Pass/Fail: Only Impact Levels
The AIA does not pass or fail a system; it assigns an impact level (I–IV) that determines which compliance obligations apply.
Critical Timeline
Cutoff Date: June 24, 2025
Systems deployed before this date have until June 24, 2026 to comply.
Official Government Resources
Start Your Official AIA Process
Use the official Government of Canada resources to begin your assessment, confirm scope, review directive requirements, and prepare for peer review.