AI is showing up in health care faster than most people expected. That is why governments, educators, and employers are building rules to keep patients safe, protect privacy, and make sure students and practitioners stay accountable. Here is a simple breakdown of how different levels of government are working to incorporate AI into the safe and effective delivery of health care.
Federal Regulation and National Initiatives
Pan-Canadian AI for Health principles (AI4H)
Canada’s federal, provincial, and territorial health leaders created the Pan-Canadian AI for Health (AI4H) Guiding Principles to support safe and ethical use of AI across health systems. The principles focus on things like patient safety, transparency, accountability, privacy, and equity. You can read them here: AI4H Guiding Principles (Health Canada).
Privacy Rules Apply Even for “New” AI
The principles above are based on existing health care laws and privacy acts, and Canada’s privacy regulators have made it clear that those laws still apply to generative AI. In late 2023, the Office of the Privacy Commissioner of Canada released practical principles for “responsible and privacy-protective” generative AI, including being transparent, limiting data use, and protecting personal information.
AI-specific Legislation in the Pipeline (AIDA)
Policymakers proposed an Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, aimed at regulating “high-impact” AI systems and setting oversight expectations. The government published an explanation of how AIDA would work here.
However, Bill C-27 (including AIDA) died on the order paper when Parliament was prorogued in early 2025, so Canada still does not have a single, comprehensive AI law in force.
Alberta’s Role (Since Health Care Is Provincial)
Alberta’s health privacy rules already shape what AI tools can be used at work. The Health Information Act (HIA) sets rules for how health information must be collected, used, disclosed, and protected.
A very practical Alberta example is the Alberta privacy commissioner’s guidance on “AI scribes” (tools that record and summarize patient visits). The guidance explains what custodians must include in a Privacy Impact Assessment and what safeguards are expected under HIA.
For students, the takeaway is simple: in Alberta workplaces, consumer AI tools (ChatGPT, Gemini, Claude, etc.) are often not allowed to handle patient data. Employers need health-grade tools, clear contracts, and strong security controls.
How These Rules Show Up at School and Work
This is where students will notice AI regulation most, because it affects what is allowed in class and on practicum.
Common approaches include:
- Using approved, secure platforms designed for health care workflows, not open public chatbots
- Banning entry of any patient-identifying information into non-approved tools
- Requiring Privacy Impact Assessments before new AI systems go live (common in public health care)
- Setting academic integrity rules so students still learn the core skills (documentation, terminology, critical thinking)
In plain terms, AI is being managed like any other clinical risk. If a tool is not private, not explainable, or not accountable, it does not belong in patient care.
Explore a New Future in Healthcare with the Latest Technology at ABES
Want a health care career that stays future-ready and human-first? Explore ABES programs in Alberta, where we train the next generation of healthcare support workers. Contact us today to learn more about admissions!


