Quebec’s New AI Guidelines for Higher Education

In a significant move for the educational sector, Quebec has unveiled an AI policy specifically targeting universities and Cégeps. The policy arrives nearly three years after generative AI tools such as ChatGPT entered widespread use, marking the province’s response to the evolving role of technology in education.

Overview of the AI Policy

Last month, the Quebec government published two essential policy documents regarding the use of generative AI in higher education institutions. The first document is a framework that outlines guiding principles and ethical considerations, while the second is a practical guide designed to assist these institutions in implementing AI policies.

“AI is now part of the higher education landscape, and we must find ways to adapt and take advantage of this new technology,” stated Pascale Déry, Quebec’s Minister of Higher Education, during the announcement of the documents.

Consultation and Development

The policies were developed by IVADO, an organization focused on AI training and knowledge mobilization, following extensive consultations with students and professors and a review of existing AI policies at various post-secondary institutions. According to Réjean Roy, the director of training at IVADO, the documents aim to foster discussion on AI governance in the post-secondary sector.

Key Principles and Recommendations

Among the principles highlighted, the importance of maintaining human oversight while utilizing AI tools was emphasized. The guidelines advocate for broader consultations within institutions and stress the significance of accessibility and ethical considerations regarding AI usage.

Furthermore, the documents provide practical examples of how educational institutions can leverage generative AI for both classroom activities and student services, including psychosocial support and advising.

Implementation at Concordia University

At Concordia University, the Centre for Teaching and Learning (CTL) had already released its own guidelines for teaching with generative AI. These guidelines focus on privacy and ethics and advise instructors to state their AI policies explicitly in course syllabuses. While the CTL does not recommend specific AI tools, its guidelines discourage the use of unauthorized tools designed to detect AI-generated plagiarism.

Instructors such as Gregor Kos, from the chemistry and biochemistry department, have begun incorporating AI into their coursework while ensuring students remain accountable for their work. Kos has observed that students, while encouraged to use AI, must also engage in traditional research methods, which deepens their learning.

Student Perspectives on AI Tools

Concerns regarding generative AI’s impact on students’ writing and reasoning skills are prevalent among educators. Stephen Yeager, a professor in the English department, has expressed reservations about the potential detrimental effects of AI on student learning outcomes. He advocates for a balanced approach that includes best practices for AI use rather than a strict prohibition.

Students have also shared their experiences with generative AI tools. For instance, undergraduate student Lena Palacios raised concerns about the biased outputs generated by tools like DALL-E, which she found problematic when creating culturally sensitive content.

Future Considerations

Benoit Lacoursière, president of the Fédération nationale des enseignantes et des enseignants du Québec (FNEEQ), underscored the need for additional training and resources as institutions adopt AI technologies. He highlighted that effective integration of AI requires funding, which has diminished in recent years due to budget cuts.

As the educational landscape continues to evolve with the integration of AI, these policies aim to ensure that institutions can effectively adapt to new technologies while fostering discussion surrounding their ethical implementation.
