Op/Ed: AI Literacy Can Build a More Responsible Future in North Carolina
The single most important ingredient in responsible AI is literacy. Everything else — accountability, governance structures, tooling, and councils — depends on it.
North Carolina has jumped into action to build that literacy foundation at the state level. Governor Josh Stein’s Executive Order on AI and the formation of the North Carolina AI Leadership Council signal an understanding that leadership in responsible AI requires a diverse coalition of experts and community members. This approach moves beyond abstract principles to focus on the practical application of AI in a way that benefits all citizens.
Establishing a Shared Responsibility Model
The first step in building a responsible AI framework is recognizing that accountability must be shared. It cannot rest solely with engineers and data scientists. In North Carolina, the AI Leadership Council is championing a model of shared responsibility that involves leadership, legal, procurement, security, HR, educators, and the communities directly affected by AI decisions.
The Council’s priorities reflect a crucial shift from theory to practice. The focus is not just on writing principles but on strengthening governance for how AI is used in public services. This involves investing in AI literacy and workforce readiness so decision makers understand what AI can and cannot do, where risks emerge, and who is accountable when things go wrong.
That investment matters because earning public confidence in AI is not a purely technical problem; it is a sociotechnical one, and its success hinges on people. Without a population that understands AI's capabilities and risks, there can be no true accountability. Without accountability, there is no trust, and without trust, AI's value will stall.
Building Systems for Transparency and Accountability
Principles alone are insufficient. While many organizations measure AI success in terms of speed and productivity, a responsible approach demands more. If you only measure speed, you will only get speed. But if you measure accountability, fairness, reliability, and real-world impact, you will build systems that last.
This is where transparency becomes more than a compliance requirement. Governance must enable organizations to show their work: where data originates, how models are trained, what assumptions are embedded, and where human accountability sits.
If an outcome cannot be traced or explained, the institution is exposed — not only legally, but reputationally. The responsibility of governance includes preventing outcomes where a system is technically compliant yet still undermines public trust or institutional values. Ownership must be explicit, intended outcomes must be tracked against actual outcomes, and teams must be empowered to pause, question, and intervene.
Creating an Interdisciplinary and Inclusive AI Ecosystem
A common barrier to shared responsibility is what can be called the "pandemic of not belonging": the assumption among non-technical professionals that AI governance is reserved for technical experts. They see AI as something that happens "over there," in technical teams, rather than as a set of decisions that shape their own domains. To counter this, leaders must make clear that lived experience and domain expertise are essential to defining what "good" looks like.
The next generation already intuits this. In a recent session with UNC Chapel Hill Honors students, when asked which roles, beyond engineers, needed a seat at the table to shape AI, their answers clustered around teachers, philosophers, psychologists, doctors, historians, policymakers, and social workers. They described an interdisciplinary, whole-of-society roster, signaling that our curricula and policies must catch up to their expectation that AI is everyone’s business.
To achieve this, North Carolina has an opportunity to build a comprehensive AI literacy ecosystem:
- K–12 Education: Integrate AI curriculum across computer science, social studies, and humanities to help students understand not just the math, but the impact on privacy, bias, power, and democracy.
- Higher Education: Connect data science and engineering with ethics, law, public policy, and health, so that future nurses, teachers, and entrepreneurs all see themselves as part of AI decision-making.
- Informal Learning: Use libraries, community centers, and nonprofits as trusted spaces for AI literacy where residents can ask questions, experiment safely, and see their concerns reflected in the conversation.
Leading the Charge Toward a Responsible Future
North Carolina is already home to emerging proof points of this model. North Carolina Central University has established the first AI Academy at an HBCU, pairing technical training with a Law and Policy program. Community colleges and universities across the state are piloting AI courses that link computing with healthcare, agriculture, manufacturing, and more, while informal learning partners like public libraries and youth organizations are exploring AI workshops and badges in after-school programs.
These efforts do more than transfer knowledge; they foster a sense of belonging and empowerment. They reach residents who may never enroll in a formal course, but who are still affected by algorithmic decisions.
By building literacy early and broadly, the state can cultivate leaders who understand not only how AI works, but how it shapes power, equality, and accountability. The goal is not to slow innovation, but to make it durable, shared, and worthy of our trust.