Workday’s Bold Future With Responsible Agentic AI
As AI enters its next evolution—agents—Workday’s responsible AI (RAI) governance framework continues to guide the company’s decisions about product development. AI agents, which are systems designed to translate user intent into coordinated action, open up a world of potential in the workplace. With these new capabilities come new considerations about responsible AI.
True innovation means bringing real, positive value to customers while avoiding harm, which is why Workday's commitment to ethical and responsible AI is now more crucial than ever. It all comes back to Workday's foundational values: Integrity and Innovation. The company's AI Ethics Principles serve as a guiding star, ensuring the AI it builds is trustworthy. Responsible AI is not an add-on; it's at the core of Workday's strategy.
Focus on Both Tech and People
At Workday, the approach to responsible AI is holistic, or sociotechnical. This means looking at the full picture: not just the technology but how real people use it in real workplaces. Guardrails are designed around how AI interacts with teams, processes, and the entire organization. Agentic AI introduces a new dynamic, moving from assistance to autonomous action, which prompts Workday to adapt its existing frameworks to address the new risks that come with that shift.
Technical Considerations
Agentic AI ushers in a major technical shift. To govern it, Workday focuses on three core areas—risks that are also central to emerging regulations like the EU AI Act:
- Autonomy: Workday deliberates on which actions agents can take independently, prioritizing human rights and safety and ensuring that consequential decisions, such as hiring, remain under human control.
- Transparency: Agent actions must be visible and traceable. An audit trail is essential for customers and developers to reconstruct behavior, identify misalignments, and ensure accountability.
- Security: Agents with access to enterprise tools pose security risks. To limit harm, agents operate on a “least privilege” model, inheriting only the existing permissions of human users.
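The "least privilege" and audit-trail ideas above can be sketched in a few lines. This is a minimal illustration, not a Workday API: the `Agent` class, its fields, and the permission names are all hypothetical, chosen only to show an agent that can never exceed the permissions of the human it acts for, while logging every attempted action.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Agent:
    """Illustrative agent that inherits a human user's permissions
    ("least privilege") and records every action in an audit trail."""
    user: str
    user_permissions: frozenset  # rights of the human the agent acts for
    audit_log: list = field(default_factory=list)

    def act(self, action: str, required_permission: str) -> bool:
        # The agent may only act if its human user already holds the right.
        allowed = required_permission in self.user_permissions
        # Every attempt, allowed or denied, is logged so behavior
        # can later be reconstructed and audited.
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": self.user,
            "action": action,
            "allowed": allowed,
        })
        return allowed

agent = Agent(user="jdoe", user_permissions=frozenset({"read_reports"}))
agent.act("read_reports", "read_reports")      # permitted: user holds this right
agent.act("approve_hiring", "approve_hiring")  # denied: agent cannot exceed the user
```

Because the agent inherits permissions rather than holding its own, revoking a right from the human user automatically revokes it from the agent as well.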
Social Considerations
Technical guardrails are only part of the equation. Adopting agentic AI also means rethinking its impact on people, including how job roles change, and collaboration between humans and agents creates risks of its own. Social governance therefore focuses on three key areas:
- Accountability: Clarity on responsibilities is vital. Humans must retain control of consequential decisions while understanding their role in overseeing the technology.
- Evaluations: Evaluations go beyond output quality. Testing agent behavior involves assessing how agents interpret instructions, make decisions, and respond to issues.
- Change management: Guiding employees through the AI journey is crucial for a smooth transition and building trust. The focus is on human-centered values and clear safety guardrails.
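The evaluation idea above, testing behavior rather than just outputs, can be sketched as a toy check. All names here are hypothetical, assuming an agent that emits a step-by-step decision trace: the evaluation verifies not what the agent produced, but whether every consequential step was escalated to a human, keeping accountability where the text says it belongs.

```python
def evaluate_agent_behavior(decision_trace: list) -> dict:
    """Toy behavioral evaluation: beyond checking final outputs, verify
    that every consequential step was escalated to a human reviewer."""
    consequential = [s for s in decision_trace if s["consequential"]]
    escalated = [s for s in consequential if s["escalated_to_human"]]
    return {
        "steps": len(decision_trace),
        "consequential_steps": len(consequential),
        "all_consequential_escalated": len(escalated) == len(consequential),
    }

# Hypothetical trace: a routine step plus a consequential hiring recommendation.
trace = [
    {"step": "summarize_resumes", "consequential": False,
     "escalated_to_human": False},
    {"step": "recommend_hire", "consequential": True,
     "escalated_to_human": True},
]
report = evaluate_agent_behavior(trace)
```

A real evaluation suite would add many more behavioral checks (how instructions were interpreted, how errors were handled), but the structure is the same: assert properties of the agent's process, not only its answer.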
Leaning Into the Future, Respecting Our Core
Workday is committed to building responsible and trustworthy AI agents, grounded in foundational values and ethical principles. A governance framework intentionally built for agility supports this commitment, allowing core principles to extend confidently into the agentic space. As technology, policy, and regulations evolve, Workday remains on solid ground, with no need to fundamentally rethink its approach.
By staying true to integrity and innovation, Workday's principled approach to agentic AI is poised to unlock significant potential and transform the world of work for the better.