Innovating Responsibly: The Future of AI Development

Thinking Like a Start-up About Responsible AI Development

Entrepreneurial thinking has a long history of transforming industries and reshaping everyday life. When educator and innovator Sir Rowland Hill first proposed the postage stamp, a piece of paper ‘covered at the back with a glutinous wash’, the UK’s Postmaster General dismissed it as the most extravagant of all the “wild and visionary schemes” he had ever heard of. Yet Hill’s idea was rooted in practicality: people avoided the postal system because it was slow and costly.

By introducing pre-paid postage, Hill aimed to make mail affordable and accessible. Despite initial opposition, the Penny Black, a stamp featuring Queen Victoria, debuted in the UK in 1840 and quickly became a success. Over 70 million letters were sent that year, and volume tripled within two years. Looking back, the impact was not merely incremental but exponential.

Today, start-ups with a similar mindset are transforming sectors from manufacturing to media, communications, customer service, and the arts. The appliedAI Institute for Europe estimates that the EU is home to around 6,300 AI start-ups, 10.6 percent of which focus on generative AI. This growth raises pressing questions: how can the start-up mindset be sustained while ensuring responsible development, and can AI tools genuinely benefit workers?

Worker Transparency: Common Challenges of AI Models

Protecting human interests has become a focal point of regulation addressing the distinct challenges posed by big and small AI models. Big models rely on neural networks whose performance improves with more data; some large language models (LLMs), such as those behind ChatGPT, are reported to exceed 200 billion parameters, which makes them expensive to train and run. Small models are fine-tuned for specific tasks or domains with far fewer parameters, yet they can hold their own: Phi-3-mini and its open-source counterpart SmolLM, for instance, rival the performance of models 25 times larger.
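To give a sense of how approachable these small models are in practice, here is a minimal sketch of loading and prompting one through the Hugging Face transformers library. The checkpoint name below is an assumption based on the publicly released Phi-3-mini, not something specified in this discussion, and any comparably sized open model could be swapped in.

```python
# Minimal sketch: prompting a small (~3.8B-parameter) language model locally.
# Assumes the transformers library and the checkpoint below are available;
# older transformers releases may additionally need trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed public checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "In one sentence, why might a company choose a small AI model?"
inputs = tokenizer(prompt, return_tensors="pt")

# Small models make short local generations feasible without a GPU cluster.
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point of the sketch is scale: the entire workflow fits on a single workstation, which is what enables the fast testing and iteration cycles discussed below.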

Transparency is essential for fostering trust and understanding among workers. Without clear insight into how AI models function, workers may grow sceptical or distrustful of AI-supported decision-making. Officials from the City of Amsterdam suggest that a bottom-up approach has proven crucial for analyzing and aligning AI initiatives sector by sector. Exposure to AI varies considerably: roles that depend heavily on information, or that sit within public administration, face far more change than roles in hospitality or manufacturing.

Some companies advocate including workers in the development process by empowering them to experiment with AI tools directly. The concept of the ‘citizen data scientist’, which enables workers to engage hands-on with AI, exemplifies how inclusivity can demystify AI and build trust. Allowing workers to test and adapt these tools not only makes the technology more accessible but also reduces apprehension about its role in the workplace.


Designing for Augmentation, Not Replacement

Designing AI to enhance human roles rather than replace them was the roundtable’s main theme. The consensus leaned toward a model of augmentation in which AI takes on repetitive tasks, freeing workers to focus on creative, strategic, or emotionally driven work. Involving workers directly in AI adoption, inviting their feedback, and addressing fears of redundancy enhances job satisfaction and morale.

“Depending on their identity, people may be more willing to adapt,” an HR expert shared. “The question is how to design better organizations while remaining conscious of the threats AI poses in the workplace.” This approach ensures that AI aligns with workers’ needs, creating a supportive rather than disruptive presence.

Small AI models, developed by start-ups and large companies alike, typically focus on a single task or a narrow scope, allowing faster testing, iteration, and refinement. Benefits include improved productivity and streamlined processes. Yet even small AI models should be built around responsible design principles such as privacy by default. “If a model can understand your communications history with a coworker and draft an email response, it’s a step forward,” a customer service expert noted. “But in the end, it’s the human expertise that makes the final product valuable.”
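One way to read ‘privacy by default’ in the email-drafting scenario above is that identifying details should be masked locally before any text reaches a model, and that the model’s output should remain a draft for human review. The sketch below is purely illustrative under those assumptions; the redaction patterns and the draft_reply stand-in are hypothetical, not an API referenced by the participants.

```python
import re

# Hypothetical illustration of privacy-by-default preprocessing:
# mask obvious identifiers locally before text is sent to any model.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),  # email addresses
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),     # phone numbers
]

def redact(text: str) -> str:
    """Replace likely personal identifiers with placeholder tokens."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def draft_reply(history: str) -> str:
    """Hypothetical stand-in for a model call; only redacted text goes in."""
    safe_history = redact(history)
    # ...send safe_history to the model of choice and return its draft...
    return f"Draft based on: {safe_history}"

thread = "Hi, reach me at jane.doe@example.com or +31 20 123 4567."
print(draft_reply(thread))  # the human still reviews and edits the draft
```

The design choice here mirrors the expert’s point: the model only ever sees sanitized context, and the human who knows the relationship supplies the judgment that makes the final message valuable.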

Big AI models face challenges of scale and oversight. “Europe is already seen as too bureaucratic, even if start-ups are not highly disruptive,” a portfolio manager argued. “Start-ups shouldn’t be over-regulated; let’s focus on bigger fish while fostering grassroots innovation.” Some participants agreed that this differentiation should come to the fore in norm-setting and regulatory enforcement, especially as frameworks like the EU AI Act are applied at the national level.

Aligning AI Models with Needs in the Workforce

Start-ups are powered by an entrepreneurial mindset that provides lessons in adaptability, experimentation, and scaling. Just as Hill’s postage stamp transformed communications by making postal delivery more accessible and reliable, AI development is reshaping the core foundations of value generation. So, how can organizations maintain the start-up spirit of innovation while navigating the complexities of responsible AI?

For big AI, rigorous oversight and transparency are essential, given these models’ potential impacts across industries and in broad societal areas such as education, neuroscience, healthcare, and public safety. While small AI models are generally lower risk, they should integrate human-centered approaches to maintain trust and inclusivity.

“Automation is a long-term investment. High-frequency tasks will be targeted. Fraud detection, for example, is often done using machine learning,” an AI consultant suggested. “The attitude is that we should be scared but excited, being realistic about models’ use and asking organizations about their trust and culture.” Workers remain central to this transformation. To enhance productivity without eroding trust, organizations need clear guidelines on where AI models are deployed and what impact they have. This means transparent communication, promoting augmentation over replacement, and supporting upskilling opportunities that align AI with human needs in the workforce.
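To make the consultant’s fraud-detection example concrete, the sketch below shows the general shape of such a system: an anomaly detector trained on normal transactions that flags outliers for human review. The two features and all values are synthetic placeholders; production systems rely on far richer features, continuous monitoring, and human oversight.

```python
# Minimal sketch of ML-based fraud screening with scikit-learn.
# The two features (amount, hour of day) and all values are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" transactions: modest amounts, daytime hours.
normal = np.column_stack([
    rng.normal(50, 15, size=500),  # amount in euros
    rng.normal(14, 3, size=500),   # hour of day
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# Score two new transactions: one typical, one suspicious.
candidates = np.array([
    [45.0, 13.0],   # ordinary purchase
    [900.0, 3.0],   # large amount at 3 a.m.
])
flags = detector.predict(candidates)  # 1 = normal, -1 = flagged
print(flags)
```

Consistent with the augmentation theme above, the detector only flags candidates for review; the decision to block a transaction or contact a customer stays with a human.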
