AI Developments: Grok Controversy, OpenAI’s Regulation Support, and New Research Insights

AI News Roundup

xAI’s Grok Faces Scrutiny

Elon Musk’s xAI has restricted the use of its Grok AI system following revelations that Grok was used to “undress” photos of women and girls online. Reports from the Financial Times indicate that users on the platform X employed Grok to digitally remove clothing from images, prompting backlash from governments around the world.

The company has limited Grok’s image generation and editing features to paid subscribers but has not disabled the explicit photo-editing capability itself. Grok was initially designed with fewer restrictions than its competitors. However, criticism has mounted, including a directive from the European Commission to retain documents concerning Grok and calls from three U.S. senators for Grok and X to be removed from U.S. app stores.

Musk stated that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” while simultaneously criticizing governmental restrictions as a suppression of free speech.

OpenAI Backs AI Regulation in California

In another development, OpenAI has announced its support for a California ballot measure aimed at regulating how AI chatbots interact with children. OpenAI had previously backed its own ballot measure, which would have competed with a stricter proposal from the nonprofit organization Common Sense Media.

Both organizations have now agreed to collaborate on a compromise measure designed to give parents more control over their children’s interactions with AI chatbots. Notably, the new measure does not include a ban on cell phones in classrooms or provisions allowing parents and children harmed by AI chatbots to sue AI companies.

OpenAI is set to contribute at least $10 million to support the measure, which requires 875,000 signatures to be placed on the ballot for the upcoming November election. Signature collection is scheduled to commence early next month.

Research on AI Model Memorization

Recent research has revealed that the phenomenon of AI models “memorizing” their training data is more prevalent than previously understood. A preprint paper released by researchers at Stanford and Yale indicated that several popular AI systems, including OpenAI’s ChatGPT and Anthropic’s Claude, can reproduce verbatim excerpts from the texts they were trained on.

This “memorization” phenomenon was observed when Claude, prompted by researchers, produced nearly complete texts from well-known books, including George Orwell’s “Nineteen Eighty-Four” and the first “Harry Potter” novel. AI companies have generally denied that their models store copies of training data, as such an admission could expose them to legal liabilities for copyright infringement.

This new information sheds light on the operational mechanics of AI models, often seen as a black box, and may influence future legal debates surrounding AI.

Ford’s AI Assistant Launch

The Ford Motor Company is preparing to launch an AI assistant for several of its car models. At the annual Consumer Electronics Show (CES) in Las Vegas, AI featured prominently among the new gadgets on display, with Ford executive Doug Field stating that the goal is to personalize the driving experience through AI.

For instance, a driver could take a photo of an object intended for their truck, and the AI would assess whether it fits in the truck bed. The rollout of the AI assistant is expected in the Ford and Lincoln smartphone apps later this year, with plans for integration into new car models by 2027.

SEC Approves AI in Proxy Voting

A U.S. Securities and Exchange Commission (SEC) official has signaled approval of the use of AI by investment advisors for making proxy voting decisions. Brian Daly, director of the SEC’s Division of Investment Management, emphasized the potential of AI tools such as large language models to assist advisors without replacing human judgment.

This marks a shift from the more cautious stance of the previous SEC Chairman, Gary Gensler. A recent executive order from President Trump has instructed the SEC to review regulations concerning proxy advisors, particularly in relation to diversity, equity, and inclusion (DEI) and environmental, social, and governance (ESG) policies.

Daly advised attendees to “stay tuned” for the results of the SEC’s inquiry into these significant matters.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...