AI News Roundup
xAI’s Grok Faces Scrutiny
Elon Musk’s xAI has restricted the use of its Grok AI system following revelations that Grok was used to “undress” photos of women and girls online. Reports from the Financial Times indicate that users on the platform X employed Grok to digitally remove clothing from images, prompting significant backlash from governments around the world.
The company has limited Grok’s image generation and editing features to paid subscribers but has not removed its ability to perform explicit photo edits. Grok was initially designed with fewer restrictions than its competitors. Criticism has mounted, however, including a directive from the European Commission to retain documents concerning Grok, and calls from three U.S. senators for Grok and X to be removed from U.S. app stores.
Musk stated that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” while criticizing governmental restrictions as a suppression of free speech.
OpenAI Backs AI Regulation in California
In another development, OpenAI has announced its support for a California ballot measure aimed at regulating how AI chatbots interact with children. OpenAI had previously backed its own ballot measure, which would have competed with a stricter proposal from the nonprofit Common Sense Media.
Both organizations have now agreed to collaborate on a compromise measure designed to give parents more control over their children’s interactions with AI chatbots. Notably, the new measure does not include a ban on cell phones in classrooms or provisions allowing parents and children harmed by AI chatbots to sue AI companies.
OpenAI is set to contribute at least $10 million to support the measure, which requires 875,000 signatures to qualify for the ballot in the upcoming November election. Signature collection is scheduled to begin early next month.
Research on AI Model Memorization
Recent research has revealed that AI models “memorize” their training data more often than previously understood. A preprint paper released by researchers at Stanford and Yale indicated that several popular AI systems, including OpenAI’s ChatGPT and Anthropic’s Claude, can reproduce verbatim excerpts from the texts they were trained on.
This “memorization” phenomenon was observed when Claude, prompted by researchers, produced nearly complete texts from well-known books, including George Orwell’s “Nineteen Eighty-Four” and the first “Harry Potter” novel. AI companies have generally denied that their models store copies of training data, as such an admission could expose them to legal liabilities for copyright infringement.
This new information sheds light on the operational mechanics of AI models, often seen as a black box, and may influence future legal debates surrounding AI.
Ford’s AI Assistant Launch
The Ford Motor Company is preparing to launch an AI assistant for several of its car models. At the annual Consumer Electronics Show (CES) in Las Vegas, AI was prominently featured among new gadgets, with Ford executive Doug Field stating that the goal is to personalize the driving experience through AI.
For instance, a driver could take a photo of an object they intend to haul, and the AI would assess whether it fits in the truck bed. The AI assistant is expected to roll out in the Ford and Lincoln smartphone apps later this year, with integration into new car models planned for 2027.
SEC Approves AI in Proxy Voting
A U.S. Securities and Exchange Commission (SEC) official has endorsed the use of AI by investment advisors in making proxy voting decisions. Brian Daly, director of the SEC’s Division of Investment Management, emphasized the potential of AI tools such as large language models to assist advisors without replacing human judgment.
This marks a shift from the more cautious stance of the previous SEC Chairman, Gary Gensler. A recent executive order from President Trump has instructed the SEC to review regulations concerning proxy advisors, particularly in relation to diversity, equity, and inclusion (DEI) and environmental, social, and governance (ESG) policies.
Daly advised attendees to “stay tuned” for the results of the SEC’s inquiry into these significant matters.