How to Regulate AI by Learning from the United States
Artificial intelligence conjures diverse images, from robotic humanoids to scenes from Chaplin’s Modern Times to tools like ChatGPT that we use every day. AI is already an everyday reality in the United States, woven into many aspects of our lives. Andrew Ng has called artificial intelligence “the new electricity,” a technology that will permeate every area of human activity. That promise has captured the attention of investors: by one estimate, investment in AI will exceed $500 billion by 2026. This growth raises ethical challenges and makes it urgent to establish appropriate legal frameworks by sector and by level: local, state, national, and international.
Areas of AI Integration in Everyday Life
Here are four areas where AI is integrated into everyday life in the United States, along with the regulations specific to those sectors:
Transportation: Autonomous Vehicles
In several California cities, robotaxis, autonomous vehicles that carry passengers with no driver at the wheel, are already in operation. Equipped with cameras, radar, and machine-learning systems, these vehicles are becoming increasingly common in Los Angeles and other parts of the country.
Retail: Cashier-less Stores, “Just Walk Out” (Take It and Go)
In cities such as Washington, D.C., and Los Angeles, Amazon operates stores under the “Just Walk Out” concept. Customers enter by identifying themselves with the palm of their hand, place their products (bread, milk, rice, etc.) directly into their bags or baskets, and a network of cameras and sensors automatically registers the purchases. The customer then receives the receipt by email; there are no cash registers and no lines. Naturally, this requires registering in advance with personal and financial data.
Logistics: Distribution Centers
Amazon’s mega-distribution centers represent perhaps the most spectacular interaction between AI and humans. The largest, located in Ontario, California, spans more than 400,000 square meters. These warehouses function as “living organisms”: thousands of mobile robots travel back and forth along internal corridors, carrying products to and from human operators. The AI system predicts traffic, optimizes inventories, and collaborates with the staff. An Amazon executive has said that the goal of AI is not to replace human labor but to facilitate it and to create new jobs integrated into the system.
Education
AI has deeply penetrated U.S. educational practice. A large share of teachers, from elementary school through higher education, use artificial intelligence tools for class design, administrative management, lesson planning, performance analysis, and the development of teaching resources. In the university context, an estimated 90% of students incorporate AI into their learning.
Health and Wellness
In the U.S. healthcare system, institutions use AI to support diagnostics (especially medical imaging), refine analyses, process massive datasets, and automate administrative tasks. For patients, there are everyday applications: health chatbots, online triage systems, and wearables that monitor physical activity and vital signs.
The Challenges
While these applications are positive, there are also dangerous uses of AI: the development of lethal autonomous weapons, cyberattacks, manipulation of information, and violations of privacy.
The Need for Ethical and Legal Regulations
Given these realities, legal regulations and ethical guidelines for the use of artificial intelligence need to be established, from the local level to the international. Although binding international legislation would be ideal, for countries such as the United States, the leading developer and user of AI, a treaty of such scope is not very plausible. In any case, it would be only one piece of a regulatory machinery built up from the local and national levels.
Examples of Current Regulation in the United States
Regulation of Autonomous Vehicles
There are specific rules for robotaxis. When one of these vehicles is involved in an accident, the National Highway Traffic Safety Administration (NHTSA) and the Department of Transportation require immediate notification to a nationwide registry. States such as California, Arizona, Texas, and New York have legal frameworks governing permits, terms of service, and liability for robotaxi accidents; the company that operates the vehicles is held responsible in case of a crash. In California, there is a protocol for reporting incidents directly to the state agency. These rules also extend to insurers: policies for autonomous vehicles are expensive, which pushes companies to avoid violations. And because the vehicles are driven by AI, the rules about what is allowed and what is prohibited can be encoded directly into the machines.
Education
Guidance and state regulations exist in the U.S. education arena. In 2025 the Department of Education issued guidance on AI use that calls for respecting privacy, civil rights, and academic-integrity standards, and many states have issued official guidance of their own. Unlike in many countries, U.S. school districts are independent entities that develop their own policies in coordination with state and federal law. California universities operate on the same principle: each defines its own regulatory framework. There is, however, a national consensus that rules against plagiarism extend to the use of AI, and institutions have widely adopted tools designed to detect text generated by artificial intelligence.
Health
Although there is no single legal standard specific to AI in healthcare, a regulatory patchwork already applies to it. The Health Insurance Portability and Accountability Act (HIPAA) protects patients’ medical data and requires the entities that handle it (hospitals, insurers, clinics) to comply with strict privacy and security rules.
The Regulatory Path of AI
The regulatory path of AI is just beginning. Regulation should be developed sector by sector (education, health, finance) and from the bottom up: local, state, national, and international. A universal supranational law regulating AI seems unthinkable, especially because many legal frameworks, particularly in the U.S., are complex. The U.S. controls the models, the hardware (chips from companies such as NVIDIA), and the infrastructure (Google Cloud, AWS) that make AI possible. Any viable regulatory framework must therefore start in the U.S. and then, at another level, mesh with non-binding agreements at the international scale.
The Role of the Church in AI Regulation
The Church has been a pioneer in developing, promoting, and applying an ethical framework for the use of artificial intelligence. Notable documents include “Antiqua et Nova,” a note on the relationship between artificial intelligence and human intelligence issued by the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education on January 14, 2025. Also noteworthy are the interventions of the pontiffs on AI, including Pope Francis’ message for the 2024 World Day of Peace and the speeches of Pope Leo XIV on the subject.
These recent interventions are grounded in the principles of the Social Doctrine of the Church, which should be applied to the use of artificial intelligence, particularly with respect to human dignity, the common good, and solidarity. These ethical norms could also be developed and applied at the level of each ecclesiastical jurisdiction, especially in sectors where the Church exercises its functions, such as Catholic schools, hospitals, seminaries, and formation centers. Some local churches, such as the Dioceses of Biloxi (Mississippi) and Orange (California) and the Maryland Catholic Conference, already have guidelines in this regard.
Towards a Multisectoral and Multilevel Legal Framework
At the international level, the Holy See can contribute significantly to constructing a normative framework on artificial intelligence within the United Nations. Such a framework should take the form of a non-binding agreement, since a binding treaty would face significant obstacles: its incompatibility with legal systems such as that of the U.S., and the need for differentiated responses by sector and jurisdictional level. It therefore seems more viable and effective to promote one or several non-binding agreements within the UN to guide the regulation of AI on a global scale, while respecting the regulatory autonomy of each country.