Empowering Hong Kong Firms to Prioritize AI Safety

Hong Kong Firms Must Take Initiative on Safe AI Practices

As artificial intelligence (AI) develops rapidly, an increasing number of organizations are leveraging this technology to streamline operations, improve quality, and enhance competitiveness. However, AI also poses significant security risks, not least to personal data privacy, that cannot be ignored.

Organizations developing or using AI systems often collect, use, and process personal data, leading to privacy risks such as excessive collection, unauthorized use, and breaches of personal data.

The Global Context of AI Security

The importance of AI security has become a common theme in international declarations and resolutions adopted in recent years. In 2023, 28 countries, including China and the United States, signed the Bletchley Declaration at the AI Safety Summit in the UK. This declaration stated that misuse of advanced AI models could lead to catastrophic harm and emphasized the urgent need to address these risks.

In 2024, the United Nations General Assembly adopted an international resolution on AI, promoting “safe, secure, and trustworthy” AI systems. At the AI Action Summit held in Paris in February 2025, more than 60 countries, including China, signed a statement emphasizing that leveraging the benefits of AI for economic and societal growth depends on advancing AI safety and trust.

China’s Approach to AI Development and Security

In matters of technological and industrial innovation, China has emphasized both development and security. In 2023, the Chinese mainland launched the Global AI Governance Initiative, proposing principles such as a people-centered approach and developing AI for good. More recently, in April 2025, during a study session of the Political Bureau, President Xi Jinping remarked that while AI presents unprecedented development opportunities, it also brings risks and challenges not seen before.

Risks Highlighted by Recent Incidents

These risks and challenges are as unprecedented as they are real. Around two years ago, Samsung banned its employees from using ChatGPT amid concerns about leakage of sensitive internal information on such platforms. The ban was reportedly prompted by an engineer’s accidental leak of sensitive internal source code, underscoring the importance of protecting trade secrets in the age of AI.

Around the same time, OpenAI, the developer of ChatGPT, reported a major data leakage incident involving sensitive data, including users’ conversation headings, names, email addresses, and even parts of credit card numbers.

AI in the Workplace

As AI-powered chatbots gain popularity, they are increasingly used in workplaces for tasks such as preparing minutes, summarizing presentations, and creating promotional materials. However, organizations must recognize that while AI can automate workflows and boost productivity, it also poses risks such as the leakage of confidential information or customers’ personal data, excessive collection or improper use of data, and the production of inaccurate or biased information.

Organizations should be aware that, depending on how an AI tool and its servers handle uploaded content, data entered into such tools may be retained in a large database and used to train the underlying model without employees’ knowledge. This data may then be inadvertently regurgitated in responses to prompts from other users, including competitors or customers.
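One practical safeguard an organization might adopt against such leakage is to strip obvious personal data from prompts before they ever leave the corporate boundary. The sketch below is a minimal, hypothetical illustration of that idea: the regex patterns, the redact helper, and the sample prompt are all assumptions made for demonstration, and a production safeguard would need far broader coverage than a handful of patterns.

```python
import re

# Illustrative patterns only: real deployments would need broader coverage
# (names, addresses, account numbers) and likely a dedicated PII detector.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "HKID": re.compile(r"\b[A-Z]{1,2}\d{6}\(?[0-9A]\)?"),  # e.g. A123456(7)
    "PHONE": re.compile(r"\b\d{4}[ -]?\d{4}\b"),           # 8-digit HK numbers
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),         # crude card-number match
}

def redact(text: str) -> str:
    """Replace likely personal data with typed placeholders so the text
    can be submitted to an external tool without exposing the raw values."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = (
        "Summarize this complaint from Chan Tai-man, email "
        "chan.tm@example.com, tel 9123 4567, HKID A123456(7), "
        "about a delayed refund."
    )
    # Only the redacted version would be sent to an external chatbot.
    print(redact(prompt))
```

Crude pattern-matching of this kind is only a first line of defense; it illustrates the principle of data minimization rather than a complete solution.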

Compliance Checks and Findings

Given the privacy and security risks posed by AI, compliance checks were conducted on 60 organizations across various sectors. These checks aimed to understand whether organizations complied with the relevant requirements of the Personal Data (Privacy) Ordinance in the collection, use, and processing of personal data during the use of AI, and whether proper governance was in place.

Findings revealed that 80% of the organizations examined used AI in their day-to-day operations. Among these, half collected and/or used personal data through AI systems. However, not all had formulated AI-related policies; only about 63% of those that collected and/or used personal data had such policies in place, indicating significant room for improvement.

The Need for AI-Related Policies

The importance of having an AI-related policy cannot be overstated. Organizations are recommended to formulate internal AI guidelines to balance business efficacy and data privacy protection. To assist in this effort, a “Checklist on Guidelines for the Use of Generative AI by Employees” was published to help organizations develop appropriate policies while complying with the requirements of the Personal Data (Privacy) Ordinance.

The guidelines recommend that an organization’s internal AI policy include information on the permissible use of generative AI, protection of personal data privacy, lawful and ethical use, bias prevention, data security, and consequences for violations.
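To give a rough sense of how such a policy might be operationalized, the sketch below encodes the six recommended elements as a simple, machine-checkable structure. The field names, default values, and the is_permitted gate are hypothetical, not part of the published checklist; they merely show one way a written policy could be wired into a prompt-submission workflow.

```python
from dataclasses import dataclass, field

# Hypothetical encoding: fields mirror the six elements recommended by the
# guidelines above, but the names and defaults are illustrative only.
@dataclass
class GenAIPolicy:
    # Permissible use of generative AI
    permissible_uses: list[str] = field(default_factory=lambda: [
        "preparing minutes",
        "summarizing presentations",
        "creating promotional materials",
    ])
    # Protection of personal data privacy
    personal_data_allowed_in_prompts: bool = False
    # Lawful and ethical use: only vetted tools on approved terms
    approved_tools: list[str] = field(default_factory=lambda: [
        "enterprise-chatbot",
    ])
    # Bias prevention: outputs must be reviewed by a human before use
    human_review_required: bool = True
    # Data security: highest classification that may appear in a prompt
    data_classification_ceiling: str = "internal"
    # Consequences for violations
    violation_consequences: str = "report to data protection officer; disciplinary review"

def is_permitted(policy: GenAIPolicy, task: str, tool: str,
                 contains_personal_data: bool) -> bool:
    """Toy gate an organization might wire into its prompt-submission workflow."""
    if contains_personal_data and not policy.personal_data_allowed_in_prompts:
        return False
    return task in policy.permissible_uses and tool in policy.approved_tools

if __name__ == "__main__":
    policy = GenAIPolicy()
    print(is_permitted(policy, "preparing minutes", "enterprise-chatbot",
                       contains_personal_data=False))  # True
    print(is_permitted(policy, "preparing minutes", "public-chatbot",
                       contains_personal_data=True))   # False
```

Encoding the policy as data rather than prose makes it auditable and enforceable in tooling, though the written guidelines remain the authoritative source.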

Conclusion

Organizations are responsible for ensuring that the development or use of AI is beneficial for business while also being lawful and safe. While leveraging AI to sharpen their competitive edge, organizations should not allow the technology to run wild or become their new master.
