Empowering Hong Kong Firms to Prioritize AI Safety

Hong Kong Firms Must Take Initiative on Safe AI Practices

As artificial intelligence (AI) develops rapidly, an increasing number of organizations are leveraging this technology to streamline operations, improve quality, and enhance competitiveness. However, AI poses significant security risks, including personal data privacy risks, that cannot be ignored.

Organizations developing or using AI systems often collect, use, and process personal data, leading to privacy risks such as excessive collection, unauthorized use, and breaches of personal data.

The Global Context of AI Security

The importance of AI security has become a common theme in international declarations and resolutions adopted in recent years. In 2023, 28 countries, including China and the United States, signed the Bletchley Declaration at the AI Safety Summit in the UK. This declaration stated that misuse of advanced AI models could lead to catastrophic harm and emphasized the urgent need to address these risks.

In 2024, the United Nations General Assembly adopted an international resolution on AI, promoting “safe, secure, and trustworthy” AI systems. In 2025, at the AI Action Summit held in Paris, more than 60 countries, including China, signed a statement emphasizing that leveraging the benefits of AI for economic and societal growth depends on advancing AI safety and trust.

China’s Approach to AI Development and Security

On technological and industrial innovation, China has emphasized both development and security. In 2023, the Chinese mainland launched the Global AI Governance Initiative, proposing principles such as a people-centered approach and developing AI for good. More recently, in April 2025, during a study session of the Political Bureau, President Xi Jinping remarked that while AI presents unprecedented development opportunities, it also brings risks and challenges not seen before.

Risks Highlighted by Recent Incidents

These risks and challenges are as unprecedented as they are real. In 2023, Samsung banned its employees from using ChatGPT amid concerns about the leakage of sensitive internal information on such platforms. The ban was reportedly prompted by an engineer’s accidental leak of internal source code, underscoring the importance of protecting trade secrets in the AI era.

Around the same time, OpenAI, the developer of ChatGPT, disclosed a data leakage incident that exposed sensitive data, including users’ conversation titles, names, email addresses, and even partial credit card numbers.

AI in the Workplace

As AI-powered chatbots gain popularity, they are increasingly used in workplaces for tasks such as preparing minutes, summarizing presentations, and creating promotional materials. However, organizations must recognize that while AI can automate workflows and boost productivity, it also poses risks such as the leakage of confidential information or customers’ personal data, the excessive collection or improper use of data, and the generation of inaccurate or biased output.

Organizations should be aware that, depending on the AI tool’s algorithm and the accessibility of its servers, uploaded data may enter a large open database and be used to train the underlying model without employees’ knowledge. That data may later be inadvertently regurgitated in responses to prompts from competitors or customers.
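For illustration only, the sketch below shows one way an organization might screen employee prompts for obvious personal data before they ever reach an external chatbot. The regular-expression patterns and the send_to_ai_service stub are hypothetical placeholders, not a production-grade detector or any particular vendor’s API.

```python
# A minimal pre-submission redaction sketch, assuming employee prompts are
# routed through an internal gateway before reaching an external AI service.
# Patterns and function names are illustrative, not real APIs.
import re

# Illustrative patterns for common personal data in free text; a real
# deployment would use a dedicated PII-detection tool with broader coverage.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "HK_PHONE": re.compile(r"\b[2-9]\d{3}[ -]?\d{4}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each match with a labelled placeholder before submission."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

def send_to_ai_service(prompt: str) -> None:
    # Stand-in for the actual call to the external generative AI service.
    print(f"Submitting: {prompt}")

if __name__ == "__main__":
    raw = "Summarise the complaint from chan.tm@example.com, tel 9123 4567."
    send_to_ai_service(redact(raw))
    # Submitting: Summarise the complaint from [EMAIL REDACTED],
    # tel [HK_PHONE REDACTED].
```

Such a gateway does not eliminate the risk described above, but it reduces the chance that personal data is uploaded to, and later regurgitated by, a third-party model.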

Compliance Checks and Findings

Given the privacy and security risks posed by AI, compliance checks were conducted on 60 organizations across various sectors. These checks aimed to understand whether organizations complied with the relevant requirements of the Personal Data (Privacy) Ordinance in the collection, use, and processing of personal data during the use of AI, and whether proper governance was in place.

Findings revealed that 80% of the organizations examined used AI in their day-to-day operations. Among these, half collected and/or used personal data through AI systems. However, not all had formulated AI-related policies; only about 63% of those that collected and/or used personal data had such policies in place, indicating significant room for improvement.

The Need for AI-Related Policies

The importance of having an AI-related policy cannot be overstated. Organizations are recommended to formulate internal AI guidelines to balance business efficacy and data privacy protection. To assist in this effort, a “Checklist on Guidelines for the Use of Generative AI by Employees” was published to help organizations develop appropriate policies while complying with the requirements of the Personal Data (Privacy) Ordinance.

The guidelines recommend that an organization’s internal AI policy include information on the permissible use of generative AI, protection of personal data privacy, lawful and ethical use, bias prevention, data security, and consequences for violations.
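By way of illustration, these guideline topics can also be turned into an automated pre-flight check that vets a proposed AI task before it proceeds. The sketch below assumes a simple hypothetical policy object; the GenAIPolicy fields, the permitted uses, and the preflight function are illustrative inventions, not the contents of the published checklist.

```python
# A hypothetical sketch of an internal generative AI policy expressed as a
# machine-checkable pre-flight gate. Field names and values are invented.
from dataclasses import dataclass, field

@dataclass
class GenAIPolicy:
    # Permissible uses of generative AI (illustrative examples only).
    permitted_uses: set[str] = field(default_factory=lambda: {
        "drafting_minutes", "summarising_presentations", "marketing_copy",
    })
    # Protection of personal data privacy: forbid personal data by default.
    personal_data_allowed: bool = False
    # Data security: only tools vetted by the organization may be used.
    approved_tools: set[str] = field(default_factory=lambda: {"internal-chatbot"})

def preflight(policy: GenAIPolicy, tool: str, use_case: str,
              contains_personal_data: bool) -> list[str]:
    """Return the policy violations raised by a proposed AI task."""
    violations = []
    if tool not in policy.approved_tools:
        violations.append(f"tool '{tool}' is not approved (data security)")
    if use_case not in policy.permitted_uses:
        violations.append(f"'{use_case}' is not a permissible use")
    if contains_personal_data and not policy.personal_data_allowed:
        violations.append("prompt contains personal data (privacy protection)")
    return violations

if __name__ == "__main__":
    issues = preflight(GenAIPolicy(), tool="public-chatbot",
                       use_case="drafting_minutes", contains_personal_data=True)
    for issue in issues:
        print("VIOLATION:", issue)
```

Encoding a policy this way makes violations auditable, which in turn supports the consequences-for-violations element of the guidelines.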

Conclusion

Organizations are responsible for ensuring that the development or use of AI is beneficial for business while also being lawful and safe. While leveraging AI to sharpen their competitive edge, organizations should not allow the technology to run wild or become their new master.
