Empowering Hong Kong Firms to Prioritize AI Safety

Hong Kong Firms Must Take Initiative on Safe AI Practices

As artificial intelligence (AI) develops rapidly, an increasing number of organizations are leveraging the technology to streamline operations, improve quality, and enhance competitiveness. However, AI also poses significant security risks that cannot be ignored, not least to personal data privacy.

Organizations that develop or use AI systems often collect, use, and process personal data, creating privacy risks such as excessive collection, unauthorized use, and data breaches.

The Global Context of AI Security

The importance of AI security has become a common theme in international declarations and resolutions adopted in recent years. In 2023, 28 countries, including China and the United States, signed the Bletchley Declaration at the AI Safety Summit in the UK. This declaration stated that misuse of advanced AI models could lead to catastrophic harm and emphasized the urgent need to address these risks.

In 2024, the United Nations General Assembly adopted an international resolution on AI, promoting “safe, secure, and trustworthy” AI systems. At the AI Action Summit held in Paris in early 2025, more than 60 countries, including China, signed a statement emphasizing that leveraging the benefits of AI for economic and societal growth depends on advancing AI safety and trust.

China’s Approach to AI Development and Security

In technological and industrial innovation, China has emphasized development and security in equal measure. In 2023, the Chinese mainland launched the Global AI Governance Initiative, proposing principles such as a people-centered approach and developing AI for good. More recently, in April 2025, during a study session of the Political Bureau, President Xi Jinping remarked that while AI presents unprecedented development opportunities, it also brings risks and challenges never seen before.

Risks Highlighted by Recent Incidents

These risks and challenges are as real as they are unprecedented. Around two years ago, Samsung banned its employees from using ChatGPT amid concerns that sensitive internal information could leak through such platforms. The ban was reportedly prompted by an engineer accidentally uploading internal source code, underscoring the importance of protecting trade secrets in the age of generative AI.

Around the same time, OpenAI, the developer of ChatGPT, disclosed a data leakage incident involving sensitive data, including users’ conversation titles, names, email addresses, and even partial credit card numbers.

AI in the Workplace

As AI-powered chatbots gain popularity, they are increasingly used in workplaces for tasks such as preparing minutes, summarizing presentations, and creating promotional materials. However, organizations must recognize that while AI can automate workflows and boost productivity, it also poses risks such as the leakage of confidential information or customers’ personal data, the excessive collection or improper use of data, and the generation of inaccurate or biased output.

Organizations should also be aware that, depending on the AI tool’s design and the accessibility of its servers, uploaded data may enter a large, openly accessible dataset and be used to train the underlying model without employees’ knowledge. That data may then inadvertently be regurgitated in responses to prompts from competitors or customers.
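Before any policy question even arises, one practical safeguard is to keep personal data out of prompts altogether. The sketch below, in Python, illustrates the general idea of a pre-submission redaction filter. It is not drawn from any guideline cited in this article: the pattern list, the redact and safe_prompt helpers, and the send placeholder are all hypothetical, and the patterns are far too crude for production use.

```python
import re

# Illustrative patterns only: real deployments would need far more
# robust detection (e.g., a dedicated PII-detection library) plus
# human review, and the patterns below will miss many cases.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    # Hong Kong ID card numbers: one or two letters, six digits,
    # and a check digit (0-9 or A) in parentheses, e.g. A123456(7).
    "HKID": re.compile(r"\b[A-Za-z]{1,2}\d{6}\([0-9A]\)"),
    # Rough credit-card shape: 13-16 digits with optional separators.
    "CARD": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
}

def redact(text: str) -> str:
    """Replace likely personal data with labelled placeholders
    before the text leaves the organization."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def safe_prompt(user_text: str, send):
    """Gate every outbound prompt through the redaction filter.
    `send` stands in for whatever client actually calls the
    external AI service; it is a placeholder, not a real API."""
    return send(redact(user_text))

if __name__ == "__main__":
    sample = ("Client Chan (HKID A123456(7), chan@example.com) "
              "paid with card 4111 1111 1111 1111.")
    print(redact(sample))
```

Run directly, the script prints the sample sentence with the email address, HKID number, and card number replaced by labelled placeholders. A real deployment would pair such filtering with staff training and vendor due diligence rather than relying on pattern matching alone.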

Compliance Checks and Findings

Given the privacy and security risks posed by AI, compliance checks were conducted on 60 organizations across various sectors. The checks examined whether organizations complied with the relevant requirements of the Personal Data (Privacy) Ordinance when collecting, using, and processing personal data in the course of using AI, and whether proper governance was in place.

Findings revealed that 80% of the organizations examined, or 48 of the 60, used AI in their day-to-day operations. Among these, half (24 organizations) collected and/or used personal data through AI systems. However, not all had formulated AI-related policies: only about 63% of those that collected and/or used personal data, or roughly 15 of the 24, had such policies in place, indicating significant room for improvement.

The Need for AI-Related Policies

The importance of having an AI-related policy cannot be overstated. Organizations should formulate internal AI guidelines that balance business efficacy with data privacy protection. To assist in this effort, a “Checklist on Guidelines for the Use of Generative AI by Employees” was published to help organizations develop appropriate policies while complying with the requirements of the Personal Data (Privacy) Ordinance.

The guidelines recommend that an organization’s internal AI policy include information on the permissible use of generative AI, protection of personal data privacy, lawful and ethical use, bias prevention, data security, and consequences for violations.

Conclusion

Organizations are responsible for ensuring that the development or use of AI is beneficial for business while also being lawful and safe. While leveraging AI to sharpen their competitive edge, organizations should not allow the technology to run wild or become their new master.
