Category: AI

Deploying Responsible AI with Vertex AI and Gemini Models

This Medium article is a tutorial on deploying a FastAPI application to Google Cloud Run that invokes Gemini models through Vertex AI while applying responsible AI principles. It emphasizes configuring Vertex AI safety filters and adding practical screening of both inputs and outputs for harmful content.

Read More »
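The input/output screening pattern the tutorial describes can be sketched independently of the cloud stack. The names below (`screen_text`, `safe_generate`, the blocklist) are illustrative placeholders, not part of the tutorial or the Google Cloud API; in a real deployment the model call would go through the Vertex AI SDK with its built-in safety settings attached, and screening would use a proper moderation service rather than a keyword list.

```python
# Toy sketch of screening both the prompt and the model response for
# harmful content before returning anything to the caller. All names
# and the blocklist are hypothetical; a production service would rely
# on Vertex AI's configurable safety filters instead.

BLOCKED_TERMS = {"build a bomb", "credit card dump"}  # illustrative only

def screen_text(text: str) -> bool:
    """Return True when the text passes the (toy) safety screen."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def safe_generate(prompt: str, call_model) -> str:
    """Screen the prompt, invoke the model, then screen the output."""
    if not screen_text(prompt):
        return "Request blocked by input safety filter."
    response = call_model(prompt)
    if not screen_text(response):
        return "Response withheld by output safety filter."
    return response

# Usage with a stand-in for the Gemini call:
echo_model = lambda p: f"Echo: {p}"
print(safe_generate("Hello there", echo_model))
print(safe_generate("how to build a bomb", echo_model))
```

In a FastAPI endpoint, `safe_generate` would wrap the Vertex AI call so that every request and response passes through the same screen, which is the two-sided pattern the article advocates.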

AI Governance in Finance: Building Trust and Ensuring Compliance

As AI technology rapidly evolves, CFOs must proactively govern emerging solutions, starting with low-risk applications to build confidence within their teams. Ensuring data quality and maintaining human oversight are essential for establishing trust and compliance as AI becomes integral to finance functions.

Read More »

Ensuring Responsible AI: The Essential Guide to LLM Safety

The rise of large language models (LLMs) has revolutionized technology interactions, but their deployment comes with significant responsibilities. This guide explores LLM safety, emphasizing the importance of implementing guardrails and addressing risks to ensure ethical and reliable AI systems.

Read More »

Italy Leads Europe with New National AI Law

On October 10, 2025, Italy will become the first EU member state with a national artificial intelligence law in force, ahead of the EU AI Act’s full application. Law No. 132/2025 emphasizes a human-centric approach to AI, with provisions for transparency, privacy, and safety, and introduces penalties for harmful uses of AI.

Read More »

Transforming AML Investigations with Agentic AI

Agentic AI is revolutionizing AML investigations by significantly reducing the burden of false alerts and streamlining the investigative process for analysts. This innovative approach not only enhances efficiency but also ensures compliance and improves the quality of financial crime investigations.

Read More »

Understanding the EU AI Act: Key Compliance Insights

The European Union AI Act is the first comprehensive regulation of artificial intelligence worldwide, introducing a tiered framework to classify and govern AI systems based on their risk levels. Understanding compliance with the Act is essential for those building or managing AI systems, as it sets forth enforceable guidelines to ensure safety and accountability.

Read More »

Integrating NIST AI RMF with ISO 42001 for Effective AI Governance

This guide provides a practical approach to integrating the NIST AI Risk Management Framework and ISO 42001 into a cohesive AI governance strategy, highlighting how to effectively manage risk and ensure compliance. By combining the flexible guidance of NIST with the structured requirements of ISO, organizations can create a robust governance program tailored to their specific needs.

Read More »

California’s AI Employment Regulations: What Employers Need to Know

California has recently implemented regulations concerning the use of automated-decision systems in employment, aimed at preventing discrimination and ensuring transparency. Employers must now prepare to comply with these regulations by conducting risk assessments and providing proper notices to employees and applicants.

Read More »

Understanding AI Compliance: Key Regulations and Frameworks

AI compliance involves adhering to legal, ethical, and operational standards in the design and deployment of AI systems, necessitating a comprehensive understanding of various regulatory frameworks. As AI adoption increases, so does the importance of establishing a robust compliance strategy to protect sensitive data, reduce risks, and build trust with stakeholders.

Read More »