Enterprise AI Safeguards: Defining Safety and Enhancing Trust, Governance in the Age of Generative AI
Apr 8, 2024, 2:20 PM – 3:00 PM
Salon Impérial
Presented by
Philippe Beraud
Microsoft
Chief Technology & Security Advisor |...
Marten Mickos
HackerOne
CEO
Ludovic Peran
Google
Google Research / Responsible AI -...
Yohann Ralle
Direction générale des entreprises
National Coordinator for Artificial...
Sasha Rubel
Amazon Web Services (AWS)
Head of Public Policy, Artificial...
Shawn Ten
Open Government Products
Head of Artificial Intelligence Policy
Description
Establishing Robust AI Safety Frameworks: Delve into the methodologies and standards for creating safe enterprise AI systems. Explore the importance of developing AI with built-in safeguards to prevent unintended consequences, ensuring that AI applications perform within ethical boundaries and comply with regulatory requirements. Discuss the role of transparency, accountability, and ethical considerations in building trust between AI systems and their human users.
Enhancing Trust Through Ethical AI Practices: Discuss how implementing ethical AI practices addresses the critical need for trust in AI systems. Highlight how principles of fairness, inclusivity, and privacy protection can be embedded into AI development processes. Explore case studies and best practices where ethical AI governance has enhanced trust among stakeholders, including customers, employees, and the wider community.
Navigating Governance in the Generative AI Era: Examine the challenges and opportunities presented by generative AI for enterprise governance. Discuss the importance of cross-functional collaboration between technical, legal, and policy teams to adapt existing governance frameworks to generative AI's unique capabilities and risks. Explore how continuous monitoring, risk assessment, and feedback loops can be integrated into governance models to respond dynamically to the evolving AI landscape.