AI Governance Playbook for Business Risk Management
AI Governance Process and Risk Assessment
During Ward and Smith's annual In-House Counsel seminar, Mayukh Sircar, a cybersecurity and technology attorney, provided comprehensive guidance on the strategic role of Artificial Intelligence (AI) in modern business and its associated risks. Sircar emphasized that the first step in the AI governance process is conducting a comprehensive inventory of existing AI systems and use cases. He also stressed the importance of proportionate governance: when two tools achieve the same goal, choosing the one that requires less data is often the better approach.
Managing AI Tools by Risk Level
AI tools should be managed differently based on their risk level. Tools that affect legal and/or material rights, such as resume screening or loan application scoring, are classified as high-risk and require strict oversight and formal impact assessments; Sircar explained that these tools may even be prohibited in certain cases. Tools used for tasks such as drafting external marketing copy or generating internal analytics reports pose a moderate level of risk, and their outputs should always be human-reviewed. Conversely, low-risk uses, such as brainstorming or summarizing public articles, can often be governed by existing general-use policies.
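As a purely illustrative sketch (not part of Sircar's presentation), the three-tier scheme above could be encoded in an internal AI-tool inventory roughly as follows; all function names and example use cases here are hypothetical.

```python
# Hypothetical illustration of the three risk tiers described above.
# Tier names, example use cases, and governance actions mirror the article;
# the data structure and lookup logic are assumptions, not a prescribed method.

RISK_TIERS = {
    "high": {
        "examples": {"resume screening", "loan application scoring"},
        "governance": "strict oversight, formal impact assessment, possible prohibition",
    },
    "moderate": {
        "examples": {"external marketing copy", "internal analytics reports"},
        "governance": "mandatory human review of all outputs",
    },
    "low": {
        "examples": {"brainstorming", "summarizing public articles"},
        "governance": "covered by existing general-use policies",
    },
}

def required_governance(use_case: str) -> str:
    """Return the governance requirement for a known use case."""
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["governance"]
    # Unlisted use cases are routed for classification rather than assumed low-risk.
    return "unclassified: route to governance team for risk assessment"

print(required_governance("resume screening"))
```

A default of "unclassified" (rather than silently treating unknown tools as low-risk) reflects the inventory-first posture described above: every new use case gets assessed before it inherits a tier.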
*Source: Ward and Smith, P.A. (2026-03-24)*



