As compliance teams experiment with AI for everything from risk assessments to policy interpretation, a practical question emerges: Which tasks can be automated reliably, and which still require human judgment? Steph Holmes, director of compliance and ethics strategy at EQS Group, dives into her organization’s research on six AI models, finding that AI excels at rule-driven work but struggles in the gray zone where data meets intent and culture intersects with language. The findings suggest oversight should be strategic rather than universal, but there’s no doubt the loop isn’t complete without humans.

Read more here