Where in the Loop? Testing AI Across 120 Compliance Tasks to Find Out Where Humans Are Most Needed

As compliance teams experiment with AI for everything from risk assessments to policy interpretation, a practical question emerges: Which tasks can be automated reliably, and which still require human judgment? Steph Holmes, director of compliance and ethics strategy at EQS Group, dives into her organization’s research on six AI models, finding that AI excels at rule-driven work but struggles in the gray zone where data meets intent and culture intersects with language. The findings suggest oversight should be strategic rather than universal, but there is no doubt the loop isn’t complete without humans.
