2. Data usage limitations

Your data is your most valuable asset, yet you may not know how it’s used once it leaves your control. Many AI vendors want to leverage client data to train and refine their models. Unless your third-party contracts explicitly restrict this, sensitive information could end up in systems you don’t govern, or even embedded in a model that benefits your competitors. The lack of transparency around AI use cases makes it nearly impossible to know whether your data is being repurposed in ways you never agreed to.

Action to take

Include explicit language stating that your data may not be used to train external models, be incorporated into vendor offerings or be shared with other clients. Require that all data handling comply with the strictest applicable privacy laws (GDPR, HIPAA, CCPA, etc.) and specify that these obligations survive termination of the contract.

3. Human oversight requirements

AI can accelerate workflows and reduce costs, but it also introduces risks that can’t be left unchecked. Human oversight ensures that automated outputs are interpreted in context, reviewed for bias and corrected when the system goes astray. Without it, organizations risk over-relying on AI’s efficiency while overlooking its blind spots. Regulatory frameworks are moving in the same direction: under the EU AI Act, for example, high-risk AI systems must have documented human oversight mechanisms.