Vikrama.

How do we implement AI without risking data?

Security is an architecture decision: scope, permissions, logging, and controlled data access. Treat AI like a privileged system, not a plugin.

Start with a data boundary

Classify data (public, internal, confidential) and decide what can flow into prompts and what cannot. Use redaction and allow-lists for high-risk content.

This isn't a security review. It's a 2-hour exercise with your team. Draw the line before you write any code.
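The boundary from that exercise can be enforced mechanically. Here is a minimal sketch, assuming a simple three-level taxonomy and illustrative redaction patterns (the field names, levels, and regexes are examples, not a complete policy):

```python
import re

# Assumed taxonomy: confidential data never flows into prompts.
ALLOWED_LEVELS = {"public", "internal"}

# Illustrative redaction patterns for high-risk content (not exhaustive).
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace high-risk substrings with placeholders before prompting."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

def build_prompt(fields: dict[str, tuple[str, str]]) -> str:
    """fields maps name -> (classification, value); only allowed levels pass."""
    safe = {
        name: redact(value)
        for name, (level, value) in fields.items()
        if level in ALLOWED_LEVELS
    }
    return "\n".join(f"{k}: {v}" for k, v in safe.items())
```

The point is that the allow-list and redaction run before any model call, so a misclassified or sensitive field is dropped or masked by default rather than by reviewer vigilance.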

Prefer retrieval over training

For internal documents, use retrieval (RAG) rather than fine-tuning on sensitive corpora. Keep audit logs of retrieval and model outputs.

RAG means your data stays in your infrastructure. The model never sees it during training, only at query time, under your controls.
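A sketch of that shape, with the audit trail built in. The in-memory store and keyword scoring here are stand-ins for a real vector index and embedding similarity; the audit fields (timestamp, hashed query, document IDs) are one reasonable choice, not a standard:

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("rag.audit")

# Toy document store standing in for a real vector index.
DOCUMENTS = {
    "doc-1": "Expense reports are due by the 5th of each month.",
    "doc-2": "VPN access requires manager approval.",
}

def retrieve(query: str, top_k: int = 1) -> list[tuple[str, str]]:
    """Naive keyword overlap as a stand-in for embedding similarity."""
    scored = sorted(
        DOCUMENTS.items(),
        key=lambda kv: sum(w in kv[1].lower() for w in query.lower().split()),
        reverse=True,
    )
    hits = scored[:top_k]
    # Audit: record when retrieval happened and which doc IDs were surfaced.
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "query_hash": hashlib.sha256(query.encode()).hexdigest()[:12],
        "doc_ids": [doc_id for doc_id, _ in hits],
    }))
    return hits
```

Hashing the query keeps the audit log itself from becoming a second copy of sensitive content; log the model's outputs through the same channel.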

Human-in-the-loop for high-stakes steps

Drafts, recommendations, and checks can be automated; final approvals remain human for critical workflows.

The pattern that works: AI proposes, human disposes. For low-risk steps, you can gradually remove the human review. For anything involving customer data, financial decisions, or compliance, keep the human in the loop permanently.
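That gate is small to express in code. A minimal sketch, assuming a two-level risk label decided at design time (the `Risk` enum and action strings are hypothetical):

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = "low"
    HIGH = "high"  # customer data, financial decisions, compliance

@dataclass
class Proposal:
    action: str
    risk: Risk

def execute(proposal: Proposal, human_approved: bool = False) -> str:
    """AI proposes, human disposes: high-risk actions need explicit approval."""
    if proposal.risk is Risk.HIGH and not human_approved:
        return f"BLOCKED: '{proposal.action}' awaits human approval"
    return f"EXECUTED: {proposal.action}"
```

The approval flag defaults to false, so forgetting to wire up the review step fails closed rather than open.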


Frequently asked questions

Can we use AI on customer data?

Yes, with explicit consent, strict access controls, and a clear retention policy. Use anonymisation where possible.

Should we self-host?

Sometimes. Self-hosting can simplify compliance (data residency, auditability) but increases operational burden. Choose based on your risk profile and ops capacity.
