GDPR is the sharper blade for everyday AI use
The sharpest AI regulation affecting everyday operations at most companies isn't in the AI Act. It sits in GDPR and has been in force for eight years.
This isn't about the AI Act's high-risk category. That one carries higher fines. It's about everyday use: documents going into ChatGPT, Claude or Copilot, employees experimenting on their own, no clear framework around any of it.
The reason: nearly all major AI models run outside the EU. The moment personal data flows into those systems, a third-country transfer assessment is required. With a clean enterprise setup, a data processing agreement (DPA) and EU region, it's manageable. Without one (and that's the default case with shadow IT), it isn't.
The common line of reasoning under the AI Act goes: "We don't do CV screening, credit scoring, or AI-driven decision systems, so no high risk." That's usually correct.
What this misses: the AI Act classifies the system, GDPR classifies the data flowing into it. Most AI compliance issues in mid-sized companies today are GDPR issues with AI tools, not the other way around.
I've put a self-test on our website, openly accessible: AI Act classification and GDPR dimension in one pass. No signup, no email.
https://sixtyfour.solutions/eu-ai-act-selbsttest/
Anyone spotting gaps or sharper questions: comments welcome.