AI governance for Swiss SMEs: where to start

How shadow AI usage exposes your data and what you can do to protect your business
Swiss companies are adopting AI tools such as Copilot without clear frameworks, creating risks for compliance, data protection, and competitiveness. This article explains the Swiss and EU regulatory context (nFADP, FINMA, AI Act), highlights practical risks, and shows how strong data governance and AI governance frameworks protect businesses.
The Swiss Context Comes First
Switzerland signed the Council of Europe Convention on AI on March 27, 2025 (ratification pending), and continues to favor a sector-based regulatory approach. In practice, the use of AI in Switzerland is already subject to the new Federal Act on Data Protection (nFADP). The Federal Data Protection and Information Commissioner (FDPIC) has made it clear that the principles of transparency, proportionality, information, and human oversight apply to all AI-assisted processing, regardless of technology.
In the financial sector, FINMA requires governance that is adapted to the materiality of AI use cases. This means that institutions must maintain a complete inventory of their AI applications, classify risks according to their impact and criticality, ensure the quality and integrity of the data used, and keep all technical documentation up to date. They are also expected to implement effective monitoring mechanisms and to carry out independent reviews whenever AI systems are considered material.
For B2B providers, particularly in software and managed services, having a clear internal AI framework is becoming a factor of trust. Large enterprises in banking and pharma already demand contractual guarantees regarding how AI is used by their suppliers.
👉 keyIT supports organizations in meeting these expectations with structured data protection and compliance frameworks tailored to their sector.
European Union: Concrete Impacts for Swiss Companies
Many Swiss SMEs sell to clients in the EU or operate within European value chains. The AI Act entered into force on August 1, 2024, with phased implementation:
→ From February 2025: prohibition of certain unacceptable-risk practices (e.g., behavioral manipulation, social scoring) and mandatory training of key users to build an “AI literacy” culture.
→ From August 2025: reinforced obligations for general-purpose AI (GPAI) models.
→ From August 2026: full application of the framework for high-risk AI systems.
How Data Governance Supports AI Governance
The fundamental principles of data governance (knowing which data is processed, by whom, for what purpose, and under which conditions) are essential to control the use of artificial intelligence.
For SMEs, outsourcing part of this oversight through a service such as DPOaaS makes it possible to establish continuous controls without overloading internal resources.
👉 keyIT also helps strengthen your processes through IT compliance audits.
Two Common Examples
Copilot Activated Without Framework
Risks: exposure of sensitive data, publication of erroneous AI-generated output, vendor lock-in.
Fixes: short usage charter, DLP/MIP policies, authorized/forbidden use list, logging of sensitive prompts, 45-minute training.
👉 Discover how to manage these risks with our Microsoft 365 Copilot offer and our Secured GPT solution.
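To make "logging of sensitive prompts" concrete, here is a minimal Python sketch of an audit-log filter that flags obviously sensitive content before a prompt is recorded. The patterns, function name, and log fields are all illustrative assumptions; in practice, DLP/MIP policies are configured in Microsoft Purview rather than written as custom code.

```python
import re
from datetime import datetime, timezone

# Illustrative detection patterns only; real DLP policies cover far more categories.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "swiss_iban": re.compile(r"\bCH\d{2}[ ]?(\d{4}[ ]?){4}\d\b"),
}

def log_prompt(user: str, prompt: str) -> dict:
    """Build an audit-log entry, flagging sensitive categories found in the prompt."""
    flags = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "flags": flags,        # which sensitive categories were detected
        "allowed": not flags,  # block or escalate when any flag is raised
    }

entry = log_prompt("alice", "Summarize the contract for client bob@example.ch")
print(entry["flags"], entry["allowed"])  # the email address triggers a flag
```

A real deployment would also cover client names, health data, and internal project identifiers, and would route flagged prompts to a review queue rather than simply blocking them.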
Client Chatbot on Your Website
Risks: inaccurate responses, unlawful data collection, insufficient security.
Fixes: validated knowledge base, transparency notice, disabled training on conversations, logging and purging, legal review.
👉 keyIT provides tailored services, including cybersecurity and business & data analytics to secure these use cases.
Where to Start
→ Quickly list the AI use cases in your company.
→ Define simple rules for use and human oversight.
→ Protect data and bind suppliers with appropriate contractual clauses.
→ Train teams and measure results.
To structure this process, you can rely on our Audit Flash 360, which identifies compliance gaps quickly and helps prioritize corrective actions.
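The first step above, listing your AI use cases, can start as a simple structured inventory, which also lays the groundwork for the FINMA-style risk classification mentioned earlier. A minimal Python sketch; the fields and risk levels are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    owner: str              # accountable team or person
    data_categories: list   # e.g. ["client data", "internal docs"]
    risk: str               # "low" / "medium" / "high" per your own classification
    human_oversight: bool   # is a human reviewing the output?

# Example inventory entries (hypothetical)
inventory = [
    AIUseCase("Copilot drafting", "IT", ["internal docs"], "medium", True),
    AIUseCase("Client chatbot", "Marketing", ["client data"], "high", True),
]

# High-risk use cases are the first candidates for independent review
high_risk = [u.name for u in inventory if u.risk == "high"]
print(high_risk)  # ['Client chatbot']
```

Even a spreadsheet with these same columns is enough to start; the point is that every AI use case has a named owner, a data classification, and a risk level before rules and training are rolled out.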
