Here is something I keep seeing: a company with 50 employees, half of them already using ChatGPT or Copilot at work, and zero written policy about any of it.
That is not a hypothetical. 58% of small businesses now use generative AI, up from 40% a year earlier (U.S. Chamber of Commerce, 2025). Some reports put it even higher, with nearly 89% of small businesses leveraging AI in some capacity (ICIC/Intuit, 2025). The adoption is real, and it is happening fast.
The policy? Not so much. Only about 23% of SMBs have a formal AI policy in place (HR Partner, 2026). That gap between usage and governance is what keeps me up at night, not because people are doing something wrong, but because they are doing it in the dark.
The Rules-First Trap
When companies do get around to writing a policy, the instinct is usually the same: start with a list of “don’ts.” Do not paste client data into ChatGPT. Do not use AI-generated content without review. Do not use unapproved tools.
I get it. The instinct makes sense. You see risk, you write a rule.
The problem is that rules without context create a compliance culture, not a capable one. People learn what they cannot do, but they never learn what good looks like. They get cautious instead of competent. And the employees who were already using AI before the policy landed? They keep doing what they were doing, because the rules feel disconnected from their actual work. That is shadow AI in a nutshell: people using unapproved tools because the official guidance does not match reality.
A list of “don’ts” without the “why” is not a policy. It is a warning label with no explanation.
Values First: What It Actually Means
Starting with values means your AI policy reflects who your organization already is, not who some compliance template says you should be.
Here is the practical version. Before you write a single rule, your leadership team answers three questions:
- What do we believe about how our people should work?
- What do we owe our customers when it comes to their data and trust?
- What does responsible innovation look like for us, specifically?
The rules come after those answers. When they do, they carry weight, because everyone understands the “why” behind the “what.” The NIST AI Risk Management Framework follows this same logic. It starts with governance and mapping before moving to measurement and management, which is a way of saying “know your values and your risks before you start writing restrictions.”
PwC flagged 2026 as the year responsible AI moves from talk to traction. That traction only holds if the foundation is real, not borrowed from a generic template you found online.
The Practical Difference
Let me give you a scenario. A customer service rep at a 40-person manufacturing company in Northeast Ohio uses ChatGPT to draft a response to a frustrated client.
Rules-first approach: The rep checks the policy, sees “do not input customer information into external AI tools,” and stops. They write the email manually, it takes three times as long, and they resent the policy. Next time, they skip the check entirely.
Values-first approach: The rep knows the company values customer trust and data privacy. The policy says, “Use AI tools to improve your work, but protect customer-identifiable information by removing names, account numbers, and specifics before pasting anything in.” The rep strips the identifying details, gets a solid draft in 30 seconds, personalizes it, and sends a better email faster.
Same employee, same tool, completely different outcome. Structure creates safety, and a values-grounded policy gives you both. The question is not whether your team uses AI. It is whether they know how to use it in a way that reflects your standards.
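What does “strip the identifying details” look like in practice? Here is a minimal sketch of a pre-paste scrubbing helper, purely as an illustration: the patterns, the scrub function, and the sample message are all invented, and any real version would be tuned to the identifiers your own customers actually show up with.

```python
import re

# Illustrative patterns only; a real list depends on your customer data.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ACCOUNT": re.compile(r"\b(?:acct|account)\s*#?\s*\d+\b", re.IGNORECASE),
}

def scrub(text: str, known_names: list[str]) -> str:
    """Replace obvious identifiers with placeholders before drafting with AI."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    for name in known_names:
        text = re.sub(re.escape(name), "[CUSTOMER]", text, flags=re.IGNORECASE)
    return text

if __name__ == "__main__":
    raw = ("Jane Doe (jane.doe@example.com, account #448291) is upset that "
           "her order shipped late. Call her at 216-555-0123 before Friday.")
    print(scrub(raw, known_names=["Jane Doe"]))
    # [CUSTOMER] ([EMAIL], [ACCOUNT]) is upset that her order shipped late.
    # Call her at [PHONE] before Friday.
```

A script like this does not replace judgment, and it will miss identifiers it has never seen. The point is the habit the policy is teaching: sanitize first, then let the tool help.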
How to Start: Three Steps
Step one: Run a values alignment session with your leadership team.
Take your existing company values, the ones on your wall or your website, and map them to AI use. If you value transparency, that means disclosing when AI assisted in client-facing work. If you value employee development, that means training people to use AI well instead of banning it. This is a two-hour conversation, not a six-month project.
Step two: Audit what is already happening.
You cannot write a useful policy without knowing what tools people are using, what data is flowing through them, and where the real risks live. At the enterprise level, 80% of C-suite executives report having a dedicated AI risk function (IBM Institute for Business Value, 2024). Your version of that might be one leader spending two hours on it, and that is fine. The audit matters more than the org chart.
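If it helps to make the audit concrete, the output can be as simple as a structured list: what tool, which team, what data, approved or not. A spreadsheet does the job; the sketch below just shows the fields worth capturing, with made-up entries.

```python
# Hypothetical inventory entries; tools, teams, and data types are examples.
tools_in_use = [
    {"tool": "ChatGPT (free tier)", "team": "Customer service",
     "data": "customer emails", "sensitive": True, "approved": False},
    {"tool": "GitHub Copilot", "team": "Engineering",
     "data": "source code", "sensitive": False, "approved": True},
    {"tool": "Otter.ai", "team": "Sales",
     "data": "call recordings", "sensitive": True, "approved": False},
]

# The gap to close first: unapproved tools already handling sensitive data.
for entry in tools_in_use:
    if entry["sensitive"] and not entry["approved"]:
        print(f"Review first: {entry['tool']} ({entry['team']}) handles {entry['data']}")
```

Whatever format you use, the filter at the end is the audit’s real product: a short list of where usage and governance have drifted apart.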
Step three: Write your policy in plain language that connects every rule to a value.
Not “Do not use unapproved AI tools” but “We support the use of AI tools that meet our security standards (list here), because we value both innovation and the trust our clients place in us.” Every rule should be traceable to a value. If it is not, either the rule is unnecessary or you are missing a value.
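One way to hold yourself to that traceability test is to treat the draft policy like structured data: every rule carries the value it answers to, and anything without one gets flagged. A sketch, with invented values and rules:

```python
# Invented values and rules, for illustration; the structure is the point.
values = {"client trust", "employee development", "transparency"}

policy_rules = [
    {"rule": "Use only AI tools on the approved list when client data is involved",
     "because": "client trust"},
    {"rule": "Disclose when AI assisted in client-facing work",
     "because": "transparency"},
    {"rule": "Never use AI for internal brainstorming",
     "because": None},  # no value attached yet
]

for entry in policy_rules:
    if entry["because"] not in values:
        print(f"Revisit this rule or name its value: {entry['rule']}")
```

A rule that keeps coming up with no value behind it is the policy equivalent of dead code: either delete it or connect it to something you actually believe.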
The Difference Between Followed and Filed
The question is not whether your company needs an AI policy. It does. The question is whether that policy will be a document people actually follow, or a PDF that lives in a SharePoint folder and gets ignored after week one.
Values-first policies get followed because they make sense, they meet people where they are, and they treat employees like adults who can make good decisions when they understand the principles. This is not about slowing down AI adoption. It is about making sure the speed matches the steering. Your team is already using AI. Give them a compass, not a cage.