Crafting policies isn’t exactly the most exciting part of our jobs, but when it comes to using AI responsibly, having a solid, thoughtful policy in place is essential. The real challenge? Knowing when you’ve said enough… and when you need to provide more detail.
Over the last few months here at Sparkbox, there’s been a renewed energy around exploring new ways of thinking, creating, and solving problems. Much of this has been driven by the rise of AI, which is quickly becoming an integral part of our workflows across disciplines. It became clear that we needed a modernized, thoughtful policy that sets clear expectations for AI use while still encouraging ongoing exploration and innovation.
This AI Policy had a clear goal from the very start: to provide a document that is clear, complete, and, most importantly, usable. So, how did we get started?
The Starting Point
The good news? We weren’t starting from scratch.
Our previous policy provided a solid foundation, offering a useful outline of focus areas and a tone and voice to build from. While it didn’t offer the level of detail we need now, starting with a draft is always easier than working from a blank page.
Best-In-Class Review
To continue building on what we had, the next step was to turn to others in the industry. Many companies make their AI policies publicly available. I collected several from organizations whose values aligned with ours and compared their content to our own. This exposed critical content gaps and revealed additional ways to approach both structure and phrasing. I also explored best practices and reviewed policy templates, which uncovered a few essential content areas we hadn’t yet addressed, such as approved tool selection and employee education.
AI Evaluation
Finally, it seemed only right to use the AI tools themselves to identify possible improvements and points needing clarification. For example, ChatGPT suggested creating a usability checklist to make the policy easier for the team to apply. This feedback also helped me expand unclear sections and improve the structure by breaking up longer paragraphs. It was surprisingly helpful.
The result? A detailed document covering all the necessary guidelines, standards, approvals, and checkpoints for how Sparkbox uses AI responsibly.
So we were done, right?
Not exactly.
The Problem: Overwhelming Detail
Although the document we created was robust and full of the needed information, it was also hard to use. The UX side of me was disappointed: I had lost sight of the entire point. By creating a document that had ‘everything,’ I’d created something that wasn’t usable for anyone. So I took a step back and reframed the project around its real users: our clients and our internal team.
Through this lens, I realized that these two audiences did not need the same level of detail.
Our clients don’t need to see our Risk Assessment checklists and security procedures; they just need to know that we take privacy and responsibility seriously.
Our employees, on the other hand, need more practical guidance: which tools they can use, what safeguards to follow when using them, and how to use AI thoughtfully.
So we divided the policy into three focused documents, each customized for a specific audience.
The Result: Policies People Can Use
Each of the three documents caters to a specific audience and provides the level of detail that audience actually needs. Here is our breakdown:
Client-Facing Policy:
This document provides a clear, high-level overview of our responsible AI approach. It explains how we keep client information safe and where we stand ethically, without the overwhelming detail.
Employee Usage Guidelines:
This internal document provides detailed guidelines covering key topics like governance, privacy, tool use, and ethics. It sets our expectations for usage and even explains how to evaluate new features and tools responsibly, behaviors we consistently encourage.
This document also includes checklists for compliance, helping our team understand what to look for when applying our usage guidelines or completing a risk or bias assessment. These checklists take the policies we’ve created and turn them into practical, actionable steps we can follow in our everyday work.
Approved Tools:
This internal document provides a central location for learning about the tools approved for use by the Sparkbox team. Our leadership team vets these tools, and we provide training around them for increased efficiency and proper usage. This document can also be used to learn about different features and platforms, such as:
Required settings
Data and privacy considerations
Potential use cases
Known risks and limitations
Best practices
We have a team who will continue to assess these documents and update them as the guidelines and policies evolve or the approved list of tools expands. These continual touchpoints are vital because we can’t expect our team to use AI responsibly if they don’t know how to do it confidently.
Final Thought
If you’re writing (or rewriting) your own AI policy, here’s my best advice: keep the humans in mind. A well-structured document is great. But a policy that people can actually understand and use?
That’s where the real value lives.