
The Uncomfortable Unknowns of Ethical AI Use

06-05-24 Catherine Meade

It seems as though artificial intelligence (AI) technology has been integrated into every aspect of our lives lately. How can we gain the benefits of this powerful new technology without compromising the integrity of our applications?

As technology leaders, we’ve been slammed, willingly or not, into a landscape full of “artificial intelligence” (AI) and “machine learning” (ML). We’ve been tasked with making quick decisions that have broad impact–even though few of us are actually experts in all the nuances of AI technology use. This means we’re likely leaning heavily on what others in the industry are saying to determine how our teams interact with these tools. And that’s uncomfortable.

Extremism exists in the tech industry right now: if you’re against AI tools, you’re an uneducated Luddite keeping the team from progressing, but if you’re all in on new AI tools, you’re irresponsible and putting the team at risk. Neither perspective is helpful in determining how we use these tools to build a better web.

It is human nature to fear the unknown and what we don’t understand. One of the best ways to become comfortable with unknowns is to turn them into “knowns.” Together, let’s go through some emerging areas of conversation in order to better prepare ourselves to make responsible decisions around the ethical use of AI and ML tooling.

Generative AI vs Productivity Tools

Much of the technology we think of as “AI” is not really new. IBM’s Watson, a natural language processing system, competed on Jeopardy! back in 2011, over a decade ago. Predictive text and grammar tools in Google Docs, automated Zoom captions, and the ability to search for “cats” in your Apple Photos app are all examples of tools we’d call “AI.” The tech industry has recently shifted to using “AI” more often as a branding tool, capitalizing on popular trends.

We often discuss all AI tools as one big bucket. However, I personally believe that lack of distinction creates friction between decision makers with varying types of concerns. It can help to split AI tooling into two separate categories based on the functions they help us complete: “Generative AI” and “Productivity Tools.”

Generative AI is a type of tooling that creates something new. The most popular Generative AI tools are those that turn a text prompt into an image, such as MidJourney, DALL-E, and Adobe Firefly. Using ChatGPT to write a poem or a blog post is Generative AI. Marvel’s Secret Invasion used Generative AI in a controversial opening credits sequence. Generative AI can cause anxiety for anyone whose job it is to create, due to both the fear of being replaced in the workforce and concerns about plagiarism and copyright violations.

Productivity Tools are AI technologies that assist in daily tasks, such as organizing and analyzing information. Otter.ai takes meeting notes, Parallax optimizes capacity planning, and Grammarly copy edits our blog posts. Using ChatGPT to help you organize a list of bullet points into some JIRA tickets is productivity tooling. I believe GitHub Copilot, a tool designed to write code alongside a developer, is a Productivity Tool–it can’t really write quality code without a human to review and modify said code. However, GitHub Copilot can also be used irresponsibly to deliver sloppy features or inefficient solutions.

I’d also like to point out that the majority of the examples I have listed as Productivity Tools are still “generative” from a technical perspective. They all “generate” some sort of response to a user request. Categorizing these things without explaining the underlying functionality is inelegant, but going into technical detail is tedious. For the sake of this conversation, let’s assume that “Generative AI” refers more to the scope of the tasks the tool accomplishes than to the way it works.

If we understand the differences between using AI as Generative AI and using it as a Productivity Tool, we can more comfortably decide how to use AI in our work. Most of us are likely okay with carefully integrated Productivity Tools, but more concerned about the ways we use Generative AI, especially with regards to delivered work.

AI in Vendor, Client, and Employee Relationships 

Understanding how to use AI tools is only one piece of the responsibility puzzle. We must also understand and determine the acceptable impact of these tools in our work product, including relationships with vendors, clients, and even our employees. 

Often, common sense will solve many of these issues. For example, I would not recommend delivering generative AI assets, such as AI-created images or written copy, to a client unless they have explicitly requested them or such use is covered under a master services agreement (MSA).

Using tools, especially Productivity Tools, to support work is less straightforward. One place to start is by evaluating the security and data processing policies of the tool’s parent company. The AICPA’s System and Organization Controls (SOC) framework defines a set of standards that many organizations report against. For example, Figma, an extremely popular design tool, shares its SOC reports on its security page. Becoming familiar with an AI tool’s terms of service, and how they apply to you as the user and to your data, is also key.

Depending on your industry, the level of acceptable risk in AI tooling can vary greatly. General business marketing websites can have looser requirements than sites for government, finance, healthcare, and scientific research. When in doubt, it is best to review any questionable areas with your legal counsel for advice.

It is important to create and distribute an internal policy for employees on how to use AI tooling, empowering them to make safe and informed decisions around new technology. Ask any vendor you work with for their own AI policy, or require that they comply with the one you have written. For the agency readers–your AI policy should be written in a manner you’d be happy to share with a client when asked.

AI tooling has dramatically changed how we evaluate potential hires as well. In recent years, we’ve had to be extra careful when relying on take-home projects or even asking live questions. At Sparkbox, we’ve incorporated a live pairing step into our technical interview process, which helps us evaluate candidates more on how they approach solving a problem than on a submitted code sample. I likely wouldn’t disqualify someone who wanted to use AI tooling, such as GitHub Copilot, in this pairing session, as long as they were upfront about it. But I would watch them closely to see how they use it. It’s important to determine who uses AI to support their own capabilities and who is using AI to cover a gap in their skills.

The Environmental Impact of AI

One big, robot elephant in the room when discussing the ethical use of AI is the environmental impact of these tools. Electricity usage is a tangible, significant cost of any major computing task. And this isn’t a new concept either. Just a couple of years ago, we had this same conversation around blockchain development and generating NFTs.

Generative AI, understandably, consumes more electrical energy than other common forms of AI. Generating 1,000 images produces roughly the same carbon emissions as driving an average gas-powered car about four miles, as this study by researchers at Carnegie Mellon University and the Allen Institute for AI found:

“For context, the most carbon-intensive image generation model (stable-diffusion-xl-base-1.0) generates 1,594 grams of CO2 for 1,000 inferences, which is roughly the equivalent to 4.1 miles driven by an average gasoline-powered passenger vehicle […]. This can add up quickly when image generation models such as Dall·E and MidJourney are deployed in user-facing applications and used by millions of users globally (we discuss this point further in Section 5).”
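For a bit of back-of-envelope perspective (using the EPA’s commonly cited figure of roughly 400 grams of CO2 per mile for an average gasoline-powered car as a yardstick–that figure is my own reference point, not a number from the study):

1,594 g CO2 ÷ 1,000 images ≈ 1.6 g CO2 per image
1.6 g × 1,000,000 images ≈ 1.6 metric tons of CO2, or roughly 4,000 miles of driving

A single image is a tiny cost on its own; it’s the millions of generations happening every day that add up.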

In the early days of OpenAI and Midjourney, many of us generated dozens of images for fun. Now, better understanding the electrical impact, those joke images that went immediately to my trash bin have me feeling… uncomfortable.

This cost does not mean we should never use these technologies–we likely would not choose to sit in the dark at night or leave our homes unheated during Midwestern winters just to save electricity (assuming we had the privilege to make that choice). Ethical AI use means making responsible decisions while understanding the impact of our choices.

Responsible AI Use is the Future of Tech


Ultimately, AI tools are here to stay, but we get to determine the boundaries we draw with them. Web workers are not really at risk of being replaced by “AI Robots;” however, the most productive tech creators will be those who efficiently and intelligently integrate AI tools into their own work. It’s now part of our jobs as leaders to educate ourselves and our teams in order to make the best decisions we can about this relatively unknown change in the industry. With great AI technology comes great responsibility.
