Using Scorecards to Encourage Adoption of Design Systems

02-24-20 Mandy Kendall

Evaluating design system components with a scorecard provides transparency for subscribers and guidance for the developers creating those components.

In our 2019 Design Systems Survey, we asked respondents from both in-house teams and external agencies to report their organization’s biggest concerns as they considered implementing a design system. In-house teams reported two concerns that topped the list:

  1. Adoption and acceptance of the system

  2. Buy-in from the team

So as design system teams, what can we do to encourage subscribers to adopt our design system? In this article, we’ll discuss a strategy for evaluating the components in a design system, how this strategy can benefit subscribers by providing transparency, and how developers can use it as a guide for creating new components within the design system.

Developing a Strategy

I am currently part of a dedicated project team that works on designing, building, and maintaining a mature (~4-year-old) design system for one of our enterprise clients. Our design system is consumed by roughly a dozen different internal teams at the company, and these teams are interested in the design system for different reasons. Some may want a new component for prototyping while others need one for a production environment. Those subscribers have very different needs in terms of what we would call the “maturity status” of the components.

“Maturity status” refers to how ready a component is for use in production. As we develop the components in our system, they may reach a level where they are “good enough” for use on some teams. The prototyping team, for example, may not be concerned with the performance of a component or how well the code is written since they are just in the testing phases. On the other hand, code quality and performance may be very important to production teams who need to be confident that the component they intend to use is fully “mature”, or production-ready. We shouldn’t make the prototyping team wait to use a component while we are still working on getting the component ready for the production team.

To address this difference in subscriber needs, our team developed a strategy to measure and easily convey the maturity level of the individual components to our subscribers. Our approach was two-fold:

  1. Version individual components instead of the design system as a whole

  2. Create a system to easily convey to our users what they were getting when they used a particular version of a component

Component Versioning

Previously, our team had used semantic versioning for the whole design system—meaning every component was production-ready at the time of the release. As the design system grew and was being used by more and more teams within the organization (all with varying needs), we saw that our approach to versioning needed to grow as well. After some research and consulting with stakeholders, we decided that moving from versioning the entire design system to versioning the components individually would serve all the teams better. This type of versioning strategy would allow some teams to begin using mature components in production as soon as those components were ready while also allowing our prototyping teams to use components that were still being developed.
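As a sketch of what per-component versioning can look like in practice, suppose each component ships as its own versioned package (the package names and version numbers below are invented for illustration). With semantic versioning, a pre-release suffix like `-experimental` signals a component that a prototyping team may adopt but a production team should avoid:

```typescript
// Hypothetical per-component version manifest. Package names and
// versions are invented for illustration only.
const componentVersions: Record<string, string> = {
  "@acme/ds-button": "2.0.0",
  "@acme/ds-text-field": "1.4.0",
  "@acme/ds-date-picker": "0.3.0-experimental.2",
};

// Semver marks pre-releases with a hyphenated suffix after the patch
// number, so a simple check can separate production-ready releases
// from in-development ones.
const isProductionReady = (version: string): boolean =>
  !version.includes("-");
```

A production team would depend only on versions where `isProductionReady` holds, while a prototyping team could opt into the pre-release components as soon as they are published.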

Scorecards and Status Indicators

The second part of our strategy was developing what we call a “component scorecard.” The scorecard is a set of metrics we use to evaluate a component so we can determine its maturity status. Our scorecard is accompanied by a graphic that helps individual subscribers better understand if the component, in its current state, fits their needs.

Many design systems, like IBM’s Carbon and GitHub’s Primer, have some way of determining and communicating the status of their components to the subscribers. It’s common to display some type of visual status indicator next to the component in the design system documentation. These visual status indicators are often in the form of a graphic or color-coded label, along with a term like “Stable” or “Experimental” to indicate the maturity status of a component. The design system’s documentation then includes a key with a more detailed explanation of what the different statuses mean.

Criteria to Determine Component Status

Each design system team must decide what criteria they will use to determine a component’s maturity status. Those criteria should be based on the needs of the subscribers they are targeting. These are some common criteria that may be used for determining the status of a component:

Usability: How usable is the component for the end user? Is the component’s function clear to the end user, or is it potentially confusing?

Accessibility: Is the component usable by those using assistive technologies like screen readers or keyboard navigation?

Code Quality: Is the code written efficiently and clearly? Does it follow commonly accepted code practices? Will it be easy for future developers to understand the code?

Performance: How performant is the component? Could the code be written differently to improve performance?

Within each of these criteria, there may be individual metrics to help developers understand what is expected of a component in that area.

Scorecard Example: Determining the Usability of a Component

Let’s take a look at an example text field component and how the scoring system might work for one of the criteria, usability.

See the Pen Scorecard Component Example by Mandy Kendall (@MLKendall) on CodePen.

In this example, we have three different potential status labels we can use to convey the component’s maturity status:

Stable: The component is production-ready for both design and development use.

In Progress: The component is generally complete but needs selective improvements to be "production-ready."

Experimental: The component is new and under construction. It may have basic functionality, but it is not safe for use in production.

The chart below lists the metrics that we will use to evaluate the usability of our sample component.

Clearly appears to be interactive: At first glance, can users clearly tell how to interact with the component?

Provides clear visual feedback: Does it give users visual feedback that helps them understand the state of a user action (like clicking or typing) that's currently happening?

All necessary states are present: Does this component have all the states necessary to accommodate all use cases? Are all the states well defined and distinguishable from each other?

Colors meet the contrast standard (WCAG AA, 4.5:1 ratio): Do the colors meet the WCAG 2.0 contrast ratio requirement?
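Of these metrics, the contrast check is the one that can be verified mechanically. As a sketch of how a team might automate it, here is the WCAG 2.0 relative-luminance and contrast-ratio calculation (the function names are ours; the formula comes from the WCAG 2.0 definitions):

```typescript
// Compute the WCAG 2.0 relative luminance of a "#rrggbb" hex color.
function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    // sRGB linearization, per the WCAG 2.0 relative-luminance definition
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio between two colors: (L_lighter + 0.05) / (L_darker + 0.05)
function contrastRatio(fg: string, bg: string): number {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  const [lighter, darker] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// WCAG AA requires at least 4.5:1 for normal-size text.
const meetsAA = (fg: string, bg: string): boolean =>
  contrastRatio(fg, bg) >= 4.5;
```

For example, `#767676` text on a white background comes out at roughly 4.54:1 and passes AA, while a lighter gray like `#999999` falls below 4.5:1 and fails.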

Using this chart, we can see that our sample textbox component passes three of the four usability metrics:

Looks Interactive: PASS

  • Our component looks and functions like a text box, and it is clear to users that they would type information inside of it.

Provides appropriate feedback: PASS

  • It is clear to users when they have focused on the textbox and can begin typing.

All appropriate states exist: PASS

  • Our example textbox component includes a disabled, focus, and error state.

Meets color contrast standard (WCAG AA, 4.5:1 ratio): FAIL

  • The component fails the last usability metric because the color contrast of the hint text does not meet the color contrast standard.

Since this component passes only 3 of the 4 metrics we are using to assess its usability, it has room for improvement. The component may need to be sent back to the design team to choose colors that are WCAG compliant. It is likely useful enough for the prototyping team to use in the meantime; however, the production team may want to wait until the component is updated before putting it into production. In this case, we would rate the component as “In Progress” since it is generally complete but still needs improvement before it should be used in production.
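The reasoning above, where a count of passing metrics maps to a status label, can be sketched as a small scoring function. The cutoffs here ("all metrics pass" for Stable, "at least three-quarters" for In Progress) are invented for illustration; the real thresholds are a decision for each design system team:

```typescript
type Status = "Stable" | "In Progress" | "Experimental";

interface MetricResult {
  name: string;
  passed: boolean;
}

// Hypothetical thresholds: every metric passing means Stable, most
// passing means In Progress, anything less means Experimental.
function maturityStatus(results: MetricResult[]): Status {
  const passed = results.filter((r) => r.passed).length;
  if (passed === results.length) return "Stable";
  if (passed / results.length >= 0.75) return "In Progress";
  return "Experimental";
}
```

Scoring the sample textbox's four usability metrics (three passes, one contrast failure) through this function yields "In Progress", matching the assessment above.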

Benefits of Using Scorecards and Status Indicators in Design Systems

So how does using a scorecard encourage subscribers to adopt a design system?

Builds Confidence in the Design System

Using a scorecard can build confidence among developers, designers, and other stakeholders by providing transparency into the process of component creation. The subscriber can see that the design system team is aware that a component has areas where it needs improvement and that there is a plan in place to get the component to full maturity status. In our case, we also give our subscribers access to the metrics we used to evaluate the individual components so they can clearly understand why a component has a certain status and whether the current version of that component is useful to them.

Encourages Adoption

Allowing subscribers to see the status of a component encourages early adoption because it gives them full confidence that a component they are using now, even if not fully mature, will continue to be supported and improved in the future. Prototyping teams can feel confident that if they begin using an “immature” component in their prototypes and find it to be successful, the design system team will continue improving and developing the component for future use in production. Production teams, on the other hand, can be confident that components considered “mature” by the design system are truly production-ready and able to be used out in the real world.

Provides Guidelines for the Design System Team

One of the benefits of using a scorecard is that it provides a set of guidelines for the team as they continue to work on improving their product. Project managers can use the scorecard to help them write and prioritize stories and develop acceptance criteria for each of those stories, which in turn helps developers and designers clearly understand the expectations around components. These guidelines are also useful to team members providing design or code reviews to others on the team. Our team recently held a retrospective for the last large phase of our design system work, and developers indicated that the scorecard was extremely useful in helping the team prioritize and focus on specific areas of the product that we needed to improve.

Scorecard Systems Paving the Way to Better Design Systems

Design systems enable their subscribers to build better products by ensuring a level of consistency and reusability, but only if subscribers have enough confidence to use them. Using scorecards in our design system helped build that confidence and encouraged our subscribers to use this very important tool. If you or your team is already using a scorecard system, or has a different strategy for conveying the status of your design system components to users, we'd love to hear from you.

