
Accessibility Compliance Tool - Manual Testing

What is ACT-M?

The Accessibility Compliance Tool (ACT) allows users to check the accessibility of their site and will be used by the Accessibility COE team to monitor accessibility compliance across all teams at Dell.


Only about 30% of accessibility issues can be tested automatically; the rest require manual testing. This project covers the research, prototyping, and user testing of this addition to the tool, a phase referred to as "ACT-M".
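For context on what the automated portion covers: engines such as axe-core evaluate only machine-checkable rules and flag everything else as "incomplete" (needs review), which is exactly the gap ACT-M is meant to close. A minimal sketch of such an automated check, assuming a Node.js setup with Playwright and the @axe-core/playwright package (not necessarily ACT's actual scanning stack), might look like this:

```typescript
// Sketch only: assumes the "playwright" and "@axe-core/playwright" packages.
// ACT's real scanner may be built very differently.
import { chromium } from 'playwright';
import AxeBuilder from '@axe-core/playwright';

async function scanUrl(url: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Run the axe-core rule set against the rendered page.
  const results = await new AxeBuilder({ page }).analyze();
  await browser.close();

  return {
    url,
    violations: results.violations.length,   // definite automated failures
    passes: results.passes.length,           // definite automated passes
    needsReview: results.incomplete.length,  // rules axe could not decide: manual review
  };
}

// Usage: scanUrl('https://www.dell.com').then(console.log);
```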

The Accessibility Compliance Tool:
An introduction

Main Features -

  • Allows users to run an automated test on single or multiple URLs. (Create a Scan Page)

  • Gives users an accessibility score to let them check their level of compliance. (Scan Results Page)

  • Gives users information about the number of issues they have and steps to fix them. (Scan Results, Specific URL and Specific Issue Page)

  • Allows users to schedule scans to be able to track their progress over time. (Create a Scan and Scan History Page)

Main Users -

  • Product Owners

  • Product Leads

  • Engineers

ACT Wireframes

Summary - Stages of the Project

Competitive Analysis

Conceptual designs and flow

Stakeholder feedback and iterations

User testing process

Affinity Mapping and Insights

Iterations and final designs

Competitive Analysis

To better understand which features we would need to add to ACT, we decided to do a competitive analysis of the manual testing features of different accessibility tools.

The main tools we compared were:

  • UsableNet AQA

  • Microsoft Accessibility Insights for Web

  • Axe DevTools

  • Axe Auditor

  • Access Assistant (Level Access)

We looked at their overall user flows and features to inform our design. Our key areas of analysis were:

  • The explanation of manual testing and its importance.

  • Reporting methods in terms of format, score, data visualizations, scans over time and sharing data with the overall team.

  • Web Content Accessibility Guidelines (WCAG) coverage.

  • Testing methods - number of steps, the ability to begin a new step while another is in progress, mid-test saving, capture of testing time, and notifications.

Competitive Analysis with screenshots of the applications
Competitive analysis in Excel format

First-time user for a multi-URL scan

Current Flow

Create a Scan

Scan History

Scan Results

Specific URL

Specific Issue

Additional Flow

Manual Review

Manual Review URL List

Select URL

Manual Review Console (Needs Review)

Manual Review Console (Guided Manual Test)

Current and Proposed User Flows

Conceptual Designs

This was a flow we created for initial stakeholder feedback.

Create a New Scan - Multiple URLs

Create a Scan Page

This page is the starting point for testing multiple URLs. A user would upload their Excel file of URLs and create their scan.
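As an aside on what the URL upload might involve under the hood, here is a sketch using the SheetJS "xlsx" package to read a one-column list of URLs; ACT's actual upload handling isn't documented in this case study, so the package choice and column layout are assumptions:

```typescript
// Sketch only: reads URLs from the first column of the first worksheet.
import * as XLSX from 'xlsx';

function readUrlsFromWorkbook(path: string): string[] {
  const workbook = XLSX.readFile(path);
  const firstSheet = workbook.Sheets[workbook.SheetNames[0]];

  // header: 1 returns each row as an array of raw cell values.
  const rows = XLSX.utils.sheet_to_json<string[]>(firstSheet, { header: 1 });

  return rows
    .map((row) => String(row[0] ?? '').trim())
    .filter((value) => value.startsWith('http'));
}

// Usage: readUrlsFromWorkbook('urls.xlsx') -> ['https://www.dell.com', ...]
```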

Initial Stakeholder Feedback

After testing our basic mid-fidelity wireframes with stakeholders, the main points of feedback were:

  • How can we better communicate the uncertainty of the score?

  • How can we more clearly explain the difference between the needs review and manual guided testing phases?

  • Concerns around development feasibility.

  • Clarity around at what point progress would be saved.

  • Clarity around when the score would be updated.

Stakeholder Map

Flow used for testing

This was the flow we used for the main round of user testing.

Create a New Scan - Multiple URLs

Create a Scan Page

This page is the starting point for testing multiple URLs. A user would upload their Excel file of URLs and create their scan.

User Testing Process

We created a user testing outline and script with tasks for the users to execute. We collected the data from each user in an Excel sheet and then moved those insights onto a whiteboard for affinity mapping.

We also sent a follow-up survey so that we could understand our users' experience with the user testing process.

Tasks given to users -

  1. Answer introductory questions.

  2. Identify and explain the overall accessibility score.

  3. Choose 4 URLs for manual review.

  4. Complete manual review on one URL.

  5. Check updated score.

  6. Answer any additional questions.

Data points from user testing collected in a rainbow spreadsheet

ACT User Testing Outline and Script

Outline and Script

ACT User Testing Follow-up Survey

Post User Testing Survey

Affinity Mapping

Affinity mapping post-its before sorting
Affinity mapping post-its sorted by similarity

Main Insights from affinity mapping

Previous visualization of the Overall Scan Accessibility Score
New visualization for the scan accessibility score, with an estimated score of 82% +/- 15%, along with the lowest (67%) and highest (97%) scores possible

Understanding the score

  • Most users interpreted the score correctly, but it was not always clear that the color differences in the gauge were meant to show variance.

  • There is an opportunity to connect the variance portion better to the overall score.

  • The overall score should be current - users felt it should reflect work done so far (Manual Review Homepage). 
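The case study does not spell out how the banded score above is calculated, but one plausible reading of the 82% +/- 15% visualization (with a 67% floor and a 97% ceiling) is that unresolved manual checks count as failures for the lower bound and as passes for the upper bound, with the estimate sitting at the midpoint. A purely hypothetical sketch of that interpretation:

```typescript
// Hypothetical sketch of a banded accessibility score; not ACT's actual formula.
interface ScanCounts {
  passed: number;      // checks that passed automated testing
  failed: number;      // checks that failed automated testing
  unresolved: number;  // checks still awaiting manual review
}

function scoreBand({ passed, failed, unresolved }: ScanCounts) {
  const total = passed + failed + unresolved;
  const lowest = (passed / total) * 100;                  // all unresolved checks fail
  const highest = ((passed + unresolved) / total) * 100;  // all unresolved checks pass
  const estimate = (lowest + highest) / 2;

  return {
    estimate: Math.round(estimate),
    variance: Math.round((highest - lowest) / 2),
    lowest: Math.round(lowest),
    highest: Math.round(highest),
  };
}

// Example matching the visualization above:
// scoreBand({ passed: 67, failed: 3, unresolved: 30 })
//   -> { estimate: 82, variance: 15, lowest: 67, highest: 97 }
```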

Terminology 

Users had trouble understanding the nuance of terms used within the application, including:

  • Manual (vs automated)

  • Issues to review (vs. Verified)

  • What "Manual Homepage" referred to.

  • Exactly what "passed" and "failed" meant (failed URLs, instances passed, etc.).

URLs selected by starring method on the Manual URL List Page
Instance highlighted in blue in the guided manual testing page.

UI Mismatch

Users' expectations of certain UI components did not match what was shown in the tool, including:

  • Users saw starring URLs as favouriting items vs selecting them.

  • Users did not see going to the second tab on the Manual URL List page as a natural progression.

  • Users did not perceive the highlight color of the box in the manual console to be prominent enough.

Issues with User Flow

  • Proceeding to manual testing could be a clearer CTA than just "Select URLs".

  • Interstitial Manual Console screens need to appear adequately different.

  • Flow within Manual Console needs to be clear - users should be able to return to the Manual Homepage at any point.

  • Users are not currently able to return from the manual console flow to the homepage without exiting the entire manual experience.

Final Designs (based on feedback)

These were the final designs we landed on based on user feedback.

Create a New Scan - Multiple URLs

Create a Scan Page

This page is the starting point for testing multiple URLs. A user would upload their Excel file of URLs and create their scan.

Accessibility Annotations marked on a wireframe

Accessibility Annotations

Accessibility annotations such as headings, tab stops, accessible names, links and buttons, and landmarks were added to all designs.
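As a rough illustration of what those annotations point engineers toward when they build the screens (this fragment is illustrative only and is not taken from ACT's codebase):

```tsx
import * as React from 'react';

// Illustrative only: a fragment showing the kinds of things the annotations
// call out - landmarks, a heading level, accessible names, and the focusable
// controls that define the tab order.
export function ScanResultsHeader() {
  return (
    <header>
      {/* Landmark annotation: banner (the <header> element) */}
      {/* Heading annotation: the page's single H1 */}
      <h1>Scan results</h1>
      {/* Landmark annotation: navigation, given an accessible name */}
      <nav aria-label="Scan pages">
        {/* Tab stop 1: a plain link */}
        <a href="/scan-history">Scan history</a>
        {/* Tab stop 2: a button whose accessible name comes from aria-label */}
        <button type="button" aria-label="Start manual review">
          Start
        </button>
      </nav>
    </header>
  );
}
```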

Overall Challenges

Some challenges we experienced during the process -

  • The designs were initially created in DDS 1 (the earlier design system). The ask was to shift them to DDS 2 (the new design system); however, this change caused a lot of development issues, and we had to go back and forth between the two systems.

  • The scope of MVP 1 kept changing and it was hard to keep track of what needed to be done.

  • Team members were constantly changing, which slowed down progress.
