QA Conscience — Testing Ethics

James Beringer
5 min read · Dec 8, 2021

For all developers, testing their applications is a large part of the actual work. Most teams mitigate this by using a quality team to handle most of that validation. The ownership of what is acceptable to go to production lies on that team's shoulders. Oftentimes, this goes beyond validating the application's basic functionality. They must ensure that the released product will not pose any major impediments to its end users, which means considering a diverse set of users.

Here’s an example of a missed consideration: a request from the security team to update a form’s first name and last name inputs to validate against a regex. My immediate thought was how annoying the change might be, but I was informed it wouldn’t be too difficult. When testing, because of my Irish descent, I thought to try these fields with a last name containing an apostrophe. Lo and behold, I got an error stating the name was invalid. When I went back to the development team, they said, “Oh, that’s easy enough for us to add.” That, however, wouldn’t have addressed the core issue.
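To make the failure concrete, here is a minimal sketch in Python. The patterns are hypothetical, not the actual regex from that project, but they show why adding a single apostrophe doesn’t address the core issue:

```python
import re

# The kind of pattern that caused the bug: ASCII letters only.
NAIVE = re.compile(r"^[A-Za-z]+$")

# The "easy fix" the dev team offered: also allow an apostrophe.
PATCHED = re.compile(r"^[A-Za-z']+$")

for name in ["O'Brien", "Núñez", "Smith-Jones", "de la Cruz"]:
    print(f"{name}: naive={bool(NAIVE.fullmatch(name))}, "
          f"patched={bool(PATCHED.fullmatch(name))}")

# O'Brien now passes, but accented, hyphenated, and multi-word names
# still fail, which is the problem the quick fix leaves unsolved.
```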

No matter how well we designed the change, we were going to end up excluding some customer at some point. Doing it properly would have required an extensive amount of research into different names, character styles, and so on, which we decided was not worth the effort. The decision was on me to say, “No, this cannot go to production like this,” which is something I’ve said before, to no avail. This time, there wasn’t much pushback, and the team finally decided a deny-list would be adequate.
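For context, a deny-list inverts the logic: rather than enumerating which characters a name may contain, it blocks only the characters the security team is actually worried about. Here is a minimal sketch, assuming the concern is markup and control characters; the specific disallowed set is purely illustrative:

```python
# Block a small set of risky characters (markup, control characters) and
# accept everything else, including apostrophes, hyphens, and accents.
DISALLOWED = set("<>\\;{}") | {chr(c) for c in range(0x20)}

def name_is_acceptable(name: str) -> bool:
    name = name.strip()
    return bool(name) and not any(ch in DISALLOWED for ch in name)

print(name_is_acceptable("O'Brien"))          # True
print(name_is_acceptable("Zoë Smith-Jones"))  # True
print(name_is_acceptable("<script>"))         # False
```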

The primary reason I was able to identify this flaw was my heritage, which is part of the problem. In a field dominated by white men, functionality is being shipped that may not work for large groups of people. While this is almost never intentional, it is a very real impediment to products being fair and responsible.

Current Testing Standards

There are many different organizations that outline standards for ethical software testing. Some try to condense them down into a short list of ten commandments. Unfortunately, many of them fall short. Most focus on the interests of stakeholders and on protecting data, instead of on how the products will affect a diverse group of end users.

Photo by Sigmund on Unsplash

Determining that depends on the software’s capabilities. One of the best examples is the creation of users in an application. If a user’s first name happens to contain a dash, an apostrophe, or an accent, will the application handle it, or will there be an issue? Maybe a conversion to the dreaded Unicode rhombus with the white question mark. Ensuring these cases work as one would expect is the responsibility of the QA team. If the QA team misses them, there can be ramifications, most of them small annoyances, but some can lead to legal action.
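One way to bake this into QA is a parametrized test over names that commonly trip up validation and encoding. This is a minimal sketch using pytest; create_user here is a hypothetical placeholder for the application’s real user-creation API:

```python
from types import SimpleNamespace

import pytest

REPLACEMENT_CHAR = "\ufffd"  # the dreaded rhombus with the question mark

# Names that commonly trip up naive validation or encoding handling.
DIVERSE_NAMES = ["O'Brien", "Anne-Marie", "José", "Núñez", "Þóra", "Łukasz"]

def create_user(first_name: str, last_name: str) -> SimpleNamespace:
    # Placeholder: swap in the application's real user-creation call here.
    return SimpleNamespace(first_name=first_name, last_name=last_name)

@pytest.mark.parametrize("last_name", DIVERSE_NAMES)
def test_user_creation_preserves_name(last_name):
    user = create_user(first_name="Test", last_name=last_name)
    assert user.last_name == last_name             # stored exactly as entered
    assert REPLACEMENT_CHAR not in user.last_name  # no mojibake on the round trip
```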

For example, back in 2014, Amazon moved some of its hiring decisions to an algorithm to streamline the hiring process. The algorithm learned what to look for from the existing resumes of its engineers. Unfortunately, most of that staff was male. Because of that, it began giving less weight to female applicants, penalizing things like attendance at certain women’s colleges or sports with “women’s” in the name. Amazon was able to scrap the program before there was too much damage, but that may only be because a behemoth like Amazon draws so much visibility. There may be hundreds of organizations using applications with implicit bias baked in.

Conscience Confidence from Development to Delivery

There are many steps a team can take to mitigate these kinds of mishaps. While not all of them will be applicable, these steps will help the engineers build and test knowing the result will be as equitable as possible.

Photo by Mapbox on Unsplash

Have scheduled bias and discrimination training on a regular basis. While most organizations include this in their annual HR training, it tends to miss the mark. Most of that training focuses on co-worker or direct customer interactions. For many engineers, the issue is unconscious bias. To address this, many teams change charged language: main instead of master, allow/deny instead of black/white, and so on. These are all good changes that need to happen, but they’re not enough. What is more important is asking the right questions in planning and development meetings.

Ask the right questions. Working in teams lets team members bounce ideas off of each other, but if those teams aren’t asking the right questions, things will get missed. “Is there someone’s perspective we’re missing in the room?” If someone’s perspective is missing, go get it. There are many questions to ask, but the best questions come from the best people.

Hire a more diverse workforce. The current tech field is predominantly populated by white men. Hiring more diversely means having more sets of eyes in the room and as full a perspective as possible. It will give you the best return on investment in building a forward-thinking team.

Photo by Matteo Vistocco on Unsplash

Final Considerations

Taking these steps will help cut some of the risk, but it won’t stop it all. It takes constant, conscious action to ensure what’s built is validated with the understanding that many different types of people may be the end users. For a single individual, that is a huge burden to shoulder, regardless of background. The entire team, and the development process itself, needs to take the necessary time to consider all the variations and downstream effects people may encounter. When a team works together to make those considerations, it will ship an ethically sound product.
