Table of Contents
- What is the Digital Standard?
- Who Created and Maintains the Digital Standard? Who can Contribute?
- Why is Testing Important?
- Why was this Testing Handbook Necessary, and Who is it For?
- How does the Handbook Score Products?
- How did we Pick the Products? (And Why aren’t We Naming Them?)
- What Products did we Ultimately Choose?
- How did we Design the Technical Testing Procedures?
- How did we Design the Policy Testing Procedures?
- What would we Change in the Standard?
- Conclusion
How did we Design the Policy Testing Procedures?
The standard also includes a number of tests that evaluate the policies of the product being tested and those of the company that makes or maintains it. These tests are designed, and function, somewhat differently from the more technical tests. In evaluating the indicators for the policy tests, testers mostly review information the company or manufacturer provides on its website or in documents included when a user purchases the physical product. Some indicators seek yes-or-no answers to questions about, for example, a product’s terms of service or privacy policy.
In some ways it is easier to apply a stringent rubric to tests that involve reviewing written documents or using the app itself, because there are fewer functions to evaluate. However, as with the technical tests, the requirements provided in the standard are often ambiguous. Measuring whether a document is easy to understand, or whether data-sharing practices are reasonable, is largely subject to testers’ interpretation. Further, the nature of a product may mean that its privacy risks are more significant than those of other products. It is less crucial that a legal document be “easy to understand” when the product in question poses fewer threats to privacy, for example when it does not collect any sensitive personal information. However, it is extremely important that a product which collects biometric, location, and other sensitive data have a privacy policy users can understand, because mishandling this data can have devastating effects on a user’s privacy and security.
In designing the procedures for these tests, we often focused on whether the company provided clear examples or definitions in its policies, and whether the policies addressed all of the functions of a specific product. Other tests focus on the company’s specific practices: for example, how it manages particular user data, or what its internal privacy practices are. A company may have a robust data-deletion policy, but unless it says so in documents available to customers, we were forced to fail it on the relevant indicators. This is both a limitation of the standard, in that results may not always reflect actual practices, and a strength, in that it is arguably more important for users to have an accurate understanding of a company’s practices so that they can make educated choices. Even when companies do delete or minimize data in accordance with best practices, it is still important that they disclose that information to users.
The policy tests are likely more accessible to non-expert testers than the technical tests, which is why it was crucial for us to make sure the processes were clear and consistent. Anyone can read company documents or use a product and observe its features, whereas not everyone can evaluate code for vulnerabilities. Given this, we did our best to create procedures that could be replicated by anyone and did not require significant interpretation or expertise.