Our Testing Process

Each service is tested using a simple but thorough process. The first stage is shopping. Just as any other customer would, we compare each service based on what it claims to provide and choose the tier we think offers the best value. Often this is the highest tier, but for some services it may be the middle or even lowest tier. Each identity theft protection service prices its tiers differently, and many shunt less commonly used features into higher tiers so they can pass the savings on to customers who don't need them.

Once that is done, we purchase the services and look over what each one offers and how those offerings are laid out. One of the most important features to us is ease of use. While performance matters, if much of the functionality is locked behind hard-to-read webpages and seemingly random categorization, it might be better to go with another option.

Next, we feed each service all of the information it requests. The amount of information a service prompts the user to enter up front is a simple but telling metric: it shows the service is trying to anticipate your needs and make sure everything is covered, even things the user may have forgotten.

Wherever possible, all of these services are given the same information; if one doesn't cover a particular category, those fields simply won't be available.

Then, we wait. Before too long, a number of alerts may or may not pop up based on what was put into the system. Certain information has been seeded in that should throw up an alert: known data breaches or past identity theft incidents we would expect a service like this to catch.

If a service finds this planted info, perfect! That means it's monitoring everything well. This is usually what we mean when talking about "performance" in this context: how thorough and accurate its alerts are.

If not, that presents a problem. It means there are gaps in the protection that the user will need to be wary of, and in this case it may be better to go with another service.

Next, we test each service's threat resolution capabilities, either by using the options built into the website or app (some resolutions can be completed via an automated system) or by calling customer support to see how they react to the alerts that have been thrown up. Customer support is graded above all on responsiveness and knowledgeability, though availability is important as well.

While a particular agent may not have all the answers, we would expect anything one agent doesn't know how to resolve to be escalated to a more experienced agent who does. When this happens, we don't consider it a black mark against the service's record, as no single agent should be expected to be perfect. If the service is completely unable to resolve the problem, however, that presents a larger issue.

Finally, we put all of these metrics together to create a clear picture of what the user experience for a particular service is like.