In general usage, “error” means a mistake. In scientific circles, however, error is part of everyday life, where certainty is rare. Whenever you try to measure something, there will always be some level of uncertainty. Maybe something is closer to 2 inches long than to 1 inch or 3 inches, but it’s never “exactly” 2 inches. Is it closer to 2.0 or 2.1 inches? To 2.00 or 2.01 inches? Every measuring tool has its limits, and thus there is an “error” in the measurement: some range within which you’re certain but beyond which you can’t get more specific.
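One way to make this concrete: because an instrument has a finite resolution, a reading is really an interval, not an exact number. Here is a minimal sketch of that idea, assuming a hypothetical ruler marked in tenths of an inch (the function name and resolution value are illustrative, not from any standard library):

```python
# Sketch: every instrument has a finite resolution, so a reading is
# really an interval. The 0.1-inch resolution below is an assumed
# value for illustration, not a real device specification.

def measurement_range(reading, resolution):
    """Express a measurement as the interval reading ± half the
    instrument's smallest division: the range within which we're
    certain but beyond which we can't get more specific."""
    half = resolution / 2
    return (reading - half, reading + half)

# A ruler marked in tenths of an inch can't say more than this:
low, high = measurement_range(2.0, 0.1)
print(f"{low:.2f} to {high:.2f} inches")  # → 1.95 to 2.05 inches
```

A finer instrument shrinks the interval but never eliminates it, which is the sense in which no measurement is “exactly” 2 inches.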
Error is an important consideration in analyzing data, especially in health and medical applications. Set a detection threshold too low, and you flag too many healthy cases, producing more false positives; set it too high, and you miss real cases, producing more false negatives. Choosing the right threshold for some measurements is a critical problem. Lean too far toward false positives in medical applications, and you risk wasting money on treatments that patients don’t need. Lean the other way, and patients who need the treatment won’t get it.
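The trade-off above can be sketched in a few lines. This is a hypothetical illustration: the readings, the ground-truth labels, and the thresholds are made-up numbers, not real clinical data or cutoffs.

```python
# Hypothetical example of how moving a decision threshold trades
# false positives against false negatives. All numbers are invented
# for illustration; they are not clinical values.

def count_errors(readings, truly_elevated, threshold):
    """Classify each reading as positive if it meets the threshold,
    then count false positives and false negatives against the
    (in practice unknown) ground truth."""
    false_pos = sum(1 for r, sick in zip(readings, truly_elevated)
                    if r >= threshold and not sick)
    false_neg = sum(1 for r, sick in zip(readings, truly_elevated)
                    if r < threshold and sick)
    return false_pos, false_neg

readings       = [92, 105, 118, 126, 133, 141, 155, 170]
truly_elevated = [False, False, False, False, True, True, True, True]

# A low threshold flags healthy cases: more false positives.
print(count_errors(readings, truly_elevated, threshold=110))  # → (2, 0)

# A high threshold misses real cases: more false negatives.
print(count_errors(readings, truly_elevated, threshold=150))  # → (0, 2)
```

No threshold eliminates both kinds of error at once; which side to favor depends on the cost of each mistake, which is exactly the choice the next paragraph takes up.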
This problem is explored in an excellent article, “In Biometrics, Which Error Rate Matters?” As we expect wearable devices to do more for us, providing data that informs decisions not only about our own individual health but also about larger populations, accuracy matters. And we need to accept that no device will ever be “perfectly” accurate. How much error is acceptable? If a step counter is off by 20%, nobody is likely to get hurt. If a blood glucose reading is off by that much, people could die. But whatever the accuracy, we have to decide how we’ll treat the results. Do we want more false positives or more false negatives? There’s no single correct answer; the choice depends on the application and what’s at stake. It’s important that we get it right if we are to get the full benefits from wearable Health Tech devices.