As the reader may know, Big Data refers to data sets of enormous scope (think terabytes of data). Historically, many of these databases have contained high-dimensional data about the activity of people online; the advertising platforms of Facebook and Google – the backbone of their businesses – are examples of products informed by big data sets. Applications aren’t limited to the web, however: Walmart’s transaction databases are estimated at 2.5 petabytes (1 PB = 1,000,000 GB).
The lesson: many successful companies have discovered that the ability to collect and analyze enormous amounts of data gives them a significant competitive advantage, if not the core value proposition of their organization.
So, how does this apply to medical devices?
Health-related data is becoming more abundant. People care less about privacy than they used to and, for those who do, HIPAA has well-documented guidance for de-identifying Protected Health Information. Still more data is produced and stored by devices from consumer health companies such as Fitbit and Withings that cater to the growing quantified-self market.
Bottom line: there is precedent for collecting data on a large scale… so what data should you capture?
The answer depends on your specific application, but here are some ideas you should consider:
- Could you make a better walking brace with data on millions of steps taken by thousands of users? What if you also had information on their recovery time?
- Could you make a better surgical robot with data on the position of every end-effector at every second of every surgery it ever performed?
- Could you make a better ventilator with flow/pressure/heart rate/O2 Sat/etc. data on millions of inhalations and exhalations?
Now imagine your company had such a data set… and your competitor didn’t. As that data set grows and your products become more data-driven, a network effect takes hold. Who would want to buy a competitor’s product based on an inferior data set? You would be Google; your competitor would be Yahoo.
Analyzing Big Data can be hard. The hurdles are both logistical and analytical:
- Logistically, Big Data can’t be loaded into a laptop’s RAM (i.e., you won’t be opening it up in Excel). To be able to “look at” Big Data, specialty tools such as the Hierarchical Data Format (HDF5) or Hadoop may be required.
- Analytically, machine learning techniques such as neural networks may be required if a pattern or trend can’t be isolated using strictly mathematical methods. Such techniques, while well-understood, differ from traditional statistics and can have a bit of a learning curve.
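To make the analytical hurdle concrete, here is a minimal sketch of the smallest building block of a neural network: a single logistic neuron trained by gradient descent. The toy data set and its feature meanings are hypothetical (imagine two sensor readings labeled normal/abnormal); the point is that the decision rule is *learned* from examples rather than specified as a closed-form formula:

```python
import math

# Sketch: one logistic neuron (logistic regression) fit by gradient
# descent -- the simplest instance of the "learn a pattern from data"
# approach, as opposed to a hand-derived mathematical rule.

def train_neuron(data, labels, epochs=2000, lr=0.5):
    w = [0.0] * len(data[0])  # one weight per feature
    b = 0.0                   # bias term
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = p - y                     # gradient of log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Hypothetical pattern: label is 1 when the two readings sum to > 1.
data   = [[0.0, 0.2], [0.9, 0.8], [0.3, 0.1], [0.7, 0.9], [0.2, 0.6], [1.0, 0.4]]
labels = [0, 1, 0, 1, 0, 1]
w, b = train_neuron(data, labels)
```

A production model would have many such neurons stacked in layers and be trained with a library such as PyTorch or TensorFlow, but the learning loop above (forward pass, error, gradient step) is the same idea.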
A company cannot perpetually compete while operating on less data than its competitors. Ever-increasing saturation of connectivity (RFID, Wi-Fi, Bluetooth, NFC) is allowing us to collect and store more and different data than ever before. The critical question is: what kind of data can give you an edge?