We need a common language for the internet of things

When someone tells me they have bought smart light bulbs, an Internet-connected pet cam, or any other Internet of Things (IoT) device, I always get an unsettled feeling in the pit of my stomach. They’re so excited about the affordances of their new IoT devices and apps, but I am skeptical about the privacy and security vulnerabilities. How do I have a conversation about these concerns without coming across as hyper-paranoid? Perhaps the answer is that we aren’t quite ready to discuss these issues on a societal level.

Privacy and security advocates all over the world have been talking about the threats that IoT may pose to society – unless standards and regulations are put in place to help mitigate some of these risks. They argue that privacy and security should be built in by design, not bolted on as an afterthought.

While I praise the work that advocates are doing, IoT devices are on the shelves right now, and we need to be able to have conversations with everyday folk about what privacy and security risks look like in the digital economy. But how can we have these conversations when we haven’t yet established a common, understandable vocabulary for the nuances of privacy and security? Even the acronym “IoT” is not a familiar term amongst most of my “less-techy” friends and family, which means I have to explain what it stands for and how it underpins other phrases such as “smart homes.”

Personal security, national security, and social privacy are already well understood, although the cultural contexts for each may differ across the globe. They are often defined by notions of harm posed to individuals and society, whether that is physical, psychological, or financial harm. In my current research I have identified a clear gap in people’s knowledge and vocabulary for talking about institutional privacy. (From “Privacy Concerns and Online Behaviour,” institutional privacy is “people’s uneasiness and fear that their data is used for unwanted purposes. Examples are unwanted, targeted ads on Facebook or political spying by the state. Compared with social privacy concerns, institutional privacy concerns are more abstract and less present in people’s daily lives.”) The potential consequences of data sharing, collection, and use are difficult to understand, because the feedback loop between the creation of data and an unintended consequence is often obfuscated.

Examples of unintended consequences:

  • A smart doorbell making your WiFi network vulnerable to hackers
  • Hackers accessing baby monitors
  • Your smart TV potentially listening to all of your conversations

I like to call these instances unknown unknowns. We never expected them to happen, nor do we have the tools to mitigate these risks. The line between cause and effect is not a direct path, and it is rarely one we are allowed to see.

I recently listened to an interview with Cory Doctorow, a science fiction author and co-editor of Boing Boing, on the podcast Yale Privacy Lab, in which he spoke of exactly this. He noted how the language we use to discuss the nuances of climate change has evolved, from “global warming” to talking about CO2 and “all climate-related gases including methane.” He continued by saying that having this level of detail lends itself to better-quality discussions of the issues across “a very broad section of the population which is necessary for it to both have democratic legitimacy and chance of building a mass movement that we need to do something about it.”

So while we might not have the technical know-how to personally reduce the risks of privacy and security vulnerabilities, collective voices talking about these issues in a commonly understood vocabulary are a start – and it needs to happen now.

Get IoT smart!
Internet of Things (IoT), Privacy, Security, Trust