It is frequently suggested, in both the popular media and the academic press, that the internet is essentially rewriting the ‘rules’ of privacy. This is simply not true. ‘Privacy’ always has been, and always will be, based on two pillars: trust and transparency. Neither the rules of privacy nor the way in which they are applied have changed. What has occurred, and what has been misinterpreted as fundamental change, is that the relative costs of privacy and publicity have shifted dramatically, resulting in people sharing more information, more publicly. The misreading of this shift has driven a focus on building new platforms for private sharing based on the ‘security’ model, requiring ever greater amounts of identification and verification. This is ultimately an inappropriate response, and we are witnessing the beginning of its demise. The next iteration of ‘private’ sharing solutions is based on sharing as little collateral information as possible, rather than deploying additional metadata in an attempt to lock down the data itself.
It is important to first set the parameters of exactly what is meant by privacy, for its definition is not always consistent. Privacy, as it will be discussed here, is the ability to share information with specific counterparts without exposing it beyond the lines the ‘owner’ of the content intends. Privacy pertains to issues of the transmission and retransmission of information among people. This is distinct from security, which pertains to preventing information from being exposed through a targeted exploit.
Every private transaction must satisfy two critical prerequisites: trust and transparency. A transaction can only be private to the extent that the person sharing information trusts the recipient, and that he or she transparently signals exactly how the shared information is to be treated. This holds true for both individuals and systems. Technology cannot prevent friends or associates from passing on secrets; one can only pick trustworthy counterparts and ask them not to tell. This means that the goal of private systems must be to deliver an experience that models the ‘trusted and transparent’ in-person conversation as closely as possible.
While the basic prerequisites of privacy are not changing, the cost structure of sharing very much is. The cost of distributing content has been falling as technology has improved. The internet is a drastic step change in this direction, but it is hardly the first. Most recently, the ‘web 2.0’ movement has lowered the cost of public distribution by paying people who make their content public in one form or another. This advertising-supported communication model (primarily centered on search and social applications) has brought costs to the point that, for many types of ‘user generated’ content, the wide distribution of information is negatively expensive.
In contrast, the costs of privacy have risen on a relative, if not absolute, basis. It is now far more expensive to keep content private than it is to publish it widely. This is not because of the often-cited rise in security costs. Rather, it is simply because the costs associated with publishing information have fallen far more rapidly than the costs of private sharing; the tools for private sharing are less developed than the tools for public sharing. So, whereas for all of human history it has been more expensive to share information widely than privately, the reverse is now true.
To help users control private information, a wide range of providers have developed complex mechanisms that require ever greater amounts of user input. The most recognizable form of this is social networking. These systems help users construct trusted identities and then define sets of relationships, roles, and permissions that govern how their information and content may be accessed. The central conceit is that additional layers of information can be deployed to seal private information behind a complex maze of accounts and permissions.
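The mechanics of this ‘data-rich’ approach can be sketched in a few lines. What follows is a hypothetical illustration, not any particular service's implementation; the `Account` and `DataRichService` names, and every field on them, are invented for the example. The point is the sheer amount of metadata (personal identifiers, relationship graphs, per-item access-control lists) the service must hold before a single piece of content can be shared privately:

```python
import secrets

# Hypothetical sketch of the 'data-rich' permission model: every shared item
# drags along accounts, identifiers, relationships, and per-item permissions,
# all of which the service must store and protect.

class Account:
    def __init__(self, email):
        self.email = email                    # personal identifier held by the service
        self.user_id = secrets.token_hex(8)   # centralized identity
        self.friends = set()                  # relationship graph

class DataRichService:
    def __init__(self):
        self.content = {}                     # content_id -> (owner_id, data, acl)

    def share(self, owner, data, viewers):
        content_id = secrets.token_hex(8)
        acl = {v.user_id for v in viewers}    # explicit per-item permission list
        self.content[content_id] = (owner.user_id, data, acl)
        return content_id

    def read(self, account, content_id):
        owner_id, data, acl = self.content[content_id]
        # Access requires a verified identity present in the ACL.
        if account.user_id == owner_id or account.user_id in acl:
            return data
        raise PermissionError("not in ACL")
```

Even in this toy form, the service ends up knowing who everyone is, who knows whom, and who may see what, which is exactly the metadata burden the text describes.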
This model is unsustainable. It asks users to trust services and people with even richer private information defining identity and relationships (accounts, email addresses, and other personal identifiers). By centralizing identity and ceding the metadata necessary to define the accessibility of information, these services are raising the total cost of private sharing for individuals and increasing the potential for abuse. In many cases, the systems are simply too complicated and users opt not to use them. In other cases, the extra data creates new points of vulnerability.
Out of this current state, a new model is emerging: ‘simple privacy.’ This model has also been referred to as ‘casual privacy’ or ‘data-poor’ privacy (as opposed to ‘data-rich’ or ‘verbose’ privacy). The fundamental basis of the movement, which is ultimately a return to historical norms, is that less is more. Within this construct, people share exactly what they want, with whom they want, for as long as they want, without extraneous information or metadata. The model minimizes the informational footprint needed to privately share information and does not require embedded accounts, identity, search, or any social elements. Rather than enabling content to last ‘forever,’ it removes content as soon as it is no longer needed, and it provides revocable access. Users share simply by transmitting specific un-guessable locations and passwords off one platform and on to another. As a result, privacy is heightened: it is impossible for a system to expose, maliciously or accidentally, that which it does not know. This aims to mirror as closely as possible the private, in-person conversation in the middle of a busy café, where, without context, private things can be openly discussed. Even though others within earshot might listen, they lack the context to understand the nature of the topic being discussed.
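These mechanics can be sketched concretely. The following is a minimal, hypothetical illustration (the `SimpleShare` name and its methods are invented for the example, not taken from any real product): content lives at an un-guessable token, is guarded by a password that travels out of band on another channel, expires on its own, and can be revoked at will. No accounts or identities are ever stored, so there is nothing identifying for the system to expose:

```python
import hashlib
import secrets
import time

# Hypothetical sketch of the 'simple privacy' model: no accounts, no identity,
# no relationship graph. The service holds only a token, a password hash,
# the content, and an expiry time.

class SimpleShare:
    def __init__(self):
        self.items = {}  # token -> (password_hash, data, expires_at)

    def share(self, data, ttl_seconds):
        token = secrets.token_urlsafe(16)     # un-guessable location
        password = secrets.token_urlsafe(8)   # sent to the recipient out of band
        pw_hash = hashlib.sha256(password.encode()).hexdigest()
        self.items[token] = (pw_hash, data, time.time() + ttl_seconds)
        return token, password

    def read(self, token, password):
        pw_hash, data, expires_at = self.items[token]
        if time.time() > expires_at:
            del self.items[token]             # content does not last 'forever'
            raise KeyError("expired")
        if hashlib.sha256(password.encode()).hexdigest() != pw_hash:
            raise PermissionError("wrong password")
        return data

    def revoke(self, token):
        self.items.pop(token, None)           # the owner revokes access at will
```

Note the design choice the text argues for: the token and password are transmitted on a different channel from the content itself, so an observer of either channel alone, like the stranger in the café, lacks the context to recover anything.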
It is likely that this model of private sharing will prevail again, as it has in the past, and that this period will be seen as a momentary deviation from the historical norm, in which privacy is simpler and easier to achieve than publicity.