Cyber-libertarianism vs. Rousseau’s Social Contract in cyberspace

The cyber-libertarian belief that individual citizens can autonomously and meaningfully protect their civil rights in cyberspace is mistaken, and it is a huge drain on the scarce volunteer resources of digital civil rights activism.

In fact, even when such an individual is supported by large informal online “communities of trust”, it is impossible to avoid the inherent existence of formalized or non-formalized “trusted third parties” that have the power to radically influence that individual’s ability to obtain, or not obtain, meaningful protection of their civil freedoms against malicious or accidental critical vulnerabilities in the design, fabrication and assembly of their critical hardware.

In his 1762 “The Social Contract”, Rousseau wrote:

“Find a form of association that will bring the whole common force to bear on defending and protecting each associate’s person and goods, doing this in such a way that each of them, while uniting himself with all, still obeys only himself and remains as free as before. There’s the basic problem that is solved by the social contract.”

Fast forward 250 years, and half of our average waking time is spent in cyberspace, where virtually NO citizen has access to IT devices or services that provide meaningful assurance that their privacy, identity and security are not completely and continuously compromised, at very low marginal cost, by even mid-level threats.

In fact, the state does not require adequate protection of citizens’ IT – as it does for nuclear weapons, airplanes or housing standards – nor is such protection offered by companies or traditional social organizations. Citizens are left to protect themselves on their own.

In cyberspace, would citizens be better able to protect themselves alone or through adequate joint associations? Should we leave users to protect themselves, or is there a need for some form of cyberspace social contract? Would delegating part of one’s control over one’s computing to jointly-managed organizations produce more or less freedom overall?

Rousseau went on to write:

“Each man in giving himself to everyone gives himself to no-one, and the right over himself that the others get is matched by the right that he gets over each of them. So he gains as much as he loses, and also gains extra force for the preservation of what he has.”

The current mainstream answer is that we can and should do it alone. Cyber-libertarianism has prevailed almost completely among activists and IT experts worldwide dedicated to freedom, democracy and human rights in and through cyberspace (arguably because of the anarcho-libertarian geek political culture of the US West Coast, especially the Northwest).

But meaningful protection is simply impossible for an individual to achieve – even one supported by informal digital communities of trust, shared values and joint action, more or less hidden in cyberspace.

In fact, meaningful assurance of one’s computing device requires meaningful trustworthiness of the oversight processes for the fabrication and assembly of the critical hardware components that can completely compromise such a device. Therefore, even a pure P2P solution still depends on those in-person fabrication and assembly processes. Until a user can 3D print a device in their basement, there will be a need for such a complex, geolocated organizational process – one that the NSA and others can surreptitiously compromise, or outlaw.

The necessity of such organizational oversight processes can be inferred from this 3-minute video excerpt, in which Bruce Schneier clearly explains why we must assume CPUs are untrustworthy, and why we may therefore need to develop intrinsically trustworthy organizational processes, similar to those that guarantee the integrity of election ballot boxes (as we at UVST apply them to the CivicRoom and the CivicFab).

In fact, after the 1990s popularisation of high-grade software encryption and the failure of the Clipper Chip, the NSA and similar agencies were at risk of losing their legally sanctioned and constitutional authority to intercept, search and seize. They therefore had the excuse (and the reason) to break all end-points at birth, surreptitiously or not, all the way down to assembly or to the CPU or SoC foundry.

They succeeded wildly and, even more importantly, succeeded in letting most criminals and dissenters believe that some crypto software or device was safe enough for sharing critical information. Recent tight-lipped Snowden revelations about NSA intrusion into Korean and Chinese IT companies show that things have not changed since 1969, when most governments of the world used Swiss Crypto AG equipment believing it secure, while the NSA undetectably spied on them.

Therefore, we must have some form of social contract to have any chance of gaining and retaining any freedom in cyberspace.

The great news is that such social contracts – and the related socio-technical systems – can be enacted among a relatively small number of individuals who share values, aims and trust rather than a territory, and they can be changed at will by the user, enabling much more decentralized and resilient forms of democratic association.

Since such social contracts must be in place for each such community of trust in order to handle the oversight processes, we may as well extend them, in (redundant pairs of) democratic countries, to provide in-person, citizen-jury-controlled procedures that allow constitutional – no more, no less – access for intercept, search and seizure. This would discourage use by criminals and avoid giving the state a reason and an excuse to outlaw the system or surreptitiously break it.

A sort of social contract for cyberspace was enacted in 2004 by the founders of the Debian GNU/Linux operating system, through the Debian Social Contract. It eventually became a huge adoption success: it produced the world-leading free software OS and originated many of the technical leaders behind the leading free software privacy tools. But ultimately it did not deliver trustworthy computing, even to its own developers, no matter how much convenience and user-friendliness were sacrificed.

In addition to poor technical architecture choices – such as the belief that huge software stacks could be made adequately secure with limited resources – what ultimately caused the failure was the fact that the contract was for the users but not by the users: users were not substantially involved in its governance. For this reason, its priorities were those of geek developers – the freedom to hack around and to share, even through barely functioning code – rather than freedom from abuse of core civil rights, pursued through extreme engineering and auditing intensity relative to resources, extreme minimization, a trustworthy critical hardware life-cycle, and compartmentation, in order to deliver minimal functionality but meaningful assurance that your device is your instrument and not someone else’s.

User Verified Social Telematics, recently renamed the Trustless Computing Association, proposes to extend both the organizational and the technical state of the art to enable effective, autonomous cyberspace social contracts of the users, by the users and for the users.