Blind Signaling and Response presentation posted

Online services mine personal user data to monetize their services. That’s the business model of “free” services. Even if the mining is consensual, policies and promises cannot guarantee privacy. Privacy succumbs to error, administrative malfeasance, hackers, malware and overreaching governments. Is there a technical solution? One that supports monetized data mining and manipulation, but only under predefined conditions, rather than relying on policies and promises?

Philip Raymond has spent the past four years searching for a privacy Holy Grail: the ability to facilitate data mining and manipulation in a way that protects user identity, restricts use to predefined purposes, and insulates results from other uses, even within the service that gathers and manipulates the data.

Prior to this week, there was scant public material on the Blind Signaling and Response mechanism. A PowerPoint overview was accessible only to students at a few universities and to the French mathematician who is donating resources to the project.

This week, Université de Montréal posted a live video presentation that steps through the BSR PowerPoint slides. It was filmed at a computer-privacy workshop hosted by the university’s math and cryptography departments. The master of ceremonies, Gilles Brassard, is recognized, along with his colleague Charles Bennett, as a co-inventor of quantum cryptography. [Brief History of QC]

Blind Signaling and Response, by Philip Raymond…

I am often asked about the algorithm or technical trick that enables data to be decrypted or manipulated only if the intent behind its use is pure. That’s the whole point here, isn’t it? We claim that a system can be devised that restricts the interpretation and use of personal data (and even the identities of the individual users who generate it), based on the intended use.

The cover pulls back near the end of the video. Unfortunately, I was rushed through the key PowerPoint slides because of poor timing, audience questions and a lack of discipline. But I will present my theories directly to your screen if you are involved in the custodial privacy of user data for any online service (Google, Yahoo, Bing, etc.), ISP, upstream provider or Internet “fabric” service (for example, Akamai).

How it Works

The magic draws upon (and forms an offshoot of) Trusted Execution Technology [TXT], a means of attestation and authentication, closely related to security devices called Trusted Platform Modules. In this case, it is the purpose of execution that must be authenticated before data can be interpreted, correlated with users or manipulated.
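To make the attestation idea concrete, here is a minimal Python sketch of my own. It is not the BSR implementation, and real TXT/TPM sealing happens in hardware, but it shows the gate: a data key is derived from a “measurement” (a hash of the approved process code), so the key can be recovered only while the code touching the data is exactly the code that was approved. The routine names and the ad-matching snippet are hypothetical.

```python
# Illustrative sketch only; not the presenter's code and not the TXT API.
# A data key is "sealed" to a measurement (hash) of the approved process code,
# so the key is released only when the code that will touch the data matches
# the code that was approved. Hardware sealing does this inside a TPM.

import hashlib
import hmac
import secrets

MASTER_SECRET = secrets.token_bytes(32)   # stands in for a hardware-protected root secret

def measure(process_code: bytes) -> bytes:
    """Measurement = cryptographic hash of the code that will handle user data."""
    return hashlib.sha256(process_code).digest()

def seal_key(approved_measurement: bytes) -> bytes:
    """Derive a data key bound to one specific, approved measurement."""
    return hmac.new(MASTER_SECRET, approved_measurement, hashlib.sha256).digest()

def unseal_key(current_code: bytes, approved_measurement: bytes):
    """Release the key only if the running code still matches what was approved."""
    if hmac.compare_digest(measure(current_code), approved_measurement):
        return seal_key(approved_measurement)
    return None   # altered code gets nothing; the data stays opaque

# Hypothetical example: an approved ad-matching routine vs. a tampered one.
approved_code = b"def match_ads(profile): ..."
approved = measure(approved_code)
assert unseal_key(approved_code, approved) == seal_key(approved)
assert unseal_key(b"def match_ads_and_exfiltrate(profile): ...", approved) is None
```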

Blind Signaling and Response is a combination of TXT with a multisig voting trust. If engineers implement a change to the processes through which data is manipulated (for example, within an ad-matching algorithm of Google AdWords), the input data decryption keys will no longer work. When a programming change occurs, the process decryption keys must be regenerated by the voting trust, a panel of experts in different countries. They can be the same engineers who work on the project, and of course they work under an employer NDA. But they have a contractual and ethical imperative to the users. (In fact, they are elected by users.) Additionally, their collective vote is beyond the reach of any government. This results in some very interesting dynamics…
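For intuition, the voting-trust gate might look like the following few lines of Python. This is a simplified illustration of my own, not the actual protocol: each member “approves” a new process measurement, and a working decryption key is re-derived only when a quorum of approvals is present. A real deployment would use a genuine threshold scheme (Shamir secret sharing or multisignatures) so that no single party, including the coordinator, ever holds the member secrets or the master secret; the member names, threshold and code snippet below are hypothetical.

```python
# Simplified stand-in for the voting trust; not the BSR protocol itself.
# A change to the data-handling process yields a new measurement, which must be
# approved by at least THRESHOLD members before a working key is re-issued.
# Here each "approval" is just an HMAC by a member over the new measurement.

import hashlib
import hmac
import secrets

THRESHOLD = 3                                  # e.g. a 3-of-5 quorum must approve
members = {name: secrets.token_bytes(32)       # each trust member holds a private secret
           for name in ["alice", "bob", "chen", "dana", "eva"]}
MASTER_SECRET = secrets.token_bytes(32)        # root secret used to derive process keys

def approve(member_name: str, new_measurement: bytes):
    """A trust member signs off on a proposed change to the data-handling process."""
    tag = hmac.new(members[member_name], new_measurement, hashlib.sha256).digest()
    return member_name, tag

def reissue_key(new_measurement: bytes, approvals):
    """Regenerate the process decryption key only if a quorum of members approved."""
    valid = sum(
        1 for name, tag in approvals
        if hmac.compare_digest(
            tag, hmac.new(members[name], new_measurement, hashlib.sha256).digest())
    )
    if valid < THRESHOLD:
        return None                            # the trust withholds the key
    return hmac.new(MASTER_SECRET, new_measurement, hashlib.sha256).digest()

# Hypothetical example: a revised ad-matching routine is submitted for approval.
new_code = b"def match_ads_v2(profile): ..."
m = hashlib.sha256(new_code).digest()
two_votes = [approve("alice", m), approve("bob", m)]
three_votes = two_votes + [approve("dana", m)]
assert reissue_key(m, two_votes) is None       # too few approvals: the key stays withheld
assert reissue_key(m, three_votes) is not None # quorum reached: the new key is issued
```

The design point is that withholding approval is a passive act: if a prescribed fraction of the trust simply declines to vote, no new key exists and the altered process cannot interpret user data.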

  1. The TXT framework gives the voting trust the power to block process alteration. The trust can authenticate a rotating decryption key when changes to an underlying process are submitted for final approval. But if a prescribed fraction of members believes that user data is at risk of disclosure or manipulation in conflict with the EULA, the privacy statement (and with the expectations of all users), they can withhold the keys needed for in-process decryption. Because proposed changes may contain features and code that are proprietary to the custodian, members of the voting trust are bound by non-disclosure—but their vote and their ethical imperative is to the end user.
  2. Blind Signaling and Response does not interfere with the massively successful Google business model. Google continues to rake in revenue by serving up relevant screen real estate to users, and by whatever else it does to match users with markets. Yet BSR yields two important benefits:
  • a) It thwarts hackers, internal spies and carelessness, and it completely undermines the process of government subpoenas, court orders and National Security Letters. After all, the data is meaningless even to in-house engineers. It is meaningful only when it is being used in the way the end users were promised.
  • b) Such a baked-in methodology can be demonstrated and proven. Doing so can dramatically improve users’ perception of, and trust in, an online service, especially a large collection of “free” services that amasses personal data on interests, behavior and personal activities. When user trust is strengthened, users are not only more likely to use the services, they are less likely to thwart free services with VPNs, mixers or other anonymizers.

Incidentally, the idea to merge a TXT mechanism with a human factor (a geographically distributed voting trust accountable to end users) was first suggested by Steven Sprague, just hours before my presentation in the video above; I had been working on a very different method of achieving blind signaling. In addition to being insightful and lightning quick to absorb, process and advise, Steven is a Trusted Platform expert, a director of Wave Systems and CEO of Rivetz. Steven and I were classmates at Cornell University, but we had never met nor heard of each other until our recent affiliation as advisers to The Cryptocurrency Standards Association.

To learn more about Blind Signaling and Response—or to help with the project—use the contact link at the top of this page. Let me know if you watched the Montreal video.

Disclosure: The inventor/presenter publishes this Wild Duck blog under the pen name, “Ellery”.
