Selfish Ledger: Google’s mass sociology experiment

Check out the internal Google film, “The Selfish Ledger”. It probably was never meant to slip onto a public web server, and so I have embedded a backup copy below. Ping me if it disappears, and I will locate a permanent URL.

This 8½ minute video is a lot deeper—and possibly more insidious—than it appears. Nick Foster may be the Anti-Christ, or perhaps the most brilliant sociologist of modern times. It depends on your vantage point, and on your belief in the potential of user controls and cat-in-bag containment.

He talks of a species propelling itself toward “desirable goals” by cataloging, data mining, and analyzing the past behavior of peers and ancestors—and then using that data to improve each user’s future, and perhaps even the future of their descendants. But is he referring to goals shared across cultures, sexes and incomes? Who controls the algorithms and the goal filters?! Is Google the judge, arbiter and God?

Consider these quotes from the video. Do they disturb you? The last one sends a chill down my spine. But I may be overreacting to what is simply an unexplored frontier, the next generation of AI. I cannot readily determine whether it ushers in an era of good or bad:

  • “Behavioral sequencing” (a phrase used throughout the video)
  • Viewing human behavior through a Lamarckian lens
  • An individual is just a carrier for the gene. The gene seeks to improve itself and not its host
  • And [at 7:25]: “The mass multigenerational examination of actions and results could introduce a model of behavioral sequencing.”

There’s that odd term again: behavioral sequencing. It suggests that we are mice and that Google can help us to act in unison toward society’s ideal goals.

Today, Fortune Magazine described it this way: “Total and absolute data collection could be used to shape the decisions you make … The ledger would essentially collect everything there is to know about you, your friends, your family, and everything else. It would then try to move you in one direction or another for your or society’s apparent benefit.”

These statements could apply just as easily to the NSA as they do to Google. At least we are entering into a bargain with Google: we hand them data, and they hand us numerous benefits (the same benefits that many users often overlook). Yet, clearly, this is heavy duty stuff—especially for the company that knows everything about everyone. Watch it a second time. Think carefully about the power that Google wields.

Don’t get me wrong. I may be in the minority, but I generally trust Google. I recognize that I am raw material and not a client. I accept the tradeoff that I make when I use Gmail, search the web, navigate to a destination or share documents. I benefit from this bargain as Google matches my behavior with improved filtering of the marketing directed at me.

But, in the back of my mind, I hope for the day that Google implements Blind Signaling and Response, so that my data can only be used in ways that were disclosed to me—and that strengthen and defend that bargain, without subjecting my behavior, relationships and predilections to hacking, misuse, or accidental disclosure.

Credit for snagging this video: Vlad Savov @ TheVerge

Blind Signaling and Response presentation posted

Online services mine personal user data to monetize those services. That’s the business model of “free” services. But even if the mining is consensual, policies and promises cannot guarantee privacy. Privacy succumbs to error, administrative malfeasance, hackers, malware and overreaching governments. Is there a technical solution? One that supports monetized data mining and manipulation, but only under predefined conditions, enforced by architecture rather than by policies and promises?

Philip Raymond has spent the past 4 years searching for a privacy Holy Grail: The ability to facilitate data mining and manipulation in a way that protects user identity, restricts use to predefined purposes, and insulates results from other uses, even within the service that gathers and manipulates the data.

Prior to this week, there was scant public material on the Blind Signaling mechanism. A PowerPoint overview was accessible only to students at a few universities and to the French mathematician who is donating resources to the project.

This week, Université de Montréal posted a live video presentation that steps through the BSR PowerPoint slides. It was filmed at a computer privacy workshop hosted by the university’s mathematics and cryptography departments. The master of ceremonies, Gilles Brassard, is recognized as a co-inventor of quantum cryptography, along with his colleague Charles Bennett. [Brief History of QC]

Blind Signaling and Response, by Philip Raymond…

I am often asked about the algorithm or technical trick that enables data to be decrypted or manipulated—but only if the user’s intent is pure. That’s the whole point here, isn’t it? We claim that a system can be devised that restricts the interpretation and use of personal data (and even the identities of the individual users who generate it), based on the intended use.

The cover pulls back near the end of the video. Unfortunately, I was rushed through key PowerPoint slides because of poor timing, audience questions, and a lack of discipline. But I will gladly present my theories directly to your screen if you are involved in the custodial privacy of user data for any online service (Google, Yahoo, Bing, etc.), ISP, upstream provider, or Internet “fabric” service (for example, Akamai).

How it Works

The magic draws upon (and forms an offshoot of) Trusted Execution Technology [TXT], a means of attestation and authentication, closely related to security devices called Trusted Platform Modules. In this case, it is the purpose of execution that must be authenticated before data can be interpreted, correlated with users or manipulated.
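Here is a minimal sketch of that sealing principle, written in Python for illustration. It is not Google’s code and not the finished BSR mechanism; the names (MASTER_SECRET, measure, derive_data_key) and the key-derivation scheme are my own assumptions. The point is only that a data key derived from a measurement of the approved process stops working the moment the code changes:

    # A toy illustration (not Google's code): seal a data key to a
    # measurement of the process that will touch the data, in the spirit
    # of TXT/TPM sealed storage. MASTER_SECRET stands in for a secret
    # that never leaves the trusted hardware.
    import hashlib
    import hmac

    MASTER_SECRET = b"held inside the trusted module; never exported"

    def measure(process_code: bytes) -> bytes:
        # A TXT-style "measurement" is a cryptographic hash of the code.
        return hashlib.sha256(process_code).digest()

    def derive_data_key(process_code: bytes) -> bytes:
        # The decryption key is bound to the measurement, so any change
        # to the process yields a different (useless) key.
        return hmac.new(MASTER_SECRET, measure(process_code), hashlib.sha256).digest()

    approved = b"def match_ads(profile): ..."      # the vetted process
    tampered = b"def match_and_log(profile): ..."  # an unvetted change
    assert derive_data_key(approved) != derive_data_key(tampered)

In a real TXT deployment, the measurement lives in platform configuration registers and the sealing is done in hardware; the sketch only shows why altered code cannot reproduce the key.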

Blind Signaling and Response is a combination of TXT with a multisig voting trust. If engineers implement a change to the processes through which data is manipulated (for example, within the ad-matching algorithm of Google AdWords), the input data decryption keys will no longer work. When a programming change occurs, the process decryption keys must be regenerated by the voting trust, a panel of experts in different countries. They can be the same engineers who work on the project, and of course they work under an employer NDA. But they have a contractual and ethical imperative to the users. (In fact, they are elected by users.) Additionally, their vote is—collectively—beyond the reach of any government. This results in some very interesting dynamics…

  1. The TXT framework gives the voting trust the power to block process alteration. The trust can authenticate a rotating decryption key when changes to an underlying process are submitted for final approval. But if a prescribed fraction of members believes that user data is at risk of disclosure, or of manipulation in conflict with the EULA, the privacy statement (and the expectations of all users), they can withhold the keys needed for in-process decryption. (A toy sketch of this quorum gate follows the list.) Because proposed changes may contain features and code that are proprietary to the custodian, members of the voting trust are bound by non-disclosure—but their vote and their ethical imperative are to the end user.
  2. Blind Signaling and Response does not interfere with the massively successful Google business model. Google continues to rake in revenue for serving up relevant screen real estate to users, and whatever else it does to match users with markets. Yet BSR yields two important benefits:
     a) It thwarts hackers, internal spies and carelessness, and it completely undermines the process of government subpoenas, court orders and National Security Letters. After all, the data is meaningless even to in-house engineers. It is meaningful only when it is being used in the way the end users were promised.
     b) Such a baked-in process methodology can be demonstrably proved. Doing so can dramatically improve user perception of, and trust in, an online service, especially a large collection of “free” services that amasses personal data on interests, behavior and personal activities. When user trust is strengthened, users are not only more likely to use the services, they are less likely to thwart them via VPNs, mixers or other anonymizers.
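As promised above, here is a toy sketch of the quorum gate, again in Python with hypothetical names (QUORUM, regenerate_process_key). A production system would use genuine threshold cryptography (for example, Shamir secret sharing) so that no single party ever holds the whole key; a simple vote count is enough to show the dynamic:

    # Toy illustration of the voting trust: a fresh process key is issued
    # only when a quorum of trustees approves the proposed code change.
    import hashlib
    from typing import Dict, Optional

    QUORUM = 3  # e.g., 3 of 5 trustees, spread across jurisdictions

    def regenerate_process_key(new_code: bytes,
                               votes: Dict[str, bool]) -> Optional[bytes]:
        approvals = sum(1 for approved in votes.values() if approved)
        if approvals < QUORUM:
            return None  # keys withheld: the altered process cannot read user data
        # Key derivation here is illustrative; a real system derives shares.
        return hashlib.sha256(b"epoch-key|" + new_code).digest()

    votes = {"trustee_ca": True, "trustee_ch": True, "trustee_jp": False,
             "trustee_br": True, "trustee_in": False}
    key = regenerate_process_key(b"revised ad-matching process", votes)
    assert key is not None  # three approvals meet the 3-of-5 quorum

Withholding approval is the trustees’ only power, and it is enough: without a quorum, the altered process simply cannot decrypt its inputs.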

Incidentally, the idea to merge a TXT mechanism with a human factor (a geographically distributed voting trust accountable to end users) was first suggested by Steven Sprague, just hours before my presentation in the above video; I had been working on a very different method to achieve blind signaling. In addition to being insightful and lightning quick to absorb, process and advise, Steven is a Trusted Platform expert, a director of Wave Systems and the CEO of Rivetz. Steven and I were classmates at Cornell University, but we had never met nor heard of each other until our recent affiliation as advisers to The Cryptocurrency Standards Association.

To learn more about Blind Signaling and Response—or to help with the project—use the contact link at the top of this page. Let me know if you watched the Montreal video.

Disclosure: The inventor/presenter publishes this Wild Duck blog under the pen name, “Ellery”.

Google switches Privacy honchos (Opportunity knocks)

After three years on the job, Google’s first-ever Director of Privacy is stepping down. Alma Whitten rode out a tumultuous period that saw several high-profile privacy snafus, not the least of which has become known as the WiFi drive-by data grab.

Changing-of-the-guard in the office of chief privacy honcho presents a rare opportunity for Google. One wonders if Lawrence You will seize the moment…

Google has a privacy option that could propel them onto the moral high ground. A nascent science offers a way for Mr. You to radically demonstrate indisputable proof of respect for users. Unlike other potential announcements, policies or technologies, this one protects user privacy completely—while continuing to profitably match users with marketing partners. In fact, it won’t interfere with revenue from any service, including Search, Docs, and all aspects of Android and Chrome.

Lawrence You steps in as Privacy Director

Lawrence You: Reason to keep smiling.

What could possibly anonymize individual user data while preserving individual benefits? I refer to Blind Signaling and Response. It is new enough that no major services incorporate the technique. That’s not surprising, because the math is still being worked out with help from a few universities. But with the resources and clout of the Internet juggernaut, Google needn’t wait until upstarts incorporate provable privacy and respect into every packet of data that flies across the ether.

What is Blind Signaling and Response? You’re Google! Google it and go to the source. You once brought the inventor to Mountain View. My 2¢: Get the project in house and grow it like a weed. When PMs & directors like Brad Bender, Anne Toth, Stephan Somogyi and Andrew Swerdlow get religion, the tailwind will grease a path toward rollout—and well-deserved bragging rights.

A bit of irony: VentureBeat says that Whitten is leaving the “hardest job in the world” and that Lawrence You will lose his smile as he takes the reins. Nonsense! With a technical solution to privacy, the world’s hardest job will transform into one of education and taking the credit. Ultimately, it will be the prestige job that commands respect.

Perhaps just as important, Blind Signaling and Response will gut the Bing Scroogled campaign like a stake through the heart. With Google pioneering user respect, the Scroogled campaign will turn from clever FUD into a colossal waste of cash.

Disclosure: Ellery Davies is very keen on the potential for BSR and wants very much to pull his favorite service provider along for the ride.

Will Google “Do No Evil”?

Google captures and keeps a vast amount of personal information about its users. What do they do with all that data? Despite some very persistent misconceptions, the answer is “Nothing bad”. But they could do a much better job ensuring that no one can ever do anything bad with that data—ever. Here is a rather simple but accurate description of what they do with what is gleaned from searches, email, browsing, documents, travel, photos, and more than 3 dozen other ways that they learn about you:

  • Increase the personal relevance of advertising as you surf the web
  • Earn advertising dollars–not because they sell information about you–but
    because they use that data to match and direct relevant traffic toward you

These aren’t bad things, even to a privacy zealot. With or without Google, we all see advertising wherever we surf. Google is the reason that so many of the ads appeal to our individual interests.

But what about all that personal data? Is it safe on Google’s servers? Can they be trusted? More importantly, can it someday be misused in ways that even Google had not intended?

I value privacy above everything else. And I have always detested marketing, especially the unsolicited variety. I don’t need unsolicited ‘solutions’ knocking on my door or popping up as I surf the web. When I have needs, I will research my own solutions—thank you very much.

It took me years to come to terms with this apparent oxymoron, but the personalization brought about by these information-exchange bargains is actually a very good deal for all parties concerned, and if handled properly, it needn’t risk privacy at all! In fact, the things that Google does with our personal history and predilections really benefit us, but…

This is a pro-Google posting. Well, it’s ‘pro-Google’ if they “do no evil” (Yes—it’s the Google mantra!). First the good news: Google can thwart evil by adding a fortress of privacy around the vast corpus of personal data that they collect and process without weakening user services or the value exchange with their marketing partners. The not-so-good news is that I have urged Google to do this for over two years and so far, they have failed to act. What they need is a little urging from users and marketing partners. Doing no evil benefits everyone and sets an industry precedent that will permeate online businesses everywhere.

The CBS prime time television series, Person of Interest, pairs a freelance ‘James Bond’ with a computer geek. The geek, Mr. Finch, is the ultimate privacy hack. He correlates all manner of disparate data in seconds, including parking lot cameras, government records, high school yearbook photos and even the Facebook pages of third parties.

Mr. Finch & Eric Schmidt: Separated at birth?

It’s an eerie coincidence that Google Chairman, Eric Schmidt, looks like Mr. Finch. After all, they both have the same job! They find a gold mine of actionable data in the personal dealings of everyone.

Viewers accept the TV character. After all, Finch is fictional, he is one of the good guys, and his snooping ability (especially the piecing together of far-flung data) is probably an exaggeration of reality. Right?!

Of course, Eric Schmidt & Google CEO Larry Page are not fictional. They run the largest data gathering engine on earth. I may be in the minority. I believe that Google is “one of the good guys”. But let’s first explore the last assumption about Mr. Finch: Can any organization correlate and “mine” meaningful data from a wholesale sweep of a massive eavesdropping machine and somehow piece together a reasonable profile of your interests, behavior, purchasing history and proclivities? Not only are there organizations that do this today, but many of them act with our explicit consent and with a disclosed value exchange for all that personal data.

Data gathering organizations fall into three categories, which I classify based on the exchange of value with web surfers and, more importantly, whether the user is even aware of their role in collecting data. In this classification, Google has moved from the 2nd category to the first, and this is a good thing:

  1. Organizations that you are aware of–at least peripherally–and for which there is a value exchange (preferably, one that is disclosed). Google comes to mind, of course. Another organization with informed access to your online behavior is your internet service provider. If they wanted to compile a dossier of your interests, market your web surfing history to others, or comply with 3rd party demands to review your activities, it would be trivial to do so.
  2. Organizations with massive access to personal and individualized data, but which manage to “fly beneath the radar”. Example: Akamai Technologies operates a global network of servers that accelerates the web by caching pages close to users and optimizing the route of page requests. They are contracted by almost any company with a significant online presence. It’s safe to say that their servers and routers are inserted into almost every click of your keyboard and are massively distributed throughout the world. Although Akamai’s customer relationship is not with end users, they provide an indirect service by speeding up the web experience. But because Internet users are not actively engaged with them (and are typically unaware of their role in caching data across the Internet), there are few checks on what they do with the click history of users, with whom they share data, and whether (or how) individualized data is retained, anonymized or marketed.
  3. National governments. There is almost never disclosure or a personal value exchange. Most often, the activity involves compulsory assistance from organizations that are forbidden from disclosing the privacy breach or their own role in acts of domestic spying.

The NSA is preparing to massively vacuum data from everyone, everywhere, at all times.

The US is preparing to spy on everyone, everywhere, at all times. The massive & intrusive project stuns scientists involved.

I have written about domestic spying before. In the US, it has become alarmingly broad, arbitrary and covert. The über-secretive NSA is now building the world’s biggest data gathering site. It will gulp down everything about everyone. The misguided justification of their minions is alternately “anti-terrorism” or an even more evasive “9/11”.

Regarding category #2, I have never had reason to suspect Akamai or Verizon of unfair or unscrupulous data mining. (As with Google, these companies could gain a serious ethical and market advantage by taking heed of today’s column.) But today, we focus on data gathering organizations in category #1—the ones with which we have a relationship and with whom we voluntarily share personal data.

Google is at the heart of most internet searches, and they are partnered with practically every major organization on earth. Forty-eight free services contain code that many malware labs consider to be a stealth payload. These doohickeys give Google access to a mountain of data regarding clicks, searches, visitors, purchases, and just about anything else that makes a user tick.

It’s not just searching the web that phones home. Think of Google’s 48 services as a marketer’s bonanza. Browser plug-ins phone home with every click and build a profile of user behavior, location and idiosyncrasies. Google Analytics, a web traffic reporting tool used by a great many web sites, reveals a mountain of data about both the web site and every single visitor. (Analytics is market-speak for assigning identity or demographics to web visits.) Don’t forget Gmail, Navigation, Picasa, Drive, Google Docs, Google+, Translate, and three dozen other projects that collect, compare and analyze user data. And what about Google’s project to scan everything that has ever been written? Do you suppose that Google knows who views these documents, and can correlate that with an astounding number of additional facts? You can bet Grandma Estelle’s cherry pie that they do!

How many of us ever wonder why all of these services are free to internet users everywhere? That’s an awful lot of free service! One might think that the company is very generous, very foolish, or very unprofitable. One would be wrong on all counts!

Google has mastered the art of marketing your interests, income stats, lifestyle, habits, and even your idiosyncrasies. Hell, they wrote the book on it!

But with great access to personal intelligence comes great responsibility. Does Google go the extra mile to protect user data from off-label use? Do they really care? Is it even reasonable to expect privacy when the bargain calls for data sharing with market interests?

At the end of 2009, Google Chairman Eric Schmidt made a major gaffe in a televised interview on CNBC. In fact, I was so convinced that his statement was toxic that I predicted a grave and swift consumer backlash. Referring to the billions of individuals who use the Google search engine, anchor Maria Bartiromo asked Schmidt why users enter their most private thoughts and fantasies. She wondered if they are aware of Google’s role in correlating, storing and sharing data, and of its implicit role in identifying users and correlating their identities with their interests.

Schmidt seemed to share Bartiromo’s surprise. He suggested that internet users were naive to trust Google, because their business model is not driven by privacy and because they are subject to oversight by the Patriot Act. He said:

If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place. If you really need that kind of privacy, the reality is that search engines — including Google — do retain this information for some time and it’s important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities.

At the time, I criticized the statements as naive, but I have since become more sanguine. Mr. Schmidt is smarter than I am, and I recognize that he was caught off guard. But clearly, his response had the potential to damage Google’s reputation. Several Google partners jumped ship and realigned with Bing, Microsoft’s newer search engine. Schmidt’s response became a lightning rod–albeit briefly–for both the EFF (Electronic Frontier Foundation) and the CDT (Center for Democracy & Technology). The CDT announced a front-page campaign, Take Back Your Privacy.

But wait…It needn’t be a train wreck! Properly designed, Google can ensure individual privacy, while still meeting the needs of their marketing partners – and having nothing of interest for government snoops, even with a proper subpoena.

I agree with the EFF that these statements undermine Google’s mission. Despite his high position, Schmidt may not fully recognize that Google’s marketing objectives can coexist with an ironclad guarantee of personal privacy – even in the face of the Patriot Act.

Schmidt could have salvaged the gaffe quickly. I urged him to demonstrate, without delay, that he understands and defends user privacy. But I overestimated consumer awareness and expectations of reasonable privacy. Moreover, consumers may feel that the benefits of Google’s various services (email, taste in restaurants, individualized marketing, etc.) inherently trade privacy for productivity.

Regarding my prediction of a damning consumer backlash over the whitewashing of personal privacy, I was off by a few years, but in the end, my warnings will be vindicated. Public awareness of privacy, and especially of internet data sharing and data mining, has increased. Some are wondering if the bargain is worthwhile, while others are learning that data can be anonymized and used in ways that still facilitate user benefits and even the vendor’s marketing needs.

With massive access to personal data and the mechanisms to gather it (often without the knowledge and consent of users) comes massive responsibility. (His interview contradicts that message.) Google must rapidly demonstrate a policy of “default protection” and a very high bar for sharing data. In fact, Google can achieve all its goals while fully protecting individual privacy.

Google’s data gathering and archiving mechanism needs a redesign (it’s not as big a task as it seems): Sharing data and cross-pollination should be virtually impossible – beyond a specified exchange between users and intended marketers. Even this exchange must be internally anonymous, useful only in aggregate, and self-expiring – without recourse for revival. Most importantly, it must be impossible for anyone – even a Google staffer – to make a personal connection between individual identities and search terms, Gmail users, ad clickers, voice searchers or navigating drivers! (Voice search has long been seen as a huge potential advancement in Google’s data gathering system, which only raises the stakes.)
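To make “internally anonymous, useful only in aggregate, and self-expiring” concrete, here is a toy sketch in Python. The class, the salt handling and the TTL policy are my own illustrative assumptions, not anything Google has built:

    # Toy illustration: records are keyed by a salted one-way hash, carry
    # an expiry time, and can only be read back in aggregate.
    import hashlib
    import time
    from collections import Counter

    SALT = b"rotated periodically; never stored beside the data"  # assumption

    class ExpiringAggregateStore:
        def __init__(self, ttl_seconds: float):
            self.ttl = ttl_seconds
            self._rows = []  # (expiry_time, pseudonym, category)

        def record(self, user_id: str, category: str) -> None:
            # One-way pseudonym: no staffer can walk it back to an identity
            # without the separately held, rotating salt.
            pseudonym = hashlib.sha256(SALT + user_id.encode()).hexdigest()
            self._rows.append((time.time() + self.ttl, pseudonym, category))

        def aggregate(self) -> Counter:
            # Expired rows are purged; only aggregate counts ever leave.
            now = time.time()
            self._rows = [r for r in self._rows if r[0] > now]
            return Counter(category for _, _, category in self._rows)

    store = ExpiringAggregateStore(ttl_seconds=30 * 24 * 3600)
    store.record("alice@example.com", "travel")
    store.record("bob@example.com", "travel")
    print(store.aggregate())  # Counter({'travel': 2}); identities never surface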

I modestly suggest that Google create a board-level privacy position and vest its authority in a visible, high-profile individual. (Disclosure: I have made a “ballsy” bid to fill such a position. There are plenty of higher-profile individuals whom I could recommend.)

Schmidt’s statements have echoed for more than two years now. Have they faded at all? If so, it is because Google’s services are genuinely useful and because the public has become somewhat inured to the creeping loss of privacy. But wouldn’t it be marvelous if Google seized the moment and reversed that trend? Wouldn’t it be awesome if someone at Google discovered that protecting privacy needn’t cripple the value of the information that they gather? Google’s market activity is not at odds with protecting their users’ personal data from abuse. What’s more, the solution does not involve legislation or even public trust. There is a better model!

Schmidt’s statements were difficult to contain or spin. As Asa Dotzler of Mozilla (makers of Firefox) wrote in his blog, the Google CEO simply doesn’t understand privacy.

Google is not the only one situated at a data nexus. Other organizations fly below the radar, either because few understand their tools or because of government involvement. For example, Akamai probably has more access to web traffic data than Google. The US government has even more access, because of an intricate web of programs that often force communications companies to plant data-sniffing tools at the junction points of massive international data conduits. We’ve discussed this in other articles, and I certainly don’t advocate that Wild Ducks become privacy zealots and conspiracy alarmists. But the truth is, the zealots have a leg to stand on, and the alarmists are very sane.

Verizon Wireless: Trouble with honesty & fairness

In the market for mobile phones, a span of seven years represents a different era altogether. At least four generations of hardware have come and gone. Seven years ago, there was no iPhone and no Android. Palm was king of PDAs, a class that was still separate from phones and browsers. Feature phones offered Symbian at best. (Who remembers Windows CE?)

Way back in 2004, Verizon crippled Bluetooth in the Motorola v710, its first mobile phone to support the short-range wireless technology. The carrier supported Bluetooth for connecting a headset and for voice dialing, but they blocked Bluetooth from transferring photos and music between a phone and the user’s own PC. More alarmingly, they displayed a Bluetooth logo on the outside of custom Verizon packaging, even though the logo license stipulated that all applicable Bluetooth “profiles” be supported.

(Disclosure: I was a plaintiff in a class action that resulted in free phones for users affected by the deception. I am not a ‘Verizon basher’. I have been a faithful client since early cell phones and I recently defended Verizon’s right to charge for off-device tethering.)

Can you hear me now?

Why would Verizon cripple a popular feature that helps to differentiate and sell equipment? That’s an easy one. It forced users to transfer photos and music over the carrier network rather than exchange files directly with a PC. The carrier sells more minutes or costly data plans.

With the same motive, Verizon restricted feature-phone apps to their Get It Now store, limiting music, games and ringtones to their own pipeline. Heck–why not? It’s their ball park! Users can take their business to other carriers. Right? Well, perhaps—but mobile service is built upon licensed spectrum, a regulated and limited commodity. Although carriers are not a monopoly in the strict sense (there are three or four carriers in populated regions), they are licensed stewards of an effective oligopoly.

Perhaps the longest-lived vestige of Verizon’s stodgy funk (and the most depressing) was their insistence on stripping pre-smartphones of the manufacturer’s GUI and forcing users to navigate a bland set of carrier-centric screens and commands. Often, I would sit next to someone on an international flight who had the same model Motorola, Samsung or Nokia phone. And guess what? His carrier didn’t interfere with fascinating user features. Why did Verizon force their own screens on unsuspecting Americans? It meant that I could not set my phone to vibrate first and then ring with increasing volume over the next few seconds. What a great feature on my Moto i810! But it was stripped from subsequent models, because it wasn’t spec’d by the boys in Verizon’s “retrofit and bastardize” lab.

With the exception of the class action over the Bluetooth features, no legislation was needed to get Verizon to unlock phone features. Eventually, a free-market mechanism forced them to rethink their ivory-tower greed. With AT&T’s market success selling iPhones, Verizon eventually capitulated so that they could become the Android market leader. The new strategy worked for both consumers and Verizon. Even before they began selling iPhones in 2011, Verizon reasserted their position as the carrier of choice and fully justified their cost premium through excellent coverage and quality service.

Hey, Verizon! Can you hear us now?!

But now, the company that I have learned to hate, love, and then curse is at it again! They are about to introduce the Samsung Galaxy Nexus. It is only the second Google-branded Android phone (you can’t get closer to a pure Android experience!). But wait! News flash: they are going to cripple a native Android feature. Just as with the Bluetooth debacle, Verizon claims that it is for the protection and safety of their own users. (Stop me, Mommy! I’m about to access a 3rd-party service!)

Why doesn’t Verizon get it? Why can’t they see the value in being the #1 carrier and base their profit strategy on exceptional build-out and service? Sure, I support their right to offer apps, music, ringtones, photo sharing, navigation, child tracking, mobile television, and even home control. These are great niches that can boost revenue. But remember that you are first and foremost a carrier. Just because you plan to enter one of these markets is no reason to cut off your own users from content and service options.

Think of this issue as your subscribers see it: Cutting off users from Google Wallet because you plan to offer a payment mechanism of your own is no different than a phone service blocking calls to Bank of America because it is tied in with Citibank. If that metaphor doesn’t cut it, how about a simple truth? It’s been nearly three decades since Judge Harold Greene broke up the telecommunications monopoly. Your company is both a legacy and a chief beneficiary of that landmark decision. But success is transient for those who use market penetration to restrict choice. And this time, it won’t require anti-monopoly legislation. The market will push back hard, and share recovery will be slow.

I’m taking my phone and going home!

Android is open. Get it? You have flourished recently, because you chose to embrace an open system that builds on its own popularity. You have contributed to its swift ascent, and likewise, Google and your users who like Android have contributed to your success. Why spit on your users now? What did we do to deserve this?

C’mon Verizon. Stop seizing your ball and threatening to close the ball park. We love you. Get it right for once and stop dicking with us. Our patience is wearing thin!