Complicated tax return? Thank TurboTax

Actually, there are two ways in which TurboTax complicates the process of filing your taxes. One is downright nefarious—and the other, just plain stupid.


♦ The Nefarious

It's one thing for a company that builds a business on tax simplification to make its own product more complex. (That's the topic of the next section, below.) But it's a completely different animal when a company that helps taxpayers navigate an unnecessarily complex tax code launches a fake grassroots letter-writing campaign that covertly lobbies the government to preserve that complexity and the filing fees that come with it.

Read it and weep. It is nothing less than a mob-inspired profit strategy. This is not a faux pas. It is stick-it-in-your-eye, anti-consumer behavior without precedent! Intuit sought to preserve its niche of simplifying your life by secretly pushing the government to keep your taxes complex, so that you need a knight in shining armor to unwind the complexity.

♦ The Stupid & Greedy

This year, TurboTax has effectively blocked prospective customers from figuring out which TurboTax product suits their tax situation. Intuit has stripped TurboTax Deluxe of important filing forms. Their flagship product can no longer be used to electronically file Schedule D, for capital gains and losses; Schedule E, for rental real estate, royalties and distributions from partnerships; Schedule C, for profit and loss from a sole proprietorship business; or Schedule F, for farm income. This year, those who need to file a Schedule D or E must trade up to TurboTax Premier, while a Schedule C or F requires the even more expensive TurboTax Home & Business.

But wait! The highest-end product sells for more than a hundred dollars. It is more suited to corporations, trusts and foundations. Don't think that you need the many features of TurboTax Home & Business? No problem. Now, you can choose from a dozen filing-form combinations. This not-quite à la carte approach virtually ensures that you will have no way of knowing which product to buy until after you prepare your taxes.

That decision can be blamed on simple greed in the boardroom and a phalanx of idiots in the marketing department. WildDucks can infer the rest from my letter to Intuit General Manager, Sasan Goodarzi.


Greetings, Mr. Goodarzi,

I don’t earn a lot, but I dabble in many things: Self employment income, dividends, royalties, a Family Limited Partnership, a trust, rental property, an inherited IRA and deductions carried forward from past partnerships and business ventures.
I have used TurboTax Deluxe + State since 1993. I want very much to continue using that product, even if the cost rises…
  • Earlier this week, I walked into my local OfficeMax and was confronted with your new hyper-bifurcated product lineup. (A stunningly complex array of options with at least 3 times as many products as on your revised web site today!!).
  • I immediately realized that you were asking prospective customers to make a very complex purchase decision. I had no idea which product to buy.
  • I checked the comparison checklist on the product display and discovered that the only way to ensure that I was covering all of my bases would be to purchase your most premium home and business product.

For loyal users of TurboTax Deluxe, the cost jump is from $39 (discount price with 1-state option) to over $120. In my view, this is outrageous.

And so I purchased H&R Block Deluxe 2014 + State for $22.49 after promo discount at Newegg.com.

I considered your last-second compromise offer of a one-time $25 rebate. For me, it seemed insulting. You were asking me to either make a very complex decision (one that I could only evaluate after completing my tax returns), or pay $90 more and apply for a $25 rebate. What kind of offer is that? Not one that I would jump at.

TurboTax was once a premium product in a small field of competitors. Your newest offer is reasonable, but it is simply too late! My advice for next time: Simply raise the price, if you feel that the competitive marketplace can sustain the cost bump. For the past 20 years, the discount retail cost of TT deluxe + state (after coupons and store promo) has drifted downward from about $60 to $39. Sometimes, I can even get it for $29. It is reasonable to adjust the discounted cost upward to $49 or even $59 (equivalent MSRP = $84).

But instead, you chose a complex marketing option that mirrors the current problem with our tax code. To fan the flames, news services are reporting that your company has secretly lobbied to maintain a complex tax code.

You are entitled to a price increase. But you botched the execution. You lost a long-time customer (and one who has purchased TurboTax for employees and partners).

~Ellery Davies
  Formerly, a faithful TurboTax customer

Bitcasa: Headed for the Abyss

I was an early Bitcasa supporter. I jumped on the bandwagon and blew my trumpet loud and far: Bitcasa: Unlimited storage, version history & sync (Feb 8, 2013)

But consider this shockingly short timeline:

• September 2011: Bitcasa co-founder Tony Gauda excites investors with a business model that supports an "Infinite Drive" service.

• Feb 2013: My first year is $49 for "Infinite" storage. Throughout the year, there are numerous bugs affecting uploads and downloads, on both the web and PC clients. Although the beta had ended, I chalked it up to a learning experience.

• Sept 2013: Bitcasa’s visionary co-founder and charismatic CEO, Tony Gauda, is eased out the door. He is reluctant to explain the reason for his sudden departure, even to his fans and email correspondents.

Listen closely to the first 2 minutes of this interview with Mr. Gauda, and tell me if Bitcasa has executed on plan. “The customer with 10 terabytes of video is ½ of 1%. But we love this type of customer… We don’t care how much data you have. We don’t meter it!”

But don't blame Tony. Blame the directors, probably influenced by nearsighted investors. Tony's business model was solid. As often happens with visionary entrepreneurs, he was given the bum's rush. He was shown the exit door practically before he even launched.

• Early 2014: Bitcasa announces a stunning price increase. For newer users, costs increase 1200% (twelve times). Early users like me see a staggering 2400% increase (from $49 per year to $99 per month). Like many users, I am stunned. I am forced to revisit a history of gushing endorsement: Bitcasa bursts its bubble.

• Feb 2014: Bitcasa charges my credit card without authorization. The cost of my plan has risen from $49/year to $99/month. But in a gesture of magnanimity, the company offers to extend my subscription at my "current rate," which they consider to be $99/year (not really my current rate, but I was aware that my actual current rate was a one-time special).

I secure a written promise that my credit card data will not be retained and an explanation of how I can ensure that Bitcasa cannot retain that information. Again, throughout the second subscription year, there are numerous bugs affecting uploads and downloads, on both the web and PC clients. Nothing has improved.

• Nov 2014: The Infinite storage plan is retired (Whaa!?!! We are still in the midst of year #2). Can you imagine how this makes early adopters feel? We were duped into referring other users with incentives and offers related to our Infinite Drive plan.

Bitcasa requires users to migrate data into a new plan with only 5 weeks' notice. But wait—there's more. Let's get personal…

a) Even though I am in the midst of my year, Bitcasa makes another unauthorized charge to my credit card (again, and after promising that my card data had been deleted). The hat-in-hand excuse that I receive from the support staff is ludicrous. These schnooks were fed a line from on high.

b) The migration fails miserably. I am a tiny client. I use only 638 MB, and yet none of my data—whether uploaded or mirrored—can be migrated to the new plan. I have wasted dozens of hours trying the Bitcasa tools and failing to get support. It simply doesn’t work.

c) Perhaps just as alarming, there has never been any progress on the numerous bugs affecting uploads and downloads on both the web and PC clients.

I could go on. But I think that the writing is on the wall. Fair warning. This one is headed for the abyss.

Oye, Bitcasa! Say it ain’t so! Even if you have contempt for your customers (I don’t think that you do), I doubt that you could have intentionally orchestrated a better demonstration of how to spit in the eye of testers, users, investors, and especially anyone giving their credit card to you in good faith.

Past thoughts on Bitcasa: The good, the bad and the ominous.
~Ellery

Keurig brews consumer discontent

Nov 2015 Update: Brewer sales plummet 26%, forcing Keurig to introduce a 2.0 My K-Cup adapter. Owners can use any coffee they wish. See the comment of Nov 8, 2015.

Feb 2015 Update: Keurig CEO, Brian Kelley, called one of our readers to discuss the 2.0 restriction. Scroll to comment of Feb 10, 2015 near the bottom of this page.

Two years ago, as the K-Cup patent expired, the leader in single-serve coffee brewers introduced Vue, a slightly larger coffee pod. If you haven't heard of Vue, you're not alone. The newer single-serve coffee packet never caught fire like Keurig's original K-Cup. Just as with ink jet printers, the new pods and the brewers that accommodate them were Green Mountain's strategy to reassert control of a market that produces revenue and profit from a consumable rather than the appliance that processes it.

According to Keurig, the Vue system was introduced “in order to increase the choices users have in brewing beverages.” Now that Vue has failed to gain traction, it appears that Keurig is reaching out to owners and softening their loss.

This certainly sounds like a benevolent company; one that cares about consumer preferences and protections—right? We'll get to their motives in a minute…

This month, Keurig put pomp and fanfare behind the introduction of Keurig 2.0. (I think that '3.0' would be a more accurate nomenclature, but who can blame them for trying to downplay the marketplace failure of Vue). And so, this week, I became the owner of a new Keurig 2.0 model 400 brewer. (The flagship model 500—or 560 if you purchase at a warehouse club—has a slightly larger water tank, a larger display screen, and the odd addition of a color changing night light).

Did you catch the omission above? I bet you missed it! I said “I became the owner” rather than “the proud owner”. You might think that unwrapping a new, 3-figure appliance with color display, operating system and lots of shiny new parts would leave me enthralled for at least a week, right?

Not really…

You see, the new Keurig brewer accepts both K-Cup and Vue coffee pods. But it also has a built-in camera. The camera spies on owners of the new 2.0 brewers. (Seriously—it really does!). It's not trying to film the marital vows that you renewed on the kitchen floor last night. Keurig 2.0 leaves that to the NSA and Google. Rather, the camera is constantly vigilant against any attempt to use unlicensed coffee.

The camera studies the lid of each coffee pod inserted in the brewer, looking for a taggant, a chemically coded ink on the outer ring of the lid. It won't accept the My K-Cup gizmo that Keurig continues to sell for use with legacy brewers, and it even rejects pre-2014 K-Cups from Green Mountain and its partners. Shocking—because they are fully licensed and are well within the expiration date marked on the package.

For those who own a boatload of Vue pods, the new brewer comes with a comforting statement: “Call us and we will work out something”. Apparently, Keurig will placate owners with a large stash of coffee pods by exchanging them. Gee! That’s great! Just register your products, identify yourself and wait for a package, because your new machine spies on you and will not let you brew your favorite drinks. That’s just ducky.

Was every executive over 25 absent on the day that CEO, Brian Kelley, dreamed up the spy camera? Green Mountain is walking down the path of the early iTunes era. Buy all from Apple or your music won’t play on your phone, your PC or your iPod (the operative word is ‘your’). At least Apple could argue that it was trying to thwart internet piracy.

If you attempt to put a perfectly good coffee pod into a Keurig 2.0 brewer, a message is displayed across a tiny color TV screen:

“Oops! That coffee isn’t compatible with our incredibly high standards! We want you to enjoy the very best experience possible. Besides, you probably wouldn’t enjoy the flavor of coffee from any vendor that refuses to pay us for the privilege of compatibility.”

Seriously! It says something just like that. At least, to anyone who can read between the lines. A satirist couldn't come up with better material for marketing-blunder-of-the-year. And not just a blunder, but one that flips a finger at their customers.

Who would have thunk it? Keurig put DRM into a coffee maker. For cryin’ out loud, it’s a coffee maker! What’s next? Cars that demand Ford-branded gasoline? How about a TV that only displays Sony-licensed content?

As for my new brewer, I have found work-arounds that defeat the Gestapo agent within. Several YouTube mavens describe tricks for keeping Keurig in its place. But make no mistake: It is a pain! I don't relish the idea of taping a forged software license across a camera and changing it whenever a family member wants to brew a different beverage. Nor do I want to search local stores for a licensed K-Cup that is sufficiently close to each beverage that I already own.

Keurig 2.0 brewers look for data hidden in the outer ring

Keurig has turned their brand into the butt of a joke faster than you can say ARccOS. They must be guided by lawyers with no concept of market dynamics. In the blink of an eye, they will become an anachronism. In a few years, the Keurig 2.0 will be a case study in marketing seminars alongside 'New Coke', Andy Grove's slow recall of the Pentium that exhibited math errors, and Ken Olsen's conviction that consumers would never buy 'personal' computers for use at home.

But unlike Coke and Intel, Keurig doesn’t have a 10 billion dollar cushion. Even worse, they have fooled their fans once before. They may not be able to recover from screwing them over with malicious intent and an extended middle finger.

Green Mountain Coffee has a limited time to recover from the Keurig 2.0 fiasco. Here, then, is our humble WildDuck marketing advice:

  • Change the heartless restriction into an on-screen sales pitch. Be a good guy!
  • Accept all the existing K-Cups that your consumers already own. I have dozens.
  • Offer an adapter that allows owners of your new brewers the same privilege of occasionally scooping in the grounds of their favorite store-bought coffee.

And for G-d’s sake, stop spying on your customers! With a downward-facing camera mounted 10 inches above my kitchen counter, I wonder if your next software update will activate a microphone. Get off my back. Please Keurig; respect your customers!


Afterword 2.0

A guest lecturer at Cornell University asked his students to draft a shareholder letter from Green Mountain Coffee. I haven't been a college student in years. But if I were in that class, this would be my letter…

Dear Shareholder,

These are exciting times for your company. As you know, we are introducing a series of Keurig branded coffee brewers that are not quite compatible with both of our previous single-serve coffee pods, the ubiquitous K-Cup and Vue.


Brian Kelley; genius behind cameras in coffee pots. But, hey! it’s for your own good. A safer, more enjoyable beverage experience.

As a former Coca-Cola executive, I know a thing or two about tinkering with a successful brand in an effort to teach consumers what is in their best interest. That’s why we pushed New Coke onto the market back then, and it’s no different with the Keurig 2.0 product launch.

Of course, it is critical that we at Green Mountain Coffee convince consumers that our use of digital rights management is a benevolent and beneficial act—one that protects them from unsafe coffee, electrical failure and night terror. We must avoid any perception of ulterior motive or hidden agenda. Fortunately, consumers have a very poor memory. With clever marketing, they will buy our products with an assurance that they cannot accidentally harm themselves (or their Keurig 2.0 appliance) by brewing inferior coffee.

Of course, we could have used the very same coffee pod detection technology to simply display a message that the K-cup a user has inserted is not licensed, and may not taste as wonderful as coffee that comes from a company that pays us for the privilege of compatibility. But that wouldn’t be sufficient. We are concerned that our customers may be too busy enjoying coffee from 10,000 competing brands to heed our urgent warning.

Brian Kelley, CEO
Green Mountain Coffee

Ellery is editor at AWildDuck and owner of a new Keurig 2.0 brewer

Latency beats speed for most Internet activity

This evening, editors at Quora asked me to suggest network optimization methods to enhance the online experience of Internet gamers. My 5-step reply, below, is good practice for anyone who wants a zippier Internet experience.

Forums across the web stress a high Internet connection speed as the panacea for gaming or a web experience that lacks zing. Sure, speed is important for network backup or streaming HD video (although the bottleneck may lie within the video server, or be caused by a backhaul peering spat or a financial dispute between Netflix and your own ISP). But for everything else, especially a robust web-surfing experience, speed takes a back seat to latency. The frustration you feel when web pages don't pop up instantly after a click is more likely related to latency than throughput.

Speed is the rate at which an open or streaming data connection passes data. It is measured in megabits per second. In 2014, a speed or 'bandwidth' of 30 or 50 Mbps is typical for residential cable or fiber optic service. With their FiOS service, Verizon offers consumers speeds of up to 300 Mbps.

Latency is quite different from speed. It is a measurement of the round-trip delay in getting a single packet from point A to point B and back. It is typically measured in milliseconds. (35 ms is typical of an optimized route, 65 ms is tolerable, and 120 ms yields a frustrating experience.) If you are a gamer or you use VoIP (voice-over-Internet protocol), you should test the latency to various hosting services, with an eye toward observing latency under 50 ms. Otherwise, you will notice a lag in responses coming from the other side of your connection. On phone calls, this is particularly annoying.

Because latency involves two endpoints, measurement entails choosing a remote web server or Internet page. In Windows, latency is measured using a command prompt and the PING or TRACERT commands. (This article is not meant to explain these commands or be a procedural tutorial. Look them up or ask your neighborhood geek).

If you discover very short latency to most sites but much longer latency to a few, then the problem is not within your home or local ISP infrastructure. It is related to the remote site that is part of your test, or to something in the path that is closer to it than to you. But if you find that latency is poor for most of the sites that you choose in your tests, then the problem is very likely with your ISP or even in your own home or business.
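For readers who like to script this sort of check, here is a minimal sketch in Python (standard library only) that approximates round-trip latency by timing a TCP connection to port 443 on several hosts. It is not a true ICMP ping, and the host list is just an example, but it illustrates the diagnostic above: test several destinations and see whether the slowness is local or remote.

```python
import socket
import time

# Example hosts only; substitute your game server, VoIP provider, or other critical nodes.
HOSTS = ["www.google.com", "www.wikipedia.org", "www.amazon.com"]

def tcp_latency_ms(host, port=443, timeout=3.0):
    """Approximate round-trip latency by timing a TCP handshake (not a true ICMP ping)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

for host in HOSTS:
    try:
        ms = tcp_latency_ms(host)
        verdict = "good" if ms < 50 else "tolerable" if ms < 65 else "laggy"
        print(f"{host:22s} {ms:7.1f} ms  ({verdict})")
    except OSError as err:
        print(f"{host:22s} unreachable ({err})")
```

If most hosts report high numbers, look at your own home network or ISP; if only one does, the problem is probably on the far end.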

Here, then, are my suggestions for a great gaming experience—or similarly, a zippier web surfing session. Tips for reducing latency are offered in the footnote to #1…

1. A fast internet connection: 25 Mbps or better should do. 50 is much better if other family members like to watch Netflix or Amazon Prime while you access the Internet.*

2. Try to use a wired connection to your router rather than WiFi. If that's not possible, use the latest technology—an 802.11ac router. (If you really want to burn rubber, check out the Netgear Nighthawk series). Make sure that any switch or router inside your home supports 1 Gbps at each Ethernet port.

3. Discourage others in your home from doing backups, file transfers, Netflix streaming, Skype or VoIP calls. Even if they are not accessing the WAN/uplink, they will likely hog the limited aggregate bandwidth of your switch or router. Even printing can interfere with gaming unless the user has an ad-hoc/p2p connection with the printer. (This is rare).

4. Check your game documentation for any special requirements such as the need for a phase-inverted, biturbo micro-encabulator. **

5. [Advanced]: Learn about the frame buffer feature in your router or switch and study the communications optimization features of your operating system. In some cases, a tool from your ISP can do wonders to optimize some of the esoteric Windows or Mac settings.

* Even more important than a fast Internet connection is short round-trip packet latency. Use a command prompt or diagnostic app to test the ping time (delay) between you and the IP addresses of the gaming server or other critical nodes that you can identify.

If ping times are more than 65 ms, look for a different Internet service or perhaps the problem is within your home… Reduce the number of switches and routers between you and the Internet. With a little fine tuning (for example, experimenting with gaming sites that offer multiple hosting cities), you may get the ping time below 35 ms. That would make a big difference in your gaming experience. It may give you the edge that you need.

** I was kidding in #4. There’s no such thing as a biturbo micro encabulator. But still, you should check the gaming documentation!

Update: Bitcasa bursts its bubble

We first wrote about Bitcasa Infinite Storage back in February, and we amplified our kudos in a brief (but gushing) July update. In fact, lots of folks were impressed. The service model, home-grown technology, smooth-as-silk founder, and jaw-dropping price point made for a very compelling story.

Infinite storage? Well, yes… if you have an infinite bank account! Or at least, if you can accept uncapped cost. Surprise! Bitcasa users have been slapped with a 2000% cost increase. For Wild Ducks, the only thing "infinite" here is the future cost: obscene, or just plain random.

We raved over Bitcasa in two past reviews, because they "get it"! At least they did before losing Tony Gauda, co-founder and prescient CEO. Bitcasa was a model for massive, private and always online storage with an unlimited ceiling and intuitive apps. We've used it ever since—discovering that a venture upstart can kick a*s with the big boys.

I use it with my own PC. Just this week, I expanded the number of folders that I mirror to their cloud servers. I am also relying on some very capable apps. The few glitches seem relatively minor (the Windows agent commandeers the lowest available drive letter, burns CPU cycles when idle, and has difficulty streaming several popular formats). But the company is responsive to these issues and, based on experience, I suspect that they will resolve them.

But, oh no! Here it comes: The cost of Bitcasa has ballooned. Not just any balloon, but a very elastic balloon. In fact, it looks more like a Blimp!

Users who signed up early this year (but after completion of the beta period) paid $49. We were warned that subsequent years would cost $99 (or less, for those who refer users). Now, we find that the subscription rate is changing from $49 or $99 per year to a slightly higher $999 per year.

Whazzit?! Come again?!!

The new pricing is effective immediately for newbies. From what I can tell, existing users may be exempt from the new pricing model—for now—but it's not clear for how long. And if these 'grandfathered' users want access to long-anticipated features, such as a Linux client, they lose their privileged status. Even worse, the new plan limits the number of devices that can share a single data store. I can accept a device limitation when using iTunes. After all, the service streams licensed media under agreement from a publisher. But for the user's personal data? The whole point is to support access from everywhere. I mean, c'mon!


Next year, you'll pony up 12 Benjamins to store data in the cloud. At street price, you could purchase nine 4TB drives (36 terabytes).

To be fair, early users recognized that the cost for subsequent years might be refined, slightly. After all:

  • The world economy might experience rampant inflation
  • The raw costs for storage and bandwidth might suddenly rise
  • Bitcasa may find that a high fraction of users are abusing the system or may actually store tens of terabytes. That would throw off a centralized storage model
  • Bitcasa may need a higher cost network architecture to enhance robustness

Guess what? None of these things happened! So, what would cause Bitcasa to screw over its devotees? Was the model unprofitable or unsustainable? I think not. Bitcasa de-dupes files while simultaneously encrypting user data. The technology is remarkably clever. In fact, with the inevitable addition of a distributed, P2P storage architecture, the infrastructure cost drops by, oh, perhaps 90%. RDDC can be incredibly lucrative.


How does Bitcasa explain away a startling blunder? By claiming that only 2% of users need more than 1TB and almost no one needs more than 5TB. Perhaps. But they overlook three things:

  • The higher price goes into effect at 1TB. Anyone with music, movies or years of photos will eat up several terabytes. At 5TB, users experience sticker shock. Hardly “Infinite storage”, Eh?!
  • Fewer than 2% of users may need more than 1TB today (a claim that is highly improbable). But what about tomorrow? Will those users trust that the cost will track the inherent cost of storage back downhill?
  • If very few people use multiple TB, then Bitcasa leaves very little money on the table by sticking to its motto: "Infinite storage". The iconic phrase conveys a powerful and visceral assurance. It is at the core of the platform's market image and competitive positioning! Changing the rules without the need to do so triggers a vast and negative emotional response from the minions who proselytize on Bitcasa's behalf, and from future users who don't wish to calculate their storage needs.

Why this? Why now?!

We're still enamored with founder Tony Gauda. He is remarkably smart and charming. Incoming CEO Brian Taptich is no slouch either! C'mon, Brian. No one expects you to give away service (even though that is exactly what Google does). But you needn't rain on a parade that your team crafted with brilliance. You have an elegant and profitable model. If you screwed up on implementation, identify the cost overruns. Please fix the problem rather than killing the customer.

Honestly! You can redeem yourself and pull this one out of the fire… but do it quickly. In the absence of dissenting opinions (I searched for quite a while), here is a very typical concurring opinion.

—A loyal fan of the Bitcasa of yore

Amazon Elastic Transcoder: It will mature with time

Most consumer devices, such as PCs or smartphones, can downscale video on the fly as media content streams in from the Internet. But consuming high-definition data on a low-definition device adds expense at both ends, because it consumes far more bandwidth than the device can display. In fact, watching a 1080p video clip on a 360p device wastes as much as 90% of the data.
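A quick back-of-the-envelope check of that figure, as a minimal Python sketch. The frame sizes are the standard 1080p and 360p resolutions; actual savings depend on codec and bitrate, not just pixel count, so treat the result as a rough illustration.

```python
# Compare pixel counts of a 1080p stream and a 360p display.
hd_pixels = 1920 * 1080   # 2,073,600 pixels per frame
sd_pixels = 640 * 360     #   230,400 pixels per frame

wasted = 1 - sd_pixels / hd_pixels
print(f"Share of pixels the 360p device cannot display: {wasted:.0%}")   # ~89%
```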


Transcoding is a job best performed at the service provider (or content source, if they are one and the same). An outgoing process can make intelligent, cost-effective, on-the-fly trade-offs that go beyond individual device resolution.

Smart source transcoders are fully device-aware. They can adjust not only for bandwidth and device resolution, but also for the recipient's container and codec (AVI, MPEG, MKV), color palette, or even black-and-white display. Altogether, a smart transcoder can result in significant cost savings.

Consider, for example, a YouTube enhancement that you may have noticed this month. YouTube began adjusting the default video resolution to the size of the player window within your browser. For at least some devices and browsers, the YouTube server has sufficient information about your initial display resolution to optimize the transcoder and the resulting media stream.

Let's peek into the transcoder decision process offered to content admins. You might think that this is a concern only for big organizations that serve up lots of data. But more often, lately, it intersects with casual consumers who operate media-streaming clouds from a home-based NAS (such as a router-attached PogoPlug), a remote PC, or from an ISP or hosting service.

The Amazon Elastic Transcoder is a component of Amazon's cloud storage and services suite (AWS, EC2, S3; refer to this page for the full set of Amazon cloud services). The cost of transcoding depends on the resolution (e.g. from HD to SD), the service region and the minutes of video content converted. For example, a minute of SD transcoding in the Northeast USA region is about 0.45¢ (just to be clear, that's less than half a penny).
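For content admins who want a feel for how a job is submitted, here is a minimal sketch using the boto3 Python SDK. The pipeline ID, object keys, and preset ID are placeholders rather than values from this article; it assumes you have already created an Elastic Transcoder pipeline with input and output S3 buckets.

```python
import boto3

# Assumes an existing Elastic Transcoder pipeline (input/output buckets plus an IAM role).
transcoder = boto3.client("elastictranscoder", region_name="us-east-1")

response = transcoder.create_job(
    PipelineId="1111111111111-abcde1",            # placeholder pipeline ID
    Input={"Key": "uploads/lecture-1080p.mp4"},   # source object in the pipeline's input bucket
    Outputs=[{
        "Key": "transcoded/lecture-480p.mp4",     # destination object in the output bucket
        "PresetId": "0000000000000-000000",       # placeholder; use a system or custom preset ID
    }],
)
print("Submitted transcoding job:", response["Job"]["Id"])
```

Billing then accrues per minute of output, so at roughly 0.45¢ per SD minute, a 90-minute video costs about 40¢ to convert.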

This morning, Engadget highlighted improvements to the media transcoder, specifically, its ability to transcode audio as well as video. But, for me, the video tutorial seems to point to a deficiency rather than a bragging point…

A Wild Duck Opinion . . .

For organizations and individuals seeking to serve AV in user-demanded formats, the Amazon Elastic Transcoder barely scratches the surface. In my opinion, the video demonstrates a fine set of underlying processes, but from a user perspective (even from the perspective of a content admin), it is a crude and unfriendly beginning. The overall process fails the most important user tests: simple, automatic and transparent.

I envision this changing rapidly to something that requires no thinking by either the content admin or the consumer/user. This is where market evolution must take us: on-the-fly transcoding at the optimum resolution for each user device—as it is demanded. Streams should be scaled even lower for individual users who have elected to conserve bandwidth at their end (using only their device interface). Decisions to prioritize pipelines or retain a cache of transcoded versions (instead of simply creating them anew for each user demand) should never be a burden to the content admin. S/he shouldn't be required to think about these things any more than an elevator needs an operator to decide at which floor it should stop. I don't mind the admin controls, but such forced decisions are evidence of either a startup alpha process or a poorly implemented design. I expect that the feature-rich controls of Amazon Elastic Transcoder fall into the latter category.

Instead, these should be driven by optimization algorithms that weigh current demand (at each resolution or quality) and the cost and availability of storage versus bandwidth. As with the most recent iteration of YouTube, users will ultimately trust that a video watched on a 360p device is not rushing toward their data cap by transmitting at 1080p. (And in fact, their own device will warn them that 90% of incoming data is being discarded, unless they are also saving to memory).
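As a thought experiment only, and not a description of Amazon's actual logic, the kind of decision described above can be reduced to a toy rule: keep a cached rendition whenever re-transcoding it for every expected request would cost more than storing it. The example numbers below are assumptions, not published prices.

```python
def keep_cached_rendition(expected_requests_per_month: float,
                          transcode_cost_per_job: float,
                          storage_cost_per_month: float) -> bool:
    """Toy policy: cache a transcoded rendition when on-demand re-transcoding would cost more."""
    return expected_requests_per_month * transcode_cost_per_job > storage_cost_per_month

# Example: a 90-minute SD rendition (about $0.40 per transcode at 0.45 cents/min),
# requested ~200 times a month, against an assumed $0.05/month to keep it cached.
print(keep_cached_rendition(200, 0.0045 * 90, 0.05))   # True: cache it
```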

Is 4K HDTV relevant?

Beginning with the 2012 holiday season, I began seeing large-screen 4K TVs in retail displays (typically in a high-end theater room). The first one that I could inspect closely was at a Sony store in a factory outlet mall in Winthrop, MA. That was on Black Friday. Just a month later, I saw several displays with more compelling content at ABT, the mega-super-retailer with just one location in Glenview, IL.

If 4K were to catch fire, the sourcing of high-resolution content is not in doubt. 4K has been a production and archival standard for Hollywood studios since shortly after the advent of digital content creation. And, of course, studios can always transfer directly from their vast warehouses of legacy films. (At about 2000 lpi, the 35 or 70mm film used in the making of Hollywood movies for the past 75 years has a theoretical resolution about halfway between HDTV and 4K, depending in large part on lighting conditions. Digital IMAX is arguably the pinnacle of mainstream theater technology. It is projected at 4k x 2k, roughly 8M pixels).

But is home theater 4K TV relevant?

In the 1990s, I was briefly co-chair of the National Coalition for HDTV Research & Policy. The path to HDTV standards was torturous, spanning display technology, broadcast standards, and the requisite PC convergence.

Can we be blown away all over again?

I am a resolution junkie. For entertainment, I crave a big, beautiful theater experience. For PC work, I want a desktop with many open windows or pages—resplendent with microscopic detail. I want lines and characters that pop out with enhanced acutance. In the 90s and early 2000s, my friends were satisfied with VGA (640×480) or SVGA (800×600). I demanded XGA (1024×768). When laptops shifted to widescreen, I held out for WUXGA (1920×1200). Now, I have a 1080p notebook. It is the convergence standard. But it is not the ultimate consumer display. In fact, I crave the newest Samsung Book 9 Plus, which offers 3,200 x 1,800 pixels packed into a 13.3-inch display. That's almost 6 megapixels!

The NTSC standard lasted more than 50 years. It took two decades to make the market transition to HDTV. Today, 1080p is the de facto standard for both PC and TV displays, although most HDTV content is transmitted at a still respectable 720p. But do we want or need another standard with four times as many pixels?

As a resolution junkie, I can firmly answer the question: Nah… It is simply not worth it, even if the technology cost rapidly drops to par.

Watching TV is very different from viewing PC page content, which tends to be filled with text, but is mostly static. Over time, motion creates a rich experience. In fact, the "psychological bandwidth" of TV viewing is a product of pixels and frame rate. In my opinion, with HD—especially at 1080p—the human mind is maxed out. At this point, auditory and tactile input become more important than attempts to increase resolution beyond 1080p.

At whatever distance you find comfortable (say 2.5 feet from a 24″ display, 9 feet from a 50″ display or 15 feet in a home theater with a 110-inch screen), adding resolution to a moving image beyond 1080p is detectable only when you get so close to the screen that you are no longer enjoying the experience. For this reason, HDTVs under 20″ don't even bother to support 1080 pixels unless the display is also intended to accommodate connection to a PC.
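Here is a minimal sketch of the arithmetic behind that claim, assuming the common rule of thumb that 20/20 vision resolves roughly one arcminute of detail (a simplification; real acuity varies by viewer and content):

```python
import math

def max_resolving_distance_in(diagonal_in, vertical_pixels, aspect=16 / 9):
    """Farthest distance (inches) at which one pixel still spans about 1 arcminute."""
    height_in = diagonal_in / math.sqrt(1 + aspect ** 2)   # screen height from its diagonal
    pixel_in = height_in / vertical_pixels                 # size of a single pixel
    return pixel_in / math.tan(math.radians(1 / 60))       # 1 arcminute of visual angle

for vertical, label in [(1080, "1080p"), (2160, "4K")]:
    feet = max_resolving_distance_in(50, vertical) / 12
    print(f'50" {label}: individual pixels distinguishable only within ~{feet:.1f} ft')
```

For a 50-inch set, that works out to roughly 6.5 feet for 1080p and about 3.3 feet for 4K, which is why the extra pixels go unnoticed from a typical 9-foot couch.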


In my opinion, taking films beyond 1080p adds nothing to the experience (or at best, offers a severely diminished return), and yet it adds tremendously to the cost of storage and transmission.

Of course, in the end, industry standards are becoming marginalized. 4K will probably come upon us with or without a federally sanctioned standard, thanks to multi-sync monitors and the flexible nature of graphics cards and microcode. Today, resolution—like software—is extensible. Cable service providers can pump out movies at whatever resolution they like. The set-top box at the other end will decode and display films at the maximum resolution of a subscriber's display. The role of government in mandating an encoding standard is diminished, because most viewers no longer tune in to public airwaves. FCC turf is generally restricted to broadcast standards.

Am I often reluctant to adopt bleeding-edge technology? Far from it! This opinion is brought to you by a committed resolution junkie. But I do have a few exceptions. Check out my companion piece on consumer 3D TV technology. Spoiler: Both technologies are limited exceptions to my general tendency to push the proverbial envelope!

Ellery Davies is a privacy pundit and editor of AWildDuck. He is a frequent contributor to The Wall Street Journal. He is also a certified techno-geek with ties to CNet, Engadget & PC World.

3D TV: Ubiquitous & cheap. But who cares?

My opinion on the gradual penetration of consumer 3D television is not intended as expert research; rather, I am speaking simply from experience as a 3D owner.

I searched long and far for the perfect balance between a thrilling effect, simplicity, and cost. The brand and technology that I chose are unimportant to my point, but you can bet it was close to the very best in-home 3D experience available during 2013.

The technology works. That is, it elicits Oohs and Aahhs from visitors every time a fish swims up my neighbor’s nose or the dragon breathes fire and smoke. Basketball games are downright stunning, if a bit hard to find. But (and this is a very big “BUT”)…

… But the overall experience falls considerably short of the community cinema, and it's not a problem with the technology. In fact, they are equivalent!

At first, I thought that consumer adoption would be stuck until these problems were worked out. But, in fact, these are NOT the problems:

  • Wait for technology to be equivalent to movie theaters
  • Wait for cost to come down
  • Wait for passive eyewear
  • Wait for a wide spectrum of content (3D broadcast and Films)

In fact, all of these things have happened, and YES, due to low cost, 3D tech is now slapped onto flat screen TVs without demanding that viewers commit to actually using the feature. This gives tremendous impetus to adoption by broadcasters, because it addresses the two-sided network effect. That is, it solves the chicken-and-egg problem.

But here's the rub: Recall that I said that it falls short of a movie theater experience and yet—with passive glasses—it achieves the same quality and convenience. How can both of these observations be true?

In a movie theater, you are resigned to sit in one place for up to 2 hours without much head movement and certainly without walking about or viewing out of the corner of your eyes. Transporting the same technology into your home (in my opinion, this has been achieved with equal quality) does not create an equal experience. The glasses are never handy (there is no one to clean and recycle them, or hand them to you when you enter the room), and moving about the room causes headache and eyestrain. Quite simply, it is unnatural.

The practical outcome of this unfortunate situation is that I am left with transient bragging rights (until my friends buy their next TV) and I occasionally supervise stunning demonstrations. But even though content abounds, I really don’t care. After the first weeks of ownership, I never bothered to watch an entire show or movie in 3D. Furthermore, I unloaded the 3D copy of Avatar that came with my Panasonic Blu-Ray player. I prefer to watch in 2D. In the end, black level, contrast and resolution trump the Oohs and Aahhs of things that pop out of the frame.

Ellery Davies is a privacy advocate and security consultant. He addresses issues at the intersection of technology with law or social policy. His opinions and research appear across popular media, scientific and trade venues.

Update: NSA surveillance, Bitcoin, cloud storage

Just last month, Edward Snowden was honored with our first annual Wild Duck Privacy Award (we hope that he considers it an honor). The vigorous debate ignited by his revelations extends to the US Congress, which just voted on a defense spending bill to defund a massive NSA domestic spying program at the center of the controversy.

Although the bill was narrowly defeated, it is clear that Snowden has played a critical role in deliberative policy legislation at the highest level of a representative government. Even if this is the only fact in his defense, why then, we wonder, is Snowden a fugitive who must fear for his life and his freedom?

Snowden saw an injustice and acted to right a wrong. His error was to rely solely on his own judgment and take matters into his own hands, without deliberative process or oversight. But since it was the lack of these very same protective mechanisms that prompted his conscientious objection, the ethical dilemma presented a Catch-22.

—————————————————————————————

Regular readers know that we love Bitcoin. We covered the stateless currency in 2011 and 2013. Just as the internet decentralizes publishing and influence peddling, someday soon, Bitcoin will decentralize world monetary systems by obliterating the role of governments and banks in the control of money flow and savings. Why? Because math is more trustworthy than financial institutions and geopolitics. You needn't be an anarchist to appreciate the benefits of a currency that is immune from political influence, inflation, and the potential for manipulation.

Now comes word of a Texas man charged with running a $60 million Bitcoin Ponzi scheme. The story is notable simply because it is the first skullduggery aimed at the virtual currency, other than internet hacking or other attacks on the still fragile infrastructure. Should we worry? Absolutely not. This story has little to do with Bitcoin and falls squarely under the category of caveat emptor. Widows and orphans beware!

—————————————————————————————

In February, we wrote about Bitcasa, the upstart cloud storage service with an edge over diversified competitors and other entrenched players: Dropbox, Google Drive, Microsoft SkyDrive, SugarSync, Apple iCloud, etc. WildDucks learned how to get truly unlimited cloud storage for just $49. Now they are launching unlimited cloud storage in Europe starting at €60 per year.

Bitcasa still captures our attention and sets our pulse racing. While we are disappointed that it lacks the RDDC architecture that will eventually rule the roost, their Infinite Drive technology is a barn burner. More than ever, it is clear that Bitcasa is likely to displace or be acquired by their better known brethren.

—————————————————————————————

We also wrote about Dropbox, but that posting wasn't really a review. It was our plea to CEO Drew Houston (shown at left) to adopt a fully distributed and reverse cloud architecture. That effort failed, but Dropbox is still our favorite of the entrenched players. More suited to pinstripe corporate adoption, but in our opinion, not quite a Bitcasa.

In a previous article, we introduced lesser-known cloud startups with clever and unique architectures that yield subtle benefits: SpaceMonkey, Symform and Digital Lifeboat. That last one was in need of a life preserver. It flopped. But the IP that they created in the area of distributed p2p storage management will live on. We will all benefit.

—————————————————————————————

Finally, in May we ran down the benefits of cloud music players and their likely future of streaming your own personal library of movies. Now, Jeff Somogyi at Dealnews has created a nifty flowchart to help you decide among many vendors in a crowded market.

Of course, a discussion of Bitcasa, Dropbox, SpaceMonkey and RDDC wasn't our first discussion of cloud storage. Shortly after AWildDuck launched back in 2011, we applauded PogoPlug and their ilk (Tonidoplug, Dreamplug, Shiva, and other consumer-grade, network-attached storage devices with internet access). They let you create personal cloud services and even stream media from a drive or RAID storage device attached to your home router.

 

Cloud Music Players Foreshadow Movies On the Go

AWildDuck was launched in August 2011, nearly 2 years ago. In that first month, I wrote about a radical new feature of Apple Computer’s iTunes Cloud Player. Music Match allows users to upload music obtained from any source—even bootleg copies. Once uploaded (or more precisely, matched and mapped to a licensed, original track on Apple servers)—users can not only play it from the cloud with pristine quality, but even download a new high-quality original to their PC, without any copy protection (also known as DRM or “Digital Rights Management”).

In that early article, I questioned Apple's integrity in turning vast pirated libraries filled with tunes of questionable quality and pedigree into newly legitimized albums and tracks—all with high quality and no DRM. What I found most surprising was that Apple was nabbing a subscription fee of $24.95 per user while rights owners got a raw deal, even if Apple distributed the subscription fee across all rights owners in all those collections. After all, the deal covers 25,000 songs for each user, and it is likely that this will be expanded to 200,000 tracks to level the playing field with Amazon.

Since writing that piece as a newly minted blogger (I was still wearing diapers), I have begun to dismount from my high horse just a bit. First, there is the fact that the recording studios were very much a party to the new service. Although the deal really shafts composers and musicians with a continuing stake in their creations, Apple didn't hold a gun to their heads. Rather, they faced a brutal technical and market reality. Music is very easy to copy. To maintain a core of paid listeners, authorized channels of distribution and licensing had better be inexpensive, very simple, and offer added value that drives consumers to be both legitimate and loyal.

Another reason that I can't take a strong position against piracy is that it would be the very epitome of hypocrisy. The legitimacy of my own collection of music and movies is questionable to say the least. Actually, there is nothing "questionable" about it. I know the source of each track and film—and I certainly don't claim that licenses are in order.

Even so, I had a difficult time understanding why Apple would help to undermine content producers, which are the very bread and butter of a windfall revenue engine, from any perspective. But my thinking has softened toward Apple…

First, there is the fact of participation by rights owners and the piracy facts on the ground. But also, the high-quality, DRM-free tracks that users can download are laced with encrypted data that identifies the distributor, authorized user, and even the download transaction. No, they are not copy-protected, and users are free to back up their collection, create their own mix and even share music (with certain restrictions). But if studios lose control of their collections, they can at least identify the leak if an investigation ensues.

But I am not here to revisit the politics of Music Match and the effect on pirates or rights owners. After two years, I am finally becoming a cloud streaming groupie. That isn't to say that I lack experience in the cloud. I wrote the spec on Reverse Distributed Data Clouds and I create streaming data clouds from a plug PC situated in my own home to access documents and media on the go. But this time, I am joining the legions who stream from a major service and not just from their own private clouds.

Last week, I compared three services: iTunes Match, Amazon Cloud Drive and Google Music. Then, I moved my entire music collection to Google Music. By "compare", I mean that I read advertising claims, specifications and online reviews for each service. I talked to users and I searched for critical feedback concerning bugs and limitations. But I did not subscribe to each service or test them against each other. So my observations are not a comparative review. Yet, I can confidently make some observations about an emerging industry. These observations apply equally to all three media streaming services.

First, and perhaps most obvious, is the continuing change to entertainment delivery mechanisms, and the significant benefits with each change in technology…

Movies and Television

In the early 20th century, there were movies and newsreels. You had to travel to a big auditorium, the choice was limited, and the schedule for new content was measured in weeks. Display equipment was expensive. Then, after World War II—long before most of us were born—there was television. TV brought entertainment into the home. But it was not personal, it could not be saved and retrieved at will, and it belonged to a big company. In the mid-1970s, video tape allowed time shifting, archiving and purchasing or borrowing content. But it was complex, bulky and slow to move between films, chapters or scenes. Because of the nature of tape, it was very difficult to catalog a personal collection. For most of us, the “catalog” was a bookshelf next to the VCR with a narrow graphic or description along the edge of each box.

Accessible Media

Next, DVD and Blu-ray displaced videotape. Even during the height of the Blu-ray / HD-DVD battle, pundits agreed that the winning format would be the last removable storage device that used moving media. They predicted that electronic media would replace spinning discs. Blu-ray players began sporting USB ports and SD slots, which allowed users and visitors to bring content on a keychain. All of it was easily cataloged, and instantly accessible. And with the improvements to audio & video compression (and especially the cost and density of electronic storage), users could fit many movies into a device the size of a postage stamp or a stick of chewing gum.

I have loaded films onto USB drives for portability and swapping. But that era lasted only a few years. Despite a leapfrog of convenience, the physical format is coming to an end. The whole idea of storing media in a device that we carry from one place to another or store in a closet is an anachronism…

Welcome to the Cloud

The cloud is not new. It could be argued that Netflix and OnDemand from your cable provider are cloud services. But with these models, content “use” is under control of rights owners and distribution companies. Consumers don’t like that. They just won’t stand for it.

Just as Netflix and OnDemand have changed the entertainment landscape (in the past, media was borrowed from a library or a Blockbuster store), iTunes, Amazon and Google are changing the way media is served up from your personal library. It’s like having everything on your own drive, but a whole lot better.

How is it better? In this bulleted list of benefits, I refer to movies and music equally. In fact, cloud services are having a difficult time dragging movie studios into the world of user-controlled, non-DRM content. But I am trying to be a forward thinker. Sooner or later, you will be able to store and serve up movies from your own iTunes, Amazon or Google account and with all of the features and benefits that are just now spreading to music. So, while it may be a bit premature, I treat music and movies equally.

■  Your collection is available everywhere you go. You cannot forget to bring content that you own.

  • You needn't worry about making and maintaining frequent backups. That burden is borne by the cloud service. Instead, keep one permanent collection on your own media. It is your hedge against the possibility that lawmakers may clamp down on these services in the future.
  • With a matching feature from your cloud provider, your personal copy is perfect. Listen or view at the highest definition.
  • Your collection is indexed, searchable, and easier to research as you enjoy it. Imagine clicking on an actor’s face and instantly linking to their IMDB filmography. Now that’s a benefit worth writing home about!
  • You can lend content to a friend or borrow it from one. Some services allow media sharing; or you can simply create a temporary password for your nephew in Seattle.*

For now, these benefits are limited to your personal music collection. The  motion picture industry will delay the inevitable day of consumer content control for as long as they can. But with the ease of copying, the futility of DRM and the very low cost and compressed size of videos, the dawn of consumer empowerment is lurking around the corner.

Just as the floppy disk died a distinguished death in the 1990s, the interaction of consumers with all manner of removable media is coming to an end. The cloud is not just a marketing gimmick. It is tangible, friendly and very beneficial to consumers. I still believe that personal, distributed p2p clouds have an edge over cloud services. But the services have better applications, and the staff to maintain them. They offer an array of features and security that a home tinkerer would be hard-pressed to serve up from home or from a co-location server.

What About Google Music? Ready for Prime Time?

I have disclaimed any notion of offering a comparative review of cloud services, because I have not tested iTunes Match or Amazon Cloud Player. That said, readers wonder why I chose Google Music over the competition and what I think about it…

I chose it because it is free (up to 20,000 songs), it supports Android, the match component offers exceptional quality (320 kbps MP3 tracks), and it restores an entire library with one click. Finally, it is from Google, a company that champions consumer rights and tries hard to do the right thing regarding privacy. As for my thoughts after the first week of heavy use: the user interface is limited and there are some bugs to work out. Most noticeably, it is sophomoric. Although uploading is a snap, it is not clear whether a user can sync changes to MP3 metadata back to their PC or restrict the direction of sync. But as a music player, it is robust. I am confident that bells and whistles will follow.

* It may be a bit trickier if you wish to enjoy content abroad when using a cloud provider. Just as with Netflix, content "matched" by the provider may be restricted by apparent IP region. Therefore, you may need to set up a VPN to enjoy your media when traveling overseas.

Google switches Privacy honchos (Opportunity knocks)

After three years on the job, Google's first-ever Director of Privacy is stepping down. Alma Whitten rode out a tumultuous period which saw several high-profile privacy snafus, not least of which was what has become known as the WiFi drive-by data grab.

A changing of the guard in the office of chief privacy honcho presents a rare opportunity for Google. One wonders if Lawrence You will seize the moment…

Google has a privacy option that could propel them onto the moral high ground. A nascent science offers a way for Mr. You to demonstrate radical, indisputable proof of respect for users. Unlike other potential announcements, policies or technologies, this one protects user privacy completely—while continuing to profitably direct data from marketing partners. In fact, it won't interfere with revenues across any of Google's services, including Search, Docs, and all aspects of Android and Chrome.

Lawrence You steps in as Privacy Director

Lawrence You: Reason to keep smiling.

What could possibly anonymize individual user data while preserving individual benefits? I refer to Blind Signaling and Response. It is new enough that no major services incorporate the technique. That’s not surprising, because the math is still being worked out with help from a few universities. But with the resources and clout of the Internet juggernaut, Google needn’t wait until upstarts incorporate provable privacy and respect into every packet of data that flies across the ether.

What is Blind Signaling and Response? You're Google! Google it and go to the source. You once brought the inventor to Mountain View. My 2¢: Get the project in-house and grow it like a weed. When PMs and directors like Brad Bender, Anne Toth, Stephan Somogyi and Andrew Swerdlow get religion, the tailwind will grease a path toward rollout—and well-deserved bragging rights.

A bit of irony: VentureBeat says that Whitten is leaving the "hardest job in the world" and that Lawrence You will lose his smile as he takes the reins. Nonsense! With a technical solution to privacy, the world's hardest job will transform into one of education and taking the credit. Ultimately, it will be the prestige job that commands respect.

Perhaps just as important, Blind Signaling and Response will gut the Bing Scroogled campaign, like a stake through the heart. With Google pioneering user respect, the Scroogled campaign will turn from clever FUD into a colossal waste of cash.

Disclosure: Ellery Davies is very keen on the potential for BSR and wants very much to pull his favorite service provider along for the ride.

Bitcasa: Unlimited storage, version history & sync

Bitcasa has just emerged from “skunkworks” mode. The cloud storage startup made waves in 2011 as a finalist at TechCrunch Disrupt and runner up at Startup Battlefield. After burning through an initial $2 million, they landed an additional $7 million in June 2012. While there were few updates during 2012, some analysts noted that they filed for 20 patents—a few are really slick! Now, in February 2013, they have unveiled a cloud service with an edge over all others (SkyDrive, iDrive, Dropbox, SugarSync, etc). In my opinion, only Symform and SpaceMonkey come close to the model that I described 3 years ago (search for ‘Ellery’ and ‘RDDC’).

Bitcasa gives every user folder sync, a timeline for version recovery, and cloud storage without limits. And I really mean limitless! By the end of next month, I may be using petabytes, as in millions of gigabytes! My account shows that exabytes are still available. That’s more than all the grains of sand on the world’s beaches and all the stars in the heavens. How much does this cost? Just $99 a year, or $49 if you sign up early this month. (Promo Code: BETATHANKS). WildDucks can help this Blog by using our referral link. It tacks a free month onto your editor’s subscription.

I can’t guarantee that Bitcasa will be around next year. After all, most startups fail. But in this case, I crafted a substantially identical network architecture years ago. I understand the business model. Even with a high fraction of data hogs, the venture can profitably service users for the long haul. If an understanding of the secret sauce isn’t sufficient to assuage hesitation, this interview with CEO Tony Gauda will floor you. He combines the technical and marketing genius of Steve Jobs with the showmanship of Siegfried and Roy, and the smile of Barack Obama.

Damon Michaels, a WildDuck contributor wrote:

Seems like a virtual drive. I need automatic backup of
my important data. I use Carbonite for this right now.

The folder-sync defaults to all drives in their entirety—even external drives and network attached storage! If you accept the default, it always backs up everything. But more importantly, Bitcasa reverses the model. As connectivity becomes more ubiquitous and speedy, they want you to use the cloud as your primary active storage. Eventually, it will even host your live EXE files (your apps) and your “bootable” OS. The synchronized copy on your PC will be the backup – as well as the one that is used when you cannot connect.

I proposed the fundamental principles used in the Bitcasa architecture in this Blog, and 3 years ago in other articles. I called it a “Reverse Distributed Data Cloud” (RDDC). My spec adds distributed, P2P storage to the model. This reduces cost, creates redundancy, and makes a far more robust system. Not only does it get rid of the data center completely; it also makes it unnecessary for the service provider to perform any backups. In effect, the live cloud is a RAID 10,000 constellation. A sketch of the core redundancy idea appears below.
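
For readers who want to see how a peer swarm can replace a data center’s backups, here is a minimal sketch of the redundancy idea. It is my own illustration of the RDDC concept, not Bitcasa’s code: the shard-plus-parity scheme, the function names and the 4-way split are assumptions for demonstration, and a production system would use stronger erasure codes and encrypt shards before scattering them to peers.

```python
# A toy "RAID across peers" sketch (illustration only, not Bitcasa's design):
# a file is split into shards plus an XOR parity shard and scattered to peers;
# any single lost shard can be rebuilt from the survivors.
from functools import reduce

def make_shards(data: bytes, k: int = 4) -> list:
    size = -(-len(data) // k)  # ceiling division
    shards = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*shards))
    return shards + [parity]

def recover(shards: list, lost_index: int) -> bytes:
    # XOR of every surviving shard (including parity) reproduces the lost one.
    survivors = [s for i, s in enumerate(shards) if i != lost_index]
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*survivors))

shards = make_shards(b"family photos, tax returns, the novel you never finished")
assert recover(shards, lost_index=2) == shards[2]  # a vanished peer costs nothing
```

Real deployments distribute many more shards than are needed for a rebuild, which is what makes the constellation feel like “RAID 10,000”.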

One architectural trade-off is the desire for massive de-duplication versus the compelling need for end-to-end encryption, in which only the individual users have the keys. These two features are incompatible. Dropbox and Bitcasa claim that files are encrypted at the sender and that private keys are never given to the service. While technically true, that claim covers up a nasty little detail. They use a method called Convergent Encryption, in which the encryption key is derived from the content of the file itself. Although the service cannot decrypt a unique file (for example, your income taxes), they could compare a hash of your file to one provided by a government or alleged rights owner, thus proving that you have stored a copy of contested media. They could block access to movies and music that you have stored or even block your original upload. The good news is that with a full RDDC implementation, the need for de-duplication is greatly reduced or even eliminated. Therefore, a properly implemented RDDC can truly empower its users with strong, end-to-end encryption.
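
To make the trade-off concrete, here is a toy sketch of convergent encryption. It uses only the standard library, the keystream is deliberately simplistic and NOT a secure cipher, and the file contents are invented; it only shows why content-derived keys enable de-duplication, and why they also let a provider test whether you hold a known file.

```python
# Toy convergent encryption: the key is derived from the file's own content,
# so identical plaintexts always produce identical ciphertexts (de-duplication),
# and anyone who already holds the plaintext can recompute the same ciphertext.
import hashlib

def convergent_key(plaintext: bytes) -> bytes:
    return hashlib.sha256(plaintext).digest()

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # SHA-256 in counter mode as a stand-in keystream; real services use AES.
    stream = bytearray()
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

alice_file = b"popular-movie-bytes ..."
bob_file   = b"popular-movie-bytes ..."
c1 = toy_encrypt(convergent_key(alice_file), alice_file)
c2 = toy_encrypt(convergent_key(bob_file), bob_file)

print(c1 == c2)  # True: the service stores one copy for both users
# The flip side: a studio or government that holds the same plaintext can
# derive the same key and ciphertext, then ask the service whether it exists.
```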

I’ll report more about Bitcasa after a few months of use. For now, I feel gratified to see my dream taking shape at several American ventures. If you find this field as fascinating as I do, check out Symform, SpaceMonkey and Digital Lifeboat. That last venture is floundering, and may be bankrupt by the time you read this. But they have some very compelling technology for p2p, distributed storage.

Hertz acquires Dollar: What about the liability?

I avoid using this soap box for personal vendettas. A Wild Duck covers a broad range of topics, but spats over shady business practices aren’t usually among them. Tonight, I am outvoted. My co-editor wants me to run this story. Hey, this wound is fresh! Who am I to disagree?

Every once in a while, one encounters a vendor with business practices so out-of-whack, that it just begs to be exposed. Here’s one that hasn’t fully played out. If it is resolved before next week, I will update this Op Ed. But after experiencing this scam, I have doubts that a culture of deception can be corrected by a Blog posting…

Does Hertz care what lies under the covers?
Does Dollar know about ‘Rent a Terstappen’?

Let’s start with statements of fact: I travel. And I hate renting cars.

Until recently, the cost of renting a car was rarely what was agreed in advance. Online reservations are especially problematic, because franchisees fail to report local fees or policies to the franchiser, agency or internet marketing affiliates.

But years ago, I developed a method to overcome the problem, and it has worked splendidly. I first applied ‘Ellery’s Rule’ while planning a trip to Florida. I called the rental agency directly and presented my discount codes. I was quoted an excellent weekly rate. (I think that it was Avis, but I am not certain).

Just in case a desk clerk added up the numbers differently than the friendly telephone agent, I asked the agent to add a statement to the Memo section of the contract. She added these words:

The customer has been promised the rate as calculated in this estimate. He is not to be charged a different amount if the car is returned in good condition and with a full tank of gas.

To ensure that the statement exuded authority, I asked her to cite the name of a regional or department manager.

When I got to Florida, the reservation contract was already printed and waiting at the airline terminal rental desk. I pointed out the statement in the Memo section and the local clerk brushed it off with a chuckle. “Don’t you worry”, he said. “The rate is correct. You won’t be cheated.”

But when I returned the car, there was an extra $11 tacked onto the contract. “What’s this?!” I asked a new face at the desk. “Oh, that’s the Florida drug tax,” a friendly woman exclaimed, as if reading from a script. “Every customer must pay it. It’s the law. We have no control over state taxes.”

“Guess what?” I snapped back. “I don’t want any drugs.” I don’t think that she got the wit or charm of my dry sarcasm, but after a few phone calls, I certainly didn’t pay the Florida drug tax. Of course, she was right. It is a state law and payment is ascribed to the renter. But Avis paid it from the proceeds that I had agreed to pay. That’s because I had a written contract that specified the cost after all taxes, fees and even drugs. It is inclusive, in toto, complete! You get the picture.

For years, my little system worked like a charm. If a rental agent at first refuses to add the memo (effectively stating that their estimate is truthful), I threaten to cancel the reservation. They always get authority to add the Memo. It never fails. And so for these past years, I have been quietly smug when overhearing another traveler talk about unexpected fees added at the car rental desk.

I was smug, that is, until this past week. With Rent a Terstappen, I got hoodwinked!

Tactics of deception: Germany’s Dollar car rental franchise

I traveled to Frankfurt, Germany last week and rented a car from the local Dollar franchise. I got a good rate from HotWire.com, a popular web travel site. For a simple booking, it’s difficult to get a live agent on the phone, and so I booked my rental online, realizing that I might get stuck with a Frankfurt “drug tax”–or perhaps in this case, a wiener-schnitzel tax. But I was woefully unprepared for what happened. I was socked with an enormous fee and an even more absurd justification. It doubled the amount quoted in HotWire’s good faith disclosure!

Dollar franchisee       Rent a Terstappen
Desk clerk                 Beatrice Lindholm-Dagci
HotWire itinerary       4523744713
Contract offer            $151.87 *
Customer charge      $315.38 (?!)

* Revised from original offer of $182.24 for 6 days

Dear readers: You won’t believe the pretense on which Rent a Terstappen doubled my rental contract cost. Even with the separation of 6,000 kilometers and 6 days since my return, I still can’t believe the loony reason that Ms. Lindholm-Dagci gave me (at first, with a straight face). More shocking, I sensed that she didn’t believe it either. She whispered for me to visit Dollar competitors at nearby rental counters. Clearly, she gets push-back from more than a few outraged customers.

Well, this customer won’t stand for it. I landed during the busiest travel week in Germany. Even with staggered school vacations, everyone is on holiday during the 3rd week of August. Five other rental companies offered to match the rate that I was promised (without a farcical add-on), but none had vehicles anywhere near the airport. They were fully booked no matter what I paid. The folks at Hertz and Sixt (a European car rental outfit) sympathized with my plight. One even offered me a personal ride into the city. She has dealt with other disgruntled Dollar-booked clients.

  • Does Dollar Rental know of the massive deception foisted on their clients by Rent a Terstappen? (the local Dollar franchisee at the Frankfurt airport).
  • Does Rent a Terstappen force desk agents to pretend they don’t see what agents at every other rental counter already see? Beatrice Lindholm-Dagci recognizes the deception she is forced to perpetrate. She must hoodwink customers and then blame the fiasco on HotWire or other referring agents.
  • Does Hertz know that the reporting chain at Dollar is either deceptive or egregiously deficient? (My travel department will talk with Hertz if this is not settled by the end of this week.)

Oh yes! I forgot to tell you the reason for the doubling of my rental charge: Ms. Lindholm-Dagci explained to me that I must use a Gold branded MasterCard to complete the transaction, because she had no way of verifying insurance coverage for any other form of payment. I had with me a Platinum American Express, a Platinum Visa card and a Business Premium MasterCard. All of them carried rental insurance. I offered her a $1500 deposit, which she processed! I also offered proof of my insurance coverage through Liberty Mutual, with a very clear stipulation of full vehicle replacement value, even when driving in a foreign country.

She didn’t care. It had to be a Gold MasterCard. Not Premium, Not American Express Platinum, Not Chrome, Not Visa, Not the 7 other cards whose logos they display at the counter. Only a Gold MasterCard.

Next week, I will add Hubert Terstappen’s phone numbers to this story. Perhaps Wild Ducks can persuade him to rethink his business model.

Late Thursday Update:

A representative at HotWire has seen my rant and has launched an investigation. I understand that HotWire may compensate me for the difference between what I was promised and what was stated in their good faith estimate.

HotWire is a good company. They want to do the right thing. But I don’t really consider their payoff to be a proper solution. What about future visitors to Frankfurt who don’t know about the policy/scam? (Take your pick. It’s a toss up!) I have asked HotWire to reassess Dollar’s representation, or at least get the corporate franchiser involved and craft an ethical solution to Rent a Terstappen’s practices. I am fortunate to be working with individuals at HotWire and Dollar who are both understanding and empowered.

Enhancing Privacy: Blind Signaling and Response

A user-transparent privacy enhancement may allow online service providers like Google to provably shield personal data from prying eyes—even from themselves. Personal user data (search, email, doc and photo content, navigation and clicks) will continue to support the clearly defined purposes that users understand and agreed to, such as advertising, but the data will be unintelligible if inspected for any other purpose.
In effect, the purpose and processes of data access and manipulation determine whether data can be interpreted or even associated with individual users. If data is inspected for any purpose apart from the original scope, it is unintelligible, anonymous and self-expiring. It is useless for any person or process beyond that which was disclosed to users at the time of collection. It cannot even be correlated to the individual users who generated the data.

Blind Signaling and Response is not yet built into internet services. But as it crosses development and test milestones, it will attract attention and community scrutiny. A presentation at University of Montreal Privacy Workshop [video] gives insight into the process. The presenter can be contacted via the contact link at the top of this Blog page.

Can Internet services like Google protect user data from all threats—even from their own staff and processes—while still supporting their business model? If such commitment to privacy could be demonstrable, it could usher in an era of public trust. I believe that a modification to the way data is collected, stored and processed may prevent a breach or any disclosure of personal user information, even if compelled by a court order.

The goal of Blind Signaling and Response is to define a method of collecting and storing data that prevents anyone but the intended process from making sense of it. But this pet theory has quite a road ahead…

Before we can understand Blind Signaling and Response, it helps to understand classic signaling. When someone has a need, he can search for a solution.

When an individual is aware of their needs and problems, that’s typically the first step in marrying a problem to a solution. But in a marketing model, a solution (sometimes, one that a user might not even realize he would desire) reaches out to individuals.

Of course, the problem with unsolicited marketing is that the solution being hawked may be directed at recipients who have no matching needs. Good marketing is a result of careful targeting. The message is sent or advertised only to a perfect audience, filled with individuals who are glad that the marketer found them. Poor marketing blasts messages at inappropriate lists or posts advertisements in the wrong venue. For the marketer (or spam email sender), it is a waste of resources and sometimes a crime. For the recipient of untargeted ads and emails, it is a source of irritation and an involuntary waste of resources, especially of the recipient’s attention.

Consider a hypothetical example of a signal and its response:

Pixar animators consume enormous computing resources creating each minute of animation. Pixar founder, John Lasseter, has many CGI tools at his disposal, most of them designed at Pixar. As John plans a budget for Pixar’s next big film, suppose that he learns of a radical new animation theory called Liquid Flow-Motion. It streamlines the most complex and costly processes. His team has yet to build or find a practical application that benefits animators, but John is determined to search everywhere.

Method #1: A consumer in need searches & signals

Despite a lack of public news on the nascent technique, John is convinced that there must be some workable code in a private lab, a university, or even at a competitor. And so, he creates a web page and uses SEO techniques to attract attention.

The web page is a signal. It broadcasts to the world (and hopefully to relevant parties) that Pixar is receptive to contact from anyone engaged in Liquid Flow-Motion research. With Google’s phenomenal search engine and the internet’s reach, this method of signaling may work, but a successful match involves a bit of luck. Individuals engaged in the new art may not be searching for outsiders. In fact, they may not be aware that their early stage of development would be useful to anyone.

Method #2: Google helps marketers target relevant consumers

Let’s discuss how Google facilitates market-driven signaling and a relevant marketing response today and let us also determine the best avenue for improvement…

At various times in the past few weeks, John had Googled the phrase “Liquid Flow-Motion” and some of the antecedents that the technology builds upon. John also signed up for a conference in which there was a lecture unit on the topic (the lecture was not too useful. It was given by his own employee and covered familiar ground). He also mentioned the technology in a few emails.

Google’s profile for John made connections between his browser, his email and his searches. It may even have factored in location data from John’s Android phone. In the Czech Republic, a grad student studying Flow-Motion has created the first useful tool. Although he doesn’t know anything about Google AdWords, the university owns 75% of the rights to his research. The university incorporates keywords from research projects and buys up the AdWords phrase “Liquid Flow-Motion”.

Almost immediately, John Lasseter notices very relevant advertising on the web pages that he visits. During his next visit to eBay, he notices a home page photo of a product that embodies the technique. The product was created in Israel for a very different application. Yet it is very relevant to Pixar’s next film. John reaches out to both companies–or more precisely, they reach out in response to his signal, without even knowing to whom they are replying.

Neat, eh? What is wrong with this model?

For many users, the gradual revelation that an abundance of very personal or sensitive data is being amassed by Google, and the fact that it is being marketed to unknown parties, is troubling. Part of the problem is perception. In the case described above, and in most other cases in which Google is the arbiter, the result is almost always to the user’s advantage. But this fact, alone, doesn’t change the perception.

But consider Google’s process from input to output: the collection of user data from a vast array of free user services and the resulting routing of ads from marketing partners. What if data collection, storage and manipulation could be tweaked so that all personal data–including the participation of any user–were completely anonymized? Sounds crazy, right? If the data is anonymized, it’s not useful.

Wrong.

Method #3: Incorporate Blind Signaling & Response into AdWords
— and across the board

A signaling and response system can be constructed on blind credentials. The science is an offshoot of public key cryptography and is the basis of digital cash (at least, the anonymous form). It enables a buyer to satisfy a standard of evidence (the value of their digital cash) and also demonstrate that a fee has been paid, all without identifying the buyer or even the bank that guarantees the cash value. The science of blind credentials is the brainchild of David Chaum, cryptographer and founder of DigiCash, a Dutch venture that made it possible to guarantee financial transactions without any party (including the bank) knowing any of the other parties.

The takeaway from DigiCash and the pioneering work of David Chaum is that information can be precisely targeted–even with a back channel–without storing or transmitting data that aids in identifying a source or target. (Disclosure: I am developing a specification for the back channel mechanism. This critical component is not in the DigiCash implementation). Even more interesting is that the information that facilitates replying to a signal can be structured in a way that is useless to both outsiders and even to the database owner (in this case, Google).
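
The back-channel specification mentioned above is unpublished, so I cannot show it. What I can show is the building block it rests on: a Chaum-style blind signature. The sketch below uses deliberately tiny, insecure RSA parameters purely to expose the mechanics of blinding and unblinding; it is an illustration of the general technique, not of any BSR implementation.

```python
# Toy RSA blind signature (in the spirit of Chaum): the "bank" signs a message
# it cannot read, and the unblinded signature cannot be linked to that event.
import math
import random

p, q = 1009, 1013                      # toy primes; real keys use 2048+ bits
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))      # signer's private exponent

m = 123456 % n                         # message, e.g. a digital coin's serial

while True:                            # user picks a random blinding factor r
    r = random.randrange(2, n)
    if math.gcd(r, n) == 1:
        break

blinded = (m * pow(r, e, n)) % n       # user blinds the message
blind_sig = pow(blinded, d, n)         # signer signs without learning m
sig = (blind_sig * pow(r, -1, n)) % n  # user removes the blinding factor

assert pow(sig, e, n) == m             # verifies like an ordinary RSA signature
print("valid yet unlinkable signature:", sig)
```

The same trick that lets a bank certify a coin it never saw is what would allow a marketer’s response to reach a signaling user whom nobody can name.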

The benefits aren’t restricted to Internet search providers. Choose the boogeyman: The government, your employer, someone taking a survey, your grandmother. In each case, the interloper can (if they wish) provably demonstrate that meaningful use of individually identifiable data is, by design, restricted to a stated purpose or algorithm. No other person or process can find meaning in the data—not even to whom it belongs.

The magic draws upon and forms an offshoot of Trusted Execution Technology, a means of attestation and authentication. In this case, it is the purpose of execution that must be authenticated before data can be interpreted, correlated with users or manipulated. This presentation at a University of Montreal privacy workshop pulls back the covers by describing a combination of TXT with a voting trust (the presenter rushes through key slides at the end of the video).
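
The video describes the real mechanism; the fragment below is only my conceptual stand-in for the idea of purpose-bound data. The allow-list, function names and “sealed secret” are hypothetical. In genuine TXT/TPM sealing, the hardware performs the measurement check and key release.

```python
# Conceptual sketch: a decryption key is released only to code whose measured
# hash matches an attested, user-disclosed purpose. Any other process sees
# nothing but ciphertext.
import hashlib

APPROVED_MEASUREMENTS = {
    # published hash of the one audited routine allowed to touch the data
    hashlib.sha256(b"ad_matching_v1 source bytes").hexdigest(),
}

def release_key(requesting_code: bytes, sealed_secret: bytes):
    measurement = hashlib.sha256(requesting_code).hexdigest()
    if measurement in APPROVED_MEASUREMENTS:
        # Derive a key bound to this specific, approved purpose.
        return hashlib.sha256(sealed_secret + measurement.encode()).digest()
    return None  # curious staff, off-label analytics or subpoenaed exports get nothing

print(release_key(b"ad_matching_v1 source bytes", b"master-secret") is not None)  # True
print(release_key(b"analytics_export_v2 bytes", b"master-secret"))                # None
```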

It’s reasonable to assume that privacy doesn’t exist in the Internet age. After all, unlike a meeting at your dining table, the path from whisper to ear passes through a public network. Although encryption and IP re-routing ensure privacy for P2P conversations, it seems implausible to maintain privacy in everyday searches, navigation, and online email services, especially when services are provided at no cost to the user. Individuals voluntarily disgorge personal information in exchange for services, especially if doing so keeps the service provider incented to offer the service. For this reason, winning converts to Blind Signaling and Response requires a thoughtful presentation.

Suppose that you travel to another country and walk into a bar. You are not a criminal, nor a particularly famous or newsworthy person. You ask another patron if he knows where to find a good Cuban cigar. When you return to your country, your interest in cigars will probably remain private and so will the fact that you met with this particular individual or even walked into that bar.

Gradually, the internet is bringing to interactions at a distance the privileges and empowerment that we take for granted in a personal meeting. With end-to-end encryption, it has already become possible to conduct a private conversation at a distance. With a Tor proxy and swarm routing, it is also possible to keep the identities of the parties private. But today, Google holds an incredible corpus of data that reveals much of what you buy, think, and fantasize about. To many, it seems that this is part of the Faustian bargain:

  • If you want the benefits of Google services, you must surrender personal data
  • Even if you don’t want to be the target of marketing,* it’s the price that you pay for using the Google service (Search, Gmail, Drive, Navigate, Translate, Picasa, etc).

Of course, Google stores and acts on the data that it gathers from your web habits. But both statements above are false!

a)  When Google incorporates Blind Signaling into its services, you will get all the benefits of Google services without anyone ever discovering personal information. Yet, Google will still benefit from your use of their services and have even more incentive to continue offering you valuable, personalized services, just as they do now.

b)  Surrendering personal data in a way that does not anonymize particulars is not “the price that you pay for Google services”. Google is paid by marketers and not end users. More importantly, marketers can still get relevant, targeted messages to the pages you visit, while Google protects privacy in toto! Google can make your personal data useless to any other party and for any other purpose. Google and their marketing partners will continue to benefit exactly as they do now.

Article in process…

* This is also a matter of perception. You really do want targeted messaging, even if you hate spam and, like me, prefer to search for a solution instead of having marketers push a solution to you. In a future article, I will demonstrate that every individual is pleased by relevant messaging, even if it is unsolicited, commercial or sent in bulk.

Will Google “Do No Evil”?

Google captures and keeps a vast amount of personal information about its users. What do they do with all that data? Despite some very persistent misconceptions, the answer is “Nothing bad”. But they could do a much better job ensuring that no one can ever do anything bad with that data—ever. Here is a rather simple but accurate description of what they do with what is gleaned from searches, email, browsing, documents, travel, photos, and more than 3 dozen other ways that they learn about you:

  • Increase the personal relevance of advertising as you surf the web
  • Earn advertising dollars–not because they sell information about you–but
    because they use that data to match and direct relevant traffic toward you

These aren’t bad things, even to a privacy zealot. With or without Google, we all see advertising wherever we surf. Google is the reason that so many of the ads appeal to our individual interests.

But what about all that personal data? Is it safe on Google’s servers? Can they be trusted? More importantly, can it someday be misused in ways that even Google had not intended?

I value privacy above everything else. And I have always detested marketing, especially the unsolicited variety. I don’t need unsolicited ‘solutions’ knocking on my door or popping up in web surfing. When I have needs, I will research my own solutions—thank you very much.

It took me years to come to terms with this apparent oxymoron, but the personalization brought about by these information-exchange bargains is actually a very good deal for all parties concerned, and if handled properly, it needn’t risk privacy at all! In fact, the things that Google does with our personal history and predilections really benefit us, but…

This is a pro-Google posting. Well, it’s ‘pro-Google’ if they “do no evil” (Yes—it’s the Google mantra!). First the good news: Google can thwart evil by adding a fortress of privacy around the vast corpus of personal data that they collect and process without weakening user services or the value exchange with their marketing partners. The not-so-good news is that I have urged Google to do this for over two years and so far, they have failed to act. What they need is a little urging from users and marketing partners. Doing no evil benefits everyone and sets an industry precedent that will permeate online businesses everywhere.

The CBS prime time television series, Person of Interest, pairs a freelance ‘James Bond’ with a computer geek. The geek, Mr. Finch, is the ultimate privacy hack. He correlates all manner of disparate data in seconds, including parking lot cameras, government records, high school yearbook photos and even the Facebook pages of third parties.

Mr. Finch & Eric Schmidt: Separated at birth?

It’s an eerie coincidence that Google Chairman, Eric Schmidt, looks like Mr. Finch. After all, they both have the same job! They find a gold mine of actionable data in the personal dealings of everyone.

Viewers accept the TV character. After all, Finch is fictional, he is one of the good guys, and his snooping ability (especially the piecing together of far-flung data) is probably an exaggeration of reality. Right?!

Of course, Eric Schmidt & Google CEO Larry Page are not fictional. They run the largest data gathering engine on earth. I may be in the minority. I believe that Google is “one of the good guys”. But let’s first explore the last assumption about Mr. Finch: Can any organization correlate and “mine” meaningful data from a wholesale sweep of a massive eavesdropping machine and somehow piece together a reasonable profile of your interests, behavior, purchasing history and proclivities? Not only are there organizations that do this today, but many of them act with our explicit consent and with a disclosed value exchange for all that personal data.

Data gathering organizations fall into three categories, which I classify based on the exchange of value with web surfers and, more importantly, whether the user is even aware of their role in collecting data. In this classification, Google has moved from the 2nd category to the first, and this is a good thing:

  1. Organizations that you are aware of–at least peripherally–and for which there is a value exchange (preferably, one that is disclosed). Google comes to mind, of course. Another organization with informed access to your online behavior is your internet service provider. If they wanted to compile a dossier of your interests, market your web surfing history to others, or comply with 3rd party demands to review your activities, it would be trivial to do so.
  2. Organizations with massive access to personal and individualized data, but which manage to “fly beneath the Radar”. Example: Akamai Technologies operates a global network of servers that accelerate the web by caching pages close to users and optimizing the route of page requests. They are contracted by almost any company with a significant online presence. It’s safe to say that their servers and routers are inserted into almost every click of your keyboard and massively distributed throughout the world. Although Akamai’s customer relationship is not with end users, they provide an indirect service by speeding up the web experience. But because Internet users are not actively engaged with them (and are typically unaware of their role in caching data across the Internet), there are few checks on what they do with the click history of users, with whom they share data, and whether (or how) individualized data is retained, anonymized or marketed.
  3. National governments. There is almost never disclosure or a personal value exchange. Most often, the activity involves compulsory assistance from organizations that are forbidden from disclosing the privacy breach or their own role in acts of domestic spying.
The NSA is preparing to massively vacuum data from everyone, everywhere, at all times

The US is preparing to spy on everyone, everywhere, at all times. The massive & intrusive project stuns scientists involved.

I have written about domestic spying before. In the US, it has become alarmingly broad, arbitrary and covert. The über secretive NSA is now building the world’s biggest data gathering site. It will gulp down everything about everyone. The misguided justification of their minions is alternatively “anti-terrorism” or an even more evasive “9/11”.

Regarding category #2, I have never had reason to suspect Akamai or Verizon of unfair or unscrupulous data mining. (As with Google, these companies could gain a serious ethical and market advantage by taking heed of today’s column.) But today, we focus on data gathering organizations in category #1—the ones with which we have a relationship and with whom we voluntarily share personal data.

Google is at the heart of most internet searches and they are partnered with practically every major organization on earth. Forty-eight free services contain code that many malware labs consider to be a stealth payload. These doohickeys give Google access to a mountain of data regarding clicks, searches, visitors, purchases, and just about anything else that makes a user tick.

It’s not just searching the web that phones home. Think of Google’s 48 services as a marketer’s bonanza. Browser plug-ins phone home with every click and build a profile of user behavior, location and idiosyncrasies. Google Analytics, a web traffic reporting tool used by a great many web sites, reveals a mountain of data about both the web site and every single visitor. (Analytics is market-speak for assigning identity or demographics to web visits). Don’t forget Gmail, Navigate, Picasa, Drive, Google Docs, Google+, Translate, and 3 dozen other projects that collect, compare and analyze user data. And what about Google’s project to scan everything that has ever been written? Do you suppose that Google knows who views these documents, and can correlate it with an astounding number of additional facts? You can bet Grandma Estelle’s cherry pie that they do!

How many of us ever wonder why all of these services are free to internet users everywhere? That’s an awful lot of free service! One might think that the company is very generous, very foolish, or very unprofitable. One would be wrong on all counts!

Google has mastered the art of marketing your interests, income stats, lifestyle, habits, and even your idiosyncrasies. Hell, they wrote the book on it!

But with great access to personal intelligence comes great responsibility. Does Google go the extra mile to protect user data from off-label use? Do they really care? Is it even reasonable to expect privacy when the bargain calls for data sharing with market interests?

At the end of 2009, Google Chairman, Eric Schmidt made a major gaffe in a televised interview on CNBC. In fact, I was so convinced that his statement was toxic, that I predicted a grave and swift consumer backlash. Referring to the billions of individuals using Google’s search engine, investigative anchor Maria Bartiromo asked Schmidt why it is that users enter their most private thoughts and fantasies. She wondered if they are aware of Google’s role in correlating, storing & sharing data—and in the implicit role of identifying users and correlating their identities with their interests.

Schmidt seemed to share Bartiromo’s surprise. He suggested that internet users were naive to trust Google, because their business model is not driven by privacy and because they are subject to oversight by the Patriot Act. He said:

If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place. If you really need that kind of privacy, the reality is that search engines — including Google — do retain this information for some time and it’s important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities.

At the time, I criticized the statements as naive, but I have since become more sanguine. Mr. Schmidt is smarter than I am. I recognize that he was caught off guard. But clearly, his response had the potential to damage Google’s reputation. Several Google partners jumped ship and realigned with Bing, Microsoft’s newer search engine. Schmidt’s response became a lightning rod–albeit briefly–for both the EFF (Electronic Frontier Foundation) and the CDT (Center for Democracy & Technology). The CDT announced a front-page campaign, Take Back Your Privacy.

But wait…It needn’t be a train wreck! Properly designed, Google can ensure individual privacy, while still meeting the needs of their marketing partners – and having nothing of interest for government snoops, even with a proper subpoena.

I agree with the EFF that Schmidt’s statements undermine Google’s mission. Despite his high position, Schmidt may not fully recognize that Google’s marketing objectives can coexist with an ironclad guarantee of personal privacy – even in the face of the Patriot Act.

Schmidt could have salvaged the gaffe quickly. I urged him to quickly demonstrate that he understands and defends user privacy. But I overestimated consumer awareness and expectations for reasonable privacy. Moreover, consumers may feel that the benefits of Google’s various services inherently trade privacy for productivity (email, taste in restaurants, individualized marketing, etc).

Regarding my prediction of a damning consumer backlash, I was off by a few years, but in the end, my warnings will be vindicated. Public awareness of privacy, and especially of internet data sharing and data mining, has increased. Some are wondering if the bargain is worthwhile, while others are learning that data can be anonymized and used in ways that still facilitate user benefits and even the vendor’s marketing needs.

With massive access to personal data and the mechanisms to gather it (often without the knowledge and consent of users) comes massive responsibility. (His interview contradicts that message.) Google must rapidly demonstrate a policy of default protection and a very high bar for sharing data. In fact, Google can achieve all its goals while fully protecting individual privacy.

Google’s data gathering and archiving mechanism needs a redesign (it’s not so big a task as it seems): Sharing data and cross-pollination should be virtually impossible beyond a specified exchange between users and intended marketers. Even this exchange must be internally anonymous, useful only in aggregate, and self-expiring – without recourse for revival. Most importantly, it must be impossible for anyone – even a Google staffer – to make a personal connection between individual identities and search terms, Gmail users, ad clickers, voice searchers or navigating drivers! One rough sketch of what “aggregate-only and self-expiring” could look like appears below.
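
The sketch below is my own illustration of that principle, not a Google design; the class name, rotation interval and fields are assumptions. Events are keyed by a pseudonym derived from a salt that is rotated and destroyed, so aggregates survive while the link back to any individual expires.

```python
# Sketch: aggregate counts outlive the salt; individual linkage self-expires.
import hashlib
import secrets
import time
from collections import Counter

class ExpiringPseudonyms:
    def __init__(self, lifetime_seconds: float):
        self.lifetime = lifetime_seconds
        self.salt = secrets.token_bytes(32)
        self.created = time.time()
        self.topic_counts = Counter()   # aggregate, never keyed by user
        self.recent = Counter()         # short-lived, keyed by pseudonym only

    def record(self, user_id: str, topic: str) -> None:
        if time.time() - self.created > self.lifetime:
            self.salt = secrets.token_bytes(32)  # old pseudonyms become unlinkable
            self.created = time.time()
            self.recent.clear()
        pseudonym = hashlib.sha256(self.salt + user_id.encode()).hexdigest()
        self.recent[pseudonym] += 1     # e.g., short-term frequency capping
        self.topic_counts[topic] += 1   # what marketers actually need

store = ExpiringPseudonyms(lifetime_seconds=86400.0)
store.record("alice@example.com", "animation-software")
print(store.topic_counts)  # Counter({'animation-software': 1})
```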

I modestly suggest that Google create a board position and give it authority, with a visible and high-profile individual. (Disclosure: I have made a “ballsy” bid to fill such a position. There are plenty of higher profile individuals that I could recommend.)

Schmidt’s statements have echoed for more than 2 years now. Have they faded at all? If so, it is because Google’s services are certainly useful and because the public has become somewhat inured to the creeping loss of privacy. But wouldn’t it be marvelous if Google seized the moment and reversed that trend? Wouldn’t it be awesome if someone at Google discovered that protecting privacy needn’t cripple the value of the information that they gather? Google’s market activity is not at odds with protecting their users’ personal data from abuse. What’s more, the solution does not involve legislation or even public trust. There is a better model!

Schmidt’s statements are difficult to contain or spin. As Asa Dotzler at Firefox wrote in his blog, the Google CEO simply doesn’t understand privacy.

Google’s not the only one situated at a data nexus. Other organizations fly below the radar, either because few understand their tools or because of government involvement. For example, Akamai probably has more access to web traffic data than Google. The US government has even more access, because of an intricate web of programs that often force communications companies to plant data sniffing tools at the junction points of massive international data conduits. We’ve discussed this in other articles, and I certainly don’t advocate that Wild Ducks be privacy zealots and conspiracy alarmists. But the truth is, the zealots have a leg to stand on and the alarmists are very sane.

What’s with Verizon Billing & Customer Service?

Feb 2012 UPDATE:
Verizon billing misfeasance—just keeps getting worse

At the end of 2011, Verizon announced a bill-paying fee that would be charged even if payment is made on time and online. They did this to discourage individual monthly payments, pushing users, instead, to authorize debit from a checking account. To avoid the fee, users must grant Verizon carte blanche to dip into the till without a mechanism to authorize or restrict individual payments.

Although the proposal was not a ‘trial balloon’ (Verizon believed users wouldn’t mind paying for the privilege of paying!), it was met with overwhelming publicity and a scathing consumer reaction. The plan was scrapped within 48 hours.

But since the article appeared, many Wild Ducks were less concerned about Verizon’s fee schedule and more interested in the billing & support problems that plague Verizon TV and Internet, especially the wholesale inability to honor FIOS bundle promotions.

Billing integrity is abysmal. I suspect an audit of 100 customer accounts would reveal errors in the invoices or ACH debits of every one. In my own account, Verizon made scores of credit adjustments, but only after hundreds of calls & complaints. Jump directly to the relevant section.

When I launched A Wild Duck, I promised myself that this humble soap box would never be used for a personal gripe or vendetta. So let me get this out up front: This is a personal gripe. It’s not about the Verizon decision to charge customers a fee to pay their bills (a decision that they announced and then retracted after just 48 hours). Well, it’s tangentially related, but at least it’s not specifically about that loony announcement.

The ISP and wireless behemoth that Americans just love to hate is technically superior in every sector they serve. The best cell phone network in North America. The best Internet Service in the world (many of us enjoy 100Mbps FIOS service in our homes). Incredible television choices at reasonable prices. All this technology and superb technicians when there is a problem. But wait!…

They keep gushing out fodder. This time, Verizon announced a $2 fee for any customer who pays their bill. Yes! A fee to pay bills by mail or even online – unless the customer consents to pre-authorized automatic debit.

The plan lasted for about 2 days. They retracted the goofy anti-customer measure when the Federal Trade Commission announced an investigation (Hey guys. It’s stupid, but it probably isn’t illegal) and when a grass roots backlash began from every corner of the country. In fact, during the waning hours of 2011, it was more like a tidal wave.

I won’t comment further on the idea of charging customers to pay bills. It’s so whacky that it defies comment. But let me explain why users might not wish to allow Verizon to transfer payments in the absence of active client participation…


What’s up with Verizon’s Billing & Support?

  • Even after 4 years—Verizon has difficulty honoring offers & incentives
  • Renewal leads to endless billing errors & deplorable customer service
  • Verizon continues billing errors, even after agents identify the problem
  • Hundreds of calls, dozens of letters, constant apologies; Errors persist

Verizon should be permanently barred from interacting with any bank account. The amounts they debit have absolutely no bearing on the service package contracted by their clients! At least if you require them to mail invoices, you have a chance to demand corrections before payment. (But good luck. It can take literally hundreds of calls and complaints).

To make matters worse, Verizon sacked their customer support staff years ago. The remaining peons have been stripped of authority and tools. They simply cannot solve problems, no matter how egregious! (This has been acknowledged to me by numerous telephone support specialists who wish that they had mechanisms to solve serious and blatantly obvious snafus. They can’t even elevate serious billing problems).

Case Study:  Me!

I am an early adopter of direct debit payment (ACH & EFT). Since the 1980s, I have allowed a few vendors to debit my checking account for monthly services. This is how I pay for my mortgage, electric & gas bills, UPS package delivery, and other monthly services. I used to allow Verizon the same access to sweep their monthly service fee from my checking account. “Why not?” you ask. After all, it saves time, avoids late fees, and – as a diversified conglomerate – they can certainly keep their records straight. Right? Not on your life! For the past three years, I have blocked Verizon from dipping into my bank account. Instead, I use single payments for a practical reason…

Verizon has cut back on customer service to such an extent that they debit the wrong amount more frequently than the correct amount (no exaggeration!). In fact, in just 40 months, they have made more than 120 corrections to my bill and issued almost a dozen apologies. The problem is biggest with their FIOS and One Bill program (which folds in your VZ Wireless bill). They also have trouble with accurate billing for their “triple-play” bundles, especially if you choose a plan that aggregates your wireless phones.

I feel sorry for the Verizon customer who fails to regularly check their bank statement for EFT/ACH debits. With an almost complete lack of customer support, it sometimes takes legal threats (or waiting for service to be cut off) before getting Verizon to correct a litany of errors.

Why put up with such negligent customer service? It transcends misfeasance! One reason: Without question, Verizon serves up the best TV, Internet and wireless service in every market they serve. I freely acknowledge a terrific product suite. Cable TV companies and satellite services don’t even come close. Verizon never has a blackout or glitch, they replace equipment on demand, they don’t over-compress the TV signal and their FIOS speeds don’t degrade as neighbors jump on the bandwagon. In short, their “product-service” is terrific. But what about customer service?…

After 120 credits (and more than 150 phone calls to get them corrected), I finally had it! I called to disconnect service. Guess what? They lowered the price to keep my business. At first, I said “No.” I was really, really, really fed up. They didn’t just lower it once, but three times on the same call—a discount of more than $50 each month, a free DVR and a bump up to unlimited data on my smart phone. Even more surprising, they threw more senior and more professional resources at saving my business than ever offered in the past. They bent over backwards to retain my good will, and in the end, I capitulated… I accepted an outrageously grand offer.

And what happened after they created a new bundle price for me (confirmed in writing)? You guessed it! The discount never stuck. Each month thereafter, I was billed the wrong amount. Did I complain? Yes. Every single month. I got profuse apologies (“up the Gazoo” as they say). Eventually, a telephone representative told me that there is simply no mechanism to automatically apply special “customer retention” offers. So she offered to apply the discount each month, a few days after the regular invoice is generated. Mind you, a separate representative was manually crediting a “Triple Play” bundle discount because the company had no process for honoring a nationally advertised service package that included wireless services.

On top of all this, their unified One Bill program was a month behind in showing credits and payments, so I never knew what to pay!

Does this method of manual intervention work? Sort of…about 1/3 of the time. The rest of the time, I must call (it takes 3 or 4 calls) and persuade the first few representatives that I am the beneficiary of a “customer retention” offer. Then, these jokers need to find the representative who made the rebate offer. Then, my call is mysteriously dropped, or – get this – a tin plated, middle manager picks up the line and tells me that the original employee acted without authority. Whoahh?! I reprint transcripts of everything and resend a few legal demand notices (8 times last year!). Eventually, the original rep calls me back. Another apology, retroactive credit, and another promise, and…

Does this sound like a company that has its act together? Is this a vendor to be trusted with the keys to your bank account? I think not, Toto! Against the advice of my own family, I have still remained a Verizon customer. Alas, it is difficult to give up terrific products (Wireless phones, TV, Internet and tethering) and of course, very significant concessions to keep my business. But I certainly wouldn’t put up with this, if it weren’t for a massive incentive: about $600 off of their discount bundle and that’s on top of advertised incentives.

Note to Verizon Stock Holders: Imagine how much more your company would earn if they didn’t have to give so much back to disgruntled customers. If I held equity in Verizon or Vodafone, I would demand an accounting of post-facto givebacks. I bet that you will find a universe of lost revenue.

Verizon Wireless: Trouble with honesty & fairness

In the market for mobile phones, a span of 7 years represents a different era altogether. At least 4 generations of handset hardware have come and gone. Seven years ago, there was no iPhone and no Android. Palm was king of PDAs, a class that was still separate from phones and browsers. Feature phones offered Symbian at best. (Who remembers Windows CE?)

Way back in 2004, Verizon crippled Bluetooth in the Motorola v710, the first mobile phone to support short range wireless technology. The carrier supported Bluetooth for connecting a headset and for voice dialing, but they blocked Bluetooth from transferring photos and music between a phone and the user’s own PC. More alarmingly, they displayed a Bluetooth logo on the outside of custom Verizon packaging, even though the logo licensing stipulated that all logical and resident Bluetooth “profiles” are supported.

(Disclosure: I was a plaintiff in a class action that resulted in free phones for users affected by the deception. I am not a ‘Verizon basher’. I have been a faithful client since early cell phones and I recently defended Verizon’s right to charge for off-device tethering.)

Can you hear me now?

Why would Verizon cripple a popular feature that helps to differentiate and sell equipment? That’s an easy one. It forced users to transfer photos and music over the carrier network rather than exchange files directly with a PC. The carrier sells more minutes or costly data plans.

With the same motive, Verizon restricted feature phone apps to their Get it Now store, limiting music, games and ringtones to their own pipeline. Heck–Why not? It’s their ball park! Users can take their business to other carriers. Right? Well perhaps—but mobile service is built upon licensed spectrum, a regulated and limited commodity. Although carriers are not a monopoly in the strict sense (there are three or four carriers in populated regions), they are licensed stewards of an effective oligopoly.

Perhaps the longest lived vestige of Verizon’s stodgy funk (and the most depressing) was their insistence on stripping pre-smartphones of the manufacturer’s user GUI and forcing users to navigate a bland set of carrier-centric screens and commands. Often, I would sit next to someone on an international flight who had the same model Motorola, Samsung or Nokia phone. And guess what? His carrier didn’t interfere with fascinating user features. Why did Verizon force their own screens on unsuspecting Americans? It meant that I could not set my phone to vibrate first and then ring with increasing volume over the next few seconds. What a great feature on my Moto i810! But it was stripped from subsequent models, because it wasn’t spec’d by the boys in Verizon’s “retrofit and bastardize” lab.

With the exception of the class action on the Bluetooth features, no legislation was needed to get Verizon to unlock phone features. Eventually, a free market mechanism forced them to rethink their ivory tower greed. With AT&T’s market success selling iPhones, Verizon eventually capitulated so that they could become the Android market leader. The new strategy worked for both consumers and for Verizon. Even before they began selling iPhones in 2011, Verizon reasserted their position as the carrier of choice and fully justified their cost premium through excellent coverage and quality service.

Hey, Verizon! Can you hear us now?!

But now, the company that I have learned to hate, love, and then curse, is at it again! They are about to introduce the Samsung Galaxy Nexus. It is only the 2nd Google-branded Android phone (you can’t get closer to a pure Android experience!). But wait! News Flash: They are going to cripple a native Android feature. Just as with the Bluetooth debacle, Verizon claims that it is for the protection and safety of their own users. (Stop me, Mommy! I’m about to access a 3rd party service!)

Why doesn’t Verizon get it? Why can’t they see value in being the #1 carrier and base profit strategy on exceptional build out and service? Sure, I support their right to offer apps, music, ringtones, photo sharing, navigation, child tracking, mobile television, and even home control. These are great niches that can boost revenue. But remember that you are first and foremost a carrier. Just because you plan to enter one of these markets is no reason to cut off your own users from content and service options.

Think of this issue as your subscribers see it: Cutting off users from the Android wallet, because you plan to offer a payment mechanism of your own, is no different than a phone service blocking calls to Bank of America because they are tied in with Citibank. If that metaphor doesn’t cut it, how about a simple truth? It has been nearly three decades since Judge Harold Greene broke up the telecommunications monopoly. Your company is both a legacy and a chief beneficiary of that landmark decision. But success is transient to those who use market penetration to restrict choice. And this time, it won’t require anti-monopoly legislation. The market will push back hard and share recovery will be slow.

I’m taking my phone and going home!

Android is open. Get it? You have flourished recently, because you chose to embrace an open system that builds on its own popularity. You have contributed to its swift ascent, and likewise, Google and your users who like Android have contributed to your success. Why spit on your users now? What did we do to deserve this?

C’mon Verizon. Stop seizing your ball and threatening to close the ball park. We love you. Get it right for once and stop dicking with us. Our patience is wearing thin!

Photoshop Engineer Unblurs Motion & Restores Focus

Here is a quickie, Wild Ducks. File this one under “Wow!”

This demonstration by an Adobe PhotoShop developer forces me to rethink my understanding of focus and information recovery.

Deconvolution restores information, but only if it was captured in the original image and obfuscated via a reversible, non-lossy process. The filter demonstrates that motion blur can meet these criteria; blur is not necessarily evidence of missing information!

Until now, I thought that motion blur (example #1 in the video) and focus (example #2) were evidence of lost information—and therefore, they could not be overcome. That is, if a camera is out of focus or moving in relation to its subject, it is part way along the path to a complete loss of picture information (for example, a camera that is totally unfocused or moving in a complete circle with the shutter open; in the extreme case, film is exposed to unfocused light and captures no useful information). But this video proves that there exist algorithms that can make reasonable measurements and assumptions about the original scene and then recover sharpness and lost information.

Listen to the audience reaction at these times in the video:  1:17 & 3:33. The process is startling because it appears to recover information and not just perceived sharpness. Click for close ups of before-&-after that wow’d the audience  [Plaza]   [Cruise poster]

An existing 3rd party plugin, Focus Magic [updated review], may do the same thing. It is pitched to forensic investigators. (Note to Wild Ducks: Thwarting forensics is a noble calling). Focus Magic touts startling before-&-after photos of a blurry license plate which becomes easily readable after processing. Their web site highlights the restoration of actual sharpness through a process of deconvolution* as opposed to simply enhancing perceived sharpness by applying faux features such as unsharp mask or edge acutance. It is not clear if the two projects use the same underlying technique.
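
To see why motion blur is (mostly) reversible, here is a minimal sketch of non-blind deconvolution using a Wiener-style filter. It is my own illustration, not Adobe’s or Focus Magic’s algorithm, and it assumes the blur kernel is already known; the real demos estimate the kernel from the image itself.

```python
# If the motion-blur kernel is known, division in the frequency domain (with
# regularization) recovers much of the smeared detail.
import numpy as np

def blur(image, kernel):
    # Circular convolution via FFT; the kernel is zero-padded to the image size.
    H = np.fft.fft2(kernel, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

def wiener_deconvolve(blurred, kernel, snr=100.0):
    # Wiener filter: conj(H) / (|H|^2 + 1/SNR), applied in the frequency domain.
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

kernel = np.full((1, 9), 1.0 / 9.0)    # 9-pixel horizontal motion blur
rng = np.random.default_rng(0)
sharp = rng.random((256, 256))         # stand-in for a detailed photo
smeared = blur(sharp, kernel)
restored = wiener_deconvolve(smeared, kernel)

print("mean squared error, blurred :", np.mean((smeared - sharp) ** 2))
print("mean squared error, restored:", np.mean((restored - sharp) ** 2))
```

Frequencies that the kernel wipes out entirely cannot be recovered, which is why these restorations look dramatic but are never perfect.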

Implications for File Compression (e.g. JPEG)

Here’s something for armchair mathematicians to ponder. If we compare two compressed files, an image with sharp focus and an identical image that is unfocused but still recoverable, we see that the file size of the unfocused image is considerably smaller. In the past, we explained this based on the assumption that the unfocused image contains less information, as if we had resampled the original image at a lower resolution.

But if the unfocused image can be brought back into focus (and if the compressed file size relates to the visual entropy of the uncompressed image), then how do we explain the smaller file size? Put another way, if detail in the unfocused image is recoverable, then we should be able to boost compression by intentionally unfocusing images and restoring focus during decompression. This should also work for lossless compression methods such as TIFF/CCITT.
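The first half of that hunch is easy to test at home. Below is a quick, hypothetical experiment (synthetic noise standing in for a detailed photo, plain zlib standing in for a real image codec): blurring an image measurably shrinks its losslessly compressed size. Whether a blur-then-deconvolve round trip could ever beat a purpose-built codec is a separate and much harder question.

# Back-of-the-envelope test: does blurring shrink the compressed size?
import zlib
import numpy as np

rng = np.random.default_rng(0)
sharp = (rng.random((256, 256)) * 255).astype(np.uint8)   # noisy stand-in for a detailed image

# Blur each row with a simple 9-tap box filter.
kernel = np.ones(9) / 9.0
blurred = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, sharp)
blurred = blurred.astype(np.uint8)

print("sharp, compressed  :", len(zlib.compress(sharp.tobytes())), "bytes")
print("blurred, compressed:", len(zlib.compress(blurred.tobytes())), "bytes")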

* Deconvolution is a branch of mathematics and signal processing concerned with removing noise or distortion to reveal meaningful information hidden within a polluted file or signal. What is surprising about the Photoshop demonstration (and perhaps the process used by Focus Magic) is that a deconvolution process exists for information that I had assumed was never captured during the original recording.

Awash in cash, does Dropbox sense the undertow?

Dropbox CEO, Drew Houston, is about to facilitate a meaningful donation to his favorite cause, but he doesn’t know it yet. More about this in the last paragraph…

Dropbox is in an enviable position. The company is smokin’! It’s so hot that Forbes magazine calls it Tech’s Hottest Startup. So hot that Steve Jobs tried to acquire it. So hot that when anointed by the MacMeister with a personal audience, 28-year-old founder Drew Houston thumbed his proverbial nose at the offer. *

Dropbox: Atop its game…  But what about Future Shock?

What does Dropbox sell?
Dropbox sells cloud storage services, including backup, synchronization and file distribution. They are arguably the market leader, but they have plenty of competition: Apple’s new iCloud, SugarSync, SkyDrive (Microsoft), LiveDrive, Google Docs, Box.net, FolderShare and a growing list of wannabes. Without getting into the nitty-gritty, let’s just say that if you’re not using Dropbox or a similar service now, you will be soon.

What can cloud storage do for me?
You are probably familiar with Carbonite and Mozy. These vendors market their clouds as safety nets, constantly backing up your PC over the internet as you work. Google Docs, on the other hand, makes collaboration easier and more efficient, because users spread far apart can work on the same document at the same time – without worrying about who has the latest version. At best, these are ancillary benefits of cloud computing. Since the concept is still in its infancy, vendors focus on a simple, easily digestible pitch.

But clouds offer much more! With data in the cloud, your documents, photos and music are always available, backed up, in sync and safe – no matter where you travel or what gadget is handy. Files no longer depend on equipment that you own or carry, so you can travel light, with constant access to your business, media and memories. Much as Asimov predicted in The Last Question, using a personal data cloud is like having your brain in ever-present hyperspace.

Dropbox is the convergence leader. What’s wrong with that?
When startups reach a phase that I call investor frenzy, founders and early investors inevitably catch the “not invented here” bug or the “we are obviously doing it right” bug. But smart directors swat away the cocky bugs of success until the company reaches the profit phase and, of course, the ROI phase. They also keep a keen eye on competitors – even tiny startups – to see if someone has come up with a startling new way to improve service, boost revenue or reduce expenses.

What’s new in cloud technology?
“What’s new” is a tectonic shift in technology from centralized, data-center storage to distributed peer storage. It’s a dramatic architectural enhancement that I call Ellery’s Reverse Distributed Data Cloud [RDDC]. While I can’t take credit for all that is about to unfold, I was first to propose it three years ago. This past August, I blogged about the concept here at AWildDuck.

RDDC changes the dynamics of everything that matters in storage – cost, security, speed and even environmental impact – all in the right direction. As each user adds inexpensive storage to their own home or business network (the same drives they previously used for working data or backups), a central “traffic cop” treats this worldwide, massively redundant, distributed network as if it were a “RAID-10,000” drive array. The result: as long as no more than a third of users switch off their storage devices at the same time, everyone’s data is available instantly, securely and without risk of errors or hacks.
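Because RDDC is my own coinage, here is a toy availability simulation – my own sketch, not any vendor’s code – of why that redundancy claim is plausible. Assume each chunk of a file is erasure-coded into n fragments, any k of which can rebuild it, and the fragments are scattered across random peer drives; then knock a third of the peers offline and see how often every chunk remains rebuildable. Every name and number below is hypothetical.

# Toy availability simulation for the "RAID-10,000" idea (illustration only).
import random

def simulate(num_peers=1000, num_chunks=200, n=30, k=10, offline_fraction=1/3, trials=1000):
    # Scatter the n fragments of each chunk across n distinct random peers.
    placement = [random.sample(range(num_peers), n) for _ in range(num_chunks)]
    successes = 0
    for _ in range(trials):
        online = set(random.sample(range(num_peers), int(num_peers * (1 - offline_fraction))))
        # A chunk is rebuildable if at least k of its n fragment-holders are online.
        successes += all(sum(p in online for p in holders) >= k for holders in placement)
    return successes / trials

random.seed(7)
print(f"Full-recovery rate with 1/3 of peers offline: {simulate():.1%}")

In this toy setup the full-recovery rate lands around 99%, while plain 3-way replication of the same 200 chunks would almost always lose at least one chunk under the same outage. That is why erasure coding (the same math behind RAID parity) is the natural fit for a peer cloud.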

Incredible? You Bet! Want more? Of course!
Consider the return data throughput – the rate at which downloads from these many scattered storage drives arrive at your PC when you restore a backup, or even when you use a global cloud array as your main drive. You might think that spreading your data, byte-wise, across lots of slow uplinks would mean recovering it at a snail’s pace. You would be wrong. Even the programmers who understand the math are astounded at the RDT. Even if many drives in your personal cloud are heavily fragmented or poke along at the third-tier connection speed of a rural carrier, incoming throughput sizzles at blistering, heart-pounding speed. Why? Because inbound data is staged in the cloud as a torrent from a massively parallel cluster – not as a serial stream from a single peer.
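To make the arithmetic concrete, here is a back-of-the-envelope calculation with made-up numbers (all hypothetical): many sluggish residential uplinks, fetched in parallel, easily saturate a single user’s much faster downlink.

# Hypothetical numbers: aggregate throughput of a parallel peer fetch.
peer_uplink_mbps = 0.75          # one sluggish residential uplink
peers_serving_in_parallel = 40   # fragments of your data pulled at once
my_downlink_mbps = 25.0          # a typical home downlink

aggregate = peer_uplink_mbps * peers_serving_in_parallel
print(f"Aggregate peer throughput: {aggregate:.1f} Mbps")
print(f"Effective download speed:  {min(aggregate, my_downlink_mbps):.1f} Mbps (capped by your own downlink)")

In other words, the bottleneck quickly becomes your own last-mile connection, not the peers’ slow uplinks.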

There’s more. While all of this is happening, the uplink channel is not idle. Your global cloud array dispatches data around the world with predictive caching, based on new research into the distribution of media across disparate platforms.

Should I Care?
While the architecture of remote storage may seem a geeky detail, the fallout is a litany of benefits for users and a massive windfall for the first providers to get with the program. They will enjoy a 90% reduction in operational expenses, while customers experience a blistering bump in speed and meaningful intangibles, like the fault tolerance that comes with massive redundancy.

What vendors are rolling out this new technology?
Symform is already offering RDDC. SpaceMonkey has not yet announced, but its two founders in Salt Lake City (both from EMC) have an even more compelling model. They’re lining up investors now. They get it, and the angels are starting to take notice!

These tiny startups and a few others have a big edge on their well-funded brethren, because they are already on top of RDDC. If the challenge is not rapidly met by Dropbox and SugarSync, the new kids will sweep the market.

What’s the risk to the established players? Will they catch up?
Cloud computing for the masses is rapidly becoming a crowded market. Massive consolidation will come within a year. Only a few companies will be left standing. Most of the names entering the market today – and even some established brands – won’t survive or even be acquired. They’ll just die. A few fortunate startups will cash out because of their early implementation of RDDC. My bet is on the cloud providers that move quickly into massively distributed data clouds. They will be healthy and profitable. If Dropbox gets it and moves quickly to seize the day, they will very likely come out on top.

Drew Houston: In the catbird seat, but for how long?

Does Drew know about RDDC?
He might. More likely, he considered it briefly and then dismissed it. Even a bright individual can overlook an elegant solution to an unrecognized problem (in this case, dramatically reducing expenses while boosting data security).

It’s a safe bet that Carbonite and Mozy can’t implement RDDC in time to save their hides. One is too narrowly focused on marketing itself as a backup service; the other is married to data centers that it owns.

Perhaps Dropbox “gets it” and needs no input from its biggest fan. But perhaps – just perhaps, mind you – they have yet to design a fully holographic RAID-10K algorithm. Perhaps they have not yet optimized predictive caching for peer-distributed networks. Perhaps they are not equipped to quickly build a torrent reacquisition mechanism on the fly and activate it safely across thousands of peers with disparate upload and download speeds, while individual users power down their storage media every day without notice.

What’s the ‘R’ stand for in “RDDC”?
It stands for “Reverse.” This teaser lacks an explanation by design: if Drew or his deputies at Dropbox contact me, I want to give them an edge. It’s one of the few aspects of an ideal architecture that has not yet been exploited by any startup.

If Dropbox knows about RDDC, what is the purpose of this blog?
Finally, an easy question! Drew Houston may or may not be contemplating a Dropbox implementation of RDDC. But even if he is shoe-horning it into his ops plan right now, the purpose of this blog is to get his attention. Dropbox understands the business of cloud computing. Yours truly understands the seismic benefits of Reverse Distributed Data Clouds and has the business and engineering experience to jump-start a rapidly growing market leader. Your humble editor is itching to help a cloud-sync startup beat Apple, Google, EMC and Amazon, and to dominate the market before the average Joe adopts RDDC from the vendors of your daddy’s generation.

Tech & investment communities know Ellery by another name
I have never kept it a secret that Ellery is a pen name. I use it here at AWildDuck and for articles that I freelance to Google, c|net, Engadget, Yahoo & Amazon. My general vitae is posted to this blog and of course, Drew Houston will get all of my contact info.

Got your ears on, Drew? I get it. Years ago, I created the blueprint. I tested the architectural dynamics before your competitors got off the ground. Together, we can dramatically reduce costs while creating the most robust swarm on earth. Together, we can sew up a new paradigm before the others learn to tie their shoes. Reach out to me, Drew. I’ll give 5 hours to your favorite cause for 5 minutes of your time. Nothing to lose, and either way, you gain! Your move.

* To be fair, Drew admits that Jobs is his idol and an icon of high-tech entrepreneurship!

Not a Horse: An oat-powered quadrupedal transport device

From time to time, at AWildDuck, I offer an observation or op-ed on a topic of human interest. This one is not about current events, the price of gold, law or politics. Nah. It’s just Ellery’s spin with a nod toward levity. This one is fluff…

Columnist Eric Felten writes The Wall Street Journal’s biweekly column, Postmodern Times. In December 2010, he penned this review of Ralph Keyes’ book, Euphemania.

To Put It Another Way  — The Wall Street Journal, Dec 14, 2010

The book is filled with euphemisms—both clever and odd. A few nuggets generate guffaws because they are linguistic substitutions crafted to soften the impact of harsh truths. Without lying, they manage to twist simple facts to suit the utterer.

Take, for example, this euphemism for an aerial bombing. It has not been credibly attributed to a US Defense Department spokesman, but one could certainly imagine some spin meister warning generals and press attachés to get with the lingo:

Battlefield soldiers called for a vertically deployed antipersonnel device

Replacing “bomb” with “vertically deployed antipersonnel device” brings to mind a humorous euphemism from my childhood.

In the late 70s, I was in the showroom of a Fiat dealer as my father completed the purchase of a car in a corner sales office. My brothers and I occupied ourselves by watching a video of what Fiat claimed was the first fully automated robotic assembly line. In a perfect ballet, rows of machines worked in unison. The factory was completely devoid of humans.

Hello boys. What can I do for you?

Spotting an unattended group of young boys, a salesperson approached and asked if he could help us. I replied that we were waiting for our father while he completed the purchase of a new car. The salesman surprised us with his response: “You must be mistaken… We don’t sell cars.”

I had no idea if he was joking – or if, perhaps, I had wandered into the waiting area of another retailer. (Yet I was watching a showroom presentation of an automobile assembly line!) As my jaw dropped, I asked the salesman to tell me exactly what products were sold in this establishment. His reply still echoes in the Euphemism Hall of Fame:

                     “We don’t sell cars. We sell Italian driving machines!”

It’s not a car—It’s an Italian driving machine!

The correction had the desired effect. “Wow!”, I thought. These must be very classy cars. A few minutes later, my father emerged from the sales office with a big smile on his face. The sales manager gave him a pair of brown leather race-car driving gloves, with open fingers and raised rubber dots. That’s just what Dad needed as the new owner of an Italian driving machine.

Ellery (at) starbus (dot) com