There are empty threats, really empty threats, and threats so empty they could qualify as a black hole.  You can get an idea of that last category in a recent Wired piece titled “10 Percent of Americans Would Quit the Internet if Net Neutrality Dies”.  It begins:

Everyone from Comcast to the courts to government regulators should take heed: People want their internet service provider to treat all online traffic equally.

According to a survey recently published by Consumer Reports, 71 percent of internet users would switch to another service provider if their ISP violated network neutrality — the notion that no internet traffic should receive preferential treatment over other traffic. Ten percent of respondents even said they would be willing to give up internet altogether before putting up with throttled connections.

Do you believe those 10 percent?  No, of course you don’t, and I don’t imagine Comcast does either.  But things are problematic even for those rebellious 71 percent, as the piece later acknowledges: “But there’s a big wrinkle here: Users would need a competing service to actually move to, and options are limited in most cities.”  Yes, but it isn’t just cities that lack broadband options.

I haven’t thought much about net neutrality, and have written even less on it.  Not because it isn’t important; it is, as the debates surrounding it may well shape what the Internet looks like, and how it operates, a few years from now.  It’s because I don’t see much reason to think that future will be shaped by anything but large content and Internet service providers, their armies of lobbyists, and the pages of FCC bylaws they manage to write.  It isn’t going to be determined by users with their empty threats and few to nonexistent alternatives.

And after all, it’s not even clear that users understand net neutrality or could spot a violation if they did.  As the piece also notes, “the complex nature of the internet and the way it dovetails with other communication systems means that violations of net neutrality aren’t always easy to pin down.”

It may seem unduly pessimistic or downright irresponsible to essentially cede the debate, but was there ever any territory to cede in the first place?  While it might have a pleasant people-power feel to warn the powerful players to “take heed”, the threat looks rather empty.  It’s surely better to look on that fact as realism rather than pessimism.  Wouldn’t the first step towards attaining power be seeing through the illusion that you currently have any?

Of all the bummers to emerge from the ongoing Snowden/NSA saga, surely the most distressing is the prospect of US tech companies losing billions of dollars in profits as their customers lose trust that their data will remain ‘private’.  Read this report from the Information Technology & Innovation Foundation and weep, as it (not particularly rigorously) projects that the US cloud computing industry will lose between $21.5 billion and $35 billion, mostly to EU cloud computing competitors, due to revelations about the NSA PRISM program.

But there may be a crack of light shining through the darkness.  According to a story in the Financial Times (login required), Microsoft is breaking from the pack and planning to differentiate its services by allowing foreign customers to store their data on servers outside of the US.  Details don’t yet seem to be forthcoming.  Just how “local” will your servers be?  If you insist on local servers, just how much redundancy can be built into your “cloud”?  After all, that’s part of the attraction of the cloud: your data is just out there, hopefully on a bunch of redundant, geographically dispersed servers.  And how, exactly, even if the servers are local, can anyone guarantee that your data, traveling from point A to point Z, won’t traverse a line that the NSA has access to?  The Internet is anarchic that way, and you don’t have a lot of control once your data leaves your network.

Not to mention that there are plenty of other countries around the world, many of them in the EU, with capable signals intelligence agencies of their own.  What about them?  And let’s not forget…the companies themselves have your data, and even if they don’t violate your privacy and use it themselves, there is plenty of evidence that large companies aren’t great at keeping your data safe from thieves.

We’ve had a long, pretty narrowly circumscribed conversation about specific programs at one intelligence agency, so I suppose it shouldn’t be surprising when neat, clean “fixes” are presented.  It’s certainly good marketing: Microsoft gets to look like a privacy champion, differentiate itself from its competitors, and maybe claw back some of the revenue that would have otherwise gone to a European company.  What of actual substance is to be done remains unclear, and I suspect it will stay that way.

I wouldn’t be surprised if other US companies started to make similar offerings.  As an industry they may even create some sort of seal or certification: “Guaranteed NSA Free!”.  It’ll be very comforting, but fuzzy in its details.  What I don’t expect is anything to fundamentally change behind the scenes.


This piece in the BITS blog at the New York Times about the recent RSA security conference is amusing, particularly the section “Danke, Edward Snowden”:

German executives and intelligence officials called Mr. Snowden a hero and said his disclosures had been a boon for business, as N.S.A. suspicions prompted global companies to look for alternatives to American products and services. One German executive said that many clients who had considered moving their services to the cloud were now looking to store their data on hardware inside Germany, given that “the U.S. owns the cloud.”

This must be one of those serendipitous moments when principles intersect with pecuniary interests.  But, sure enough, some folks come along to rain on the parade:

But American officials were quick to rebut the idea that foreign data would be more secure outside American borders. “There’s a big call for data localization,” said Richard A. Clarke, the former United States counterterrorism czar. He pointed to the announcement this week between the European Union and Brazil that they would run a new undersea fiber-optic cable between Brazil and Portugal to thwart American spying.

“First of all, who doesn’t think the U.S. can’t listen in?” Mr. Clarke said. “Could it possibly be that these countries are trying to take business away from U.S. carriers?”

Michael V. Hayden, the former head of the N.S.A., said on a panel that Germany’s criticism of the United States intelligence community was hypocritical given European nations’ own cyberespionage programs.


‘Warheads on foreheads’ is the title of one of the sections in this Washington Post story about a recent, extensive interview with NSA document leaker Edward Snowden.  Per Snowden, there were apparently people who joked that that was what they did…they put warheads on foreheads.

I haven’t written anything about Snowden or the NSA revelations because, well, I haven’t really had anything to say about them.  The stories that I’ve read, such as the early one on the PRISM program, have been a bit fuzzy on technical details, so I’ve found it difficult to make strong judgements.  I do value privacy though, and am concerned about how modern technology can be used to violate it, so I welcome the NSA revelations, such as they’ve been so far.

And that’s why I’m a bit concerned when I read these paragraphs in the above mentioned section of the interview story:

Technology, of course, has enabled a great deal of consumer surveillance by private companies, as well. The difference with the NSA’s possession of the data, Snowden said, is that government has the power to take away life or freedom.

At the NSA, he said, “there are people in the office who joke about, ‘We put warheads on foreheads.’ Twitter doesn’t put warheads on foreheads.”

Privacy, as Snowden sees it, is a universal right, applicable to American and foreign surveillance alike.

The concern is the whiplash between the somewhat cheeky indifference he shows towards surveillance by private companies and the paragraph that follows, wherein Snowden’s adherence to the principle of privacy as a universal right is affirmed.  Surely the lack of an ability to launch a missile doesn’t make the violation of a universal right any less profound.

And violate they do, as this Pando piece on data brokers by Yasha Levine documents.  Practically any sort of data on people is available for a price, sliced and diced to your particular needs.  Most shockingly, a list of rape victims was offered for sale.  But beyond the distastefulness of that specific list, the scale of the thing (around a $200 billion industry) combined with its opacity should be plenty enough to raise concern, even if warheads aren’t involved.

Now, Barton Gellman apparently spoke with Snowden over a few full days, and those conversations got compressed down to a short piece in the Washington Post, so the relative paucity of concern Snowden seems to express may not be quite indicative of his true feelings.  And his focus on government surveillance is understandable, as that is where his experience lies.  (Though the fact that he worked for a private company, Booz Allen, when he took the NSA documents that he handed to journalists does indicate that the broad categories of private and government entities might not be that helpful.)  But we all should be cognizant, no matter where our focus, that threats to the “universal right” of privacy come from many sources, and we do no service to that right when we minimize and dismiss some of those sources in favor of others.


Evgeny Morozov has written what will apparently be an op-ed in the Financial Times that nicely pulls back and looks at the broader picture related to privacy and the NSA revelations.  It’s well worth a read, and can be found here.  A few choice paragraphs:

No laws and tools will protect citizens who, inspired by the empowerment fairy tales of Silicon Valley, are rushing to become data entrepreneurs, always on the lookout for new, quicker, more profitable ways to monetise their own data – be it information about their shopping or copies of their genome. These citizens want tools for disclosing their data, not guarding it. Now that every piece of data, no matter how trivial, is also an asset in disguise, they just need to find the right buyer. Or the buyer might find them, offering to create a convenient service paid for by their data – which seems to be Google’s model with Gmail, its email service.

What eludes Mr Snowden – along with most of his detractors and supporters – is that we might be living through a transformation in how capitalism works, with personal data emerging as an alternative payment regime. The benefits to consumers are already obvious; the potential costs to citizens are not. As markets in personal information proliferate, so do the externalities – with democracy the main victim.





1. the process of formulating generalized ideas or concepts by extracting common qualities from specific examples

2. the act of withdrawing or removing


1. Difficult to understand; abstruse

No matter the extent of a person’s genius, the scope of their intellectual reach, all of us have finite brain power.  The storage and processing our brains can do is limited, however impressively large those limits are for some.  And so, to deal with the complex world we have built, we use abstractions to blur away, or conceal, the complexity that lies underneath what we are working on or thinking about.  We see this in everyday life, in things as simple as a car dashboard, which shows us a simplified readout of the status of the engine and the systems that support it.  But while abstraction is useful, vital really, in smoothing out the bewildering complexity of modern life, the fact that it conceals what undergirds it can make it a tool of deception rather than clarity.

Consider how abstraction is used in technology, programming in particular.  At a low level, a CPU offers a set of instructions that operate on various data registers.  And though they might be similar, every processor, or at least every processor vendor, will provide a different instruction set.  A programmer who programs in that low level instruction set, or assembly language, will need to be very aware of the particulars of the CPU architecture, and exert a lot of effort making sure that the right data is moving to the right areas of the CPU.  And their program will only work on computers with the specific CPU they built their assembly language program for.

Now enter higher level programming languages and compilers.  Programming languages, particularly those considered object oriented, allow the programmer to focus on the data structures and the algorithms that operate on them, rather than the particularities of a specific CPU instruction set.  The programmer can even come up with their own data types and structures if they like.  The compiler then turns the higher level program into the machine language that the CPU will ultimately read.  And notice that you only need one compiler per type of CPU.  You can write a program once in a higher level language, and as long as a compiler exists for your target, it can be run on any CPU.  This allows incredibly more complex, and useful, software to be written.
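To make the layering concrete, here’s a small sketch in Python (my own illustration, so take it as such).  Python’s compiler targets a virtual machine rather than a physical CPU, but the idea is the same: the programmer writes in terms of values and operations, and the toolchain translates that down into lower level instructions.

```python
import dis

# A high-level function: the programmer thinks in terms of two values
# and an addition, not registers or memory addresses.
def add(a, b):
    return a + b

# CPython's compiler has already translated the function body into
# lower-level bytecode instructions for its virtual machine; a C
# compiler does the same job one level down, emitting real CPU
# instructions for a particular architecture.
dis.dis(add)
```

Run it and you’ll see instructions like LOAD_FAST and a binary-add opcode; the exact names vary between Python versions, which is itself a reminder that the layer beneath an abstraction is free to change without the code above it noticing.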

Abstraction is also critical to computer networking.  Sending a packet halfway around the world and back may take only 100 milliseconds or so, but it is a hugely complex endeavor.  The journey will most likely utilize a plethora of disparate technologies like DSL, DOCSIS, SONET, Metro Ethernet, MPLS, and possibly a host of others.  Helping to manage the complexity is the OSI layered model, in which lower and higher levels are “black boxes” whose operation is abstracted away from those not working directly within them.  Someone writing a program that sends IP packets to and fro does not care, does not need to care, about what transport technology is being used, whether it is DSL, a cable modem, Frame Relay, or Ethernet.  So too, the developers of Ethernet did not need to care about the particulars of the higher level protocols that their technology would transport, whether IP, IPX, or IPv6.  In this way, the modularity and abstraction involved allow the millions of nodes that make up the global Internet to interoperate.  There is no one who knows the particulars of all the technologies found on the Internet, nor does there need to be; as long as their points of conjuncture are well defined, the internal workings of connected pieces can be kept pleasantly blurry.
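A toy sketch of that indifference, again in Python: the program below echoes bytes over a TCP connection using nothing but the socket API, and nowhere does it know or care whether the bits beneath it would ride over Ethernet, DSL, or anything else.

```python
import socket
import threading

# Accept one connection and echo back whatever arrives.
def echo_once(server_sock):
    conn, _addr = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Set up a listener on the loopback interface; port 0 lets the OS pick.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_once, args=(server,))
t.start()

# The client speaks only in terms of byte streams; every layer below
# TCP/IP is a black box it never has to open.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, layers")
    reply = client.recv(1024)

t.join()
server.close()
print(reply)
```

The same code runs unchanged whether the two endpoints are on one machine, one LAN, or opposite sides of the planet; that is the OSI model’s blurriness doing its job.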

Now compare that usefulness with how abstraction has been used in the world of finance, specifically with Collateralized Debt Obligations (CDOs), those financial weapons of mass destruction responsible for so much recent economic destruction.  Put simply, they are a collection of underlying debt instruments, such as home mortgages, globbed together or “securitized” into a single unit (though with tranches) that kinda looks like a bond paying an attractive rate of interest.  They get maddeningly complex, with CDOs containing other CDOs (CDO²), and yes, CDOs containing CDOs containing CDOs (CDO³), and something called a synthetic CDO, which I honestly don’t understand but which seems to have a tenuous hold on anything that could be called reality.

The upshot is that the mass of underlying loans is abstracted away into the CDO; a AAA rating and a nice return are all that are meant to be seen, not the festering meat hidden inside: shitty, poorly documented loans made to people who were unlikely to be able to pay them off unless the housing market continued to skyrocket.  Ostensibly this abstraction spread out the risk of the underlying mortgages, but in reality it just concealed, and fobbed off, the huge risk of all those sketchy mortgages onto the CDO investors.
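A toy model makes the blurring easy to see.  The numbers below (default probability, tranche size) are made up for illustration, and this is nowhere near how real CDOs were priced; the point is only that once loans are pooled and tranched, all an investor sees is an aggregate figure.

```python
import random

random.seed(42)  # deterministic for the example

NUM_LOANS = 1000
DEFAULT_PROB = 0.08   # assumed chance that any one loan goes bad

# Simulate which loans default; the individual outcomes are then
# "abstracted away" into a single pool-level loss rate.
defaults = sum(1 for _ in range(NUM_LOANS) if random.random() < DEFAULT_PROB)
loss_rate = defaults / NUM_LOANS

# Tranching: a junior tranche absorbs the first 10% of losses, and only
# losses beyond that touch the senior tranche -- the part sold as AAA.
JUNIOR_SHARE = 0.10
junior_loss = min(loss_rate, JUNIOR_SHARE)
senior_loss = max(0.0, loss_rate - JUNIOR_SHARE)

print(f"pool loss rate: {loss_rate:.1%}")
print(f"junior tranche absorbs: {junior_loss:.1%}, senior tranche: {senior_loss:.1%}")
```

The trouble, of course, is that the tidy senior-tranche number rests on an assumption that defaults are roughly independent; when the whole housing market fell at once, the “safe” layer turned out to be nothing of the sort.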

And sometimes it looks like the CDOs may have even been meant to fail.  Take the famous case of Abacus 2007-AC1, a CDO deal set up by Goldman Sachs that collapsed almost as soon as it got started.  It was alleged that hedge fund manager John Paulson had a hand in picking the underlying residential mortgage backed securities, choosing them to maximize losses from a housing collapse.  He had bet against the CDO, you see, and made a fortune when it crashed.  In the end the SEC brought a suit against Goldman Sachs, which was settled for $550 million, though Goldman admitted no wrongdoing.

The whole mess is pretty neatly illustrated by the artwork accompanying this Rolling Stone article by Matt Taibbi.  A chef is pushing rats and other sundry parts into a meat grinder, while out the other end pumps ground red meat in the shape of AAA (the highest credit rating, or good as gold).  How do you detect that a product is tainted if the underlying bits are abstracted away, all ground up?

Which I suppose is all a roundabout way of saying that the abstractions we use to build and interact with a hugely complex world can be used as helpful tools or as means of deception.  So why is it you can feel pretty relaxed trusting the code from the public repository you downloaded and incorporated into the program you are writing, but as for that CDO or other exotic structured financial product you are perusing…well, a phrase like caveat emptor doesn’t even begin to capture the dangers lurking beneath the fine print?

For starters, the code in the repository is completely transparent: you can look through every line of it if you care to, and you may even contribute by improving it if you’re so inclined.  Then there is the reality of communities built around learning, sharing, and contributing to a code base, which allows for the incremental development of more complex, sophisticated, and useful software.

Contrast that with financial engineers whose role model is Gordon Gekko.  Men who, when they are not collaborating with their competitors to rig LIBOR or bids on municipal bonds, are gleefully slitting each other’s throats.  Fine fellows like Fabrice Tourre, a former VP at Goldman Sachs and a player in the Abacus deal mentioned above.  The SEC brought civil charges against him over that deal, and in August he was found liable by a jury on six of seven charges.  Emails to his girlfriend helpfully reveal his psychology:

More and more leverage in the system.  The whole building is about to collapse anytime now…  Only potential survivor, the fabulous Fab standing in the middle of all these complex, highly leveraged, exotic trades he created without necessarily understanding all of the implications of those monstrosities!!!

Anyway, not feeling too guilty about this, the real purpose of my job is to make capital markets more efficient and ultimately provide the U.S. consumer with more efficient ways to leverage and finance himself, so there is a humble, noble, and ethical reason for my job :) amazing how good I am in convincing myself !!!

The boilerplate mumbled justification about making capital markets “more efficient” is by the book, but the end bit does reveal a touch more self-awareness than I would have given his type credit for.  Anyway, as important as regulation and regulators are, I don’t see a technocratic solution to the problem.  People like the Fabulous Fab will never be constrained by rules; in fact, the more rules, and the more complex the rules, the more fissures and cracks to be found and exploited.  To get a feeling for how difficult it can be to investigate and prosecute financial crimes, have a read of this recent Matt Taibbi blog post.

There just appear to be far too many people focused on making capital markets more…efficient, on exploiting the dizzying complexities of the modern world, particularly the world of finance, to engineer massive wealth for themselves…and not a whole lot for anyone else.  As opposed to people who build and hone things that actually improve lives.  I’m not sure of the answer, but it’s a fundamental problem that doesn’t speak well to who we’ve become.

Further Reading:

Complexity is (almost always) the Enemy

The Frog and the Scorpion

There are about 7 billion people living on the planet.  Evolution has given us large and maddeningly complex brains.  Even discounting the occasionally deleterious effects of politics, religion, and culture in general on the human psyche, the sheer weight of the numbers dictates that you’ll find some number of people who are malevolently deranged.  That would appear to be a fact we have to labor under at least until our understanding of the brain is advanced far beyond where it is today, something, I think, that is well over the horizon.

Meanwhile technology marches forward at a steady clip.  And that includes destructive technology.  Only a hundred years ago nuclear weaponry was but a twinkle in a theoretical physicist’s eye, with the ICBM technology to deliver multiple warheads anywhere on the globe being the stuff of science fiction.  Chemical and biological weaponry advances apace.

While the Boston bombing has currently caught the media spotlight, it is the intersection of madness and gun technology that has been occupying the cultural and political realm.  The Sandy Hook shooting prompted an extensive national conversation about how to keep guns out of the hands of the mentally ill.  Legislatively, no action has yet been taken, with a bill to expand background checks on gun purchases dying in the Senate.

So, at least so far, we seem incapable of even beginning to address the intersection of guns and madness.  But what about when this intersection doesn’t mean a few dozen dead, or even a few hundred, but a few hundred thousand or a few million?  What about when there is no margin for error?  Today there are nine nations with nuclear weapons: five that are members of the Non-Proliferation Treaty (US, Russia, France, United Kingdom, and China), one that doesn’t declare that it has nuclear weapons (Israel), and three non-signatories of the NPT (India, Pakistan, North Korea).

Looking at Pakistan, the next thing to a failed state, with radical Islam a powerful force, one has to wonder if bronze age eschatology can be kept from meaningfully intersecting with technology that can quickly wreak biblical levels of destruction.  Can those lines be kept parallel forever?  It’s an odd quirk of the human mind that a single brain can make use of the powers of reason and science but at the same time be addled by unreasonable delusions.  Francis Collins, the head of the National Institutes of Health, saw a frozen waterfall and was thus convinced of the divinity of Jesus Christ.  Isaac Newton, arguably the greatest genius in history, spent an absurd amount of time on alchemy and finding hidden messages in the Bible.

We tend to have an optimistic view of technology as progressive and liberating.  But even beyond the potential uses of technology by governments to surveil and control citizens, the practical matter of the destructiveness of some technologies may mean an inherent incompatibility with our notions of individual liberty and democratic government.

This is not to say that all technology is destructive or by its very nature irreconcilable with liberal democracy.  But what if there is no room for error?  What if freedom doesn’t mean the occasional shooting spree or pressure cooker bomb, but the occasional briefcase (or pen?) nuke obliterating half a city, or the occasional outbreak of an engineered super flu?  What sorts of laws would constituents demand of their legislators then?

I’m not sure these are questions that will need to be answered in our lifetime.  But at some point technology will progress to the level that (even more) highly destructive technology is relatively simple, cheap, and easy to use.  Because how can the march of technology be stopped?  Would we really want it to be?  And so with any margin of error diminishing, someone is going to have to figure out how to prevent that technology from intersecting with human frailty, stupidity, and malevolence.  What would those laws look like?  What kind of society would result?  These are questions we should think about, because we might not have the luxury of handing them off to our progeny.