Skynet, Smugglers and The Gift of Fear: What we can learn from snap judgements, and machines can learn from us

So, in the day or two since I posted the piece about “Big Filter“, I’ve gotten several calls, comments and emails that all seemed to focus on the scary notion of “machines that think like us”.  Some folks went all “isn’t that what Skynet and The Matrix, and (if you’re older, like me) The Forbin Project, and W.O.P.R were on about?”  If machines start to think like us, doesn’t that mean all kinds of bad things for humanity? 

Actually, what I said was, “We have to focus on technologies that can encapsulate how people, people who know what they’re doing on a given topic, can inform those systems… We need to teach the machines to think like us, at least about the specific problem at hand.”  Unlike some people, I have neither unrealistic expectations for the grand possibilities of “smart machines”, nor do I fear that they will somehow take over the world and render us all dead or irrelevant.  (Anyone who has ever tried to keep a Windows machine from crashing or bogging down or “acting weird” after about age 2 should share my comfort in knowing that machines can’t even keep themselves stable, relevant or serviceable for very long.) 

No, what I was talking about, to use a terribly out-of-date phrase, was what used to be known as "Expert Systems" – a term out of favor now, but that doesn't mean the basic idea is wrong. I was talking about systems that are "taught" how someone who knows a very specific topic or field of knowledge thinks about a very specific problem.  If, and this is a big if, you can ring-fence the explicit question you're trying to answer, then it is, I believe, possible to teach a machine to replicate the basic decision tree that will get you to a clear, and correct, answer most of the time.  (I'm a huge believer in the Pareto Principle or "80-20 rule", and most of the time is more than good enough to save gobs and gobs of time and money on many, many things.  More on that in a moment.) 
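To make that concrete, here's roughly what I mean by a ring-fenced decision tree, sketched in a few lines of Python.  The rules, names and thresholds below are entirely invented for illustration – the point is only that one expert's branching logic, for one narrow question, can be written down as plain, inspectable code:

```python
# A minimal sketch of the "ring-fenced question" idea: encode one expert's
# decision tree for one narrow question as plain, testable rules.
# Every rule and threshold here is made up for illustration.

def expert_says_risky(loan):
    """Hypothetically replicates how a seasoned loan officer triages an
    application. Each branch is one question the expert actually asks."""
    if loan["debt_to_income"] > 0.45:        # rule 1: over-leveraged
        return True
    if loan["years_at_job"] < 1 and loan["credit_score"] < 650:
        return True                          # rule 2: unstable job + weak credit
    if loan["amount"] > 10 * loan["monthly_income"]:
        return True                          # rule 3: asking for too much
    return False                             # everything else: fine, most of the time

application = {"debt_to_income": 0.5, "years_at_job": 3,
               "credit_score": 700, "amount": 20000, "monthly_income": 4000}
print(expert_says_risky(application))        # True – rule 1 fires
```

It won't be right every time, but per the 80-20 rule above, it doesn't have to be.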

A few years ago now, I read a book called "The Gift of Fear" by Gavin de Becker, an entertaining and easy read for anyone interested in psychology, crime fighting, or the stuff I'm talking about.  The very basic premise of that book, among other keen insights, is that our rational minds can get in the way of our limbic or caveman brains telling us things we already "know" – the kind of instantaneous, can't-explain-it-but-I-know-I'm-right, in-our-gut knowledge that our rational brains sometimes override or interfere with, occasionally to our great harm.  (See the opening chapter of The Gift of Fear, in which a woman whose "little voice," as I call it, told her there was something wrong with that guy, but she didn't listen, and was assaulted as a result.  Spoiler alert: she did, however, also escape that man, who intended to kill her, using the same intuition.  Give it a read.) 

De Becker, himself a survivor of abuse and violence, went on to study the evil that men do in great detail, and from there, to codify a set of principles and metrics that, encoded into a piece of software, enabled his firm to evaluate risk and "take-it-seriously-or-not-ness" for threats against the battered spouses, movie stars and celebrities his physical security firm often protects.  Is this Skynet taking over NORAD and annihilating humanity?  Of course not.  What it is, however, is the codification of often-hard-won experience and painful learning – the systematizing of smarts. 

I was thinking about all this in part because, in addition to the comments on my last post, I’m in the middle of re-reading “Blink” (sorry, I appear to be on a Malcolm Gladwell kick these days.)  It’s about snap decision making and the part of our brain that decides things in two seconds without rational input or logical thought.  A few years ago, as some of you know, my good friend Nick Selby of (among many other capes and costumes) the Police Led Intelligence Blog, decided he was so passionate about applying technology to making the world better and communities safer that he both founded a software company (streetcred software - Congrats on winning the Code for America competition this year!) and became a police officer to gain that expertise he and his partner would encode into the software.  He told me a story from his days at the Police Academy.  I may have the details wrong on this bit of apocrypha, but you’ll get the point. 

During training outside of Dallas, there was an experienced veteran who would sometimes spend time helping catch smugglers running north through Texas from the Mexican border.  "Magic Mike," I call this guy – I can't remember his real name – could stand on an overpass and tell the rookies, "Watch this."  He'd watch the traffic flowing by beneath him, pick out one car seemingly at random and say, "That one."  (Note that, viewed at 60 mph and looking at the roof from above, the age, gender, race or other "profiling" concerns about the occupants are essentially a non-issue here.) 

Another officer would pull over the car in question a bit down the road, and, with shocking regularity, Magic Mike was exactly right.  How does that happen?!  And can we capture it?  My argument from yesterday is that we can, and should.  We’re not teaching intelligent machines in any kind of scary, Turing-Test kind of way.  No, it’s much clearer and more focused than that.  Whatever went on in Magic Mike’s head – the instantaneous Mulligan Stew of car make, model, year, speed, pattern of motion, state of license plate, condition etc. – if it can be extracted, codified and automated, then we can catch a lot more bad guys. 

I personally led a similar effort in cyberspace.  Some years ago, AOL decided that member safety was a costly luxury and started laying off lots of people who knew an awful lot about Phishing and spoof sites.  Among those in the groups being RIF'ed was a kid named Brian, who had spent untold hours sitting in a cube looking at Web pages that appeared to be banks, or PayPal, or whatever, saying, "That one's real.  That one's fake.  That one's real, that one's fake."  He could do it in seconds.  So we hired him, locked him in an office and said, "You can't go to the bathroom 'til you write down how you do that." 

He said it was no big deal – over the years he'd developed a 27-step process so he could teach it to new guys on the team.  Just one of those steps turned out to be "does it look like any of the thousands of fake sites I've gotten to know over the years?"  Encapsulating Brian's 27 steps in a form a machine could understand took 400 algorithms and nearly 5,000 individual steps.  But… so what?  When the weeks of effort were done, we had the world's most experienced Phish-spotter built into a machine that thought the way he did, and worked 24×7 with no bathroom breaks.  We moved this very bright person on to other useful things, while a machine now did what AOL used to pay a team of people to do – and it did it based not on simple queries or keywords, but by mimicking the complex thought process of the best guy there was. 
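For the curious, the flavor of the thing looked something like this.  To be clear, these are not Brian's actual steps (those belonged to AOL); they're invented stand-ins, in Python, to show how a gut call decomposes into individually testable checks:

```python
# A toy sketch of the kind of checks a phish-spotting pipeline might encode.
# These are NOT the real 27 steps – they're invented stand-ins showing how
# "I just know it's fake" decomposes into explicit, machine-runnable rules.
from urllib.parse import urlparse

KNOWN_BRANDS = {"paypal", "bankofamerica", "aol"}   # hypothetical watch list

def phish_score(url, page_text):
    score = 0
    host = urlparse(url).hostname or ""
    # Check 1: a brand named in the page but absent from the actual domain
    for brand in KNOWN_BRANDS:
        if brand in page_text.lower() and brand not in host:
            score += 2
    # Check 2: a raw IP address where a bank's domain name should be
    if host.replace(".", "").isdigit():
        score += 3
    # Check 3: a password form served without HTTPS
    if "password" in page_text.lower() and not url.startswith("https"):
        score += 2
    return score                      # higher = more phish-like

print(phish_score("http://192.0.2.7/login", "Enter your PayPal password"))  # 7
```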

If we can sit with Brian, who can spot a Phishing site, or de Becker, who can spot a serious threat among the celebrity-stalker wannabes, or Magic Mike, who can spot a smuggler's car from an overpass at 70 miles an hour – if we can understand how they know what they know in those instant flashes of insight or experience – then we can teach machines to produce an outcome based not just on simple rules but by modeling the thoughts of the best in the business.  Whatever that business is – catching bad guys, spotting fraudulent Web sites, diagnosing cancer early or tracking terrorist financing through the banking system – that (to me) is not Skynet, or WOPR, or Colossus.  That's a way to better communities, better policing, better healthcare, and a better world. 

Corny? Sure.  Naive? Probably.  Worth doing?  Definitely.  


“Big Filter”: Intelligence, Analytics and why all the hype about Big Data is focused on the wrong thing

These days, it seems like the tech set, the VC set, Wall Street and even the government can't shut up about "Big Data".  An almost meaningless buzzword, "Big Data" is the catch-all used to try to capture the notion of the truly incomprehensible volumes of information now being generated by everything from social media users – half a billion Tweets, a billion Facebook activities, 8 years of video uploaded to YouTube… per day?! – to Internet-connected sensors of endless types, from seismography to traffic cams.  (As an aside, for many more, often mind-blowing, statistics on the relatively minor portion of data generation that is accounted for by humans and social media, check out these two treasure troves of statistics on Cara Pring's "Social Skinny" blog.)

http://thesocialskinny.com/216-social-media-and-internet-statistics-september-2012/

http://thesocialskinny.com/100-more-social-media-statistics-for-2012/

In my work (and occasionally by baffled relatives) I am now fairly regularly asked “so, what’s all this ‘big data’ stuff about?”  I actually think this is the wrong question.

The idea that there would be lots and lots of machines generating lots and lots… and lots… of data was foreseen long before we mere mortals thought about it.  I mean, the dork set was worrying about IPv4 address exhaustion in the late 1980s.  This is when AOL dial-up was still marketed as "Quantum Internet Services" and made money by helping people connect their Commodore 64s to the Internet.  Seriously – while most of us were still saying "what's a Internet?" and the nerdy kids at school were going crazy because, in roughly 4 hours, you could download and view the equivalent of a single page of Playboy, there were people already losing sleep over the notion that the Internet was going to run out of its roughly four-and-a-half billion IP addresses.  My point is, you didn't have to be Ray Kurzweil to see there would be more and more machines generating more and more data.
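(For the record, that "billions of addresses" figure is just the size of a 32-bit address space – a quick sanity check:)

```python
# IPv4 addresses are 32-bit values, so the total address space is 2**32.
total_ipv4 = 2 ** 32
print(f"{total_ipv4:,}")   # 4,294,967,296 – roughly 4.3 billion, and very finite
```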

What I think is important is that more and more data serves no purpose without a way to make sense of it.  Otherwise, more data just adds to the problem of “we have all this data, and no usable information.” Despite all the sound and fury lately about Edward Snowden and NSA, including my own somewhat bemused comments on the topic, the seemingly omnipotent NSA is actually both the textbook example and the textbook victim of this problem.

It seems fairly well understood now that they collect truly ungodly amounts of data.  But they still struggle to make sense of it.  Our government excels at building ever more vast, capable and expensive collection systems, which only accentuates what I call the "September 12th problem."  (Just Google "NSA, FBI al-Mihdhar and al-Hazmi" if you want to learn more.)  We had all the data we ever needed to catch these guys.  We just couldn't see it in the zettabytes of other data with which it was mixed.  On September twelfth it was "obvious" we should have caught them, and Congress predictably (and in my opinion unfairly) took the spook set out to the woodshed, perched on the high horse of hindsight.

What they failed to acknowledge was that the fact we had collected the necessary data was irrelevant.  NSA collects so much data they have to build their new processing and storage facilities in the desert, because there isn't enough space or power left in the state of Maryland to support it.  (A million square feet of space, 65 megawatts of power consumption, nearly two million gallons of water a day just to keep the machines cool?  That is BIG data, my friends.)  And yet, what is (at least in the circles I run in) one of the most poignant bits of apocrypha?  The senior intelligence official's lament: "Don't give me another bit, give me another analyst."

It is this problem that has made "data scientist" the hottest job title in the universe, and made the founders of Splunk, Palantir and a host of other analytical tool companies a great deal of money.  In the end, I believe we need to focus not just on rule-based systems, or cool visualizations, or fancy algorithms from Israeli and Russian Ph.D.s.  We have to focus on technologies that can encapsulate how people – people who know what they're doing on a given topic – can inform those systems to scale up to the volumes of data we now have to deal with.  We need to teach the machines to think like us, at least about the specific problem at hand.  Full disclosure: working on exactly this kind of technology is what I do in my day job, but just because my view is parochial doesn't make it wrong.  The need for human-like processing of data based on expertise, not just rules, was poignantly illustrated by Malcolm Gladwell's classic piece on mysteries and puzzles.

The upshot of that fascinating post (do read it, it’s outstanding) was in part this.  Jeffrey Skilling, the now-imprisoned CEO of Enron, proclaimed to the end he was innocent of lying to investors. I’m not a lawyer, and certainly the company did things I think were horrible, unethical, financially outrageous and predictably self-destructive, but that last is the point.  They were predictably self-destructive, predictable because, whatever else, Enron didn’t, despite reports to the contrary, hide the evidence of what they were doing. As Gladwell explains in his closing shot, for the exceedingly rare few willing to wade through hundreds or thousands of pages of incomprehensible Wall Street speak, all the signs, if not the out-and-out evidence, that Enron was a house of cards, were there for anyone to see.

Jonathan Weil of the Wall Street Journal wrote the September 2000 article that got the proverbial rock rolling down the mountain, but long before that, a group of Cornell MBA students sliced and diced Enron as a school project and found it was a disaster waiting to happen.  Not the titans of Wall Street – six B-school students with a full course load.  (If you're really interested, you can still find the paper online 15 years later.)  My point is this – the data were all there.  In a world awash in "Big Data", collection of information will have ever-declining value.  Cutting through the noise, filtering it all down to which bits of it matter to your topic of choice – from earthquake sensors to diabetes data to intelligence on terrorist cells – that will be where the value, the need and the benefits to the world will lie. 

Screw “Big Data”, I want to be in the “Big Filter” business.

Mad Magazine, the NSA and Chinese Army Hackers

A quick follow-up to yesterday's post, continuing the "Jeez, you just can't keep a good secret anymore" meme for the week.  If you follow politics or business news you may have seen lots (and lots and lots) of headlines lately regarding US economic losses, political wrangling and business executives' hand-wringing over enormous, far-reaching and, by all accounts, incredibly effective Chinese hacking and cyber penetration of American companies, research labs and government agencies.  (The operation names read like a list of B-grade spy movies – feel free to read about "Operation Shady Rat" or "Byzantine Foothold" for some eye-opening facts and figures if this stuff isn't your normal beat.)


Recently, there was great Sturm und Drang after the folks over at Mandiant produced a very detailed and revealing public report about just how big, bad, widespread and effective these efforts have been (which wasn't entirely news to those in the know), and, much more interestingly, great specifics on how it was done and by whom (which was).


A division of the Chinese People's Liberation Army known by the not-entirely-inspirational moniker of "PLA Unit 61398" has since been the topic of much discussion in the press, the government and the security community.  (Not that a sexy moniker is all that important, I suppose.  I hear it's a great place to work with great benefits.  You can read one of their recruiting notices here if you'd like – see the aforementioned "Jeez, can't anybody keep a secret anymore?" discussion.)


Not to be outdone, (and in a piece that made me feel a bit like I was seeing a media version of the old Spy vs. Spy cartoons) FP just published a story headlined “Inside the NSA’s Ultra-Secret China Hacking Group”.  When the article includes a description of the inside of the building and the door into the room housing said “Ultra-Secret” unit, I’m pretty sure the folks who work there had a pretty significant hand in un-secreting it.


Still, given that the Chinese have long said they have their own mountains of data that we’ve been doing the same to them, perhaps this was just a timely PR use of information that, like Unit 61398, was about to enter the public conversation anyway.  The more I think about it, the more resonant that old cartoon strip seems.  They do it to us.  We do it to them.  Both sides know it, and the game goes on.  My guess is that what is a little bit different now is that both sides have to learn to play a game of shadows on a field that’s far more brightly lit than ever before.

Big Ears, Little Ears: One article, three layers of blown secrecy, and how Edward Snowden proves my point

Well, I haven't had much time to write here for quite a while, but the Edward Snowden affair – and more specifically this piece in the Guardian – was such a terrific display of the Digital Water concept and "a world awash in data" that I couldn't resist, despite my current schedule.  This story is kind of a delicious "triple play" on the concept.

I suppose before I dive in I should probably comment on using the word “delicious” in this context since I know there is an awful lot of outrage and shock on all sides of this debate.  Some are appalled by Snowden’s revelations, i.e. the supposed extent of the NSA’s electronic eavesdropping on everyone and everything including American citizens.  Others are appalled by Snowden’s actions and consider it nothing short of capital treason.  Those two viewpoints need not even be fundamentally in conflict – I’m sure there are folks out there who are both appalled by the NSA’s supposed activities and would like to see Snowden executed for treason.

I confess that, on the first point – the extent of the data collection and the agency’s capabilities – I myself am relatively unfazed. I’ve been in the Open Source Intelligence business for almost 15 years.  Given the shock many people express at what I could find out about them with nothing but a laptop at a Starbucks, I just can’t be wowed by what must be possible for a huge entity with a mania for secrecy, almost no oversight and an 11-digit budget.  The Echelon, or “Big Ear” controversy of the late 1990s(!) outed many of these supposed capabilities, and anyone who has even flipped through a James Bamford book would probably be slightly less bewildered at the ability (though perhaps not at the willingness) of NSA to do the things alleged. Anyway, wherever you stand on the particulars of the Snowden case, this article in the Guardian (which originally broke the story in an earlier piece) illustrates exactly the kind of world I have been trying to noodle over with this blog.  Here’s the “can’t anybody keep a secret any more?!” meme hat trick for this one little Web page.  Ready….

1. The NSA – The most obvious.  If you take him at his word, “The NSA has built an infrastructure that allows it to intercept almost everything. With this capability, the vast majority of human communications are automatically ingested without targeting. If I wanted to see your emails or your wife’s phone, all I have to do is use intercepts. I can get your emails, passwords, phone records, credit cards… The extent of their capabilities is horrifying.”  While we can argue the legal and moral issues, as a technological matter, this hardly should be a shocker given that we live in a world where your department store can tell when you’re pregnant (even if your parents can’t yet).   So – Level 1: John Q. Public can’t really keep a secret in the digital world.  Almost anything you say, send or type outside a locked, airtight room can be captured, analyzed and recorded if someone deems you interesting enough. 

2. Edward Snowden – So the NSA is, by its very nature, ultra-secretive, institutionally paranoid and famously tight-lipped (Jim Bamford's books notwithstanding).  Yet every organization is made up of people, and any group the size of the NSA's estimated 40,000 employees will hold a diversity of views.  Now, by all accounts to date, Snowden was a patriotic, smart kid who joined the Army Reserve and worked for the CIA.  He obviously had been scrutinized, checked out and picked apart.  You don't get to play inside The Puzzle Palace if you're an anti-government radical.  Yet what Snowden saw working as an NSA contractor motivated him to leak, speak, and flee the country.  Level 2?  For all the supposedly terrifying ability to spy that Snowden witnessed, one insider with a moral objection meant they couldn't keep their secrets secret either.

3. The guys at the airport – My absolute favorite (and why I found this page so delicious).  So in this sometimes-bizarre corner of the world here inside the DC beltway, it is not at all uncommon for lots of people with plastic ID badges on lanyards to be overheard talking about the sorts of things that, in most of the country, would seem at home only in a Tom Clancy novel.  You can walk through certain shopping mall food courts at lunch  and hear phrases like “I’m cleared up the wazoo – TS-SCI with lifestyle poly plus some special stuff” or “sure, anybody can read a license plate from outer space, but we can do it at night!”.

Like cars in Lansing or Dearborn, surveillance and Intelligence and secret-squirrel military programs are just kind of the local business, and this is a factory town.  A lot of people here take this stuff veeeery seriously.  So it is not entirely remarkable when the guys at the bottom of the page opine that Snowden, that dirty, rotten, no-good treasonous so-and-so ought to be “disappeared”.  The part I love so much was the extreme low-tech surveillance system that outed their conversation.  They said it out loud and in public, and a “Little Ear” (you know, the biological one attached to the guy sitting across from them) in the airport captured it.  He then used a few hundred bucks worth of smartphone to record part of the conversation and Tweeted about it to the whole world.

So – Quis custodiet ipsos custodes?  Apparently any employee with a conscience or every jackass with a cell phone.  I think that's probably reassuring, but I have to think some more about it.  The world really is full of dangerous people who hate us.  Meanwhile, my own personal take on the Snowden thing?  (I'm speaking technologically here; I leave the constitutional and legal questions for others to debate.)  IF you matter enough to someone, there are no secrets.  Most of us just enjoy security through obscurity.  The only reason our privacy is safe is that most of us are utterly uninteresting.  You may not like it, but information and technology are inextricably linked.  The capability to do what NSA does can't be uninvented.  We can do it… so can other countries.  We can only decide as a society how to strike the appropriate balance between protecting ourselves from those without and those within.

A real “Low Orbit Ion Cannon” gives new meaning to “Denial of Service”

So, is it just me or is this life imitating art imitating life imitating art…. or… something?  Hopefully some gamer, geek or Star Wars fan can help me untangle the levels of overlapping nerd irony and the triple (maybe more?) entendre here.  Whatever.  It’s some kind of clever, linguistic, something-funny-in-there-someplace,  with a side order of potentially-worrisome-but-in-the-meantime-sci-fi-channel-awesomeness.

If “LOIC” already makes sense to you, skip to the bottom of the graphic.  If not, read on.  This won’t take long.

Ready?

  • So there’s a video game series called  Command & Conquer.  In it is a weapon called the Low Orbit Ion Cannon, or LOIC.  It is a space-based platform that sends targeted beams of energy down through the sky and makes very specific things go boom.
  • The name was in turn co-opted by the authors of a tool, also called Low Orbit Ion Cannon, for stress testing a target system by subjecting it to a (simulated?) Denial of Service, or DOS, attack.  For you ungeeks out there, a DOS attack is essentially sending highly focused streams of packets against a specific machine or network to see if you can make it go boom.  Hence, the name.
  • They later open-sourced the Low Orbit Ion Cannon software into the public domain, whereupon it was used for both legitimate network testing and by people making all kinds of mischief, to wit, making various computers or networks go boom.
  • In other words, a tool originally developed to make networks safer from Denial of Service attacks was then used to commit Denial of Service attacks.  So far so good?

[Image: Low Orbit Ion Cannon – courtesy of Digital-digest.com]

  • Recently, Boeing and the US Air Force revealed in a video animation and public statements that they had successfully tested a weapon that could completely disable computer systems in specific locations with extreme precision, e.g. kill the electronics in one building, but not the building next to it.
  • How did they do this?  An aerial platform that sends targeted beams of energy down from the sky and makes very specific things go boom.

Boeing calls the platform CHAMP.  (What, no gamers on the project?)  It appears to use an incredibly powerful electromagnetic pulse – EMP – to knock out the target's computers and electronic equipment.  No mystery there; EMP has been kicked around as a weapon for decades.  Except… it does so on such a targeted basis that the aircraft carrying the weapon, itself full of wires and chips and electronics, is unaffected.  Whoa….

Anyway, I think the implications of this are kind of scary in the longer run, proliferation being what it is and all.  On the other hand, this EMP thing is the same stuff that saved Neo, Morpheus and the Nebuchadnezzar from the Sentinels in  The Matrix.  Maybe the human side of the conflict will stand a chance against Skynet after all.

The Goal, Finding Ultra, and The Agile Manifesto, or “What does running 52 miles have to do with writing good code?”

So, in the mental Mulligan Stew that is my brain, I find odd patterns and connections emerging, or re-emerging, often out of whatever happens to be on my Reading List at the time.  This morning was a perfect example of this happening, and (if you can tough it out the three minutes to the end of this post) I think there’s something useful in it, at least if you’re part of the nerd herd (yes, Jeanne, this one’s for you. :) )

I was meeting with a colleague this morning and we were discussing one of the challenges organizations can face moving product/development teams to Scrum, a flavor of Agile development.  The topic we were discussing was both the personal bias among some developers for, and the business or upper-management pressures to fall back on, short-term, informal or "hackish" solutions to problems when something just needs to get done and get into production.

A casual reader might even think that this might make sense.  Isn’t Agile after all, supposed to be, well, agile?  Get something out, test it, get feedback, fix it later as needs be?  Kind of all “Lean Startup“-y?

I'm still relatively new to Scrum myself – I am a CSPO, but this is still my first year leading a Scrum product development initiative – yet I can say already that I believe this casual read would be wrong.  One of the central tenets of Scrum and Agile is test-driven development or, if you prefer to think of it in terms of the Lean Manufacturing process (from which the Agile disciplines were derived), "designing in quality from the get-go".  In other words, yes, the principles (see the Principles document accompanying The Agile Manifesto) strive to be responsive, get stuff out the door, and iterate quickly.  However, whatever does go out the door is meant to be fully-tested, production-ready and of high quality, even if it is very small in scope.
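If you've never seen test-driven development in miniature, a sketch helps.  The example below is mine, not anything from the official Scrum guides – the test pins the behavior down first, and the slice we ship is tiny but fully tested:

```python
# A bite-sized illustration of "test-driven": the tests are written first and
# define "done" before the implementation exists. All names here are invented.
import unittest

def normalize_username(raw: str) -> str:
    """The small, fully-tested slice we actually ship: trim and lowercase."""
    return raw.strip().lower()

class TestNormalizeUsername(unittest.TestCase):
    # These tests existed before the function did – they are the spec.
    def test_strips_whitespace_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_clean_input_is_unchanged(self):
        self.assertEqual(normalize_username("bob"), "bob")

if __name__ == "__main__":
    unittest.main()
```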

You can test out a concept car with no power windows, radio or A/C and painted in primer, and people can still love the styling, fuel economy and future vision that’s rough and unfinished.  But if you put out a jalopy that can’t be trusted not to fall apart or crash, you’ll never get another fair shot at those early reviewers.  Rough is ok.  Even incomplete is ok.  Dangerous or unreliable, that’s not ok.

So, what's wrong with a short-term hack that you know won't hold up for the long term, or under heavy load, or whatever the future brings, if that hack buys you some time now or gets management off your back?  The problem, in my opinion, with kicking the can down the road is that it so often makes the eventual solution more expensive; sometimes – given the law of unintended consequences – vastly more so.  The actual comment my friend made this morning was along these lines: in this scenario, which happens all the time in the real world, "the team that takes the shortcut ends up saving half the time now, but spending ten times the effort when they're all done."

So, they cut today's cost by 50%, and raise the total cost by 500%.  In some cases – and this is reality, unfortunately – the fast fix is a source of praise or recognition, while the long-term impact is buried in later, routine work.  The result is that an organization can actually encourage the bad behavior that carries an eventual 10x cost.  I don't have a calculator handy, but I'm pretty sure that's a bad deal.  What really tickled my brain was what my colleague said next, which was roughly this: "Somehow I think some development teams lose sight of the actual goal.  In their effort to go faster, they end up actually slowing themselves down."
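Okay, now I have a calculator.  Plugging in made-up but plausible day counts (mine, not my colleague's):

```python
# Illustrative figures only: doing it right vs. the shortcut-plus-rework path.
proper_fix   = 10                 # days to build it correctly the first time
shortcut_now = 5                  # the hack "saves half the time now"
rework_later = 50                 # "ten times the effort when they're all done"

total_hack = shortcut_now + rework_later      # 55 days, all told
print(f"{total_hack / proper_fix:.1f}x")      # 5.5x the cost of doing it right
```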

It was this particular phrasing that caused the asteroid collision of two books in my head.  I just finished "Finding Ultra" by Rich Roll, overweight-middle-aged-lawyer-turned-extreme-endurance-athlete [you should click that one – you gotta see the pictures].  Early in the book, Rich describes the first prescription he received from his coach when he decided (with no real experience whatsoever) that he was going to become an Ultraman.  One of the first rules his coach imposed was that he had to learn and understand where his aerobic/anaerobic threshold was, and change his habits to manage his metabolism around that breakpoint.  He was not initially moving at a steady, sustainable pace – and the pace he was told to hold, once he switched to it, at first felt painfully slow.  This change, he was instructed, was necessary because without it he would burn out too fast and slow his later progress, or cause physical problems that would interrupt or end a long event.

In other words, until he changed how he approached each element or sub-part of the race, the faster he ran, the slower he finished.

Back in school, I read The Goal by Eli Goldratt.  In this fictional tale, a factory manager (and his Socratic mentor) work to understand and fix the problems in a production plant plagued by delays, high costs and poor output.  Everything from his marital life to a scene involving a marching Cub Scout troop eventually reveals the underlying principles that help solve the problem.  (If you're interested in production operations or business at all, this book remains a quick and relevant read.)  While there are a number of more detailed lessons on Operations Management to be found there, I remember discussing the "big takeaway" with Ricardo Ernst, my ops professor at Georgetown and one of the funniest, smartest and most valuable teachers it has been my honor to study with.  The bullet-point version was this:

  • If you have a guy putting 10 wheels an hour on cars, and you provide the right incentives to make it 11, he will.
  • If you have another guy putting on 14 hoods an hour and you provide the right incentives to make it 16, he will.
  • Do this all down the line, and what you have is a crew of “top performers”, every one of them beating their quotas and earning bonuses… and a factory that’s going to be shut down because everything is going wrong.

Huh?

The system can't run any faster than its slowest step, and if you incent only speed, quality will suffer besides.  So what happens?  Raw unit throughput is constrained by the slowest part of the process (say, the wheel guy), rework costs balloon (because quality inevitably falls), inventory expense explodes (because of all the half-finished cars piling up before the wheel station), and finished-product output craters.  All the while, your individual performers are each beating their quotas and earning bonuses, while the business loses its shirt.
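If you want to see the effect in numbers, a back-of-the-envelope sketch (rates invented) makes the point:

```python
# Two stations in a line: hoods feed wheels. Incentives pushed each station's
# individual rate up, but finished output is still capped by the slowest step,
# and the difference just piles up as half-finished inventory.
hood_rate, wheel_rate = 16, 10        # units per hour after the "incentives"
hours = 8                             # one shift

finished = wheel_rate * hours                  # 80 cars actually completed
wip_pile = (hood_rate - wheel_rate) * hours    # 48 hooded, wheel-less cars

print(f"finished: {finished}, piled up before the wheel station: {wip_pile}")
```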

Oops.

What’s the point?  Well, here’s the (possibly?) useful thought I’m hoping came out of the mental Mulligan Stew.  Whether the Goal-with-a-capital-G (hey, there’s a reason he titled the book that way) is cars produced, the finishing time in a 320 mile race, or, where this all started, which is writing good software, when you focus on  local rather than global optima, what you get is counter-productivity.  Maybe that tortoise was on to something…

Nate Silver, Fox News, and the Gutenberg Effect, or “How a World Awash in Data Explains GOP Befuddlement on Nov 7th”

Plenty of ink has already been spilled over, at and about Nate Silver and the 538 Blog this election cycle, and even after the election is over, there are still some folks who both deny his math and/or claim that the problem was Hurricane Sandy, Chris Christie or that the Obama campaign “stole the election” or “suppressed the vote“.

What in the world does any of this have to do with the (somewhat intermittent) “Digital Water” meme I’m supposed to be so focused on and my obsession with how people will, and do, react to a world ever-more awash in data?

What was interesting to me as an analysis guy, and appalling to me as a data head and independent voter,  was watching the comments and criticisms of Silver’s 538 Blog before the election.  The astonishing litany of rationales assembled by Fox et al for why Silver was wrong, and just how wrong he was, defied both advanced statistics of the type in which Silver is an expert and the common sense in which we mere mortals are more versed.  While he admits to being an Obama supporter, he’s first and foremost a statistician and forecaster dedicated to understanding the science of accurate predictions.  Yet there were volumes written on critiques of his methodology, his assumptions, his math skills, and probably far more personal attacks on blogs I don’t read.

Nevertheless, Silver has now shown – in two elections in a row, with 99 out of 100 states called correctly – that a deep understanding of polls and statistics, and a respect for math and facts, cannot be undone by all the denials (google "Karl Rove + election night + meltdown") and logical contortions (see "Dick Morris + prediction + landslide") that kept the conservative faithful engaged, entertained and, ultimately, completely unprepared for Election Day.

In the inevitable party navel-gazing that follows an election-year blowout, two questions have been haunting the conservative rank-and-file.  The first is the obvious "how could America have voted for this guy again?"  This is basically a partisan and political discussion of little interest to me, at least in this context.

More salient to this discussion is "How did we get it THAT wrong?"  This has mostly been addressed in the press by dissecting the exit polls, and by talking about changing demographics, Hispanic turnout and the fallout among sensible centrists like me from Republican candidates who don't believe in eighth-grade biology or a planet much older than Hal Holbrook.  (While much ignored nationally compared to Todd Akin, this last one – an unopposed Congressman who believes Earth is 9,000 years old, that evolution is a lie created by Satan himself, and who, most insultingly, also sits on the House Science Committee – is exactly the kind of story that sends sane moderates like me running into the arms of an otherwise completely beatable incumbent.  God bless Bobby Jindal and his "we have just GOT to stop saying stupid shit" speech.)

Is that what really happened?  I think there’s more going on here, and my answer is two parts.  The first comes from Silver, not in his blog, but in his book, The Signal and The Noise.  I was listening to it on audio CD in my car this week and had to back it up and listen to it three times.  Silver was speaking about the changes that came after Gutenberg’s invention of the printing press, but the same is even more relevant to the “Digital Water” phenomenon, where the world is awash not only in objective and numerical data but the self-published content of every opinion, theory and form of intellectual quackery imaginable.   He explained what I am calling here the “Gutenberg Effect” as follows:

“Paradoxically, the result of having so much more shared knowledge was increasing isolation…  The instinctual shortcut that we take when we have too much information is to engage with it selectively, picking out the parts we like and ignoring the remainder, making allies with those who have made the same choices, and enemies of the rest.”

Put into the context of the 2012 election cycle, I think what went wrong was the intellectual and media isolation that many partisans, but particularly those on the right, increasingly engaged in.  The so-called echo chamber – in which attitudes and platitudes of an openly partisan nature ricochet and amplify through the canyons of Fox News, RedState.com and Rush Limbaugh's radio show (or, if you prefer, MSNBC, the Daily Kos and the Rachel Maddow Show) – increasingly discounts or vilifies any opinion or person with an alternate view.

Many, or even most, of the criticisms, however, are ideological, personal, unsubstantiated and/or filled with logical fallacies and downright absurdity – but not facts, and not math.  And here is where the world awash in data rears its head in Election 2012.  The Gutenberg Effect that Silver describes appears to have actually caused the Republican Party to drink so much of its own pre-filtered Kool-Aid that a "shellshocked" Mitt Romney seems to have been telling the truth when he told reporters early on November 6th that his staff hadn't even written a concession speech.

Despite the fact that (as Silver’s blog highlights) an objective read of the numbers showed Romney would have to essentially run the table on the swing states and catch every break to win, the Romney campaign – and millions of hardworking and genuinely dedicated supporters – quite literally couldn’t believe it when he, conclusively and resoundingly, lost.

If the first thing that happened was this Gutenberg Effect, an ideologically aligned group of people taking stock of data selectively to support their pre-established beliefs, I believe the second was a staggering act of exploitation by the very purveyors of that selectively-chosen information.  Check out the video below starting at 5:01, an exchange between David Frum and Joe Scarborough, two guys I don’t always agree with but who I think generally put “smart”, “factual”, and “conservative” rightly back together in one sentence.

To quote Frum, “…the real locus of the problem is the Republican activist base, and the Republican donor base. They went apocalyptic over the past four years, and that was exploited by a lot of people in the conservative world.  I won’t soon forget the lupine smile that played over the head of one major conservative institution when he told me that ‘our donors think the apocalypse has arrived‘. Republicans have been fleeced, exploited and lied to by a conservative entertainment complex.”

Taken together, I believe these two effects show both the root cause of the completely dumbfounded Republican reaction on November 7th and a guide to a much truer understanding of on-the-ground election realities for any national campaign going forward.  A clear-eyed view of the state of the race should start with three things:

1.  Understand the Gutenberg Effect and realize the election-strategy dangers in an intentionally (and ideologically tilted) selective filter when viewing an over-abundance of opinions, polls and data;

2.  Acknowledge that the media makes far more money if they denigrate the opposition and radicalize and rile up the faithful than if they help their chosen team actually win elections; and

3.  Take these facts together and strive for the most objective, fact-based view possible of polls, voters, the economy and the country over the coming election cycle, and make sure you listen to, and account (literally) for the views, numbers and opinions presented by the people who most disagree with you.

While I think the right currently has a larger problem than the left in this area (i.e. they are often a party whose candidates lose swing votes like mine when they not only ignore but vilify math, science, and objective, rigorous analysis), the lesson for all sides is, I believe, to separate your opinions from the data.  Stop attacking people like Nate Silver, and perhaps start reading his book instead.
