Skynet, Smugglers and The Gift of Fear: What we can learn from snap judgements, and machines can learn from us

So, in the day or two since I posted the piece about “Big Filter”, I’ve gotten several calls, comments and emails that all seemed to focus on the scary notion of “machines that think like us”.  Some folks went all “isn’t that what Skynet and The Matrix, and (if you’re older, like me) The Forbin Project and W.O.P.R. were on about?”  If machines start to think like us, doesn’t that mean all kinds of bad things for humanity? 

Actually, what I said was, “We have to focus on technologies that can encapsulate how people, people who know what they’re doing on a given topic, can inform those systems… We need to teach the machines to think like us, at least about the specific problem at hand.”  Unlike some people, I neither have unrealistic expectations for the grand possibilities of “smart machines”, nor fear that they will somehow take over the world and render us all dead or irrelevant.  (Anyone who has ever tried to keep a Windows machine from crashing, bogging down or “acting weird” after about age 2 should share my comfort in knowing that machines can’t even keep themselves stable, relevant or serviceable for very long.) 

No, what I was talking about, to use a terribly out-of-date phrase, was what used to be known as “Expert Systems.”  The term is out of favor now, but that doesn’t mean the basic idea is wrong.  I was talking about systems that are “taught” how someone who knows a very specific topic or field of knowledge thinks about a very specific problem.  If, and this is a big if, you can ring-fence the explicit question you’re trying to answer, then it is, I believe, possible to teach a machine to replicate the basic decision tree that will get you to a clear, and correct, answer most of the time.  (I’m a huge believer in the Pareto Principle, or “80-20 rule”, and “most of the time” is more than good enough to save gobs and gobs of time and money on many, many things.  More on that in a moment.) 
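To make that concrete, here’s the shape such a system takes, as a minimal sketch with a deliberately abstract question and rules I invented purely for illustration: the machine answers the clear-cut cases, which are most of them, and punts the ambiguous remainder to a human.

```python
# A toy "expert system" for one ring-fenced question. Every rule and
# threshold below is invented for illustration; a real system would be
# built by interviewing the expert and encoding *their* decision tree.

def triage(case):
    """Return 'yes', 'no', or 'ask a human' for one narrow question."""
    # Rules the expert can articulate handle the clear-cut majority...
    if case["signal_a"] and case["signal_b"]:
        return "yes"
    if not case["signal_a"] and case["risk_score"] < 0.2:
        return "no"
    # ...and the ambiguous remainder gets escalated to a person. Per the
    # 80-20 rule, automating the easy majority is where the payoff is.
    return "ask a human"

print(triage({"signal_a": True, "signal_b": True, "risk_score": 0.9}))  # yes
```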

A few years ago now, I read a book called “The Gift of Fear” by Gavin de Becker, an entertaining and easy read for anyone interested in psychology, crime fighting, or the stuff I’m talking about.  The very basic premise of that book, among other keen insights, is that our rational minds can get in the way of our limbic or caveman brains telling us things we already “know”, the kind of instantaneous, can’t-explain-it-but-I-know-I’m-right, in-our-gut knowledge that our rational brains sometimes override or interfere with, occasionally to our great harm.  (See the opening chapter of The Gift of Fear, in which a woman whose “little voice,” as I call it, told her there was something wrong with that guy, but she didn’t listen, and was assaulted as a result.  Spoiler alert: she did, however, also escape that man, who intended to kill her, using the same intuition.  Give it a read.) 

De Becker, himself a survivor of abuse and violence, went on to study the evil that men do in great detail, and from there, to codify a set of principles and metrics that, encoded into a piece of software, enabled his physical security firm to evaluate risk and “take-it-seriously-or-not-ness” for threats against the battered spouses, movie stars and celebrities it often protects.  Is this Skynet taking over NORAD and annihilating humanity?  Of course not.  What it is, however, is the codification of often-hard-won experience and painful learning, the systematizing of smarts. 

I was thinking about all this in part because, in addition to the comments on my last post, I’m in the middle of re-reading “Blink” (sorry, I appear to be on a Malcolm Gladwell kick these days).  It’s about snap decision making and the part of our brain that decides things in two seconds without rational input or logical thought.  A few years ago, as some of you know, my good friend Nick Selby of (among many other capes and costumes) the Police Led Intelligence Blog decided he was so passionate about applying technology to making the world better and communities safer that he both founded a software company (StreetCred Software – congrats on winning the Code for America competition this year!) and became a police officer to gain the expertise he and his partner would encode into the software.  He told me a story from his days at the police academy.  I may have the details wrong on this bit of apocrypha, but you’ll get the point. 

During training outside of Dallas, there was an experienced veteran who would sometimes spend time helping catch smugglers running north through Texas from the Mexican border.  “Magic Mike,” as I’ll call him (I can’t remember his real name), could stand on an overpass and tell the rookies, “Watch this.”  He’d watch the traffic flowing by beneath him, pick out one car seemingly at random and say, “That one.”  (Note that, viewed at 60 mph and looking at the roof from above, the age, gender, race or other “profiling” concerns about the occupants are essentially a non-issue here.) 

Another officer would pull over the car in question a bit down the road, and, with shocking regularity, Magic Mike was exactly right.  How does that happen?!  And can we capture it?  My argument from yesterday is that we can, and should.  We’re not teaching intelligent machines in any kind of scary, Turing-Test kind of way.  No, it’s much clearer and more focused than that.  Whatever went on in Magic Mike’s head – the instantaneous Mulligan Stew of car make, model, year, speed, pattern of motion, state of license plate, condition, etc. – if it can be extracted, codified and automated, then we can catch a lot more bad guys. 
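If Mike (or someone riding along with him) could articulate the cues, a first pass at codifying them might be nothing fancier than a weighted checklist.  Here’s a minimal sketch of that idea; every feature, weight and threshold below is my own invention for illustration, not anything an actual interdiction program uses.

```python
# A hypothetical first pass at codifying "Magic Mike": score each passing
# vehicle on cues he might be reacting to. Features, weights and the
# threshold are all invented for illustration, not real criteria.

SUSPICION_WEIGHTS = {
    "sagging_rear_suspension": 3.0,   # heavy hidden load?
    "out_of_state_plate": 1.5,
    "driving_exactly_at_limit": 1.0,  # conspicuously careful driver
    "fresh_paint_on_old_body": 2.0,
}

def suspicion_score(observations):
    """Sum the weights of whichever cues were actually observed."""
    return sum(weight for cue, weight in SUSPICION_WEIGHTS.items()
               if observations.get(cue))

car = {"sagging_rear_suspension": True, "out_of_state_plate": True}
if suspicion_score(car) >= 4.0:       # threshold tuned against outcomes
    print("That one.")
```

The real work, of course, is the interviewing: getting the Mikes of the world to surface cues they may not even know they’re using.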

I personally led a similar effort in cyberspace.  Some years ago, AOL decided that member safety was a costly luxury and started laying off lots of people who knew an awful lot about Phishing and spoof sites.  Among those in the groups being RIF’ed was a kid named Brian, who had spent untold hours sitting in a cube looking at Web pages that appeared to be banks, or PayPal, or whatever, saying, “That one’s real.  That one’s fake.  That one’s real, that one’s fake.”  He could do it in seconds.  So we hired him, locked him in an office and said, “You can’t go to the bathroom ’til you write down how you do that.” 

He said it was no big deal – over the years he’d developed a 27-step process so he could teach it to new guys on the team.  Just one of those steps turned out to be “does it look like any of the thousands of fake sites I’ve gotten to know over the years?”  Encapsulating Brian’s 27 steps in a form a machine could understand took 400 algorithms and nearly 5,000 individual steps.  But… so what?  When the weeks of effort were done, we had the world’s most experienced Phish-spotter built into a machine that thought the way he did, and worked 24×7 with no bathroom breaks.  We moved this very bright person on to other useful things, while a machine now did what AOL used to pay a team of people to do, and it did it based not on simple queries or keywords, but by mimicking the complex thought process of the best guy there was. 
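I obviously can’t reproduce Brian’s actual 27 steps (they belonged to AOL, and I don’t remember them anyway), but here’s the flavor of turning a human checklist into code, using a few phishing heuristics that are common knowledge in the field; the two-flag threshold is arbitrary.

```python
# Not Brian's 27 steps; just the flavor of a human checklist as code.
# Each rule is a commonly known phishing heuristic.

import re
from urllib.parse import urlparse

def looks_like_ip(host):
    """A raw IP address where a domain name should be."""
    return re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host) is not None

def brand_in_subdomain(host, brands=("paypal", "aol", "bank")):
    """A brand name buried in a subdomain, e.g. paypal.evil-site.ru."""
    parts = host.split(".")
    return any(brand in part for brand in brands for part in parts[:-2])

def suspicious(url):
    host = (urlparse(url).hostname or "").lower()
    red_flags = [
        looks_like_ip(host),
        brand_in_subdomain(host),
        "@" in url,               # userinfo trick: the real host follows @
        url.count("-") >= 3,      # hyphen-stuffed lookalike domain
    ]
    return sum(red_flags) >= 2    # two or more red flags = treat as fake

print(suspicious("http://paypal.account-update-secure-login.ru/"))  # True
```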

If we can sit with Brian, who can spot a Phishing site, or de Becker, who can spot a serious threat among the celebrity-stalker wannabes, or Magic Mike, who can spot a smuggler’s car from an overpass at 70 miles an hour, and understand how they know what they know in those instant flashes of insight or experience, then we can teach machines to produce an outcome based not just on simple rules but by modeling the thoughts of the best in the business.  Whatever that business is – catching bad guys, spotting fraudulent Web sites, diagnosing cancer early or tracking terrorist financing through the banking system – that (to me) is not Skynet, or WOPR, or Colossus.  That’s a way to better communities, better policing, better healthcare, and a better world. 

Corny? Sure.  Naive? Probably.  Worth doing?  Definitely.  


Yet another water metaphor, or “Why physically securing our borders is a dumb way to spend money…”

I read this post earlier today, and was struck by the fact that (IMO, and with all due respect to the author) it was the commenters, i.e. the community of people actually doing the work and using the technology, not the ones selling or buying it, who hit the right point.  So many of them have so many valid, and in many cases obvious, complaints that it got me wondering about the border protection issue, one I hadn’t really thought about before.

So, as I so often do, I find my mind melding the original bean-counter nerd (my formal education) with the computer nerd (most of my actual career), and here’s what comes out from under the green-eyeshade-cum-propeller-beanie: as a purely economic and technological matter, I can’t understand the math of trying to physically secure our borders.  It asks for all kinds of really expensive solutions and technologies that are unlikely to work, when the money could be better spent securing our SOCIETY in ways that would dis-incent the illegal migration in the first place.

Here’s my unscientific argument:

People are like water too (see the Digital Water series here, here and here).  If you put something in their way, they flow around it to get where they want to go.  Given that fact, plus a bit of rudimentary economics and technology, “securing our borders” is, in my uninformed opinion, mathematically provable to be a dumb way to spend scarce resources, both human and financial.

What’s the problem?  Well, there are many, but let’s start with the water metaphor.  Imagine people as a river, pouring across one or several points along a thousand-mile stretch of border.  With water, the pull is that of gravity; with people, that of economic prosperity, freedom from persecution, reunification with loved ones or other human need.  Either way, the flow is basically unidirectional and essentially irresistible.  So what happens if you dam up one little piece of the border?  The water flows around it.  Duh, right?  Well, most of the proposals involve some combination of physical barriers (hopeless – I’m not making this up, fencing has cost an average of nearly $4 MILLION a mile!) and digital barriers.

If physical barriers are impractical (but we sure spent a bunch on them anyway), we get into cameras, drones, IR imaging, blah blah blah.  OK, suppose for a moment that the physical gear were actually available in sufficient volume to secure a thousand-mile stretch of border (remember that, with the crenellations of the Great Lakes, for example, a thousand miles of shoreline may span a lateral distance of only a few hundred miles in a two- or three-state region).  So now you’ve got how many thousands of cameras, sensors, drones and collection nodes running?  How much did that gear cost?

Here’s the resultant problem – how many people do you need to watch, sift, prioritize and act on all that data?  We’ve run into the same problem in the war in Afghanistan.  We’ve spent so much money on drones and other eyes in the sky, and they are so great at producing data, that we have far more full-motion video from every corner of the theatre than we have people to watch the screens.  So without some smart downstream systems or algorithms (full disclosure: I make prioritization algorithms for a living, so I have a bias here), what good are 300 TV screens’ worth of video running at the same time if you only have six guys to watch them?  And whatever systems you buy to address that problem cost still more money.
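Since prioritization is my day job, here’s the general shape of the fix, sketched with invented numbers: score every event the sensors produce and put your six analysts on the top of a ranked queue instead of in front of 300 live screens.  The features and weights below are placeholders, not anything fielded.

```python
# Don't show 6 analysts 300 live feeds; rank events by priority and work
# the top of the queue. All scores and weights are illustrative only.

import heapq

def priority(event):
    """Higher = more urgent. The weights here are pure placeholders."""
    score = 0.0
    score += 5.0 if event.get("motion_detected") else 0.0
    score += 3.0 if event.get("night_time") else 0.0
    score += 4.0 * event.get("proximity_to_crossing", 0.0)  # scaled 0..1
    return score

events = [
    {"id": 1, "motion_detected": True, "night_time": True},
    {"id": 2, "proximity_to_crossing": 0.9},
    {"id": 3},                        # quiet sensor, bottom of the pile
]

# heapq is a min-heap, so push negated scores to pop the highest first.
queue = [(-priority(e), e["id"]) for e in events]
heapq.heapify(queue)
while queue:
    neg_score, event_id = heapq.heappop(queue)
    print(f"review event {event_id} (score {-neg_score:.1f})")
```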

Now calculate how many people sneaking over the border from Canada you have just stopped.  How much economic damage to the US have you just averted?  So, on a per-person-stopped basis, what was the investment?  What is the ROI?  Don’t forget to fully load the headcount (as we say in the bean counter biz) with the cost of catching, incarcerating, processing and deporting each of them.
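To see why those questions answer themselves, run the back-of-the-envelope version.  Every input below is a made-up placeholder (only the roughly $4-million-a-mile fencing figure comes from above); plug in your own numbers and see if the picture changes.

```python
# Back-of-the-envelope cost per person stopped. Every input is a made-up
# placeholder except the ~$4M/mile fencing figure cited above.

fence_cost_per_mile = 4_000_000
miles = 1_000
tech_and_staffing = 500_000_000        # cameras, drones, analysts (guess)
capital = fence_cost_per_mile * miles + tech_and_staffing

people_stopped_per_year = 5_000        # pure guess
cost_per_apprehension = 30_000         # catch, hold, process, deport (guess)
annual_ops = people_stopped_per_year * cost_per_apprehension

per_person = (capital + annual_ops) / people_stopped_per_year
print(f"Year-one cost per person stopped: ${per_person:,.0f}")
# ~$930,000 per person on these numbers. Now ask how much economic
# damage each stop actually averted, and the ROI answers itself.
```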

Now add in how long it will take (in a world awash in data) for the coyotes (or Canadian equivalents) to work out the areas that are and aren’t effectively covered, camera’d and patrolled.  So, in a shrinking-budget political and fiscal environment, we have to ask, what is the ROI on all this technology and expense?

So, what’s a possible alternative? Spend the money putting up “anti-gravity” barriers.  Make the pull less pull-y.  Securing physical space over long distances is not in the “sweet spot” of what technology is good at.  Gathering, storing, sharing and disseminating information? Computers ARE good at that.  For the amounts of money involved in “securing the border”, how much technology could we create that makes the appeal and viability of illegal immigration much lower?

Could we make it way harder to get a job or paid work of ANY kind?  Sure.  Could we map historical data and interview deportees to understand how they stayed as long as they did?  Who hired them?  What work did they do?  Then use the data to identify the most likely places illegals are being employed now, and put pressure on, and/or incent, those industries to hire only documented workers?  Yep.  Data is great for all that kind of stuff.
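As a toy example of what I mean by “use the data”: take deportee interview records, weight each industry-and-region pair by how long people managed to stay employed there, and rank the hotspots.  The records and field names here are invented for illustration.

```python
# A hypothetical sketch: aggregate deportee-interview records to rank
# where enforcement (or hiring incentives) would bite hardest.
# All records and field names are invented for illustration.

from collections import Counter

interviews = [
    {"industry": "construction", "region": "TX-North", "months_stayed": 18},
    {"industry": "agriculture",  "region": "TX-West",  "months_stayed": 30},
    {"industry": "construction", "region": "TX-North", "months_stayed": 24},
]

# Weight each industry/region pair by months of employment -- a rough
# proxy for how reliably it absorbs undocumented labor.
hotspots = Counter()
for record in interviews:
    hotspots[(record["industry"], record["region"])] += record["months_stayed"]

for (industry, region), weight in hotspots.most_common():
    print(f"{industry} in {region}: weight {weight}")
```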

Here’s another thought – how did those people not just get paid, but live at all?  Proof of legal presence is (where I live) already required to get hired, register a car, get a license, or lease an apartment.  Close the loopholes that allow illegal immigrants to live and work (while simultaneously creating sensible, pro-economic growth policies to bring in needed guest labor), and there will be less incentive to come illegally.  If you’ll be just as broke, homeless and hungry in Texas as you will in Mexico, and smart data modeling ensures you will be found and deported a lot faster, the incentives to come in the first place start to dry up, don’t they?

Why can’t we do the things technology is GOOD at to address illegal immigration cost effectively? (Oh, bureaucratic inertia, local politics, resistance to change, agency turf wars and vested interests and lobbying on the part of the vendors getting paid to do it the dumb way. But aside from all that?)

ONE FINAL NOTE – the “security” argument.  Here’s one I love: “We HAVE to physically secure the borders to keep out terrorists.”  This is the most transparently dumb defense ever of profligate, wasteful spending.  A border fence, patrols, sensors, etc. are all meant to address illegal immigration as a macroeconomic issue and a legitimate crime/LE concern.  It’s about JOBS and routine criminal concerns (both valid, but there are better ways to spend the money, as I argue above).  You can make a dent in the river of people flowing over a border with nothing but the shirts on their backs.  It is supremely unlikely a fence will keep out the ONE guy who’s coming to blow up the Sears Tower.  Why?

A) You’ll never stop more than a portion of the river, so how do you know you’ll catch the half of the water containing the next Mohammed Atta?

B) A determined, well-funded illegal, or someone with the backing of a terrorist organization, has options for getting in that don’t involve swimming the Rio Grande or Lake Ontario.

And C) the people we really need to be afraid of don’t come in that way anyway.  Let’s see here:

  • All of the 9/11 hijackers? Entered the US through legal channels.
  • Khalid Sheikh Mohammed?  Entered the US on a legal visa.
  • Ramzi Yousef, the 1993 WTC bomber? Arrived through JFK airport in NY.
  • Times Square bomber? Naturalized US citizen.
  • Y2K Bomber? Stopped by an astute agent at a standard border crossing station.

You get the idea.  Anyone who says securing our borders (not better customs control, not immigration control, but our physical territorial lines with the rest of the world) is necessary because it will stop terrorism is either too dumb to know that’s a silly argument, knows it but thinks voters are too dumb to know it’s a silly argument, or has a stake in the contractors building the fence.  Any way you slice it, the data say otherwise, and it makes me seriously question the person making the argument.


Disclaimer: The views expressed on this blog are mine alone, and do not represent the views, policies or positions of Cyveillance, Inc. or its parent, QinetiQ-North America.  I speak here only for myself and no postings made on this blog should be interpreted as communications by, for or on behalf of, Cyveillance (though I may occasionally plug the extremely cool work we do and the fascinating, if occasionally frightening, research we openly publish.)
