Skynet, Smugglers and The Gift of Fear: What we can learn from snap judgements, and machines can learn from us

So, in the day or two since I posted the piece about “Big Filter“, I’ve gotten several calls, comments and emails that all seemed to focus on the scary notion of “machines that think like us”.  Some folks went all “isn’t that what Skynet and The Matrix, and (if you’re older, like me) The Forbin Project, and W.O.P.R were on about?”  If machines start to think like us, doesn’t that mean all kinds of bad things for humanity? 

Actually, what I said was, “We have to focus on technologies that can encapsulate how people, people who know what they’re doing on a given topic, can inform those systems… We need to teach the machines to think like us, at least about the specific problem at hand.”  Unlike some people, I have neither unrealistic expectations for the grand possibilities of “smart machines”, nor do I fear that they will somehow take over the world and render us all dead or irrelevant.  (Anyone who has ever tried to keep a Windows machine from crashing or bogging down or “acting weird” after about age 2 should share my comfort in knowing that machines can’t even keep themselves stable, relevant or serviceable for very long.) 

No, what I was talking about, to use a terribly out-of-date phrase, was what used to be known as “Expert Systems”, a term out of favor now, but that doesn’t mean the basic idea is wrong. I was talking about systems that are “taught” how someone who knows a very specific topic or field of knowledge thinks about a very specific problem.  If, and this is a big if, you can ring-fence the explicit question you’re trying to answer, then it is, I believe, possible, to teach a machine to replicate the basic decision tree that will get you to a clear, and correct, answer most of the time.  (I’m a huge believer in the Pareto Principle or “80-20 rule” and most of the time is more than good enough to save gobs and gobs of time and money on many many things.  More on that in a moment.) 

A few years ago now, I read a book called “The Gift of Fear” by Gavin de Becker, an entertaining and easy read for anyone interested in psychology, crime fighting, or the stuff I’m talking about.  The very basic premise of that book, among other keen insights, is that our rational minds can get in the way of our limbic or caveman brains telling us things we already “know”, the kind of instantaneous, can’t-explain-it-but-I-know-I’m-right, in-our-gut knowledge that our rational brains sometimes override or interfere with, occasionally to our great harm.  (See the opening chapter of The Gift of Fear, in which a woman whose “little voice,” as I call it, told her there was something wrong with that guy, but she didn’t listen, and was assaulted as a result.  Spoiler alert: she did, however, also escape that man, who intended to kill her, using the same intuition. Give it a read.) 

De Becker, himself a survivor of abuse and violence, went on to study the evil that men do in great detail, and from there, to codify a set of principles and metrics that, encoded into a piece of software, enabled his firm to evaluate risk and “take-it-seriously-or-not-ness” for threats against the battered spouses, movie stars and celebrities his physical security firm often protects.  Is this Skynet taking over NORAD and annihilating humanity? Of course not.  What it is, however, is the codification of often-hard-won experience and painful learning, the systematizing of smarts. 

I was thinking about all this in part because, in addition to the comments on my last post, I’m in the middle of re-reading “Blink” (sorry, I appear to be on a Malcolm Gladwell kick these days.)  It’s about snap decision making and the part of our brain that decides things in two seconds without rational input or logical thought.  A few years ago, as some of you know, my good friend Nick Selby of (among many other capes and costumes) the Police Led Intelligence Blog, decided he was so passionate about applying technology to making the world better and communities safer that he both founded a software company (streetcred software – Congrats on winning the Code for America competition this year!) and became a police officer to gain that expertise he and his partner would encode into the software.  He told me a story from his days at the Police Academy.  I may have the details wrong on this bit of apocrypha, but you’ll get the point. 

During training outside of Dallas, there was an experienced veteran who would sometimes spend time helping catch smugglers running north through Texas from the Mexican border.  “Magic Mike,” as I call this guy (I can’t remember his real name), could stand on an overpass and tell the rookies, “Watch this.”  He’d watch the traffic flowing by beneath him, pick out one car seemingly at random and say, “That one.” (Note that, viewed at 60 mph and looking at the roof from above, the age, gender, race or other “profiling” concerns about the occupants are essentially a non-issue here.) 

Another officer would pull over the car in question a bit down the road, and, with shocking regularity, Magic Mike was exactly right.  How does that happen?!  And can we capture it?  My argument from yesterday is that we can, and should.  We’re not teaching intelligent machines in any kind of scary, Turing-Test kind of way.  No, it’s much clearer and more focused than that.  Whatever went on in Magic Mike’s head – the instantaneous Mulligan Stew of car make, model, year, speed, pattern of motion, state of license plate, condition etc. – if it can be extracted, codified and automated, then we can catch a lot more bad guys. 

I personally led a similar effort in cyberspace.  Some years ago, AOL decided that member safety was a costly luxury and started laying off lots of people who knew an awful lot about Phishing and spoof sites.  Among those in the groups being RIF’ed was a kid named Brian, who had spent untold hours sitting in a cube looking at Web pages that appeared to be banks, or Paypal or whatever, saying, “That one’s real. That one’s fake.  That one’s real, that one’s fake.”  He could do it in seconds. So, we hired him, locked him in an office and said, “You can’t go to the bathroom till you write down how you do that.” 

He said it was no big deal – over the years he’d developed a 27-step process so he could teach it to new guys on the team.  Just one of those steps turned out to be “does it look like any of the thousands of fake sites I’ve gotten to know over the years?”  Encapsulating Brian’s 27 steps in a form a machine could understand took 400 algorithms and nearly 5,000 individual steps.  But… so what?  When weeks of effort was done, we had the world’s most experienced Phish-spotter built into a machine that thought the way he did, and worked 24×7 with no bathroom breaks.  We moved this very bright person on to other useful things, while a machine now did what AOL used to pay a team of people to do, and it did it based not on simple queries or keywords, but by mimicking the complex thought process of the best guy there was. 
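To make the idea concrete, here’s a toy sketch in Python of that kind of encoding: a handful of weighted checks summed into a suspicion score for a URL. The cues and weights are invented for illustration; they are not Brian’s actual 27 steps, just the shape of the technique.

```python
import re
from urllib.parse import urlparse

# Each check is (weight, test). These cues are hypothetical examples of
# the kind of thing an expert Phish-spotter might look for.
CHECKS = [
    (3, lambda url: re.search(r"\d+\.\d+\.\d+\.\d+", urlparse(url).netloc) is not None),  # raw IP instead of a hostname
    (2, lambda url: urlparse(url).netloc.count(".") > 3),  # deeply nested subdomains
    (2, lambda url: "@" in url),  # userinfo trick, e.g. http://paypal.com@evil.example
    (1, lambda url: any(w in url.lower() for w in ("login", "verify", "secure"))),  # bait words
]

def phish_score(url):
    """Sum the weights of every check the URL trips."""
    return sum(weight for weight, check in CHECKS if check(url))

print(phish_score("http://192.0.2.7/paypal/login"))  # trips the IP and bait-word checks
print(phish_score("https://www.paypal.com/"))        # trips nothing
```

The real system was obviously far richer (400 algorithms, nearly 5,000 steps), but the design choice is the same: turn each of the expert’s cues into a testable rule, weight it, and aggregate.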

If we can sit with Brian, who can spot a Phishing site, or De Becker who can spot a serious threat among the celebrity-stalker wannabes, or Magic Mike who can spot a smuggler’s car from an overpass at 70 miles an hour, when we can understand how they know what they know in those instant flashes of insight or experience, then we can teach machines to produce an outcome based not just on simple rules but by modeling the thoughts of the best in the business.  Whatever that business is – catching bad guys, spotting fraudulent Web sites, diagnosing cancer early or tracking terrorist financing through the banking system, that (to me) is not Skynet, or WOPR, or Colossus.  That’s a way to better communities, better policing, better healthcare, and a better world. 

Corny? Sure.  Naive? Probably.  Worth doing?  Definitely.  




SCAM ALERT: Facebook messages just came to a mailbox I don’t use for Facebook

QUICK HIT:  I just got an email from “facebook” with the usual annoying “You have notifications pending” but it came to an account that I don’t use for Facebook.

The actual sender address, which you can see in the picture, is q7frrf4s6rc9 (AT) norma.no.  Norma.no is the legitimate site of a Scandinavian industrial firm, so clearly something’s gone a wee bit amiss in their IT somewhere.

Anyway, for all you happy/active Facebookers out there, take some care and check sender fields, mouseover/hover over the links in those supposed FB emails, or of course, better yet, don’t click ANY links in emails and go log into FB yourself if you have notifications to see.  Screenshot below so you can see what not to trust.


SCAM ALERT: LinkedIn breach and eHarmony phishing, and what you should do about it

Sorry this is late in coming, I was tied up all day yesterday at an offsite. By now most people will probably have heard that about 6.5 million LinkedIn passwords were stolen and posted on a hacker Web site the day before yesterday.  (eHarmony was hit too in case you didn’t know that.) There’s good news and there’s bad news here:

The good news

1.  The only things stolen, supposedly, were passwords.  Why is that good news? Without the matching user account, they’re not very useful.

2.  The passwords were hashed, so MOST, but not all, of them remained unreadable.  Some were posted in clear text, but most were not.

3.  The actual password hack is an easy problem to resolve.  Just log in and change your password.

The Bad News

1.  We’ll probably see many more of the passwords compromised/decrypted soon.  Why?  Well, hashing is done by feeding your password into an algorithm that creates a meaningless string of characters, and there are many standard hashing algorithms of various sophistication and obsolescence in use (MD5, SHA-1 etc.)

Unfortunately, this means that unless the passwords were also “salted” (they weren’t), anyone with the algorithm can hash lists of common passwords and compare the results against the leak.  I would be willing to bet a dollar that the passwords published in cleartext were either common ones for which available libraries had already pre-computed the hash (e.g. password, 12345, mylogin, etc.) or simple ones that were easy to brute force. (There is, by the way, a wee bit of interesting stuff about how they did it, but we’ll get to that a bit further down.)
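To see why unsalted hashes fall so quickly, here’s a minimal dictionary attack sketched in Python.  The “leaked” hashes and the password list are made up for illustration; the point is that without a salt, one pre-computed table cracks every user who picked the same common password.

```python
import hashlib

# Common passwords an attacker would try first.
common_passwords = ["password", "12345", "mylogin", "letmein"]

# Precompute hash -> password. This is exactly what public lookup
# tables do, once, for everyone.
lookup = {hashlib.sha1(p.encode()).hexdigest(): p for p in common_passwords}

# Pretend these were pulled from the leaked dump.
leaked_hashes = [
    hashlib.sha1(b"12345").hexdigest(),         # a common password
    hashlib.sha1(b"tr0ub4dor&3xyzzy").hexdigest(),  # an uncommon one
]

for h in leaked_hashes:
    print(h[:12], "->", lookup.get(h, "not cracked"))
```

With a per-user salt, the attacker would have to redo that precomputation for every single account, which is the whole point of salting.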

2.  The really bad news is that the compromised passwords aren’t the real danger; the danger is the social engineering attacks that have already begun, playing off users’ fears about the breach.  Even IF your password was published in the clear, without your account name it’s useless.  However, most users who see only the headlines don’t know that, or don’t understand the details enough to discern a scam like this one (thanks here to CBS/CNET for the example):

CBS/CNET provided example of LinkedIn Phish

So, what should you actually DO about it?

1.  Type the address for LinkedIn into your browser yourself, and change your password from the account-management screen.

2.  Use a strong password to prevent pre-published or easy decryption of the hash. Having done that, you can then ignore / distrust any email, legitimate or not, that purports to come from LinkedIn regarding the breach and asks you to do anything about it.  (As usual, whenever possible, don’t click links in emails; type the address in yourself and find what you need on the site you know is the real one.)

3.  Since many of us use the same password for lots of Web sites, you might want to update the password on any other site where you used the same one you used for LinkedIn, and

4.  Finally and most importantly (for many reasons), read this strip from XKCD for some ideas on how to create very strong, easy to remember passwords, and for those who don’t already read it, it has the added benefit of introducing you to what is undoubtedly the greatest, nerdiest, smart-humor-est awesomest stick figure blog ever.
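In the spirit of that XKCD strip, here’s a minimal sketch of generating a several-random-words passphrase with Python’s secrets module.  The tiny word list is a stand-in for illustration; in practice you’d draw from a large published list (such as the EFF diceware lists) so the phrase has real entropy.

```python
import secrets

# Stand-in word list; a real one should have thousands of entries.
words = ["correct", "horse", "battery", "staple", "orange", "tundra",
         "anvil", "pickle", "summit", "glacier", "ribbon", "walnut"]

def passphrase(n_words=4):
    """Pick n_words uniformly at random using a cryptographic RNG."""
    return " ".join(secrets.choice(words) for _ in range(n_words))

print(passphrase())
```

Four words from a 7,000-word list gives roughly 51 bits of entropy, far better than the `P@ssw0rd1`-style mangling most sites train us into, and you can actually remember it.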

A final note for the nerd-herd: the brute forcing of the password cracking was reportedly crowd-sourced, which I find both neat and slightly scary.  Like the old SETI@home project that broke radio noise from outer space into chunks for processing on “volunteer” PCs all over the world, password cracking is a wonderful activity for divvying up among thousands of machines, harnessing supercomputer power without having to, you know, spring for a Cray. I wonder whether the machines were volunteered, or rented from a botnet.

SCAM ALERT: Fedex emails, Best Buy text messages and in the news, new APWG report

Just another quick “Be careful” note….

Today, I get to warn you about scams I am aware of because I’ve personally gotten all of them in the last 24 hours.  The first, which I hope and expect NO ONE will fall for, is a flood of “Fedex” notifications that are so badly written they’re actually entertaining.

What’s more interesting to me as a linguist is to see if you can localize the scammer based on HOW it’s badly written.  For instance, Russian speakers (and speakers of other related Slavic languages) will frequently make all kinds of errors with articles. You see, Russian has no equivalents of “a”, “an” or “the”, so in these scams they often appear (and disappear) sporadically and in the wrong places.  See excerpts from my flood of (malware-laden, by the way; please don’t open those attachments!) Fedex notices from the last few days.

  • “Our courier couldn’t make the delivery of parcel.”
  • “Label is enclosed to the letter.”
  • “…information about the procedure of parcels keeping…”

You can almost hear the voice of The Count from Sesame Street.

Then I got a text message that said:

“Your entry last month has WON! Goto and enter your Winning Code: “6655” to claim your FREE $1,000 Bestbuy Giftcard!”

What’s interesting about this one to me is the link sent via text.  This means essentially it is either:

  1. A phish in the classic sense, meaning it just asks you to divulge information on the destination page; or
  2. The link is malicious, which is kind of neat because, given the delivery via SMS, it would therefore (I assume) engage malware targeting either the iOS or Android operating system.

Given the deplorable, nearly non-existent state of mobile malware protections and smartphone anti-virus defenses, I elected not to click the link from my phone to find out.  (Given that the domain was created on Monday of this week via anonymous registration in Panama, this seemed like a good site to avoid.)

Finally, in scam-related news, the Anti-Phishing Working Group published their report on H2 2011.  There’s a nice synopsis here, or you can download the full report from APWG’s Web site.


Columbia Researchers Put Metrics to Phishing Victims’ Gullibility

Researchers at Columbia University have built a small-scale system that synthesizes phishing emails and measures the susceptibility of a targeted population to them.  First-round participants who fell for the simulated scams were notified of their mistake, but were NOT notified that they would also be re-targeted for future probing/attack.  As the guy who (warning, shameless plug alert) authored my company’s Cyber Safety Awareness Training product, I can’t say I’m surprised by the most depressing tidbit: even targets who were warned they were being taken online fell for as many as four successful scams before learning a bit of caution.

I’m just hitting a few highlights of course, but the full paper is an interesting read, available for download at

SCAM ALERT: Justin Bieber emails part of malware spreading over Facebook

Kaspersky Labs researcher Sergey Golovanov has a detailed post this morning about the LilyJade worm, a technologically fascinating bit of naughtiness that is spreading via messages about teen pop star Justin Bieber (though of course the content of the emails will change constantly.)  For users, all you need to know is, as always:

1.  Don’t trust messages, click on links or open attachments from anyone you don’t know.

2. Even if it’s from someone you do know, if the message seems generic, is totally off any topic you care about or seems out of character for the sender, same rules apply.  Their account may have been compromised.

3. If the message seems like it actually might be important, reach out to that person via an alternate channel, e.g. a phone call, text, or email to another account.  You may just make them aware of the fact that their account is compromised and they didn’t know it.

4. Hover your mouse over all links in emails and see if the visible link and the underlying actual destination agree.  If they don’t, don’t click the deceptively labeled link.

5.  Never respond to online requests for personal information, passwords, login credentials or financial data except on a reputable web site you trust (e.g. Amazon, Zappos, eBay) where you TYPED IN THE ADDRESS YOURSELF.
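The hover check in rule 4 can even be automated.  Here’s a minimal sketch using Python’s built-in html.parser that flags anchors whose visible text looks like a URL on a different domain than the real href; the example email snippet is made up for illustration.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (shown_text, real_href) pairs whose domains disagree."""

    def __init__(self):
        super().__init__()
        self.href = None
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")

    def handle_data(self, data):
        text = data.strip()
        # Only compare when the visible text itself looks like a URL.
        if self.href and text.startswith("http"):
            shown = urlparse(text).netloc
            actual = urlparse(self.href).netloc
            if shown and shown != actual:
                self.mismatches.append((text, self.href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example/x">http://www.linkedin.com/reset</a>')
print(auditor.mismatches)
```

This is exactly the mismatch your eyes catch when you hover: the text says linkedin.com, the underlying link goes somewhere else entirely.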

For the really nerdy among you, who care about cross-platform browser vulnerabilities or like reading code on a command line (dorks), the Kaspersky post is pretty interesting and detailed.

SCAM ALERT: Facebook, Gmail, Hotmail, Yahoo – “Rebates” and “New security measures”

Just a quick heads up to all – this post from security vendor Trusteer details the latest widespread, and technologically pretty smart, phishing / malware campaign against users of the big Web-based email services, as well as Visa and Mastercard.  A few articles out there too, but I like the original Trusteer post because it has pictures of the actual materials.

As always:

1.  Assume any email asking you to do, click or download something is fake

2.  Hover your mouse over the links in the email. The destination of the link should appear.  If it goes to a site you’ve never heard of, or the actual link disagrees with the one shown in the text, don’t click it.

3.  If you need something from any web based vendor you use and trust, amazon, gmail, or whatever, type the name in the address bar yourself.

Surf safely!


