Security and Human Behaviour 2010 – Session 4: Culture, Risk, and Fear

Continued from Session 3 – Fraud

Scott Atran (UMich, among others) is an anthropologist who talked about intractable conflict – how groups like Hamas or the Mujahedeen define and experience conflict. He talked of sacred values that lead to sacred conflict. Sacred values are impossible to negotiate over; certain items are simply off the agenda.

One of the most reliable predictors of who will actually commit violence (regardless of political or religious convictions) is who their friends are.

Scott also drew our attention to the fact that groups like Al-Qaeda are grassroots organisations. They don’t actively recruit; there are enough disgruntled people willing to join their ranks.

Dylan Evans (UC Cork) talked about how humans estimate probabilities. He presented an online risk intelligence test his group designed.

Ragnar Löfstedt (King’s) asked whether transparency is always a good thing. He mentioned that doctors didn’t want early disclosure of information on the Internet, as it created half-informed and highly opinionated patients. He also mentioned the FDA’s transparency initiative, which looks like a remarkable paradigm shift.

William J. Burns (Decision Research) pointed out that the highest cost to society comes from the reaction to a terrorist strike, not from the strike itself. He also mentioned that the Times Square “strike” (which wasn’t) was “remarkably well handled”.

Chris Cocking (London Met), fresh in from Glastonbury, talked about emergency crowd management. He disagreed with Gustave Le Bon’s “The Crowd: A Study of the Popular Mind” and argued that crowds display emergent qualities that help stave off disasters. Interestingly, he mentioned that many fire fatalities occur precisely because people do not panic enough and take too phlegmatic an approach to danger. The popular view that crowds by default descend into anarchic, dangerous mobs was directly challenged.

Frank Furedi (Kent) talked about fear and risk. He mentioned the Newsweek article on BP’s oil-spill worst-case scenario and noted that worst-case thinking reduces risk to fantasy. A manifestation of this effect is that (apparently) right now in the UK anyone who has any direct contact with children other than their own (e.g. if you give your neighbour’s children a lift to school some days along with your own) has to be CRB-checked. This has resulted in 7.5 million adults being listed in the CRB registry as “dealing with children”.

That was the end of day 1 of the workshop. Further notes I scribbled in my notebook that day:

Session 5, kicking off the workshop’s second day, to follow…

Security and Human Behaviour 2010 – Session 3: Fraud

…continued from Security and Human Behaviour 2010 – Session 2: Foundations.

Stephen Lea (Exeter) talked about how we make judgements about whether a situation is legitimate or nefarious. One interesting snippet I remember from his talk (it was after lunch, after all) is this:

The bedrooms-to-people ratio (for people’s homes) is a good indicator of social class.

L Jean Camp (Indiana), apart from making a funny statement on browser interface design by using a “golden lock” as her website icon (thus making her site “secure”), talked about risk perception. She mentioned Slovic’s Nine Dimensions of Risk (as discussed in Perception of Risk Posed by Extreme Events, among other papers) and pointed out that risk must be perceived as immediate and familiar. Otherwise people don’t worry too much about it – which, might I add, would explain the sorry state of our botnet-infested Internet nowadays.

Stuart Schechter (Microsoft) talked about his quest within Microsoft Research to design interfaces that help users make the best possible security decision when faced with a dilemma. He demonstrated the progress made on web browser interfaces and on the error messages shown when an SSL certificate error occurs. The interface tries to help end users make the right choice, but does it succeed?

Stuart suggested that following natural language (e.g. English) when choosing how to construct interfaces makes a difference. Users understand the interface better when the visual constructs (text, images, animation) used loosely resemble natural language phrases. For example, it helps having a verb (like “see”) and then images for “pictures”, “videos”, “news”, rather than the other way around, or any other way for that matter.
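As a toy sketch of this idea (my own illustration, not Stuart’s actual interface), compare composing a control’s text in natural phrase order versus the reverse:

```python
# Toy illustration of "interfaces that follow natural language".
# All names and labels here are made up for the example.

def phrase_order_label(subject, verb, objects):
    """Compose interface text as an English phrase: subject, verb, objects."""
    return f"{subject} can {verb}: " + ", ".join(objects)

natural = phrase_order_label("Friends", "see", ["pictures", "videos", "news"])
print(natural)  # Friends can see: pictures, videos, news

# The same facts with the objects first parse less easily as a sentence:
reversed_form = ", ".join(["pictures", "videos", "news"]) + " (seen by Friends)"
print(reversed_form)
```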

I think this is an important take away message. Design interfaces that follow natural language, because that’s how people’s minds are conditioned to work. Sure, it creates more work for multilingual sites, but it’s worth it if you’re Facebook (350+ million users) and you actually do care about your users’ privacy. Of course Facebook, Google etc are in the business of eliciting as much information from us as possible, to sell to advertisers, so there’s a conflict of interest there. They have to provide some privacy controls (to appear they’re playing nice), but are not interested in making it easy enough to lock your personal information because then they would strangle their revenue stream.

Remember, we are not Google’s or Facebook’s or Yahoo’s customers. We are their products. They don’t sell to us. They sell us (every personal bit of information we give them) to advertisers.

Chris Hoofnagle (UC Berkeley) talked about identity theft. He boldly argued that privacy is causing identity theft. Chris talked about how credit & loans are granted. It costs £15 – £20 to process a credit application. There is no intelligent human interaction – humans just open the envelope and feed the paper into a scanner. The computer then makes the credit decision. Chris pointed out that, alas, we’re fighting a losing battle. At this point in time, consumers really have no way to protect themselves against identity theft.

Tyler Moore (Harvard) used game theory to explore whether it is realistic to expect governments to pay much attention to defensive security. Unfortunately, the answer is “no”. Quoting “Would a ‘Cyber Warrior’ Protect Us? Exploring Trade-offs Between Attack and Defense of Information Systems”:

[…] a mutually defensive approach to security is not a stable equilibrium […]
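The instability of mutual defence can be sketched as a simple two-player game. The payoffs below are hypothetical numbers of my own, not taken from the paper; they merely reproduce the prisoner’s-dilemma structure the quote describes:

```python
# Hypothetical payoff matrix for two states choosing between investing in
# offensive ("attack") or defensive ("defend") cyber capabilities.
# Payoffs are illustrative only, not from the paper.
# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("defend", "defend"): (3, 3),   # both safe, but defence is costly
    ("defend", "attack"): (1, 4),   # the defender pays twice: defence + breaches
    ("attack", "defend"): (4, 1),
    ("attack", "attack"): (2, 2),   # mutual insecurity
}

def is_nash_equilibrium(row, col):
    """True if neither player gains by unilaterally deviating."""
    strategies = ("defend", "attack")
    row_payoff, col_payoff = payoffs[(row, col)]
    if any(payoffs[(r, col)][0] > row_payoff for r in strategies):
        return False
    if any(payoffs[(row, c)][1] > col_payoff for c in strategies):
        return False
    return True

# Mutual defence is not stable: each side prefers to switch to attack.
print(is_nash_equilibrium("defend", "defend"))  # False
print(is_nash_equilibrium("attack", "attack"))  # True
```

Under these (assumed) payoffs, mutual defence unravels exactly as the paper’s quote suggests: each player gains by deviating to attack, so the only stable outcome is mutual insecurity.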

He also mentioned the January 2010 Google espionage investigation and Windows 7 development. Both were “helped” by the NSA.

This was the end of session 3 of the workshop.

During the break I scribbled the following in my notebook:

Complacency: People who think they are “in the know” in an area (e.g. investments) are more likely than the layman to be scammed in that very area (in this example, investments).

Security and Human Behaviour 2010 – Session 2: Foundations

…continued from Session 1 – Deception.

Petter Johansson (UCL) presented the fascinating Choice Blindness study and asked whether choice blindness can be used to detect deception.

Michelle Baddeley (Cambridge) asked “Why aren’t people trying to protect their privacy & security?” She described security as a public good, which made me instantly connect computer users’ nonchalance about the harm their actions cause others (the network externalities of unsafe behaviour in a networked world) with the Tragedy of the Commons.
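The public-good framing can be illustrated with a toy model (all numbers are my own made-up illustration, not from the talk):

```python
# Toy model of security as a public good; all numbers are made up.
# Each of N users can pay a private cost to secure their machine; every
# unsecured machine joins a botnet and imposes a small harm on ALL users.

N = 100
SECURE_COST = 5.0     # what one user pays to secure their own machine
HARM_PER_BOT = 0.25   # harm each infected machine inflicts on every user

def total_welfare(num_secured):
    """Net welfare across all users for a given number of secured machines."""
    bots = N - num_secured
    return -(num_secured * SECURE_COST) - N * (bots * HARM_PER_BOT)

# Individually rational: securing costs 5.0 but spares *me* only 0.25 of
# harm, so everyone free-rides...
assert HARM_PER_BOT < SECURE_COST

# ...yet each secured machine removes 0.25 * 100 = 25.0 of total harm,
# so the commons outcome (nobody secures) is by far the worst:
print(total_welfare(0))    # -2500.0: everyone free-rides
print(total_welfare(N))    # -500.0: everyone secures
```

The numbers are arbitrary, but the structure is the tragedy of the commons: each user’s rational choice (don’t pay) leaves everyone worse off than if all had paid.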

Michelle talked about Herbert Simon‘s concept of bounded rationality and how it translates to the struggle between substantive and procedural rationality.

Michelle mentioned further concepts that affect how people make security decisions:

  • Quasi-rational economics
  • Present bias (manifested by procrastination – e.g. we’re happy to pay yearly gym memberships, when we realistically scarcely visit the gym)
  • Our trait of being disproportionately impatient
  • The need for a “strategy-proof design”.

Terence Taylor talked about Natural Security, a concept captured in the book he co-edited, “Natural Security – A Darwinian Approach to a Dangerous World”. He talked about the National Centre for Ecological Analysis and Synthesis (NCEAS) and the Darwinian Security Working Group.

Terence pointed us to the books The Starfish and the Spider and Jean-Francois Rischard’s High Noon: 20 problems, 20 years to solve them.

Rick Wash (Michigan State) took the stage and wondered about people’s motivation. Why do people do what they do? He conducted interviews with home PC users and asked them about their security-related problems. It turns out that most answers fall into one of the following two buckets:

  1. “Viruses”, which includes all bad software
  2. “Hackers”, which includes all bad people

Refer to “Folk Models of Home Computer Security” for the details.

Wolfram Schultz (Cambridge) demonstrated graphical images of the brain as decisions were being made. He pointed out that humans have a subjective probability perception. Also, we take a different risk attitude depending on the stakes involved.

Mark Levine (Lancaster) eloquently explained that it is difficult for aggression to turn into violence. He quoted Dave Grossman’s “On Killing”, where the argument is made that most people studying violence are like “a world of virgins studying sex” – the assumption being that modern societies do not expose their members to real violence, making it very difficult to understand true violence.

Mark demonstrated research on bystander behaviour and noted that an increase in group size usually leads to the de-escalation of aggression. This appears to happen because third parties bring “natural conflict resolution”. He made the counter-intuitive (but compelling) argument that groups bring peace. Hence, groups are not necessarily detrimental to the security of individuals.

At that point I scribbled in my notepad: “Sensationalism leads perception of risk/threat” and “Risk can be used to manipulate a population” – which is a bit like stating the obvious: the establishment uses the media to threaten people into submission.

The paper “The Social Amplification of Risk – A Framework” was mentioned at this point, which begins with:

One of the most perplexing problems in risk analysis is why some relatively minor risks or risk events, as assessed by technical experts, often elicit strong public concerns and result in substantial impacts upon society and economy.

Session 3 of the workshop to follow…

Security and Human Behaviour 2010 – Session 1: Deception

I recently attended the 2010 Security and Human Behaviour workshop, organised by Ross Anderson, Bruce Schneier and Alessandro Acquisti.

For the workshop’s official notes (by Ross), visit the Computer Laboratory, University of Cambridge blog. In the following posts I’ll capture my own notes from the workshop.

Session 1 – Deception


Deception:

  • on a small scale is called fraud.
  • on a large scale is called propaganda.

Jeff Hancock (Cornell) kicked off the presentations by identifying trends in deception:

  1. Recordability of online data
  2. Search algorithms – hence easier retrieval of data
  3. Universal cues of lying/deception
  4. Nature of language in deception:
    1. Truthful language
    2. Deceptive language, which uses less first-person singular (“I”). If we put all deceptive language in a “deceptive” bin of words, we can identify the lie itself, as well as the truthful words/statements surrounding the lie. The actor’s psychological distancing shows up on the lie itself.
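As a toy illustration of this cue (my own sketch, not Hancock’s actual method, and certainly not a lie detector), one could count the rate of first-person singular words in a text:

```python
import re

# Naive illustration of the linguistic cue described above: deceptive text
# tends to use less first-person singular. The two sample sentences are
# made up for the example.

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text):
    """Fraction of words that are first-person singular pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FIRST_PERSON_SINGULAR)
    return hits / len(words)

truthful = "I took the car because I was late and I forgot my keys"
evasive = "The car was taken because of the delay and the keys were forgotten"

print(first_person_rate(truthful) > first_person_rate(evasive))  # True
```

Real analyses use many such features together; a single pronoun count proves nothing on its own.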

Frank Stajano (Cambridge) quoted research his group did on the psychology of scam victims, based on BBC3’s “The Real Hustle” series (80 episodes). Factors exploited in scams were:

  • Consistency
  • Commitment
  • Kindness
  • Distraction

Frank’s research also demonstrates the “Good Samaritan” value of people (and how it can get you in trouble if you’re not careful). Frank mentioned Robert Cialdini‘s book on consistency and commitment and how these inform influence, as well as an Office of Fair Trading report by Stephen Lea et al.

Peter Robinson (Cambridge) was on next. His interests are in Human-Computer Interaction (HCI). He made the point that if we judge computers by the same criteria we use for humans, computers are autistic, since they provide no non-verbal cues. He presented research on decoding facial/bodily movements to understand an individual’s feelings/posture on a particular topic.

Pam Briggs (Northumbria) was on next. She described the prototype of a biometric daemon (inspired by the daemons of Philip Pullman’s books) that might help humans make better security decisions. The premise is that it’s easier to develop a personal relationship with your daemon and rely on the daemon’s discomfort or outright outrage when something is not quite right, than to make fully informed and conscious security & privacy decisions yourself.

Pam also mentioned that many UK schools are now authenticating their students with fingerprints to grant kids access to school meals, which I found appalling.

Mark Frank (SUNY at Buffalo) was on next. He studies deception by people’s facial expressions. Some of the questions he is looking into are:

  • Can we detect liars?
  • Can we detect them in a natural environment?
  • Can we detect them from their facial expressions?

Mark brought in the predominant USA-style approach to safety & security, introducing the familiar notions of “good vs bad guys”, “terrorism”, “airports” and “police” (presumably as the benevolent protector of society).

Mark mentioned SPOT – Screening Passengers by Observation Techniques – a behavioural flagging programme using facial expression analysis in the USA. The TSA has published a Privacy Impact Assessment on the programme.

Martin Taylor was up next. He’s a magician and hypnotist, but he’ll be the first to tell you he doesn’t use hypnotism. He explained that factors like social compliance are the basis of the illusion constructed in a “hypnosis” situation. Many people end up doing things they wouldn’t normally do because of peer pressure, perceived expectations and similar factors. Who needs hypnotism when you’ve got:

  • Suggestion – talking about something makes people more aware of it, even if it’s something completely ordinary. E.g. talking about someone’s pulse rate in a convincing fashion, inducing doubt, citing bogus expert knowledge and so on will make people believe there might actually be something wrong with their heart rate.
    Martin mentioned Derren Brown and Uri Geller as examples of other psychological illusionists.
  • Peer pressure – We tend to follow others, group behaviour overwhelms personal choice etc.
  • Obedience – When someone of authority (bogus or not) commands us to do something, we tend to do it because of the social conditioning we’ve received from parents, school, our jobs etc.

Joe Navarro (now a retired FBI agent) was mentioned as a prime example of someone who can read non-verbal behavioural cues to excel at his job.

The question “when do we have the right to deceive?” came up, especially in the context of necessary deception versus an oppressive authority. In that context deception is one of the few methods to maintain one’s safety and privacy.

Martin also explained that building rapport is very important for social control (or hypnotism, call it what you like), and said the elements of rapport are

  • Similarity
  • Empathy
  • Liking

Session 2 of the workshop to follow…

How to keep your Windows computer secure

As of July 2010, I consider the following advice essential to protecting your privacy & online security if you’re using Microsoft Windows:

Microsoft Security Essentials

Free antivirus that protects your computer from malware (viruses, spyware, adware etc). Available only to genuine Windows installations.

Firefox Web Browser

Free web browser for the security conscious. Always auto-update when prompted. Use the following add-ons to protect your online privacy & security:


NoScript

Free add-on for Firefox that stops things from happening automatically on your computer without your approval. You can teach NoScript which sites you trust for automatic script execution. Essential for a safe online experience.

Better Privacy

Free add-on for Firefox that deletes all LSOs (Flash cookies, a more persistent successor to traditional cookies).

HTTPS Everywhere

Free add-on by the Electronic Frontier Foundation that automatically uses an encrypted connection to the websites you visit, where available (and configured in the preferences). It allows you to be lazy and keep using your old bookmarks/URLs while it transparently redirects your connections to the encrypted (https://) versions of the websites.

AdBlock Plus

Free add-on for Firefox that blocks most advertising banners, making for a simpler, safer browsing experience.

Conspiracy (for advanced users)

Free add-on for Firefox that shows you which countries the Certificate Authorities that authenticate your current TLS session are registered in. Might help expose man-in-the-middle attacks (e.g. when connecting to a UK site and noticing a Chinese flag popping up).
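As a sketch of the kind of information such an add-on inspects, here is a toy parser for the C= (country) attribute of OpenSSL-style certificate subject strings. The subject strings below are made up for illustration; a real implementation would read them from the TLS session’s certificate chain:

```python
import re

# Toy extraction of the country (C=) attribute from each CA in a chain.
# The chain below is fabricated for the example.

def issuer_countries(chain_subjects):
    """Extract the C= attribute from OpenSSL-style subject strings."""
    countries = []
    for subject in chain_subjects:
        match = re.search(r"(?:^|/)C=([A-Z]{2})(?:/|$)", subject)
        countries.append(match.group(1) if match else "??")
    return countries

# A UK site whose chain suddenly includes a CA registered elsewhere
# would be worth a second look.
chain = [
    "/C=GB/O=Example Site Ltd/CN=www.example.co.uk",
    "/C=CN/O=Hypothetical CA/CN=Hypothetical Root",
]
print(issuer_countries(chain))  # ['GB', 'CN']
```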

Certificate Patrol (for advanced users)

Free add-on for Firefox that notifies you of any new SSL certificates accepted or any existing certificate being replaced. May help expose man-in-the-middle attacks.
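The underlying idea can be sketched as a simple fingerprint-pinning check (my own sketch, not the add-on’s code; the certificate bytes are fabricated stand-ins for real DER data):

```python
import hashlib

# Remember a fingerprint per host and flag any change. Certificates also
# change for benign reasons (renewal), so "changed" is a prompt to look,
# not proof of an attack.

known_fingerprints = {}  # host -> hex SHA-256 of the certificate bytes

def check_certificate(host, cert_der):
    """Return 'new', 'unchanged', or 'changed' for this host's certificate."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    previous = known_fingerprints.get(host)
    known_fingerprints[host] = fingerprint
    if previous is None:
        return "new"          # first time we see this host
    if previous != fingerprint:
        return "changed"      # possible MITM, or just a renewal
    return "unchanged"

# Made-up certificate bytes for illustration:
print(check_certificate("example.com", b"cert-v1"))  # new
print(check_certificate("example.com", b"cert-v1"))  # unchanged
print(check_certificate("example.com", b"cert-v2"))  # changed
```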

Act Wisely

No tool can protect you without some help from your own decisions & actions.

  1. Keep your Operating System updated by always installing the latest patches/service packs from Microsoft update.
  2. Keep your important software updated: Java, the Adobe Flash plugin, your PDF reader of choice (I recommend Foxit Reader instead of Adobe Acrobat Reader), your media player of choice (I recommend SMPlayer or VLC), Skype etc. Most have auto-updating mechanisms. When prompted to update, evaluate the authenticity of the program asking you to make this change.
  3. Never install codecs/plugins or any other software (free games, utilities etc) that some “friend” asks you to install to see the latest “funny video”. If it’s supposed to be a video or music file and VLC can’t handle it, it’s probably not legitimate.
  4. Think before you click “OK” on the next popup message. There must be a way of turning off repeated mundane warnings; otherwise you’re doing something wrong.
  5. If something strange happens, capture it with a screenshot before proceeding. Hit the “Print Screen” (PrtScn) button to take a snapshot of your screen, then go to “Start” -> “Run”, type mspaint and hit Enter. Press CTRL-V (for “Paste”) and save the image to your desktop as a JPG. Now you can send this file to your friends (or an online support forum) and ask people what’s going on.

Automatically back up your computer

This is not an online security tip, but when the inevitable happens (a hard drive melts, your computer gets stolen, or the next good Samaritan “expert” destroys it while trying to “fix” it), you will be relieved to know you have backup copies of your most valuable files somewhere else.

Use Mozy (free for 2GB of data) to keep an online copy of your valuable files.

[Update: 14 Jan 2011] I’ve had trouble with Mozy so I now use CrashPlan. If you agree to backup to each other with a friend of yours, and you have enough disk space, this is a free solution that works well.

If you have multiple computers and you want to make your critical files available to all of them at the same time, I can’t recommend DropBox enough. (give me credit for the tip or choose not to)

If you’re using Windows 7, a backup tool is included that allows you to take a full backup of your hard drive to an external (e.g. USB) drive. Use it. While you’re at it, create a System Repair Disk as well. It’s bound to be useful one day.

A usability case study: Microsoft Online Assisted Support

I never thought I’d be writing this on a public space but Microsoft is getting this right.

Like most techies, I do tech support for all of my less techie friends & family. People who are particularly close to me even get ongoing preventative maintenance. (They don’t really know it’s happening, but it is.) I thus maintain Debian servers, Macbooks and Windows XP/7 laptops alike.

A few weeks ago I had a misbehaving Windows 7 laptop. It would simply not install a specific update available from Microsoft Update. I tried my best, spent some time researching the problem on the Internet (Google and Microsoft’s own support pages), tried a few Microsoft-supplied tricks (basically the Windows Update Readiness Tool, as suggested in KB947821) and finally gave up. I could not find a solution that looked elegant enough to try (I’m not willing to attempt anything that sounds wacky, like the registry hacks users go wild about on forums).

So I went to

As far as I was concerned, I had exhausted the existing documentation, so I opted to “Contact a Support Professional by Email, Online or Phone”.

Contact a support professional

This is a quite inconspicuous link rightly placed at the bottom of the page. The thinking seems to be that people should try to help themselves first by looking for a solution to the problem using the existing published resources. Only if that fails, should they contact a human and ask for personalised help.

This makes sense. If all Microsoft product users had to speak to support professionals, Microsoft would be running a 5,000,000-person call centre just answering email and picking up the phone. The option would be abused by people who need too much hand-holding or are inherently lazy. Sure, systems should “just work”, but as that isn’t happening any time soon, it’s worthwhile focusing on how to provide quality support services. It’s important to have the (expensive) human option as a last resort. (In saying this I fully recognise that my “last resort” is different from your “last resort”.)

So I clicked on and was taken to

There, I was asked which product I was having trouble with; I selected “Windows Update” from the well-designed “quick product finder” input box and was instantly on my way.

I got a properly signed SSL certificate, accepted the legal terms of service and provided information about the bug I was experiencing over an encrypted connection.

Within the next 24 hours I got a polite email in proper English, giving a single suggestion in clear steps that immediately fixed my problem.

Should I want to refer back to the information I supplied, I can do so from a link sent in an email report of my opened case. To protect me from people hijacking the link in transit, Microsoft will ask for my email address and issue a fresh (https://) link once 7 days have passed since the original.

From a customer experience point of view, I am impressed.

Well done Microsoft.



PS: For posterity, my particular problem was that KB982632 failed to install with error code 0x800b0100. But the support engineer’s suggestion seems like a great way of resolving a whole class of Windows/Microsoft Update problems. It basically wipes out all local Windows/Microsoft Update files and allows your machine to make a fresh start.

Step 1: Rename the Windows Update Softwaredistribution folder


This issue may occur if the Windows Update Software distribution folder has been corrupted. We can refer to the following steps to rename this folder. Please note that the folder will be re-created the next time we visit the Windows Update site.

1. Close all the open windows.

2. Click “Start”, click “All programs”, and click “Accessories”.

3. Right-click “Command Prompt”, and click “Run as administrator”.

4. In the “Administrator: Command Prompt” window, type in “net stop WuAuServ” (without the quotes) and press Enter.

Note: Please look at the cmd window and make sure it says that it was successfully stopped before we try to rename the folder. However, if it fails, please let me know before performing any further steps and include any error messages you may have received when it failed.

5. Click “Start”, in the “Start Search” box, type in “%windir%” (without the quotes) and press Enter.

6. In the opened folder, look for the folder named “SoftwareDistribution”.

7. Right-click on the folder, select “Rename” and type “SDold” (without the quotes) to rename this folder.

8. Still in “Administrator: Command Prompt” window, type the command “net start WuAuServ” (without the quotes) in the opened window to restart the Windows Updates service.

Note: Please look at the cmd window and make sure it says that it was successfully started. However, if it fails, please let me know before performing any further steps and include any error messages you may have received when it failed.

This worked as expected. The corrupted Microsoft Update cache was cleared out of the way and on the subsequent Microsoft Update run, everything installed appropriately. An elegant way of solving a horde of Windows/Microsoft Update problems.
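For what it’s worth, the quoted steps amount to three commands. Here is a dry-run sketch of them in Python (my own, not from Microsoft; it only prints the plan unless told otherwise, and would need an elevated prompt on Windows to actually run):

```python
import subprocess

# The support engineer's manual steps, collected as the commands an
# administrator would run from an elevated prompt.
STEPS = [
    ["net", "stop", "WuAuServ"],   # step 4: stop the Windows Update service
    ["cmd", "/c", "ren", r"%windir%\SoftwareDistribution", "SDold"],  # steps 5-7: rename the cache
    ["net", "start", "WuAuServ"],  # step 8: restart the service
]

def reset_update_cache(dry_run=True):
    """Print the plan, or (on Windows, as administrator) execute it."""
    for command in STEPS:
        if dry_run:
            print("would run:", " ".join(command))
        else:
            # check=True mirrors the engineer's "make sure it succeeded" notes
            subprocess.run(command, check=True)

reset_update_cache()  # dry run: show the plan without touching anything
```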