…continued from Security and Human Behaviour 2010 – Section 2: Foundations.
Stephen Lea (Exeter) talked about how we make judgements about whether a situation is legitimate or nefarious. One interesting snippet I remember from his talk (it was after lunch, after all) is this:
The bedrooms-to-people ratio (for people’s homes) is a good indicator of social class.
L Jean Camp (Indiana), apart from making a witty point about browser interface design by using a “golden lock” as her website’s icon (thereby rendering it “secure”), talked about risk perception. She mentioned Slovic’s Nine Dimensions of Risk (as discussed in Perception of Risk Posed by Extreme Events, among other papers) and pointed out that a risk must be perceived as immediate and familiar. Otherwise people don’t worry too much about it – which, might I add, would explain the sorry state of our botnet-infested Internet nowadays.
Stuart Schechter (Microsoft) talked about his quest within Microsoft Research to design interfaces that help users make the best possible security decision when faced with a dilemma. He demonstrated the progress made on web browser interfaces and the error messages shown when an SSL certificate error occurs. The design tries to help end users make the right choice, but does it succeed?
Stuart suggested that following natural language (e.g. English) when constructing interfaces makes a difference. Users understand an interface better when its visual constructs (text, images, animation) loosely resemble natural language phrases. For example, it helps to have a verb (like “see”) followed by images for “pictures”, “videos” and “news”, rather than the other way around, or any other order for that matter.
I think this is an important take-away message. Design interfaces that follow natural language, because that’s how people’s minds are conditioned to work. Sure, it creates more work for multilingual sites, but it’s worth it if you’re Facebook (350+ million users) and you actually care about your users’ privacy. Of course, Facebook, Google, etc. are in the business of eliciting as much information from us as possible to sell to advertisers, so there’s a conflict of interest there. They have to provide some privacy controls (to appear to be playing nice), but they are not interested in making it easy to lock down your personal information, because then they would strangle their revenue stream.
Remember, we are not Google’s or Facebook’s or Yahoo’s customers. We are their products. They don’t sell to us. They sell us (every personal bit of information we give them) to advertisers.
Chris Hoofnagle (UC Berkeley) talked about identity theft. He boldly argued that privacy is causing identity theft. Chris described how credit and loans are granted. It costs £15–£20 to process a credit application. There is no intelligent human interaction – humans just open the envelope and feed the paper into a scanner. The computer then makes the credit decision. Chris pointed out that, alas, we’re fighting a losing battle: at this point in time, consumers really have no way to protect themselves against identity theft.
Tyler Moore (Harvard) used game theory to explore whether it is realistic to expect governments to pay much attention to defensive security. Unfortunately, the answer is “no”. Quoting “Would a ‘Cyber Warrior’ Protect Us? Exploring Trade-offs Between Attack and Defense of Information Systems”:
[…] a mutually defensive approach to security is not a stable equilibrium […]
This was the end of session 3 of the workshop.
During the break I scribbled the following in my notebook:
Complacency: people who think they are “in the know” in an area (e.g. investments) are more likely than the layman to be scammed in that very area (in this example, investments).