Security and Human Behavior (SHB) 2024 - Day 1

Interdisciplinary invite-only conference discussing how people interact with security technology. The conference is designed to share ideas, questions, and promote interesting discussion that can lead to deeper understanding and future breakthroughs.

The full schedule lists all the speakers and the sessions they are in. There is also an attendees page that lists some recent works from each person.

I am live blogging the event, so some entries may have poor spelling or not flow as nicely as I might like. I plan on doing a post-event pass to clean things up and add links.

Session 1: Users and Usability

Presenters: Bonnie Anderson, Nancy N. Blackburn, Marc Geffen, Jonas Hielscher, Simon Parkin, Lucy Qin, Tony Vance

Bonnie Anderson

Bonnie does work on warnings and how people’s brains process those warnings. We already know that people become habituated to warnings and click through them without seeming to read or properly engage. Bonnie’s work uses fMRI technology to look at brain activations while people interact with warnings. Such research helps us understand how the brain is processing warning content. The rough answer is that brains “generalize” the warning. In other words, the brain identifies a common case (the warning) and the most appropriate response to it (click next) and then applies that response in future cases, thereby freeing brainpower to spend on other issues. Brains are surprisingly skilled and do far more work for us than we might expect.

Nancy Blackburn

Nancy researches how to teach security in higher education through serious games. There are many types of serious games and they cover a range of topics like math, typing, or even security and privacy. (Kami’s list of security games, for example.)

The problem with serious games for security is that they can be seen as “chocolate covered broccoli” because it can seem like some candy is being added to not-so-palatable topics, so they can be dismissed. How can we teach developers to create serious games that are effective?

Nancy created a Game Jam, similar to a hackathon, with a short, intense game development cycle: introduce an educational theme or subject, brainstorm, build the games, and then analyze them.

Jonas Hielscher

Title: Human-Centered Security (HCS) Leadership in Organizations

There is a good bit of research on how people react to different policies. What does HCS look like in organizations? Also, who is responsible for HCS in organizations, or in other words: who should we study? There are several groups, such as those who make policies, those who run awareness campaigns, or CISOs. Jonas decided to focus on CISOs in his paper: (Employees who don’t accept the time security takes are not aware enough).

His study focused on CISOs who were willing to engage on the topic of human-centered security, which makes them best-case participants. For most CISOs, HCS means awareness and training, likely because that is what is on the market and being sold to them. They were particularly interested in simulated phishing attacks. CISOs don’t think this will improve security, but they need the numbers to show to managers. They were very interested in participating so that they could easily control employees and get them to behave, with not many considerations about employees’ constraints. They really wanted checklists they could use.

CISOs do “craftsmanship”, not science; keep the science away. But we are scientists….

Simon Parkin

Title: When is security “usable enough”?

We have piles of research on usable security, on topics like understanding user constraints, tools that improve things, and reducing challenges. Yet it is no more straightforward for companies to test whether their security is usable….

The amount of security usability we have in a company is mostly limited by how much time someone is willing to spend on it. But where are we aiming to be? The “user is the weakest link” narrative just doesn’t go away.

Usable security is not a very good product on the market. Training is “buy a package, done” and is hard to compete with. The work that users do contextualizing the security rules and their work is trivialized (The trivial tickets build the trust: a co-design approach to understanding security support…).

What if we are interested in “enough” usability, not just “more”? The current view is that if the user is not being secure, then they need to take the training again…. But the CISOs are interested in a checklist or a requirement.

Lucy Qin

There is a large amount of intimate content being shared online and 1 in 6 adults have had such content shared without their consent. Lucy’s research looks at proactive mitigation strategies.

Study of 52 adults who share such content, including those who did so consensually and those who were victims. Reasons for consensual sharing included sharing with strangers such as a positive body image group, new relationships, established relationships, and commercially (OnlyFans). Participants used 40+ platforms; any platform that allows image creation and sharing is likely being used to share such content.

Lots of concerns about misuse. Participants use a mix of strategies.

Marc Geffen

Works for Meta as a User Experience Research Lead, Messenger Trust

Messenger and Instagram messaging are being transitioned to end-to-end (E2E) encryption. Users expect privacy and security; that is what they come for.

What is E2E and why does it matter to me?

Most users are aware of E2E (>65%) but fewer (<50%) understand what E2E is. Marc is looking at how to convey what E2E is and why it matters to users. They find that there is a language mismatch. They are looking at users in the Philippines.

Users thought about the transition to E2E as a system usability update. The introduction of E2E will require users to hold a key, which is a user issue. Also, not all chats will be E2E, which users need to understand. Asking them to read prompts the reaction of “I just want to message, stop blocking me from messaging”.

Key challenge is how to convey the important information (encrypted/unencrypted) with minimal extra noise.

Tony Vance

Title: What are Companies Saying in New SEC Cybersecurity Disclosures

The SEC last year came up with new public regulations, which include that companies must disclose within 4 days of a security incident becoming known. It also caused a new section to be added to the annual report…. Walmart, for example, has a whole section on cybersecurity. What are companies saying in these sections?

The risk management process is supposed to be covered in the report: things like identification and response, whether they have outside assessors such as penetration testers, and how they are managing third-party risk like cloud providers. The board needs to say how it is managing cybersecurity risks. Why is the CISO qualified, what are their qualifications?

Most structures look roughly like the one below, where the CISO does not directly report to the Board.

  • Board
    • CEO
      • CIO
        • CISO

But companies are now required to answer whether the board ever hears from the CISO directly. Obviously most don’t want to say “no”, so suddenly the CISO reports to the board. Tony is interested in how these reports and security posture correlate with stock price, and also in the correspondence between what companies say and data breaches that later occur.

The SEC is charging SolarWinds, including the CISO, with misleading investors about its cybersecurity practices.

Session 2: Security

Presenters: Sascha Fahl, Adam Joinson, Alan Rubel, Frank Stajano, Kami Vaniea, Rick Wash, Josephine Wolff

I (Kami Vaniea) was part of this session so I was not able to live blog it.

Session 3: Deception

Presenters: Andrew Adams, Sadia Afroz, Bhumiratana Bhume, Tesary Lin, Tyler Moore, Arianna Schuler Scott, Geoffrey Tomaino

Andrew Adams

Deceptive design (aka dark patterns) causes users to act outside their best interest. Not all deceptive design is intentional, so one sub-question is about what ethical design is.

Recently published in CHI about deceptive design.

Colin Grey’s recent literature review showed scatter-shot, pragmatic research that is not very structured, so they are starting to structure it.

Theory of mind. Current work looks at the actions, but we need more…

Sadia Afroz

Works for Gen on Norton Genie

Talk Title: Lessons learned from ~6 months of running a security assistant

Norton Genie is a security assistant app that will detect scams from a picture. It gives lots of advice on scams and will tell people if something is a security issue. It also uses ChatGPT to answer security questions. A user finds some new store or product on, say, TikTok, but is it real? Genie is supposed to help users with this issue.

The majority of the scams asked about came via social media. Mostly online shopping, romance, and investment scams.

Scam factories are a rising problem, where organizations exist to create the scams. For example, pig butchering scams, where the conversation with the attacker has been going on for months.

Detection is also only one part of the issue. Only 1/3 of questions are about identifying a scam; the rest are from people who fell for a scam and are trying to recover, or who know someone who did and are trying to help them.

The problem is that the burden is still on the users. They have to do things like check certificates, research topics, or search for common scam text.

Note from Kami: This research looks very similar to my own work on PhishEd.

Bhumiratana Bhume

Title: Protecting people’s accounts from hacking and online fraudsters (at the scale of billions). Works for Meta.

The goal is how to get users better protected. What makes that problem difficult at scale:

  • Wide range of users: the platform is used by many people from different cultures, often with different risk profiles, usage objectives, and behaviors, and it is constantly changing
  • Wide range of uses: talk to friends, organize events, sell things, many different uses
  • Context and use: behavior and use of the platform change over time, as does the risk landscape. Events like the Olympics, holidays, wars, and protests all change the risk landscape
  • Motivation and harm: financially motivated bad actors and politically motivated bad actors are both well resourced, but in different ways and with different objectives

The approach needs to be adaptive because bad actors also adapt their behavior. But Meta needs to be careful not to disrupt normal users while blocking bad actors. Protections are therefore deployed differently to different types of people. For example, for people who are important enough, some features such as account password reset become unavailable, because for famous people there are many bad actors trying to attack the reset.

Tesary Lin

Title: Data Sharing and Website Competition: The Role of Dark Patterns. From Boston University.

Understanding the impact of dark patterns. Do dark patterns impact consumers and cause harm? What are the competitive advantages of using or not using dark patterns? If a consumer sees the same dark pattern across several websites, how does user behavior differ (or stay the same) across those sites?

They designed a custom consent banner plugin and had users install it and browse normally for 7 days. It shows a cookie dialog with the options “settings”, “accept all”, and “reject all” cookies, with 6 different variations of the buttons’ order and color.
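
The plugin internals were not shown in the talk; below is a minimal, hypothetical sketch (in Python rather than a browser extension) of how one of six dialog variations might be stably assigned per participant. The variant definitions and field names are my assumptions for illustration, not the authors’ actual conditions.

```python
import random
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical illustration of the study design described above: each
# participant's consent dialog is drawn from a small set of variants that
# differ in button order and in which button is visually highlighted.
# These six variants are made up for illustration only.

@dataclass(frozen=True)
class BannerVariant:
    button_order: Tuple[str, ...]   # order in which the three options appear
    highlighted: Optional[str]      # which button gets the prominent color

VARIANTS = [
    BannerVariant(("accept all", "reject all", "settings"), "accept all"),
    BannerVariant(("accept all", "reject all", "settings"), "reject all"),
    BannerVariant(("accept all", "settings", "reject all"), "accept all"),
    BannerVariant(("reject all", "accept all", "settings"), "accept all"),
    BannerVariant(("settings", "accept all", "reject all"), "accept all"),
    BannerVariant(("accept all", "reject all", "settings"), None),  # neutral colors
]

def assign_variant(participant_id: str) -> BannerVariant:
    """Deterministically assign one of the six dialog variants to a participant."""
    rng = random.Random(participant_id)  # seeded so the assignment is stable
    return rng.choice(VARIANTS)

print(assign_variant("participant-042"))
```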

Findings

  • Consumers make judicious privacy choices across websites: most users take different actions on different websites.
  • Deliberate obstruction has a potent effect, but other options have less impact
  • Dark patterns do not necessarily favor larger websites

Question

  • The “x” (close) option on the dialog: how is it being handled?

Tyler Moore

Title: Measuring Organizational Information Security Culture Using Situational Judgement Tests

From The University of Tulsa

Looked at organizational psychology culture: Hofstede’s Dimensions of Organizational Culture

  1. Organizational effectiveness
  2. Customer orientation
  3. Level of control
  4. Focus
  5. Approachability
  6. Management Philosophy

Situational judgement tests (SJTs) measure the underlying assumptions of individuals within organizations.

One of the SJTs: a co-worker unexpectedly receives a USB device in the mail from a potential new vendor. What should you do?

  • Inform the IT team
  • Plug it in
  • Contact the vendor and request an alternative vendor

Means-oriented cultures tend to make more secure choices. Internally driven cultures (internal ethics or code) are also more secure, as are employee-oriented cultures.

Arianna Schuler Scott

Patterns of authority and control. Pressure experts exert on people who are resource constrained.

Longitudinal consent decisions for people with health conditions: people who do not have much time to make decisions about consent because they have health issues they are focused on. So she considered “dynamic consent”, which considers consent over time. Participants want control; data collection doesn’t have to be one-time.

Informed consent cannot be coercive. We have to frame choices in a non-coercive manner. That is not so easy. Norms also inform these choices, but norms change.

Geoffrey Tomaino

The concern is AI unduly acquiring our data and people being deceived by AI.

Activity developed to test susceptibility to AI data attacks.

The activity, which participants are told about up-front before any of it happens:

  1. Explain the study to participants
    1. They will be in an attention task
    2. They first receive a unique passcode
    3. They then have a conversation with another participant as a means of “distracting” them
    4. After that interaction, they are asked what the passcode is. If they are the only one who has the passcode at the end, they can leave early.
  2. ChatGPT is instructed to get the passcode out of the participant

In the study, 39.7% of participants gave up the passcode (“poodle”) to the AI….. Which is not good.

Follow-up: they made the passcode harder (“poodle134”) and then only 21.5% gave it up. So when the data matched its function (i.e., it looked like a password), fewer people were willing to give it up.


Session 4: Measurement and Assessment

Presenters: Zinaida Benenson, L. Jean Camp, Vaibhav Garg, Damon McCoy, Sagar Samtani, Stuart Schechter

Zina Benenson

Title: Shedding Light on CVSS (Common Vulnerability Scoring System) Scoring Inconsistencies

CVSS consists of 8 base metrics for a vulnerability: attack vector, attack complexity, privileges required, user interaction, scope, confidentiality, integrity, and availability.
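
To make the structure concrete, a CVSS v3.1 base vector encodes these eight metrics as short key:value pairs in a single string. Below is a minimal sketch that splits such a vector into named metrics; the example vector is illustrative and not tied to any vulnerability from the study.

```python
# Minimal sketch of the eight CVSS v3.1 base metrics as they appear in a
# vector string. The example vector at the bottom is made up for illustration.

BASE_METRICS = {
    "AV": "Attack Vector",
    "AC": "Attack Complexity",
    "PR": "Privileges Required",
    "UI": "User Interaction",
    "S":  "Scope",
    "C":  "Confidentiality",
    "I":  "Integrity",
    "A":  "Availability",
}

def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3.1 vector string into its named base metrics."""
    prefix, *parts = vector.split("/")
    if not prefix.startswith("CVSS:"):
        raise ValueError("expected a version prefix like CVSS:3.1")
    metrics = dict(part.split(":", 1) for part in parts)
    return {BASE_METRICS[k]: v for k, v in metrics.items() if k in BASE_METRICS}

# Example: network-reachable, low complexity, no privileges or user interaction.
print(parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))
```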

The problem is that CVSS states that the scoring is intended to be agnostic to the individual, but in practice different evaluators create different scores for the same vulnerability. A survey of 196 participants evaluated 8 vulnerabilities, including 3 from the CWE most-dangerous list. In a follow-up study, 59 participants came back to rate them again after some time.

Results show that the evaluations were not consistent and had variation. Similarly, the same participant chose different severities the second time than the first. Looking at Out-of-bounds Write, the Attack Vector metric is inconsistent. Scope was particularly problematic, with many people providing different answers. But when participants consult the documentation while assessing vulnerabilities, they produce better results.

New CVSSv4 now out. Supposedly more usable. Studying now.

Jean Camp

Work is on labels. Security markets are hard and they do not currently work. Ashton Carter: “the best war is the one you don’t get in”.

Three reasons for lack of Security and Privacy:

  • Rational choice
  • Usability
  • Market value

SBOM (Software Bill of Materials) elements: they asked people to rank the SBOM components by importance, then created a nutrition-style label, as well as a trust mark and a graded shield, including two-level labels with QR codes.

They gave users a set of device-label pairs and asked them to judge which is the most secure device. Everyone liked the label that they used (randomly selected for the study). 50% said they would use a security label, and 33% said it would change their decision. But… many participants actually said that they never even saw the label. For those who care about security and privacy (self-described), the labels did change purchasing decisions.

Vaibhav Garg

NSTAC - National Security Telecommunications Advisory Committee. They were asked to come up with a list of incentives, which resulted in a report. The talk discussed incentives for organizations and measurement options.

Damon McCoy

Demystifying recommendation systems in relation to social media.

Algorithmic feed systems were brought about to fill a business need. The algorithms have to balance many stakeholders, including users, producers, and advertisers. But the algorithms are designed to maximise usage.

Taxonomy of social media feed approaches to better map the space, and to help policy makers understand how these differ. (A sketch of the pipeline stages follows the list below.)

  • Inventory selection - including content requested and content implicitly referred. Set of all content that could be shown.
  • Content ranking
    • Content filtering
    • Content ranking
  • Feed Assembly - business logic here; ads are inserted into the feed and content may be modified at this point. It’s not a pure recommendation algorithm.
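
To make the taxonomy concrete, here is a minimal sketch of how the three stages might chain together. The function names, the engagement score, and the ad-insertion rule are my assumptions for illustration, not a description of any specific platform’s feed.

```python
# Illustrative three-stage feed pipeline matching the taxonomy above:
# inventory selection -> content filtering/ranking -> feed assembly.
# All names, scores, and the ad-insertion rule are assumed for illustration.

def select_inventory(requested, implicitly_referred):
    """Inventory selection: the set of all content that could be shown."""
    return requested + implicitly_referred

def rank_content(inventory, blocked_ids, score):
    """Content filtering and ranking: drop disallowed items, order the rest."""
    allowed = [item for item in inventory if item["id"] not in blocked_ids]
    return sorted(allowed, key=score, reverse=True)

def assemble_feed(ranked, ads, ad_slot_every=4):
    """Feed assembly: business logic such as inserting ads into the ranked list."""
    feed, ad_iter = [], iter(ads)
    for i, item in enumerate(ranked, start=1):
        feed.append(item)
        if i % ad_slot_every == 0:
            ad = next(ad_iter, None)
            if ad is not None:
                feed.append({**ad, "sponsored": True})
    return feed

# Tiny end-to-end example with made-up posts and one ad.
posts = [{"id": n, "predicted_engagement": n * 0.1} for n in range(1, 9)]
inventory = select_inventory(posts[:5], posts[5:])
ranked = rank_content(inventory, blocked_ids={3},
                      score=lambda item: item["predicted_engagement"])
print(assemble_feed(ranked, ads=[{"id": "ad-1"}]))
```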

Advertisers have a range of goals, such as “performance advertisers”, who are aiming to make a sale right then by showing a product the user wants, compared to a “brand advertiser” such as Pepsi, who is trying to promote their brand but is not looking for a sale via a click on the advertisement.

Companies are faced with growth vs. integrity. They are for-profit companies, so growth is sometimes prioritised.

Sagar Samtani

Mapping the Security Posture of the Open Source AI Landscape

TopBots, 2021 map of “Enterprise AI Companies”

AI has shifted from a focus on model building to a focus on managing risks around AI models.

  • Operational - scale, building
  • Ethical - bias, fairness
  • Security & Privacy - vulnerabilities, model theft

Not much work yet on LLM threats in open source. Open source AI is partially provided by academics, so there are many GitHub repositories that have models. They scraped those and mapped them to CVSS.

HuggingFace has a pile of models; they collected them and did a vulnerability assessment of them, included in the AI Risk Database. Some of the most vulnerable code is in the Transformers library, which is key for LLMs.

Stuart Schechter

Title: When Security Backfires: Understanding the scale of unintended Harms

Security can

  • be breached - not protect what it is supposed to protect
  • cause unintended harms

How many million years of photo and video memories will be permanently lost because someone lost access? How many people are not attending job interviews because they can’t access their accounts? These are unintended harms, and how do we measure them?

They are setting up a survey on Prolific to get a sense of how often different breaches and backfires happen, using the data to put it in context. The survey is based on an existing survey. It started with an open-ended question about the three most common harms/backfires.

https://uharm.org

Q&A

Discussion around security issues in models that are more traditional security vulnerabilities, as opposed to “output control”, which is more LLM-specific and includes things like not outputting people’s phone numbers or not outputting training data.

Discussion around the rate of change of advice. If it takes 30 years for things like the Energy Star label to take effect, do we have advice that would be accurate to give users 30 years from now? Jean points out that advice like not sending money to people you have never met has been good advice for a very long time and will likely remain good advice for years to come.

Content/feed filtering has some security/safety implications such as the selection of predatory advertisements based on prior browsing history. Advertising is a mix of what advertisers are asking for and what the social media network is mapping to people.

Kami Vaniea
Associate Professor of Usable Privacy and Security

I research how people interact with cyber security and privacy technology.