Security and Human Behavior (SHB) 2024 - Day 2

An interdisciplinary, invite-only conference discussing how people interact with security technology. The conference is designed to share ideas and questions, and to promote discussion that can lead to deeper understanding and future breakthroughs.

The full schedule lists all the speakers and the sessions they are in. There is also an attendees page that lists some recent works from each person.

I am live blogging the event, so some entries may have poor spelling or not flow as nicely as I might like. I plan on doing a post-event pass to clean things up and add links.

Contents:

Session 5: How People Think

Presenters: Yi Ting Chua, Manila Devaraja, Christian Eichenmüller, Richard John, Allison McDonald, Sergio Pastrana, Ryan Shandler

Yi Ting Chua

Title: Security and Behavioral Evidence

Evidence is no longer just physical; it is also digital, since technology exists throughout our environments. So if a crime happens, not only does physical evidence need to be collected, so do digital devices. These devices need to be identified as part of evidence collection. Digital forensics is really just a branch of forensic science.

Evidence and traces collected via digital devices are now also used beyond crimes. Live streaming is one example, such as broadcasting or recording a protest.

What does all this mean for the court system? Such a system has many stakeholders including: judges, prosecutors, defense attorneys, jurors, and witnesses. Several of these groups need to analyse and sort through the data.

Manila Devaraja

Smartphone permission settings

Permission settings are the primary way to manage privacy on smartphones. But there are a lot of permission settings, and those settings and their meanings change over time. Thankfully platforms, including iOS, have various features to help users understand them, such as the privacy report and the automatic removal of permissions.

Research challenges: What do users think a setting does vs what it actually does? How do user preferences vary? Could users be clustered for personalization? How do socioeconomic factors influence privacy preferences and understanding?
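On the clustering question, a minimal sketch (my illustration, not the presenter's method) would be to represent each user as a vector of allow/deny choices over permissions and group them with k-means to find candidate privacy profiles:

    import numpy as np
    from sklearn.cluster import KMeans

    # Rows are users, columns are permissions (1 = allow, 0 = deny). Toy data.
    prefs = np.array([
        [1, 1, 1, 0],  # more permissive users
        [1, 1, 0, 0],
        [0, 0, 0, 0],  # more restrictive users
        [0, 1, 0, 0],
    ])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(prefs)
    print(labels)  # e.g. [0 0 1 1]: two candidate privacy profiles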

As a side note, users are starting to use LLMs when filling in open text boxes.

Christian Eichenmüller

“My whereabouts, my location, it’s directly linked to my physical security” The spatial digital-safety strategies of at-risk users

When Snowden met with Glenn Greenwald, they first took the batteries out of their cell phones and put the phones in the fridge. They did so because a fridge acts as a Faraday cage that blocks signals. Christian is very interested in changing spaces: by blocking phone signals, the space they are standing in changes.

Conducting a study using interviews to understand which spaces people trust or consider to need protection. The types of trust needs and assumptions differ a good bit by person and that person's threat model. Their trust in providers can be quite different depending on their threats.

There are “unknown unknowns” and “known unknowns”, and “unknown unknowns” are more challenging for users because they do not have many options. Managing location privacy is also tricky. Some people leave their phones at home when they go to certain places. People think a lot about where devices are physically located. They control what devices are allowed in some spaces; they also choose not to enter spaces that have devices they do not like, or where there is a risk of such devices.

Data-rich and data-poor environments: these can be intentional, to get a bit of peace, or caused by governments, say, shutting off the internet.

Richard John

Title: Moral Foundations of Armed Conflict

Consideration of the armed conflict in Ukraine. They looked at how people worldwide view the conflict.

Moral Foundations Theory (MFT; Graham et al., 2013; Haidt & Joseph, 2004) was designed to explain both the variety and universality of moral judgments (Graham et al., 2017).

Looked at English-language tweets over the first 36 weeks of the war in the US, NATO countries, Ukraine, and Russia. Used a moral foundations dictionary (LIWC) to analyse them.
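As a rough picture of how dictionary-based scoring like LIWC works, here is a minimal sketch (my illustration, not the authors' pipeline; the mini-lexicons are made up):

    # Score a tweet by the fraction of its words in each foundation's lexicon.
    FOUNDATION_LEXICONS = {
        "care": {"protect", "suffer", "safe", "harm", "compassion"},
        "fairness": {"fair", "unfair", "justice", "rights", "equal"},
        "loyalty": {"ally", "betray", "nation", "solidarity", "unite"},
    }

    def foundation_scores(tweet: str) -> dict[str, float]:
        words = [w.strip(".,!?") for w in tweet.lower().split()]
        if not words:
            return {name: 0.0 for name in FOUNDATION_LEXICONS}
        return {
            name: sum(w in lexicon for w in words) / len(words)
            for name, lexicon in FOUNDATION_LEXICONS.items()
        }

    print(foundation_scores("We must unite to protect our nation from harm"))
    # approximately {'care': 0.22, 'fairness': 0.0, 'loyalty': 0.22}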

The conflict was conceptualized around loyalty and care. Some about fairness.

Allison McDonald

Title: “Delete it and Move On”: Digital Management of Shared Sexual Content after a Breakup

Generally, engaging in sexting is a good thing for interpersonal relationships. But not every relationship lasts, and there are lots of risks with this type of content if the other person who holds it is no longer trusted.

There is an interesting question about people who don't think they need protective tools at the time they share, but then later do need them. Conducted research on how people thought they or their partner might handle such media after a breakup, and on how they think about co-created content.

We know from prior work that people sext where they are already chatting, like SMS, Messenger, and WhatsApp. 75% of people saved sexts. 64% were proactively saving, such as downloading to the camera roll, dedicated folders, or locked/hidden folders. Surprisingly, after a breakup only about 55% of people would want the other person to delete the images. 11% hoped the other person would keep the content: “save it and remember what he misses”. Most people had not had a conversation with their partner about managing such images, and most that did have that conversation had it during or after the breakup.

About a third of people viewed the content as meaningful and therefore wanted to keep it, while two thirds would prefer it was deleted.

Challenging question of how to technically operationalize shared ownership of media, especially in cases where that media may have been stored elsewhere. People also keep this content over time, including when they change devices. How do we as designers manage the storage of sensitive content over time?

Sergio Pastrana

Title: How people think… or how they share their thoughts in times of war

In February 2022 Russia decided to invade Ukraine. The Russian government declared this a “special operation” and even banned the word “war” on social media.

So how do people react to the bans? They start looking for external channels. Guidance appeared on how to do things like leave Google Maps reviews of Russian restaurants which are actually comments about the war.

We studied the unconventional use of online services to bypass censorship. Such attempts were common at the start of the war. Topics discussed more after the war started included: fascism, information, slurs, violence, and warlike.

When Western people tried to bypass censorship to talk to people in Russia, they tended toward two topics: (dis)information and censorship bypass. There were also discussions around humanitarian help, hate speech, and travel advice (how to get in/out).

Platforms didn't really like having this war-related content posted on them because it was not their primary purpose. TripAdvisor monitored and removed content about every 2 days. Google Maps actually disabled reviews for all sites in Russia. They also did a mass removal of many posts, not just war-related ones.

Ryan Shandler (Georgia Tech)

Title: The Insidious Consequences of Cyberattacks: An Experimental Approach

Ryan runs experiments that randomly assign people to experience cyberattacks in controlled conditions to see how these impact people's views.

The impacts of cyberattacks are not just the immediate ones; they also include long-term impacts. These long-term consequences can involve severe psychological reactions that lead to issues like believing the world will end because of a cyberattack.

Used actors to create fake media broadcasts to present information about attacks. Lots of interesting methodologies to test how people react to attacks in real-world situations. Lots of ethical review done around all these studies. We have a duty of care to understand what people experience so we can help them.

  1. Even a simple cyberattack can cause severe stress.
  2. Exposure to cyberattacks reduces trust in government and trust in cyberspace, and creates distrust in society's protections.
  3. People are willing to sacrifice civil liberties for security as a function of their exposure to cyber threats.

Ryan believes that the accumulation of cyberattacks over time is having a serious impact on political alignment and views.

Session Discussion (Q&A)

Q: What is the moral foundation we are forming in countries around war?

A: It's surprising how similar the moral foundations are across regions. Ukraine, for example, was using more care words.

Q: To Ryan. I wonder if these concerns are linked to prior experiences. There are large known impacts from attacks on public hospitals and other events.

Ryan: We have looked. There is a spike in stress after large events, and we do find that people closer to the event experience that stress more. But to properly study this we would need before/after data, which is harder to get with real-life events.

Q: To Ryan: The first-order impact on people may not be the goal. Instead, long-term stress (third-order impact) may be the real goal for the government behind the attacks. We see the Russians wage “ransom-war” instead of ransomware, because the goal is not to lock down data; it is a long-term psychological attack.

Q: To Allison. How does what people want to happen with sensitive images change before and after a breakup?

A: I was expecting those that had experienced a breakup to be more restrictive, but that wasn't the case.

Q: Was there any mention of the partner/ex-partner's security practices? Images are often lost due to attacks.

A: (Allison) No one mentioned it.

Q: Sources of distress. Shock? Loss of device? Are these cognitive effects vs physical effects?

A: Usually it's a “creeping dread” that something bad is coming and I don't know how to protect myself. Beyond that, it depends on the type of attack.

Q: To Ryan: What are your views on mitigation? When we talk about disinformation (which is different), talking about disinformation causes fewer problems than talking about the amount of disinformation. Is it similar for cyberattacks: 1) are people distressed in proportion to the attack type, and 2) do you look at how governments discuss the issue, such as discussing it with the public beforehand vs after the attack?

A: Cyberattacks are different from disinformation. To deal with cyberattacks we need to be talking more about them. People closer to the attack were less worried because they saw the hospital and saw that everything was fine, whereas people far away were imagining all sorts of things out of movies and were therefore more worried.

Q: For Christian. Do people understand that governments get data from companies?

A: Trust in entities is tied to the threat model. People do ask themselves how a state might interact with companies. For example, a participant from Iran said that Google was OK to use because Google and the Iranian government do not get along.

Q: To Allison. People share sexual content partially because it is risky, and that helps with trust. Anything you do to mitigate how risky an experience is won't help if the purpose is partially to be risky. This is important to consider because the user may be seeking risk. Also consider pre-nuptial agreement research that studies unromantic up-front discussions.

A: It is very hard to minimize the risk to the point where the sharing does not feel risky. So the risk would still be there.

Q: To Allison. Demonstrating capability for safety is part of relationship building, because being safe is a way to signal trustworthiness.

A: We did measure security posture. It did not correlate with their sexting protection behavior.

Q: To Yi Ting. Have you talked to people who are involved, such as expert witnesses? Several people in the room have been expert witnesses. There has been work on how to best educate people like judges and lawyers. There is quite a mix of knowledge levels.

A: There is not a lot of research on the percentages of, say, lawyers who are or are not knowledgeable. They also learn on the case: if they are on a case, they do a good bit of self-education.

Q: To Richard. Do you differentiate between a cyber exploit and a cyber attack?

A: The public does not differentiate. They just see it as an attack as it is described by the media.

Q: What contextual information about prior attacks did you collect?

A: As much as we can get, close to all of it. Extensive surveys with people.


Session 6: Privacy

Presenters: Alessandro Acquisti, Laura Brandimarte, Serge Egelman, Susan Landau, Alena Naiakshina, Sameer Patil

Alessandro Acquisti

An infrastructure for a study <- took nearly 5 years to design

Industry says that behavioral targeting is good for consumers and sellers because it makes it easier for consumers to find products, so it is better for the economy.

Question: Is behavioral advertising economically beneficial (privacy aside)?

Study Design: Strict privacy controls:

  • Informed consent
  • De-identified
  • Encrypted

Starting with Facebook ads. Participants who click on the ads get screened, then sent to a survey if the screening passes. Then we get them to install a browser extension. A complex technical backend processes and stores all the data coming in. When the study starts in the fall, we expect to get about a petabyte of data.

The data is very sensitive, so there were lots of ethics discussions and some protections were put in. We blacklist some sites, like Gmail and Google Drive, so data is not collected from them. The system also tries to strip PII, and the communication channel hides participant identity.

The system collects: all the URLs visited, all the ads they see, and the HTML of some websites. From email we get shopping and promotional emails. We don't collect information like credit card numbers, but we do collect that they bought something and from where.
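To make the protections concrete, here is a minimal sketch (my illustration, not the study's actual code; the host and parameter lists are assumed examples) of the kind of filtering described: drop blacklisted sites entirely and strip query parameters that commonly carry PII before a URL is reported.

    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    BLACKLISTED_HOSTS = {"mail.google.com", "drive.google.com"}  # assumed examples
    PII_PARAMS = {"email", "name", "phone", "token"}             # assumed examples

    def sanitize_url(url: str) -> str | None:
        parts = urlsplit(url)
        if parts.hostname in BLACKLISTED_HOSTS:
            return None  # never report visits to blacklisted sites
        kept = [(k, v) for k, v in parse_qsl(parts.query)
                if k.lower() not in PII_PARAMS]
        return urlunsplit((parts.scheme, parts.netloc, parts.path,
                           urlencode(kept), ""))

    print(sanitize_url("https://shop.example/search?q=shoes&email=a@b.com"))
    # https://shop.example/search?q=shoes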

The original questions were about ad-blocking, ad-tracking, and similar. But we now realize how much more is possible with this data.

Laura Brandimarte (University of Arizona)

Title: LLMs and Creativity

Motivation: Can AI-Generated Text be Reliably Detected?

Currently we (researchers) are terrible at detecting AI-generated text. People are also bad at it. It's even worse because GPT detectors are biased against non-native speakers.

Idea: can we find features of human creativity that would help distinguish between humans and LLMs? After all, LLMs are just prediction models that produce the most likely word or set of words to come after the prompt.

Perplexity assesses each word prediction with equal importance. But English has a bursty nature: certain words or phrases are more prevalent in particular contexts, and this is what current detectors look for. LLM output tends to have more repetition.
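For context, a minimal sketch of the widely used perplexity signal (not necessarily the authors' exact method), using GPT-2 via the Hugging Face transformers library; lower perplexity means more "predictable" text, which detectors treat as a hint of LLM output:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    def perplexity(text: str) -> float:
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean cross-entropy per token
        return float(torch.exp(loss))

    print(perplexity("The cat sat on the mat."))        # typically low
    print(perplexity("Mat the on sat cat purple the"))  # typically high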

Some initial results are quite positive. Looking at unique ways humans write has good potential.

Serge Egelman

Title: The Medium is the Message: How secure messaging apps leak sensitive data to push notification services (to appear at PETS)

Misuse of push-notification APIs and how that inadvertently shares information with push notification services. For the purpose of battery saving, there are APIs that run in the background that will scan for new messages and “wake up” an app when there is a message for it. This means each app does not need to constantly poll for new messages.

Notifications can have payloads: you can put any key/value pairs you want in the payload of the message.

How should this be done: (Signal does it this way)

  1. Cloud notification message has empty payload
  2. On receipt, the OS wakes up the target app
  3. Target app “phones home”, securely downloads message content.

In other words, ideally no sensitive data is in the payload. The Android Developer blog recommends using the payload data to send content to make things more efficient. The problem is that if information is sent via the payload, then Google sees it, and it is open to, say, legal requests.
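A minimal sketch (my illustration, using the Firebase Admin SDK for Python) contrasting the leaky pattern with the Signal-style pattern described above:

    import firebase_admin
    from firebase_admin import messaging

    firebase_admin.initialize_app()  # assumes default credentials are configured
    token = "DEVICE_REGISTRATION_TOKEN"  # placeholder

    # Leaky: message content rides in the payload, so the push service sees it.
    leaky = messaging.Message(
        data={"sender": "alice", "body": "meet at 6pm"},  # visible to Google
        token=token,
    )

    # Signal-style: an empty payload just wakes the app, which then fetches the
    # message content itself over its own secure channel.
    wake_only = messaging.Message(data={}, token=token)

    messaging.send(wake_only)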

Serge wanted to know if payloads are being used in a way that leaks data. The answer is yes. He looked at apps that advertise secure messaging (not necessarily E2E) and have at least a million installs; 20 apps were examined. Skype does not do E2E, and even if the user asks, metadata is still sent unencrypted.

Ron Wyden wrote a letter to Attorney General Garland about how the government is collecting data this way.

Much of Serge’s work is about developer misuse of security APIs. In his opinion this is an incentive mismatch issue.

Susan Landau

Title: How is the private sector using non-content transmissions of smartphones

After the Snowden disclosure of bulk call record collection, the president said not to worry because call content was not recorded, just the metadata. But metadata conveys a lot of information; it can be used to predict many things about a person, such as religion, all based on who they contact.

Smartphones are even more invasive because they also record location. That data can be very useful for things like urban planning and emergency planning.

But it is not just phone metadata: phones have GPS, an accelerometer, a proximity sensor, lots of sensors. The sensors are super useful for making the user's experience valuable, but all that data can be offloaded from the phone. Most software reports how it is doing back to a main server; that is needed for companies to detect problems so they can fix them. Data is collected almost without any controls. Notice and choice is also implausible, because you would have to be asked about each packet. It is hard for people to predict how such information can be used to infer other information.

Lots of existing research on metadata and telemetry collection. Everyone looks at collection, not much at use. Why? It's easier to see what is collected. If you want to know how it is used, you have to talk to a company, and they are disinclined.

To find out, we tried to look at patents, though patents show interest, not intent. In other words, this is what companies can do, but it is unclear if that is what they are doing. 2500 patents were studied (not all read). Found a StubHub patent measuring the level of a user's enjoyment at an event to determine prices at a future one. A Meta patent uses accelerometer data to determine that two users have been repeatedly in close proximity on the same form of transport, detects when one user changes traveling modality, and then proposes them as a potential contact.

Alena Naiakshina

Title: Those things are written by lawyers, and programmers.

Interested in the gap between regulators, privacy experts, and developers. GDPR (2018), for example, can result in fines. So they studied how privacy experts, developers, and team coordinators communicate.

Legal language is (unsurprisingly) complex for developers to understand, causing them to reach out to team coordinators to have it explained. Developers struggle to differentiate between privacy and security (they are both cryptography, right?). Privacy experts create documents and send them to team coordinators; sometimes that communication is one-way, sometimes two-way.

Privacy experts are not seen as technically knowledgeable, which can make communication complex, as much explanation is required on all sides.

Privacy requirement verification is also an issue because it is unclear how to do the verification. Developers can't implement a requirement they don't understand well enough to write a test for, to check it was met the way the privacy expert wanted.

Also looked at how developers implement privacy requirements. They did this as a lab study where, in some conditions, developers had access to a privacy expert. The researchers found that developers rarely reached out to privacy experts, though the developers also commented that the privacy requirements were hard to understand. Experts were usually contacted at the end, once the solution was built, to see if it was done correctly.

Sameer Patil

The user side of privacy decision making. If people find things/apps/devices “creepy”, why are they still using them? If regulations are coming in, then why do apps continue to be creepy?

Users have more or less resigned themselves to having their privacy invaded. They say things like “that is the price to pay to use the app” or “yes, but what can I do”. They feel like there is no other option. A lot of research in the area is about empowering users. We find that that empowerment is embedded in a larger power structure over the users. For example: Facebook gives you control over who can see posts, but that control exists within a larger Facebook ecosystem where users have few choices.

Decision making is also driven by larger expectations about app or other technology behavior. Users learn that apps take data and are “creepy”, so they just assume this is true for all apps, and that view is hard to shift. If all apps are creepy, then what incentive do app developers have to be not-creepy when the user expects the creepiness?

Ongoing projects:

Started collecting emails that are meant for some other “Sameer Patil”. There is interesting content in the emails: banking transactions, bill payments, two-factor authentication codes, and other things. Many attempts to stop the emails, such as clicking “unsubscribe” or emailing the sender. Sometimes he takes over the account and tries to delete it if it causes enough problems. Lots of experiences that are being written up in a paper.

Session Discussion (Q&A)

Q/A: Ongoing tensions between developers and legal teams in regards to privacy requirements. Several people have tried building tools to help developers better understand laws, though it is tricky to convert “it's complicated” in law into clear checklists for developers.

Q: For Susan. Did you see, in the dates of the patents, any indication that this is information being handed off from the NSA?

A: Not really. Most patents fit the types of analysis the companies were already doing.

Q/A: Putting research into Lawfare is a good choice. It is a good way to get information in front of policy makers, lawyers, and journalists.

Q: For Serge. Can you elaborate on what “third parties” means? For example, I got a new car and now Signal messages are showing up on the dashboard.

A: That data is being processed on your phone. It is possible the car is scraping what is presented.

A: (Susan) There is an organization called “The Conversation” that takes academic work, edits it, and puts it up on their website, where it is free for any journalist worldwide to use.

Q: To Alessandro. Your work is awesome. I'm curious about some of your questions. Also, how are you going to select the participants in terms of representation, and how will that be managed over time?

A (Alessandro): It is a lot of work and we do want to make it available to others. The infrastructure can also be made available, though it would take lots of maintenance to run. As for the research hypothesis: it started with ads, but then we realized we were building a platform that might be more broadly useful for research.

The initial research looked at advertising. We believe that existing research is not taking key points into account. If you study one ad campaign, then you may not see situations where budget is being reallocated elsewhere. Behaviorally targeted ads have been shown to include lower-quality vendors that cost more than what can be found via search, which would not be good for consumers.

Q: For Alessandro. What evidence would be needed to convince advertisers that behavioral advertising does not work? What kind of evidence is needed to convince them to use a less privacy-invasive approach?

A: (Alessandro) Assuming that our hypothesis is true (the study has not yet run), it would take several studies from different groups; just this one probably won't do it. It could well be that if all marketers move over to behavioral advertising, they see the same percentage of engagement as before once everyone is doing it. For research, we need to consider a more holistic view, not just local results.

A (Serge): You will never get someone to understand something when their salary depends on them not understanding it.

A (Adam): The other problem is that Facebook makes their money off of this. They are disinclined to accept such results.


Session 7: Wickedness

Presenters: Max Abrahms, Luca Allodi, Miranda Bruce, Ben Collier, Sunny Consolvo, Diana Freed, Anh V. Vu

Diana Freed

Research on youth and their concerns and threats.

Sunny Consolvo (Google)

Long term interest in advice we give to end users and how experts assess that advice.

Luca Allodi

Cybercriminal enterprises as tech startups. How do criminals sort out all the technical aspects?

Looked at Genesis Market. Different online markets have different features, such as how involved moderators are and how vetted the participants are.

These online ecosystems are evolving. They are moving away from forum communities and toward services, with more transaction elements moving off the forum, such as onto Telegram.

Eindhoven Security Hub - infrastructure to analyze lots of data

Max Abrahms

Title: Operation Al-Aqsa Flood: Resolving the Puzzle

Interest in how governments are sometimes forced to perform actions, and questions like: does terrorism work? He had some good models… then the Al-Aqsa Flood happened and upended them.

It used to be assumed that people turned to political violence because it is effective, but that doesn't necessarily match observational reality. Earlier research showed that violence against government targets was more likely to be effective than indiscriminate violence.

The latest situation in Israel and Palestine is leading to some higher level victories for Palestine in regards to external sympathy and recognition of their state. There is an interesting dynamic about claiming violence against military vs civilian targets.

Miranda Bruce

Title: Mapping the Geography of Profit-driven Cybercrime

Interested in the “local dimensions” or the “human element” of cybercrime, with a focus on profit-driven cybercrime. Studying the area allows us to develop global metrics and better understand the drivers of cybercrime. Understanding the conditions that create a cybercrime hub is helpful long term.

Locating cybercriminals is hard. To get around the problem, they surveyed experts such as active cybercrime intelligence officers. They started with a core group of known experts and used them to get referrals to other top experts, plus lots of cold calls to ensure experts were represented worldwide.

The survey looked at: level of activity, impact, and technical skill.

Found that cybercrime originates in a wide number of countries (90+), but only 6 appear on all indexes.

Ben Collier

Title: Influence Policing

Ben has a new book: Tor: from the Dark Web to the Future of Privacy

Influence policing: social marketing, including behavioral advertising. Governments normally use these techniques to do things like get people to brush their teeth. The police also use this type of marketing.

Governments do behavior-change propaganda, such as “Lock your doors and windows”. Then there is a move towards nudging, particularly psychological nudges. The internet means that nudges can be targeted at specific sections of the population.

What do the campaigns look like? Campaigns look not only at how to target at-risk people with content, but also their parents. They also target potential friends of people who might engage in violence.

Anh V. Vu

Title: No Easy Way Out: The Effectiveness of Deplatforming an Extremist Forum to Suppress Hate and Harassment

Looked at Kiwi Farms, one of the largest forums known for online harassment, sadly tied to suicides and up for 10+ years. In September 2022, someone in London, Ontario, who was targeted by the forum for harassment started a campaign to get platforms to block the forum. This caused the forum to move across several platforms, and it was even blocked by Tier 1 networks.

We wanted to know how effective these blocking efforts were. An unintended consequence of blocking happened: the Streisand effect. Right after the blocking attempts, more people tried to visit the forum, likely to see what it was about. Looking at network traffic, it is easy to see drops in traffic as each network dropped them. As of publication, the forum was alive and well, having recovered.

When the forum was blocked, people moved to competitors and to messaging apps.

Taking down such a website is very hard. Especially when the owners are not arrested or otherwise prevented from bringing the website back up.

Session Discussion (Q&A)

Q: Is there some way to put measurements behind different security advice?

A: Part of the problem is deciding the criteria. Is it how common the issue is? Or how severe the problems are? How do you rank advice across such diversity?

Q: For Miranda. It's notable that the global south isn't represented. Also, there are countries I would expect to see based on redacted data that are not there.

A: We didn't see any evidence of biases in terms of experts ranking their own countries lower/higher, and similarly for countries their country might have tensions with. Some experts also opted out because they did not feel like enough of an expert.

Q: In prior SHBs we have seen presentations about organized crime. How much work have you done or planned around bringing in an understanding of organized crime?

A: Organized crime: organized, crime, and governance.

Q: For Ben. Thoughts on what is appropriate and not appropriate for governments to do? We are used to the surveillance, but this is behaviorally targeted behavior change.

A: Well, it is legal in the UK. (Discussion of legality in various countries.) Governments and law enforcement are rare among advertisers in that they can cause serious consequences, like jailing or even killing you.


Session 8: Past and Future

Presenters: Judith Donath, Simson Garfinkel, Matt Goerzen, Alexander Klimburg, Aileen Nielsen, Bruce Schneier

Judith Donath

Online, false news spreads faster and farther than true information. Why?

False stories are the high heels of the internet. Signaling theory is a model of how communication evolved around deception and honesty. It can be profitable to lie, but if all communication were false, then communication would be useless. So how does communication evolve to be reliable enough to function?

A reliable signal is affordable to give for those telling the truth and expensive for those that lie. Status, for example, can be signaled by buying an expensive car, which is a reliable signal of having wealth.

Claim: Fashion is a signal of status in an information society. By “fashion” she means anything you show to others. Fashion also needs to change, to show the ability to keep up. Fashions exist in all sorts of domains, from clothing to pets to art. To show status through fashion, you need knowledge of what is coming next.

Innovations are adopted because they have utility. Fashion is a status symbol, so it has more value when it has less utility: the less useful an object, the more it distinguishes you. Fake news is similar. True news is posted by everyone, so posting it does not help people stand out.

It is important to understand why fake information is spread. If people are spreading it for fashion-like reasons, status is the goal and accuracy is not the point.

Simson Garfinkel

  1. What is new with digitalcorpora
  2. What is differential privacy
  3. Agency

The Digital Corpora catalog can be used for research for free. It was built to assist in digital forensics education. The point is for everyone to pull from a single source and do comparable research. Also, there is no PII or illegal content. There are also solutions that can be used on request.

The SAFEDOCS program was ended :( but it resulted in an 8 million PDF corpus.

Educational impact of Digital Corpora.

Differential Privacy: persistence pays off. NIST SP 800-188, De-Identifying Government Datasets: Techniques and Governance, is a good citation on why de-identification does not work.

NIST SP 800-226 has a nice graph for visualizing privacy vs accuracy when setting the privacy loss parameter.
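For intuition on that trade-off, a minimal sketch of the standard Laplace mechanism (my illustration, not taken from the NIST document): the privacy loss parameter epsilon controls the noise scale, so smaller epsilon means more privacy and less accuracy.

    import numpy as np

    def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
        # Laplace mechanism: noise scale = sensitivity / epsilon
        return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

    results = {eps: dp_count(1000, eps) for eps in (0.1, 1.0, 10.0)}
    print(results)  # noisier answers for smaller epsilon (more privacy)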

Matt Goerzen

Title: Some Pasts and Futures

Thought about how to create a framework around data manipulation, things like baiting journalists and targeted advertising.

The US Census Bureau became concerned that there were risks to its primary purpose, issues like questions about citizenship. So they needed to stop treating themselves as trusted infrastructure, and developed an understanding of different stakeholders and what security in the census meant for each of them. Security for the census also includes the protection of the people taking the survey, including their perception of security.

Became interested in trolls and white hats. Who is positioned to pen test social media?

Looked at the emergence of white hat hacking, resulting in the publication “Wearing Many Hats”. Some concepts that came out included “security by spectacle”, where experts worked with the media. Via the media they argued that the real enemy is negligent companies rather than attackers.

Exploits developed as a type of scientific proof of the existence of a vulnerability. An exploit can be used for proof and also for attack.

Alexander Klimburg

Aileen Nielsen

Will privacy law change the privacy environment? (And can it?)

Current US privacy law is trying to protect against:

  • Lack of transparency
  • Lack of control
  • Surfeit of Friction

The general approach is to create more user rights, especially in regards to technical affordances, such as the ability to download data in a common format. Now we will be getting Global Privacy Control (GPC) and data access/portability rights.

Do these affordances impact the privacy of users?

Global Privacy Control has achieved some legal recognition, and its adoption is growing.
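Technically, GPC is a simple signal: participating browsers send a "Sec-GPC: 1" request header (and expose navigator.globalPrivacyControl to scripts). A minimal sketch (my illustration, a hypothetical Flask endpoint) of honoring it server-side:

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/page")
    def page():
        if request.headers.get("Sec-GPC") == "1":
            # Treat as a do-not-sell/do-not-share opt-out where law requires it.
            return "Tracking disabled per your Global Privacy Control signal."
        return "Default tracking behavior."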

Questions: now that there are legal teeth, does that change adoption of the affordances? If we had to pay people, how much would we need to pay?

Also, what is the success of the entire statutory scheme? This is where we get to data access/portability rights. Does different legal status impact the data that is being collected and stored?

Bruce Schneier

Captchas: I collect captcha jokes, please send.

Trust and AI. Trust is essential to society. The fact that we don’t notice the trust shows how much it is working.

Humans will make errors and assume AIs are friends rather than services. And companies will take advantage of this incorrect view.

Trust is an overloaded word. Interpersonal trust is like the trust of a friend. There is also social trust, which is trust in predictability. For example, Uber made taxi driving safer via star ratings.

We regularly make category errors around trust. Governments and corporations are services, not friends. We are about to make this same category error with AI. We may trust them, but will they be trustworthy? Did your chatbot recommend a vacation because it is best for you, or because the company wants to sell those vacation packages?

An AI personal assistant will work best if it knows everything about you, which will make it effective but also quite scary. And we will start thinking of it as a friend that can be trusted. It will act trustworthy, but it will not be trustworthy.

Bruce has been pushing for public AI models, something that would require public accountability, not just company business models. We can have AIs that are agents, not just double agents. It is essential that governments help with this, since governments have the potential to create social trust.

Session Discussion (Q&A)

Q: I have an Amazon Alexa that recommends things, and I say “Don't tell me about X again” and it says “I am sorry, I don't understand”. And I realize that I am in an abusive relationship :P There is some help in thinking about AIs as people, but people != friends. There is always that annoying person; thinking of Alexa that way may help.

A: Because the devices are so relational, you can feel pressure from machines. We are susceptible to trying to please machines.

Any public system can be co-opted. For example, an Alexa can turn lights on and off. There is an alternative called Home Assistant which is supposed to be free of surveillance, but there is now a company behind it, because governments don't have money for maintenance.

Q: To Aileen. Laws are also dependent on enforcement. Look at the number of VPPA cases filed; tracking is being removed because there is lots of litigation being filed.

A: (Aileen) I wonder cynically if this is a moment in time and it will decay. There is not always impact from enforcement. If you have to enforce against each firm, that is too expensive.

Q: Ownership. So who owns my Amazon search history? Is it me, is it them?

A: A long-term question, particularly in regards to LLMs. Lots of companies have been re-writing their contracts because of things like LLMs. There are some moves to start treating data as property. But this type of information is really a co-creation: I write a post on Reddit, but Reddit itself created the platform, the moderators put in work, and people reacted. It isn't just the author.

Q: For Judith. I love your framing. How does fast fashion fit into that framework?

A: Fast fashion is very interesting. Fashion started in roughly the 14th century. Fashion moved on a cycle of about a year because that is how long it took information to travel. The rate of information movement impacts the rate of fashion change.

Q: The Data Liberation Front from Google, where you could download your data and then they deleted it. The willingness-to-accept model: you can come to us and we can do business, but then you also have the right to leave.

Q: Fashion trends. There tends to be shame if you are doing things that are out of fashion. Is there some way to make misinformation unfashionable?

A: There are two risks with fashion: 1) being at the forefront is riskier, as people might not like your fashion; 2) the tail end is less risky, but no one is following you.

Q: Going low is a way of counter-signaling.

A: It's another fashion-related piece. Imagine a world where you can signal high/medium/low. You don't want to be copied by someone below you, so one strategy for those at the top is to signal low, because they can get away with it.

Q: Hypothetical. Imagine that in the future AI gets so good that bad actors start using it to cause harm at scale, such as fake news read out by a real news anchor (deepfake). Should we reach such a future, and the only solution is to use piles of biometrics, but GDPR stops us, then how do we save the world?

A: AI is already used at scale in cybercrime. It's an arms race. I doubt that GDPR will block solutions.

Fake news and fake photos have been around forever. Better phishing emails are not always the answer; sometimes bad phishing is intentional, to capture the types of people most likely to fall for the phish.

Weirdly, the demand for reliable information is actually rather low. How do you raise the demand for reliable information? For example, by building up local news. Global news that isn't about people's own lives causes them to treat it as entertainment.

Kami Vaniea
Associate Professor of Usable Privacy and Security

I research how people interact with cyber security and privacy technology.