The Darker Side of Digitalization

Image: Catherine Brooks is associate director of the UA School of Information, which brings together faculty and students focused on various areas of information science, including virtual reality, artificial intelligence and machine learning. (Photo: Bob Demers/UANews)

As driverless cars hit the road, how will they make moral decisions in life-or-death situations?

As wearable sensors and implantable medical devices are used more widely, how will we protect the personal data they collect?

As artificial intelligence advances, how will we ensure that there remains a place in the workforce for human laborers?

These might sound like questions from a science fiction novel, but such once-hypothetical quandaries are now very real, and University of Arizona experts say higher education needs to address them in both research and instruction.

Image: Kay Mathiesen's course on digital privacy involves students in collaborative exploration of privacy issues across an array of digital contexts, from social media to the internet of things. (Photo: Bob Demers/UANews)

UA faculty members Catherine Brooks and Kay Mathiesen, both in the UA's School of Information, are particularly interested in ethical issues involving information — including digital privacy, algorithmic discrimination and the spread of misinformation online.

They say addressing these and other complex challenges will require a collective approach that cuts across disciplinary lines.

"The grand challenges of the day are best resolved when we work together and focus on problem areas as opposed to disciplines," says Brooks, associate director of the School of Information, which brings together faculty and students focused on various areas of information science, including library science, virtual reality, artificial intelligence, data science and machine learning.

"We can't teach and talk about social issues like ethics, privacy and human freedom by relegating these topics into colleges," says Brooks, also an associate professor of communication and director of the UA's Center for Digital Society and Data Studies. "We can't only talk about citizens' rights in law or philosophy programs, for example. Issues of privacy or bias are something we should understand when creating new software, wearable technologies and other tools. We need people to understand how to build robots with social concerns in mind."

That sort of cross-disciplinary thinking is central to the School of Information, which launched in 2015 in the College of Social and Behavioral Sciences. The school is part of an international consortium of 89 such "iSchools" that work on issues at the intersection of people, information and technology. The UA's iSchool is the only one in the Southwestern U.S.  

The work of iSchools is especially timely now, as near-daily headlines raise questions about how data is being shared, interpreted, manipulated and protected.

Developing Tools With 'Baked-In' Protections

People are generating and sharing more data than ever before, sometimes without even realizing it.

Besides the obvious information we trail behind us — via things such as social media choices or online shopping habits — we also leave digital traces of where we've been (and when) via cellphones, Fitbits, cars, medical devices and other connected objects comprising a growing "internet of things."

As our physical and digital worlds become increasingly interconnected, it's becoming more difficult to manage the resulting explosion of data — a responsibility that still ultimately falls to an imperfect sentry, Brooks says: humans.

"There are large campuses on big plots of land that house big data stores that could be hacked and used for ill purpose by anyone around the globe," Brooks says. "So even if we trust our own government or tech leaders, there's an international hacking effort afoot that, in my opinion, puts us all at risk."

There are no simple answers to digital privacy concerns, which is why Brooks advocates a collective approach.

"We're still in a period where if there's a privacy concern, it gets relegated to the legal department downstairs," she says. "Instead, we need computer scientists, engineers and people building robots and working on information processing tools to presuppose the social challenges, like data management and user protection and citizen rights. We can't build any more tools without understanding the sociocultural context."

Brooks argues that developers of new technologies should consider "baked-in" privacy protections to safeguard end users.

The mobile messaging app Snapchat is an example of a platform that has incorporated privacy protections into its very design, with images and texts disappearing after they're viewed. It's not perfect — users can still capture Snapchat messages on their phone by taking a screenshot — but Brooks believes these sorts of ideas are smart, and users seem to appreciate this kind of innovation.
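The general idea behind such "baked-in" protection can be sketched in a few lines. The toy Python class below is a hypothetical illustration, not Snapchat's actual design: a message is deleted the first time it is viewed, or silently dropped once a time-to-live has passed, so disappearing is the default rather than an afterthought.

```python
import time

class EphemeralStore:
    """Toy message store with deletion 'baked in' -- a hypothetical sketch,
    not Snapchat's actual implementation."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._messages = {}  # msg_id -> (payload, created_at)

    def put(self, msg_id, payload):
        self._messages[msg_id] = (payload, time.time())

    def view(self, msg_id):
        entry = self._messages.pop(msg_id, None)  # removed on first view
        if entry is None:
            return None
        payload, created_at = entry
        if time.time() - created_at > self.ttl:   # expired before viewing
            return None
        return payload
```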

Image: Kay Mathiesen says that privacy is no longer a simple question of one individual "spying" and collecting personal information on another. (Photo: Bob Demers/UANews)

The Changing Nature of Privacy

An important consideration in any discussion of privacy is what the term actually means in today's cultural context.

In what is widely regarded as the first publication to advocate a right to privacy in the U.S., an 1890 Harvard Law Review article by Samuel Warren and Louis Brandeis defines privacy as the "right to be let alone."

Americans' understanding of privacy has remained largely unchanged, with most people still thinking of privacy as a surveillance issue, says Mathiesen, an associate professor whose research focuses on information ethics and justice.

But that definition doesn't really work anymore, she argues.

Mathiesen, a philosopher who teaches courses on digital ethics and digital dilemmas, says that privacy today is no longer a simple question of one individual "spying" and collecting personal information on another.

"With big data, there's not as much of an interest in you as an individual," she says. "What's interesting is you as a member of a group and patterns we can see in your behavior and inferences we can make. We are able to take all that data, look at patterns in the data, develop algorithms, and in the future we'll see someone's pattern of behavior and make certain inferences about them based on that information we gathered.

Image: "With big data, there's not as much of an interest in you as an individual," Kay Mathiesen says. "What's interesting is you as a member of a group and patterns we can see in your behavior and inferences we can make." (Photo: Bob Demers/UANews)

"It's not about the person following you around and getting all this information about you, the individual. It's about the algorithms that can make all these inferences and know all this stuff about you, even though you've never been surveilled individually."

The recent Facebook controversy with Cambridge Analytica illustrates the changing nature of privacy, Mathiesen says.

Facebook was widely criticized after private information from tens of millions of users was exposed, without their permission, to the political consulting firm Cambridge Analytica. The firm allegedly used that information to create personality profiles and target people with tailored content in an effort to influence the 2016 presidential election.

The firm wasn't targeting any one person but a certain "type" of person.

Mathiesen says the incident also illuminates how unbalanced the privacy power struggle has become.

"Privacy has always been a power issue. If it's reciprocal — I know private stuff about you and you know private stuff about me — that's not such a big deal," she says. "But with big data, it's very asymmetrical. Big corporations or organizations have the ability to have this knowledge about you that you have not voluntarily shared with them, and you don't have that ability to know that about them."

Law Lags Behind Technology

Transparency is one way to help balance the scale, Mathiesen says. If users are told how their information might be used, they can at least make a better-informed decision about whether to share it.

Mathiesen says standard user agreements aren't enough, with their reams of complicated legal language that go largely unread and are blindly accepted, even by privacy advocates. And even if user agreements were vastly simplified, transparency only goes so far, she says.

Image: Although laws often lag behind technology, that's not necessarily a bad thing, says Jane Bambauer, a professor in the UA's James E. Rogers College of Law. (Photo: Bob Demers/UANews)

Laws and policies also can offer some measure of protection. However, American laws often lag behind technology, and that's not necessarily a bad thing, says Jane Bambauer, a professor in the UA's James E. Rogers College of Law, whose research focuses on privacy law and big data.

"There is a pretty consistent pattern of people not trusting new technology when the technology winds up — on balance — being good," Bambauer says, citing as an example the efforts of privacy groups to stop caller ID from rolling out when the technology was new.  

"That's partly why there's some reluctance to jump in and create legislation. In the U.S., we've let technology run and monitor it to see what we should do after the fact," Bambauer says.

Public outcry, like the response to Facebook-Cambridge Analytica, can sometimes motivate legislation, but even then, crafting privacy laws can be tricky, Bambauer says.

"Laws are hard enough to design well when everyone agrees on an issue," she says. "No one wants to be physically harmed, but even safety laws are hard to get right. Privacy harms have a wildly different value to people, so designing rules that everyone thinks are correct is even harder.

"I think it makes sense for legislators to do the work of figuring out what the harm is and craft sufficiently narrow regulations so that those harms can be avoided without unnecessarily deterring new technology. That's, obviously, easier said than done."

Opting Out: Is It Even Possible?

Outside of law and policy is another, far-easier-to-control way to deal with privacy concerns: individual choice. 

Users can choose not to participate in certain digital activities, although Mathiesen and Brooks agree that's becoming more difficult in an increasingly digital world, especially when sharing certain types of information carries clear benefits that may outweigh the risks. Sharing location data on your cellphone, for instance, makes it easier to use mapping apps to find your way around, and engaging on social media may be your easiest way of connecting with faraway family and friends.

Image: When Catherine Brooks recently surveyed her students, she found that unless they or a family member had been a victim of identity theft, data theft or cyberbullying, they were largely unconcerned about their digital footprint. (Photo: Bob Demers/UANews)

Furthermore, under the new definition of privacy that Mathiesen suggests, opting out doesn't necessarily ensure protection, because so many others are opting in.

"It doesn't matter if you don't share your information," Mathiesen says. "If everyone else is sharing their information and we can figure out patterns of behavior, and then we can look at just one behavior of yours that we were able to capture, then we're able to make inferences about you."

How accurate are those inferences? That's another concern gaining attention.

Data Interpretation Isn't Foolproof

When algorithms work as they're supposed to, they can deliver real benefits to both organizations and the people they serve.

Online shopping, for instance, has been made easier for consumers by algorithms that use shoppers' purchasing history to suggest products they'll like. This creates a more personalized experience for the consumer, while increasing the merchant's chance of making a sale.
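One common way such suggestions are produced is item co-occurrence: recommend the products most often bought alongside something a shopper has already purchased. The snippet below is a minimal sketch with made-up purchase histories, not any particular retailer's system.

```python
from collections import Counter

# Made-up purchase histories: one set of products per shopper
histories = [
    {"yoga mat", "water bottle", "running shoes"},
    {"running shoes", "water bottle"},
    {"yoga mat", "resistance bands"},
    {"running shoes", "socks", "water bottle"},
]

def recommend(purchased_item, histories, top_n=2):
    """Suggest the items that most often co-occur with a purchased item."""
    co_counts = Counter()
    for basket in histories:
        if purchased_item in basket:
            co_counts.update(basket - {purchased_item})
    return [item for item, _ in co_counts.most_common(top_n)]

print(recommend("running shoes", histories))  # 'water bottle' ranks first; ties follow in arbitrary order
```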

Yet it's important to remember that algorithms — however well-meaning — aren't perfect and can vary greatly in quality.

To illustrate how data interpretation can go wrong, Brooks describes a scenario in which office light-switch data could be used to make inferences about employees.

Image: This campus scene makes the point: Opting out is difficult in an increasingly digital world, especially when there are clear benefits to sharing certain types of information. (Photo: Bob Demers/UANews)

Because the UA has the ability to track light-switch data in campus buildings, it's easy to see when and where lights were on, which has raised questions about the potential to use that data to track employee productivity, Brooks says. If data were to show that lights are rarely turned on in a faculty member's office, it might suggest that she is not working, while in reality, she could be a highly productive faculty member who simply cares about saving energy and prefers working with the lights off.
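In code, the flaw in that inference is easy to see. The sketch below uses invented light-switch logs and a deliberately naive rule that flags any office whose lights are rarely on as "unproductive," a rule that cannot tell an absent employee from a productive one who prefers working with the lights off.

```python
# Invented hours of "lights on" per weekday for three offices
light_hours = {
    "office_101": [8, 7, 9, 8, 8],  # lights on most of the day
    "office_102": [0, 1, 0, 0, 1],  # lights almost never on
    "office_103": [4, 5, 3, 4, 4],
}

def naive_productivity_flag(hours, threshold=2.0):
    """Flag an office whose lights average under the threshold.
    The rule conflates 'lights off' with 'not working'."""
    return sum(hours) / len(hours) < threshold

for office, hours in light_hours.items():
    if naive_productivity_flag(hours):
        print(f"{office} flagged as unproductive -- possibly a false conclusion")
```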

Scenarios like these are increasingly the focus of organizational conversations, as more types of data are collected and inferences are continually made about people and their behavior, Brooks says.

Because it's so easy for humans and machines alike to draw incorrect conclusions from data, and because algorithms are becoming more influential in our daily lives, it's important to consider how well algorithmic decision-making is working, she says.

New York City is trying to do that. Late last year, the city passed a first-of-its-kind algorithm accountability bill, assigning a task force to examine how city government agencies use algorithms to make decisions on everything from teacher evaluations to who gets out of jail. The goal is to increase transparency around the use of algorithms, and to ensure that algorithmic outcomes are just and fair.

Brooks recently co-authored a paper published in the Journal of Information Policy that explores issues of bias and discrimination in algorithms. She suggests that many types of data we've traditionally omitted in the interest of avoiding discrimination — such as race, age and gender — might actually help us check algorithms for bias or skewed trends that might disadvantage certain groups.
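One concrete way such attributes can serve as a check rather than an input is an after-the-fact audit: compare an algorithm's positive-decision rate across groups and flag large gaps. The sketch below uses invented decisions and an arbitrary tolerance; it illustrates the general idea of a selection-rate audit, not the specific method in the paper.

```python
from collections import defaultdict

# Invented audit log of (group label, algorithm's yes/no decision) pairs
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

# Positive-decision ("selection") rate per group
rates = {group: positives[group] / totals[group] for group in totals}
print("Selection rates:", rates)

# Flag the disparity if it exceeds a chosen tolerance (the 0.2 here is arbitrary)
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:
    print(f"Possible disparate impact: selection-rate gap of {gap:.0%}")
```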

"The big burning social question is: How, in an algorithmic world, do we avoid discrimination?" she says. "It is part of the human challenge, generally, to monitor the ways that our decisions are biased or discriminatory. All we've done now is add a machine, and we can't become complacent and assume the machine will do it well. When we feed in existing data from a world where we already know there are discriminatory practices, it may get repeated and actually accentuated."

Navigating a 'Fake News' Environment

In addition to privacy and big-data issues, Brooks and Mathiesen worry about the spread of misinformation online at a time when computational tools have made it easier to create and disseminate convincing false content in the form of articles, images and videos.

Image: "Laws are hard enough to design well when everyone agrees on an issue," Jane Bambauer says. "Privacy harms have a wildly different value to people, so designing rules that everyone thinks are correct is even harder." (Photo: Bob Demers/UANews)

There are a number of suggestions for how to tackle the "fake news" problem, ranging from online labeling or ratings systems all the way to government censorship. While some ideas have raised free-speech questions, Mathiesen argues that fake news is fraudulent content, so standard free-speech arguments shouldn't apply.

In most cases, Mathiesen says, fake news is profit-driven; those who generate and distribute it care more about ad money via clicks than they do about convincing readers to believe the content. Another common motivation for spreading misinformation is to cause polarization, chaos and distrust in traditional media, she says.

Even people who avoid social media, or who have the critical-thinking skills to be able to identify false content, are still indirectly affected by the spread of misinformation online, Mathiesen says. 

"Even if you're totally off Facebook, so you don't get the manipulation, other people are getting the manipulation, which means that we're in a democracy where people are not actually expressing their own views through their voting. They've been manipulated into certain kinds of views by people who have a tremendous amount of information about how to do that, based on big data," she says. "That's going to degrade our collective ability to govern ourselves democratically."

The fake news phenomenon also underscores how important it is to effectively communicate facts, including research findings by universities, Brooks says.

The UA will tackle that issue during a campuswide Science Communications Summit on April 20. About 100 faculty and staff from different disciplines will discuss challenges in communicating university research to policymakers, voters and others in the community.

Image: Eyes are everywhere, as evidenced by a camera positioned at the Main Gate entrance to the UA campus. (Photo: Bob Demers/UANews)

'An Ethical Obligation'

"It has become more difficult to discern the quality of the information we see, and in a world like that — one where even live videos look real when they are fake — we can't make decisions as informed humans, and that's a scary environment," Brooks says. "We have an ethical obligation as a democratic and diverse society to resolve these issues."

From fake news to big data to digital privacy concerns, addressing emerging and complex ethical issues starts with getting people to understand that the issues exist, Brooks says.

That can be challenging when many people are relatively unconcerned about the data they leave behind. When Brooks recently surveyed her students, she found that unless they or a family member had been a victim of identity theft, data theft or cyberbullying, they were largely unworried about their digital footprint. 

"It wasn't surprising," Brooks says, "but it was a nice reminder that this really is not on someone's radar unless we keep talking about it — which is why I keep talking about it."