
Thursday, November 10

11:00 – 12:30

PN 074

Social Media Companies’ Responsibility for Protecting Children’s Privacy from Commercial Data Collection

K. Montgomery

American University, Washington, DC, USA

As digital marketing and data collection practices continue to grow and diversify, protecting children’s right to privacy with respect to commercial data collection is becoming an increasingly important, yet insufficiently discussed, aspect of children’s rights (Montgomery, 2015). Academic studies and think-tank surveys have advanced our understanding of the nature and extent of the information that young people post, as well as their perceptions of risk. However, children’s privacy cannot be fully understood or adequately addressed without taking into account the broader market trends that are shaping the digital media system and young people’s involvement in it. The system’s core business model relies on continuous data collection and monitoring of online behavior patterns in order to target individual users; marketing and privacy are thus inextricably intertwined. This integration has deepened in recent years. With the growing influence of “Big Data,” social media platforms are now part of an evolving, integrated, ubiquitous media environment in which content, culture, and commerce are highly interconnected, reaching and engaging users across the Web, on mobile devices, and in the real world. A new generation of sophisticated analytics and measurement systems enables unprecedented profiling and targeting. These systems operate beneath the surface of social media platforms, without visibility or disclosure to users. Their implications are particularly important for teens, who spend considerable time engaging with social media. Curating personal profiles, communicating with online friends, and expressing opinions and emotions have become routine behaviors for young people. Many of these activities tap into the core developmental needs of growing up, especially through the tween and teen years, including identity exploration, peer relationships, and individual expression. Social media marketers design their data collection, analytics, and targeting strategies to take advantage of the special relationship that adolescents have with social media platforms. A growing body of research suggests that the biological and psychosocial attributes of adolescence may make teens particularly vulnerable to such data collection-based marketing techniques. This contribution provides an overview of current regulatory regimes that specify intermediary liability for commercial data collection in the United States and Europe, and examines them against international principles for protecting children’s rights. It argues for the development of safeguards that distinguish between practices directed at younger children and those used with adolescents, drawing on the developmental literature on each of these stages of childhood. Teenagers, for example, are at a stage in their lives when they need to establish autonomy, to explore their own unique identities, to forge and define friendships, and to find their voice in the broader social and political discourse. Social media privacy and marketing protections for teens should therefore not restrict their access to these important participatory digital platforms.

PN 075

Paper by Tijana Milosevic

T. Milosevic

University of Oslo, Oslo, Norway

This study examines the relative effectiveness of the policies and enforcement mechanisms that social media companies have in place against cyberbullying on their platforms. It relies on two theoretical frameworks: the privatization of the digital public sphere, which signals the increasing role of intermediary platforms in managing online speech (DeNardis, 2014; van Dijck, 2013), and the EU Kids Online model of risks and opportunities for children online (Livingstone, Mascheroni, & Staksrud, 2015). The role of these platforms in addressing cyberbullying is under-researched, and previous studies have not examined it at this scope (Bazelon, 2013; Mathias et al., 2015). By “cyberbullying policies” the author refers to the self-regulatory mechanisms (McLaughlin, 2013) that social media companies develop within their corporate social responsibility frameworks to intervene in existing cyberbullying incidents and to prevent future ones. These mechanisms include, but are not limited to: reporting tools; blocking and filtering software; geofencing; human or automated moderation systems, such as supervised machine learning; and anti-bullying educational materials. Hence they encompass both intervention and prevention mechanisms. Based on the author’s dissertation research, the study is a qualitative analysis of the corporate documents of fourteen social media companies that contain provisions against cyberbullying. The author also conducted twenty-seven in-depth interviews with representatives of these companies, as well as with representatives of e-safety NGOs in Europe and the United States who work with social media companies on designing these mechanisms and policies. The author signals concerns regarding transparency and accountability and explains the process through which these policies develop and influence regulators’ perceptions of what constitutes a safe platform. The results show the following: what companies describe as “advanced policies and mechanisms” shifts the responsibility for incidents from the companies onto the users; while the companies characterize advanced policies as “effective” and as “empowering users,” they tend not to provide evidence that users find them effective; and the policies that companies consider “effective” also help the companies handle their cases more efficiently, yet the companies do not discuss how this choice of policies affects their business models.

PN 076

Paper by Elisabeth Staksrud

E. Staksrud

University of Oslo, Oslo, Norway

The EU self-regulatory action line for protecting children online can be seen in relation to the self-regulatory framework advocated by the Council Recommendation on the Protection of Minors and Human Dignity of 1998 (European Council, 1998). This recommendation presents indicative guidelines for the national implementation of a self-regulatory framework for the protection of minors and human dignity in online audiovisual and information services. The purpose was “… to foster a climate of confidence in the on-line audiovisual and information services industry by ensuring broad consistency, at Community level, in the development, by the businesses and other parties concerned, of national self-regulation frameworks for the protection of minors and human dignity” (p. 52). E-safety non-governmental organizations are thus given an important role in the multi-stakeholder self-regulatory system of child protection in the European Union (Staksrud, 2013). Yet numerous aspects of their work remain insufficiently transparent to the public. For instance, many influential NGOs do not reveal the nature and details of their financial relationships with the industry, nor with government bodies. Under