Data Injustice and Ethnographic Sampling?

Posted on Jan 20, 2019 in Child Data Citizen Project

There is something profoundly unequal and unjust about the different ways in which dataveillance and digital profiling impact white, middle-class families on the one hand, and ethnic minority, migrant, or low-income families on the other. Data technologies and automated systems are not equal or fair. This emerges vividly in Eubanks’ book (2018), Automating Inequality.

It is because the impacts of data technologies are so unequal that, during the Child | Data | Citizen research project, I made a genuine effort to seek diversity as much as I could. My mission was to engage with a wide variety of parents living in London and Los Angeles, who came from different cultural, ethnic, and class backgrounds, as well as from family situations beyond the heteronormative model of the nuclear family.

The parents I ended up working with were extraordinarily diverse not only in terms of ethnicity (I interviewed Asian, Latino, Indian, Black, Indigenous, multiracial, and white parents) but also in terms of cultural and national heritage (Afghan-American, Mexican-American, Brazilian-British, Indian, German-British, Italian, Hungarian, Icelandic, Zimbabwean, Scottish, etc.). I worked with parents in low-income jobs (as nannies, cleaners, buskers, or secretaries) as well as parents in high-income jobs (as lawyers, film producers, journalists, or marketers). I also came across a plurality of family situations that challenged the heteronormativity of the ‘nuclear family’: I interviewed gay parents, parents who were divorced and had to juggle complex living arrangements, and single mothers who chose to adopt a child.

Participants were recruited mostly through informal networking strategies. Facebook was a fundamental recruiting tool: I posted in a variety of ‘parents groups’ and contacted possible participants directly depending on the area in which they lived. Beyond Facebook, I contacted mother meet-up groups and members of baby clubs, relied on word of mouth from parent to parent, and approached personal contacts who I knew worked in schools. I also visited local parks and libraries in different areas. What I sought in participants was diversity, but also, where possible, the continuity of rapport typical of ethnographic research.

Although I have always been committed to seeking diversity in participants, the Child | Data | Citizen project was largely ethnographic in its mission, perspective, and scope. This implied that, whilst the project was committed to seeking diversity, I simply do not believe that we can objectify ethnographic participants and ‘sample’ them as representative of a specific class, gender, or ethnicity, as one would in a large-scale project.

Here, I am of course influenced by understandings in anthropology for which ‘ethnographic sampling’ is something of an oxymoron. In fact, I do not believe that, as ethnographers, we can make representative claims about society, culture, and structures (see Werner and Bernard, 1994, and the critique of their work). What I do believe is that, through ethnographic interviewing and other techniques, we can capture the richness and variety of human experience.

During my research, I realised that the different parents I interviewed had completely different understandings of technological use and online privacy. This diversity was a defining feature of the families themselves, where partners, co-parents, relatives, and close friends held different ideas about data privacy, which of course created a great deal of tension in the family. Hence it was impossible for me to ignore this internal variety and diversity, or to use their experiences and interviews to make representative claims about their class, gender, or ethnic background.

To give you an example: a father I met was willing to share photos of his child on Facebook against the wishes of his wife, who was concerned that a lot of strangers would see the pictures. Yet although he had no problem sharing pictures on social media, he was very worried about health and baby apps monitoring his child. The father was white, unemployed, non-British, and of working-class origins, married to a middle-class British woman in a high-income job. They lived together with their child in a beautiful house, but in a poor suburban neighbourhood.

In thinking about their family situation, I realised that I could try to define them as ‘white middle-class’. Yet that seemed to reduce the richness of their lived experience and the intrinsic diversity of their family. Could I really take what he said about privacy as ‘representative’ of a given class, education, or social status? Wouldn’t I be ignoring the richness of technological engagement and personal experience?

As I found myself dealing with these questions about ethnographic research practice, I decided that I would solve my conundrum in the following way.

I would engage with a critical analysis of the inequality of data technologies and automated systems, because I believe that these questions have to be at the very heart of current debates about the complex relationship between data technologies and the making of datafied citizens. It is for this reason that I decided to combine ethnographic methodologies with platform analysis and to focus on the child as a data subject.

Focusing on the child as a data subject, which can be turned into a citizen subject, requires of course engaging with critical questions about data inequality and data justice. Far from understanding the datafied child as a universal category, we need to appreciate that each data subject is constructed in very specific and individually identifiable ways. We also need to appreciate that, as Taylor (2017) argued, data technologies often sort, profile, and inform action based on group rather than individual characteristics and behaviour (ethnicity, class, family, etc.). Therefore, when we think about the child as a data subject, we need to critically question how children are being profiled on the basis of the families and social groups they are brought up in, and how these forms of profiling can not only hinder their moral autonomy but also reproduce existing inequalities and stall social mobility.

Hence the Child | Data | Citizen project does not shy away from key questions about the structural inequalities of the data environments we live in and how these structural inequalities impact minorities in different ways.

Yet my research made me realise the impossibility of ethnographic sampling. In the collection and use of ethnographic data, I treat participants and their families as unique in their diversity, not as objective representatives of a specific class, race, or gender.

My critique of ‘ethnographic sampling’ does not mean that I believe that ‘sampling’ is wrong in general terms. In fact, there is plenty of exciting and thought-provoking research out there, both qualitative and quantitative, that would not have been possible without accurate and ethical sampling techniques.

I also do not want to suggest that it is impossible to study data inequality from an ethnographic perspective; Eubanks’ (2018) work, for instance, is largely ethnographic in feel and unpacks the experience of automated inequality. Ethnographers working on specific classes, communities, and social groups are in the best position to highlight the lived experience and implications of data injustice. Yet in the Child | Data | Citizen project I decided to focus on the diversity and plurality of datafied experiences, and in doing so I was faced with the challenge of ‘ethnographic sampling’.