Digital Sociology: The Internet, Social Media, Ethics and Life
Nick Osbaldiston
The key goals of this chapter are to:
- understand broadly what digital sociology is
- explain big data
- understand some of the concerns associated with big data
- understand what social media is and how it has changed our social interaction
- comprehend and explain some of the ethical debates around technology and digital worlds
- examine key concepts regarding robotics and ethics.
Overview
The digital world is a central feature of our everyday life. Our social interactions increasingly take place online in the form of social media, raising questions about how much of our sociality has changed. Furthermore, technology has fundamentally changed important areas of our society, such as how political power is exercised, how economies work, how our workplaces operate, and even how our families live. Yet these changes have brought consequences that this chapter will explore in detail. For instance, the uptake of social media brings with it issues of privacy and questions about what happens to your personal data. The increasing use of technology to track our online movements is as much an ethical as a sociological issue. In addition, the increasing use of robotics, and investment in them for the future, raises some larger sociological questions.
Digital Sociology: New Frontiers in Sociology
One of the things that sociologists strove to do in the early modern period was to understand how structural changes, including technological ones, impacted and potentially changed society. For someone like Emile Durkheim, changes to the organisation of work drastically challenged the social solidarity that people had with one another in modern life. For Karl Marx, technological changes in the workplace meant that workers in factories were increasingly alienated from the end product of their labour; in other words, work had become meaningless. For Max Weber, technological change brought with it the increasing rationalisation of modern life: things were becoming predictable, calculated, and measurable.
It stands to reason, then, that as sociologists today, our concerns with technological advances follow suit. As digital technologies find their way increasingly into our everyday lives, we have to ask big questions about what this does to our social relations, social structures, identities, and how we organise life generally. Digital technologies are a major part of everything we do now, from work and study through to entertainment, socialising, and even intimacy.
For renowned Australian sociologist Deborah Lupton (2013; 2015), these changes to our modern world need to be understood and explored sociologically. Digital sociology for her,
provides a means by which the impact, development and use of digital technologies and their incorporation into social worlds and concepts of selfhood may be investigated, analysed and understood. (Lupton, 2013, p. 5)
Watch Deborah Lupton in the following interview [3:52] define further the everyday digital objects that we encounter, which are the things that digital sociologists study.
Sociologists in this area of research have been investigating the impact of digital worlds on social lives since the 1990s (Lupton, 2015, p. 5). As Lupton (2015, p. 5) identifies in her introduction to digital sociology, areas such as “cybersociology” and the “sociology of the internet” have been well studied for some time. However, in more recent years, and largely due to the expansion of the internet along with a significant uptake of smart devices such as smartphones, the need to understand these issues is even more pressing. We only need to look at the upswing of users on Facebook to realise that something like social media has dramatically impacted our everyday lives. With almost 3 billion users in 2022, Facebook is easily the most used social media platform. Importantly, as we will see, Meta, Facebook’s parent company, which also owns Instagram, WhatsApp and Oculus, reported an annual revenue in 2021 of $117 billion USD, an increase of over $30 billion on the previous year. For comparison, British Petroleum’s annual revenue for 2021 was roughly $165 billion USD. More recently, billionaire businessman Elon Musk bought the Twitter platform for a reported $44 billion USD. Clearly, social media is big business now as well!
At a broad level, digital sociology engages with how these new industries, means of communication, and modes of production and consumption impact on our social, cultural, political and economic lives. Sociologists in this area engage in critique of these areas, asking how much they have changed our society and the structures that surround it. Furthermore, many social scientists examine questions of morals and ethics in relation to digital issues. For instance, in recent times there has been a significant upswing in the development of artificial intelligence within our smart devices and in the growing Internet of Things (IoT). Social science and humanities scholars, along with those who study technology, ask difficult questions about the ethics and moral limitations of these technologies, especially in relation to questions of legal/moral responsibility (see the final section of this chapter). Other sociologists, such as Possamai-Inesedy and Nixon (2017), contend that sociologists ought to be involved in critiquing the digital/technological industries that market big data, and how this creates issues for individual/collective privacies (Lupton, 2015). Within this space, a section of the social sciences is examining the way that data is used for population surveillance and how this is increasing in contemporary times. Still other sociologists examine everyday changes to our social worlds by examining the nature of social media and how it changes or adapts everyday communication (Hogan, 2010; Murthy, 2012). In the rest of this chapter, we will cover these issues.
🛠️ Sociological Tool Kit
What is the world’s social media uptake like today?
- How many users are there in the world of social media today?
- What are the reported increases or decreases in the number of users on social media?
- Do some more digging on the internet: how many social media accounts do we have on average in Australia and New Zealand?
- Why do you think social media is so popular today? How would you understand this sociologically?
Social Media: Changing Social Interaction?
Most of us reading this text are probably only a hand gesture or a keyboard click away from logging into a social media space. When we engage with social media, we can ask how classical and modern sociological theory might make sense of it. Is social media part of a complex system of identity wherein we now perform aspects of ourselves online? Or is it something more sinister, as we will explore later?
Dhiraj Murthy (2012), a sociologist who specialises in digital media, utilises the work of Erving Goffman (1959) and his dramaturgical approach to unpack social interactions online. Specifically, Murthy (2012) argues that the ritualisation of speech patterns we experience in everyday life, which Goffman unpacked in his work, maps neatly onto the way we engage in online interaction. The three principles he suggests carry over are ritualisation, participation frameworks and embedding.
Ritualisation refers to the unconscious ways that we gesture or share meaning across conversations without much explanation. Goffman (1959) describes this further as the different verbal and non-verbal ways we communicate with others in our conversations. For instance, I may walk into class one day, holding my arms across my chest, and say out loud, ‘BRRR!’. You, as a member of that conversation, would understand that this is not some sort of random verbal noise; rather, I am indicating through this small act that I am cold. We can think of many forms of these sorts of micro-ritualised practices that are Australian in context: nodding your head as you walk past a stranger to indicate hello, smacking yourself in the forehead when you do something wrong, touching something wooden and saying ‘touch wood’ (a form of superstition), and even saying ‘g’day’ in our everyday verbal slang.
Murthy (2012) takes this into the online environment, arguing that we recreate these forms of ritualisation in social media spaces. Using Twitter as an example, he writes the following:
Though the gestural conventions may be mediated through graphical avatars, emoticons, or even unintended typed characters, these can be considered ‘gestures’ and they are laden with meaning. For example, on Twitter, one can decipher a sigh or pause through subtle and not-so-subtle textual cues, e.g. ‘…’ for an explicit pause. (Murthy, 2012, p. 1067)
We can see this sort of behaviour in other ways too. For instance, in a private conversation with a friend about something annoying, you might breathe out in a sigh to indicate displeasure. In a textual conversation, however, this is not possible (unless it is a video conversation), and so we might write ‘ughhh’ or use an emoticon to signify the sigh. As Murthy (2012) points out, we utilise a whole heap of ‘non-verbal’ ritualisations in the online world, including gifs, memes, emoticons, certain acronyms (e.g. lol, smh, omg) and hashtags, to convey things that are beyond the written word. The point Murthy (2012) is trying to make here is that we emulate online the sorts of rituals that we all participate in offline. However, we might ask whether we have started to construct our own forms of online ritualisation that are becoming norms in our digital worlds. For instance, are there unwritten norms now established around how long one should take before answering someone’s online message? What about how we respond?
Goffman (1959) also focuses on conversational participation frameworks. In your real-life conversations, he argues, you have both focused and unfocused interactions. Focused interactions are conversations that take place within a group or couple and are centred on the people in the conversation alone. Unfocused interactions relate to how we act in a larger setting where we are gathered with others. For instance, in a bar watching sports on the television, people might be shouting and debating decisions by players and officials with total strangers. Murthy (2012) argues that we take this idea into social media with us. We have focused interactions through methods such as private messaging. But, like the bar example, we might be watching a sports team on the television while trying to have unfocused interactions with others via hashtags. For instance, televised games now often provide a hashtag for joining the discussion online as the game progresses (and afterwards). This unfocused encounter allows us to converse with total strangers online, albeit sometimes not in nice ways! One of the problems of social media, however, is how little control you have over who sees your posts; sometimes the algorithms of social media (see below) will guide people to your post whom you never intended to involve. For scholars like Murthy (2012), this creates potential problems. But is that so different to real-life conversation? Or has social media significantly changed how we ‘socially’ engage with the world?
The final area for Goffman (1959) is the role of ‘embedding’ in conversation. For him, embedding suggests that there are contexts and times where speech is not necessarily our own private talk; for instance, if you are a speaker or representative of a political party, you speak on behalf of a group of people. Our speech (and actions) in real life are also embedded in a time and space that, in the past, might not be remembered years later out of context. In the area of social media, however, Murthy (2012) argues that the embeddedness of the things we post may not be removed so easily. In virtual spaces, we meet not in physical space but in time. Thus, as he argues (Murthy, 2012, p. 1068), social media posts can be copied, held in reserve, and then brought forward at later dates. Furthermore, they can be taken to represent the words of the person themselves, rather than the context of the institution/organisation that the person is representing. We have seen many incidents where people’s social media posts from years earlier have been reposted by others to challenge their political, ideological or social position. Often this is done, in Goffman’s (1959) terms, in order to spoil their public identity and delegitimise them in political/social discussions and debates. Or simply to embarrass people. There is always, of course, a darker side to social media!
🎞️ Video: Do social media rituals work in real life?
Watch this humorous experiment from creator Jena Kingsley and ask a few questions:
- Is Murthy (2012) correct? Have social media and real life crossed over?
- Secondly, what do you think social media is doing for our society?
- What do you think of Sherry Turkle’s argument that social media is making face to face conversations difficult?
Social Media Performance or Online Curation?
Several scholars utilise Erving Goffman’s (1959) The Presentation of Self in Everyday Life to understand social media as a type of social performance (see for instance Agger, 2015; Bullingham & Vasconcelos, 2013; Hogan, 2010). Common among them, following Goffman’s (1959) dramaturgical approach to social interaction, is the notion that we present ourselves on a front stage, which is where our social media profiles and posts appear, and keep hidden away the backstage, the things we do not want people to see. Goffman’s (1959) argument is that the front stage is a performance where we try to convince the audience of a role or identity that we have. The audience responds negatively or positively to this, and we in turn respond to them by negotiating our projected self on the front stage. In an online world, this is quite easy to adapt when we consider how we place certain profile pictures, backgrounds, likes/dislikes and other personal identifiers on our social media profiles. Political parties even do this now, carefully staging their identities and politics on their platforms while trying to keep hidden from view all the things they do not want people to know about their parties.
However, others like Bernie Hogan (2010, p. 381) contend that there is no longer an easy distinction between the backstage and the front stage of life in social media. In some cases, individuals open up for everyone to see what would typically have been kept backstage. In other cases, we might be lured into oversharing on the internet with random strangers with whom we have little in common (Agger, 2015). Of course, broader and controversial topics like racism, sexism, sexting, online pornography and other facets of social media could well bring the backstage of people’s lives to the fore, due to the lack of face-to-face interaction (Hogan, 2010).
Nevertheless, Hogan (2010, p. 381) argues that the content we post and use on social media to present ourselves cannot simply be considered performance. Performances in everyday life are usually contextual. For instance, I might wear a suit and tie to work, but I will change out of them later for a dinner with friends. Conversely, when we present ourselves on social media, it tends to be a “recorded act” that persists beyond the moment of performance (Hogan, 2010, pp. 381-382).
Instead, Hogan (2010, p. 382) would have us consider that social media spaces, like Facebook and Instagram especially, are now “exhibition sites” where we curate our lives online. He describes this as follows:
An exhibition site can now be defined as a site (typically online) where people submit reproducible artifacts (read: data). These artifacts are held in storehouses (databases). Curators (algorithms designed by site maintainers) selectively bring artifacts out of storage for audiences. The audience in these spaces consists of those who have and those who make use of access to the artifacts. This includes those who respond, those who lurk, and those who acknowledge or are likely to acknowledge. (Hogan, 2010, p. 382)
Curators of a museum or art gallery take artefacts or artworks and position exhibitions around the building according to how they want the objects to be viewed. Alongside this is usually a blurb or story about the artefact/artwork, where it came from and what is important about it. Could we say our social media profiles are similar?
If we follow the metaphor, your online profiles, and the way you interact and engage with them, are your ongoing collection of digital artefacts exhibiting your life. We order them according to what we want people to view first, or in different areas. We arrange them in different chronological orders, potentially to represent how our life has progressed. We also use past digital artefacts, as a historian might use archives, to display past events, moments, emotions and so on. You might want to ask whether your social media page represents a type of museum or gallery of your life. What artefacts do you use to tell a story about who you are?
Hogan (2010) contends, however, that in digital spaces there are now mediators who automatically curate objects for you. He writes, “curators mediate our experiences of social information” (Hogan, 2010, p. 381; cf. Agger, 2015). These mediators are the algorithms, or the design, of the social media application you use, which organise the presentation of your site in certain ways. This includes filtering your profile to display certain artefacts, ordering them in such a way that only select friends you engage with often will see your posts, and of course, selling data about you to third parties who then curate online advertisements back to you (see below). Consequently, argues Hogan (2010), we are not the sole curators of our online worlds. We are now co-curators with the platform itself and the programmers behind it (such as Facebook). Once our artefacts are online, they are subjected to different curations that occur with and without our knowledge. The question then becomes how much control you have over that data artefact once it is online.
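To make the idea of algorithmic co-curation concrete, here is a minimal sketch in Python of the kind of scoring logic a platform might apply. The scoring rule, field names and toy data are all invented for illustration; the real ranking systems of platforms like Facebook are proprietary and vastly more complex.

```python
from datetime import datetime, timedelta, timezone

def curate_feed(artefacts, viewer, limit=2):
    """Toy co-curator: the platform, not the poster, decides which stored
    artefacts a viewer sees and in what order (made-up scoring rule)."""
    now = datetime.now(timezone.utc)

    def score(post):
        affinity = viewer["interactions"].get(post["author"], 0)  # friends you engage with rank higher
        age_days = (now - post["created"]).days                   # older artefacts sink down the feed
        boost = 100 if post.get("sponsored") else 0               # paid content jumps the queue
        return affinity * 10 - age_days + boost

    return sorted(artefacts, key=score, reverse=True)[:limit]

viewer = {"interactions": {"alice": 12, "bob": 1}}
posts = [
    {"author": "alice", "created": datetime.now(timezone.utc) - timedelta(days=30)},
    {"author": "bob", "created": datetime.now(timezone.utc) - timedelta(days=1)},
    {"author": "brand_x", "created": datetime.now(timezone.utc), "sponsored": True},
]
print([p["author"] for p in curate_feed(posts, viewer)])  # ['brand_x', 'alice']
```

Note how the poster never chooses this ordering: the sponsored post outranks everything, and a recent post from a weak tie is dropped entirely. That, in miniature, is Hogan’s point about co-curation.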
Big Data: Surveillance, Consumption and Production of Online Lives
As Lupton (2015) explains in her introduction to digital sociology, our lives are increasingly being lived online (see also Christine Hine’s work [2015]). As technological advances were made to the world wide web (also known at one stage as the information superhighway), and the internet moved to Web 2.0, a significant shift occurred in how we used the online world. Web 2.0 technology created the opportunity for individuals on the internet not only to consume information (a one-way direction), but also to produce information/data/artefacts themselves (a two-way direction). Thus, we are now not only consuming data, but also producing it (Beer & Burrows, 2007; Hogan, 2010; Lupton, 2015).
Consider the newspaper, for instance. With the paper version that you might receive on your doorstep in the morning, you can engage with the text in a one-way fashion only; the publisher controls what you read. In the online format, though, news is two-way: we can take a story, comment on it, republish it on our social media accounts with our own opinions, critique it, and have online discussions about the issue with others. We have added to the story itself, creating our own digital artefacts. Thus, we are no longer simply consumers of information, but producers as well; hence the term “prosumer” (Lupton, 2015, p. 22). Yet along with this come some difficult ethical concerns about the use of our data.
One of the common terms you might hear in relation to the internet is big data. This refers to the increasing volume and variety of data, and the speed at which it is accumulated and stored by corporations across the internet. The data is so diverse and so large that it is described as big data. Most of this data is statistical, and it is gathered each time we utilise the internet, social media, or other online platforms. The explainer video below describes what big data is and how it is gathered.
Andrej Zwitter (2014) suggests that there are three different players in the world of big data. The first are the collectors. These are corporations, such as Google or Meta, that store data we supply through our various interactions online, such as online searches, likes and dislikes on different posts, demographic information (such as age, gender, location, etc.), locations of check-ins on smartphones, and other metadata. The second group are the utilisers. These are companies that pay collectors for access to this data in order to make money from it by understanding more about product users, their needs, their likes, and so on. These are usually marketing companies that work for or within corporations to maximise profits and understand what consumers want. The final group are the generators, which are simply those who engage with internet spaces and contribute (in most cases unknowingly) to big data. These are people like you and me, everyday consumers of the internet, who also produce data. And this is not simply what we do in the online world directly. Different devices, such as smart watches, that connect to databases and the internet are also collecting our information. This can also include loyalty cards (see the case study below from Deborah Lupton). If you think about this carefully, you are now looking at one of the largest focus groups that has ever existed in the world!
Consider this example. Let’s say that one of us is a 25-year-old male who lives in Brunswick, Melbourne, Australia; he is in a relationship with a 26-year-old female and is fully employed at a university campus, with a degree in economics. This male, like many in and around his neighbourhood, also likes basketball and has a love for a specific style of shoe from one company. One day, he searches for that basketball shoe (as a generator), looks over the different colour options, clicks on different items and styles, and finally orders a pair over the internet, paying with his credit card. Now imagine that you are a big data collector, and you have 5,000 people in the same area all looking for basketball shoes in different styles, colours and so on. Some have the same demographics as this man, but some do not. Suddenly, you have a large and immediate understanding of what everyone likes and does not like. You sell this data to the shoe company’s marketing team (the utilisers), who then analyse it to design future shoes that align with the interests of their target demographic (basketball players). This is the nature of big data.
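A minimal sketch of what a collector’s aggregation step might look like, using invented clickstream events and made-up segment labels. Real pipelines work over millions of events, but the grouping logic is the same in spirit: raw individual behaviour becomes a demographic summary that can be sold on.

```python
from collections import Counter, defaultdict

# Invented clickstream events of the kind a "collector" might store.
events = [
    {"user": 1, "age": 25, "suburb": "Brunswick", "viewed": "AirMax-red"},
    {"user": 1, "age": 25, "suburb": "Brunswick", "viewed": "AirMax-blue"},
    {"user": 2, "age": 27, "suburb": "Brunswick", "viewed": "AirMax-red"},
    {"user": 3, "age": 44, "suburb": "Carlton", "viewed": "Runner-grey"},
]

# Aggregate viewing preferences by demographic segment: this summary,
# not the raw events, is the product a "utiliser" pays for.
segments = defaultdict(Counter)
for e in events:
    segment = ("18-30" if e["age"] <= 30 else "31+", e["suburb"])
    segments[segment][e["viewed"]] += 1

for segment, preferences in segments.items():
    print(segment, preferences.most_common(1))
# ('18-30', 'Brunswick') [('AirMax-red', 2)]
# ('31+', 'Carlton') [('Runner-grey', 1)]
```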
Critics like Zwitter (2014) argue that this is morally contentious, as it assumes that people are aware of, and consent to, their data being taken like this (Beer, 2018; Lupton, 2015). Zwitter (2014, p. 4), like many others, worries that there is an ethical dilemma here, in that “free will and individualism” are still assumed to exist in online spaces. Ask yourself: when did you sign up for your data to be taken in this manner?
For Zwitter (2014), while this approach to obtaining your consent is legal, it is nevertheless unethical, as people are not really aware of what they are signing up for. It also potentially creates situations where one’s privacy could be breached, as we have seen in several recent circumstances where businesses that store personal data have been hacked and held to ransom by anonymous online groups (see a list of data hacks in recent years).
However, the broader issue for Zwitter (2014) and others is that of data surveillance and the predictive power of analytics and statistics. As he argues,
this information gathered from statistical data and increasingly from Big Data can be used in a targeted way to get people to consume or to behave in a certain way, e.g. through targeted marketing. Furthermore, if different aspects about the preferences and conditions of a specific group are known, these can be used to employ incentives to encourage or discourage a certain behavior. (Zwitter, 2014, p. 4)
As more information is taken, and cross-analysed, corporations can predict with greater accuracy how to market specific products to specific groups of people. For instance, Lupton (2015) shares a fascinating but troubling example of this from Australia’s grocery chain Woolworths:
Woolworths supermarket chain also owns an insurance company and petrol stations and has a 50 per cent share in a data analytics company. Using the combined databases drawn from their customer loyalty programme and insurance company and employing the skills provided by their data analytics company, Woolworths were able to demonstrate that they could target consumers for insurance packages based on their supermarket purchasing habits. They found that customers of their supermarkets who purchased higher quantities of milk and red meat were better car insurance risks than those who purchased high quantities of pasta and rice, filled their cars with petrol at night and drank spirits. Based on the information in these datasets the two groups of customers were then targeted for offering different insurance packages involving different premium costs. (p. 97)
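As a rough illustration of the kind of data linkage Lupton describes, the sketch below scores invented loyalty-card customers with a made-up rule that mirrors the reported pattern. The customer records, item names and weightings are all hypothetical; the actual Woolworths analytics are, of course, not public.

```python
# Invented loyalty-card baskets linked to the same customers' insurance records.
customers = [
    {"id": "c1", "basket": {"milk": 8, "red_meat": 5}, "fuel_at_night": False},
    {"id": "c2", "basket": {"pasta": 6, "rice": 4, "spirits": 3}, "fuel_at_night": True},
]

def car_insurance_risk(customer):
    """Made-up scoring rule mirroring the reported pattern: some shopping
    habits correlate with lower driving risk, others with higher risk."""
    basket = customer["basket"]
    risk = 0
    risk -= basket.get("milk", 0) + basket.get("red_meat", 0)  # 'low-risk' pattern
    risk += basket.get("pasta", 0) + basket.get("rice", 0)     # 'high-risk' pattern
    risk += 5 * basket.get("spirits", 0)
    risk += 10 if customer["fuel_at_night"] else 0
    return risk

for customer in customers:
    offer = "discounted premium" if car_insurance_risk(customer) < 0 else "loaded premium"
    print(customer["id"], offer)  # c1 discounted premium, c2 loaded premium
```

Notice that neither customer ever told the insurer anything about their driving; the premium offer is inferred entirely from linked grocery data.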
For some sociologists, this approach to modern life is creating a type of digital panopticon, where business is now the surveillance mechanism of everyday life. Campbell and Carlson (2002, p. 587), for instance, predicted over 20 years ago that the internet would develop into “Big Brother” capitalism, with “economic imperatives” “driving advertising and marketing firms to expand the technologies and techniques of surveillance”. Unlike other analyses of power, however, “surveillance” inside the marketplace requires the willingness of the participant, which for them raises the question of “how corporate actors compel individuals in the marketplace to engage in self-surveillance (and self-disclosure) when there is no immediate threat of coercion” (Campbell & Carlson, 2002, p. 591). In other words, how do companies like Woolworths in the case above, or Facebook, or Google, convince us to give away personal information as we do?
🛠️ Sociological Tool Kit
Discussion point: Why do we engage with surveillance willingly?
Why is it, do you think, that people are willing to give out information about themselves online? How might we understand this sociologically? What do you think of the following quote by Campbell and Carlson (2002, pp. 591-592):
Though the inequitable power relationship between consumers and suppliers constitutes the context of online surveillance, the mechanisms by which marketers frame participatory surveillance as a reasonable transaction cost are sufficiently subtle as not to be evident to consumers. In other words, individuals are not necessarily aware of the degree of inequalities in their relationship with suppliers because marketers and advertisers have effectively concealed the consumerist Panopticon.
For Zwitter (2014, p. 4) and others, this type of behaviour is concerning for two reasons. Firstly, we rarely know what we are agreeing to when we accept the terms and conditions that allow this data to be taken and sold. Secondly, this approach violates group privacy: the privacy of our demographic (as shown in the Woolworths example) is breached, and companies can use the information to anticipate potential behaviour and sway activity one way or the other. Predictive statistics like these are not used simply for marketing purposes, though. It is increasingly the case that the state utilises big data to predict behaviour in relation to crime, health, and other matters (Lupton, 2015).
Sociologists, and other critics, are also increasingly concerned with the predictive power of statistics, especially with the development of algorithms that run in the background collecting information about us: algorithms coded in such a way as to target particular areas, collect and codify digital data about internet users, and prioritise certain data over others. Importantly, as Lupton (2015, p. 102) shows, these algorithms (written by humans) “play a part in the configuring of new data”. She writes,
algorithms play an influential role in ranking search terms in search engines, ensuring that some voices are given precedence over others. From this perspective, the results that come from search engine queries are viewed not solely as ‘information’ but as social data that are indicative of power relations. (Lupton, 2015, p. 102)
She then uses the case of Google’s PageRank algorithm, which influences which websites show up, and in what order, when searching for a particular thing online. This can have a significant impact on what information is shown, and what is hidden or never noticed by the user.
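The core of PageRank-style ranking can be sketched in a few lines: pages are ranked by the stationary distribution of a “random surfer” who mostly follows links and occasionally jumps to a random page. The toy link graph and parameter values below are illustrative only; Google’s production system draws on far more signals than this classic algorithm.

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-8, max_iter=100):
    """Rank pages by the stationary distribution of a 'random surfer'
    who follows links with probability `damping`, else jumps anywhere."""
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=1)
    # Column-stochastic transition matrix; pages with no out-links
    # (dangling pages) are treated as linking everywhere equally.
    transition = np.where(out_degree[:, None] > 0,
                          adjacency / np.maximum(out_degree[:, None], 1),
                          1.0 / n).T
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * (transition @ rank)
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

# Toy web of four pages: adjacency[i, j] = 1 means page i links to page j.
links = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 0],
                  [1, 1, 0, 0],
                  [1, 0, 1, 0]], dtype=float)
# Page 0, linked to by every other page, receives the highest rank.
print(pagerank(links).round(3))
```

The sociological point is visible even in this toy: whoever writes the scoring rule decides which voices surface first, which is why Lupton treats search results as social data rather than neutral information.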
One of the major issues with algorithms and their predictive power is that they can start to reflect racial, gender or other biases. For instance, a systematic literature review by Favaretto et al. (2019) of 61 papers that engaged with discrimination through big data found that algorithms programmed to mine data on demographics for marketing purposes can lead to underrepresentation of certain vulnerable groups, “which might result in unfair or unequal treatment”, or overrepresentation, which might result in increased attention and scrutiny (Favaretto et al., 2019, p. 13). Watch this short clip [4:40] from a lecture given by Sandra Wachter on privacy and big data problems.
Algorithms also play a role in delivering information and predicting our own behaviour and needs. Lupton (2015), for instance, describes the ways in which algorithms on social media accumulate knowledge about our preferences, tastes, and political and social views, and start to ‘suggest’ certain posts to us. For instance, you might be a strong advocate for action on climate change. Once the algorithms of platforms such as YouTube accumulate this information about you from your searches, they will begin to automatically suggest videos related to your position. Australian sociologists Possamai-Inesedy and Nixon (2017, p. 871) argue that this sorting of information and knowledge is damaging to democracy, as it can exacerbate existing political/social polarisation. They write,
digital vigilantism indicates that big data’s social impact is not simply a radical shift for users but also an amplification of existing tendencies […] (there is) an increase in polarisation over social issues, as groups on either side of a debate cease communicating with each other. (Possamai-Inesedy & Nixon, 2017, p. 871)
In other words, if you are inclined to a particular political position on an issue, and the algorithm behind a certain social media platform understands this about you and continues to feed you information and connection with like-minded people, there is little chance for communication between groups. Polarisation therefore continues as we “are led by algorithm” into “echo-chambers or filter bubbles” where we “find only the news we expect and the political perspectives we hold dear” (Possamai-Inesedy & Nixon, 2017, p. 827). This, they argue, is “likely to limit cultural experiences and social connections” and “close down interactions except for those that fit existing patterns” (Possamai-Inesedy & Nixon, 2017, p. 827). In short, the more time we spend online, the more time we are going to spend with those we agree with. What does this mean for democracies?
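A toy sketch of how a naive engagement-maximising recommender produces exactly this filter bubble: it scores candidate posts purely by overlap with topics the user has already engaged with, so dissenting or unfamiliar content never surfaces. The tag names and data are invented; real recommenders are far more elaborate, but the reinforcing loop is the same.

```python
from collections import Counter

def recommend(history, candidates, k=2):
    """Naive engagement-maximising feed: score candidate posts by overlap
    with topics already engaged with, reinforcing existing preferences."""
    preferences = Counter(tag for post in history for tag in post["tags"])
    ranked = sorted(candidates,
                    key=lambda post: sum(preferences[t] for t in post["tags"]),
                    reverse=True)
    return ranked[:k]

history = [{"tags": ["climate", "activism"]}, {"tags": ["climate", "policy"]}]
candidates = [
    {"id": 1, "tags": ["climate", "protest"]},
    {"id": 2, "tags": ["sport"]},
    {"id": 3, "tags": ["climate", "policy"]},
]
# Climate posts dominate the feed; the unrelated post never surfaces, and
# every new click feeds back into `history`, narrowing it further.
print([p["id"] for p in recommend(history, candidates)])  # [3, 1]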
Despite these worries about big data’s potential for harm, there are benefits as well. As Lupton (2015, pp. 98-99) shows, there are ongoing uses of big data ranging from tracking improvements in farming through to understanding and tracking progress on poverty reduction efforts globally. Furthermore, “Google now offers several tools that draw on data from Google searches” that provide insights into emerging health outbreaks, such as “dengue fever” (Lupton, 2015, p. 99). Through the Internet of Things (see below), we can also start to use big data to predict natural disasters, climate change impacts and other matters of scientific importance.
Politics, Inequalities, and the Digital World
With our lives lived more fully online, within spaces like social media, the opportunity to express identities, and also opinions, has grown significantly. Petray (2011, p. 924), for instance, suggests that with the advent of Web 2.0 technology we now potentially have a “soapbox from which anyone may shout to the world”. However, in her work she also warns that this could well result in society suffering from “opinion overload”, where we grow apathetic to the different voices online (Petray, 2011, p. 925). In addition, there is potential (as we explored above) for digital political polarisation on topics, which reduces the capacity for proper conversation and discussion on especially sensitive issues.
Despite this, and despite the ‘hack’ of democracy shown through the Cambridge Analytica case, there is a growing body of literature on the promise of social/digital life in assisting civic life. Manuel Castells (2015), for instance, in his book Networks of Outrage and Hope: Social Movements in the Internet Age, suggests that the new world of political activism via the internet is posing challenges to corporate and political power. To understand Castells’ position on this, we need to understand his theory of the network society.
In this work, Castells (2009) contends that power, and capitalism generally, is no longer organised in the way that Karl Marx and others recognised in their day. Rather, power is found in the ownership and flow of information along the networks of the digital age. Unlike Marx’s analysis, which focuses on the old notions of class (bourgeoisie vs proletariat) and places emphasis on ownership of private property, Castells (2009) contends that the networked society and the new global economy rely on inclusion and exclusion. He argues that within capitalism now there are those who have access to networks of power (via information) and those who do not. This is especially true in relation to the stock market, which is now mostly digitised, with access to information on prices and potential growth areas available only to a small class of people (namely stockbrokers, equity managers, and stockholders themselves). Most of the global population is not privy to this information; however, crashes in these markets can have dire consequences for the entire world’s population, as the Global Financial Crisis of 2007-2008 demonstrated. Perhaps the easiest way to understand this, though, is in relation to the design of new technology.
Imagine that a corporation, such as Apple or Google, decides to design a new smartphone and employs various designers and engineers to develop it in their offices in California. The information on the design is held by that corporation and becomes its property. However, to actually produce the product, they need someone to build the devices. The designing corporation sends its information to another company employed as a subcontractor to build the new smartphones. This company, most likely located in China, has limited informational power and can be cut out of the deal if it is too expensive or its standard of construction is poor. In this relationship, for Castells (2009), the designing corporation (such as Apple) holds significant power.
Thus ‘inclusion’ and ‘exclusion’ in the network society form an important power dynamic that deserves consideration. You can take this further by examining the contracted company, which in turn hires employees to build these devices (mostly for us) and pays them accordingly. In terms of Marx’s analysis, these employees, who are mostly younger working- and middle-class people who need work, are the proletariat, with nothing to own but the labour they sell. They have no control at all over their labour, and no stake in the information sharing. They are also what Castells (2009) describes as expendable or disposable, as they are a small node in a complicated network. Again, you can take this one step further by analysing where the raw materials for making the smartphone come from. In devices such as these, some minerals are critical, such as cobalt, which is used in batteries. Cobalt is mined in some of the most underdeveloped parts of the world, including the Democratic Republic of the Congo. Serious and significant investigative reports have shown evidence of abuse, slavery, child labour and death in these mines. This is the human cost of technological development. Again, for Castells (2015), these people are expendable in the new networked society of capitalism.
🧠 Learn more
Blood Cobalt: Investigation by ABC News
Watch the following report from ABC News on the conditions and issues associated with the Congo’s Cobalt mining operations.
How might conflict theorists like Marx view this situation? Do you think people are aware of what is happening in these places? If we were more aware, do you think we might change our behaviour?
You can start to see how power in the information/digital network works across all sorts of areas, from politics and economics to academia and the media. Those with access to information, and control over it, hold power that others do not. However, Castells (2015) argues that social movements in the current digital age create opportunities to disrupt these information networks. Important for Castells (2015) is the manner in which social movements are now organised. While in the past these were mostly organised in person and required an investment of time, including the physical presence of the protestor, social movements are now far broader, incorporating different platforms and sites that can disrupt information flows.
Importantly, for Castells (2015), social movements are now often structured in a flat form, not in a bureaucratic hierarchy where the opinions of the movement are formulated from the top down (e.g. a president and board declaring values and ideals). Rather, social media has allowed for leaderless movements that are bound to a general ideal and seek to interrupt the flow of information in the networked society. A classic case for Castells (2015) is the Occupy Wall Street movement, which organised under a banner of taking information on the banking sector and government regulation and producing counter-narratives designed to draw people into protest against corporate/government cooperation. For instance, the slogan “We are the 99%”, which referred to the disparity of income captured in the statistic that 1% of the world’s population owned over half its wealth, sought to draw attention to corporate interference with politics by exposing new information to civil society. Social media was utilised as a place to interfere with the ‘status quo’ of power dynamics within that network and to feed an emotive response to corruption on Wall Street.
A prominent Australian example of this is found in the #destroythejoint action taken by feminist protestors following comments made by the prominent radio commentator Alan Jones (Lupton, 2015). After Jones made misogynistic comments describing the then Prime Minister Julia Gillard as someone who was ‘destroying the joint’, feminists began to use a hashtag of the same phrase. From Castells’ (2015) point of view, the goal of this was to interrupt the media’s power to control the narrative in public. Consequently, and after pressure from lobby groups as well as commercial interests, Jones rescinded his comment and made a public apology to the Prime Minister (Lupton, 2015, p. 149).
Other movements have started online with the same goal: to contest the narrative controlled by those in power. For instance, the Black Lives Matter movement, the #MakeAmazonPay protest, the September 2020 climate strikes, and International Women’s Day #IWD events. Furthermore, others have shown how social media played a pivotal role in organising protest movements in the Arab Spring uprisings and other important political moments (Brown et al., 2017; Wolfsfeld et al., 2013).
One of the downsides of organising social and political movements in the online space is that those with significant power can also impose harder surveillance on would-be activists. Uldam (2018), for instance, investigated the role of social media in enabling activists to reach wider audiences with their criticisms of a large multinational corporation. However, “social media also makes activists more vulnerable”, as powerful groups (such as companies) can use their influence and legal capacities to contain the message activists want to send out. In short, using the power of big data and other techniques, companies are able to control the narrative and ensure that activist messaging is withdrawn (cf. Castells, 2015; Yilmaz, 2017). This is similar to the case of Indigenous activists in Australia, who, as Petray (2011, p. 929) argues, are exposed to the surveillance of online platforms like Facebook, which in turn makes the activist’s profile a target for “research for advertisers”. In other words, activism online in social media actively aids the power of some of the most powerful nodes in the digital network, such as Facebook. It is clearly also a problem for those in countries where surveillance is significant. Watch the following video [5:55] on the Chinese Communist Party’s ‘Great Firewall’, which blocks much information and also monitors social media use for discussion of things the party does not want discussed.
Robots and the Internet of Things
With the advancement of technology and the widespread uptake of the internet, there has been a rise in a new form of internet called the ‘Internet of Things’ (IoT). At a broad level, the IoT describes a network of different devices, objects, software, and technologies that are designed to take information/data and share it with other objects. Some devices, for instance, have a sensor that tracks certain data which, when shared with another device through the internet, triggers an action in that technology. We engage with many of these already through our wearable devices, smartphones, and in-home smart technologies. For instance, you may own a smart-home device (e.g. an Amazon Echo) which, triggered by your smartphone, may switch on the lighting in your home when you return. Or you might wear a device to monitor your exercise which, when connected via Bluetooth to your smartphone, can track your run and provide data on average heart rate and distance covered (Lupton, 2015; 2016; 2017; 2020).
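A minimal sketch of the sense-share-trigger pattern just described, with invented device names, metrics and thresholds. Real IoT systems typically exchange readings over messaging protocols such as MQTT and run rules in cloud services, but the underlying logic is as simple as this:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    device: str
    metric: str
    value: float

def apply_rules(reading, actions):
    """Toy rule engine: map an incoming sensor reading to device actions."""
    if reading.metric == "phone_geofence" and reading.value == 1:
        actions.append("lights:on")            # owner's phone arrived home
    if reading.metric == "heart_rate" and reading.value > 180:
        actions.append("watch:alert_wearer")   # wearable flags a high reading
    return actions

actions = []
apply_rules(SensorReading("phone", "phone_geofence", 1), actions)
apply_rules(SensorReading("watch", "heart_rate", 186), actions)
print(actions)  # ['lights:on', 'watch:alert_wearer']
```

Note that every triggered action is also a data point: each geofence crossing and heart-rate spike can be logged and fed back into the big data flows discussed above.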
The promise of the IoT is wide-reaching: everything from smart homes that reduce power consumption by automatically reducing or switching off supply to unnecessary electricity use, to smart environmental systems that monitor the potential threat of natural disaster and trigger warnings or other mechanisms to save lives, through to smart cities that could lower the cost of operation by monitoring and automatically reducing waste, such as water (Farhan et al., 2018; Rose et al., 2015, p. 41). The promises are significant, including in the labour market, and may well amount to a new industrial revolution. Farhan et al. (2018, p. 2), for instance, argue that “IoT and digital technology will help ensure maximum efficiency, reduced manufacturing cost with increased quality”. This could also assist in agriculture, where “IoT can provide solutions and methods for precision crop monitoring and disease diagnosis” that could help solve world food shortages into the future (Farhan et al., 2018, p. 2). However, there are growing challenges to the Internet of Things that hinder its development. These include security concerns, such as the hacking of networks; resourcing issues, including the ability to store large amounts of data and the development of artificial intelligence to analyse it; privacy issues for civil society; and a growing issue: e-waste (Singh et al., 2014).
E-waste itself is now a significant issue facing the world’s population. For instance, Andeobu et al. (2021, p. 1) highlight that in 2019, “50 million tons (Mt) of e-waste was generated globally” and add that “of this total e-waste, 24.9 million tons were generated in the Asia Pacific region alone”. Recently, the World Economic Forum released a report arguing that proper recycling of e-waste could in some cases lead to economic growth. However, it is clear that e-waste continues to be a drastic and ever-growing issue, and the introduction of more devices/things into the system could exacerbate it further.
Investing in the Internet of Things has become a significant industry, with an estimated value of $182 billion USD in 2020 and a predicted rise to over $620 billion by 2030. The smart home market is also significant, worth around $86 billion USD in 2020, with growth expected to take it to over $300 billion by 2030. However, there are concerns about the growth of the IoT, especially around the development of artificial intelligence (AI), which is programmed into smart devices, automated machinery and, of course, robotics.
Sociologists, for instance, have been critical of both the further development of automation in our everyday lives and the potential implications of robots taking labour market roles. Frey and Osborne (2015), for instance, predict that in the next few decades a significant decline will occur in jobs that are already vulnerable to machine automation. However, as sociologist Judy Wajcman (2017) counters, the methodology used to make this prediction is now widely criticised. It does, however, represent a growing worry about the use of AI and robotics to take jobs away from the working class (especially), and also in areas like law, medicine, and even academia. de Vries et al. (2020), as an example, examine the changing nature of jobs from 2005 to 2015 and calculate the impact of robotics on industry across 37 countries. They find that “increased use of robots is associated with positive changes in the employment share of non-routine analytic jobs and negative changes in the share of routine manual jobs” (de Vries et al., 2020, p. 11). In other words, employment that requires analytical work (such as problem-solving) was not as affected by the adoption of robotics during this period as manual labour (such as factory work) was. Importantly, they conclude that industrial robots did not replace jobs outright, but they did impact task demand and thus had disruptive effects on employment (de Vries et al., 2020, p. 11).
Why? This, if we remember, is fundamentally the goal of the IoT. Efficiency in operation, such as on a manufacturing floor, means fewer people are required and fewer tasks need to be completed by human hands. Nevertheless, the counterargument from people like Wajcman (2017, pp. 124-125) suggests that although these jobs may well be automated and run by robots, “other novel forms will be created in unexpected ways as capital seeks new ways to accumulate”. In other words, we have seen these sorts of disruptions throughout history, in the industrial revolution and in the wave of automation that swept through manufacturing. Over time, we have created new types of jobs (such as servicing robots) that fill the gaps left behind. Watch this video below [11:00] on “Flippy”, who runs the grill at White Castle. Will robots take our fast food jobs in the future?
Wajcman (2017, pp. 121-125) argues that when our focus is trained on these sorts of issues, such as robots taking over our employment, we neglect the power relations that already exist. For her, the corporations with the capacity to develop AI and other key technologies are “small” in number but hold significant power. She contends that these corporations already create inequality through their structures, as they employ large numbers of casualised, “insecure”, “low-paid” workers whose labour “powers the wheels of the likes of Google, Amazon and Twitter” (Wajcman, 2017, p. 124). In addition, these companies subcontract significant labour to short-term workers in the ‘gig economy’ who are paid small fees for coding work and information processing. When we consider Castells’ (2009) argument about those who hold the least power in the network society, Wajcman’s (2017) contention is quite compelling. When we obsess over the idea of robots taking over, we neglect existing inequalities within the tech industry that are rarely addressed.
Nevertheless, there are other concerns when it comes to AI, the IoT and robotics that need to be considered. The ethics of devices, and the morals programmed into them, is one of those areas. A growing list of worries has emerged with the introduction of AI into our everyday lives, and especially into military/policing use (Asaro, 2000; 2006; 2013). For Peter Asaro (2006), the question of ethics and morality in the use of AI in the IoT and robotics is a deeply important one.
This all relates to some of the classic dilemmas thrown up by Isaac Asimov’s collection of short stories, I, Robot. Within these works, Asimov constructed the “Three Laws of Robotics” in 1942, which are as follows:
First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law – A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov, 1950/2008)
Boden et al. (2017), following a workshop with a range of scholars from across different disciplines, argue that the three laws were in need not only of formalisation but also of extension and reconsideration for current times. They constructed a new set of rules of robotics for the general public, as follows:
Rule 1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
Rule 2. Robots should be designed and operated to comply with existing laws, including privacy.
Rule 3. Robots are products: as with other products, they should be designed to be safe and secure.
Rule 4. Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.
Rule 5. It should be possible to find out who is responsible for any robot. (Boden et al., 2017, pp. 123-129)
These concerns about robotics/AI embedded in an IoT are centred primarily on the question of ethics. For Asaro (2006, p. 10), the real issue arises where devices, machines and robots are tasked with areas of life that, first, conflict with the prohibition on killing other humans or, second, require ethical reasoning. He writes the following:
First, we might think about how humans might act ethically through, or with, robots. In this case, it is humans who are the ethical agents. Further, we might think practically about how to design robots to act ethically, or theoretically about whether robots could be truly ethical agents. Here robots are the ethical subjects in question. Finally, there are several ways to construe the ethical relationships between humans and robots: Is it ethical to create artificial moral agents? Is it unethical not to provide sophisticated robots with ethical reasoning capabilities? Is it ethical to create robotic soldiers, or police officers, or nurses? How should robots treat people, and how should people treat robots? Should robots have rights? (Asaro, 2006, p. 10)
Let’s take his first concern and tease out the question. The idea that the military could create robots that not only survey situations but also act to kill a human is in clear violation of some of the laws of robotics set out by Asimov (1950/2008) and Boden et al. (2017) above. Yet drone warfare is a significant issue in our contemporary age. Although drones are not yet fully autonomous machines, they are widely used in combat situations for surveillance and, at times, strikes. The United States, for instance, has utilised drone strikes against terrorist targets. Government officials often argue that the use of drones in this way alleviates the human cost, as operators are not placed in life-threatening situations (Espinoza, 2018). Nevertheless, evidence continues to accumulate on the toll on innocent lives, and on mistakes made by drone operators in killing innocent civilians (Espinoza, 2018). This worries scholars like Asaro (2006) as technological development turns towards fully autonomous military drones.
Asaro’s (2006) second point above is worth considering further. What happens when an AI or a robot must judge and use ethical reasoning to decide what action to take? James and Whelan (2022, p. 42) argue that amid the excitement around the development of artificial intelligence, which is pivotal to things like the Internet of Things, we must be careful not to underestimate this concern. The development of AI with ethical frameworks embedded within robots and other devices has become a race amongst corporations with clear economic agendas. However,
at both global and local levels, ethics discourses pre-empt questions regarding the rationale of AI development, positioning investment and implementation as inevitable, and, provided ethical frameworks are adopted, laudable […] Bracketing questions as to whose ethics are installed and by what means, and indeed whether ethical AI is meaningful given the logics within which it is developed. (James & Whelan, 2022, p. 42)
Asaro (2006, p. 11) makes a similar claim, arguing that the answer to his conundrum is the construction of AI with moral reasoning skills. However, there are two concerns here: first, “the practical issues involved” in what “kinds of decisions the robot will be expected to make”, and second, “whose ethical system is being used, for what purpose, and in whose interests?” (Asaro, 2006, p. 11).
When it comes to these matters, several scholars bring forward the conundrum of the ‘trolley problem’ to highlight how even everyday tasks (not associated with warfare) can produce situations that require significant decisions resting on ethics. Asaro (2006, p. 13) contends that in these circumstances, “different perspectives on a situation would endorse making different decisions”. In other words, as individuals living in wider society, we all hold different views, philosophies, and ethics. If we were to program AI to act in certain ways when facing an ethical decision, whose ethics would be privileged?
🛠️ Sociological Tool Kit
Exercise: An adaptation on the trolley car problem
In this exercise you have to suspend reality for a moment, remembering that in moments like the one below (however unlikely) you would probably act on instinct. This is fundamentally an exercise in ethical reasoning, not a real-life choice.
Consider for a moment that you are travelling down a highway at the speed limit (100 km/h) when two small children run onto the road chasing a ball. To your left is a group of cyclists out for their morning ride, and to your right, dividing the highway, is a stand of solid trees. In that instant, time freezes and someone approaches you with the following choices.
- First – you can do nothing which will result in the children being hit by you, likely seriously injuring or killing them
- Second – you can swerve to the left into the group of cyclists, likely seriously injuring or killing them
- Third – you can swerve to the right into the trees, likely seriously injuring or killing yourself
If you had these choices (remembering we’re suspending reality, including the potential of airbags, etc.), what would you choose?
This is perhaps not an entirely difficult question, depending on your ethics. However, what if we replaced the two small children with two small puppies, or with an elderly man? Would that change your view?
A study conducted by scientists and ethicists, published in the journal Nature, reveals that people respond to these questions quite differently. If you have access through your library, you can review the findings in the Nature article.
The issue is that we all have different values, ethics and backgrounds. As such, when we build AI-controlled robots that could be confronted with a situation where action would save one person’s or animal’s life but potentially harm another’s, whose values and ethics get to be programmed?
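To see why this is not an abstract worry, consider a minimal sketch of how such a choice might be encoded in software. This is a deliberately simplified illustration in Python (all names and weightings here are hypothetical, not drawn from any real system): any program of this kind must rank outcomes numerically, and that ranking is a human value judgement made at design time.

```python
# A minimal, hypothetical sketch: an autonomous vehicle's collision policy
# reduced to a ranking of outcomes. None of these names or weights come
# from a real system; they are one designer's value judgement.

# The three options from the exercise above, with their likely outcomes.
options = {
    "continue": {"harmed": "children", "count": 2},
    "swerve_left": {"harmed": "cyclists", "count": 5},
    "swerve_right": {"harmed": "occupant", "count": 1},
}

# One possible ethical framework, expressed as a cost per category of
# person harmed. A different designer, corporation, or culture could
# rank these completely differently -- that is the 'whose ethics?' problem.
harm_weights = {"children": 10.0, "cyclists": 8.0, "occupant": 5.0}

def choose(options, weights):
    """Return the option with the lowest total weighted harm."""
    def cost(option):
        return weights[option["harmed"]] * option["count"]
    return min(options, key=lambda name: cost(options[name]))

print(choose(options, harm_weights))  # -> swerve_right (sacrifice the driver)

# Changing the weights silently changes the machine's 'decision':
print(choose(options, {"children": 10.0, "cyclists": 2.0, "occupant": 12.0}))
# -> swerve_left (sacrifice the cyclists)
```

Whoever sets those weights has, in effect, programmed their ethics into the machine – which is precisely Asaro’s (2006, p. 11) question of whose ethical system is being used, and in whose interests.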
For Asaro (2006), if we are faced with having to program ethical decision-making into AI in the future, this may prompt us to ask whether such systems ought to be built at all. Any decision that is made, however, will need to be legally bound (as stated in the principles above), and responsibility for that programming needs to be held by a human or corporation somewhere.
The ethical limitations framework is a good one to think through given how rapidly the IoT, AI and robotics are growing. Issues appear daily that should cause us to reflect on where we want technology to be in the centuries to come. However, there are two conflicting viewpoints to consider here. One is the ethics of progressivism: the idea that technological advances in the past have led to significant gains for later generations. Antibiotics, electricity, the washing machine and the automobile have all improved our lives in the present day. The second is the ethics of the precautionary principle, which highlights the unintended consequences of technological advances that have lasting impacts on future generations. For instance, with electricity generation and automobile and air travel came the unintended consequences of climate change and pollution. Nuclear power created a significant source of energy, but also resulted in disasters like Chernobyl.
The question for AI, robotics and the IoT is whether the potential gains for future generations will come with serious consequences that we cannot predict at this stage. Whether that risk is worth taking is a matter for discussion.
In Summary
This chapter has covered the following information:
- Social media is now ubiquitous, covering much of our social lives. Sociologists and other social scientists attempt to understand how this changes our social relations.
- Sociologists argue that social media can at times resemble our social interactions offline. We conduct ourselves similarly, especially in conversation.
- Social media, and the internet in general, generates a significant amount of data called ‘big data’, which is gathered by tech companies and sold to marketers.
- We are now no longer simply consumers of information, but also producers.
- Several social theorists and ethicists argue that the collection of big data via apps like Facebook is morally complex and does not provide consumers with genuine consent.
- Democracies are also threatened by the collection of big data, as more information is known about voters than ever before, allowing for targeted campaigning.
- Big data and the Internet of Things have moved rapidly, creating a need for ethical discussions around automation and the future of robotics.
References
Agger, B. (2015). Oversharing: Presentations of self in the internet age. Routledge.
Andeobu, L., Wibowo, S., & Grandhi, S. (2021). A systematic review of e-waste generation and environmental management of Asia Pacific countries. International Journal of Environmental Research and Public Health, 18(17), 1-18. https://doi.org/10.3390/ijerph18179051
Asaro, P. M. (2000). Transforming society by transforming technology: The science and politics of participatory design. Accounting, Management and Information Technologies, 10(4), 257-290. https://doi.org/10.1016/S0959-8022(00)00004-7
Asaro, P. M. (2006). What should we want from a robot ethic? The International Review of Information Ethics, 6, 9-16. https://doi.org/10.29173/irie134
Asaro, P. M. (2013). The labor of surveillance and bureaucratized killing: New subjectivities of military drone operators. Social Semiotics, 23(2), 196-224. https://doi.org/10.1080/10350330.2013.777591
Asimov, I. (2008). I, robot. Macmillan. (Original work published 1950)
Beer, D. (2018). Envisioning the power of data analytics. Information, Communication & Society, 21(3), 465-479. https://doi.org/10.1080/1369118X.2017.1289232
Beer, D., & Burrows, R. (2007). Sociology and, of and in Web 2.0: Some initial considerations. Sociological Research Online, 12(5), 67-79. https://doi.org/10.5153/sro.1560
Bullingham, L., & Vasconcelos, A. C. (2013). ‘The presentation of self in the online world’: Goffman and the study of online identities. Journal of Information Science, 39(1), 101-112. https://doi.org/10.1177/0165551512470051
Campbell, J. E., & Carlson, M. (2002). Panopticon.com: Online surveillance and the commodification of privacy. Journal of Broadcasting & Electronic Media, 46(4), 586-606. https://doi.org/10.1207/s15506878jobem4604_6
Castells, M. (2009). The rise of the network society: The information age – economy, society and culture. John Wiley & Sons.
Castells, M. (2015). Networks of outrage and hope: Social movements in the Internet age (2nd ed.). Wiley.
de Vries, G. J., Gentile, E., Miroudot, S., & Wacker, K. M. (2020). The rise of robots and the fall of routine jobs. Labour Economics, 66, 1-18. https://doi.org/10.1016/j.labeco.2020.101885
Espinoza, M. (2018). State terrorism: Orientalism and the drone programme. Critical Studies on Terrorism, 11(2), 376-393. https://doi.org/10.1080/17539153.2018.1456725
Farhan, L., Kharel, R., Kaiwartya, O., Quiroz-Castellanos, M., Alissa, A., & Abdulsalam, M. (2018, July 18-20). A concise review on Internet of Things (IoT): Problems, challenges and opportunities [Paper presentation]. 2018 11th International Symposium on Communication Systems, Networks & Digital Signal Processing (CSNDSP), Budapest, Hungary. https://doi.org/10.1109/CSNDSP.2018.8471762
Favaretto, M., De Clercq, E., & Elger, B. S. (2019). Big data and discrimination: Perils, promises and solutions: A systematic review. Journal of Big Data, 6(1), 1-27. https://doi.org/10.1186/s40537-019-0177-4
Frey, C., & Osborne, M. (2015). Technology at work: The future of innovation and employment. Oxford Martin School. https://tinyurl.com/4vxa6xx9
Fuchs, C. (2011). New media, web 2.0 and surveillance. Sociology Compass, 5(2), 134-147. https://doi.org/10.1111/j.1751-9020.2010.00354.x
Goffman, E. (1959). The presentation of self in everyday life. Doubleday.
Helm, D. T. (1982). Talk’s form: Comments on Goffman’s Forms of Talk. Human Studies, 5(2), 147-157. http://www.jstor.org/stable/20008837
Hogan, B. (2010). The presentation of self in the age of social media: Distinguishing performances and exhibitions online. Bulletin of Science, Technology & Society, 30(6), 377-386. https://doi.org/10.1177/0270467610385893
James, A., & Whelan, A. (2022). ‘Ethical’ artificial intelligence in the welfare state: Discourse and discrepancy in Australian social services. Critical Social Policy, 42(1), 22-42. https://doi.org/10.1177/0261018320985463
Lupton, D. (2013, November 25-28). Digital sociology: From the digital to the sociological [Paper presentation]. The Australian Sociological Association: Reflections, Intersections & Aspirations: 50 Years of Australian Sociology, Clayton, Australia. https://ses.library.usyd.edu.au/handle/2123/9729?show=full
Lupton, D. (2015). Digital sociology. Routledge.
Lupton, D. (2016). The quantified self: A sociology of self-tracking. Polity Press.
Lupton, D. (2017). Wearable devices: Sociotechnical imaginaries and agential capacities. Social Science Research Network. https://ssrn.com/abstract=3084419
Lupton, D. (2020). Data selves: More-than-human perspectives. Polity Press.
Murthy, D. (2012). Towards a sociological understanding of social media: Theorizing Twitter. Sociology, 46(6), 1059-1073. https://doi.org/10.1177/0038038511422553
Mützel, S. (2015). Facing big data: Making sociology relevant. Big Data & Society, 2(2), 1-4. https://doi.org/10.1177/2053951715599179
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342
Possamai-Inesedy, A., & Nixon, A. (2017). A place to stand: Digital sociology and the Archimedean effect. Journal of Sociology, 53(4), 865-884. https://doi.org/10.1177/1440783317744104
Quinn, R. (2019). Fighting to protect—and define—academic freedom. Academe, 105(4), 10-17. https://www.jstor.org/stable/26945147
Rose, K., Eldridge, S., & Chapin, L. (2015). The Internet of Things: An overview. The Internet Society. https://www.internetsociety.org/resources/doc/2015/iot-overview/
Shore, C., & Wright, S. (2003). Coercive accountability: The rise of audit culture in higher education. In M. Strathern (Ed.). Audit cultures: Anthropological studies in accountability, ethics and the academy (pp. 69-101). Routledge.
Singh, D., Tripathi, G., & Jara, A. J. (2014, March 6-8). A survey of Internet-of-Things: Future vision, architecture, challenges and services. IEEE World Forum on Internet of Things (WF-IoT) 2014 (pp. 287-292). IEEE. https://doi.org/10.1109/WF-IoT.2014.6803174
Uldam, J. (2018). Social media visibility: Challenges to activism. Media, Culture & Society, 40(1), 41-58. https://doi.org/10.1177/0163443717704997
Wajcman, J. (2017). Automation: Is it really different this time? The British Journal of Sociology, 68(1), 119-127. https://doi.org/10.1111/1468-4446.12239
Wolfsfeld, G., Segev, E., & Sheafer, T. (2013). Social media and the Arab Spring: Politics comes first. The International Journal of Press/Politics, 18(2), 115-137. https://doi.org/10.1177/1940161212471716
Yilmaz, S. R. (2017). The role of social media activism in new social movements: Opportunities and limitations. International Journal of Social Inquiry, 10(1), 141-164. https://dergipark.org.tr/en/pub/ijsi/issue/30400/328298
Zajko, M. (2022). Artificial intelligence, algorithms, and social inequality: Sociological contributions to contemporary debates. Sociology Compass, 16(3), 1-16. https://doi.org/10.1111/soc4.12962
Zwitter, A. (2014). Big data ethics. Big Data & Society, 1(2), 1-6. https://doi.org/10.1177/2053951714559253
Refers to the organisation of society as theorised by Emile Durkheim. Durkheim argued that in simpler traditional societies, the world is organised around a limited division of labour and roles. He called this mechanical solidarity. The modern world, in contrast, is characterised by a complex division of labour, resulting in multiple roles and positions. Durkheim called this organic solidarity.
Refers to Marx's theory that in modern labour, the worker is increasingly cut off from aspects of the production of products. This includes having no control over the design of the product or its sale post-production, as well as being a worker in a factory with only one task, creating a mundane work environment.
Refers to Max Weber's theory that the social and cultural worlds we live in are becoming increasingly rationalised – in other words, technically or scientifically explained. Weber also argued that this would continue until very few things were left, such as religion, that could be explained without science.
Refers to a burgeoning field of research in sociology that seeks to understand the impacts and embeddedness of digital life into our everyday lives.
Refers to the increasing volume of data, variety of data, and velocity with which data is accumulated in the online space. Instead of the small datasets held in the past (as in a university study), massive datasets are now recorded and stored, fed by our interactions in the online environment.
Refers to data that describes other data. Much of this simply records the times, durations, and parties of different data engagements. For instance, metadata might record when I made a phone call, how long the call went for, and what number was called, but does not include the actual conversation.
Refers to a prison architectural design by utilitarian philosopher Jeremy Bentham. The building was designed so that prisoners would never know whether or not they were being surveilled. It consisted of a circular building with prisoners' cells on the outside and a guard tower in the middle that prisoners could not see into. Foucault, who used the panopticon as a metaphor for governance, argued that we now live in a similar situation in how we are governed by liberal democracies.
A bounded geographical territory that is ruled over by a government in the name of a community or nation. For example, Australia is governed by the Australian Government that represents the will of the Australian community.