Reply to Lords Select Committee on Democracy and Digital Technologies

Dr. Rebecca Rumbul, Alex Parsons

About mySociety

mySociety is a not-for-profit social enterprise based in the UK and working internationally. We provide technology, research and data that give people the power to get things changed and help them to be active citizens. We support partners who use our technology and data in over 40 countries around the world. As one of the first civic technology organisations in the world, we are committed to building the civic technology community and to undertaking rigorous research that tests our actions, assumptions and impacts.

Summary

As one of the world's first civic technology organisations, mySociety has a broad understanding of the themes raised in this consultation. The submission is longer than requested in order to explore the questions fully and to distinguish between the range of digital tools being used for democratic engagement. This submission covers the majority of questions raised in the consultation document, and provides balanced opinions based upon research and evidence.

The key points raised include:

  1. The need to make a clear distinction between for-profit and social media platforms, and non-profit purpose-built democracy platforms. Different platforms have different virtues, and digital tools for democracy cannot be judged equally.
  2. The inherent difficulty in engaging in democratic debate on digital platforms not designed for such interactions. The levels of toxicity on certain platforms demonstrate definite negative impacts upon diversity of debate and quality of engagement.
  3. The improbability of achieving meaningful regulation of commercial digital platforms (however desirable), given the speed with which platforms and algorithms evolve. Parliament has neither the expertise nor the agility to properly regulate digital platforms over the long term.
  4. The enduring digital divide and embedded biases in digital engagement. Large sections of the population remain offline, and the profile of those actively engaging in democratic debate online does not adequately represent national demographic breakdown.
  5. The feedback loop between citizen and institution needs to be closed in order to rebuild trust. Digital platforms that foster high quality interactions and responses demonstrate significantly higher positive impact than interactions on social media.

General

1. How has digital technology changed the way that democracy works in the UK and has this been a net positive or negative effect?

Digital technologies have significantly altered the way in which individuals, organisations and institutions experience and make sense of democracy in the UK. The nature of communication between citizens and their institutions, officials and politicians has evolved significantly, to a state that would have seemed impossible (and not necessarily wholly desirable) only 30 years ago. Whereas correspondence via letter or in person through a constituency surgery or other formal channel would have been the only way for the general citizenry to make direct contact with their officials in the past, multiple digital channels now exist to communicate information back and forth directly and at scale. The apparatus of political campaigns has gone through waves of digital transformation, hugely affecting how people engage with parties and candidates at election time.

In understanding how ‘digital technology' has changed democracy, it is important to recognise the wide array of technologies being referred to in practice. Digital technologies that affect democracy exist on a wide and varied spectrum, and a wholesale judgement on whether they are beneficial or not is overly simplistic and unhelpful. Digital technologies relevant to democracy include (but are not limited to):

These digital platforms overlap and interact, creating a vast and fluid network of information publication and dissemination, through which individual consumers can navigate unique paths. While something may be widely shared on social media, it may have originated on a completely different platform (often screenshots from one social network recirculate on others).

Digital communication allows anyone, anywhere, to create and mass-distribute political content, in a way that was traditionally only accessible to professional journalists and broadcasters. No formal qualifications, quality control processes or consideration of ethical issues are now needed for individuals or groups to author and disseminate information. That said, it is important not to overstate the ethical or quality standards of publications prior to the internet — and one boon of the internet, from the 2000s blogosphere to the current Twittersphere, is that it provides a space to articulate critiques of previously hard-to-challenge institutions. All that can really be said is that the cost of publishing has dropped dramatically, to the benefit of honest and dishonest motivations alike.

Democratic services have also been transformed. At a basic and important level, the process of registering to vote has been greatly simplified through online technology. This has made it easier for groups that move more often (eg renters) to claim their right to participate in the process. In the 2017 election this arguably contributed to the increase in renter turnout, which had a practical impact on the result (Bloomberg, 2018). This demonstrates a fundamental issue when evaluating whether digital tools are good for "democracy": unless a specific change increases everyone's capacity equally, it is likely to have an effect on outcomes. This means that arguments about democracy can often be proxy arguments about changes in outcomes (or "politics") and have to be examined carefully.

Understanding these emerging issues is extremely complex, and landmark data manipulation scandals, such as Cambridge Analytica's supposed influence on Brexit or allegations of Russian digital influence in the 2016 US Presidential election, have demonstrated not only the new ways in which political and official information can be manipulated and disseminated, but also how the digital effect on democracy cannot be isolated as a single variable, and is tightly bound up in other contested issues. These stories illuminate the importance of subtlety, context and social ties in introducing and disseminating information relating to politics and government, and show that communication of information is necessarily different across different nations, identities, cultures, demographics and environments.

As explored in our research on digital tools being used for democratic participation, these tools have enormous positive potential: they "can allow simple transactional actions to be performed more cheaply, can make it easy to engage with individuals simultaneously based in different geographic regions, shift what was previously one-to-one communication to one-to-many, and such a shift opens up opportunities for novel forms of participation to be explored". (Public Square, 2019)

However, positives are not automatic and there are significant potential pitfalls. Generally, when tools make it easier for those excluded from the democratic process to engage, they also make it easier for those already well represented. Expanding the reach of a consultation through digital means may not shift the demographics if it also means that the kinds of groups already likely to reply simply reply more. While the early hope for the internet was as a democratic medium that allowed more people a voice, in many cases debates can still be dominated by those who have the time and resources to engage. While social media platforms make it far easier for, say, women and ethnic minorities to take part in public discourse (compared, for instance, to alternative routes of being elected or becoming employed by media organisations), those people can also then receive abuse that other members of the platform do not. This means that technologically neutral platforms are not in reality neutral in terms of the kind of person they help to enter public discourse. Digital spaces do not offer an escape from the societies that they exist in, and in some cases have recreated their worst aspects in new forms.

Examining the effects of civic technology reveals how evaluations of digital tools are bound up in ongoing debates about democracy. On a purely factual level, civic technologies facilitate better information exchange between citizens and the state. mySociety research found that over 90% of TheyWorkForYou.com users surveyed believed that being able to see parliamentary information in such a consumable format enables them, at least in part, to hold their representatives to account (Rumbul, 2016). A similar percentage believed that representatives would behave in a different manner if this information was not publicly available.

However, MPs have occasionally claimed that TheyWorkForYou has negative effects in terms of representing the work that they do, and this relates to a fundamental divide between how MPs see their role and how the public does. A 2019 YouGov survey found that 80 out of a sample of 100 MPs thought they should act according to their own judgements — while 63% of the public thought they should act according to the wishes of their constituents, even when they disagreed (YouGov, 2019). This is not to say that the 63% is right and MPs are wrong — but that there are two very different conceptions of an MP's role present in the same system. Is a digital tool that helps hold representatives to account (and possibly closer to the views of their constituents) a net positive or negative for democracy? It depends who you ask.

Similarly, civic technology is difficult to judge in isolation as regards its effectiveness, because it cannot wholly close the feedback loop. If it makes it easier for people to contact their representative, but the interaction itself is not positive, this can be experienced as a negative interaction with civic technology. Equally, transparency does not automatically create trust — indeed, transparency may reveal reasons to distrust.

The speed at which digital tools have grown has outpaced the abilities of lawmakers to regulate the space, the ability of behavioural experts to understand how this form of discourse affects citizen attitudes and actions, and the capability of the tech giants themselves to identify and prevent pernicious uses of their own platforms. Society is only now in the early stages of understanding how the communication of official (and unofficial) political and parliamentary information is shaping societies and institutions, and what the implications of this shift may be. For a historical analogy, the invention of the printing press led to political, scientific and religious revolutions, causing wars that resulted in widespread death and human misery. An evaluation now might decide it had "net positives", but an evaluation closer to the time might have reached different conclusions.

Any evaluation of the role of digital tools in democracy needs to go beyond whether they are broadly good or broadly bad and address exactly how they fit into precise understandings of democracy as a whole. Ultimately the debate about digital tools is not about how they can best fit into the democracy that we have, but should inform the arguments about what kind of democracy we want in the future — and which tools help us down which potential paths.

2. How have the design of algorithms used by social media platforms shaped democratic debate? To what extent should there be greater accountability for the design of these algorithms?

The hostility of online discourse is readily apparent, but explanations for this vary. While the format itself is often described as making users more hostile to each other, we must also consider the effect of the voices that people hear most loudly. These are not representative of all the people online. Bor and Petersen (2019) argue that "online political hostility is committed by individuals who are predisposed to be hostile in all contexts. The perception that online discussions are more hostile seemingly emerges because other people are more likely to witness the actions of these individuals in the large, public network structure of online platforms compared to more private offline settings".

That a small number of participants creates the majority of communication is a feature of even kindergarten parents' WhatsApp groups (Gov-Ari, 2017), but what makes social media different is that there is often an algorithm selecting which contributions people see. This can result in emotive comments being promoted because they fit with the goals of the service, rather than the goal of producing good quality debate.
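
To make this mechanism concrete, the sketch below is a deliberately simplified, hypothetical illustration (not any platform's actual code) of how a feed that ranks contributions purely by predicted engagement will surface emotive comments over measured ones, even when measured comments are the majority.

```python
# Hypothetical illustration only: ranking a comment thread purely by
# predicted engagement. No real platform's algorithm is reproduced here;
# the point is that optimising for engagement alone tends to promote
# emotive content over measured contributions.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    predicted_reactions: int   # the platform's estimate of likes/replies/shares
    is_emotive: bool           # crude stand-in for tone

comments = [
    Comment("Measured summary of the policy trade-offs", 40, False),
    Comment("Outraged hot take", 900, True),
    Comment("Question asking for a source", 25, False),
    Comment("Personal attack on another user", 600, True),
]

# An engagement-optimising feed sorts by predicted reactions, highest first.
engagement_feed = sorted(comments, key=lambda c: c.predicted_reactions, reverse=True)

# Most users only read the top of the feed, so the emotive minority dominates
# what is actually seen, even though calmer comments exist further down.
for comment in engagement_feed[:2]:
    print(comment.text)
```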

Social media platforms were not designed to shape democratic debate. Each platform has a different focus. The purpose of Twitter was originally ‘micro-blogging', essentially a system through which to broadcast soundbite-type opinions or thoughts. Facebook was designed to link people known to each other in the real world, online, in order to share thoughts and pictures. While these platforms have mushroomed into key political battlegrounds, the fact remains that their business models are not focused on contributing to responsible democracy. Rather, their business models and algorithms are tailored towards monetising their platforms and the data they gather. As organisations motivated to maximise profit and minimise waste, their efforts in assuming greater responsibility for content on their platforms (which is not ‘profitable' work) remain limited.

Usage of social media for political information gathering is correlated with lower trust in government, in particular when compared with information gathering from online and offline traditional media sources (Ceron, 2015). Citizens are happy to use social media as their main source of political or policy news, but have lower levels of trust in government regardless of whether they assess the news/information itself as reliable. In addition, social media use is positively correlated with increases in anxiety, depression and other mental health issues (Primack et al, 2017). The relative anonymity provided by the internet, the ability of motivated groups to organise and overwhelm, and the opportunities for intimidation on social media make such platforms much more difficult and mentally challenging to traverse for citizens, the state and individual officials.

Just as front-page anger sells newspapers, an algorithm can reinvent this approach from first principles solely to optimise engagement, promoting posts and videos that will lead people to longer engagement with the site. YouTube's approach to this ends up creating a radicalisation process, where viewers are drawn into progressively more extreme subcultures, not because of any explicit goal to do so, but as a result of trying to maximise users' watch time (New York Times, 2018).

The lack of transparency of algorithms means that harmful effects such as those cited above have to be deduced through use. An algorithm can be simply described as a decision-making process — and while many might be complicated, in principle they could be made public for examination. However, the progression towards algorithms driven by deep-learning (or AI) that learn how to create a decision-making process to reach set goals means that the decision-making process is obscure even to the people running it — let alone the wider community. This creates ‘black box' processes, where it is unclear exactly what the connection is between inputs and outputs, and the decision-making process is impossible to describe transparently.

There is an additional set of problems associated with the fact that a small set of services is enormously influential. Changes to Facebook and Google's algorithms can have huge effects on the viability of entire industries. After the 2016 US election Facebook modified its algorithm in a way that reduced the prevalence of news and focused more on updates from friends and family — but this had the effect of dramatically reducing the amount of journalism people were exposed to through the service and disrupting the media ecosystem that had developed around it.

The assumption above is that the changes to public discourse triggered by social media have been a byproduct of their commercial impetus rather than a deliberate goal. However, it is not unthinkable that the large commercial bodies dominating public discourse might not only hold political preferences, but also try to achieve them through their product design. Svantesson and Van Caenegem (2017), writing in an Australian context, propose a specific offence of attempting electoral manipulation through algorithmic manipulation. Jennifer Grygiel (2019) argues that, similar to quiet periods in the financial industry, prohibitions on algorithm changes in the run-up to elections would act as a bar to sudden tweaks that might have a political impact.

The algorithms used by social media organisations are designed to instigate, maintain and increase use of the platform. Armies of user experience researchers and designers are employed to achieve this, and to seamlessly integrate profitable aspects of the platform, such as groups that may be of interest, adverts for items, or campaigns that align with the user's evidenced interests. The more time that a user can be retained on the platform, the more the platform can learn about them, and the greater the opportunities for micro-targeted marketing to that individual. Historically, social media platforms have treated political advertising much like any other product; however, Facebook has taken steps towards increasing the transparency of political advertising, with limited results.

Regulation presents a "red queen" problem, where innovations are constantly required just to stay in place. As soon as rules or regulations are implemented, motivated groups find a work-around. As such, the ability of democratic institutions to hold social media platforms to account for their algorithms is extremely limited. Such institutions do not have the access needed to examine these algorithms, nor do they possess the expertise to understand them well enough to regulate or legislate; and they move so slowly that, even if the previous two conditions were satisfied, the algorithms in question would have evolved by the time any analysis and legislation could be undertaken.

Education

3. What role should every stage of education play in helping to create a healthy, active, digitally literate democracy?

In order to engage digitally with democracy, citizens must have a working understanding of the digital world, and also of parliamentary, electoral and government structures. While there is a small component of current secondary education that covers citizenship, this is wholly insufficient in equipping individuals to engage meaningfully in democracy or to critically assess political information, and a lack of understanding of democratic structures amongst the general populace is one of the key drivers of public dissatisfaction with politics. Citizens who do not understand how the democratic and parliamentary system works are, understandably, frustrated by what they perceive as inertia, ignorance, indifference and incompetence in the political class.

Digital skills are also required to engage meaningfully with democracy. While the majority of children are now familiar with mobile and computing devices before they begin school, and recent research has shown that younger people are much less susceptible to ‘fake news' than older people, there remains a small percentage of the population, usually the most economically disadvantaged, that does not have good access to devices or the requisite digital skills to navigate the online world with ease and understanding. While there is sometimes an assumption that younger generations will naturally develop these skills, the Nominet Trust found that there are 300,000 alleged ‘digital natives' who do not in fact have basic digital skills, so active education in this area is still required (Good Things Foundation, 2019).

There are also significant parts of the country, primarily rural areas, that suffer persistent connectivity issues, and as such may also lag behind in the normalisation of digital skills. This relates to a double digital divide: there is less physical access to the internet, but also the demographics of rural areas (on average older, poorer and less well educated) are factors known to lead to different profiles of internet use. While Blank and Graham (2014) were expecting demographic factors in their model of internet use to weaken geographic effects, they found geographic effects became insignificant and that differing demographics were a better explanation of differences in internet use. This means these groups may not be well represented in discourse conducted solely online, to the potential detriment of their unique policy needs. While various factors intersect in rural areas, the problem of both access and education is likely to need unique policy solutions.

Both of these strands — democracy and digital competence — should of course be taught at every stage of education; however, provision for adult learners should not be overlooked. The fast pace of technological change means that a focus on early education will rapidly become out of date. Adults, particularly in professions for which the use of computers is not a requirement, may lack the digital skills to engage fully with democratic debate and activity online. Community-based provision for digital learning and democratic engagement should therefore be considered essential in developing and retaining both digital and civic skills accessible for all.

Digital democracy platforms such as TheyWorkForYou.com attempt to bridge some of these gaps in understanding, providing a range of democratic information in a user-friendly format, and advising how best to engage with political representatives. These platforms proactively work to manage expectations around democracy and encourage meaningful interactions. Such platforms are, however, relatively unknown and significantly underfunded.

Online campaigning

4. Would greater transparency in the online spending and campaigning of political groups improve the electoral process in the UK by ensuring accountability, and if so what should this transparency look like?

Transparency in the online spending and campaigning of political groups in the UK is a vital component of moving towards greater accountability in the democratic process; however, this is only one of several key components. Free data creates benefits, but to have a significant payoff it requires additional investment over time by third parties. ‘Transparency' can often be quite opaque (Fox, 2007) — for example, simply requiring the publication of long lists of donors and fragmented groups involved in financing online campaigns will not necessarily lead to greater accountability. However, it is a necessary prerequisite.

One specific concern has been "dark ads" — micro-targeted content on websites allowing different messages to be shared to different audiences without any public awareness of this. In response, Facebook and others have made their ad libraries transparent (Facebook, 2019), allowing in principle all adverts displayed at an election to be examined. However, dark ads require active investigation for critique, as opposed to national campaigns which can enter the public debate organically.

The Electoral Commission has also recommended that all digital election ads need to be clearly labelled with information about who paid for them (Electoral Commission, 2019). The ability of campaign financiers to conceal their affiliations and identities is also of concern, and again, it requires significant work to unpick those connections. This is work which often must be conducted after the fact, and therefore cannot truly correct any egregious activity that may have influenced electoral outcomes.

As such, true transparency in this area should require not only disclosure of payments, but should fit into broader anti-corruption data availability, such as data on linked business interests and beneficial ownership. Donation information should be produced in open data formats available to the public (the US FEC site is a good model), and this information should be published where possible while the campaign is ongoing (as happens quarterly in the US — though this in part reflects a longer election period). Stronger requirements for parties to know the identity of donors would help address concerns that funding limits can be circumvented through multiple donations below the £500 threshold at which stronger disclosure standards come into effect.
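
Purely as an illustration of what "open data formats" could mean in practice, the sketch below writes hypothetical donation disclosures to CSV. The field names are our own assumptions for the example, loosely inspired by the itemised records the US FEC publishes, and do not reflect any existing UK reporting standard.

```python
# Hypothetical sketch of a machine-readable donation disclosure.
# The column names are illustrative assumptions, not an existing schema.
import csv

FIELDS = [
    "donation_id", "recipient_party", "donor_name", "donor_company_number",
    "amount_gbp", "date_received", "date_reported", "donation_type",
]

donations = [
    {
        "donation_id": "2019-000123",
        "recipient_party": "Example Party",
        "donor_name": "A Donor Ltd",
        "donor_company_number": "01234567",  # would allow linking to beneficial-ownership data
        "amount_gbp": "7500.00",
        "date_received": "2019-11-14",
        "date_reported": "2019-11-21",       # reported while the campaign is ongoing
        "donation_type": "cash",
    },
]

with open("donations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(donations)
```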

Such measures would not automatically resolve actual or perceived issues in campaign finance, but would enable greater potential for examining and discovering wrongdoing sooner. This would provide a clearer picture of how elections are funded in the UK.

5. What effect does online targeted advertising have on the political process, and what effects could it have in the future? Should there be additional regulation of political advertising?

Online advertising allows for very narrow distribution of material. Micro-targeting allows campaigners to isolate individual political messages and display them to those that are most interested. This can be replicated multiple times within multiple different interest groups, and such fragmented campaigning reduces the ability of individual voters to compare policies and overall political messages of each advertiser.

The provenance of such advertising is also often difficult to identify, and even where it is identifiable, it can be unclear whether sources are legitimate or official. Much advertising is difficult to even distinguish as advertising, and may take the form of an interest group or page to be ‘joined' or ‘liked' seemingly run by ordinary citizens. Such tactics further erode trust in online political information, and enable misinformation to proliferate.

The fast pace of technological change is not matched by changes in regulation — which is now 20 years old. Any update to address specific instances of malpractice in recent referendums and elections runs the risk of quickly becoming out of date again. There is a risk that regulating to address current political advertising issues would be futile in reducing unscrupulous campaigning in a more digitally sophisticated future referendum or election. There cannot be a single digital catch-up, but rather a constant programme of evaluation and response.

Privacy and anonymity

6. To what extent does increasing use of encrypted messaging and private groups present a challenge to the democratic process?

There are two aspects to this question. While it apparently focuses on the use of encrypted messaging by private citizens, it is also worth considering how the use by elected politicians can present challenges to the democratic process.

In the US, in the wake of the 2016 DNC hack, political parties moved towards using more encrypted messaging tools (The Verge, 2017). Encrypted communications and good data security practice by candidates are important in minimising the risk of hostile hacks interfering with the ordinary process of an election.

But when candidates become office-holders, encrypted messages pose new questions for records management. As reported in Wired magazine, WhatsApp is used as an organisational tool by MPs in part because of a belief that it is safe from public disclosure: "[M]ultiple MPs say they use it in the belief that conversations aren't subject to Freedom of Information requests – in fact, WhatsApp messages are still subjected to FOIA if they concern government business". Indeed, in Ireland there has been a successful FOI request for WhatsApp conversations between government communications staff (Wired, 2019).

Where encrypted messaging platforms are used to attempt to avoid mechanisms of accountability and transparency, this presents a challenge to the democratic process.

Turning to the use of private groups by citizens, this is a complicated issue. Concerns are rooted in stories of how services like WhatsApp are used in other countries; however, this misses the important effect of local culture on technology. The use of WhatsApp and the internet in general can be radically different in contexts abroad, compared to the UK.

As explored in mySociety research examining the availability of parliamentary information in various countries in Sub-Saharan Africa, high use of WhatsApp and other social networks is driven by differences in the cost of data, where special data plans that cheaply give access to social networks can in part explain big differences in engagement with the internet (Rumbul et al, 2018). In the UK it is not currently common to be engaged in multiple group chats with large memberships, whereas this is an important part of how information travels in Kenya and Nigeria.

In their study of WhatsApp use in Nigeria, Hassan and Cheeseman found high levels of overlap between the private group chats people were in and their existing networks — often replicating "existing clientelistic networks" in a way that amplified "the significance and influence of networks that already exist within Nigerian politics and society".

For instance, one of the more bizarre examples of the rumours that spread on these networks was the idea that President Buhari had been replaced by a clone or imposter while under care in London. Hassan and Cheeseman contextualise this rumour, pointing out the role of influential figures in validating it, and the recent history of a president disappearing from public view with an illness and never re-emerging. They argue that messages that "are seen to be credible are those that resonate with individuals because they contain an element of the truth, or play on recent experiences." Misinformation is at its most dangerous when it is plausible, but this is also the situation where it is hardest for platforms (and for citizens in general) to distinguish between rumours and fact.

The part played by WhatsApp in a wave of lynchings in India similarly has to be contextualised in the broader picture of vigilante violence. As Chinmayi Arun (2019) points out, there is also a recent history of both a migration triggered by rumours and threats of violence circulating over text message, and riots and a previous lynching attributed to Facebook. Alongside this are "occasions in which a cow carcass has triggered violence, incidents in which trucks have been attacked because they were carrying buffalo tallow, and that Dalits have been attacked for skinning a dead cow". Vigilante violence in India is clearly related to factors that go beyond the particular nature of WhatsApp's communication model.

To return this picture to the UK, the growth of private messaging social media will not have the same effect as it has in other countries: it will have distinct effects — both positive and negative — rooted in the problems already manifest in the culture. That said, some aspects are universal: describing the role of WhatsApp groups on politics in India, a Guardian article says that "significant parts of society are kept in a constant state of tension and polarisation, a state exacerbated by the algorithms that privilege outrage over nuance" (The Guardian, 2019). But this description applies far beyond India and WhatsApp, and could equally apply to cable news in the United States, or newspapers in the UK.

Established media organisations are at least technically regulatable, while it is unclear how this could be managed for private groups. However, this is a distinction without a practical difference. In reality there is a reluctance to regulate news organisations to the extent required to counteract their worst effects because of the belief that this would inevitably harm their positive effects. If it was practically possible to moderate content on private groups, this precedent suggests that, in reality, policy makers would decline heavy-handed use of it because of the positive uses of the service such action would interfere with.

Hassan and Cheeseman argue that the private messaging ecosystem also had numerous positive uses in the Nigerian context, both for the political process and ordinary citizens: "WhatsApp is used to both spread disinformation and to counter it. The private messenger application is also used to observe elections and to share fact-checked information. It therefore represents a competitive information environment that may spread misinformation but also levels the playing field between the ruling party and the opposition and can be used to boost electoral transparency and accountability."

Public policy may have to adapt to the fact that the period in which large numbers of people conduct their lives and communications in observable ways (and subject to mass surveillance) is not permanent, but a feature of a relatively short window at the start of the 21st century. People may revert to talking in the relative privacy of small personal and professional groups — just as they did before. Communication may be faster, but ultimately these groups contain many of the same participants talking about the same things.

The historian Matthew Lockwood argues that, despite advances in forensic science, most modern homicide cases are solved in the same way they were in the early-modern period, through witness evidence (Lockwood, 2017). Similarly, while encrypted messages are technically impenetrable, in reality the people sending and receiving the messages continue to be the weakest point. Private group chats, whether between teenagers or MPs, are frequently leaked via screenshots. In Nigeria, the major political parties accept that "moles" are likely to be present on group chats — because the chats could not otherwise be open enough to spread the messages they need to spread.

One problem with WhatsApp is that while the maximum group size is large (256 people), it is also small, in that to reach a truly mass audience messages have to be shared through multiple groups. This means that attribution quickly gets lost and the original author of a message is difficult to determine. However, this is also effectively the case in the UK, given how screenshots jump between social media services. It also means messages have to pass through multiple gate-keepers to reach a mass audience: no single algorithm, however designed, can show content to millions of voters; thousands of people have to make the decision to share the information.
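
Some rough arithmetic (our own illustrative calculation, using the 256-member cap mentioned above) makes the gate-keeper point concrete: reaching a mass audience through group chats requires thousands of separate sharing decisions, rather than one algorithmic broadcast.

```python
# Illustrative arithmetic only: how many group-level sharing decisions are
# needed to reach a mass audience through 256-member chats?
import math

GROUP_CAP = 256              # WhatsApp's maximum group size at the time of writing
target_audience = 1_000_000  # an assumed mass audience, for illustration

# Best case: every group is full and no memberships overlap.
groups_needed = math.ceil(target_audience / GROUP_CAP)
print(groups_needed)         # 3907 separate groups, each needing someone to forward

# In practice memberships overlap heavily, so even more individual forwarding
# decisions are required than this lower bound suggests.
```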

In short, increased use of encrypted group chats in the UK will present some new challenges, but also new opportunities. The privacy features of encrypted communications have benefits in increasing the security of ordinary citizens and political candidates from hostile action, and any attempt to degrade that encryption would open up citizens to greater risk of fraud, and the electoral process to greater risk of interference.

7. What are the positive or negative effects of anonymity on online democratic discourse?

Anonymity in online political discourse has positive and negative aspects, both enabling engagement for individuals for whom privacy is essential to their personal safety, and concealing the identities of those pursuing more harmful goals. Research has shown that anonymity in itself does not necessarily have an egregious effect on political discourse, and that it is rather the controversiality of the issue being debated that inflames tensions (Berg, 2016); however, other studies have shown that the likelihood of escalation in language and personal attacks online increases where anonymity is provided (Rowe, 2015). These are just two of many studies concerning anonymity and political discourse online which appear to conflict in their conclusions: differences in experimentation, methodology and participant culture account for such varied results. As with many questions concerning this subject, the answer is neither net positive nor net negative, but a matter of nuance, affected by cultural norms.

Away from the social media aspect of political discourse, it is important to raise the issue of anonymity in the sourcing of legitimate information for public distribution. Freedom of Information law, for example, may be used in the public interest by those with good reason not to use their real names, such as whistleblowers within public bodies. While it is understood that the code of practice must reflect the law, the code should note the Information Commissioner's guidance on what constitutes a "real name" (in its document on recognising a Freedom of Information request) and promote the fact that a request can be made using a maiden name, any name by which you are "widely known and/or is regularly used", or the name of an organisation or company. Public bodies regularly ask for a requestor's name when requests are submitted in the name of organisations, and this is not technically required.

Democratic debate

8. To what extent does social media negatively shape public debate, either through encouraging polarisation or through abuse deterring individuals from engaging in public life?

Social media can have both a cooling and heating effect on public democratic debate. The heating effect can be seemingly spontaneous and frenzied, arising in response to a specific incident or statement, and can quickly be blown out of all proportion. The way in which platforms — in particular Twitter — are designed, means that individuals can quickly come to crave the immediate gratification that comes with mass online engagement on a specific issue. These platforms are designed to appeal to our pleasure receptors, and it is therefore unsurprising that individuals can lose some sense of proportion when language and engagement escalate online. Often, escalation leads to a polarisation of views, and increasingly toxic language can emerge. The voices that shout the loudest and longest tend to monopolise headlines, whether or not those voices are factual or the headlines are favourable, and this gives broader media coverage to factually inaccurate or opinion-based stories. This further fuels polarisation, and acts as a deterrent to moderate voices.

The well-publicised ‘sinister' side of social media has had an understandable cooling effect on debate, both in terms of politicians refusing to make statements or give opinions, and in terms of individuals declining to become involved in public life at all in order to avoid the almost inevitable online abuse so commonly experienced by politicians in the UK. This reticence to be subjected to the rigours of social media is already cited by many individuals, women and minorities in particular, as a reason for deciding against pursuing roles in public office. Where once the confrontational nature of the House of Commons was often cited as a deterrent to politically underrepresented individuals pursuing a career in politics, it is now the potential for relentless social media abuse that deters high-calibre individuals from the House. The social media deterrent to engaging in public life should be taken very seriously, as the likely effect will be to reduce diversity and meaningful representation in the House of Commons, with a corresponding impact on the quality and appropriateness of legislation.

9. To what extent do you think that there are those who are using social media to attempt to undermine trust in the democratic process and in democratic institutions; and what might be the best ways to combat this and strengthen faith in democracy?

N/A

Misinformation

10. What might be the best ways of reducing the effects of misinformation on social media platforms?

The fundamental issue with misinformation, whether online or offline, is the difficulty in disseminating the corrected facts sufficiently to reach all those who consumed the misinformation, and to persuade them of the veracity and legitimacy of the correction. The medical sector has spent 20 years and countless hours of campaigning in an attempt to persuade individuals that the MMR jab is, in fact, safe and not a cause of autism in children. However, the original erroneous claim that the jab is not safe retains some of its potency to this day, resulting in lower levels of take-up and increases in infection. While anti-vaccination messaging and communities flourish on social media, it is important to remember that the initial MMR stories were supported by mainstream newspapers. Social media is not unique in spreading misinformation, and information, once published and reported on, takes on a life of its own and cannot easily be retracted or corrected.

Social media platforms are able to design algorithms that vary the presentation of political information, and studies have shown that presenting the same story to users with and without misinformation reduces the likelihood of the erroneous story being believed (Bode & Vraga, 2015). This does, however, rely on the platform presenting the right kind of information, and on consumers reading multiple sources thoroughly. The use of sources within stories has been shown to increase the perceived validity of information consumed online when stories can be compared (Vraga & Bode, 2018); however, the absence of such sources does not significantly reduce perceived validity when no comparison is available, demonstrating that individuals do not actively critique source references unless comparing two stories. Again, this validity test only works where the consumer is sufficiently interested in a subject to read multiple stories on the same topic.

A relatively simple requirement for online platforms to alert readers to the lack of citations (much like Wikipedia) contained within political content may go some way to encouraging more critical assessment of online information.
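
A check of this kind could be technically lightweight. The sketch below is a simplified, hypothetical illustration (not a product specification): it flags political content that contains no outbound links to sources, in the spirit of Wikipedia's "citation needed" notices.

```python
# Simplified, hypothetical sketch of a "no citations" alert for political
# content. A real system would need far more robust source detection than
# simply looking for URLs.
import re

URL_PATTERN = re.compile(r"https?://\S+")

def needs_citation_notice(post_text: str) -> bool:
    """Return True if the post contains no links to supporting sources."""
    return URL_PATTERN.search(post_text) is None

post = "The new policy will cost taxpayers billions."
if needs_citation_notice(post):
    print("Notice: this content cites no sources.")
```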

Moderation

11. How could the moderation processes of large technology companies be improved to better tackle abuse and misinformation, as well as helping public debate flourish?

Robyn Caplan (2018) distinguishes between three kinds of moderation policy:

1) Artisanal, for platforms such as Vimeo, Medium, Patreon, or Discord;

2) Community-Reliant, for platforms such as Wikimedia and Reddit; and

3) Industrial approaches, for platforms such as Facebook or Google.

The content moderation practiced by mySociety on its own sites falls into the artisanal category, with each case considered on an ad hoc basis with bespoke solutions applied to suit the circumstances.

The label of ‘industrial' for the way in which large platforms approach moderation helps to emphasise that they are not just scaled up versions of a community forum, but require a distinct kind of work that has many of the same issues (environmental health, worker safety, off-shoring) as other industrial processes. While artisanal approaches are managed by a relatively small number of staff who are able to make individual judgements, industrial approaches require the creation of standard definitions and approaches that can be distributed to a staff of thousands. YouTube and Facebook employ workers in the tens of thousands in their content moderation teams.

The global nature of the platforms means there are immediate difficulties with standardised approaches, as "they must carefully consider how to be sensitive to localized variations in how issues like context-based speech, like hate speech and disinformation, manifest in different regions and political circumstances" (Caplan, 2018). This also gives these organisations the power to define the kind of tricky and fiddly distinctions between allowable and unallowable speech that had previously been debated in legislatures and courtrooms. Caplan argues that "tech companies, when made responsible for establishing the difference between hate speech and political expression, often search for straightforward, consistent calculations, which are all too often divorced from historical and cultural contexts" — but that, on the other side, allowing individual autonomy "can lead to rules being applied inconsistently, letting through images of child abuse and instances of hate speech".

American tech companies policing content worldwide cause additional problems when they are successful. Arun (2019) points out that Facebook "has incorrectly flagged the phrase "Free Kashmir" as locally illegal content for censorship purposes, when it is in fact constitutionally protected speech."

Moderation guidelines reveal subtle, and sometimes questionable, decisions about what is and is not considered hate speech (The Guardian, 2018). The more complicated and specific the policy, the harder it is to train workers (especially those not working in the same country) to apply its distinctions.

Reducing the human component through deep-learning tools does not necessarily remove these problems, as the training dataset comes from human operators, carrying any misidentifications into the automated process. This is a hard problem that requires consistent standards, contextually applied — and where even correctly identifying and removing 99.99% of flagged content may still leave thousands of instances of problematic content online. Ultimately, the intractability of the problem, combined with the growing public relations issue, is likely to be part of the motivation for Facebook's future direction of moving towards private groups with no public moderation needs.
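
The scale problem can be made concrete with some rough arithmetic; the daily volume below is an assumption chosen purely for illustration, not a reported figure from any platform.

```python
# Illustrative arithmetic: even very high moderation accuracy leaves large
# absolute numbers of mishandled items at platform scale.
flagged_items_per_day = 3_000_000  # assumed volume, for illustration only
accuracy = 0.9999                  # 99.99% of flagged items handled correctly

missed_per_day = flagged_items_per_day * (1 - accuracy)
print(f"{missed_per_day:.0f} items mishandled per day")  # ~300
print(f"{missed_per_day * 365:.0f} items per year")      # ~109,500
```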

In principle, many of these issues can be resolved in a single country, with large well-resourced moderation teams working to standards that reflect current law or developed in conjunction with citizen input. Worldwide this has difficulties because it involves contradictory stances on content, and may require decisions that an American entity is uncomfortable with, especially when dealing with non-democracies.

Aside from the effect these rules have on the wider system, the need for moderation also has intense effects on the workers involved. Asking "Facebook" or "Twitter" to remove harmful content from their platforms, in reality means employing thousands of people whose day-to-day jobs involve reviewing flagged material — including deeply violent and distressing material (The Verge, 2019). This is low-paid, stressful and traumatising work. While platforms hope to move more and more of this work to AI tools, the sheer scale means that humans remain deeply involved in the process.

The need to sift distressing materials out of social media platforms creates a form of psychological toxic waste (Arsht & Etcovitch, 2018; Rolling Stone, 2017). Regulations related to moderation should therefore also include regulations on the safety and care of those disposing of it.

Technology and democratic engagement

12. How could the Government better support the positive work of civil society organisations using technology to facilitate engagement with democratic processes?

Online and digital technologies that enable citizens to hold governments to account, known as civic technology, are proliferating at a steady rate around the world. The potential for these platforms to invigorate citizen engagement, increase transparency, and broaden public debate has been recognised not only by those in civil society, but by governments, by development agencies and by philanthropists. The publication and dissemination of parliamentary information in developed countries has been shown to improve citizen engagement in governance and reduce the distance between the representative and the represented (Rumbul, 2016). Over the last 20 years, there have been increasing efforts to use digital tools to facilitate this process, which has resulted in highly successful parliamentary monitoring websites such as TheyWorkForYou.com in the UK.

The published academic research into digital tools for facilitating democratic activity with a UK focus is mainly splintered into several policy-related categories. Broadly grouped, these consist of Digital and Health, Digital and Heritage/Culture, Digital and Physical Environment, Digital and Communities of Interest, Digital and Education, and Digital Infrastructure. A range of digital tools and initiatives within these policy areas are in their nascent stages, but are evidently targeted at very specific outcomes such as community planning or policy-focused consultations, rather than as all-encompassing platforms through which democratic activity and engagement can be fostered. Currently, digital solutions for broader democratic engagement that transcend individual policy or service concerns have not been widely attempted in the UK.

Static one-way websites where people could engage in transactional relationships with governments, such as paying bills or filing forms (West, 2004), have been the standard digital experience for citizens. More recently, social media platforms, and digital platforms specifically designed to reduce the distance between citizen and institution, have emerged as key means through which governments are attempting to become more responsive (Bryer & Zavattaro, 2011; Mergel, 2013a). Social media platforms are varied, but exhibit common capabilities such as sharing, instant information gathering, networking, co-creation, and interactivity (Bryer & Zavattaro, 2011; Mergel, 2013a, 2013b). Digital platforms designed specifically for the purpose of engaging citizens in democratic participation display similar features, such as spaces for learning, sharing, deliberation and decision-making, but tend towards being more focused on specific policy areas or locations, and require significant moderation and management in order to direct the activity effectively. Peer-to-peer digital platforms can amplify social inclinations to cooperate over assumed impulses of self-interest (Benkler, 2006; Glaeser, 2011). These innovations provide local government with the opportunity to engage a larger number of individuals with varying interests in governmental affairs than could be achieved offline.

There is evidence to suggest that public authorities may not be optimising their use of the interactional capabilities of these tools, and are only increasing capacity for one-way and directed participation, rather than meaningful citizen participation and engagement (Brainard & Derrick-Mills, 2011; Brainard & McNutt, 2010; Bryer, 2013; Hand & Ching, 2011; Mergel, 2013a; Rishel, 2011; Zavattaro & Sementelli, 2014).

Government support for civil society organisations using technology to bridge the citizen and institution divide should be significantly increased, both in providing sufficient financial support to develop and implement new digital tools, and in providing meaningful access to institutions in order to work collaboratively, rather than in piecemeal form as contracted ‘outsiders'. Useful digital tools require significant user research and development prior to being constructed, and stable data to build upon, and such things take significantly longer than short annual contracts to fulfil. There are numerous examples of digital tools around the world being created and implemented, only to fall out of use immediately, because they were not tailored to user needs. Such initiatives are expensive and disillusioning for citizens. High quality and meaningful long-term support for digital civil society is the only way to create tools that work for the whole democratic ecosystem.

13. How can elected representatives use technology to engage with the public in local and national decision making? What can Parliament and Government do to better use technology to support democratic engagement and ensure the efficacy of the democratic process?

As explored in our recent work investigating the role of digital tools in local government, there are a range of options available to better integrate citizens in decision making processes.

While online engagement lowers barriers to entry by reducing knowledge and time requirements to participation, online platforms have their own barriers to participation, still requiring internet access, knowledge that the activity is taking place, and the skills required to use the website. Individuals using smartphones rather than desktop computers are less likely to want to engage with lengthy PDF documents and clunky non-responsive surveys. On the other hand, online platforms have more scope for automatic translation, making the framework of the activity, evidence and public comments more accessible to people who use other languages (where their language is not a statutory requirement). They can also move a consultation from being a set of individual replies to building a better collective picture of opinions.

There are now digital tools specifically designed for consultative exercises that push away from this practice of minor digitisation of offline forms of participation, helping communities discuss, refine and rate ideas.

Delib's Citizen Space (https://www.delib.net/citizen_space) is used by various UK national and local government organisations to assist in running a consultation process. Digital tools can make evidence and plans more readable by the general public, and through the publication of responses, can add a layer of transparency to community feedback that would be absent in offline consultations. Social Pinpoint (https://www.socialpinpoint.com/) is a similar crowdsourcing platform that allows themed proposals to be commented on, voted for and ranked. Your Priorities (https://www.yrpri.org/domain/3) is an open source tool produced by Citizens Foundation and used in a Democratic Society project in Argyll & Bute (https://www.citizens.is/portfolio_page/councils-in-scotland/), a project that generated 150 ideas and in which more than 1,300 people contributed to discussions. Questions can be proposed, categorised and given positive or negative points.

Other tools are well adapted to consultations and engagement activities around geographic issues that suit the use of maps. Common Place (https://www.commonplace.is/) is a consultation platform/service used by local councils and developers to get feedback on aspects of development projects. It creates a sub-site that can then be divided into separate areas by development or theme. It was used by the London Borough of Waltham Forest for the consultation surrounding the Mini Holland project. As in this example of a road safety consultation (https://wfroadsafety.commonplace.is), reports can also be made directly onto a map to help people give more precise feedback. Similarly, EngagementHQ (https://www.bangthetable.com/engagementhq-community-software) has features for a wide array of possible modes of engagement, with the similar option of a place-based approach, as well as more general idea-sourcing features.

Platforms useful for consultations can also be pitched as more general tools. Consul (http://consulproject.org/en/) was initially a framework designed by Madrid City Council for a participatory budgeting process, but has now been used and adapted in 33 countries. It includes options for collaborative commenting on legislation, participatory budgeting, features for creating and gathering support on proposals and holding general online debates. Hence Consul can be used for consultations and other engagement activities (and is useful as an open source tool), but also has more specialised use as a participatory budgeting platform.

Some UK cities operate general online consultation platforms where citizens register and can then take part and receive updates about future consultations, importantly allowing participation through mobiles and tablets. On Let's Talk Newcastle (https://www.local.gov.uk/lets-talk-newcastle) engagement can vary between polls, surveys, focus groups and "topic walls", which allow group deliberation on an issue. Talk London (https://www.london.gov.uk/talk-london/) similarly allows comments, surveys and discussions for consultations, engagement activities, and programmes run by the GLA. Alternatively, automated tools can be used to indirectly gauge the preferences and problems of citizens. Zencity (https://zencity.io/) is a US-based tool that uses 311 reports and social media to produce analysis for local authorities.

Digital tools for consultation can also take the form of evaluation frameworks that allow councils to assess and make better decisions for an area. Public Health England's SHAPE tool (https://www.local.gov.uk/gathering-evidence-digital-tools-yorkshire-and-humber) is a platform that includes a number of health assets to help understand patterns of ill health across an area and assist in medical planning and transportation decisions (e.g. where a trend of poorer health can be explained by inaccessible healthcare). The PLACE Standard (https://www.placestandard.scot/guide/quick) is a tool that uses 14 questions to assess the quality of a place; aggregate responses can be used to understand what people feel is good and bad about an area and to make simple comparisons with other areas.

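To make the aggregation step concrete, the short sketch below (in Python) averages per-question scores across respondents in an area and compares two areas question by question. The question names, the scoring scale and the comparison rule are illustrative assumptions for the example, not the PLACE Standard's actual data model.

```python
# Illustrative sketch only: aggregates place-assessment scores so that two
# areas can be compared question by question. The question names and the
# scoring scale are assumptions for the example, not the PLACE Standard's
# actual data model.
from statistics import mean

def average_scores(responses):
    """responses: list of dicts mapping question -> score."""
    questions = responses[0].keys()
    return {q: round(mean(r[q] for r in responses), 2) for q in questions}

def compare_areas(area_a, area_b):
    """Return the difference (A minus B) in average score per question."""
    avg_a, avg_b = average_scores(area_a), average_scores(area_b)
    return {q: round(avg_a[q] - avg_b[q], 2) for q in avg_a}

# Example: two small sets of responses for hypothetical areas.
area_a = [{"moving around": 5, "public transport": 3, "play and recreation": 6},
          {"moving around": 4, "public transport": 2, "play and recreation": 5}]
area_b = [{"moving around": 3, "public transport": 5, "play and recreation": 4},
          {"moving around": 4, "public transport": 6, "play and recreation": 4}]

print(compare_areas(area_a, area_b))
# {'moving around': 1.0, 'public transport': -3.0, 'play and recreation': 1.5}
```
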
Digital tools provide an opportunity to explore new ways of conducting conversations with the public, where both sides can be a part of question forming. Allourideas.org (https://www.allourideas.org/) explores the idea of wikisurveys, in which respondents vote between options and can add alternative options of their own over time. Pol.is (https://pol.is/home) is designed for putting open-ended questions to a large group: participants submit short statements and vote on one another's statements, and similar statements and voting patterns are grouped together to reveal the different kinds of opinion present among the users. This is useful as a way of hedging against skewed participation (NESTA, 2019), because it helps explain the different kinds of opinions (with different sets of preferences) that exist, instead of simply crowning winners.

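As an illustration of the wikisurvey mechanism, the sketch below implements a minimal pairwise survey in Python: respondents vote between random pairs of options, can add options of their own at any point, and each option is scored by its win rate. The class, option texts and scoring rule are illustrative assumptions rather than the implementation of Allourideas.org or Pol.is.

```python
# Minimal sketch of a wikisurvey: respondents see pairs of options, pick one,
# and can add new options at any time. The names and the simple win-rate
# scoring are illustrative assumptions, not any particular platform's code.
import random
from collections import defaultdict

class WikiSurvey:
    def __init__(self, seed_options):
        self.options = list(seed_options)
        self.wins = defaultdict(int)
        self.appearances = defaultdict(int)

    def next_pair(self):
        """Draw a random pair of options to show a respondent."""
        return random.sample(self.options, 2)

    def record_vote(self, winner, loser):
        self.wins[winner] += 1
        self.appearances[winner] += 1
        self.appearances[loser] += 1

    def add_option(self, text):
        """Respondents can contribute new options mid-survey."""
        if text not in self.options:
            self.options.append(text)

    def scores(self):
        """Win rate per option; options never shown score 0."""
        return {o: self.wins[o] / self.appearances[o] if self.appearances[o] else 0.0
                for o in self.options}

survey = WikiSurvey(["More cycle lanes", "Longer library hours"])
survey.add_option("Re-open the swimming pool")   # a user-contributed option
a, b = survey.next_pair()
survey.record_vote(winner=a, loser=b)
print(sorted(survey.scores().items(), key=lambda kv: kv[1], reverse=True))
```
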
The above examples show the diversity of ways in which digital tools can be used to create a more collaborative consultation process that helps the community build a collective picture of options from local knowledge and ideas. However, the most important element remains the question of whether the consulting authority is committed to engaging with the process in earnest.

There have been a number of Participatory Budgeting (PB) activities initiated by local governments in the UK, most relating to small grants given to community organisations, although there has been a move towards using these approaches to decide authority spending priorities. Dundee ran the UK's first participatory budgeting process using mainstream budget funds in 2018 (PB Scotland, 2018). The role of the public authority is both to provide funds for projects and to define the framework and process for public involvement.

As explored in mySociety's 2018 research into participatory budgeting (Rumbul et al, 2018), PB is a "fast policy" that has spread around the world, and has not only diverse forms but also diverse reasons for implementation: "Government believes it can be used to increase democratic engagement, to generate public ownership of budgetary decision-making, and to tacitly generate public support for incumbent government. [...] And for citizens, it could provide a public route to request new or improved facilities or services that may not have been a political priority for governing parties or civil service managers".

Participatory budgeting is at its most successful where it provides residents with an opportunity to suggest new ideas for their community and to work together, rather than just inviting them to decide upon ideas developed by government. PB Scotland's Charter reflects that PB does not inherently lead to good outcomes, but has to be rooted in principles and meet standards.

There are a variety of digital tools that can be used to enable participatory budgeting processes. These include tools that support the generation of proposals or activities to be funded, such as Your Priorities, created and maintained by Citizens Foundation, and tools that support the decision-making process itself, such as Citizens Foundation's Open Active Voting or D21 (https://en.d21.me/). There are also tools that can support many different aspects of participatory budgeting, including Participare (https://participare.io/), Consul (http://consulproject.org/en/) and Empatia (https://empatia-project.eu/). The proposal stage is a key area where digital tools can be transformative, allowing a wide array of ideas to be considered and presented before moving to a smaller list for the voting phase.

How best to use these tools depends upon the design of the process itself. As with other digital tools, using online platforms for deliberation has benefits in allowing wider participation, but perhaps at the cost of some of the relationship building, and resultant shared solution finding, that is seen in face-to-face deliberation. Digital tools used for voting in participatory budgeting can likewise allow wider participation, and make it easy for those taking part in the process to view relevant information prior to selection. There is a problem, however, if increased participation (more people taking part) results in more concentrated participation from subsets of the population (who could skew the distribution of public money).

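As a concrete illustration of the voting phase, the sketch below shows one simple way a digital tool might count residents' approvals and allocate a fixed budget, funding the most-approved affordable proposals first. The proposal names, costs and greedy funding rule are illustrative assumptions and do not describe the algorithm of any of the platforms named above.

```python
# Hedged sketch of a simple decision stage for participatory budgeting:
# residents approve any number of proposals, and proposals are funded in
# order of approvals (cheapest first on ties) until the budget is spent.
# This is one common approach, not the method of any named platform.
from collections import Counter

def decide_budget(ballots, costs, budget):
    """ballots: list of lists of approved proposal names.
    costs: proposal name -> cost. Returns (funded proposals, money left)."""
    approvals = Counter(p for ballot in ballots for p in ballot)
    ranked = sorted(approvals.items(), key=lambda kv: (-kv[1], costs[kv[0]]))
    funded, remaining = [], budget
    for proposal, votes in ranked:
        if costs[proposal] <= remaining:   # skip anything no longer affordable
            funded.append((proposal, votes, costs[proposal]))
            remaining -= costs[proposal]
    return funded, remaining

ballots = [["Park lighting", "Youth club"],
           ["Park lighting", "Community garden"],
           ["Youth club"]]
costs = {"Park lighting": 12_000, "Youth club": 30_000, "Community garden": 5_000}
print(decide_budget(ballots, costs, budget=40_000))
# ([('Park lighting', 2, 12000), ('Community garden', 1, 5000)], 23000)
```
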
Online voting can also open up security problems in the process, and trying to address these can increase barriers to participation (Rumbul et al, 2017): increasing the security of the process by adding verification removes some of the benefits of increased access. Security-related concerns may not be significant in the current UK context, where PB is used to allocate comparatively small funds, but if PB becomes a means by which to allocate larger sums of money then this becomes more challenging. In EU countries, ID cards are often used as verification, but there is no similar universal form of ID in the UK.

As such, digital tools do not provide easy answers to the problems of offline PB, but when deployed as part of a hybrid system they can create greater reach. For instance, assisted kiosks for people who do not have access to online voting can help expand access and improve the management of a PB process.

Overall, there are a number of digital tools and processes that can effect positive democratic engagement; however, these all require both the active participation of citizens, who may not feel inclined to give up the time required for meaningful engagement activities, and the closure of the feedback loop by government, demonstrating that the activity was indeed worthwhile.

14. What positive examples are there of technology being used to enhance democracy?

There are many, varied examples of technology being used to enhance certain democratic events and features.

Examining the effects of civic technology reveals how evaluations of digital tools are bound up in ongoing debates about democracy. On a purely factual level, civic technologies facilitate higher quality information exchange between citizens and the state than general sharing platforms, as these digital tools provide logical user journeys that shape interactions to maximise their clarity and relevance and direct them towards the most appropriate recipient. Civic technologies are designed to, as far as possible, close the feedback loop, and therefore encourage users towards meaningful dialogue, problem solving and information facilitation, and away from ‘light touch' opinion sharing on a general platform, which can be likened to shouting into a great void: it may be cathartic but is unlikely to achieve anything. Fostering better quality democratic interaction has benefits. mySociety research found that over 90% of TheyWorkForYou.com users surveyed believed that being able to see parliamentary information in such a consumable format enabled them, at least in part, to hold their representatives to account (Rumbul, 2016). A similar percentage believed that representatives would behave in a different manner if this information was not publicly available. This statistic is important in demonstrating that disclosure is one of the fundamental features of building trust in democratic process, something that can be further and more easily developed technologically.

Studies demonstrate that local authority website usage is correlated with trust in local authorities (Tolbert & Mossberger, 2006) and with the scope of interaction between citizens and local governments (Feeney, Welch, & Haller, 2011; Garrett & Jensen, 2011). The more citizens use local government websites, the more they communicate with local authorities and trust them, even more so than at the national level. Thus, local authority Facebook or Twitter activities have the potential to generate beneficial results for both citizen and institution. The development of local policy alongside new digital democracy services is also shown to produce better quality policy and services, compared to policy developed in isolation that is then digitised as a secondary activity or afterthought. The ‘Civic Tech Cities' report by mySociety (2017) evidences this, showing that the user design thinking brought to the table by tech experts positively affects how eventual policies are solidified and operationalised, creating not only better digital tools, but better policy too. This has a knock-on effect of service users feeling more positive towards public bodies. Here again, as with other digital democratic activities, the closure of the feedback loop increases the trust citizens have in institutions.

Democratic services provided by government have also been transformed over the last few years. At a basic and important level, the process of registering to vote has been greatly simplified through online technology. This has made it easier for groups that move more often (e.g. renters and young people) to claim their right to participate in the process. In the 2017 election this arguably contributed to the increase in renter turnout, which had a practical impact on the result (Bloomberg, 2018).

The parliamentary e-petition system has also had a positive impact. While most petitions are rejected, only a few lead to debates, and fewer still have a meaningful impact, some do make a difference, especially through the ability of the Petitions Committee to gather additional evidence and information. Such petitions also mobilise debate around specific issues that can then be amplified by the press.

Cristina Leston-Bandeira (2019) argues that the e-petition system highlights "issues dispersed across the country, of no particular significance within specific constituencies. The high heels petition is a good example of this: in the first instance seemingly trivial, its rapid collation of signatures highlighted it as a serious issue affecting many women across the country. Besides inquiries, the gathering evidence role is also evident in oral evidence sessions held, such as the ones on the Meningitis B vaccination (Petition 108072), grouse shooting (Petitions 125003 and 164851) and a cap on young people's car insurance (Petition 166847). In all three, the evidence gathered would then inform the petitions' respective debates. [T]he Meningitis B e-petition's evidence sessions were key for the Committee to press on policy change".

On a very broad level, technology has empowered many individuals to learn more about democracy and engage in it, even if only at a very low level, through reading, sharing or commenting. While the quality of this experience, and how well informed it is, will vary, there is a net benefit to individuals being able to easily access information and opinion concerning their democracy. If the pernicious heating and cooling effects of certain aspects of the digital democracy experience can be managed more effectively, technology can represent a net benefit to citizens and representatives alike in levelling opportunities to engage, promoting diversity in public life, and better informing policy-making for the benefit of all.

Notes


1:Bloomberg. (2018) Housing crisis renters are driving British voters to Labour. [Accessed 19.9.2019] <https://www.bloomberg.com/opinion/articles/2018-03-05/housing-crisis-renters-are-driving-british-voters-to-labour>

2:Public Square (2019), Digital tools for democratic participation

3:Rumbul, R. (2016, September). ICT and citizen efficacy: the role of civic technology in facilitating government accountability and citizen confidence. In IFIP World Information Technology Forum (pp. 213-222). Springer, Cham.

4:YouGov. (2019) Are MPs elected to exercise their own judgement or do their constituents' bidding? [Accessed 17.9.19] <https://yougov.co.uk/topics/politics/articles-reports/2019/08/13/are-mps-elected-exercise-their-own-judgement-or-do>

5:Gov-Ari, E. (2017) What I've discovered when analyzing the "Parents" WhatsApp group. Medium [Accessed 17.9.19] <https://medium.com/@eladgovari/what-ive-discovered-when-analyzing-the-parents-whatsapp-group-dc1fe19f2e00>

6:Ceron, A. (2015). Internet, news, and political trust: The difference between social media and online media outlets. Journal of Computer-Mediated Communication, 20(5), 487-503.

7:Primack, B. A., Shensa, A., Escobar-Viera, C. G., Barrett, E. L., Sidani, J. E., Colditz, J. B., & James, A. E. (2017). Use of multiple social media platforms and symptoms of depression and anxiety: A nationally-representative study among US young adults. Computers in Human Behavior, 69, 1-9.

8:New York Times. (2018) YouTube, The Great Radicalizer. New York Times [Accessed 17.9.19] <https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html>

9:Svantesson, D. J. B. and van Caenegem, W. (2017) ‘Is it time for an offence of "dishonest algorithmic manipulation for electoral gain"?', Alternative Law Journal, 42(3), pp. 184–189. doi: 10.1177/1037969X17730192.

10:GoodThingsFoundation (2019) Young Adults and the Digital Skills Gap. [Accessed 17.9.19] <https://www.goodthingsfoundation.org/news-and-blogs/blogs/Young-adults-and-the-digital-skills-gap>

11:Fox, J. (2007). The uncertain relationship between transparency and accountability. Development in practice, 17(4-5), 663-671.

12:Facebook (2019) Facebook Ad Library. [Accessed 17.9.19] <https://www.facebook.com/ads/library/?active_status=all&ad_type=political_and_issue_ads&country=GB>

13:Electoral Commission (2019) Transparent Digital Campaigning. [Accessed 17.9.19] <https://www.electoralcommission.org.uk/who-we-are-and-what-we-do/changing-electoral-law/transparent-digital-campaigning>

14:The Verge (2017) House Democratic committee moves to encrypted messaging for internal communications. [Accessed 17.9.19] <https://www.theverge.com/2017/7/18/15989456/house-democratic-committee-encrypted-messaging-wickr>

15:Wired (2019) In Westminster, a Brexit war is raging in secret WhatsApp groups. [Accessed 17.9.19] <https://www.wired.co.uk/article/brexit-vote-whatsapp-groups>

16:Arun, C. (2019) ‘On Whatsapp, Rumours, and Lynchings', Economic and Political Weekly, 54(6), pp. 30–35.

17:The Guardian (2019) Is India the frontline in big tech's assault on democracy? [Accessed 17.9.19] <https://www.theguardian.com/commentisfree/2019/may/13/big-tech-whatsapp-democracy-india>

18:Lockwood, M. H. (2017). The Conquest of Death: Violence and the Birth of the Modern English State. Yale University Press.

19:Berg, J. (2016). The impact of anonymity and issue controversiality on the quality of online discussion. Journal of Information Technology & Politics, 13(1), 37-51.

20:Rowe, I. (2015). Civility 2.0: A comparative analysis of incivility in online political discussion. Information, communication & society, 18(2), 121-138.

21:Bode, L., & Vraga, E. K. (2015). In related news, that was wrong: The correction of misinformation through related stories functionality in social media. Journal of Communication, 65(4), 619-638.

22:Vraga, E. K., & Bode, L. (2018). I do not believe you: how providing a source corrects health misperceptions across social media platforms. Information, Communication & Society, 21(10), 1337-1353.

23:Caplan, R. (2018) Content Or Context Moderation? Artisanal, Community-reliant, And Industrial Approaches. [Accessed 17.9.19] <https://datasociety.net/output/content-or-context-moderation/>

24:Arun, C. (2019) ‘On Whatsapp, Rumours, and Lynchings', Economic and Political Weekly, 54(6), pp. 30–35.

25:The Guardian (2018) Facebook releases content moderation guidelines – rules long kept secret. [Accessed 17.9.19] <https://www.theguardian.com/technology/2018/apr/24/facebook-releases-content-moderation-guidelines-secret-rules>

26:The Verge (2019) The Trauma Floor. [Accessed 17.9.19] <https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona>

27:Arsht, A., & Etcovitch, D. (2018) The Human Cost of Online Content Moderation. [Accessed 17.9.19] <https://jolt.law.harvard.edu/digest/the-human-cost-of-online-content-moderation>

28:Rolling Stone (2017) The Human Cost of Monitoring the Internet. [Accessed 17.9.19] <https://www.rollingstone.com/culture/culture-features/the-human-cost-of-monitoring-the-internet-202291/>

29:Rumbul, R. (2016, September). ICT and citizen efficacy: the role of civic technology in facilitating government accountability and citizen confidence. In IFIP World Information Technology Forum (pp. 213-222). Springer, Cham.

30:West, D.M. (2004). E-government and the transformation of service delivery and citizen attitudes. Public Administration Review, 64(1), 15–27.

31:Bryer, T. A., & Zavattaro, S. M. (2011). Social Media and Public Administration: Theoretical Dimensions and Introduction to the Symposium. Administrative Theory & Praxis, 33, 325-340.

32:Mergel, I. (2013a). Social media adoption and resulting tactics in the U.S. federal government. Government Information Quarterly, 30(2), 123–130.

33:Bryer, T. A., & Zavattaro, S. M. (2011). Social Media and Public Administration: Theoretical Dimensions and Introduction to the Symposium. Administrative Theory & Praxis, 33, 325-340.

34:Mergel, I. (2013a). Social media adoption and resulting tactics in the U.S. federal government. Government Information Quarterly, 30(2), 123–130.

35:Mergel, I. (2013b). A framework for interpreting social media interactions in the public sector. Government Information Quarterly, 30(4), 327–334.

36:Benkler, Y. (2006). The wealth of networks. New Haven, CT: Yale University Press.

37:Glaeser, E. (2011). Triumph of the city: how our greatest invention makes us richer, smarter, greener, healthier, and happier. New York, NY: The Penguin Press.

38:Brainard, L.A., & Derrick-Mills, T. (2011). Electronic commons, community policing and communication: On-line police –citizen discussion groups in Washington D.C. Administrative Theory & Praxis, 33(3), 383–410.

39:Brainard, L.A., & McNutt, J.G. (2010). Virtual government–citizen relations: Informational, transactional or collaborative? Administration and Society, 42(7), 836–858.

40:Bryer, T. A. (2013). Designing social media strategies for effective citizen engagement: A case example and model. National Civic Review, 102(1), 43–50.

41:Hand, L.C., & Ching, B.D. (2011). You have one friend request: An exploration of power and citizen engagement in local governments' use of social media. Administrative Theory & Praxis, 33(3), 362–382.

42:Mergel, I. (2013a). Social media adoption and resulting tactics in the U.S. federal government. Government Information Quarterly, 30(2), 123–130.

43:Zavattaro, S.M., & Sementelli, A.J. (2014). A critical examination of social media adoption in government: Introducing omnipresence. Government Information Quarterly, 31(2), 257–264

44:NESTA (2019) Crowdsourcing for democracy using Wikisurveys. [Accessed 17.9.19] <https://www.nesta.org.uk/blog/crowdsourcing-democracy-using-wikisurveys/>

45:PB Scotland (2018) Dundee Decides: Results in for mainstream PB process. [Accessed 17.9.19] <https://pbscotland.scot/blog/2018/4/3/dundee-decides-a-first-for-mainstreaming-in-scotland>

46:Rumbul, R. (2016, September). ICT and citizen efficacy: the role of civic technology in facilitating government accountability and citizen confidence. In IFIP World Information Technology Forum (pp. 213-222). Springer, Cham.

47:Tolbert, C. J., & Mossberger, K. (2006). The effects of e-government on trust and confidence in government. Public Administration Review, 66(3), 354–369.

48:Feeney, M., Welch, E. W., & Haller, M. (2011). Transparency, civic engagement, and technology use in local government agencies: Findings from a national survey. Institute for Policy and Civic Engagement, University of Illinois at Chicago.

49:Garrett, R. K., & Jensen, M. J. (2011). E-democracy writ small: The impact of the Internet on citizen access to local elected officials. Information, Communication & Society, 14(2), 177–197.

50:Bloomberg. (2018) Housing crisis renters are driving British voters to Labour. [Accessed 19.9.2019] <https://www.bloomberg.com/opinion/articles/2018-03-05/housing-crisis-renters-are-driving-british-voters-to-labour>

51:Leston-Bandeira, C. (2019). Parliamentary petitions and public engagement: an empirical analysis of the role of e-petitions. Policy & Politics, xx(xx), 1–22. <https://doi.org/10.1332/030557319X15579230420117>