The Effect of Government Responsiveness On Future Political Participation


Fredrik M. Sjoberg (Data-Pop Alliance), Jonathan Mellon (Oxford University), and Tiago Peixoto (The World Bank)


This paper is a reformatting of a working paper from the Digital Engagement Evaluation Team (DEET) at the World Bank's Governance Global Practice.

Abstract

What effect does government responsiveness have on political participation? Since the 1940s political scientists have used attitudinal measures of perceived efficacy to explain participation. More recent work has focused on underlying genetic factors that condition citizen engagement. We develop a ‘Calculus of Participation' that incorporates objective efficacy – the extent to which an individual's participation actually has an impact – and test the model against behavioral data from FixMyStreet.com (n=399,364). We find that a successful first experience using FixMyStreet.com (e.g., reporting a pothole and having it fixed) is associated with a 54 percent increase in the probability of an individual submitting a second report. We also show that the experience of government responsiveness to the first report submitted has predictive power over all future report submissions. The findings highlight the importance of government responsiveness for fostering an active citizenry, while demonstrating the value of incidentally collected data to examine participatory behavior at the individual level.

Introduction

The most direct example of objective efficacy is a direct government response to an act of participation. In this paper we focus on a new type of non-electoral participation: submission of online reports on local problems through the online platform FixMyStreet. This platform allows citizens in the United Kingdom to report micro-local problems via a website that displays the complaint online for everyone to view, but also, importantly, automatically forwards the complaint to the local authorities. Local authorities can thus engage citizens with updates about specific complaints. As a result of this direct response, we can observe a voter's objective efficacy (whether their participation, through submission of a report, resulted in their problem being fixed) as well as their subsequent engagement with the system.

Here we focus on explaining continued participation with the platform beyond the first report submission. This allows us to ignore factors that are constant, such as socialization levels and genetics. The question is simple: what is the effect of having a first reported problem fixed (government responsiveness) on future participation?

Building on the existing literature we present a simple calculus of participation model, inspired by the classic turnout model. We then present the unique data source, system data from the online platform FixMyStreet (UK), which has records of over three hundred thousand acts of participation. Since this is the first time the data has been used in an academic study, we describe it in detail. In the analysis section we estimate the effect of government responsiveness – objective efficacy – on continued participation in the platform using regression modeling. We conclude by highlighting the implications for future research and policy work.

Efficacy, Government Responsiveness, and Participation

Figure 1. Calculus Of Participation - Iterative Updating of Perceived Efficacy Based On Government Responsiveness. ASSET MISSING

Hypotheses

From the theory presented above we derive the following hypothesis:

H1: A positive experience of government responsiveness – objective efficacy – will increase the probability of future participation.

The main hypothesis can be further specified to apply both to the causal effect on the probability of submitting a second report (H1.1) and to the effect on total future report submissions (H1.2), given that the first time a report is submitted can be considered a formative experience. It is also useful to consider whether an experience of low objective efficacy (failing to have a problem fixed) reduces an individual's propensity to participate again.

Despite the proliferation of solutions similar to FMS in both developed and developing countries, the understanding of citizen engagement dynamics mediated by these platforms remains extremely limited. Similarly, little research has tapped into the potential of incidentally collected data to shed light on participatory behavior, particularly at the individual level. The FMS data and the analyses carried out are described in the following sections.

Data Description

We obtained raw platform data directly from MySociety. The full dataset includes 399,364 individual reports in time-series long format with a unique user id and a time variable. There are 154,957 unique users in the data set. The analysis is conducted on a wide-format dataset with each unique user on a separate row and a set of variables relating to the nth report submitted by that user. The long-form data contains the following variables: the user id of the report submitter, the user id of the user reporting the fix (if applicable), report category (self-selected from a drop-down menu), title of the report, body text, timestamp, and a dummy for whether a photo was attached.

The mean number of reports submitted per user is 2.58, while the median is one.1 Uptake has been increasing steadily, reaching 106,601 submitted reports in 2013. The most common report categories are potholes (23.5%), roads/highways (10.5%), and street lighting (10.1%). Figure 2 illustrates how problem category frequencies have developed over time. Only 11.7% of the reports come with an attached picture of the problem.

Figure 2. Top FMS Problem Categories By Year. ASSET MISSING

As would be expected, different problem categories are associated with different fix rates. For instance, problems with streetlights have a relatively high fix rate of 50%, while problems such as dog fouling have much lower fix rates (20%). In terms of fixes, a total of 159,539 (39.9%) problems have been reported as fixed, either by the council (11.0% of all fixes), by the report submitters themselves (79.9%), or by other users (9.0%).
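As an illustration of the long-to-wide restructuring described above, the sketch below shows one way a per-user analysis table could be built with pandas. The file name and column names (user_id, created, category, has_photo) are assumptions for illustration only, not the actual FMS export schema.

```python
import pandas as pd

# Illustrative long-format report data; column names are assumptions,
# not the actual FMS schema.
reports = pd.read_csv("fms_reports.csv", parse_dates=["created"])

# Order each user's reports chronologically and number them 1..n.
reports = reports.sort_values(["user_id", "created"])
reports["report_n"] = reports.groupby("user_id").cumcount() + 1

# Per-user (wide-format) summary: number of reports and attributes
# of the first report submitted by each user.
first = reports[reports["report_n"] == 1].set_index("user_id")
users = pd.DataFrame({
    "n_reports": reports.groupby("user_id").size(),
    "first_date": first["created"],
    "first_category": first["category"],
    "first_has_photo": first["has_photo"],
})
print(users.describe(include="all"))
```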

Statistical Modeling

Here we estimate two models: (1) the effect of having the first report fixed on submitting a second report,2 and (2) the effect of having the first report fixed on all future reporting.

The first is a binary logistic regression model for the probability of submitting a second report in a specified future window (35 to 365 days after the first report):

[1] logit(π_i) = α + β_1 X_i1 + … + β_k X_ik + ε_i

where we model the logit of the probability π_i of submitting a second report for each user i = 1, …, n. The explanatory variable of interest, X_i1, is a dummy indicating the fix status of the first report, and the subscript k indexes the independent variables (regressors). The model is estimated by maximum likelihood (MLE). Note that the fix status of a problem reported to FMS cannot be taken as an indication of the problem actually being fixed, but rather as an indication of someone reporting the problem to be fixed. There is currently no way for us to verify the accuracy of either the original report or the fix status provided by the platform.

The second model is a negative binomial regression for the total number of reports submitted in the same future window as in the logistic regression model. We use a negative binomial model to account for the over-dispersion of the counts:

[2] log(y_i) = α + β_1 X_i1 + … + β_k X_ik + ε_i

where y_i is the number of reports submitted between the cut-off (35 days in the default specification) and 365 days after the original report. Note that we are not estimating the effect of government responsiveness on participation in the general population, but rather among the subset of people who have already participated by submitting a first report.
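The two specifications could be fitted along the following lines with statsmodels. This is a hedged sketch rather than the authors' code: it assumes a per-user data frame `users` containing a binary `second_report` indicator, a `future_reports` count for the 35–365 day window, and a `first_fixed` dummy for the fix status of the first report.

```python
import statsmodels.formula.api as smf

# Assumed per-user frame `users` with columns:
#   second_report  - 1 if a second report was sent 35-365 days after the first
#   future_reports - count of reports sent in that same window
#   first_fixed    - 1 if the first report was marked as fixed before the cut-off

# [1] Binary logistic regression, fitted by maximum likelihood.
logit_fit = smf.logit("second_report ~ first_fixed", data=users).fit()

# [2] Negative binomial regression for the over-dispersed count outcome.
nb_fit = smf.negativebinomial("future_reports ~ first_fixed", data=users).fit()

print(logit_fit.summary())
print(nb_fit.summary())
```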

Analysis

Endogeneity in Reporting Problems as Fixed

The first issue mentioned above is addressed by focusing only on the sample of users who did not report their own problem as fixed. To eliminate this selection effect, we restrict the data to exclude any first reports that were marked as fixed by the same user who reported the problem. To reiterate, FMS tracks which reports have been fixed and which are still outstanding. This information is supplied by users themselves and so cannot be entirely separated from participation more generally: reporting a problem as fixed is itself a form of participation. If we see that a user has not marked a problem as fixed, this can mean either that the problem has not actually been fixed or that the problem has been fixed and the user has not updated the problem's status. By excluding reports that the user marked as fixed and only including reports that were marked as fixed by another user, we avoid contaminating the measurement of a user's participation with their own participation in the form of indicating a fix.

In order to provide a sensible 'control group' for those who had their problems marked as fixed by others, we restrict the sample to include only problems reported in councils that had previously had another problem marked as fixed by another user. As a result, we are comparing problems that were marked as fixed by other users with problems that at least had a chance of being marked as fixed by other users. To check whether the endogeneity is actually present, we also run a model on the full sample to compare the estimates to those on the restricted sample.
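A rough pandas sketch of this restriction follows, using assumed column names (`fixed_by_self`, `fixed_by_other`, `council`) on a hypothetical `first_reports` frame of each user's first report; it simplifies the paper's time-aware condition ("previously") to a council-level check.

```python
# Illustrative only: `first_reports` is an assumed frame with one row per
# user's first report and boolean columns `fixed_by_self` / `fixed_by_other`.

# 1. Drop first reports that the submitting user marked as fixed themselves.
restricted = first_reports[~first_reports["fixed_by_self"]]

# 2. Keep only councils in which some problem has been marked as fixed by
#    another user. (The paper applies this check to fixes occurring *before*
#    each report; this council-level version is a simplification.)
councils_with_other_fixes = first_reports.loc[
    first_reports["fixed_by_other"], "council"
].unique()
restricted = restricted[restricted["council"].isin(councils_with_other_fixes)]
```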

Cut-offs in the Models

The second issue, whereby the effect may occur prior to the cause, is addressed by choosing a cut-off such that a fix counts if, and only if, it takes place before the cut-off, and subsequent reports are counted if, and only if, they take place after the cut-off. We then test the robustness of the model using different cut-offs. At the default 35-day cut-off, 28,723 reports were marked as fixed. This excludes 13,251 reports that were sent by users whose problem was marked as fixed after 35 days. Our estimates can therefore be considered conservative, since a second report submitted at, say, day 40 and preceded by a 'fix by others' on day 38 is coded as a 'no fix' in the data. This means that we are underestimating the effect, because the arbitrary cut-off excludes all subsequent fixes. Using the restricted dataset (only problems marked as fixed by another user), at the default 35-day cut-off there are 3,655 reports sent by users whose first report was marked as fixed by another user. This excludes 4,134 reports that were sent by users whose problem was marked as fixed by another user after 35 days.
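One way of encoding this cut-off rule, again as a hedged sketch with assumed timestamp columns (`first_date` and `fix_date` on the per-user frame, `user_id` and `created` on the long-format report table), is shown below.

```python
import pandas as pd

CUTOFF = pd.Timedelta(days=35)
WINDOW = pd.Timedelta(days=365)

# Assumed columns: users.first_date (timestamp of first report),
# users.fix_date (timestamp the first report was marked fixed, NaT if never),
# and a long-format `reports` frame with user_id and created.

# Explanatory variable: a fix counts only if it happens before the cut-off.
users["first_fixed"] = (users["fix_date"] <= users["first_date"] + CUTOFF).astype(int)

# Outcomes: later reports count only if they fall after the cut-off
# and within a year of the first report.
later = reports.join(users["first_date"], on="user_id")
in_window = (later["created"] > later["first_date"] + CUTOFF) & \
            (later["created"] <= later["first_date"] + WINDOW)
users["future_reports"] = (later[in_window].groupby("user_id").size()
                           .reindex(users.index, fill_value=0))
users["second_report"] = (users["future_reports"] > 0).astype(int)
```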

Revised Explanatory Variable

Control Variables

In the regression model, we include the following control variables: the date of the first report, a dummy for whether the first report included a photo, and local authority dummies.

Including the date of the first report submission is important since the FMS platform has changed over time in terms of engagement with councils, user uptake, and the design of the website. For instance, the average time it takes for a problem to be marked as fixed has been steadily declining (see Figure 8 in the appendix). Given that there are various time trends in the data (see the statistical appendix), it is important to account for such trends in order to avoid finding correlations merely because two variables are trending.

The second control variable is designed to capture how engaged the user was originally, i.e. whether or not they took the time to take and upload a photo. A potential factor that might bias estimates of the effect of government responsiveness on future participation is the quality of a user's report. More engaged or conscientious individuals may tend to submit higher quality first reports and therefore tend to get a more positive government response. Conscientious people are also more likely to participate in general, so a spurious correlation would be generated between future participation and a positive response to i's first report. By controlling for indicators of initial engagement, we can reduce this potential bias.

Finally, we include dummy variables for each council to reduce a source of variation that could otherwise confound the estimates. The reason is that while third-party fixes should not be correlated with the user's own tendency to participate, these fixes will be correlated with the pool of other available users who can report fixes and their tendencies to participate.
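Put together, the main specification might look like the following formula-based sketch; the variable and column names (including a `council` column on the per-user frame) are assumptions, and the council fixed effects enter through the categorical term.

```python
import statsmodels.formula.api as smf

# Date of the first report enters as a numeric trend (days since the
# earliest first report in the data); names are illustrative assumptions.
users["first_date_days"] = (users["first_date"] - users["first_date"].min()).dt.days

# Logit of a second report on fix status, photo dummy, date trend,
# and council dummies (C(council) expands to one dummy per council).
main_model = smf.logit(
    "second_report ~ first_fixed + first_has_photo + first_date_days + C(council)",
    data=users,
).fit()
print(main_model.params["first_fixed"])
```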

Results

Testing for Endogeneity

The long-term impact of initial success in participation

To assess whether there is a long-term impact of initial success on future participation, we model the total count of reports submitted by a user between 35 days and a year after they submit their first report. The long-term model in Table 1 shows that the success of the first report does have a significant effect on encouraging future participation. The negative binomial model results differ from the main model in terms of the relative importance of initial success and underlying motivation. In the short-term model, the estimate for the first report including a photo is substantially smaller than the estimate for the first report being marked as fixed. However, in the long-term model, the estimate for having a problem fixed within 35 days is smaller than the photo estimate. This suggests that long-term participation is driven more by factors related to an individual's underlying participation propensity than by their initial experience with participation.

Alternative explanations

There are several possible alternative explanations for the results outlined here. One explanation may be that those users who submit high-quality and constructive reports are more likely to get the council to fix the problem, or at least to report back that the problem has been fixed. However, we do control for one key indicator of report quality – attaching a photo. This does significantly predict future participation but does not greatly reduce the magnitude of the effect of a fix. Future work should focus on incorporating further indicators of report quality, such as the tone of a report. Another potential explanation is that some participants are willing to submit reports about more minor problems that are both more common and easier to fix. We tested whether this mechanism was present by including the detailed category of problem (from a list of 187) as dummy variables in our short-term fix model (rather than the 6 dummy variables we used in the main model). However, the inclusion of all these variables barely changed the responsiveness parameter (0.532 to 0.501). While there are other aspects of problems that are important, such a small impact of the category suggests that this mechanism is not driving our results.

Conclusions

The analysis presented above consistently shows that government responsiveness is positively associated with future participation via FixMyStreet in the United Kingdom. While we cannot estimate causal effects per se, we have attempted to eliminate the most likely sources of endogeneity, and the evidence so far is entirely consistent with the hypothesis that objective efficacy affects future participation in this type of activity. We show both a short-term and a long-term model, contrasting the effect of the first experience on sending any subsequent reports and its effect on the total number of reports a user sends. The short-term model suggests that users whose first reported problem was fixed are 54 percent more likely to send at least one more report. The long-term model indicates that there is a small effect of the first report's success on the total number of reports that a user eventually submits.

The literature on political participation has been dominated by the study of electoral participation. The rational model originally developed to help understand the decision to turn out to vote has been widely questioned in the empirical literature. However, different modes of participation are associated with different considerations, and the type of participation considered here might be more applicable to a Downsian decision-theoretic framework. Participation in FMS is associated with extremely low costs and observable, targeted benefits. Getting a pothole in front of one's house fixed should be less susceptible to the behaviors and social dynamics expected in voting (e.g. free-riding, herding). This being the case, we might say that participation in FMS is over-determined. The paradox might rather be why so few people participate given that the benefits are so clear and the costs are so low. Abstention could be explained by a lack of awareness about the opportunity to report problems, the lack of problems to report on, or, as we argue, by subjective beliefs about external efficacy based on bad experiences with objective efficacy.

The majority of users who report a second time do so independently of whether a previous report has been addressed or not. In total, 11.3 percent of FMS users whose problem was not reported as fixed by anyone within 35 days still submit another report within a year of their first submission, as opposed to 17.4 percent of FMS users whose first submission was successful (17.4/11.3 ≈ 1.54, the roughly 54 percent relative increase reported above).4 Furthermore, when it comes to sustained participation in the long term, our findings show that the effect of government responsiveness is smaller. This should not be surprising given the many documented instances of political participation where an individual's objective efficacy is virtually zero. Overall, these findings call for an understanding of participation as a multidimensional phenomenon in which government responsiveness, while an important predictor of future participation, is certainly not the only one.

Our findings also call for a rethinking of subjective efficacy. Much of the literature has tended to suggest that internal and external efficacy are generally long-term, stable attributes that are partially the result of socialization. However, if we assume that differences in objective efficacy affect participation through subjective efficacy, then at least some form of subjective efficacy can be changed by government responsiveness. This suggests that the stable nature of subjective efficacy measures within individuals may owe more to the fairly constant objective situation they face (mature democracies do not tend to change radically in the degree to which an individual can affect their outcomes) than to the attitude being unchangeable.

Without further data it is not possible to assess whether individuals whose problems are fixed interpret this as proving their internal efficacy (how competent they are to participate) or their external efficacy (how likely the system is to respond to their action). But it seems likely that at least one of these changes in response to the objective signal from the local government. Future research should examine whether this change is domain-specific ('I now believe that my actions will have an effect on getting the council to fix potholes') or general ('I now believe that the political system will be more responsive to my actions').

The model in this paper focuses only on the user's first experience and its impact on any subsequent participation. We chose this stage in order for the decisions across different users to be comparable and because the majority of users submit just one report, meaning this is where most of the dropout occurs. However, there are also 'super-users' who submit many reports, and it is important that future work looks at the factors that influence their initial and continued participation.

Responsiveness could also potentially affect total participation in two further ways. First, responsiveness is likely to affect existing users' recruitment of new participants through word of mouth. Second, people deciding whether to submit a first report may base their decision partially on their perceived chance of success, which will be affected by the experience of other nearby users (communicated either by being told directly or by looking at the success of reports on the FMS website). Future work should examine both of these mechanisms linking government responsiveness to future participation.

This paper demonstrates the value of using incidentally collected data to examine citizen behavior. By using these records, we directly observe behavior rather than relying on self-reported survey measures, which have consistently been shown to suffer from problems of poor recollection and social desirability. We also obtain accurate information about the timing of the observed actions, which would be impossible if we relied solely on respondents' own recollections. An additional advantage of incidentally collected data is that the only cost involved is the time taken to collate it from existing databases. Finally, the incidentally collected data gives us full sample coverage – it is a census of all FMS users – and is therefore not subject to non-response bias. Nonetheless, we believe that future work will benefit from combining data sources such as these with attitudinal data on respondents, which can help to further examine the mechanisms through which objective efficacy affects future participation.

The previous focus on subjective over objective indicators of efficacy is not just a matter of measurement. Fundamentally, the question is whether getting people to participate in politics requires making them feel empowered or actually giving them power. This paper suggests that giving power and genuine efficacy to individuals can encourage greater participation.

Appendices

Summary Statistics

Table 2. Summary Statistics For FixMyStreet Data. ASSET MISSING

Fixes Summary Statistics

Robustness tests

Differential impact of government responsiveness depending on prior participation

Acknowledgements

The Digital Engagement Working Paper Series disseminates the findings of work in progress to encourage the exchange of ideas about issues at the intersection of technology and citizen engagement. The papers carry the names of the authors and should be cited accordingly. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent. Note that author order is determined using bounded randomization. All authors contributed in equal shares. Acknowledgements: We would like to thank Tom Steinberg, Struan Donald, and Paul Lenz at MySociety for providing the data and valuable comments. This paper was presented at the American Political Science Association Annual Conference 2014 in Washington, D.C. We would also like to thank Amy Chamberlain and Josh Kalla for editorial assistance.

Notes


1:With one user submitting as many as 2,108 reports. MySociety confirmed that this user's reports are genuine and that 'someone's just very diligent' (personal correspondence, 2014).

2:As will be seen, we actually have two separate versions of this first model: the naive model and the main model (see below).

3:47% of fixes in the reduced data (3,655 cases) take place before 35 days.

4:Based on the predicted probability of submitting a second report holding all variables at their means and fix as false.