Crowdworkers’ Temporal Flexibility is Being Traded for the Convenience of Requesters Through 19 ‘Invisible Mechanisms’ Employed by Crowdworking Platforms

A Comparative Analysis Study of Nine Platforms

Laura Lascău, Sandy J. J. Gould, Duncan P. Brumby, Anna L. Cox

Please cite the work as: Laura Lascau, Sandy J. J. Gould, Duncan P. Brumby, and Anna L. Cox. 2022. Crowdworkers’ Temporal Flexibility is Being Traded for the Convenience of Requesters Through 19 ‘Invisible Mechanisms’ Employed by Crowdworking Platforms: A Comparative Analysis Study of Nine Platforms. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI ’22 Extended Abstracts), April 29-May 5, 2022, New Orleans, LA, USA. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3491101.3519629

Abstract

Crowdworking platforms are a prime example of a product that sells flexibility to its consumers. In this paper, we argue that crowdworking platforms sell temporal flexibility to requesters to the detriment of workers. We begin by identifying a list of 19 features employed by crowdworking platforms that facilitate the trade of temporal flexibility from crowdworkers to requesters. Using the list of features, we conduct a comparative analysis of nine crowdworking platforms available to U.S.-based workers, in which we describe key differences and similarities between the platforms. We find that crowdworking platforms strongly favour features that promote requesters’ temporal flexibility over workers’ by limiting the predictability of workers’ working hours and restricting paid time. Further, we identify which platforms employ the highest number of features that facilitate the trade of temporal flexibility from workers to requesters, consequently increasing workers’ temporal precarity. We conclude the paper by discussing the implications of the results.

Keywords: platform economy; crowdwork; microwork; temporal flexibility

Introduction

Research suggests that although the people who work on crowdworking platforms are advertised temporal flexibility, they do not benefit from the flexibility to choose ‘when’ and ‘for how long’ to work [40]. Instead, requesters (i.e., private companies or individual consumers) [33] benefit from temporal flexibility to the detriment of the crowdworkers [9]. In this paper, we argue that crowdworking platforms employ platform features that facilitate the trade of temporal flexibility from workers to requesters. Whilst these features can increase the temporal flexibility of requesters, they can also increase the temporal precarity of workers. Temporal precarity is defined as the unpredictability, uncertainty, and insecurity workers experience with respect to work scheduling and work pace [37]. Alongside economic precarity, temporal precarity contributes to the precarious working conditions crowdworkers face [7], which have been of growing interest to scholars examining on-demand platform work (e.g., [5, 21]).

We begin by identifying 19 platform features relating to five categories of temporal precarity on crowdworking platforms: (a) Unpaid Time, (b) Oversupply of Workers, (c) Worker Competitiveness, (d) Unpredictable Work Hours, and (e) Inflexibility of Time Use. We next use the 19 features identified to analyse nine existing crowdworking platforms available to U.S.-based workers: (1) Amazon Mechanical Turk (AMT), (2) Appen, (3) Clickworker, (4) Hive Micro, (5) Microworkers, (6) Neevo, (7) PicoWorkers, (8) Prolific, and (9) Universal Human Relevance System (UHRS). As part of the analysis, we assessed each platform against the list of features identified by reviewing the platforms’ descriptions, Frequently Asked Questions (FAQs), Documentation, and Terms of Services (TOS). We also assessed the platforms through our own experiences of interacting with the platforms, as workers and requesters. Finally, for each platform feature, the nine platforms were scored on a dichotomous scale (i.e., Identified; Not identified). The scoring allowed us to identify key differences and similarities between the platforms and assign the platforms temporal precarity scores. We used the scores to rank the platforms based on the number of features employed that facilitate the trade of temporal flexibility from workers to requesters, with the highest-ranking platforms creating the most temporally-precarious working conditions.

We find that crowdworking platforms strongly favour features that promote requesters’ temporal flexibility over workers’ by limiting the predictability of workers’ working hours and restricting paid time. Furthermore, using the temporal precarity scores, we identify which platforms available to U.S.-based workers employ the highest number of features that facilitate the trade of temporal flexibility from workers to requesters (i.e., UHRS, Hive Micro, and PicoWorkers), consequently increasing the temporal precarity of workers. Taken together, the results of the paper show that crowdworkers’ temporal flexibility is being traded for the convenience of requesters through 19 ‘invisible mechanisms’ employed by crowdworking platforms. The paper contributes a new understanding of the features employed by crowdworking platforms that enable invisible labour [52] and limit U.S.-based crowdworkers from accessing fair working conditions [32].

Background

Flexibility is increasingly sold as a product for the convenience of consumers. Growing consumer demand has resulted in a plethora of on-demand services becoming available through online apps [41]. Due to a phenomenon known as ‘liquid expectations’ [20], the culture of ‘on-demand expectations’ has bled from consumer-facing products into business-facing products [12]. As a result, both individual and business consumers have developed new fluid expectations from on-demand services, from ordering food and watching TV shows, to collecting datasets and labelling images.

Whilst consumers enjoy the convenience of on-demand services, these services come at a cost for the people at the forefront of providing them. In this sense, workers within the on-demand platform economy face poor working conditions [59], such as financial and temporal work precarity. The financial precarity of workers is reflected through low pay and lack of financial security [7], whereas temporal precarity is reflected through unpredictable work schedules and an intensified work pace [37]. Whilst workers’ financial precarity has been a prominent topic of conversation among platform stakeholders, workers, regulators, trade unions, and academics [23], the temporal precarity of workers has received less attention [17]. We argue that temporal precarity contributes to the poor working conditions of workers because of its relationship to consumer conveniences such as temporal flexibility. In this regard, on-demand platforms trade the temporal flexibility of workers as a resource for the convenience of consumers, to workers’ detriment [9].

Crowdworking platforms are a prime example of a product that sells flexibility to its consumers [28]; this flexibility is advertised as temporal flexibility. Because of the short temporal nature of the work, the consumers of these platforms—requesters and workers—are advertised temporal flexibility twofold. First, requesters can “access a global, on-demand, 24x7 workforce” [53] through the platforms, where they can hire workers for as little as a few minutes and “free up resources and time for the company” [53]. Second, workers can “benefit from having no set hours or schedules […] and freedom to choose when and how much to work” [6]. Therefore, on crowdworking platforms, time, capital, and labour entwine [13].

Whilst ‘time and capital’ (e.g., [50]) and ‘capital and labour’ (e.g., [31, 50, 56]) have been of interest to advocates of fair working conditions for the people working on crowdworking platforms, ‘time and labour’ has received less interest due to the invisible nature of the work [17]. The invisibility of crowdwork is partly physical [17], since the work mainly takes place in the homes of workers [42], and partly conceptual [17], since crowdworking is an invisible aspect of A.I. production [30]. Consequently, time and labour are aspects of crowdworking that have been swept under the rug by the narrative of temporal flexibility [5]. Further, research suggests that although crowdworkers are advertised temporal flexibility, they do not benefit from the flexibility to choose when and for how long to work [40]. Instead, requesters benefit from temporal flexibility to the detriment of workers [9]. But what are the ‘invisible mechanisms’ employed by crowdworking platforms that facilitate the trade of temporal flexibility from workers to requesters?

Method

To address the question, we used value sensitive design (VSD). VSD is a theoretically-grounded design framework applied to technology design, which accounts for human values [24, 25, 26]. Our work was inspired by Wisniewski et al. [58], who used the lens of value sensitive design to reverse engineer a subset of values embedded in the design of 75 mobile apps. VSD conceptualises values as an interactional product of technology and society, produced in the socio-technical gap [1]. According to VSD, when values are not accounted for in the design process, they are often unconsciously embodied by technology, thus supporting the values held by the designers of the technology instead of the values of those impacted by the technology [3].

Prior work identified a set of nine values that AMT workers share: access, autonomy, fairness, transparency, communication, security, accountability, making an impact, and dignity [15]. In this paper, we focused on a single value held by crowdworkers: autonomy, conceptualised as temporal flexibility.

VSD employs an iterative methodology that integrates three types of investigations: (i) conceptual investigations, (ii) empirical investigations, and (iii) technical investigations. For the scope of this paper, we present (a) a conceptual investigation and (b) a technical investigation, which we describe next.

We first conducted a conceptual investigation in which we identified a list of value tensions [25]—presented as platform features—that can trade workers’ temporal flexibility for the convenience of requesters’ temporal flexibility. Within VSD, value tensions are conflicts that can arise among key values, describing constraints on a design space [25]. Therefore, we present in Table 1 the 19 platform features relating to five categories of temporal precarity on crowdworking platforms: (1) Unpaid Time, (2) Oversupply of Workers, (3) Worker Competitiveness, (4) Unpredictable Work Hours, and (5) Inflexibility of Time Use. We derived the features and categories from: (a) prior work by Lascau et al. [39], which identified a set of time constraints imposed on workers by the design of crowdworking platforms, and (b) prior work examining the working conditions of crowdworkers (e.g., [23, 30]). We acknowledge that the list of platform features included in this paper is not exhaustive, and other features could be considered.

Next, following VSD, we present our technical investigation. Technical investigations examine how existing technical features that underlie the mechanisms of technology support or constrain human values. Within VSD, technical investigations centre the technology as a unit of analysis [25], rather than the people who use the technology [26], which is the focus of empirical investigations. In other words, the emphasis of technical investigations is on understanding the value implications of technology [25]. Therefore, as part of our technical investigation, we conducted a comparative analysis of nine crowdworking platforms available to U.S.-based workers to identify which platforms employ the highest number of features that facilitate the trade of temporal flexibility. We describe below how we conducted the comparative analysis.

Platform Selection Process

To identify candidate crowdworking platforms for our comparative analysis, we used the search engine DuckDuckGo. We chose to use DuckDuckGo because it does not tailor search results to users’ preferences or search history, unlike other search engines [18]. Thus, using DuckDuckGo for our feature analysis allowed us to avoid ‘filter bubbles’, a type of personal bias introduced by search engines such as Google [46]. By avoiding ‘filter bubbles’, we aimed to ensure the reproducibility of the results [11]. Additionally, to ensure reproducibility, we next describe the period when we conducted the searches, what search terms we used, and how we eventually selected the platforms included in the analysis.

We conducted the searches in October 2021. In our search, we used the following four keywords: “crowdwork”, “crowd work”, “microwork”, and “micro work”. We chose these keywords because they were consistent with the terminology commonly used by workers [30]. We first searched these keywords in isolation and then in combination with the following seven keywords: “AI jobs”, “data entry”, “make money”, “work from home”, “list of”, “hire people”, and “find workers”; this combination resulted in 28 searches.
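For illustration, the combined queries can be enumerated programmatically. The short Python sketch below pairs each of the four base keywords with each of the seven additional keywords to produce the 28 combined searches described above; the variable names and helper code are ours and are not part of the study materials.

```python
# Illustrative sketch (not study materials): enumerate the search queries
# described in the text. The four base terms were also searched in isolation.
from itertools import product

base_terms = ["crowdwork", "crowd work", "microwork", "micro work"]
modifiers = ["AI jobs", "data entry", "make money", "work from home",
             "list of", "hire people", "find workers"]

isolated_queries = list(base_terms)  # the four stand-alone searches

# Pair every base term with every modifier: 4 x 7 = 28 combined searches.
combined_queries = [f"{base} {modifier}" for base, modifier in product(base_terms, modifiers)]

assert len(combined_queries) == 28
print(combined_queries[:3])  # e.g., ['crowdwork AI jobs', 'crowdwork data entry', ...]
```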

As research shows that more than 70% of internet users tend to only explore the first page of search engines [51], we decided to examine solely the first page returned (i.e., results above the “More Results” button on DuckDuckGo), totalling 280 search results. Thus, the results are not an exhaustive list of crowdworking platforms.

Out of the 280 search results, we identified: (a) 44 websites that linked us directly to crowdworking platforms, (b) 188 websites that included links to crowdworking platforms or provided examples of such platforms, and (c) 54 websites that described crowdworking as a concept but did not include links or provide examples of such platforms. We discarded the 54 websites that did not include links or examples of crowdworking platforms; this left us with 232 websites for the analysis. After excluding duplicate platforms returned by multiple websites (n = 31), the search identified 38 crowdworking platforms. Furthermore, from the initially identified platforms, we excluded 27 platforms that:

  1. Were reported as ‘spam’ by workers on multiple websites (n = 8; e.g., onlinemicrojobs.com);

  2. Did not have any jobs available at the time of the study (n = 5; e.g., CloudCrowd);

  3. Were location-based crowdworking platforms that advertised jobs in specific geographical areas, rather than web-based crowdworking platforms [8] (n = 5; e.g., local microtasking platforms such as AppJobber);

  4. Were not available to U.S.-based workers (n = 3; e.g., Crowdtask) — we excluded these platforms because most crowdworkers are based in the U.S. [7, 47]. Thus, our results are limited to platforms with a presence in the U.S.;

  5. Were only available on mobile devices (n = 1; i.e., microwork) — we excluded these platforms because workers based in the Global North are less likely to work on their phones compared to workers in the Global South [42];

  6. Were not accepting new participant sign-ups at the time of the study (n = 1; i.e., Sequence);

  7. Did not accept our application to work on the platform (n = 4; e.g., Toloka).

The final number of crowdworking platforms that we included in the analysis was nine. We analysed the following nine platforms: (1) AMT, (2) Appen, (3) Clickworker, (4) Hive Micro, (5) Microworkers, (6) Neevo, (7) PicoWorkers, (8) Prolific¹, and (9) UHRS (accessed through Teemwork).

Platform Analysis Process

Next, we conducted our analysis by reviewing each crowdworking platform against the list of features we developed. For each feature, the nine platforms were scored on a dichotomous scale (i.e., Identified; Not identified). The scoring allowed us to identify key differences and similarities between the platforms, which we describe in the Results section.
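To make the scoring procedure concrete, the Python sketch below shows one way the dichotomous ratings can be turned into temporal precarity scores and a ranking. The feature codes follow Table 1, but the example ratings are illustrative placeholders rather than the study’s data.

```python
# Minimal sketch (with hypothetical ratings) of the dichotomous scoring:
# a platform's temporal precarity score is the number of the 19 features
# identified on that platform.
from typing import Dict, Set

ALL_FEATURES: Set[str] = {
    "1.1", "1.2", "1.3", "1.4", "1.5", "1.6",   # Unpaid Time
    "2.1", "2.2", "2.3", "2.4",                 # Oversupply of Workers
    "3.1", "3.2",                               # Worker Competitiveness
    "4.1", "4.2", "4.3", "4.4", "4.5",          # Unpredictable Work Hours
    "5.1", "5.2",                               # Inflexibility of Time Use
}

def temporal_precarity_score(identified: Set[str]) -> int:
    """Count how many of the 19 features were marked 'Identified'."""
    return len(identified & ALL_FEATURES)

# Hypothetical ratings: features marked 'Identified' per platform.
ratings: Dict[str, Set[str]] = {
    "Platform A": {"1.1", "1.2", "2.1", "3.1", "4.1", "4.2", "4.3"},
    "Platform B": {"1.1", "2.1", "4.1"},
}

# Rank platforms from most to least temporally precarious.
for platform in sorted(ratings, key=lambda p: temporal_precarity_score(ratings[p]), reverse=True):
    print(platform, f"{temporal_precarity_score(ratings[platform])}/19")
```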

We began the analysis of each platform feature by first reviewing the platforms’ descriptions. Reviewing the platforms’ descriptions provided us with insights into the ways the platforms were advertised to workers and requesters. Next, if the platforms’ descriptions did not provide us with a definitive score (i.e., Identified; Not identified), we reviewed the platforms’ list of FAQs, Documentation, and TOS. Reviewing the three resources provided us with insights into the ways the platforms were documented to work.

Finally, if the platforms’ FAQs, Documentation, or TOS did not provide us with a definitive score, we continued analysing the platforms through the researcher’s interaction with the features. We first interacted with the features from a worker’s perspective and afterwards from an employer’s perspective. Creating worker and employer accounts enabled us to interact with the platforms’ features. For example, as a worker, we were able to review jobs, whereas, as an employer, we were able to create jobs. A second rater reviewed 30% of the platforms, with a good degree of inter-rater reliability (κ = .79). We next present the results of the analysis by describing each feature category.
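As an illustration of the reliability check, the sketch below computes Cohen’s kappa for two raters using the dichotomous Identified/Not identified scale; the rating vectors are hypothetical examples, not the study’s ratings.

```python
# Illustrative computation of Cohen's kappa for two raters on a
# dichotomous scale (hypothetical ratings, not the study's data).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal proportions.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

rater_1 = ["Identified", "Identified", "Not identified", "Identified"]
rater_2 = ["Identified", "Not identified", "Not identified", "Identified"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.5 for this toy example
```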

Platform Features (platforms analysed, in column order: UHRS, Hive Micro, PicoWorkers, AMT, Clickworker, Neevo, Microworkers, Prolific, Appen) and Description of Platform Features
1. Unpaid Time
1.1 Platforms do not require requesters to pay workers a minimum hourly wage. Requesters can pay workers as little as ∼$2 per hour [31]. Thus, workers have to spend additional time working to reach their monetary goals [40].
1.2 Platforms allow requesters to ask workers to complete lengthy unpaid assessments or training. Requesters can ask workers to pass assessments (e.g., ‘qualifications’) or training before working [31]. Thus, workers can spend long periods of unpaid time without the guarantee of future jobs [52], whilst requesters get to keep the data [36].
1.3 Platforms ask workers to complete unpaid qualification tests to register on the platform. Requesters get a guarantee that the workers are of “good quality” [10] and will not lose time posting jobs on the platform. Thus, workers have to spend time working on unpaid qualification tests before they can begin working [31].
1.4 Platforms allow requesters to keep the data from rejected jobs. Requesters can potentially still make use of the data obtained for free from jobs they had rejected [35]. Thus, workers do not get paid for any time spent on rejected jobs and have to spend time reversing rejections [43].
1.5 Platforms allow requesters to reject workers who completed jobs ‘too quickly’. Requesters can refuse to pay workers they believe have not spent an adequate amount of time working [27]. Thus, workers risk not being paid for their time [44].
1.6 Platforms do not require requesters to provide timely feedback about rejected work. Requesters do not have to spend any time providing workers with feedback after rejecting their work [19]. Thus, workers have to spend time determining why their work was rejected and how to remedy the situation [44].
2. Oversupply of Workers
2.1 Platforms do not limit the maximum number of workers completing jobs at any given time. Requesters have the convenience of having an unlimited number of workers completing their jobs throughout the day [4]. Thus, workers have to spend a long time on the platform finding suitable work [7].
2.2 Platforms do not limit the number of workers who can register on the platform. Requesters have an unlimited supply of workers to complete jobs at any given time [60]. Thus, workers have to compete against a 24/7 global labour force [4].
2.3 Platforms limit the maximum number of jobs workers can complete in a day, week, or month. Requesters have a variety of workers completing jobs on-demand [54]. Thus, workers cannot complete additional jobs once they have reached the platforms’ limits, even if requesters are still making jobs available.
2.4 Platforms limit the maximum amount of money workers can earn in a day, week, or month. Requesters have a variety of workers completing their jobs [60]. Thus, workers cannot complete additional jobs once they have reached the platforms’ limits and have to spend time finding other revenue sources [49].
3. Worker Competitiveness
3.1 Platforms do not allocate jobs to the workers. Requesters get a large pool of workers that compete to complete jobs on a first-come, first-served basis [17]. Thus, workers have to be ‘on call’ for work [40].
3.2 Platforms do not reserve jobs for the workers. Requesters get the jobs they posted completed quickly because workers have to compete with one another to complete jobs [39]. Thus, workers do not get paid for the time spent working if other workers managed to complete the job first [14].
4. Unpredictable Work Hours
4.1 Platforms do not have set hours when requesters post jobs. Requesters can make jobs available on the platforms at any time [30]. Thus, workers have to wait an unpredictable amount of time for requesters to post new jobs [40].
4.2 Platforms do not have set hours when workers can complete jobs. Requesters have the flexibility of workers completing jobs at any time of day [60]. Thus, workers have difficulties predicting their working hours [40].
4.3 Platforms do not limit the number of hours workers can work. Requesters benefit from workers spending an unlimited amount of time on the platform [45]. Thus, workers spend long hours working or searching for work [57].
4.4 Platforms do not have a degree of control over the completion times of jobs. Requesters can post jobs that have very short completion times in order to have the work returned faster [61]. Thus, workers have to complete jobs quickly so as not to risk having their work rejected if they exceed the jobs’ estimated completion times [39].
4.5 Platforms do not provide clear payment timelines to workers. Requesters can pay workers whenever it is suitable for the requesters [34]. Thus, workers risk not knowing when they will get paid (and how much) [16] and could have difficulties planning their work and non-work time.
5. Inflexibility of Time Use
5.1 Platforms do not allow workers to complete multiple jobs at once. Requesters get a guarantee that the workers are not multitasking and are spending time on the job [29]. Thus, workers can be restricted in how they use their time by being required to monotask rather than using the time as they prefer [39].
5.2 Platforms require workers to wait a certain amount of time between submitting jobs. Requesters get a guarantee that the workers are of “good quality” [10] and will not lose time making jobs available on the platform. Thus, workers might be limited in how they spend their time and the pace at which they work.
Temporal Precarity Scores: UHRS 17/19; Hive Micro 15/19; PicoWorkers 15/19; AMT 13/19; Clickworker 13/19; Neevo 13/19; Microworkers 12/19; Prolific 9/19; Appen 8/19
✓ = Identified features; ‘Not Identified’ features are represented by the spaces intentionally left blank

Table 1: Comparative analysis of the features of crowdworking platforms that trade workers’ temporal flexibility for the convenience of requesters’ flexibility. The platform features were identified as part of our conceptual investigation. The table shows that the crowdworking platform UHRS achieved the highest temporal precarity score (17/19), followed by Hive Micro (15/19) and PicoWorkers (15/19); this suggests that the three platforms trade workers’ temporal flexibility the most. In comparison, Appen achieved the lowest temporal precarity score (8/19), followed by Prolific (9/19); this suggests that compared to UHRS, Hive Micro or PicoWorkers, these two platforms trade workers’ temporal flexibility the least.

Results

1. Unpaid Time. 1.1 Platforms do not require requesters to pay workers a minimum hourly wage. Only two of the nine platforms analysed required requesters to pay workers at a minimum hourly wage rate. In contrast, most platforms (n = 7) did not have this requirement. For example, whilst Appen used machine learning and statistical models to ensure workers were paid local minimum wages, UHRS did not prevent workers from earning below minimum wage.

1.2 Platforms allow requesters to ask workers to complete lengthy unpaid assessments or training. Only two of the nine platforms analysed did not enable requesters to ask workers to complete lengthy unpaid assessments or training. In contrast, most platforms (n = 7) did not prevent requesters from posting such assessments or training. For example, whilst PicoWorkers did not enable requesters to create unpaid training jobs for workers, UHRS did not prevent workers from having to complete unpaid mandatory training before working on requesters’ jobs.

1.3 Platforms ask workers to complete unpaid qualification tests to register on the platform. This platform feature was the least widely supported feature across the nine platforms. Most platforms (n = 8) did not support this feature. In contrast, only one platform required workers to complete unpaid qualification tests to register on the platform. In the case of Prolific, workers had to take a mock test as part of the registration process.

1.4 Platforms allow requesters to keep the data from rejected jobs. Only two of the nine platforms analysed did not allow requesters to keep the data from rejected jobs. In contrast, most platforms (n = 7) did not prevent requesters from keeping data from rejected work. For example, whilst Appen paid workers for any rejected jobs before removing them from these jobs, AMT did not prevent requesters from keeping the workers’ data without paying them.

1.5 Platforms allow requesters to reject workers who completed jobs ‘too quickly’. Only one of the nine platforms analysed did not allow requesters to reject workers who completed jobs faster than expected. In the case of Prolific, the platform only allowed rejections based on speed for jobs that were statistical outliers in the data set. In contrast, most platforms (n = 8) did not prevent requesters from rejecting workers who completed jobs too fast.

1.6 Platforms do not require requesters to provide timely feedback about rejected work. Only two of the nine platforms analysed required requesters to provide workers timely feedback about rejected work. In contrast, most platforms (n = 7) did not require requesters to provide such feedback. For example, whilst AMT required requesters to include a feedback message with any rejected work, Hive Micro did not support a feature of this type.

2. Oversupply of Workers. 2.1 Platforms do not limit the maximum number of workers completing jobs at any given time. None of the platforms analysed limited the number of workers who could complete jobs at any given time.

2.2 Platforms do not limit the number of workers who can register on the platform. Only one of the nine platforms analysed limited the number of workers who can register on the platform. In the case of Prolific, the platform had a waiting list for people who wanted to register to work on the platform. In contrast, most platforms (n = 8) did not have any mechanisms in place to limit the number of workers who could register on the platform.

2.3 Platforms limit the maximum number of jobs workers can complete in a day, week, or month. Five of the nine platforms did not limit the maximum number of jobs workers could complete. In contrast, four platforms limited the number of jobs. For example, whilst Clickworker had no such limits, AMT limited workers to 3,800 jobs per day.

2.4 Platforms limit the maximum amount of money workers can earn in a day, week, or month. Five of the nine platforms analysed did not limit the amount of money workers could earn in a day, week, or month. In contrast, four platforms limited the amount of money workers could earn. For example, whilst Microworkers did not set such limits, Prolific employed a mechanism that limited workers from earning above a certain threshold.

3. Worker Competitiveness. 3.1 Platforms do not allocate jobs to the workers. Only two of the nine platforms analysed allocated jobs to the workers. In contrast, seven platforms required workers to claim tasks before other workers on a first-come, first-served basis. For example, whilst Prolific allocated jobs to specific workers, Microworkers did not allocate jobs, so workers had to claim jobs before other workers did.

3.2 Platforms do not reserve jobs for the workers. Five of the nine platforms analysed reserved jobs for workers. In contrast, four platforms did not reserve jobs. For example, whilst AMT reserved jobs for workers for the allotted time set by requesters, Neevo did not reserve jobs for specific workers; any worker had access to the same tasks.

4. Unpredictable Work Hours. 4.1 Platforms do not have set hours when requesters post jobs. None of the platforms analysed had set hours when requesters could post jobs.

4.2 Platforms do not have set hours when workers can complete jobs. None of the platforms had set hours.

4.3 Platforms do not limit the number of hours workers can work. None of the platforms limited work hours.

4.4 Platforms do not have a degree of control over the completion times of jobs. Only two of the platforms analysed had a degree of control over the completion times of jobs posted by requesters. In contrast, most platforms (n = 7) could not adjust completion times. For example, whilst Appen had a mechanism that could extend the completion times of jobs progressing more slowly than predicted, AMT could not override the times allotted by requesters.

4.5 Platforms do not provide clear payment timelines to workers. Five of the nine platforms provided clear timelines to workers regarding when they would get paid for their work. In contrast, four platforms did not provide clear timelines. For example, whilst Clickworker aimed to pay workers within seven days, Neevo did not provide a clear payment timeline, stating only that it paid workers within two weeks or longer.

5. Inflexibility of Time Use. 5.1 Platforms do not allow workers to complete multiple jobs at once. Five of the nine platforms analysed allowed workers to work on multiple jobs at once. In contrast, four platforms limited the number of jobs workers could complete simultaneously. For example, whilst PicoWorkers allowed workers to complete an unlimited number of jobs at once, Prolific limited workers to completing only one job at a time.

5.2 Platforms require workers to wait a certain amount of time between submitting jobs. Most platforms (n = 7) did not require workers to wait between submitting jobs. In contrast, two platforms required workers to wait a certain amount of time between submitting jobs. For example, whilst UHRS allowed workers to start working on a new job immediately after finishing one, PicoWorkers required workers to wait a certain amount of time before continuing to work on a new job.

Discussion and Conclusion

In this paper, we show that crowdworking platforms trade workers’ temporal flexibility for the convenience of requesters’ flexibility through 19 platform features relating to five categories of temporal precarity on crowdworking platforms. First, we find that crowdworking platforms strongly favour features that promote requesters’ temporal flexibility over workers’ by limiting the predictability of workers’ working hours and restricting paid time. In this sense, the results of the study suggest that ‘Unpredictable Work Hours’ was the category of features most traded by the nine platforms. Across this category, we found only seven platform-feature instances, across the five features and nine platforms, in which workers’ temporal flexibility was not traded; in other words, workers’ flexibility was traded across 85% of the features within the ‘Unpredictable Work Hours’ category. Moreover, the second most traded category was ‘Unpaid Time’, where we found 17 instances, across the six features and nine platforms, in which workers’ flexibility was not traded; in other words, workers’ flexibility was traded across 69% of the features within the ‘Unpaid Time’ category. Therefore, the results suggest that requesters benefit from temporal flexibility by limiting the predictability of working time and pay.

These results are important because of the temporal and economic precarity crowdworkers face [7, 37]. Prior research has criticised the way that platforms exacerbate work precarity [21] and has called for an investigation of the precarity of platform work [5]. Our work extends the current understanding of the reasons why U.S.-based platform workers experience work precarity, by showing that features of crowdworking platforms that are meant to support workers’ predictability of working time and pay are in fact the most traded features of temporal flexibility from workers to requesters across the nine platforms analysed.

Second, using the temporal precarity scores, we identified which platforms available to U.S.-based workers employed the highest number of features that facilitated the trade of flexibility from workers to requesters, consequently increasing workers’ temporal precarity. In this sense, the highest temporal precarity score achieved by one of the nine crowdworking platforms was 17 points out of 19 (i.e., UHRS), whereas the lowest was eight points out of 19 (i.e., Appen). Nevertheless, the results suggest that even the lowest-scoring platform still traded workers’ flexibility through eight different features.

These results are important because they provide an initial understanding of the mechanisms of crowdworking platforms that facilitate the trade of temporal flexibility from workers to requesters. In line with van Dijck’s approach to platforms as socio-technical structures [55], we show that analysing platform features can expose the techno-cultural and socioeconomic logics underlying crowdworking platforms [55]. In this sense, crowdworking platforms become mediators that shape socioeconomic trades between workers and requesters, rather than just intermediaries that facilitate these trades. Thus, this paper makes some of the invisible mechanisms used for trading temporal flexibility more visible and enables further scrutinising of platforms’ technology, users, governance, and business models [55]. Moreover, using VSD, we conceptualised temporal flexibility as an interactional product of technology (i.e., crowdworking platforms) and society (i.e., workers and requesters), produced in the socio-technical gap [1]. The results of our study show the different ways in which the values held by the customers of the technology (i.e., requesters), instead of the values of those impacted by the technology (i.e., workers), can be embodied consciously or unconsciously by technology [3].

The results of the study have implications for: (a) people working on crowdworking platforms, (b) requesters, (c) the design of crowdworking platforms, and (d) the wider platform economy. First, the people working on crowdworking platforms can benefit from having increased awareness about the exploitative mechanisms of crowdworking platforms [4], although they might have little power to change them. Therefore, future work should consider the following question: What future crowdworking platforms do workers who value temporal flexibility envision, if any?

Second, requesters can benefit from reflecting on the power asymmetries perpetuated by crowdworking platforms [2, 38], particularly when choosing which platforms to use in their work. Therefore, future work should consider the following question: How much more money would requesters be willing to spend so that crowdworkers could gain more temporal flexibility?

Third, the design of crowdworking platforms can benefit from adopting a worker-centred design approach [22] by exploring the service and business design possibilities of future platforms in view of the 19 platform features identified in this study. Therefore, future work should consider the following question: If crowdworking platforms did not employ any of the 19 platform features benefiting requesters’ temporal flexibility, would they still be defined as crowdworking platforms?

Finally, the wider platform economy, in particular organisations advocating for platform workers (e.g., the Fairwork project [48]), can benefit from auditing other on-demand platforms against our list of features to further investigate the temporal precarity and working conditions of people working within the platform economy [32, 37]. Therefore, future work should consider the following question: As flexibility is increasingly sold as a product for the convenience of consumers, how much temporal flexibility are consumers willing to give up so that on-demand platform workers are not negatively impacted?

Acknowledgements

We thank the anonymous reviewers of this work for their constructive feedback. This work was supported by the UK Engineering and Physical Sciences Research Council grant EP/L504889/1.

References

  1. Mark S Ackerman. 2000. The intellectual challenge of CSCW: the gap between social requirements and technical feasibility. Human–Computer Interaction 15, 2-3 (2000), 179–203.
  2. Ayad Al-Ani and Stefan Stumpp. 2016. Rebalancing interests and power structures on crowdworking platforms. Internet Policy Review 5, 2 (2016).
  3. Tamara Alsheikh, Jennifer A Rode, and Siân E Lindley. 2011. (Whose) value-sensitive design: a study of long-distance relationships in an Arabic cultural context. In Proceedings of the ACM 2011 conference on Computer supported cooperative work. 75–84.
  4. Moritz Altenried. 2020. The platform as factory: Crowdwork and the hidden labour behind artificial intelligence. Capital & Class 44, 2 (2020), 145–158. https://doi.org/10.1177/0309816819899410 arXiv:https://doi.org/10.1177/0309816819899410
  5. Mohammad Amir Anwar and Mark Graham. 2021. Between a rock and a hard place: Freedom, flexibility, precarity and vulnerability in the gig economy in Africa. Competition & Change 25, 2 (2021), 237–258. https://doi.org/10.1177/1024529420914473 arXiv:https://doi.org/10.1177/1024529420914473
  6. Appen. [n.d.]. How It Works, Flexible Work Opportunities. Last accessed: 10-31-2021, https://web.archive.org/web/20201031022310/https://connect.appen.com/qrp/public/how_it_works.
  7. Janine Berg. 2015. Income security in the on-demand economy: Findings and policy lessons from a survey of crowdworkers. Comp. Lab. L. & Pol’y J. 37 (2015), 543.
  8. Janine Berg, Marianne Furrer, Ellie Harmon, Uma Rani, and M Six Silberman. 2018. Digital labour platforms and the future of work. Towards Decent Work in the Online World. Rapport de l’OIT (2018).
  9. Birgitta Bergvall-Kåreborn and Debra Howcroft. 2014. Amazon Mechanical Turk and the commodification of labour. New Technology, Work and Employment 29, 3 (2014), 213–223. https://doi.org/10.1111/ntwe.12038 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1111/ntwe.12038
  10. Prolific Blog. [n.d.]. Data quality at Prolific - Part 1: What is a “good participant”? Last accessed: 10-08-2021, https://doi.org/10.1016/j.geoforum.2016.10.005.
  11. Simon Briscoe. 2015. Web searching for systematic reviews: a case study of reporting standards in the UK Health Technology Assessment programme. BMC research notes 8, 1 (2015), 1–7.
  12. Daniel G. Cockayne. 2016. Sharing and neoliberal discourse: The economic function of sharing in the digital on-demand economy. Geoforum 77 (2016), 73–82. https://doi.org/10.1016/j.geoforum.2016.10.005
  13. Kate Crawford. 2021. The Atlas of AI. Yale University Press.
  14. Valerio De Stefano. 2015. The rise of the just-in-time workforce: On-demand work, crowdwork, and labor protection in the gig-economy. Comp. Lab. L. & Pol’y J. 37 (2015), 471.
  15. Xuefei Nancy Deng, K. D. Joshi, and Robert D. Galliers. 2016. The Duality of Empowerment and Marginalization in Microtask Crowdsourcing: Giving Voice to the Less Powerful through Value Sensitive Design. MIS Q. 40, 2 (June 2016), 279–302.
  16. Jan Drahokoupil and Kurt Vandaele. 2021. A Modern Guide to Labour and the Platform Economy. Edward Elgar Publishing.
  17. Veena B Dubal. 2020. The Time Politics of Home-Based Digital Piecework. In Center for Ethics Journal: Perspectives on Ethics, Symposium Issue “The Future of Work in the Age of Automation and AI. 50.
  18. DuckDuckGo. 2021. We don’t collect or share personal information. https://duckduckgo.com/privacy, Last accessed on 2021-11-08.
  19. Christian Fieseler, Eliane Bucher, and Christian Pieter Hoffmann. 2019. Unfairness by design? The perceived fairness of digital labor on crowdworking platforms. Journal of Business Ethics 156, 4 (2019), 987–1005.
  20. Fjord. [n.d.]. Liquid Expectations: Consumers are setting a different bar for experiences. Last accessed: 09-04-2021, https://web.archive.org/web/20210904192752/https://www.fjordnet.com/conversations/liquid-expectations/.
  21. Peter Fleming. 2017. The human capital hoax: Work, debt and insecurity in the era of Uberization. Organization Studies 38, 5 (2017), 691–709.
  22. Sarah E. Fox, Vera Khovanskaya, Clara Crivellaro, Niloufar Salehi, Lynn Dombrowski, Chinmay Kulkarni, Lilly Irani, and Jodi Forlizzi. 2020. Worker-Centered Design: Expanding HCI Methods for Supporting Labor. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI EA ’20). Association for Computing Machinery, New York, NY, USA, 1–8. https://doi.org/10.1145/3334480.3375157
  23. Sandra Fredman, Darcy du Toit, Mark Graham, Kelle Howson, Richard Heeks, Jean-Paul van Belle, Paul Mungai, and Abigail Osiki. 2020. Thinking Out of the Box: Fair Work for Platform Workers. King’s Law Journal 31, 2 (2020), 236–249. https://doi.org/10.1080/09615768.2020.1794196 arXiv:https://doi.org/10.1080/09615768.2020.1794196
  24. Batya Friedman. 1996. Value-sensitive design. interactions 3, 6 (1996), 16–23.
  25. Batya Friedman and David G Hendry. 2019. Value sensitive design: Shaping technology with moral imagination. Mit Press.
  26. Batya Friedman, Peter Kahn, and Alan Borning. 2002. Value sensitive design: Theory and methods. University of Washington technical report 2-12 (2002).
  27. Ujwal Gadiraju, Ricardo Kawase, Stefan Dietze, and Gianluca Demartini. 2015. Understanding Malicious Behavior in Crowdsourcing Platforms: The Case of Online Surveys. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul, Republic of Korea) (CHI ’15). Association for Computing Machinery, New York, NY, USA, 1631–1640. https://doi.org/10.1145/2702123.2702443
  28. Sandy J. J. Gould. 2022. Consumption experiences in the research process. In press of Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, 1–17. https://doi.org/10.1145/3491102.3502001
  29. Sandy J. J. Gould, Anna L. Cox, and Duncan P. Brumby. 2016. Diminished Control in Crowdsourcing: An Investigation of Crowdworker Multitasking Behavior. ACM Trans. Comput.-Hum. Interact. 23, 3, Article 19 (June 2016), 29 pages. https://doi.org/10.1145/2928269
  30. Mary L Gray and Siddharth Suri. 2019. Ghost work: how to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.
  31. Kotaro Hara, Abigail Adams, Kristy Milland, Saiph Savage, Chris Callison-Burch, and Jeffrey P. Bigham. 2018. A Data-Driven Analysis of Workers’ Earnings on Amazon Mechanical Turk. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3173574.3174023
  32. Ellie Harmon and M Six Silberman. 2019. Rating working conditions on digital labor platforms. Computer Supported Cooperative Work (CSCW) 28, 5 (2019), 911–960.
  33. Debra Howcroft and Birgitta Bergvall-Kåreborn. 2019. A Typology of Crowdwork Platforms. Work, Employment and Society 33, 1 (2019), 21–38. https://doi.org/10.1177/0950017018760136 arXiv:https://doi.org/10.1177/0950017018760136
  34. Lilly Irani and M. Six Silberman. 2014. From Critical Design to Critical Infrastructure: Lessons from Turkopticon. Interactions 21, 4 (jul 2014), 32–35. https://doi.org/10.1145/2627392
  35. Lilly C Irani and M Six Silberman. 2013. Turkopticon: Interrupting worker invisibility in amazon mechanical turk. In Proceedings of the SIGCHI conference on human factors in computing systems. 611–620.
  36. David Johnstone, Mary Tate, and Erwin Fielt. 2018. Taking rejection personally: An ethical analysis of work rejection on Amazon Mechanical Turk. In Proceedings of the 26th European Conference on Information Systems (ECIS2018). Association for Information Systems, 1–12.
  37. Arne L Kalleberg. 2011. Good jobs, bad jobs: The rise of polarized and precarious employment systems in the United States, 1970s-2000s. Russell Sage Foundation.
  38. Sara Constance Kingsley, Mary L. Gray, and Siddharth Suri. 2015. Accounting for Market Frictions and Power Asymmetries in Online Labor Markets. Policy & Internet 7, 4 (2015), 383–400. https://doi.org/10.1002/poi3.111 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1002/poi3.111
  39. Laura Lascau, Sandy J. J. Gould, Anna L. Cox, Elizaveta Karmannaya, and Duncan P. Brumby. 2019. Monotasking or Multitasking: Designing for Crowdworkers’ Preferences. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland Uk) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3290605.3300649
  40. Vili Lehdonvirta. 2018. Flexibility in the gig economy: managing time on three online piecework platforms. New Technology, Work and Employment 33, 1 (2018), 13–29.
  41. James Manyika, Susan Lund, Jacques Bughin, Kelsey Robinson, Jan Mischke, and Deepa Mahajan. 2016. Independent-Work-Choice-necessity-and-the-gig-economy. Technical Report. McKinsey Global Institute.
  42. David Martin, Sheelagh Carpendale, Neha Gupta, Tobias Hoßfeld, Babak Naderi, Judith Redi, Ernestasia Siahaan, and Ina Wechsung. 2017. Understanding the Crowd: Ethical and Practical Matters in the Academic Use of Crowdsourcing. In Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, Daniel Archambault, Helen Purchase, and Tobias Hoßfeld (Eds.). Springer International Publishing, Cham, 27–69.
  43. David Martin, Benjamin V Hanrahan, Jacki O’Neill, and Neha Gupta. 2014. Being a turker. In Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing. 224–235.
  44. Brian McInnis, Dan Cosley, Chaebong Nam, and Gilly Leshed. 2016. Taking a HIT: Designing around Rejection, Mistrust, Risk, and Workers’ Experiences in Amazon Mechanical Turk. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (San Jose, California, USA) (CHI ’16). Association for Computing Machinery, New York, NY, USA, 2271–2282. https://doi.org/10.1145/2858036.2858539
  45. Jon Messenger. 2018. Working time and the future of work. ILO future of work research paper series (2018).
  46. Eli Pariser. 2011. The filter bubble: What the Internet is hiding from you. Penguin UK.
  47. Lisa Posch, Arnim Bleier, Fabian Flöck, and Markus Strohmaier. 2018. Characterizing the global crowd workforce: A cross-country comparison of crowdworker demographics. arXiv preprint arXiv:1812.05948 (2018).
  48. The Fairwork Project. [n.d.]. Fairwork. Last accessed: 12-12-2021, https://fair.work/.
  49. Uma Rani and Marianne Furrer. 2021. Digital labour platforms and new forms of flexible work in developing countries: Algorithmic management of work and workers. Competition & Change 25, 2 (2021), 212–236. https://doi.org/10.1177/1024529420905187 arXiv:https://doi.org/10.1177/1024529420905187
  50. Saiph Savage, Chun Wei Chiang, Susumu Saito, Carlos Toxtli, and Jeffrey Bigham. 2020. Becoming the Super Turker:Increasing Wages via a Strategy from High Earning Workers. In Proceedings of The Web Conference 2020. Association for Computing Machinery, New York, NY, USA, 1241–1252. https://doi.org/10.1145/3366423.3380200
  51. Amanda Spink and Bernard J Jansen. 2004. A study of web search trends. Webology 1, 2 (2004), 4.
  52. Carlos Toxtli, Siddharth Suri, and Saiph Savage. 2021. Quantifying the Invisible Labor in Crowd Work. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 319 (oct 2021), 26 pages. https://doi.org/10.1145/3476060
  53. Amazon Mechanical Turk. [n.d.]. Amazon Mechanical Turk Home Page. Last accessed: 12-12-2021, https://web.archive.org/web/20211210075243/https://www.mturk.com/.
  54. Donna Vakharia and Matthew Lease. 2013. Beyond AMT: An analysis of crowd work platforms. arXiv preprint arXiv:1310.1672 (2013).
  55. José Van Dijck. 2013. The culture of connectivity: A critical history of social media. Oxford University Press.
  56. Mark E Whiting, Grant Hugh, and Michael S Bernstein. 2019. Fair Work: Crowd Work Minimum Wage with One Line of Code. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, Vol. 7. 197–206.
  57. Alex C. Williams, Gloria Mark, Kristy Milland, Edward Lank, and Edith Law. 2019. The Perpetual Work Life of Crowdworkers: How Tooling Practices Increase Fragmentation in Crowdwork. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 24 (Nov. 2019), 28 pages. https://doi.org/10.1145/3359126
  58. Pamela Wisniewski, Arup Kumar Ghosh, Heng Xu, Mary Beth Rosson, and John M. Carroll. 2017. Parental Control vs. Teen Self-Regulation: Is There a Middle Ground for Mobile Online Safety?. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (Portland, Oregon, USA) (CSCW ’17). Association for Computing Machinery, New York, NY, USA, 51–69. https://doi.org/10.1145/2998181.2998352
  59. Alex J Wood, Mark Graham, Vili Lehdonvirta, and Isis Hjorth. 2019. Good Gig, Bad Gig: Autonomy and Algorithmic Control in the Global Gig Economy. Work, Employment and Society 33, 1 (2019), 56–75. https://doi.org/10.1177/0950017018785616 PMID: 30886460.
  60. Jamie Woodcock and Mark Graham. 2019. The gig economy. A critical introduction. Cambridge: Polity (2019).
  61. Ming Yin, Siddharth Suri, and Mary L. Gray. 2018. Running Out of Time: The Impact and Value of Flexibility in On-Demand Crowdwork. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI ’18). Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3173574.3174004

  1. We decided to also include Prolific because it is one of the five platforms examined in ILO’s survey of working conditions on crowdworking platforms [8].