Monotasking or Multitasking: Designing for Crowdworkers’ Preferences

Laura Lascau, University College London
Sandy J. J. Gould, University of Birmingham
Anna L. Cox, University College London
Elizaveta Karmannaya, University College London
Duncan P. Brumby, University College London

This paper will appear in the proceedings of CHI’19. The ACM reference is:
Laura Lascau, Sandy J. J. Gould, Anna L. Cox, Elizaveta Karmannaya, and Duncan P. Brumby. 2019. Monotasking or Multitasking: Designing for Crowdworkers’ Preferences. In CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland UK. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3290605.3300649

Abstract

Crowdworkers receive no formal training for managing their tasks, time or working environment. To develop tools that support such workers, an understanding of their preferences and the constraints they are under is essential. We asked 317 experienced Amazon Mechanical Turk workers about factors that influence their task and time management. We found that a large number of the crowdworkers score highly on a measure of polychronicity; this means that they prefer to frequently switch tasks and happily accommodate regular work and non-work interruptions. While a preference for polychronicity might equip people well to deal with the structural demands of crowdworking platforms, we also know that multitasking negatively affects workers’ productivity. This puts crowdworkers’ working preferences into conflict with the desire of requesters to maximize workers’ productivity. Combining the findings of prior research with the new knowledge obtained from our participants, we enumerate practical design options that could enable workers, requesters and platform developers to make adjustments that would improve crowdworkers’ experiences.

Introduction

Many classes of tasks are offered on crowdworking platforms. Finding, accepting, completing and submitting these tasks requires that workers navigate the peculiarities of idiosyncratic platforms. Crowdworkers, the people who work on crowdsourcing platforms, receive no formal training on how to be successful in this challenging context. Instead, they learn strategies through informal support networks, such as forums [68], and develop working practices based on their own experience of what appears to be successful. These practices are often influenced by their preferences for certain styles of working.

Here, we focus specifically on one aspect of crowdworkers’ working practice – multitasking. We define multitasking in line with the definition provided by the Multitasking Preference Inventory (MPI): “an individual’s preference for shifting attention among ongoing tasks, rather than focusing on one task until completion and then switching to another task” [55]. By this definition, switches between tasks are signs of polychronicity; switches between activities within a single task are not. We document a number of examples from Amazon Mechanical Turk (AMT) [3] work: a) working on a Human Intelligence Task (HIT), and then working on a second HIT; b) working on a HIT, and then monitoring the HIT queue; c) working on a HIT, and then switching to a personal task. We do not consider behaviors such as googling for information as part of a task to be part of our definition of multitasking, since this type of activity contributes directly to the goal of the task and does not require a shift in attention.

We aim to understand why workers on AMT multitask and the factors that influence their multitasking behaviors. First, we investigate workers’ preferences for multitasking to understand if AMT workers are more likely to prefer monotasking or multitasking. Then, we consider this multitasking behavior in the context of workers’ workspaces and their work-home boundary management. Finally, we explain how the structures of crowdworking platforms and the demands of requesters influence multitasking behavior and constrain its adaptation.

Our results show that, like other populations, workers on AMT tend to prefer multitasking over monotasking. Our data give us insights into the dispositional and contextual influences that underlie this behavior. We interpret this new data in the context of established theories of work, productivity and wellbeing. We develop a set of practicable changes that workers, requesters and platforms could make without compromising workers’ autonomy. These changes might help to ameliorate some of the worst effects of multitasking [1, 2, 13, 46, 49], improving productivity and making crowdwork viable for the people who try to make a living from it.

Background

Although remote working has a long history, crowdworking and the technologies that enable it are newer. Crowdworking platforms offer people the possibility of making a living in the absence of a traditional workplace and at times that suit them.

In theory, the lack of formal working hours on platforms like AMT (and broader ’gig’ work in general) allows people to decide when they work, how much work they do and to enjoy complete freedom to choose where they work [11]. The majority of AMT workers complete tasks from their homes [50], but they also complete tasks on the go, such as on their mobile phone, in internet cafés or between classes [22, 50].

Wood et al. [65] show that, in some ways, “platform-based rating and ranking systems facilitate high levels of autonomy, task variety and complexity, as well as potential spatial and temporal flexibility”. This flexibility could mean that crowdworking platforms better accommodate variations in working practices than traditional workplaces. However, the nature of crowdworking platforms and the goals of requesters of work mean that workers often take on as many tasks as possible the moment that they become available [50]. Workers therefore need to adopt strategies for managing these competing demands, such as frequently switching between working on a task and monitoring the platform for new tasks.

Working strategies are influenced by people’s preferences for working in certain ways, and also by the environments in which they work. People are constrained by their broader working context; when and where they would like to work is not necessarily when and where they can work. The choices that requesters make in designing their tasks, and that developers make in building crowdsourcing platforms, constrain the options available to workers.

Recall that crowdworkers receive no formal training. Tools could be developed to help crowdworkers to become more effective in adopting optimal working practices. Any tool that is developed to support crowdworkers in this way will need to recognize the strength of a given worker’s preferences, but also the extent to which their ability to align their working practices with those preferences is constrained by other factors.

In this paper we are focused on a particular aspect of working practices: multitasking. Multitasking is an integral part of most working contexts [4, 13, 16, 17, 46], and while it has been found to increase feelings of entertainment and relaxation [6], it is more widely known for its negative effects on task performance [13, 42, 46]. Tools that can influence multitasking behavior by adjusting tasks or people’s attitudes might be able to increase productivity in crowdworking settings. To build such a tool, we need to understand how preferences and constraints in crowdsourcing contexts influence multitasking behavior.

AMT workers report that they multitask while taking part in online experiments [52]. Gould, Cox and Brumby’s [21] investigation of multitasking activity among AMT workers found that workers switched tasks every five minutes and that they were willing to switch in the middle of tasks. Chandler, Mueller and Paolacci [15] also report that many of the AMT workers in their study watched TV or listened to music while working on AMT. However, it is difficult to say for certain from this prior research what the relative contributions of preferences and constraints to multitasking are, and this limits the chance that support tools will be successful.

Working Preferences of Crowdworkers

Multitasking preferences

There are individual differences in how likely people are to engage in multitasking behavior (e.g., [14, 51, 53, 56]). These individual differences do not arise from task constraints, but rather from an individual’s natural propensity to multitask. This propensity seems to be influenced by factors such as the somewhat controvertible ’Big Five’ personality traits [45].

Polychronicity, a preference for multitasking

Polychronicity describes the preference for doing several things at the same time. König and Waller [38] note that the term ’polychronicity’ should be used to describe people’s preferences for doing multiple things at once, whereas people’s actual behaviors, rather than attitudes, should be referred to as ’multitasking’. People with higher polychronic tendencies (i.e., polychronics) have a preference for multitasking, whereas people with lower polychronic tendencies (i.e., monochronics) prefer monotasking (i.e., executing tasks in sequence). Polychronicity is believed to be linked to success in domains that require high levels of multitasking, such as air traffic control [8, 44].

Bluedorn et al. [8] developed a short ten-item scale, the Inventory of Polychronic Values (IPV), that has frequently been used to probe attitudes toward multitasking behavior. Poposki and Oswald [55] identify a potential issue with the IPV, which is that it conflates behaviors with attitudes; constraints from the broader context of work mean that behaviors and attitudes do not always align. They point out that there is a difference between individual attitudes and the broader cultural-level attitudes that exist in many workplaces. To overcome some of the limitations of the IPV, Poposki and Oswald developed the fourteen-item Multitasking Preference Inventory (MPI) [55].

We use the MPI scale [55] in our study to measure workers’ individual polychronicity preferences and gauge their attitudes towards multitasking. We deploy the scale with minor modifications so that it refers to concepts that are salient in online crowdsourcing (e.g., assignment and task – which tend to be of short duration) rather than concepts that are not (e.g., project – which is of longer duration).

Work-life balance

People have strategies for switching quickly between the tasks they need to perform to get things done. They also organize their work at a higher level: when to work, how long to work for, what kinds of work to do. These decisions take place alongside decisions about non-working time with the aim of achieving work-life balance.

The term ’work-life balance’ is defined as a situation in which “an individual is simultaneously able to balance the temporal, emotional and behavioral demands of both paid work and family responsibility” [24] in order to achieve an ideal equilibrium of wellbeing in all aspects of one’s life [39]. This is an area in which people exhibit individual differences.

However, just as people have preferences for multitasking, they also have preferences for how they balance work and non-work as they try to find work-life balance. Crowdworking is of interest here because it does not fit standard conceptualizations of work-life balance. One aspect of crowdworking that we are particularly interested in, therefore, is the extent to which people maintain boundaries between work and leisure in crowdworking settings, where work and leisure are co-located and have more opportunity to become intermixed.

Kossek et al.’s [37] Work-Life Indicator scale seeks to understand work-life balance by identifying individuals’ preferences for integrating or segmenting work and non-work aspects of their lives. We use this scale to explore the preferences of crowdworkers. If crowdworking offers true flexibility to workers, then we would expect their work-life balancing behaviors to align with their preferences.

Perhaps unsurprisingly, the concepts of work-life balance and polychronicity are related: Benabou [7] found that people with high polychronic tendencies are more likely to overlap work and personal time. We therefore aim to understand how workers’ multitasking preferences align with workers’ work-life balance preferences.

Understanding work-life balance is important because we know from boundary theory research that one of the factors that influences people’s satisfaction with their work-life balance is the extent to which they perceive that they are able to control the mixing of work and non-work activities [36]. Without control over boundaries, unwanted mixing of work and non-work activities can make people less productive [28, 29].

Constraints on Working Preferences

So far, we have discussed people’s preferences for organizing their work, both in terms of short-term multitasking and longer-term work-life balance. In theory, crowdworking provides workers with the flexibility, autonomy and control that enables them to align their behavior to their preferences. People who enjoy polychronic working could switch frequently. People who like to keep their work and non-work time separated could keep them entirely separate.

In reality, crowdworkers are subject to a number of different constraints which mediate the relationship between people’s preferences and their behaviors. In crowdwork, we have identified three major sources of constraints: requesters, platforms, and personal context.

Requester imposed constraints

Simple, quick, independent microtasks like image labeling characterize much of the work on paid crowdsourcing platforms. Each of these small tasks has been designed by a different requester. Requesters, like workers, receive no formal training for their role. Each task, therefore, has its own set of requirements, instructions, quality criteria and interfaces to which the worker must adapt.

Investigations of patterns of activity have typically had the objective of forming a comprehensive understanding of how a particular task is performed (e.g., [35, 59, 64]). In some scenarios, multitasking behavior is an integral part of the process of task execution. Some tasks require volunteers to go and find out information before returning with an answer. For instance, crowd-based question and answer systems (e.g., [25]) might require respondents to go search the web before they can return a response. Collaborative authoring of articles (e.g., [34]) requires the recruitment of multiple information fragments to form a coherent narrative. This requirement to switch tasks to collect new information is common to many types of work (e.g., [30, 61]).

Some tasks naturally require more concentrated effort than others. Requesters can also feel that their task requires focused attention to be completed to the required standard. They may engineer their tasks in such a way as to ’catch out’ workers who are not providing focused attention [21]. Whether a task intrinsically requires focused attention or requesters merely feel that it does, workers will adapt their behavior to avoid having their work rejected [21].

Requesters can also influence the behavior of workers through the time that workers are given to complete a task and the amount of pay that requesters offer. Each requester can set their own task completion times, which workers must adhere to. The amount of time varies from task to task and requester to requester. If the worker does not manage to complete the task in the allocated amount of time, the task expires, and the worker might receive no pay. Therefore, a worker must adapt their multitasking strategy to each task. This is especially the case for tasks where there is a lot of work to be done in a small window.

Rates of pay also vary greatly as requesters have the freedom to decide on their own rates of pay. A requester could ask a worker to complete a task worth $5 in 10 minutes or in 1 hour. This can create variations in hourly wage for the workers [23]. Workers might be inclined to take on several tasks of shorter duration with higher pay. To create support that helps workers and to make helpful recommendations for requesters, we need to know more about how the potentially naïve choices requesters make about their tasks constrain workers’ ability to work in the way that they prefer.

Platform imposed constraints

Task management is also a significant part of crowdworkers’ work [40]. Microtasks are usually independent, but this does not mean that they are completed in isolation: they need to be managed. For instance, workers on AMT spend a significant amount of time on task management [40]. This includes finding assignments, accepting them and making sure they are completed in the allotted time.

In a sense, platforms like AMT imply frequent task switching [21]. Tasks appear at irregular intervals throughout the day. Workers have only a small amount of time to accept tasks before other workers take their place. Although some workers make use of tools to aid this task management process [31], these monitoring activities still consume workers’ time and attention. This unpaid task management effort is common across platforms [65]. To create support that helps workers and to make helpful recommendations for requesters, we need to know more about how the hard-to-change structural elements of platforms constrain workers’ ability to work in the way that they prefer.

Constraints imposed by personal context

The final set of constraints on workers comes from their circumstances. These include people’s access to productivity-increasing technology, whether they have the space and furniture to work effectively, and whether they are trying to do another job while they work. Whether someone has caring responsibilities or a disability might also constrain the extent to which they can enact their multitasking and work-life balance preferences, or force them to adopt a different strategy.

In crowdsourcing contexts, constraints in personal circumstances are by far the most poorly understood of our three types. They are also among the most important. For instance, having a support tool recommend that a worker interrupt themselves less when they are working while simultaneously caring for a toddler is not helpful. To create support that helps workers and to make helpful recommendations for requesters, we need to know more about how workers’ personal circumstances constrain their ability to work in the way that they prefer.

Summary

For AMT workers, working on the platform is managed through instances of multitasking and “finding time and space within their lives” [22]. Constraints imposed on workers influence how, when and where people find this time and space.

In a context where workers receive no formal training, it makes sense to think about how workers can be better supported. However, any set of advice to workers and any support tools developed for their use will not be helpful unless they account appropriately for these constraints. Likewise, requesters receive no formal training on how to be a successful requester. To give recommendations to requesters that are likely to be practically useful to them, we have to first understand the preferences workers have and the constraints that they are working under.

At the moment, we know quite a lot about crowdworkers’ working behavior, but we know very little about crowdworkers’ working preferences. Once these preferences are understood, we can begin to interpret workers’ behavior in the context of tensions between those preferences and the constraints they are placed under.

We have identified three types of constraints: those imposed by requesters, those imposed by platforms and those imposed by personal circumstances. In particular, there is a complete lack of understanding in the literature of the ways in which personal circumstances affect the behavior of crowdworkers. None of the three classes of constraint has been scrutinized in the context of workers’ preferences. This might help to explain the lack of specific, evidence-backed recommendations that can be made to workers, requesters and platform designers.

To address these shortcomings and to make progress toward useful support tools, we asked 317 AMT workers about their working practices, such as task management strategies, and about their working preferences, such as preferences for multitasking, working environments and work-life boundary management. In the following section, we describe the method used in the study.

Method

An online survey was administered on Amazon Mechanical Turk as a HIT. The questionnaire was administered in 36 batches at various times of the day over a two-month period.

Participants

A sample of 317 workers was recruited from AMT. Participation was restricted to experienced U.S.-based AMT workers. Participants were required to have completed a minimum of 10,000 HITs and to have a task acceptance rate of at least 98%. Participation was voluntary and informed consent was obtained from all participants.

The HIT was advertised with a rate of pay of approximately $6 USD for 30 minutes of work. Participants were also told that they would be paid an extra $2 USD bonus for responses that showed a degree of thought and consideration. In practice, all work was accepted without precondition, and all participants received a $2 USD bonus regardless of how they responded. This brought the hourly rate to approximately $16 USD, including working on the questionnaire and reading instructions and debriefings.

Measures

The first part of the questionnaire contained questions about the participants (i.e., demographics) and their normal routine and habits while working on AMT. This included workers’ ages, nationalities and educational attainment. This kind of data has been collected before (e.g., [18, 26, 57]). We additionally collected data on where people worked and the kinds of equipment they were using to work. Examples of questions include: ’Most of the time where do you complete Amazon Mechanical Turk tasks?’ and ’Which software items from the list do you use to aid your Amazon Mechanical Turk work?’. The final part of the study was a mix of standard questionnaires and questions specific to the study, focused on personality, multitasking preferences and working context, which we describe below. Before we ran the study, three highly experienced Amazon Mechanical Turk workers critically reviewed the questionnaire to ensure the questions were accessible and that the proposed remuneration was fair. They were paid an agreed rate of $20 USD. The reviews led to the adjustment of ambiguous language, but no major changes were made.

Polychronicity

In order to learn about workers’ multitasking habits and preferences we measured polychronicity with the 14-item Multitasking Preference Inventory (MPI) [55]. Sample items from the scale include ’I prefer to work on several projects in a day, rather than completing one project and then switching to another’ and ’I would rather switch back and forth between several projects than concentrate my efforts on just one’. Each item is rated from 1 (Strongly Disagree) to 5 (Strongly Agree), and the overall score is computed by summing the item ratings. Possible scores range from 14 to 70, with higher scores indicating stronger polychronic preferences. We deployed the questionnaire with one modification, which was replacing the word ’project’ with ’task’ throughout the scale to make it more relevant to the workers, e.g. ’I would rather switch back and forth between several tasks than concentrate my efforts on just one.’
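
To make this scoring concrete, the short Python sketch below computes an individual MPI total from 14 Likert responses, reverse-scoring items 4, 5, 6, 8, 10, 11, 13 and 14 (the items we reverse when computing scores in the Results section). The function name and data layout are illustrative rather than part of our study materials.

```python
# Illustrative sketch (not our analysis code): scoring the 14-item MPI.
# Each response is on a 1-5 Likert scale; reverse-scored items are flipped
# (6 - response) before summing.

REVERSED_ITEMS = {4, 5, 6, 8, 10, 11, 13, 14}  # 1-based item numbers

def mpi_score(responses):
    """Return the summed MPI score (14-70) for a list of 14 responses (1-5)."""
    if len(responses) != 14 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected 14 Likert responses between 1 and 5")
    return sum((6 - r) if item in REVERSED_ITEMS else r
               for item, r in enumerate(responses, start=1))

# A respondent who answers 3 ('Neither agree nor disagree') to every item
# lands exactly on the scale midpoint of 42.
assert mpi_score([3] * 14) == 42
```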

Workspaces

We also explored the relationship between the space in which people work and the equipment they use. We asked the participants to think about the equipment that they use for work (e.g., ’Which software items from this list do you use to aid your Amazon Mechanical Turk work?’) and to think more broadly about the space they work in. In particular, we were keen to understand whether crowdworkers are happy with their working environment and equipment or have to put up with them out of necessity. Do they have to work in a busy space with roommates chatting, or do they have somewhere quiet to work? Which workspace do they prefer?

Task management questions

Participants were asked to indicate on a five-point Likert scale whether they agree or disagree with 18 statements about their task management routines and habits. Statements included: ’I feel that I have the best strategy for managing multiple tasks’, ’I switch in the middle of tasks to check my progress on other tasks’ and ’If a task is difficult I tend to switch to working on an easier task instead’.

As part of the survey, we also asked crowdworkers two open-ended questions regarding their task management strategies: ’What advice would you give about effective task management to someone just starting on AMT?’ and ’Do you have any particular strategies that help you focus on your work?’.

Work-Life Indicator

We administered the Work-Life Indicator Scale (WLI) [37] to measure boundary management strategies. This 17-item, 5-point Likert scale is split into five subsections, one for each of five factors. The first two sections, 1) ’Nonwork interrupting work behaviors’ (NWIW) and 2) ’Work interrupting nonwork behaviors’ (WINW), focus on the extent to which people find their personal lives interrupt their working lives and vice versa. Example statements include ’I respond to personal communications (e.g., emails, text, and phone calls) during work’ and ’I allow work to interrupt me when I spend time with my family or friends.’

The other three sections cover broader aspects of boundary management and family and work identities: 3) ’Boundary control’ (BC) measures perceived control over boundary crossing, e.g. ’I control whether I am able to keep my work and personal life separate’. Next, 4) ’Work Identity’ (WI) and 5) ’Family Identity’ (FI) measure the degree of identification with work and family roles. Sample statements include ’People see me as highly focused on my work’ and ’I invest a large part of myself in my family life’.

We administered the questionnaire to all participants and asked them to consider ’work’ to be any crowdsourcing work. Each WLI item is rated from 1 (Strongly Disagree) to 5 (Strongly Agree), and the score for each factor is calculated individually by taking the mean of its items.
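
As a small illustration of this scoring scheme (a sketch only; the factor-to-item mapping shown is a hypothetical placeholder, not the published WLI key), each factor score is simply the mean of the ratings for its items:

```python
# Illustrative sketch: each WLI factor score is the mean of its item ratings.
# The item groupings below are placeholders; the real WLI assigns its 17 items
# to the five factors (NWIW, WINW, BC, WI, FI) following Kossek et al.

def factor_scores(responses, factor_items):
    """responses: {item_id: rating 1-5}; factor_items: {factor: [item_ids]}."""
    return {factor: sum(responses[i] for i in items) / len(items)
            for factor, items in factor_items.items()}

# Hypothetical example with two factors of two items each.
example_items = {"NWIW": [1, 2], "BC": [3, 4]}
example_responses = {1: 4, 2: 5, 3: 2, 4: 3}
print(factor_scores(example_responses, example_items))  # {'NWIW': 4.5, 'BC': 2.5}
```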

Switching behaviors

While our participants worked through the scales that comprise the questionnaire, we also collected behavioral activity measures (telemetric data, i.e. key touches, scrolling behavior and tab switches). In this way we were able to examine whether our participants’ self-reported behaviors aligned with observed behaviors. For example, were participants who self-reported high preferences for multitasking more likely to switch between tabs when completing the study than participants who self-reported as having low polychronic tendencies?

Additional measures

The following additional measures were also included but data from these scales is not reported in this paper: the positive and negative affect schedule (PANAS) [63]; mindful attention awareness scale (MAAS) [12].

Procedure

Before agreeing to complete the HIT, workers were presented with an information page that contained all study information. They were told that they had one hour to fill out the questionnaire. After the participants accepted the task, they were taken to the main study and presented with the questionnaire. The participants were debriefed at the end of the task and email addresses were collected for future studies. Participants were paid within 24 hours of completing the task.

Results

In this section we present our empirical results. We first confirm that AMT workers recognize a need for additional support, and we characterize their multitasking and work-life balance preferences. Then we consider how constraints that prevent participants from working in a way that suits them can be overcome.

Preparing Data for Analysis

Of the 317 participant responses collected from Amazon Mechanical Turk, 303 were used in our analysis. We discarded responses from 14 workers who exhibited very high degrees of inattentiveness. To determine the degree of respondents’ attentiveness we used reversed questions. Reversed questions are delivered in pairs; for instance, the questionnaire contained two questions about the effect of people’s devices on their work:

“The device I used limits how effectively I can work”

“The device I use does not limit how effectively I can work”

We had six pairs of these reversed questions in the study. Each of the six pairs of values was given a ’badness’ rating. In the example given, answering ’Strongly agree’ or ’Strongly disagree’ to both of the questions gives a maximum ’badness’ rating of 15. Less diametrically opposed responses were assigned lower badness scores. The maximum possible badness score was 90 (six pairs, a maximum of 15 points each).

In our responses, the median badness score was 2, with a maximum of 42, including outliers. The mean badness score was 5.32 (SD = 6.96). Scores greater than 20 were excluded from the analysis. We used 20 as a cut-off point for identifying inattentive respondents as it is approximately two standard deviations above the mean. All participants were paid regardless of whether we used their data.
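
The Python sketch below illustrates how such an attentiveness filter could be implemented. Only the maximum badness of 15 for a fully contradictory pair and the cut-off of 20 are taken from the procedure described above; the intermediate values in the lookup table are placeholder assumptions, chosen so that less contradictory pairs score lower.

```python
# Illustrative sketch of the attentiveness filter, not our exact scoring rule.
# A reversed pair of 1-5 responses is consistent when the two ratings sum to 6,
# so |a + b - 6| measures how contradictory the pair is (0 = consistent,
# 4 = e.g. 'Strongly agree' to both a statement and its reversal).

BADNESS = {0: 0, 1: 2, 2: 5, 3: 9, 4: 15}  # only the maximum of 15 is specified above

def badness_score(pairs):
    """pairs: list of (a, b) responses to the six reversed-question pairs."""
    return sum(BADNESS[abs(a + b - 6)] for a, b in pairs)

def is_attentive(pairs, cutoff=20):
    """Cut-off of 20 is roughly two standard deviations above the mean of 5.32."""
    return badness_score(pairs) <= cutoff

# A respondent who strongly agrees with both halves of every pair maxes out at 90.
assert badness_score([(5, 5)] * 6) == 90
```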

Respondents

The 303 participants included in the analysis ranged in age from 20 to 69, with a mean of 37 (SD = 11) and a median of 34. 161 respondents (52%) were male, 143 (47%) were female. One worker preferred not to disclose this information.

All workers were residents of the USA. 294 workers identified as being from the USA, 2 from Canada, 1 from Germany, 1 from Guyana, 1 from Hong Kong, 1 from Saint Kitts and Nevis, 1 from Pakistan, 1 from Panama, and 1 from Uruguay.

In terms of education level, 145 (47%) of the participants in the study reported holding bachelor’s degrees. A further 138 (45%) reported holding high school diplomas, 19 reported holding master’s degrees and two participants had doctorates.

The average amount of time that the workers had spent on the platform was two and a half years (M = 2.42 years, SD = 1.55). The workers estimated spending, on average, around five and a quarter hours a day working on AMT (M = 5.24 hours, SD = 3.22).

Of the 303 participants, 184 (60%) had other jobs apart from AMT, while 119 (40%) only worked on AMT. Those who completed AMT tasks from work said that they managed to squeeze in a few HITs during quiet times or during lunch breaks.

Do crowdworkers feel they struggle with staying focused?

Developing support tools and guidance for crowdworkers only makes sense if they feel they might benefit from new tools that would help them with their task management. If workers have found their own local optima, tools will not help them. Helpfully, most participants agree that most of the time they could be managing AMT tasks more efficiently (Q3.15, Mode = Agree, Median = Agree).

Many workers (N = 217) complained about getting distracted while completing tasks on AMT: “I think discipline is the key to task management. When using the internet it is very easy to become distracted with other sites. You have to force yourself to stay focused. I think developing habits helps a lot. You can also set a schedule - something like taking a 5 minute break each hour. Continually remind yourself that spending time on other things means that you’re earning less money.” – P144

Other digital distractions workers pointed to included watching videos online, listening to podcasts and watching TV. Distraction is clearly something that concerns workers.

The majority of the participants who reported having good levels of focus worked in private spaces that allowed them to concentrate on the task at hand. These spaces would lack distractions or only allow for a few.

When asked about where they complete AMT tasks most of the time, 296 participants (97%) said that they complete AMT from their homes. In the home, work is spread across the home office, bedroom and living room. 19 of the workers (6.22%) split their time equally working from both home and work. 10 participants (3.27%) said that they completed AMT tasks solely from work. Our sample had more participants working from private spaces (60%) than from shared spaces (40%). 55% of the workers recruited (N = 168) worked on AMT from their home offices.

When asked what changes they would make to their workspaces, out of the workers who do not already work on AMT from a private space in their home (N = 128), just over half (70; 55%) said that they would like to have a separate space in their home for working on AMT due to noise issues and interruptions: “I would first and foremost move it into a private room. I miss out on numerous hits because I am unable to record audio or video due to the fact others are making noise around me or would be in the webcam video. It also provides many distractions since it’s in the front room. A personal private room would be the best upgrade.” – P92, MPI Score = 41

What are the multitasking preferences of the AMT workers?

Polychronicity scores from our data range from 14 to 69 (M = 38.01, SD = 12.54). 60% of our sample tend toward polychronicity and the remainder prefer monochronic working. The mean score in our sample is comparable to samples from other studies: a sample of 89 college students (M = 36, SD = 10) [62], a sample of AMT workers (M = 42.42, SD = 10.92) [66], and a sample of undergraduates (M = 38.36, SD = 11.20) [58]. Polychronicity was calculated by summing all of the answers to the MPI questionnaire, with items 4, 5, 6, 8, 10, 11, 13 and 14 reversed.

Workers were asked to comment on their strategies for staying focused while they were working. Some workers reported highly monochronic strategies: “I use noise canceling headphones. I tune out environmental noises or activities. I remain focused on what I’m doing. I do not engage in more than one activity at a time unless the HITs require me to do so. I do not eat or listen to music while I work on HITs. I tend to work when the environment is calm rather than when I know those around me will be active.” – P225, MPI Score = 18

Others had far more polychronic approaches to their work: “I like to switch things up - variety is the spice of life - sometimes I listen to music, other times I don’t - I take breaks when I start to feel my focus fading, stay up-to-date with different tasks.” – P16, MPI Score = 52

Figure 1: Distribution of switching durations

As well as asking for their subjective experience, we also measured our participants’ task switching behavior. Participants logged 2,283 tab switches in total. The average switch count per person was 7.53 (SD = 7.54, median = 5), with a range of 0 to 45 switches. In comparison, in Mark, Voida and Cardello’s study [47], information workers switched screen windows 37.1 times an hour on average (SD = 31.4). 12 participants in our study did not switch at all from our task, or their switches were undetectable to us.

Participants switched for an average of 28.40 seconds. The shortest duration was 0.58 seconds and the longest duration was 11.5 minutes. Figure 1 shows the distribution of switching durations. The red line on the graph marks the mean duration. The distribution was positively skewed with a long tail: 77% of the switches were under 28 seconds, but the longest switch was greater than 10 minutes, which highlights occurrences where participants were likely distracted. To scale the distribution in one histogram, switches longer than 50 seconds are grouped as one bar.

Workers scoring higher on the polychronicity scale were more likely to spend a longer time away from the HIT per switch. A Spearman correlation coefficient was computed to assess the relationship between the length of task switches undertaken by our participants and their score on the polychronicity scale. There was a small positive correlation between the two variables (rs = .136, p = .018). This suggests that, despite the time pressures that workers are under, polychronic workers felt able, to some degree, to break fully from tasks and re-engage with them later.
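
This kind of check can be reproduced on comparable data with a standard rank-correlation routine. The sketch below uses SciPy on simulated placeholder arrays (one MPI total and one mean away-duration per worker, a simplification of the switch-level data); it is not our analysis pipeline.

```python
# Illustrative sketch of the rank-correlation check (simulated data, not ours).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
mpi_scores = rng.integers(14, 71, size=303)      # one MPI total per worker
away_seconds = rng.exponential(28.4, size=303)   # mean time away per switch (s)

rho, p_value = spearmanr(mpi_scores, away_seconds)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```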

What are workers’ work-life balance preferences?

As part of the survey, we administered Kossek et al.’s [37] Work-Life Indicator Scale (WLI) to address work-nonwork boundaries. We found that our sample resembles the ’Family guardians’ cluster identified by Kossek et al. [37]. ’Family guardians’ is one of six clusters (’Work warriors’, ’Overwhelmed reactors’, ’Family guardians’, ’Fusion lovers’, ’Dividers’, ’Nonwork-eclectics’) that describe boundary management patterns.

’Family guardians’ are characterized as family-centric individuals who identify strongly with their families and workplaces alike – they have fairly equal scores for work identity (M = 3.56) and family identity (M = 3.80), which are both above the mean. They also have high control over their work-life boundaries (M = 4.16).

We see that workers who had higher instances of personal matters interrupting their work (NWIW on the WLI scale) were more likely to work on AMT from shared spaces. Results of an independent-samples t-test indicated that, on average, those working from shared spaces scored higher on NWIW (M = 3.68, SE = 0.09) than those working from private spaces (M = 3.39, SE = 0.06). This difference was significant (t(301) = 2.63, p = .009) and represented a small-to-medium effect (d = .32). This is expected, as shared spaces are known to invite more distractions and interruptions.
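
For readers who want to run the same comparison on their own data, the sketch below pairs an independent-samples t-test with a pooled-standard-deviation Cohen’s d. The group sizes follow the 40/60 shared/private split reported above, but the data and the standard deviations are simulated placeholders.

```python
# Illustrative sketch of the shared- vs private-space comparison (toy data).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
nwiw_shared = rng.normal(3.68, 0.95, size=121)   # NWIW ratings, shared spaces
nwiw_private = rng.normal(3.39, 0.85, size=182)  # NWIW ratings, private spaces

t_stat, p_value = ttest_ind(nwiw_shared, nwiw_private)

def cohens_d(x, y):
    """Cohen's d using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.std(ddof=1) ** 2 +
                         (ny - 1) * y.std(ddof=1) ** 2) / (nx + ny - 2))
    return (x.mean() - y.mean()) / pooled_sd

df = len(nwiw_shared) + len(nwiw_private) - 2
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.3f}, "
      f"d = {cohens_d(nwiw_shared, nwiw_private):.2f}")
```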

In our sample, there was no evidence of a relationship between polychronicity and perceived boundary control (rs = -.037, p = .519). This suggests that these concepts can be treated independently when designing support solutions.

Discussion

In this section we present our recommendations for supporting crowdworkers. The recommendations synthesize evidence from three sources: prior literature, our quantitative data (in the form of Likert-scale selections) and our qualitative data from our free-text responses.

Our recommendations are based on the idea that workers will be most content when they are able to work in ways that align with their preferences. Constraints imposed by requesters, platforms and workers’ own lives largely determine how closely behavior can follow these preferences. Therefore, our recommendations focus on how constraints on workers can be most easily relaxed. Those who create the constraints, whether workers, requesters or platform designers, have it within their power to relax them.

Design Recommendations

Our aim is to support different working preferences on platforms like AMT. The first step we took was to understand why crowdworkers choose to multitask and the factors that constrain their multitasking behaviors. While we acknowledge that it is difficult to fundamentally redesign large platforms like AMT, we believe that it is possible for task designers (the requesters) and for platform designers to better accommodate workers’ preferences (e.g., whether to multitask or monotask). As progress toward this goal, we present design recommendations that address the factors constraining workers’ multitasking behaviors.

Recommendations for Workers

Workers have to reconcile the constraints placed on them with their preferences. Any support system should help workers to identify the personal constraints they have, avoiding giving advice about immutable constraints. At the same time, it should encourage them to think about constraints that they might be able to loosen.

Participants volunteered clear examples of personal constraints influencing their task management: “If kids are at school/napping I will turn on some music to power through a batch that doesn’t take much concentration. I try to work during times I know the kids are being independent or napping.” – P28, MPI Score = 43

W1. Dedicating a monitor to AMT work

When asked what advice they would give about effective task management to someone just starting on AMT, workers who have scored highly on the polychronicity scale (scores of 58, 56, 54, 53, etc.) said that they would recommend getting a second monitor: “I would tell them to absolutely get a second monitor, and set it up to have one of the forums on it, to look for HITs, also other tabs on it that you don’t need all the time, then the monitor in front of you has what you’re working on”. – P318, MPI Score = 56

Not everyone can afford a second screen, though – a personal constraint. Instead of having to buy new equipment, a common alternative strategy seems to be tiling windows on the screen. This strategy allows workers to achieve the same effect without the additional monitor. The strategy has been observed in e-learning environments [54].

“I have one monitor dedicated to Mturk. I have Chrome running on it. 75 percent of the screen is workspace where I keep AMT tabs open like the AMT page and the survey page. On 20 percent of the screen I run a separate chrome tab running HITscraper, a program that continually checks AMT for new surveys and sorts them by color/payrate/Turkopticon rating/etc. The remaining 5 percent I keep a word document open where I copy/paste important information like the survey consent page (especially contact information) as well as any information from the survey itself I am allowed to copy.” – P84, MPI Score = 58

W2. Dedicating a quiet workspace for AMT work that is less likely to lead to interruptions

We know reducing the frequency of interruptions can improve well-being and performance in workplaces [48]. We wanted to know if specific workspaces can have an influence on workers’ levels of focus, since lack of focus can be detrimental to productivity [46].

Our results suggest that respondents who worked on AMT from private spaces had good levels of focus, and that over half of the workers who work from a shared space would like to have a separate space in their home for uninterrupted AMT work. For some people, it is not a lack of space that is the challenge, but balancing their other commitments with what would be best for work: “My problem is that I have to work in my living room because I’m a stay at home mom - I prefer to work upstairs in our attic office, but it’s not a good space for my toddler. So I would like to have a desk to work at down in the living room so I could use a mouse and 2-monitor setup, but we don’t have space right now because there are so many baby things in here.” – P6, MPI Score = 25

But not all workers would like to have a separate space for working on AMT or to eliminate interruptions altogether. For a large number of workers, being able to work on AMT from their homes provides them with the opportunity to be close to their families: “Well, there’s things I could change to make working easier with less distractions, but I watch my 3-year-old daughter full time at home. As you can imagine, there are plenty of interruptions, but it’s not like I want to eliminate them. My daughter is more important than my work at MTurk, so I strive for a good balance between the two.” – P104, MPI Score = 58

It is clear from our data that having access to quiet spaces makes people feel more productive and focused. But there are a number of personal constraints that mean that ’try and find a quiet space’ would not be practical for many workers. Likewise, obvious advice like using headphones isn’t always an alternative: “I sometimes have to work while the tv is on in another room and while I can’t see it, I can hear it and it’s bothersome. I have a set of headphones but I don’t use them to block out noise like that.” – P144, MPI Score = 43

Rather than having workers try to change their environment, it might make more sense for a support tool to recommend different kinds of tasks to people working in noisy settings, filtering out tasks that require audio classification or deep engagement with text. The work adapts to the worker, rather than vice versa.

W3. Keeping an eye on times

When asked what advice they would give about effective task management to someone just starting on AMT, a large number of the workers who have scored highly on the polychronicity scale (scores of 63, 56, 54, 51, etc.) said that by monitoring the time limits of the HITs that they have in the queue they can make sure that they work on all of the HITs, and that none of them will expire: “Tasks are timed so make sure that you’re always aware to how much time you have to do them. Always give yourself more time than you think it’ll take just to be on the safe side. Follow forums where people will give you more accurate times on how long the tasks actually take.” – P153, MPI Score = 54

Participant 123 adds: “To someone just starting on AMT, I recommend resisting the temptation to get carried away with doing too many tasks at once. I have had instances where multiple tasks ’timed out’ on me because I had them waiting in the queue and couldn’t get to them quickly enough. So pacing oneself is important. I also note that there are certain requesters whose hits I enjoy and therefore I get an idea of when these are posted and try to make room in my schedule for those tasks. Specific tools like Turkmaster, Hit Scraper, and Turkopticon are invaluable for managing tasks. I wish I had installed these immediately after I registered at AMT.” – P123, MPI Score = 57

Recommendations for Task Designers

We appreciate that some of our worker-centric design recommendations are often found on crowdsourcing forums. We believe that having evidence from our data to back some of the workers’ widely shared ’folk theories’ is necessary for building tools that help crowdworkers. Additionally, since task designers do not receive training on how to get the best out of their workers, we present four design recommendations that requesters can consider to alleviate some of the task constraints placed on workers.

R1. Give workers plenty of time to complete the tasks

Our analysis indicates that polychronic workers were likely to spend a longer time away from the HIT per switch. As time is crucial on AMT, if workers with higher polychronic tendencies choose to work on multiple things at once, this might mean that they require more time to finish their tasks. Luximon and Goonetilleke [43] found that, when tasks had different priorities, people with higher polychronic tendencies did not adjust their strategy as much as people with lower polychronic tendencies, who gave the higher-priority task more attention.

Giving workers plenty of time to complete their tasks will benefit both polychronics and monochronics because it places one less constraint on workers’ strategies. With relatively short time limits, workers have to prioritize finishing quickly over everything else: “I always make sure I look at the time on the hit. I can then decide if I will do it right that second, or be able to finish what I am doing so that I can go back to it when I am done with the task at hand.” – P18, MPI Score = 42

We know from HCI research that when people are told to work both quickly and accurately, they inevitably have to strike a balance between the two [5, 9]. Under serious time pressure, speed will be favored over accuracy.

R2. Make sure the task is responsive

As many workers split their screen into multiple windows during AMT work, it is important that the content of a HIT reflows with the window. For an optimal viewing experience, requesters should ensure that the HIT resizes accordingly when workers resize their windows. Docking windows can help users perform multiple tasks in parallel faster and more efficiently [60].

Furthermore, we know from research on crowdworkers that any kind of delay in tasks will encourage workers to switch to other activities [20, 21]. This is not surprising; AMT workers do not get paid for the amount of time they work but for what they produce and so have a strong incentive to maximize their productivity in a given time period.

R3. Encourage workers to return to the task (or to AMT)

Workers complained about getting distracted with other things on the internet while completing tasks on AMT: “The biggest distraction I have to deal with is other people and the demands they place upon me. My second biggest distraction is the web and the infinite amount of interesting information out there. I can listen to music and it helps me focus most of the time but every now and then I’ll get distracted trying to find the perfect song on YouTube and I’ll go down a rabbit hole and I’ll end up spending a few hours watching old music videos and just wasting way too much time instead of getting anything done.” – P219, MPI Score = 47

When the workers are finished with a task (especially if the task is outside of AMT, e.g. a questionnaire in Qualtrics [67]), requesters should make sure that they provide workers with a link to return to the AMT dashboard. In this way, workers can return to the platform in the event they get distracted.

R4. Pay well

As pay can be quite low on AMT, workers can choose to work on multiple tasks at once as a way of generating a higher income in a shorter period of time. In our study, in terms of data quality, workers who switched more did not perform worse than workers who avoided switching. Perhaps the fact that the pay was good explains why workers in our study were able to prioritize our task: “[…] I think my level of focus is mostly related to the hourly pay. If it’s high enough, I can put aside distractions and really focus for hours on end.” – P66, MPI Score = 26

Participant 27 adds: “The only strategy I use is to turn off my HIT notifiers if I find a high-paying task that requires complete focus. I do not usually multitask as I find it hurts concentration, but I leave my HIT notifier on so that I do not miss other tasks. For HITs such as this one where the pay is substantially higher than the tasks I normally take, I turn off all other distractions so that I can completely focus on a single task.” – P27, MPI Score = 65

Also, in our study, workers with higher polychronic tendencies did not perform worse than monochronic workers. We note from the literature that whereas polychronicity is a preference, multitasking is a behavior that can change during the day depending on factors such as opportunities that arise, interruptions, or unplanned tasks at work [33].

Recommendations for Platform Designers

Crowdsourcing platforms are not always known for being responsive to the needs of workers or requesters. Nevertheless, there are features that a platform could introduce which would allow workers more freedom with their behaviors and improve overall work quality.

PD1. Allow workers to auto-save work in progress

Our analysis indicated that polychronic workers were likely to spend a longer time away from the HIT per switch. For anyone who switches between tasks, it is important that their work does not get lost. Auto-saving work can ensure that workers who choose to switch rapidly between tasks will not lose their progress. Given that 40% of the workers in our study worked in shared spaces, and might therefore be interrupted, it is important that their progress on tasks is saved.

PD2. Allow workers to set goals

Leveraging Locke’s Goal-Setting Theory [41], we recommend that platform designers build tools which allow workers to set goals for their work sessions. Goal-setting, for instance, can increase the number of contributions citizen scientists make in online communities [27].

In our study, workers with high polychronic tendencies suggest that workers should set goals that they can work towards: “Set goals and work towards them - it is easy to get sucked into working many hours for very little money. […] I set up my day into chunks where I commit to working with a monetary goal in mind (or a number goal if I am close to a milestone). Once that goal is reached I usually will take a break and set another goal to pursue. I find that I absolutely need the periodic breaks to stay focused or I will wander off to look through the internet.” – P27, MPI Score = 65

Setting goals before starting a work session can help workers determine how much time they might want to spend on the working session, or how much money they would like to make in the session. Referring back to their goals during the day might lower any impulses to work on too many things at once.

PD3: Recontextualize the work on return from a switch

After a long switch away, returning workers could be reminded of what they still have to do on the task (e.g., what the task is about, how much of the task remains, how much money they have made so far, etc.). Workers could also be shown when they made the switch and how long they were away [10].

We know that these kinds of place-keeping tools are a good idea. Kern et al. [32] found that showing people where they were last working before they were interrupted significantly improved the speed with which they could resume work.

This recommendation can help workers understand their switching behaviors by revealing how much time they have spent away from the task.

PD4: Allow workers to take notes on tasks

Taking notes on tasks could enable workers to offload information about the task at hand right before making a switch. On return from the switch, the notes can act as a trigger for their delayed intentions [19].

“Over the years, I have developed custom spreadsheets for tracking goals, bills, income, task/time analysis, etc. I’ve turked away from my computer before and found it very frustrating to not have my spreadsheets. They help me focus on what I need to accomplish and how productive I’m being. My income has been increasing because of this.” – P112, MPI Score = 45

Limitations

Having discussed our design recommendations with reference to our own data and existing literature in the section above, we now consider the limitations of our approach.

Our survey focused on AMT workers in the US. The sampling strategy might have omitted potential participants from different locations who might work under different kinds of constraints, particularly from a personal perspective. Our study also focused on experienced AMT workers, with histories of producing high quality work. This biases their experiences; to be successful they must have discovered good strategies for managing the tensions between preferences and constraints. This form of survivorship bias in the data could be ameliorated by recruiting very new AMT workers without any restrictions on their track-record. Inexperienced or unsuccessful AMT workers might provide radically different perspectives on what kind of support would be most useful to them.

Conclusion

Crowdworkers generally receive no real training on how best to manage their tasks and time. Building a nuanced understanding of workers’ multitasking preferences, behaviors and habits is the starting point for creating tools that support workers. Our work helps to explain why crowdworkers may struggle with focus and attention, how they could alter their working conditions (especially physical and digital spaces) to address this, and what requesters and platform designers can do to improve productivity. We propose tools to help workers understand their multitasking behaviors and preferences, and to support behavior change for workers who may be looking to change the way they work. The tools should expressly avoid making value judgments about behavior. The objective is not to tell people they are not working hard enough – it is to help them align their habits more closely to their objectives.

Acknowledgements

We thank Kristy Milland for giving our questionnaire the benefit of her substantial experience and expertise. Likewise, we also would like to thank Manish Bhatia and Marie Mento for their input. We thank the workers who took the time to participate in our study. A lot of effort went into so many of the fascinating responses. Finally, we thank Jake Rigby and Judith Borghouts for helping us visualize the data. This work was supported by UK Engineering and Physical Sciences Research Council grant EP/L504889/1.

References

[1] Adler, R. and Benbunan-Fich, R. 2012. The effects of positive and negative self-interruptions in discretionary multitasking. Proceedings of the 2012 ACM annual conference extended abstracts on Human Factors in Computing Systems Extended Abstracts - CHI EA ’12 (Austin, Texas, USA, 2012), 1763.

[2] Altmann, E.M. et al. 2014. Momentary interruptions can derail the train of thought. Journal of Experimental Psychology: General. 143, 1 (2014), 215–226. DOI:https://doi.org/10.1037/a0030986.

[3] Amazon Mechanical Turk. https://www.mturk.com/.

[4] Bainbridge, L. 1999. Processes underlying human performance. Handbook of aviation human factors. (1999), 107–171.

[5] Banovic, N. et al. 2013. The effect of time-based cost of error in target-directed pointing tasks. Proceedings of the SIGCHI conference on human factors in computing systems (2013), 1373–1382.

[6] Becker, M.W. et al. 2013. Media multitasking is associated with symptoms of depression and social anxiety. Cyberpsychology, Behavior, and Social Networking. 16, 2 (2013), 132–135.

[7] Benabou, C. 1999. Polychronicity and temporal dimensions of work in learning organizations. Journal of Managerial Psychology. 14, 3/4 (Jun. 1999), 257–270. DOI:https://doi.org/10.1108/02683949910263792.

[8] Bluedorn, A.C. et al. 1999. Polychronicity and the Inventory of Polychronic Values (IPV): The development of an instrument to measure a fundamental dimension of organizational culture. Journal of Managerial Psychology. 14, 3/4 (Jun. 1999), 205–231. DOI:https://doi.org/10.1108/02683949910263747.

[9] Bogunovich, P. and Salvucci, D. 2011. The effects of time constraints on user behavior for deferrable interruptions. Proceedings of the SIGCHI conference on human factors in computing systems (2011), 3123–3126.

[10] Borghouts, J.W. et al. 2018. Looking Up Information in Email: Feedback on Visit Durations Discourages Distractions. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18 (Montreal QC, Canada, 2018), 1–6.

[11] Brewer, R. et al. 2016. “Why would anybody do this?”: Understanding Older Adults’ Motivations and Challenges in Crowd Work. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI ’16 (Santa Clara, California, USA, 2016), 2246–2257.

[12] Brown, K.W. and Ryan, R.M. 2003. The benefits of being present: Mindfulness and its role in psychological well-being. Journal of Personality and Social Psychology. 84, 4 (2003), 822–848. DOI:https://doi.org/10.1037/0022-3514.84.4.822.

[13] Brumby, D.P. et al. 2009. Focus on driving: How cognitive constraints shape the adaptation of strategy when dialing while driving. Proceedings of the 27th international conference on Human factors in computing systems - CHI 09 (Boston, MA, USA, 2009), 1629.

[14] Cades, D.M. et al. 2010. Factors Affecting Interrupted Task Performance: Effects of Adaptability, Impulsivity and Intelligence. Proceedings of the Human Factors and Ergonomics Society Annual Meeting (2010), 5.

[15] Chandler, J. et al. 2014. Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods. 46, 1 (Mar. 2014), 112–130. DOI:https://doi.org/10.3758/s13428-013-0365-7.

[16] Chisholm, C.D. et al. 2000. Emergency department workplace interruptions: Are emergency physicians “interrupt-driven” and “multitasking”? Academic Emergency Medicine. 7, 11 (2000), 1239–1243.

[17] Czerwinski, M. et al. 2004. A diary study of task switching and interruptions. Proceedings of the 2004 conference on Human factors in computing systems - CHI ’04 (Vienna, Austria, 2004), 175–182.

[18] Difallah, D. et al. 2018. Demographics and Dynamics of Mechanical Turk Workers. Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining - WSDM ’18 (Marina Del Rey, CA, USA, 2018), 135–143.

[19] Gilbert, S.J. 2015. Strategic offloading of delayed intentions into the external environment. The Quarterly Journal of Experimental Psychology. 68, 5 (2015), 971–992.

[20] Gould, S.J. et al. 2015. Task lockouts induce crowdworkers to switch to other activities. Proceedings of the 33rd annual ACM conference extended abstracts on human factors in computing systems (2015), 1785–1790.

[21] Gould, S.J.J. et al. 2016. Diminished Control in Crowdsourcing: An Investigation of Crowdworker Multitasking Behavior. ACM Transactions on Computer-Human Interaction. 23, 3 (Jun. 2016), 1–29. DOI:https://doi.org/10.1145/2928269.

[22] Gupta, N. et al. 2014. Turk-Life in India. Proceedings of the 18th International Conference on Supporting Group Work - GROUP ’14 (Sanibel Island, Florida, USA, 2014), 1–11.

[23] Hara, K. et al. 2018. A Data-Driven Analysis of Workers’ Earnings on Amazon Mechanical Turk. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18 (Montreal QC, Canada, 2018), 1–14.

[24] Hill, E.J. et al. 2001. Finding an Extra Day a Week: The Positive Influence of Perceived Job Flexibility on Work and Family Life Balance*. Family Relations. 50, 1 (Jan. 2001), 49–58. DOI:https://doi.org/10.1111/j.1741-3729.2001.00049.x.

[25] Hsieh, G. et al. 2010. Why pay?: Exploring how financial incentives are used for question & answer. Proceedings of the 28th international conference on Human factors in computing systems - CHI ’10 (Atlanta, Georgia, USA, 2010), 305.

[26] Ipeirotis, P.G. 2010. Demographics of Mechanical Turk. (2010).

[27] Jackson, C. et al. 2016. Encouraging Work in Citizen Science: Experiments in Goal Setting and Anchoring. Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion - CSCW ’16 Companion (San Francisco, California, USA, 2016), 297–300.

[28] Jackson, T. et al. 2001. The cost of email interruption. Journal of Systems and Information Technology. 5, 1 (Jun. 2001), 81–92. DOI:https://doi.org/10.1108/13287260180000760.

[29] Jackson, T.W. et al. 2003. Understanding email interaction increases organizational productivity. Communications of the ACM. 46, 8 (Aug. 2003), 80–84. DOI:https://doi.org/10.1145/859670.859673.

[30] Jin, J. and Dabbish, L.A. 2009. Self-Interruption on the Computer: A Typology of Discretionary Task Interleaving. (2009), 10.

[31] Kaplan, T. et al. 2018. Striving to earn more: A survey of work strategies and tool use among crowd workers. HCOMP (2018), 70–78.

[32] Kern, D. et al. 2010. Gazemarks: Gaze-based visual placeholders to ease attention switching. Proceedings of the SIGCHI conference on human factors in computing systems (2010), 2093–2102.

[33] Kirchberg, D.M. et al. 2015. Polychronicity and Multitasking: A Diary Study at Work. Human Performance. 28, 2 (Mar. 2015), 112–136. DOI:https://doi.org/10.1080/08959285.2014.976706.

[34] Kittur, A. et al. 2011. CrowdForge: Crowdsourcing complex work. (2011), 10.

[35] Komarov, S. et al. 2013. Crowdsourcing performance evaluations of user interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’13 (Paris, France, 2013), 207.

[36] Kossek, E.E. et al. 2006. Telecommuting, control, and boundary management: Correlates of policy use and practice, job control, and work–family effectiveness. Journal of Vocational Behavior. 68, 2 (Apr. 2006), 347–367. DOI:https://doi.org/10.1016/j.jvb.2005.07.002.

[37] Kossek, E.E. et al. 2012. Work–nonwork boundary management profiles: A person-centered approach. Journal of Vocational Behavior. 81, 1 (Aug. 2012), 112–128. DOI:https://doi.org/10.1016/j.jvb.2012.04.003.

[38] König, C.J. and Waller, M.J. 2010. Time for Reflection: A Critical Examination of Polychronicity. Human Performance. 23, 2 (Apr. 2010), 173–190. DOI:https://doi.org/10.1080/08959281003621703.

[39] Kreiner, G.E. et al. 2009. Balancing Borders and Bridges: Negotiating the Work-Home Interface via Boundary Work Tactics. Academy of Management Journal. 52, 4 (Aug. 2009), 704–730. DOI:https://doi.org/10.5465/amj.2009.43669916.

[40] Lasecki, W.S. et al. 2015. The Effects of Sequence and Delay on Crowd Work. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI ’15 (Seoul, Republic of Korea, 2015), 1375–1378.

[41] Locke, E.A. and Latham, G.P. 1990. A theory of goal setting & task performance. Prentice-Hall, Inc.

[42] Lottridge, D.M. et al. 2015. The effects of chronic multitasking on analytical writing. Proceedings of the 33rd annual ACM conference on human factors in computing systems (New York, NY, USA, 2015), 2967–2970.

[43] Luximon, Y. and Goonetilleke, R.S. 2012. Time use behavior in single and time-sharing tasks. International Journal of Human-Computer Studies. 70, 5 (May 2012), 332–345. DOI:https://doi.org/10.1016/j.ijhcs.2012.01.001.

[44] Mark, G. 2015. Multitasking in the digital age. Synthesis Lectures On Human-Centered Informatics. 8, 3 (2015), 1–113.

[45] Mark, G. et al. 2008. The cost of interrupted work: More speed and stress. Proceeding of the twenty-sixth annual CHI conference on Human factors in computing systems - CHI ’08 (Florence, Italy, 2008), 107.

[46] Mark, G. et al. 2016. Neurotics Can’t Focus: An in situ Study of Online Multitasking in the Workplace. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI ’16 (Santa Clara, California, USA, 2016), 1739–1744.

[47] Mark, G. et al. 2012. “A pace not dictated by electrons”: An empirical study of work without email. Proceedings of the SIGCHI conference on human factors in computing systems (New York, NY, USA, 2012), 555–564.

[48] Mark, G. et al. 2012. “A pace not dictated by electrons”: An empirical study of work without email. (2012), 10.

[49] Mark, G. et al. 2014. Stress and multitasking in everyday college life: An empirical study of online activity. Proceedings of the 32nd annual ACM conference on Human factors in computing systems - CHI ’14 (Toronto, Ontario, Canada, 2014), 41–50.

[50] Martin, D. et al. 2017. Understanding the Crowd: Ethical and Practical Matters in the Academic Use of Crowdsourcing. Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments (2017), 27–69.

[51] Meys, H.L. and Sanderson, P.M. 2013. The Effect of Individual Differences on How People Handle Interruptions. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 57, 1 (Sep. 2013), 868–872. DOI:https://doi.org/10.1177/1541931213571188.

[52] Necka, E.A. et al. 2016. Measuring the Prevalence of Problematic Respondent Behaviors among MTurk, Campus, and Community Participants. PLOS ONE. 11, 6 (Jun. 2016), e0157732. DOI:https://doi.org/10.1371/journal.pone.0157732.

[53] Ophir, E. et al. 2009. Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences. 106, 37 (Sep. 2009), 15583–15587. DOI:https://doi.org/10.1073/pnas.0903620106.

[54] Park, J.H. and Liu, M. 2012. Multitasking in e-learning environments: Users’ multitasking strategies and design implications. (2012), 6.

[55] Poposki, E.M. and Oswald, F.L. 2010. The Multitasking Preference Inventory: Toward an Improved Measure of Individual Differences in Polychronicity. Human Performance. 23, 3 (Jun. 2010), 247–264. DOI:https://doi.org/10.1080/08959285.2010.487843.

[56] Rigby, J.M. et al. 2017. Media Multitasking at Home: A Video Observation Study of Concurrent TV and Mobile Device Usage. Proceedings of the 2017 ACM International Conference on Interactive Experiences for TV and Online Video - TVX ’17 (Hilversum, The Netherlands, 2017), 3–10.

[57] Ross, J. et al. 2010. Who are the crowdworkers?: Shifting demographics in mechanical turk. Proceedings of the 28th of the international conference extended abstracts on Human factors in computing systems - CHI EA ’10 (Atlanta, Georgia, USA, 2010), 2863.

[58] Rubenking, B. 2016. Multitasking With TV: Media Technology, Genre, and Audience Influences. Communication Research Reports. 33, 4 (Oct. 2016), 324–331. DOI:https://doi.org/10.1080/08824096.2016.1224167.

[59] Sampath, H.A. et al. 2013. Effect of task presentation on the performance of crowd workers—a cognitive study. First AAAI Conference on Human Computation and Crowdsourcing (2013).

[60] Shibata, H. and Omura, K. 2012. Docking window framework: Supporting multitasking by docking windows. Proceedings of the 10th asia pacific conference on Computer human interaction - APCHI ’12 (Matsue-city, Shimane, Japan, 2012), 227.

[61] Spink, A. et al. 2002. Multitasking Information Seeking and Searching Processes. J. Am. Soc. Inf. Sci. Technol. 53, 8 (Aug. 2002), 639–652. DOI:https://doi.org/10.1002/asi.10124.

[62] Wang, Z. 2015. Media distraction in college students. (2015).

[63] Watson, D. et al. 1988. Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology. 54, 6 (1988), 1063.

[64] Welinder, P. and Perona, P. 2010. Online crowdsourcing: Rating annotators and obtaining cost-effective labels. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops (San Francisco, CA, USA, Jun. 2010), 25–32.

[65] Wood, A.J. et al. 2018. Good gig, bad gig: Autonomy and algorithmic control in the global gig economy. Work, Employment and Society. (2018), 0950017018785616.

[66] Zide, J.S. et al. 2017. Work interruptions resiliency: Toward an improved understanding of employee efficiency. Journal of Organizational Effectiveness: People and Performance. 4, 1 (Mar. 2017), 39–58. DOI:https://doi.org/10.1108/JOEPP-04-2016-0031.

[67] Qualtrics: The Leading Research & Experience Software. https://www.qualtrics.com/.

[68] Turker Nation: Our mTurk Forum helps you earn money online with Amazon mTurk.