Computing and social responsibility in Human Resource Management – looking back and looking forwards

Vincent Bryce

Mature PhD student at the Horizon Centre for Doctoral Training in the School of Computer Science, University of Nottingham and DMU and senior HR project manager at the University of Nottingham. My research interests are RRI (Responsible Research and Innovation) and HRIS (Human Resources Information Systems). I am also an experienced Equality and Diversity practitioner. I am currently developing mixed methods research into the business case for Responsible Research and Innovation using meta-review of published RRI case studies and case studies of HR/HRIS aspects of RRI.

Looking back

The nineties, noughties and teenies saw a number of developments in digital people management and employment practices. For many organisations, this period saw the first digitisation of HR records, the development of increasingly sophisticated management information and analytics capabilities, and the materialisation of computer ethics issues and concerns anticipated by CCSR and the Ethicomp community.

Evolving systems and expanding data collection

The more widespread adoption of enterprise resource planning (ERP) systems such as Oracle, SAP and PeopleSoft has significantly altered HR practice and enabled increasing use of analytics to describe, and increasingly predict, employment trends at organisational and individual level (Johnson et al., 2016). While the extent of HR digitisation varies across regions and industries (CIPD, 2019), companies of all sizes typically hold a much wider range of data than under traditional paper personnel file approaches, and this data may increasingly include health and activity data collected through work-provided wearables alongside more typical personnel records. This period has seen a growing movement in HR towards the use of predictive analytics (CIPD, 2018), with ethical considerations detected earlier by Centre members (e.g. McBride, 2015) only belatedly coming under wider academic scrutiny (Giermindl et al., 2021; Tursunbayeva et al., 2021). Even in organisations where legacy systems, staff capabilities or value-driven decisions have limited their development, ongoing data collection through HR systems builds a foundation for future applications, and consequently the potential for a wider range of organisations to repeat the mistakes of digital pioneers and entrench bias (Dastin, 2018; Cheong et al., 2021).

New forms of digital labour

The emergence of both specialised and universal digital platforms, from Uber and Lyft to Upwork and Flexjobs, has catalysed broader trends towards ‘gigification’ and flexible, insecure work, posing challenges for employment legislation and requiring us to reimagine our preconceptions of what ‘work’ is (Duggan et al., 2020). Recent judgements have challenged the presumption of platform companies that workers can be managed as employees in all but name (Russon, 2021), but there remains a long way to go in aligning regulation with newer forms of work such as content creation through YouTube and similar channels, which, although offering flexibility, exercise significant managerial control and chain creators’ livelihoods to opaque algorithms (Prtorić, 2021).

Increased interest in machine learning has led to rising numbers both of graduate computer science roles and of often menial, frequently offshored work associated with tasks such as image classification (Tan et al., 2020). The use of tools such as Mechanical Turk enables international as well as contractual arbitrage (Semuels, 2018), with the potential for large-scale ‘ethics dumping’.

The changing workplace

Technology has enabled changes in places of work, including through remote work and cloud-based collaboration tools whose adoption and use increased dramatically through the Covid-19 pandemic. The rapid rise to prominence of videoconference tools such as Zoom spotlighted security weaknesses and the potential for surreptitious surveillance (Amatulli, 2020), and in some cases catalysed action by developers in response to societal concerns (Griffin, 2020). An increasing number of workers share their workspaces with robots, whether physically in warehouses or in the form of chatbots and other digital agents increasingly used as an initial point of customer contact. The effects of this cohabitation are starting to become apparent (Evans, 2020), including in increased fears of worker replacement (PWC, 2020).

While surveillance concerns have been a constant theme in computer ethics circles, the past decades have increasingly acquainted workers with different forms of monitoring. The Centre’s contribution here includes observations that while this is of ongoing concern to some, others may have come to accept it (Stahl et al., 2005), and workers may be more concerned with the surreptitious tracking tendencies of technologies they use in the home, such as social media and digital speakers (Wilford, 2004).

The way we find and apply for roles has changed significantly in this period, with machine learning technologies enabling targeted recommendations for employers and job hunters, the rise (and potentially, fall) of radically new recruitment technologies, in some cases based on hidden and questionable claims (Kahn, 2021), and the integration of recruitment, assessment and professional networking into online hubs such as LinkedIn. Here too, job prospects are increasingly influenced by algorithms, such as those that decide whether individuals are described as ‘highly skilled’, and by the potential to curate and game profiles, for example by inflating follower or connection counts.

Looking forwards

More digital interactions

HR technologies will continue to shape employees’ initial, and final, experiences with organisations. The 1990s saw the unfortunate innovation of ‘firing by SMS’; the intervening years have seen an increasing amount of candidate interaction take place over social media; and in coming decades, many candidates may not meaningfully interact with a human employee until the initial phases of their recruitment and ‘onboarding’ have taken place through a combination of automated sifting, chatbot-enabled queries, AI video interviewing and psychometric testing, and induction training using online and mixed-reality formats.

Workplace systems will increasingly reflect the progression of web-based technologies, through software that enables improved collaboration (Stone et al., 2015) towards ‘ambient AI’ technologies that, through data from connected devices, may in time solve perennial workplace problems such as getting the heating right. In doing so, productivity software and wearables will generate an ever-larger volume of data, creating the potential for beneficial interventions such as those relating to wellbeing, as well as for misinterpretation, reduced autonomy and illicit surveillance (Singer-Velush et al., 2020; Ajunwa, 2020). The requirement in some jurisdictions for firms to state the data they collect and its purposes may be challenged by the temptation to collect new kinds of data, for either beneficial or nefarious purposes (BBC, 2020).

The way workers respond to increasingly digital and automated ‘EX’ (employee experience) is likely, in turn, to shape the fortunes of companies and the development of perceived good practice. We may increasingly encounter both the negative experience of unhelpful and frustrating chatbot conversation attempts, and the positive experience of being able to get information or complete transactions at any hour of the day through devices such as Echo or voice assistants such as Alexa and Siri, with the ability to be passed through to a real-life employee if we wish. The coming decades may see the increasing anthropomorphisation of digital assistants, through continual improvements in appearance, speech, and complexity, to the extent that they become indistinguishable from humans. This will be an important opportunity for ethical practice through transparent disclosure of increasingly hyperrealistic bot-based activity (Metz and Chen, 2019; Stanford HAI, 2021).

Technology workers, assemble

Just as the ‘Arab Spring’, Brexit referendum and 2016 US Presidential election saw significant and consequential new applications of technology in the political sphere, worker collaboration apps and social media will continue to enable new forms of organising that may hold companies to account, and change their course on technology decisions. The cases of the Maven project, in which the vocal opinions of Google employees contributed to a corporate decision to stop a technology development (Crofts and van Rijswijk, 2020), Timnit Gebru, in which the departure of an internal AI ethicist brought a wave of negative publicity to a technology company (Tiku, 2020), and ongoing unionisation battles in technology firms – some going in workers’ favour, others not (Fortson, 2021; Russ and Naidu, 2021) – indicate the ‘faultline’ that employee responses to ethical issues present. The value orientation of the next generation of IT workers may define the outlook – but more experienced professionals may be best placed to guide attention to ethical issues (Wilford and Wakunuma, 2014). The ongoing development of professional codes of conduct and ethical curricula within Computer Science degrees – an area in which the research of the Centre and Ethicomp community has been instrumental (e.g. Gotterbarn et al., 2018) – is likely to be of continuing importance in shaping the responsibility climate within technology companies, with implications for the design and trajectory of the software products businesses use.

The increasingly multinational nature of IT developments may result in continuing arbitrage of privacy standards and worker protections between jurisdictions through piecework platforms, and the emergence of more highly technologised working environments in countries with more permissive regulatory approaches (Sun, 2021), supported by massive state investment in digital technologies (Li, 2018). Fierce battles to rein in technology companies in both the West (Palmer, 2020) and the East (Carr and Liu, 2021) may continue. The case for responsible digital innovation (Jirotka et al., 2017) and socially responsible HRM (Shen, 2011) may more frequently clash with cost and profit motivations.

A technology dilemma for HR departments

Increasing opportunities for process automation and predictive or prescriptive people analytics may impose hard choices. Should HR use technology to ‘make itself redundant’ by automating all aspects of employment processes and management information provision, or to augment HR teams so as to retain a ‘human’ element while improving efficiency? And if HR workers don’t have the digital skills to implement, operate, or advise management on increasingly sophisticated technologies, will HR in some organisations act as a blocker, rather than enabler, of digital practice? Natural caution based on accountability for the security of employee data may predispose HR leaders to act tentatively, and together with advocacy for employee engagement this may enable a more responsible approach to the introduction of new technologies.

In many cases, workplace technology decisions may not involve HR – a high proportion of decisions may be made by IT or other senior leaders without the opportunity for HR teams to consider what they or employee representatives feel is right (CIPD, 2019, p. 13), and on a day-to-day basis bot-based interaction and self-service may increasingly take HR workers out of the loop. While this may create more opportunities for HR staff to supervise digital as well as human agents in the HR department – and to facilitate learning and development as ‘machine relations management’ (Daugherty and Wilson, 2018, pp. 131–132) takes on increasing importance alongside ‘people management’ – we may increasingly come to the conclusion that technology should not be left to technologists (Jacobs, 2020).

Digital dilemmas may more frequently require HR to revisit its core roles (Ulrich and Dulebohn, 2015), and may bring some of these into conflict as organisations evolve their practice (Ulrich, 2019). For example – people analytics offers opportunities to increase HR’s credentials as a strategic business partner, but automated and algorithm-driven processes and decisions may take HR further from its often-valued role as ‘employee champion’.

Bridging past, present and future

The pandemic has brought workplace technology issues into sharp focus – the potential of remote collaboration software to enable more flexible ways of working, the benefits and pitfalls of health surveillance, and the creep of surreptitious surveillance into the home environment.

However, the saga around the Post Office ‘Horizon’ system (McCormack, 2016) – commissioned around the same time the CCSR was created 25 years ago, and only recently moving towards resolution (BEIS, 2020) – suggests the broader implications of ubiquitous technology and algorithmic management may just be starting to emerge. It warns us of the long-lasting effects of the ethical and design choices that senior managers, IT and HR staff will face in the implementation of workplace technologies.


Ajunwa, I. (2020). The “black box” at work. Big Data & Society, 7(2), 205395172096618.

Amatulli, J. (2020). Zoom Can Track Who’s Not Paying Attention In Your Video Call. Here’s How. Huffington Post.

BBC. (2020). Barclays scraps ‘Big Brother’ staff tracking system. BBC News Website.

Bondarouk, T., Parry, E., & Furtmueller, E. (2017). Electronic HRM: Four decades of research on adoption and consequences. The International Journal of Human Resource Management, 28(1), 98–131.

Carr, A., & Liu, C. (2021). The China Model: What the Country’s Tech Crackdown Is Really About. Bloomberg.

Cheong, M., Lederman, R., McLoughney, A., Njoto, S., Ruppanner, L., & Wirth, A. (2021). Ethical Implications of AI Bias as a Result of Workforce Gender Imbalance. University of Melbourne.

CIPD. (2019). CIPD Research Report—People and machines: From hype to reality. Chartered Institute for Personnel and Development.

Crofts, P., & Van Rijswijk, H. (2020). Negotiating ‘Evil’: Google, Project Maven and the Corporate Form. Law, Technology and Humans, 2(1), 75–90.

Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.

Daugherty, P., & Wilson, H. J. (2018). Human + Machine: Reimagining Work in the Age of AI.

Ulrich, D. (2019). Digital HR: What Is It and What’s Next? LinkedIn Pulse.

Department for Business, Energy and Industrial Strategy (BEIS). (2020). Independent review into the Post Office Ltd Horizon IT system.

Duggan, J., Sherman, U., Carbery, R., & McDonnell, A. (2020). Algorithmic management and app‐work in the gig economy: A research agenda for employment relations and HRM. Human Resource Management Journal, 30(1), 114–132.

Evans, W. (2020). How Amazon hid its safety crisis. Reveal.

Fortson, D. (2021). Is this the dawn of unionisation at Google? The Times.

Giermindl, L. M., Strich, F., Christ, O., Leicht-Deobald, U., & Redzepi, A. (2021). The dark sides of people analytics: Reviewing the perils for organisations and employees. European Journal of Information Systems, 1–26.

Gotterbarn, D., Brinkman, B., Flick, C., Kirkpatrick, M. S., Miller, K., Vazansky, K., Wolf, M. J. (2018) ACM Code of Ethics and Professional Conduct.

Griffin, A. (2020). Zoom responds to concerns over privacy. The Independent.

Jirotka, M., Grimpe, B., Stahl, B., Eden, G., & Hartswood, M. (2017). Responsible research and innovation in the digital age. Communications of the ACM, 60(5), 62–68.

Johnson, R. D., Lukaszewski, K. M., & Stone, D. L. (2016). The Evolution of the Field of Human Resource Information Systems: Co-Evolution of Technology and HR Processes. Communications of the Association for Information Systems, 38, 533–553.

Kahn, J. (2021). HireVue drops facial monitoring amid A.I. algorithm audit. Fortune.

Jacobs, K. (2020). Don’t leave technology in the hands of technologists. CIPD.

Li, Y. (2018). Understanding China’s Technological Rise.

McBride, N. (2015). Virtuous Business Intelligence. International Journal of Business Intelligence Research, 6(2), 1–17.

McCormack, T. (2016). The Post Office Horizon system and Seema Misra. Digital Evidence & Electronic Signature Law Review, 13, 133–138.

Metz, C., & Chen, B. (2019). Google’s Duplex Uses A.I. to Mimic Humans (Sometimes). New York Times.

Singer-Velush, N., Sherman, K., & Anderson, E. (2020). Microsoft Analyzed Data on Its Newly Remote Workforce. Harvard Business Review.

Palmer, A. (2020). What the EU’s investigation of Amazon means for U.S. antitrust probes. CNBC.

Prtorić, J. (2021). YouTubers of the world, unite! – What happens when a grassroots, international creators’ movement and a traditional trade union join forces?

PWC. (2020). Workforce of the future—The competing forces shaping 2030.

Russ, H., & Naidu, R. (2021). Amazon Alabama workers reject union in key loss for U.S. organized labor. Reuters.

Russon, M.-A. (2021). Uber drivers are workers not self-employed, Supreme Court rules. BBC.

Semuels, A. (2018). The Internet Is Enabling a New Kind of Poorly Paid Hell. The Atlantic.

Shen, J. (2011). Developing the concept of socially responsible international human resource management. The International Journal of Human Resource Management, 22(6), 1351–1363.

Stanford HAI Adaptive Agents Group. (2021). The Shibboleth Rule for Artificial Agents.

Stone, D. L., Deadrick, D. L., Lukaszewski, K. M., & Johnson, R. (2015). The influence of technology on the future of human resource management. Human Resource Management Review, 25(2), 216–231.

Sun, N. (2021). China’s tech workers pushed to their limits by surveillance software. Financial Times.

Tan, Z. M., Aggarwal, N., Cowls, J., Morley, J., Taddeo, M., & Floridi, L. (2020). The Ethical Debate about the Gig Economy: A Review and Critical Analysis. SSRN Electronic Journal.

Tiku, N. (2020). Google hired Timnit Gebru to be an outspoken critic of unethical AI. Then she was fired for it. Washington Post.

Tursunbayeva, A., Pagliari, C., Di Lauro, S., & Antonelli, G. (2021). The ethics of people analytics: Risks, opportunities and recommendations. Personnel Review, ahead-of-print(ahead-of-print).

Ulrich, D., & Dulebohn, J. H. (2015). Are we there yet? What’s next for HR? Human Resource Management Review.

van den Heuvel, S., & Bondarouk, T. (2017). The rise (and fall?) of HR analytics: A study into the future application, value, structure, and system support. Journal of Organizational Effectiveness, 4(2), 157–178.

Wilford, S. (2004). Information and communication technology privacy and policies within organisations: An analysis from the perspective of the individual. PhD thesis, De Montfort University.

Wilford, S., & Wakunuma, K. (2014). Perceptions of ethics in IS: How age can affect awareness. Journal of Information, Communication and Ethics in Society, 12(4), 270–283.

Funding: The corresponding author is supported by the Horizon Centre for Doctoral Training at the University of Nottingham (UKRI Grant No. EP/S023305/1) and by ORBIT, the Observatory for Responsible Research and Innovation in ICT.

Copyright: Copyright remains with the authors. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.