The development of care robots has been accompanied by a number of technical and social challenges, debates around which are guided by the question: “What is a robot for?” These debates have discussed the functionalities and tasks that can be delegated to a machine without harming human dignity. However, we argue that these ethical debates do not offer any alternatives for designing care robots for the common good. In particular, we stress the need to shift the current ethical discussion on care robots towards a reflection on the politics of robotics, understanding politics as the search for the common good. To develop this proposal, we draw on the theoretical perspective of science and technology studies, which we integrate into an analysis of disagreement inspired by a consensus-dissensus way of thinking, discussing and rethinking the relationships of care robots with the common good and with the subjects of such good. Thus, the politics of care robots allows for the emergence of a set of discussions on how human-machine configurations are designed and practiced, as well as on the role of the market of technological innovation in the organisation of care.
The consensus debate on functionalities
Care robots are progressively being introduced into nursing homes, hospitals, and schools, among other environments, as pilot or experimental programmes (Savage, 2022). They are designed to perform caring and assistive activities, and this process was accelerated by the COVID-19 pandemic. There has also been a significant increase in narratives about the importance of robots in the economy and society. This issue is accompanied by relevant challenges, which are commonly framed in public debates from a functionalist approach: technologies are designed to solve a problem or fill a need. Consequently, robots are assessed according to their ability to realise the end for which they were designed. From this perspective, the discussion of their effects and controversies mainly focuses on the normative aspects of the goals or on the quality of the way in which technologies function (Verbeek, 2006). Accordingly, from the common-sense concept of functionalities, the debates about the role of robots in society have been guided by the question: “What is a robot for?” For example, could they be designed to feed a person with reduced mobility (Sharkey and Sharkey, 2011)? Would it be appropriate to use robots to provide palliative care for a person in the final days before their death (Sharkey and Sharkey, 2012; Sparrow, 2016)? Could a robot be designed to watch over a baby while they are at home alone (Vallor, 2011)? These questions revolve around whether certain functionalities can be delegated to a robot or, similarly, whether it is appropriate for certain functionalities to be designed into a robot (Santoni de Sio and van Wynsberghe, 2016). This opens an important debate about which care or assistive tasks can be delegated to a machine without harming human dignity or taking away the humanity of care (Savela et al., 2018).
These primary preoccupations are accompanied by other more complex debates that are entangled with the functionality discussion, specifically around the possibility of deception (Sharkey and Sharkey, 2011), the autonomy of the person, the liability in case of damage or harm (Matsuzaki and Lindemann, 2016), and the confidentiality of data collected during the execution of these tasks (Jenkins and Draper, 2015).
From this approach, a series of normative proposals have been developed to guarantee good care for elderly people, children, or people with some kind of disability when a care robot is introduced (Vandemeulebroucke et al., 2018). Suggestive proposals have also been developed for the design process of this type of artefact—for example, the Care-Centered Value-Sensitive Design of van Wynsberghe (2013)—which integrate the academic debate on the moral values embedded in technologies (Verbeek, 2008). Although these proposals broaden the discussion of functionalities, they establish a link between ethics and morality and develop a series of normative proposals that continue to revolve around how to design robots to best fulfil their mandated care functions.
The “what for” question that guides current debates is part of a general drive towards infusing robotics and artificial intelligence with ethical guidance and thinking, assuming that care robots are good for all and contribute to the common good or the social good (Berendt, 2019). There are multiple legitimate answers to the “what for” questions that articulate the functionalist debate. However, discussing care robots for good, or for the common good, only in light of the ethical debate is extremely problematic. To which community do robots contribute? What is the problem that robotics can solve? Who defines this problem?
The ethical discussion on functionalities is not the only one, as shown (for example) by the ethical debate about the agency of robots (Coeckelbergh, 2015; Gunkel, 2007) or the discussion about their moral status (Danaher, 2021). However, we consider the ethical debate on functionalities to be the dominant one in the ethics of technology (Verbeek, 2004) and the dominant discussion about whether care robots respond to what we consider to be the good. For this reason, we take this type of discussion as a discussion of consensus, in which the various interlocutors pose the problem in a way that they all understand.
Disagreement in care robots
Based on this consensus, and drawing on Jacques Rancière’s thinking, we propose to investigate disagreement as a form of political discussion about care robots. In this way, the politics of robotics may be understood as a discussion about the common good that entails disagreement and conflict in relation to how care problems are defined, how these problems are responded to, and who defines them. Using Rancière’s terminology, robots propose a certain distribution of the sensible (Rancière, 2019): what is possible and acknowledged. With their sensors, algorithms, and automated responses, these artefacts configure what is felt, heard, seen, and perceived within a physical and a symbolic space. However, robots are not stabilised artefacts; they are open to new imaginaries and interactions (Vallès-Peris and Domènech, 2020). Based on the identified consensus position, a set of dissenting arguments on how care is approached by care robots can be identified. At the same time, other voices can be introduced, proposing other configurations and enunciating the questions in other ways.
Dissensus about the problems that robots attempt to address
For Rancière, ‘Politics […] is equality as its principle. And the principle of equality is transformed by the distribution of community shares as defined by a quandary: when is there and when is there not equality in things between who and who else? What are these “things” and who are these whose?’ (Rancière, 1999). The analysis of the politics of care robots centres on an interest in paradoxes, in the scandals that shake the foundations of what is meant by community and care. What is the care to which such artefacts refer? Do robots participate in the community of care? And, if so, in what way? At least three forms of dissent or tension can be identified in the way in which robots approach care:
Tension 1
Although it may seem obvious, it does not hurt to begin by recalling a key assumption in the debate on functionalities: discussing what tasks can be delegated to a robot, or how these tasks can be executed ethically and responsibly, implies the assumption that robots are a solution to something, whatever that may be. In other words, although the use of humanoid robots endowed with a certain degree of autonomy to care for dependent people in everyday environments is not a real possibility at present (Maibaum et al., 2021), their existence as a solution is treated as a matter of fact (as demonstrated by the large investments in this area, the volume of business generated, and the lines of research being pursued).
Tension 2
Choosing which tasks can be delegated to a robot assumes that care operates with the same logic as productive work. The translation of the model of the robot in the factory is embedded in this discussion: industrial robots are conceived for doing dirty, dull, and dangerous jobs (the triple-D model). When this model is applied to caring robots, everyday caring activities are conceived of as fragmented into small pieces. These fragments are then organised hierarchically: those with less value, those that are heavier, and those that are more tedious can be delegated to the robot (such as cleaning, repeating the same thing many times to dementia patients, giving medication, feeding, etc.), while the most valuable tasks (related to affections and emotions) cannot be delegated and must be kept in human hands (Vallès-Peris and Domènech, 2020).
Tension 3
Deciding which functionalities or care tasks can be designed for delegation to a robot rests on the ontological principle that humans and robots are two separate entities (Suchman and Weber, 2016). “We” humans reflect on what kind of functionality can be delegated to “them”, the machines. This separation is reinforced by an argument commonly used in regulatory frameworks and ethical recommendations about artificial intelligence (AI) and robotics: that of “the human in the loop” (Steels and Lopez de Mantaras, 2018). This refers to the idea that we need to preserve meaningful human control over automated decisions carried out by a robot. Yet the demand for “the human in the loop” presupposes that, without such explicit oversight, there is no “human control” over automated decisions, as if robots and AI systems were not themselves designed, produced, and maintained by humans.
Other voices: the experiences of patients in a care crisis
If we agree that a guiding objective is to design care robots for the common good, then a major question is: what does the “common good” consist of? This term is not uniquely defined, but it can broadly be understood as the aim of being good for all (Berendt, 2019). The next question, then, is what “the good” consists of. Here, the answer depends on which part of the common, or of the community, defines the good and the problems that must be addressed to achieve it.
Community, like politics, is linked to the notion of equality. From a Rancièrean perspective, equality does not respond to any foundational characteristic but is understood as the count of the parts that make up a community, a count that is always erroneous and incomplete (Rancière, 2004). Which parts of the community have a voice in the controversies surrounding technological development? As sociologist de Sousa Santos (2016) explains, the epistemological and experiential distribution of fear and hope tends to benefit social groups with greater access to scientific knowledge and technology. For these groups, precaution and constraints are something negative that slows down the progress of science and technology. However, for those groups with little or no control over the development of knowledge and technology, uncertainty has no voice, because they live under a cognitive injustice in which their knowledge and experiences place them in an inferior position in a world that is defined and legislated by powerful and alien knowledge. For them, will the benefits of care robots outweigh the losses? Who will reap the benefits? And who will bear the losses?
In the discussion about care robots, some voices count more than others: different social groups are not equal in their capacity to impose their logic or their interpretation of care inscribed in technology (Vallès-Peris and Domènech, 2020). We know that the large volume of business and the economic benefits generated by care robots are not distributed equally among the various social groups. As de Sousa Santos (2016) points out, this undoubtedly affects the different concerns that various groups have about these robots, as well as the recognition of these concerns and their capacity to articulate the debate.
To develop a political reflection on care robots, it seems essential to look into the conflicts and concerns of those parts of the community that do not usually define the debate on robotics. Tensions are declared and disagreements appear when the needs, fears, and hopes of the main actors involved in healthcare relationships are taken into account. In this way, we base our arguments on a research project that involved elderly patients hospitalised during the first and second waves of the COVID-19 pandemic (Vallès-Peris et al., 2021). The main aim of this research was to ascertain whether patients would accept the use of robots for performing caring activities in the hospital, what their motives were, and on which circumstances they based their preferences. Its two main results were:
- Patients’ perspectives on caring robots are ambivalent: on the one hand, they preferred to be cared for and to perform care practices with humans; on the other hand, they considered that it could be very positive to introduce robots to take care of human beings, assisting carers and medical personnel and (if necessary) replacing them.
- This ambivalence is not related to the different care tasks or functions that are delegated to a robot, but to a context of high pressure on the health system and a lack of resources. In this situation, patients assumed an individual and collective responsibility to facilitate the proper functioning of the system and the guarantee of health assistance.
The organisation of care as an economic and political decision
The main issue supporting patients’ acceptance of the use of caring robots is their own experience of being hospitalised during the first and second waves of the COVID-19 pandemic. These experiences illustrate the oversaturation and under-resourcing of the healthcare system, exemplifying what advocates of the crisis-of-care thesis describe. Health systems around the world are facing a complex issue: although increasing expenditure and jobs are allocated to healthcare, this is not enough to respond to the ageing population, to give the necessary attention to its associated health problems, or to reverse deteriorating key human health outcomes (Topol, 2019). The capacity of a society to provide healthcare is expressed through fundamental political decisions about how to organise services, whether to privatise or support, and whether to regulate or deregulate various forms of care. Small and large expenditures in public health, the network of acute hospital services and specialist medical care, the organisation of social care services in residential, home, and community settings, as well as the many types of and formulas for facilitating childcare and organising the provision of care and healthcare, all guarantee more or less care for citizens (Fine and Tronto, 2020). It seems, then, that we are faced with a large political issue, which relates to the organisation and formulation of measures and policies of all kinds to ensure the provision of health and care for the population.
When patients consider the introduction of robots a desirable way to respond to the saturation of the healthcare system, even though they prefer a relationship with humans, they do not condition this on the functions that could be delegated to a robot. Their ambivalence is not related to the artefact but rather to the context, and patients thereby point to two characteristics of care that differentiate it from productive work: responsibility and bidirectionality. The core idea of the ethics of care is that people are interdependent beings who move in relationships of responsibility and mutual support (Puig de la Bellacasa, 2017). If we respond to the “why” and not to the “what for”, then we need robots that facilitate and ensure that societal systems can provide healthcare and good care. This means that we cannot translate the conceptualisation of the factory robot to the daily life of care relations, because these relations operate according to another logic, not the strongly rule-based environments of task fragmentation or the economic efficiency of factory manufacturing (Vallès-Peris and Domènech, 2020). In summary, from this perspective it makes little sense to discuss which functionalities or tasks can be performed by a robot, because this conceptualisation is ill-suited to mobile robots in dynamic care processes and relationships, and consequently does not guarantee the provision of good care.
The double displacement in robot politics
Up to this point, the proposed political reflection on care robots follows a certain route:
- The robotics of care is part of a socially available narrative that arises in response to the provision of care and assumes that there is a problem in this provision, the so-called ‘care crisis’. Faced with this situation, care robots are imagined as a way to mitigate the lack of personnel. Although this is neither technologically feasible nor aligned with good care, the expectation that robots will somehow address the care crisis remains.
- The dominant debate (or the consensus position, in Rancièrean terms) assumes this starting position of a care crisis and is articulated around the functions that the robot could perform to alleviate the crisis, and the moral and normative controversies that this would entail.
- Against this consensus position, we identify a series of tensions in the debate on robotics for the common good and introduce the experience of people who have lived through a situation attributable to the care crisis.
This analysis is based on a consensus-dissensus process, which leads us to a double displacement (explained below): (a) instead of focusing on the robot, we shift the focus of the debate to the human-machine configuration and to the community in which the robot participates; and (b) instead of locating the common good in the negotiation of ethical norms and recommendations, we shift it towards a process of rupture and continuous conflict.
Human-machine configurations
From an STS approach, it is understood that when technologies are used, they help to shape the context in which they fulfil their function. This is described as “technological mediation” (Latour, 1998). Robots mediate the experiences and practices of their users, help to shape the quality of our lives and, more importantly, help to shape our moral actions and decisions (Verbeek, 2006). When robots are used in a hospital or another healthcare setting to support caring activities, they participate in the caring relations of that scenario and contribute to care. Care has always been carried out with technologies (e.g., wheelchairs, hearing aids, telecare systems), and technology is not the opposite of care. Instead, artefacts and humans are part of the same assemblage of care relationships (Latimer and López Gómez, 2019). Robots mediate the way in which we understand and perform caring relations, just as the robot is reconfigured by the assemblage of care relations in which it participates. As part of the same process, through the bidirectionality of care with artefacts (Lipp, 2022), robots also mediate the maintenance of the health system, how we attend to the saturation of its professionals, and how we ensure the care of others.
If we understand technologies as part of the framework that shapes our ways of seeing, saying, and feeling, and that mediates our relationship with the world, then care robots and the consensus around them (in this case, the debate on functionalities) are part of a certain way of conceptualising problems from certain collectives. In this sense, in Rancièrean terms, the idea of technological mediation would be associated with how artefacts materialise a certain distribution of the sensible.
Focusing the debate on the possible functionalities of the robot is only possible from the idea of two separate entities. Nevertheless, if care is produced in the assemblage in which humans and technologies participate, then the debate must focus on how human-robot configurations are designed to ensure the provision of good care. These configurations must also take into account the traditionally unfair and precarious working conditions of care labour, with high exploitation and low wages (especially for migrant women) (Lightman, 2022), as well as the high informality (mainly family-based) and irregularity among care workers (in health, social work, and domestic services) (Jokela, 2019). The lack of sufficient public support to organise and practice care and care work (Fine and Tronto, 2020) shapes how communities of human-machine configurations are designed and practiced. How are these questions integrated when we design how a robot and a nursing assistant collaborate to feed a person with severely reduced mobility? How does hospital management organise the coordination between healthcare staff and robotic systems to avoid the burnout of professionals? Which methodologies can roboticists use to integrate the patient’s care needs into the robot’s design? Do care robots materialise a highly precarious and irregular working context? Or are they materialising a distribution of the sensible in which these issues are not considered relevant?
Care as an alternative to the market
A possible idea that emerges when we look at disagreements is that, to design care robots for the common good, we need a model of human-machine configurations that is not based on the industry model. In this sense, instead of taking the logic of industrial production and efficiency as a reference, we can take care as the starting point for our social and political theories, offering an alternative to the currently prevailing paradigm of market fundamentalism (Tronto, 2018).
In the face of the logic of production, the ideological challenge of care is based on the idea that people, rather than being market creatures, are creatures who live in relationships of mutual care. For interdependent lives to be possible in the world in which we live, there have to be forms of care that take place somewhere in this world that make it possible to live in it (Puig de la Bellacasa, 2017). This idea of interdependence as a common element of our lives, and of the relationships we establish between ourselves and our environment, conflicts with the notion of the care crisis. Without care, our lives, artefacts, the world, and life as we understand it would not be possible. From this point of view, instead of talking about the crisis of care, we could talk about the crisis of the productive and efficiency logic of care—the crisis of the care market.
Thus, the political debate on care robots (i.e., care robots for the common good) does not take as its starting point the necessary search for an alternative for the provision of care, but rather the possible emergence of a discussion on the search for an alternative to the market as a regulator of relationships. From some perspectives, it is argued that the market economy has never fully existed and could be considered a utopian (or dystopian) ideal (Dupuy, 1991; Graeber, 2015; Polanyi, 2015; Tronto, 2018), because a society subordinated and tyrannised in the service of the market would become an accessory to the economic system, and the latter would end up destroying everything. Consequently, the market economy constantly generates political and social mechanisms that limit its logic to ensure its own survival. Thus, a debate about the role of the market of innovation in care does not refer to the functionalities of the robot but to how robotics is entangled in the organisation of care: How is the budget of health programmes distributed between technological innovation programmes and the improvement of the working conditions of nurses and auxiliary staff? How are the goodness and wellbeing of using care robots distributed among the different social groups? And what are the benefits?
Conclusions
To ensure that the introduction of caring robots responds to the common good, a shift in the ethics debate currently guided by robots’ potential functionalities is necessary. In this comment, we defend the need for a political debate on care with robots as a way to interrupt ‘the distribution of the sensible’. If the objective to develop care robots is to improve healthcare and provide good care, then the discussion needs to go beyond the question of functionalities.
Within this aim, we use Rancière’s notion of politics: politics as the disruption of the visible and describable order of the community. Robotics for the common good is thus a constant search for those experiences that are not contemplated in care robots or in the debate on robotics, a movement between consensus and dissent. The ultimate goal is neither disagreement itself nor the establishment of a series of recommendations and rules, but the movement between what makes care robotics possible in a market context and the rupture of the order of the sensible that this context offers. Thus, our discussion of robotics for the common good is based on countering the logic of the market with the priority of organising and providing care, on the way in which caring responsibility is entangled in the design of artefacts, and on new ways of approaching human-machine configurations.
Data availability
Data sharing is not applicable to this research as no data were generated or analysed.
References
Berendt B (2019) AI for the Common Good?! Pitfalls, challenges, and ethics pen-testing. Paladyn 10(1):44–65. https://doi.org/10.1515/pjbr-2019-0004
Coeckelbergh M (2015) Artificial agents, good care, and modernity. Theor Med Bioeth 36(4):265–277. https://doi.org/10.1007/s11017-015-9331-y
Danaher J (2021) Technology and the Value of Trust: Can we trust technology? Should we? Philosophical disquisitions. https://philosophicaldisquisitions.blogspot.com/2021/03/. Accessed 8 June 2023
Dupuy J-P (1991) El pánico. Gedisa Editorial, Barcelona
Fine M, Tronto J (2020) Care goes viral: Care theory and research confront the global covid-19 pandemic. Int J Care Caring 4(3):301–309. https://doi.org/10.1332/239788220X15924188322978
Graeber D (2015) La utopía de las normas. De la tecnología, la estupidez y los secretos placeres de la burocracia. Ariel, Barcelona
Gunkel DJ (2007) Thinking otherwise: ethics, technology and other subjects. Ethics Inf Technol 9(3):165–177. https://doi.org/10.1007/s10676-007-9137-3
Jenkins S, Draper H (2015) Care, monitoring, and companionship: views on care robots from older people and their carers. Int J Soc Robot 7(5):673–683. https://doi.org/10.1007/s12369-015-0322-y
Jokela M (2019) Patterns of precarious employment in a female-dominated sector in five welfare states-The case of paid domestic labor sector. Soc Polit 26(1):116–138. https://doi.org/10.1093/sp/jxy016
Latimer J, López Gómez D (2019) Intimate entanglements: affects, more-than-human intimacies and the politics of relations in science and technology. Sociol Rev 67(2):247–263. https://doi.org/10.1177/0038026119831623
Latour B (1998) From the World of Science to the World of Research? Science 280(5361):208–209. https://doi.org/10.1126/science.280.5361.208
Lightman N (2022) Comparing care regimes: worker characteristics and wage penalties in the global care chain. Soc Polit Int Stud Gender, State Soc 28(4):971–998. https://doi.org/10.1093/sp/jxaa008
Lipp B (2022) Caring for robots: how care comes to matter in human-machine interfacing. Soc Stud Sci 030631272210814. https://doi.org/10.1177/03063127221081446
Maibaum A, Bischof A, Hergesell J, Lipp B (2021) A critique of robotics in health care. AI Soc 37(2):467–477. https://doi.org/10.1007/s00146-021-01206-z
Matsuzaki H, Lindemann G (2016) The autonomy-safety-paradox of service robotics in Europe and Japan: a comparative analysis. AI Soc 31(4):501–517. https://doi.org/10.1007/s00146-015-0630-7
Polanyi K (2015) La Gran Transformación. Crítica del liberalismo económico. Virus editorial, Barcelona
Puig de la Bellacasa M (2017) Matters of care. Speculative Ethics in More Than Human Worlds. University of Minnesota Press
Rancière J (2004) Introducing disagreement. Angelaki 9(3):3–9. https://doi.org/10.1080/0969725042000307583
Rancière J (2019) El tiempo de la igualdad. Herder, Barcelona
Rancière J (1999) Disagreement. University of Minnesota Press
Santoni de Sio F, van Wynsberghe A (2016) When should we use care robots? The nature-of-activities approach. Sci Eng Ethics 22(6):1745–1760. https://doi.org/10.1007/s11948-015-9715-4
Savage N (2022) Robots rise to meet the challenge of caring for old people. Nature 601(7893):S8–S10. https://doi.org/10.1038/d41586-022-00072-z
Savela N, Turja T, Oksanen A (2018) Social Acceptance of Robots in Different Occupational Fields: A Systematic Literature Review. Int J Soc Robot. 10:493–502. https://doi.org/10.1007/s12369-017-0452-5
Sharkey A, Sharkey N (2011) Children, the elderly, and interactive robots: Anthropomorphism and deception in robot care and companionship. IEEE Robot Autom Mag 18(1):32–38. https://doi.org/10.1109/MRA.2010.940151
Sharkey N, Sharkey A (2012) The eldercare factory. Gerontology 58(3):282–288. https://doi.org/10.1159/000329483
de Sousa Santos B (2016) La incertidumbre: entre el miedo y la esperanza. In: de Sousa Santos B (ed) La difícil democracia. Una mirada desde la periferia europea. Akal, Madrid, pp 89–95
Sparrow R (2016) Robots in aged care: a dystopian future? Introduction. AI Soc 31(4):445–454. https://doi.org/10.1007/s00146-015-0625-4
Steels L, Lopez de Mantaras R (2018) The Barcelona declaration for the proper development and usage of artificial intelligence in Europe. AI Commun 31(6):485–494. https://doi.org/10.3233/AIC-180607
Suchman L, Weber J (2016) Human-machine autonomies. In: Bhuta N, Beck S, Geis R, Liu H-Y (eds) Autonomous weapons systems: law, ethics, policy. Cambridge University Press, Cambridge, pp 75–102
Topol EJ (2019) High-performance medicine: the convergence of human and artificial intelligence. Nat Med 25(1):44–56. https://doi.org/10.1038/s41591-018-0300-7
Tronto J (2018) La democracia del cuidado como antídoto frente al neoliberalismo. In: Domínguez Alcón C, Kohlen H, Tronto J (eds) El futuro del cuidado. Comprensión de la ética del cuidado y práctica enfermera. Ediciones San Juan de Dios, Barcelona, pp. 7–19
Vallès-Peris N, Domènech M (2020) Roboticists’ imaginaries of robots for care: the radical imaginary as a tool for an ethical discussion. Eng Stud 12(3):157–176. https://doi.org/10.1080/19378629.2020.1821695
Vallès-Peris N, Barat-Auleda O, Domènech M (2021) Robots in healthcare? What patients say. Int J Environ Res Public Health 18:9933. https://doi.org/10.3390/ijerph18189933
Vallor S (2011) Carebots and caregivers: sustaining the ethical ideal of care in the twenty-first century. Philos Technol 24(3):251–268. https://doi.org/10.1007/s13347-011-0015-x
Vandemeulebroucke T, Dierckx de Casterlé B, Gastmans C (2018) The use of care robots in aged care: a systematic review of argument-based ethics literature. Arch Gerontol Geriatr 74:15–25. https://doi.org/10.1016/j.archger.2017.08.014
Verbeek P-P (2006) Materializing morality: design ethics and technological mediation. Sci Technol Human Values 31(3):361–380. https://doi.org/10.1177/0162243905285847
Verbeek P-P (2004) What things do: philosophical reflections on technology, agency and design. Pennsylvania State University Press, University Park
Verbeek P-P (2008) Morality in design: design ethics and the morality of technological artifacts. In: Vermaas PE (ed) Philosophy and design. Springer, pp 91–103
van Wynsberghe A (2013) Designing robots for care: care centered value-sensitive design. Sci Eng Ethics 19(2):407–433. https://doi.org/10.1007/s11948-011-9343-6
Acknowledgements
This study was supported by “la Caixa” Foundation under agreement LCF/PR/RC17/10110004 and the postdoctoral fellowship programme “Margarita Salas”, Ministerio de Universidades (Spain).
Author information
Contributions
All authors contributed to the paper conception and design. The first draft of the manuscript was written by NV-P, and all authors commented and edited previous versions of the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethical approval
This article does not contain any studies with human participants performed by any of the authors.
Informed consent
This article does not contain any studies with human participants performed by any of the authors.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Vallès-Peris, N., Domènech, M. Care robots for the common good: ethics as politics. Humanit Soc Sci Commun 10, 345 (2023). https://doi.org/10.1057/s41599-023-01850-4