2024
 | Umbrello, Steven; Balistreri, Maurizio The Ethics of Space Travelling and Extraterrestrial Colonization Journal Article Forthcoming In: Ragion Pratica, Forthcoming. @article{Umbrello2024,
title = {The Ethics of Space Travelling and Extraterrestrial Colonization},
author = {Umbrello, Steven and Balistreri, Maurizio},
url = {https://www.researchgate.net/publication/366427045_The_Ethics_of_Space_Travelling_and_Extraterrestrial_Colonization_What_is_Moral_in_Space_is_also_Moral_on_Earth?channel=doi&linkId=63a19f5840358f78eb05962f&showFulltext=true},
year = {2024},
date = {2024-01-01},
journal = {Ragion Pratica},
abstract = {Mirko Garasic (2021) argued that space travel and, by extension, the colonization of other planets could morally justify using technologies and interventions capable of profoundly modifying the characteristics of astronauts and future Martian generations. According to Garasic, however, the fact that space interventions such as human (bio)enhancement or reproductive technologies such as artificial wombs may be morally justified does not mean that they are morally acceptable technologies to be used on Earth as well. Garasic's thesis is that we should resist the temptation to establish or reinforce a continuity between the ethical standards that apply to space travel and those that apply on Earth because what applies in space (or on other planets) does not necessarily apply on our planet. Garasic argues that in space, genetic enhancement interventions are morally acceptable, as they are essential for survival. On Earth, however, we can survive even without any form of (bio)enhancement. In a previous article (2022), we presented several arguments against Garasic's thesis regarding the exceptional morality of what is morally acceptable in space. In this article, we examine and reply to Garasic (2022), showing that he fails to defend his thesis from the objections we previously advanced. Garasic's mistakes are: (1) assuming nature (and consequently what is 'natural') as a normative point of reference that allows one to establish what is and is not moral; (2) defending, with respect to reproductive choices, a moral principle that is insufficiently demanding and therefore unsatisfactory (according to Garasic we should not bring into the world people who can have a quality of life or a condition of well-being superior to mere survival); and finally (3) not explaining clearly what it means to defend a principle of moral exceptionalism for space travel (Garasic does not explain whether this position applies to behavior in general, such that what is a virtue on Earth is a vice in space, or whether it is not a general principle at all; he makes this case only for enhancement and reproductive technologies). Through our analysis, we not only refute Garasic's position but also show that any attempt to address space travel issues as 'special' moral issues faces several difficulties and, ultimately, is doomed to failure.},
keywords = {},
pubstate = {forthcoming},
tppubtype = {article}
}
2023
 | Umbrello, Steven From Subjectivity to Objectivity: Bernard Lonergan's Philosophy as a Grounding for Value Sensitive Design Journal Article In: Scienza & Filosofia, vol. 29, pp. 36-44, 2023. @article{Umbrello2023e,
title = {From Subjectivity to Objectivity: Bernard Lonergan's Philosophy as a Grounding for Value Sensitive Design},
author = {Umbrello, Steven},
url = {https://www.scienzaefilosofia.com/wp-content/uploads/2023/07/SF_29.pdf},
year = {2023},
date = {2023-07-24},
urldate = {2023-07-18},
journal = {Scienza & Filosofia},
volume = {29},
pages = {36-44},
abstract = {This article explores the potential of Bernard Lonergan's philosophy of subjectivity as objectivity as a grounding for value sensitive design (VSD) and the design turn in applied ethics. The rapid pace of scientific and technological advancement has created a gap between technical abilities and our moral assessments of those abilities, calling for a reflection on the philosophical tools we have for applying ethics. In particular, applied ethics often presents interconnected problems that require a more general framework for ethical reflection. Lonergan's philosophy, which emphasizes the importance of self-understanding and self-transcendence in achieving objectivity, can provide a valuable perspective on VSD and the design turn in applied ethics. The article examines how Lonergan's philosophy can be applied to VSD and the design turn, and how scientific knowledge can be integrated into an ethics of science without reducing it to an external reflection. By adopting Lonergan's perspective, we can address the ethical challenges arising from scientific and technological advancements while promoting a more holistic approach to applied ethics.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
 | Umbrello, Steven; Bernstein, Michael J.; Vermaas, Pieter E.; Resseguir, Anaïs; Gonzalez, Gustavo; Porcari, Andrea; Grinbaum, Alexei; Adomaitis, Laurynas From speculation to reality: enhancing anticipatory ethics for emerging technologies (ATE) in practice Journal Article In: Technology in Society, vol. 74, no. 102325, 2023. @article{Umbrello2023f,
title = {From speculation to reality: enhancing anticipatory ethics for emerging technologies (ATE) in practice},
author = {Umbrello, Steven and Bernstein, Michael J. and Vermaas, Pieter E. and Resseguir, Anaïs and Gonzalez, Gustavo and Porcari, Andrea and Grinbaum, Alexei and Adomaitis, Laurynas},
url = {https://www.sciencedirect.com/science/article/pii/S0160791X23001306?via%3Dihub},
doi = {10.1016/j.techsoc.2023.102325},
year = {2023},
date = {2023-07-24},
urldate = {2023-07-31},
journal = {Technology in Society},
volume = {74},
number = {102325},
abstract = {Various approaches have emerged over the last several decades to meet the challenges and complexities of anticipating and responding to the potential impacts of emerging technologies. Although many of the existing approaches share similarities, they each have shortfalls. This paper takes as its object of study the Anticipatory Ethics for Emerging Technologies (ATE) approach to technology assessment, given that it was formulated to address many of the privations characterising parallel approaches. In practice, however, the ATE approach also presents certain areas for retooling, such as how it characterises levels and objects of analysis. This paper results from the work done with the TechEthos Horizon 2020 project in evaluating the ethical, legal, and social impacts of climate engineering, digital extended reality, and neurotechnologies. To meet the challenges these technology families present, this paper aims to enhance the ATE framework to encompass the variety of human processes and material forms, functions, and applications that comprise the socio-technical systems in which these technologies are embedded.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
 | Umbrello, Steven; Balistreri, Maurizio Human Enhancement and Reproductive Ethics on Generation Ships Journal Article Forthcoming In: Argumenta, Forthcoming. @article{Umbrello2023d,
title = {Human Enhancement and Reproductive Ethics on Generation Ships},
author = {Umbrello, Steven and Balistreri, Maurizio},
url = {https://doi.org/10.14275/2465-2334/20230.umb},
doi = {10.14275/2465-2334/20230.umb},
year = {2023},
date = {2023-07-18},
journal = {Argumenta},
abstract = {The past few years have seen a resurgence in public interest in space flight and travel. Spurred mainly by technology billionaires such as Elon Musk and Jeff Bezos, the topic poses unique scientific as well as ethical challenges. This paper looks at the concept of generation ships, conceptual behemoths whose goal is to bring a group of human settlers to distant exoplanets. These ships are designed to host multiple generations of people who will be born, live, and die on board long before the ships reach their destination. This paper takes reproductive ethics as its lens to look at how genetic enhancement interventions can and should be used not only to ensure that future generations of offspring on the ships, and in eventual exoplanet colonies, live a minimally good life, but also that their births are contingent on their living genuinely good and fulfilling lives. The paper makes the further claim that if such a thesis holds, it also does so for human enhancement on Earth.},
keywords = {},
pubstate = {forthcoming},
tppubtype = {article}
}
 | Umbrello, Steven Miglioramento e Potenziamento degli Operatori Sanitari Attraverso la Progettazione Journal Article In: NEU, vol. 42, iss. 2, pp. 53-59, 2023. @article{Umbrello0000,
title = {Miglioramento e Potenziamento degli Operatori Sanitari Attraverso la Progettazione},
author = {Umbrello, Steven},
url = {https://philpapers.org/rec/UMBMEP},
year = {2023},
date = {2023-07-01},
journal = {NEU},
volume = {42},
issue = {2},
pages = {53-59},
abstract = {Much of the literature on the use of artificial intelligence (AI) in the workplace, particularly in nursing and care services, has focused on the ethical problems that arise downstream of its implementation or on purely speculative grounds. Focusing on AI as an artefact separate from its design and its designers leaves nursing and care work, like any other sector, largely powerless in the face of AI's impacts. For this reason, a focus on design, and on how to design in a way that respects the values and professionalism of healthcare practitioners, is of the utmost importance. Various approaches have been proposed to achieve this goal, such as participatory design, universal design, inclusive design, and value-oriented design. Despite the advantages of these more established approaches, none of them focuses on the underlying narratives and metaphors that guide and influence how AI is perceived and thus how it affects professionals' work. This paper briefly examines a complementary approach, human-centred AI (HCAI), which proposes new narratives of enhancement and empowerment for users rather than narratives of replacement. If practitioners push for the adoption of this approach and these new narratives, they can increase, rather than abdicate, their responsibility in the workplace and strengthen their capacity to provide adequate care.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
 | Umbrello, Steven Sociotechnical Infrastructures of Dominion in Stefan L. Sorgner’s We Have Always Been Cyborgs Journal Article In: Etica & Politica / Ethics & Politics, vol. XXV, iss. 1, pp. 336-251, 2023. @article{Umbrello2023c,
title = {Sociotechnical Infrastructures of Dominion in Stefan L. Sorgner’s We Have Always Been Cyborgs},
author = {Umbrello, Steven},
url = {http://www2.units.it/etica/2023_1/UMBRELLO.pdf},
year = {2023},
date = {2023-05-01},
urldate = {2023-05-01},
journal = {Etica & Politica / Ethics & Politics},
volume = {XXV},
issue = {1},
pages = {336-251},
abstract = {In We Have Always Been Cyborgs (2021), Stefan L. Sorgner argues that, given the growing economic burden of desirable welfare programs, in order for Western democratic societies to continue to flourish it will be necessary that they establish some form of algocracy (i.e., governance by algorithm). This is argued to be necessary both in order to maintain the sustainability and efficiency of these programs, but also due to the fact that further integration of humans into technical systems provides the only effective means to bridge gaps in functionality and governance. However, Sorgner’s position is entirely insensitive to the design turn in applied ethics, which argues against the neutrality of technology, instead maintaining that technology and society co-construct each other with persistent feedback loops. This, I argue, is a problem for his account inasmuch as technologies, as they become more ubiquitous, likewise become pervasive and inextricable from our sociotechnical infrastructures. As such, less-than-beneficent forces, as current trends illustrate, can appropriate these seemingly banal infrastructures to gear them towards oppressive ends, thereby ultimately threatening the social democracies that Sorgner’s position aims to buttress.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
 | Balistreri, Maurizio; Umbrello, Steven Modifying the environment or human nature? What is the right choice for space travel and Mars colonisation? Journal Article In: NanoEthics, vol. 17, no. 5, 2023. @article{Balistreri2023,
title = {Modifying the environment or human nature? What is the right choice for space travel and Mars colonisation?},
author = {Balistreri, Maurizio and Umbrello, Steven},
doi = {10.1007/s11569-023-00440-7},
year = {2023},
date = {2023-04-22},
urldate = {2023-04-01},
journal = {NanoEthics},
volume = {17},
number = {5},
abstract = {As space travel and intentions to colonise other planets are becoming the norm in public debate and scholarship, we must also confront the technical and survival challenges that emerge from these hostile environments. This paper aims to evaluate the various arguments proposed to meet the challenges of human space travel and extraterrestrial planetary colonisation. In particular, two primary solutions have been presented in the literature as the most straightforward responses to the rigours of extraterrestrial survival and flourishing: (1) geoengineering, where the environment is modified to become hospitable to its inhabitants, and (2) human (bio)enhancement, where the genetic heritage of humans is modified to make them more resilient to the difficulties they may encounter as well as to permit them to thrive in non-terrestrial environments. Both positions have strong arguments supporting them but also severe philosophical and practical drawbacks when exposed to different circumstances. This paper aims to show that a principled stance where one position is accepted wholesale necessarily comes at the opportunity cost of the other where the other might be better suited, practically and morally. This paper concludes that case-by-case evaluations of the solutions to space travel and extraterrestrial colonisation are necessary to ensure moral congruency and the survival and flourishing of astronauts now and into the future.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
 | Umbrello, Steven Oggetti buoni Book Fandango, Rome, Italy, 2023, ISBN: 9788860449023. @book{Umbrello2023b,
title = {Oggetti buoni},
author = {Umbrello, Steven},
url = {https://www.fandangolibri.it/prodotto/oggetti-buoni/},
isbn = {9788860449023},
year = {2023},
date = {2023-04-07},
publisher = {Fandango},
address = {Rome, Italy},
series = {Icaro},
abstract = {We have certainly come a long way since the disasters of Three Mile Island, Chernobyl, and Fukushima, but what do we do with the waste that nuclear technologies produce? We can say “we will figure it out later” and that a solution will eventually emerge, but it would not be right to condemn future generations to bear the consequences of our choices. For this reason, the objects and technologies we build should be truly good. In recent years, several models of responsible design have been proposed: for example, design for all, participatory design, and inclusive design. Among existing design methods, one approach has, over its twenty-year history, attracted international attention: Value Sensitive Design, that is, design that is sensitive to values. Steven Umbrello describes the theoretical principles of Value Sensitive Design and shows why this design model is the most promising one for those engaged in building our future through responsible and intelligent innovation.},
keywords = {},
pubstate = {published},
tppubtype = {book}
}
 | Umbrello, Steven Emotions and Automation in a High-Tech Workplace: A Commentary Journal Article In: Philosophy & Technology, vol. 36, no. 12, 2023. @article{Umbrello2023,
title = {Emotions and Automation in a High-Tech Workplace: A Commentary},
author = {Umbrello, Steven},
url = {https://link.springer.com/article/10.1007/s13347-023-00615-w},
doi = {10.1007/s13347-023-00615-w},
year = {2023},
date = {2023-03-02},
journal = {Philosophy & Technology},
volume = {36},
number = {12},
abstract = {In a recent article, Madelaine Ley evaluates the future of work, specifically robotised workplaces, via the lens of care ethics. Like many proponents of care ethics, Ley draws on the approach and its emphasis on relationality to understand ethical action necessary for worker wellbeing. Her paper aims to fill a research gap by shifting away from the traditional contexts in which care ethics is employed, i.e., health and care contexts and instead appropriates the approach to tackle the sociotechnicity of robotics and how caring should be integrated into non-traditional contexts. This paper comments on that of Ley’s, making the case that the author does, in fact, achieve this end while still leaving areas of potential future research open to buttressing the approach she presents.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
 | Seskir, Zeki; Umbrello, Steven; Coenen, Christopher; Vermaas, Pieter Democratization of Quantum Technologies Journal Article In: Quantum Science and Technology, vol. 8, iss. 2, no. 024005, 2023. @article{Seskir2023,
title = {Democratization of Quantum Technologies},
author = {Seskir, Zeki and Umbrello, Steven and Coenen, Christopher and Vermaas, Pieter},
url = {https://iopscience.iop.org/article/10.1088/2058-9565/acb6ae},
doi = {10.1088/2058-9565/acb6ae},
year = {2023},
date = {2023-02-07},
urldate = {2023-02-07},
journal = {Quantum Science and Technology},
volume = {8},
number = {024005},
issue = {2},
abstract = {As quantum technologies (QT) advance, their potential impact on and relation with society has been developing into an important issue for exploration. In this paper, we investigate the topic of democratization in the context of QT, particularly quantum computing. The paper contains three main sections. First, we briefly introduce different theories of democracy (participatory, representative, and deliberative) and how the concept of democratization can be formulated with respect to whether democracy is taken as an intrinsic or instrumental value. Second, we give an overview of how the concept of democratization is used in the QT field. Democratization is mainly adopted by companies working on quantum computing and used in a very narrow understanding of the concept. Third, we explore various narratives and counter-narratives concerning democratization in QT. Finally, we explore the general efforts of democratization in QT such as different forms of access, formation of grassroot communities and special interest groups, the emerging culture of manifesto writing, and how these can be located within the different theories of democracy. In conclusion, we argue that although the ongoing efforts in the democratization of QT are necessary steps towards the democratization of this set of emerging technologies, they should not be accepted as sufficient to argue that QT is a democratized field. We argue that more reflexivity and responsiveness regarding the narratives and actions adopted by the actors in the QT field and making the underlying assumptions of ongoing efforts on democratization of QT explicit, can result in a better technology for society.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2022
 | Allahabadi, H. Assessing Trustworthy AI in times of COVID-19. Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients Journal Article In: IEEE Transactions on Technology and Society, vol. 3, iss. 4, pp. 272–289, 2022. @article{Allahabadi2022,
title = {Assessing Trustworthy AI in times of COVID-19. Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients},
author = {H. Allahabadi et al.},
url = {https://doi.org/10.1109/TTS.2022.3195114},
doi = {10.1109/TTS.2022.3195114},
year = {2022},
date = {2022-12-01},
urldate = {2022-08-24},
journal = {IEEE Transactions on Technology and Society},
volume = {3},
issue = {4},
pages = {272–289},
abstract = {The paper’s main contributions are twofold: to demonstrate how to apply the European Union High-Level Expert Group’s (EU HLEG) general guidelines for trustworthy AI in practice in the domain of healthcare; and to investigate the research question of what “trustworthy AI” means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment to evaluate the trustworthiness of an AI system for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients, developed and verified by an interdisciplinary team with members from academia, public hospitals, and industry in a time of pandemic. The AI system aims to help radiologists to estimate and communicate the severity of damage in a patient’s lung from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia (Italy) since December 2020, during the pandemic. The methodology we have applied for our post-hoc assessment, called Z-Inspection, uses socio-technical scenarios to identify ethical, technical and domain-specific issues in the use of the AI system in the context of the pandemic.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
 | Balistreri, Maurizio; Umbrello, Steven Space travel does not constitute a condition of moral exceptionality. That which obtains in space obtains also on Earth! Journal Article In: Medicina e Morale, vol. 71, iss. 3, pp. 311-321, 2022. @article{Balistreri2022b,
title = {Space travel does not constitute a condition of moral exceptionality. That which obtains in space obtains also on Earth!},
author = {Maurizio Balistreri and Steven Umbrello},
url = {https://doi.org/10.4081/mem.2022.1213},
doi = {10.4081/mem.2022.1213},
year = {2022},
date = {2022-11-03},
urldate = {2022-07-01},
journal = {Medicina e Morale},
volume = {71},
issue = {3},
pages = {311-321},
abstract = {There is a growing body of scholarship that is addressing the ethics, in particular, the bioethics of space travel and colonisation. Naturally, a variety of perspectives concerning the ethical issues and moral permissibility of different technological strategies for confronting the rigours of space travel and colonisation have emerged in the debate. Approaches ranging from genetically enhancing human astronauts to modifying the environments of planets to make them hospitable have been proposed as methods. This paper takes a look at a critique of human bioenhancement proposed by Mirko Garasic who argues that the bioenhancement of human astronauts is not only functional but necessary and thus morally permissible. However, he further claims that the bioethical arguments proposed for the context of space do not apply to the context of Earth. This paper forwards three arguments for how Garasic's views are philosophically dubious: (1) when he examines our responsibility towards future generations he refers to a moral principle (which we will call the principle of mere survival) which, besides being vague, is not morally acceptable; (2) the idea that human bioenhancement is not natural is not only debatable but morally irrelevant; and (3) it is not true that the situations that may arise in space travel cannot occur on Earth. We conclude that not only is the (bio)enhancement of humans on Earth permissible but perhaps even necessary in certain circumstances.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
 | Umbrello, Steven Should We Reset? A Review of Klaus Schwab and Thierry Malleret’s ‘COVID-19: The Great Reset’ Journal Article In: Journal of Value Inquiry, vol. 56, iss. 4, pp. 693-700, 2022. @article{Umbrello2021g,
title = {Should We Reset? A Review of Klaus Schwab and Thierry Malleret’s ‘COVID-19: The Great Reset’},
author = {Steven Umbrello},
url = {https://link.springer.com/article/10.1007/s10790-021-09794-1},
doi = {10.1007/s10790-021-09794-1},
year = {2022},
date = {2022-10-18},
urldate = {2021-02-17},
journal = {Journal of Value Inquiry},
volume = {56},
issue = {4},
pages = {693-700},
abstract = {More than simply the title of the book, The Great Reset is a theoretical construct appropriated by various communities. While popular primarily within the intellectual dark web and conspiracy circles, the term has been given more recent attention from academic scholarship taking such an approach to seriously revisioning political economy (Shannon Vattikuti in “The Great Green Reset of Global Economies: A Golden Opportunity for Environmental Change and Social Rehabilitation.” Earth and Space Science Open Archive ESSOAr [2020]). The present volume is coauthored by Klaus Schwab, founder and Executive Chairman of the World Economic Forum (WEF), and Thierry Malleret. The former is the author of similar works on which this volume expands (most famously the 2017 book The Fourth Industrial Revolution), while the latter is the managing partner of the Monthly Barometer.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
 | Balistreri, Maurizio; Umbrello, Steven Should the colonisation of space be based on reproduction? Critical considerations on the choice of having a child in space Journal Article In: Journal of Responsible Technology, vol. 11, iss. October, pp. 1-7, 2022. @article{Balistreri2022c,
title = {Should the colonisation of space be based on reproduction? Critical considerations on the choice of having a child in space},
author = {Maurizio Balistreri and Steven Umbrello},
url = {https://doi.org/10.1016/j.jrt.2022.100040},
doi = {10.1016/j.jrt.2022.100040},
year = {2022},
date = {2022-10-01},
journal = {Journal of Responsible Technology},
volume = {11},
issue = {October},
pages = {1-7},
abstract = {This paper aims to argue for the thesis that it is not a priori morally justified that the first phase of space colonisation is based on sexual reproduction. We ground this position on the argument that, at least in the first colonisation settlements, those born in space may not have a good chance of having a good life. This problem does not depend on the fact that life on another planet would have to deal with issues such as solar radiation or with the decrease or entire absence of the force of gravity. These issues could plausibly be addressed given that the planets or settlements we will feasibly colonise could be completely transformed through geoengineering processes. Likewise, the ability of humans to live in space could be enhanced through genetic modification interventions. Even if, however, the problems concerning survival in space were solved, we think that, at least in the first period of colonisation of space or other planets, giving birth to children in space could be a morally irresponsible choice since, we argue, the life we could give them might not be good enough. We contend that this is the case because of what is at stake when we decide to have a baby: we argue that it is not morally right to be content that our children have a minimally sufficient life worth living; before we give birth to children in space, we should make sure we can give them a reasonable chance of having a good life. This principle applies both on Earth - at least where you can choose - and for space travel.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
 | Dennis, Matthew; Ishmaev, Georgy; Umbrello, Steven; van den Hoven, Jeroen (Ed.) Values for a Post-Pandemic Future Collection Springer, 2022, ISBN: 9783031084232. @collection{Dennis2023,
title = {Values for a Post-Pandemic Future},
editor = {Dennis, Matthew; Ishmaev, Georgy; Umbrello, Steven; van den Hoven, Jeroen},
url = {https://link.springer.com/book/9783031084232},
isbn = {9783031084232},
year = {2022},
date = {2022-09-11},
urldate = {2022-09-11},
booktitle = {Values for a Post-Pandemic Future},
pages = {254},
publisher = {Springer},
series = {Philosophy of Engineering and Technology},
abstract = {This Open Access book shows how value sensitive design (VSD), responsible innovation, and comprehensive engineering can guide the rapid development of technological responses to the COVID-19 crisis. Responding to the ethical challenges of data-driven technologies and other tools requires thinking about values in the context of a pandemic as well as in a post-COVID world. Instilling values must be prioritized from the beginning, not only in the emergency response to the pandemic, but in how to proceed with new societal precedents materializing, new norms of health surveillance, and new public health requirements.
The contributors with expertise in VSD bridge the gap between ethical acceptability and social acceptance. By addressing ethical acceptability and societal acceptance together, VSD guides COVID-technologies in a way that strengthens their ability to fight the virus, and outlines pathways for the resolution of moral dilemmas. This volume provides diachronic reflections on the crisis response to address long-term moral consequences in light of the post-pandemic future. Both contact-tracing apps and immunity passports must work in a multi-system environment, and will be required to succeed alongside institutions, incentive structures, regulatory bodies, and current legislation. This text appeals to students, researchers and importantly, professionals in the field.},
keywords = {},
pubstate = {published},
tppubtype = {collection}
}
 | Balistreri, Maurizio Sex Robots: Love in the Age of Machines Book Trivent Publishing, 2022, ISBN: 978-615-6405-40-1 . @book{Balistreri2022,
title = {Sex Robots: Love in the Age of Machines},
author = {Maurizio Balistreri},
editor = {Steven Umbrello},
url = {https://trivent-publishing.eu/home/144-187-sex-robots-love-in-the-age-of-machines.html#/29-format-e_book},
isbn = {978-615-6405-40-1},
year = {2022},
date = {2022-08-08},
urldate = {2022-05-01},
publisher = {Trivent Publishing},
abstract = {Professor Maurizio Balistreri introduces us to the fascinating world of sex of the future by addressing all the ethical issues that the large-scale commercialization of sex robots will raise without taboos and judgements. What will become of love if all our sexual relationships are conducted with machines? What will happen to the world of paid sex and pornography? Will sex robots increase or decrease sexual violence? In addition to confronting the international debate on the moral acceptability of sex robots, the book examines the most recent studies on violent video games and pornography, questioning the widespread belief that playing violent games or witnessing violent representations corrupts people and makes them violent. Not only could sex robots be an essential tool for expressing and exploring our most forbidden sexual fantasies, but they could also be used to treat sex offenders and paedophiles. “Sex Robots” is a book that questions our prejudices towards sex robots with clarity and simplicity, helping us reason and reflect on a future that is already present, in the awareness that robots will change the world and our lives.},
keywords = {},
pubstate = {published},
tppubtype = {book}
}
 | Umbrello, Steven Designed for Death: Controlling Killer Robots Book Trivent Publishing, Etele út 59-61 H-1119 Budapest, Hungary, 2022, ISBN: 978-615-6405-38-8 . @book{Umbrello2022,
title = {Designed for Death: Controlling Killer Robots},
author = {Steven Umbrello},
url = {https://trivent-publishing.eu/home/139-182-designed-for-death-controlling-killer-robots.html#/26-cover-hardcover},
isbn = {978-615-6405-38-8},
year = {2022},
date = {2022-07-26},
urldate = {2022-07-26},
publisher = {Trivent Publishing},
address = {Etele út 59-61 H-1119 Budapest, Hungary},
series = {Ethics and Robotics},
abstract = {Autonomous weapons systems, often referred to as ‘killer robots’, have been a hallmark of popular imagination for decades. However, with the inexorable advance of artificial intelligence (AI) systems and robotics, killer robots are quickly becoming a reality. These lethal technologies can learn, adapt, and potentially make life and death decisions on the battlefield with little-to-no human involvement. This naturally leads to not only legal but ethical concerns as to whether we can meaningfully control such machines, and if so, then how. Such concerns are made even more poignant by the ever-present fear that something may go wrong, and the machine may carry out some action(s) violating the ethics or laws of war.
Researchers, policymakers, and designers are caught in the quagmire of how to approach these highly controversial systems and to figure out what exactly it means to have meaningful human control over them, if at all.
In Designed for Death, Dr Steven Umbrello aims to not only produce a realistic but also an optimistic guide for how, with human values in mind, we can begin to design killer robots. Drawing on the value sensitive design (VSD) approach to technology innovation, Umbrello argues that context is king and that a middle path for designing killer robots is possible if we consider both ethics and design as fundamentally linked. Umbrello moves beyond the binary debates of whether or not to prohibit killer robots and instead offers a more nuanced perspective of which types of killer robots may be both legally and ethically acceptable, when they would be acceptable, and how to design for them.},
keywords = {},
pubstate = {published},
tppubtype = {book}
}
 | Umbrello, Steven The Role of Engineers in Harmonising Human Values for AI Systems Design Journal Article In: Journal of Responsible Technology, vol. 10, iss. July, no. 100031, 2022. @article{Umbrello2022b,
title = {The Role of Engineers in Harmonising Human Values for AI Systems Design},
author = {Steven Umbrello},
url = {https://doi.org/10.1016/j.jrt.2022.100031},
doi = {10.1016/j.jrt.2022.100031},
year = {2022},
date = {2022-04-12},
urldate = {2022-04-12},
journal = {Journal of Responsible Technology},
volume = {10},
number = {100031},
issue = {July},
abstract = {Most engineers work within social structures governing and governed by a set of values that primarily emphasise economic concerns. The majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with those societies. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation but be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this broad stakeholder group, engineers must adopt a systems thinking approach that allows them to understand the sociotechnicity of artificial intelligence systems across sociocultural domains. It claims that value sensitive design, and envisioning cards in particular, provides a solid first step towards helping designers harmonise human values, understood across spatiotemporal boundaries, with economic values, rather than the former coming at the opportunity cost of the latter.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Most engineers work within social structures governing and governed by a set of values that primarily emphasise economic concerns. The majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with those societies. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation but be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this broad stakeholder group, engineers must adopt a systems thinking approach that allows them to understand the sociotechnicity of artificial intelligence systems across sociocultural domains. It claims that value sensitive design, and envisioning cards in particular, provides a solid first step towards helping designers harmonise human values, understood across spatiotemporal boundaries, with economic values, rather than the former coming at the opportunity cost of the latter. |
 | Brooks, Laurence; Cannizzaro, Sara; Umbrello, Steven; Bernstein, Michael J.; Richardson, Kathleen Ethics of climate engineering: Don’t forget technology has an ethical aspect too Journal Article In: International Journal of Information Management, vol. 63, no. 102449, 2022. @article{Brooks2021,
title = {Ethics of climate engineering: Don’t forget technology has an ethical aspect too},
author = {Laurence Brooks and Sara Cannizzaro and Steven Umbrello and Michael J. Bernstein and Kathleen Richardson},
url = {https://doi.org/10.1016/j.ijinfomgt.2021.102449},
doi = {10.1016/j.ijinfomgt.2021.102449},
year = {2022},
date = {2022-04-01},
urldate = {2021-11-10},
journal = {International Journal of Information Management},
volume = {63},
number = {102449},
abstract = {Climate change may well be the most important issue of the 21st century and the world’s response, in the form of ‘Climate Engineering’, is therefore of equal pre-eminent importance. However, while there are technological challenges, there are equally important ethical challenges that these technologies generate. Governments, funding agencies and non-governmental organisations increasingly recognise the importance of incorporating ethics into the development of emerging technologies (for example, within the EU draft legislation on AI). As the world faces the global challenge of climate change, there are urgent efforts to develop strategies so that responses to the climate problems do not reproduce more of the same. Ethical values are fundamental to this process from the outset and need highlighting. Hence, this paper analyses a series of ethical codes, frameworks and guidelines for the new and emerging technologies of climate engineering (CE) through a review of both published academic literature and grey literature from industry, government, and non-governmental organisations (NGOs). This paper was developed as part of a collaboration with international partners from TechEthos (TechEthos receives funding from the EU H2020 research and innovation programme under Grant Agreement No 101006249; Ethics of Emerging Technologies), an EU-funded project that deals with the ethics of the new and emerging technologies anticipated to have high socio-economic impact. Our findings identified ethical considerations including autonomy, freedom, integrity, human rights and privacy in the developmental process of climate engineering, while a poverty of ethical values reflecting dignity and trust was noted.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Climate change may well be the most important issue of the 21st century and the world’s response, in the form of ‘Climate Engineering’, is therefore of equal pre-eminent importance. However, while there are technological challenges, there are equally important ethical challenges that these technologies generate. Governments, funding agencies and non-governmental organisations increasingly recognise the importance of incorporating ethics into the development of emerging technologies (for example, within the EU draft legislation on AI). As the world faces the global challenge of climate change, there are urgent efforts to develop strategies so that responses to the climate problems do not reproduce more of the same. Ethical values are fundamental to this process from the outset and need highlighting. Hence, this paper analyses a series of ethical codes, frameworks and guidelines for the new and emerging technologies of climate engineering (CE) through a review of both published academic literature and grey literature from industry, government, and non-governmental organisations (NGOs). This paper was developed as part of a collaboration with international partners from TechEthos (TechEthos receives funding from the EU H2020 research and innovation programme under Grant Agreement No 101006249; Ethics of Emerging Technologies), an EU-funded project that deals with the ethics of the new and emerging technologies anticipated to have high socio-economic impact. Our findings identified ethical considerations including autonomy, freedom, integrity, human rights and privacy in the developmental process of climate engineering, while a poverty of ethical values reflecting dignity and trust was noted. |
 | Caffo, Leonardo The Contemporary Posthuman Book Ethics International Press, Cambridge, UK, 2022, ISBN: 978-1-80441-010-3. @book{Caffo2022,
title = {The Contemporary Posthuman},
author = {Leonardo Caffo},
editor = {Umbrello, Steven},
url = {https://ethicspress.com/products/the-contemporary-posthuman?_pos=1&_sid=9ced79ae0&_ss=r},
isbn = {978-1-80441-010-3},
year = {2022},
date = {2022-04-01},
urldate = {2022-04-01},
publisher = {Ethics International Press},
address = {Cambridge, UK},
abstract = {The interest in what can be considered ‘posthumanism’ has surged over the past few years. There is no surprise as to why, given the urgency and imminence of a likely sixth mass extinction event, and the catastrophic consequences of global warming. These processes, all of which fundamentally rest on the foundations of human practices and abuses, are forcing us to rethink our place in existence.
The foundations of this position have a history firmly rooted in the daily practices and beliefs of Western cultures. The Contemporary Posthuman confronts these assumptions of truth, head-on. The author follows his conceptual journey with practical steps for putting his philosophy into practice, by drawing on philosophy, design, art, and architecture.},
keywords = {},
pubstate = {published},
tppubtype = {book}
}
The interest in what can be considered ‘posthumanism’ has surged over the past few years. There is no surprise as to why, given the urgency and imminence of a likely sixth mass extinction event, and the catastrophic consequences of global warming. These processes, all of which fundamentally rest on the foundations of human practices and abuses, are forcing us to rethink our place in existence.
The foundations of this position have a history firmly rooted in the daily practices and beliefs of Western cultures. The Contemporary Posthuman confronts these assumptions of truth, head-on. The author follows his conceptual journey with practical steps for putting his philosophy into practice, by drawing on philosophy, design, art, and architecture. |
 | Vernima, Susanne; Bauer, Harald; Rauch, Erwin; Ziegler, Marianne Thejls; Umbrello, Steven A value sensitive design approach for designing AI-based worker assistance systems in manufacturing Journal Article In: Procedia Computer Science, vol. 200, pp. 505-516, 2022. @article{Vernima2022,
title = {A value sensitive design approach for designing AI-based worker assistance systems in manufacturing},
author = {Vernima, Susanne and Bauer, Harald and Rauch, Erwin and Ziegler, Marianne Thejls and Umbrello, Steven},
url = {https://www.sciencedirect.com/science/article/pii/S1877050922002575},
doi = {10.1016/j.procs.2022.01.248},
year = {2022},
date = {2022-03-08},
journal = {Procedia Computer Science},
volume = {200},
pages = {505-516},
abstract = {Although artificial intelligence has been given an unprecedented amount of attention in both the public and academic domains in the last few years, its convergence with other transformative technologies like cloud computing, robotics, and augmented/virtual reality is predicted to exacerbate its impacts on society. The adoption and integration of these technologies within industry and manufacturing spaces is a fundamental part of what is called Industry 4.0, or the Fourth Industrial Revolution. The impacts of this paradigm shift on the human operators who continue to work alongside and symbiotically with these technologies in the industry bring with them novel ethical issues. Therefore, how to design these technologies for human values becomes the critical area of intervention. This paper takes up the case study of robotic AI-based assistance systems to explore the potential value implications that emerge due to current design practices and use. The design methodology known as Value Sensitive Design (VSD) is proposed as a sufficient starting point for designing these technologies for human values to address these issues.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Although artificial intelligence has been given an unprecedented amount of attention in both the public and academic domains in the last few years, its convergence with other transformative technologies like cloud computing, robotics, and augmented/virtual reality is predicted to exacerbate its impacts on society. The adoption and integration of these technologies within industry and manufacturing spaces is a fundamental part of what is called Industry 4.0, or the Fourth Industrial Revolution. The impacts of this paradigm shift on the human operators who continue to work alongside and symbiotically with these technologies in the industry bring with them novel ethical issues. Therefore, how to design these technologies for human values becomes the critical area of intervention. This paper takes up the case study of robotic AI-based assistance systems to explore the potential value implications that emerge due to current design practices and use. The design methodology known as Value Sensitive Design (VSD) is proposed as a sufficient starting point for designing these technologies for human values to address these issues. |
 | Umbrello, Steven Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles Journal Article In: International Journal of Social Robotics, vol. 14, iss. 2, pp. 313–322, 2022. @article{Umbrello2021m,
title = {Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles},
author = {Steven Umbrello},
url = {https://doi.org/10.1007/s12369-021-00790-w},
doi = {10.1007/s12369-021-00790-w},
year = {2022},
date = {2022-03-01},
urldate = {2021-05-16},
journal = {International Journal of Social Robotics},
volume = {14},
issue = {2},
pages = {313–322},
abstract = {One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways of autonomous systems. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper proposes the Value Sensitive Design (VSD) approach as a principled framework for incorporating these values in design. The example of autonomous vehicles is used as a case study for how VSD offers a systematic way for engineering teams to formally incorporate existing technical solutions towards ethical design, while simultaneously remaining pliable to emerging issues and needs. It is concluded that the VSD methodology offers at least a strong enough foundation from which designers can begin to anticipate design needs and formulate salient design flows that can be adapted to changing ethical landscapes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways of autonomous systems. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper proposes the Value Sensitive Design (VSD) approach as a principled framework for incorporating these values in design. The example of autonomous vehicles is used as a case study for how VSD offers a systematic way for engineering teams to formally incorporate existing technical solutions towards ethical design, while simultaneously remaining pliable to emerging issues and needs. It is concluded that the VSD methodology offers at least a strong enough foundation from which designers can begin to anticipate design needs and formulate salient design flows that can be adapted to changing ethical landscapes. |
 | Capasso, Marianna; Umbrello, Steven Responsible Nudging for Social Good: New Healthcare Skills for AI-Driven Digital Personal Assistants Journal Article In: Medicine, Health Care and Philosophy, vol. 25, iss. 1, pp. 11-22, 2022. @article{Capasso2021b,
title = {Responsible Nudging for Social Good: New Healthcare Skills for AI-Driven Digital Personal Assistants},
author = {Marianna Capasso and Steven Umbrello},
url = {https://doi.org/10.1007/s11019-021-10062-z},
doi = {10.1007/s11019-021-10062-z},
year = {2022},
date = {2022-03-01},
urldate = {2021-11-25},
journal = {Medicine, Health Care and Philosophy},
volume = {25},
issue = {1},
pages = {11-22},
abstract = {Traditional medical practices and relationships are changing given the widespread adoption of AI-driven technologies across the various domains of health and healthcare. In many cases, these new technologies are not specific to the field of healthcare. Still, they are existent, ubiquitous, and commercially available systems upskilled to integrate these novel care practices. Given the widespread adoption, coupled with the dramatic changes in practices, new ethical and social issues emerge due to how these systems nudge users into making decisions and changing behaviours. This article discusses how these AI-driven systems pose particular ethical challenges with regards to nudging. To confront these issues, the value sensitive design (VSD) approach is adopted as a principled methodology that designers can adopt to design these systems to avoid harming and contribute to the social good. The AI for Social Good (AI4SG) factors are adopted as the norms constraining maleficence. In contrast, higher-order values specific to AI, such as those from the EU High-Level Expert Group on AI and the United Nations Sustainable Development Goals, are adopted as the values to be promoted as much as possible in design. The use case of Amazon Alexa's Healthcare Skills is used to illustrate this design approach. It provides an exemplar of how designers and engineers can begin to orientate their design programs of these technologies towards the social good.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Traditional medical practices and relationships are changing given the widespread adoption of AI-driven technologies across the various domains of health and healthcare. In many cases, these new technologies are not specific to the field of healthcare. Still, they are existent, ubiquitous, and commercially available systems upskilled to integrate these novel care practices. Given the widespread adoption, coupled with the dramatic changes in practices, new ethical and social issues emerge due to how these systems nudge users into making decisions and changing behaviours. This article discusses how these AI-driven systems pose particular ethical challenges with regards to nudging. To confront these issues, the value sensitive design (VSD) approach is adopted as a principled methodology that designers can adopt to design these systems to avoid harming and contribute to the social good. The AI for Social Good (AI4SG) factors are adopted as the norms constraining maleficence. In contrast, higher-order values specific to AI, such as those from the EU High-Level Expert Group on AI and the United Nations Sustainable Development Goals, are adopted as the values to be promoted as much as possible in design. The use case of Amazon Alexa's Healthcare Skills is used to illustrate this design approach. It provides an exemplar of how designers and engineers can begin to orientate their design programs of these technologies towards the social good. |
2021
|
 | Ivanov, Stanislav; Umbrello, Steven The Ethics of Artificial Intelligence and Robotization in Tourism and Hospitality – A Conceptual Framework and Research Agenda Journal Article In: Journal of Smart Tourism, vol. 1, iss. 4, pp. 9-18, 2021. @article{Ivanov2021,
title = {The Ethics of Artificial Intelligence and Robotization in Tourism and Hospitality – A Conceptual Framework and Research Agenda},
author = {Ivanov, Stanislav and Umbrello, Steven},
url = {https://doi.org/10.52255/smarttourism.2021.1.4.3},
doi = {10.52255/smarttourism.2021.1.4.3},
year = {2021},
date = {2021-12-22},
urldate = {2021-12-22},
journal = {Journal of Smart Tourism},
volume = {1},
issue = {4},
pages = {9-18},
abstract = {The impacts that AI and robotics systems can and will have on our everyday lives are already making themselves manifest. However, there is a lack of research on the ethical impacts and means for amelioration regarding AI and robotics within tourism and hospitality. Given the importance of designing technologies that cross national boundaries, and given that the tourism and hospitality industry is fundamentally predicated on multicultural interactions, this is an area of research and application that requires particular attention. Specifically, tourism and hospitality have a range of context-unique stakeholders that need to be accounted for if the salient design of AI systems is to be achieved. This paper adopts a stakeholder approach to develop the conceptual framework to centralize human values in designing and deploying AI and robotics systems in tourism and hospitality. The conceptual framework includes several layers – ‘Human-human-AI’ interaction level, direct and indirect stakeholders, and the macroenvironment. The ethical issues on each layer are outlined as well as some possible solutions to them. Additionally, the paper develops a research agenda on the topic.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The impacts that AI and robotics systems can and will have on our everyday lives are already making themselves manifest. However, there is a lack of research on the ethical impacts and means for amelioration regarding AI and robotics within tourism and hospitality. Given the importance of designing technologies that cross national boundaries, and given that the tourism and hospitality industry is fundamentally predicated on multicultural interactions, this is an area of research and application that requires particular attention. Specifically, tourism and hospitality have a range of context-unique stakeholders that need to be accounted for if the salient design of AI systems is to be achieved. This paper adopts a stakeholder approach to develop the conceptual framework to centralize human values in designing and deploying AI and robotics systems in tourism and hospitality. The conceptual framework includes several layers – ‘Human-human-AI’ interaction level, direct and indirect stakeholders, and the macroenvironment. The ethical issues on each layer are outlined as well as some possible solutions to them. Additionally, the paper develops a research agenda on the topic. |
 | Umbrello, Steven Shikake: The Japanese Art of Shaping Behavior Through Design Journal Article In: International Journal of Art, Culture and Design Technologies (IJACDT), vol. 10, iss. 2, pp. 57-60, 2021. @article{Umbrello2021j,
title = {Shikake: The Japanese Art of Shaping Behavior Through Design},
author = {Steven Umbrello},
url = {https://www.igi-global.com/pdf.aspx?tid=297020&ptid=254309&ctid=17&title=review%20of%20shikake:%20the%20japanese%20art%20of%20shaping%20behavior%20through%20design&isxn=9781799862314},
year = {2021},
date = {2021-12-01},
urldate = {2021-07-01},
journal = {International Journal of Art, Culture and Design Technologies (IJACDT)},
volume = {10},
issue = {2},
pages = {57-60},
abstract = {A new book by Naohiro Matsumura is reviewed. Shikake are described as designs that ‘open up’ new options to people and that positively allow them to freely choose those options. By providing numerous examples and illustrations, Matsumura explores the motivations, philosophy and implementations of Shikake in the real world. Aimed at the general reader, this book is approachable for a wide audience, from the general-interest reader who wishes to understand nudging as it emerges from the history of Japanese design, to the specialist designer who wishes to employ nudging techniques in a positive and fair manner.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
A new book by Naohiro Matsumura is reviewed. Shikake are described as designs that ‘open up’ new options to people and that positively allow them to freely choose those options. By providing numerous examples and illustrations, Matsumura explores the motivations, philosophy and implementations of Shikake in the real world. Aimed at the general reader, this book is approachable for a wide audience, from the general-interest reader who wishes to understand nudging as it emerges from the history of Japanese design, to the specialist designer who wishes to employ nudging techniques in a positive and fair manner. |
 | Umbrello, Steven Towards a Value Sensitive Design Framework for Attaining Meaningful Human Control over Autonomous Weapons Systems PhD Thesis 2021, ISBN: 979-12-200-7923-5. @phdthesis{Umbrello2021f,
title = {Towards a Value Sensitive Design Framework for Attaining Meaningful Human Control over Autonomous Weapons Systems},
author = {Steven Umbrello},
url = {https://www.researchgate.net/publication/347678398_Towards_a_Value_Sensitive_Design_Framework_for_Attaining_Meaningful_Human_Control_over_Autonomous_Weapons_Systems},
isbn = {979-12-200-7923-5},
year = {2021},
date = {2021-11-26},
urldate = {2021-11-01},
publisher = {Consorzio FINO},
institution = {Università degli Studi di Torino},
abstract = {The international debate on the ethics and legality of autonomous weapon systems (AWS) as well as the call for a ban are primarily focused on the nebulous concept of fully autonomous AWS. More specifically, on AWS that are capable of target selection and engagement without human supervision or control. This thesis argues that such a conception of autonomy is divorced both from military planning and decision-making operations as well as the design requirements that govern AWS engineering and subsequently the tracking and tracing of moral responsibility. To do this, this thesis marries two different levels of meaningful human control (MHC), termed levels of abstraction, to couple military operations with design ethics. In doing so, this thesis argues that the contentious notion of ‘full’ autonomy is not problematic under this two-tiered understanding of MHC. It proceeds to propose the value sensitive design (VSD) approach as a means for designing for MHC.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
The international debate on the ethics and legality of autonomous weapon systems (AWS) as well as the call for a ban are primarily focused on the nebulous concept of fully autonomous AWS. More specifically, on AWS that are capable of target selection and engagement without human supervision or control. This thesis argues that such a conception of autonomy is divorced both from military planning and decision-making operations as well as the design requirements that govern AWS engineering and subsequently the tracking and tracing of moral responsibility. To do this, this thesis marries two different levels of meaningful human control (MHC), termed levels of abstraction, to couple military operations with design ethics. In doing so, this thesis argues that the contentious notion of ‘full’ autonomy is not problematic under this two-tiered understanding of MHC. It proceeds to propose the value sensitive design (VSD) approach as a means for designing for MHC. |
 | Pirni, Alberto; Balistreri, Maurizio; Capasso, Marianna; Umbrello, Steven; Merenda, Federica Robot Care Ethics - Between Autonomy and Vulnerability: Coupling Principles and Practices in Autonomous Systems for Care Journal Article In: Frontiers in Robotics and AI, vol. 8, no. 654298, 2021. @article{Pirni2021,
title = {Robot Care Ethics - Between Autonomy and Vulnerability: Coupling Principles and Practices in Autonomous Systems for Care},
author = {Alberto Pirni and Maurizio Balistreri and Marianna Capasso and Steven Umbrello and Federica Merenda},
url = {https://www.frontiersin.org/Ethics_in_Robotics_and_Artificial_Intelligence/10.3389/frobt.2021.654298/abstract},
doi = {10.3389/frobt.2021.654298},
year = {2021},
date = {2021-06-08},
journal = {Frontiers in Robotics and AI},
volume = {8},
number = {654298},
abstract = {Technological developments involving robotics and artificial intelligence devices are being employed ever more in elderly care and the healthcare sector more generally, raising ethical issues and practical questions warranting closer consideration of what we mean by “care” and, subsequently, how to design such software coherently with the chosen definition. This paper starts by critically examining the existing approaches to the ethical design of care robots provided by Aimee van Wynsberghe, who relies on the work on the ethics of care by Joan Tronto. In doing so, it suggests an alternative to their non-principled approach, an alternative suited to tackling some of the issues raised by Tronto and van Wynsberghe, while allowing for the inclusion of two orientative principles. Our proposal centres on the principles of autonomy and vulnerability, whose joint adoption we deem able to constitute an original revision of a bottom-up approach in care ethics. Conclusively, the ethical framework introduced here integrates more traditional approaches in care ethics in view of enhancing the debate regarding the ethical design of care robots under a new lens.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Technological developments involving robotics and artificial intelligence devices are being employed ever more in elderly care and the healthcare sector more generally, raising ethical issues and practical questions warranting closer consideration of what we mean by “care” and, subsequently, how to design such software coherently with the chosen definition. This paper starts by critically examining the existing approaches to the ethical design of care robots provided by Aimee van Wynsberghe, who relies on the work on the ethics of care by Joan Tronto. In doing so, it suggests an alternative to their non-principled approach, an alternative suited to tackling some of the issues raised by Tronto and van Wynsberghe, while allowing for the inclusion of two orientative principles. Our proposal centres on the principles of autonomy and vulnerability, whose joint adoption we deem able to constitute an original revision of a bottom-up approach in care ethics. Conclusively, the ethical framework introduced here integrates more traditional approaches in care ethics in view of enhancing the debate regarding the ethical design of care robots under a new lens. |
 | Umbrello, Steven; Capasso, Marianna; Pirni, Alberto; Balistreri, Maurizio; Merenda, Federica Value Sensitive Design to Achieve the UN SDGs with AI: A Case of Elderly Care Robots Journal Article In: Minds and Machines, vol. 31, no. 3, pp. 395–419, 2021. @article{Umbrello2021b,
title = {Value Sensitive Design to Achieve the UN SDGs with AI: A Case of Elderly Care Robots},
author = {Steven Umbrello and Marianna Capasso and Alberto Pirni and Maurizio Balistreri and Federica Merenda},
url = {https://doi.org/10.1007/s11023-021-09561-y},
doi = {10.1007/s11023-021-09561-y},
year = {2021},
date = {2021-05-23},
urldate = {2021-05-23},
journal = {Minds and Machines},
volume = {31},
number = {3},
pages = {395–419},
abstract = {The increasing automation and ubiquity of robotics deployed within the field of care boast promising advantages. However, challenging ethical issues also arise as a consequence. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. It takes the value sensitive design (VSD) approach to technology design and extends its application to care robots by not only integrating the values of care, but also those specific to AI as well as higher-scale values such as the United Nations Sustainable Development Goals (SDGs). The ethical issues specific to care robots for the elderly are discussed, as well as examples of specific design requirements to ameliorate those issues.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The increasing automation and ubiquity of robotics deployed within the field of care boast promising advantages. However, challenging ethical issues also arise as a consequence. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. It takes the value sensitive design (VSD) approach to technology design and extends its application to care robots by not only integrating the values of care, but also those specific to AI as well as higher-scale values such as the United Nations Sustainable Development Goals (SDGs). The ethical issues specific to care robots for the elderly are discussed, as well as examples of specific design requirements to ameliorate those issues. |
 | Umbrello, Steven; Wood, Nathan Gabriel Autonomous Weapons Systems and the Contextual Nature of Hors de Combat Status Journal Article In: Information, vol. 12, no. 5, pp. 216, 2021. @article{Umbrello2021l,
title = {Autonomous Weapons Systems and the Contextual Nature of Hors de Combat Status},
author = {Steven Umbrello and Nathan Gabriel Wood},
url = {https://doi.org/10.3390/info12050216},
doi = {10.3390/info12050216},
year = {2021},
date = {2021-05-20},
journal = {Information},
volume = {12},
number = {5},
pages = {216},
abstract = {Autonomous weapons systems (AWS), sometimes referred to as “killer robots”, are receiving increasing attention in public discourse and scholarship. Much of this interest is connected with policy makers and the emerging ethical and legal problems linked to the full autonomy of weapons systems; however, there is a general lack of recognition of the ways in which existing law might impact on these new technologies. In this paper, we argue that as AWS become more sophisticated and more capable than ground troops, soldiers will be at the mercy of enemy AWS and unable to defend themselves. We argue that these soldiers ought to be considered hors de combat, and not targeted. We contend that hors de combat status must be viewed contextually, with close reference to the capabilities of combatants on both sides of any engagement. Given this point, and the fact that AWS may come in many shapes and sizes, and can be made for many different missions, each individual AWS will need its own standard for when enemy soldiers are deemed hors de combat. The difficulties of achieving this with the limits of modern technology should also be acknowledged. We conclude by examining how these nuanced views of hors de combat status might impact on the “meaningful human control” of AWS.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Autonomous weapons systems (AWS), sometimes referred to as “killer robots”, are receiving increasing attention in public discourse and scholarship. Much of this interest is connected with policy makers and the emerging ethical and legal problems linked to the full autonomy of weapons systems; however, there is a general lack of recognition of the ways in which existing law might impact on these new technologies. In this paper, we argue that as AWS become more sophisticated and more capable than ground troops, soldiers will be at the mercy of enemy AWS and unable to defend themselves. We argue that these soldiers ought to be considered hors de combat, and not targeted. We contend that hors de combat status must be viewed contextually, with close reference to the capabilities of combatants on both sides of any engagement. Given this point, and the fact that AWS may come in many shapes and sizes, and can be made for many different missions, each individual AWS will need its own standard for when enemy soldiers are deemed hors de combat. The difficulties of achieving this with the limits of modern technology should also be acknowledged. We conclude by examining how these nuanced views of hors de combat status might impact on the “meaningful human control” of AWS. |
 | Umbrello, Steven AI Winter Book Section In: Frana, Philip L.; Klein, Michael J. (Ed.): Encyclopedia of Artificial Intelligence: The Past, Present, and Future of AI, ABC-CLIO, 2021, ISBN: 9781440853265. @incollection{Umbrello2021,
title = {AI Winter},
author = {Steven Umbrello},
editor = {Philip L. Frana and Michael J. Klein},
url = {https://products.abc-clio.com/ABC-CLIOCorporate/product.aspx?pc=A5303C},
isbn = {9781440853265},
year = {2021},
date = {2021-04-30},
booktitle = {Encyclopedia of Artificial Intelligence: The Past, Present, and Future of AI},
publisher = {ABC-CLIO},
abstract = {The term was coined in 1984 at the American Association for Artificial Intelligence (now the Association for the Advancement of Artificial Intelligence, or AAAI), when the various boom and bust periods of AI research and funding led AI researchers Marvin Minsky and Roger Schank to refer to the then-impending bust period as an AI Winter. Canadian AI researcher Daniel Crevier describes the phenomenon as a domino effect that begins with cynicism in the AI research community that then trickles to mass media and finally to funding bodies. The result is a freeze in serious AI research and development. This initial pessimism is mainly attributed to the overly ambitious promises of what AI can yield, with the actual results being far humbler than expectations.},
keywords = {},
pubstate = {published},
tppubtype = {incollection}
}
The term was coined in 1984 at the American Association for Artificial Intelligence (now the Association for the Advancement of Artificial Intelligence, or AAAI), when the various boom and bust periods of AI research and funding led AI researchers Marvin Minsky and Roger Schank to refer to the then-impending bust period as an AI Winter. Canadian AI researcher Daniel Crevier describes the phenomenon as a domino effect that begins with cynicism in the AI research community that then trickles to mass media and finally to funding bodies. The result is a freeze in serious AI research and development. This initial pessimism is mainly attributed to the overly ambitious promises of what AI can yield, with the actual results being far humbler than expectations. |
 | Umbrello, Steven Leadership Strategy and Tactics: Field Manual Journal Article In: Journal of Military Ethics, vol. 20, no. 1, pp. 82-83, 2021. @article{Umbrello2021n,
title = {Leadership Strategy and Tactics: Field Manual},
author = {Steven Umbrello},
url = {https://doi.org/10.1080/15027570.2021.1920686},
doi = {10.1080/15027570.2021.1920686},
year = {2021},
date = {2021-04-30},
urldate = {2021-04-30},
journal = {Journal of Military Ethics},
volume = {20},
number = {1},
pages = {82-83},
abstract = {A new book by Jocko Willink, "Leadership Strategy and Tactics: Field Manual", is reviewed. Leadership Strategy and Tactics explores the nature of leadership styles and strategies both in narrative form, as the author discusses past experiences in the military, and in real-world applications beyond the military domain. The author provides timely yet timeless advice for aspiring leaders in an easily digestible form, with quick reference chapters and simple tactical points.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
A new book by Jocko Willink, "Leadership Strategy and Tactics: Field Manual", is reviewed. Leadership Strategy and Tactics explores the nature of leadership styles and strategies both in narrative form, as the author discusses past experiences in the military, and in real-world applications beyond the military domain. The author provides timely yet timeless advice for aspiring leaders in an easily digestible form, with quick reference chapters and simple tactical points. |
 | Umbrello, Steven The Ecological Turn in Design: Adopting a Posthumanist Ethics to Inform Value Sensitive Design Journal Article In: Philosophies, vol. 6, no. 2, pp. 29, 2021. @article{Umbrello2021k,
title = {The Ecological Turn in Design: Adopting a Posthumanist Ethics to Inform Value Sensitive Design},
author = {Steven Umbrello},
url = {https://www.mdpi.com/2409-9287/6/2/29},
doi = {10.3390/philosophies6020029},
year = {2021},
date = {2021-04-02},
journal = {Philosophies},
volume = {6},
number = {2},
pages = {29},
abstract = {Design for Values (DfV) philosophies are a series of design approaches that aim to incorporate human values into the early phases of technological design to direct innovation into beneficial outcomes. The difficulty and necessity of directing advantageous futures for transformative technologies through the application and adoption of value-based design approaches are apparent. However, questions of whose values to design for are of critical importance. DfV philosophies typically aim to enrol the stakeholders who may be affected by the emergence of such a technology. However, regardless of which design approach is adopted, all enrolled stakeholders are human ones who propose human values. Contemporary scholarship on metahumanisms, particularly posthumanism, has decentred the human from its traditionally privileged position among other forms of life. Arguments that the humanist position is not (and has never been) tenable are persuasive. As such, scholarship has begun to provide a more encompassing ontology for the investigation of nonhuman values. Given the potentially transformative nature of future technologies as they relate to the earth and its many assemblages, it is clear that the value investigations of these design approaches fail to account for all relevant stakeholders (i.e., nonhuman animals). This paper has two primary objectives: (1) to argue for the cogency of a posthuman ethics in the design of technologies; and (2) to describe how existing DfV approaches can begin to envision principled and methodological ways of incorporating non-human values into design. To do this, the paper provides a rudimentary outline of what constitutes DfV approaches. It then takes up a unique design approach called Value Sensitive Design (VSD) as an illustrative example. Out of all the other DfV frameworks, VSD most clearly illustrates a principled approach to the integration of values in design.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Design for Values (DfV) philosophies are a series of design approaches that aim to incorporate human values into the early phases of technological design to direct innovation into beneficial outcomes. The difficulty and necessity of directing advantageous futures for transformative technologies through the application and adoption of value-based design approaches are apparent. However, questions of whose values to design for are of critical importance. DfV philosophies typically aim to enrol the stakeholders who may be affected by the emergence of such a technology. However, regardless of which design approach is adopted, all enrolled stakeholders are human ones who propose human values. Contemporary scholarship on metahumanisms, particularly posthumanism, has decentred the human from its traditionally privileged position among other forms of life. Arguments that the humanist position is not (and has never been) tenable are persuasive. As such, scholarship has begun to provide a more encompassing ontology for the investigation of nonhuman values. Given the potentially transformative nature of future technologies as they relate to the earth and its many assemblages, it is clear that the value investigations of these design approaches fail to account for all relevant stakeholders (i.e., nonhuman animals). This paper has two primary objectives: (1) to argue for the cogency of a posthuman ethics in the design of technologies; and (2) to describe how existing DfV approaches can begin to envision principled and methodological ways of incorporating non-human values into design. To do this, the paper provides a rudimentary outline of what constitutes DfV approaches. It then takes up a unique design approach called Value Sensitive Design (VSD) as an illustrative example. Out of all the other DfV frameworks, VSD most clearly illustrates a principled approach to the integration of values in design. |
 | Umbrello, Steven Coupling Levels of Abstraction in Understanding Meaningful Human Control of Autonomous Weapons: A Two-Tiered Approach Journal Article In: Ethics and Information Technology, vol. 23, no. 3, pp. 455-464, 2021. @article{Umbrello2021i,
title = {Coupling Levels of Abstraction in Understanding Meaningful Human Control of Autonomous Weapons: A Two-Tiered Approach},
author = {Steven Umbrello},
url = {https://doi.org/10.1007/s10676-021-09588-w},
doi = {10.1007/s10676-021-09588-w},
year = {2021},
date = {2021-04-01},
urldate = {2021-04-01},
journal = {Ethics and Information Technology},
volume = {23},
number = {3},
pages = {455-464},
abstract = {The international debate on the ethics and legality of autonomous weapon systems (AWS), along with the call for a ban, primarily focus on the nebulous concept of fully autonomous AWS. These are AWS capable of target selection and engagement absent human supervision or control. This paper argues that such a conception of autonomy is divorced from both military planning and decision-making operations; it also ignores the design requirements that govern AWS engineering and the subsequent tracking and tracing of moral responsibility. To show how military operations can be coupled with design ethics, this paper marries two different kinds of meaningful human control (MHC) termed levels of abstraction. Under this two-tiered understanding of MHC, the contentious notion of ‘full’ autonomy becomes unproblematic. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The international debate on the ethics and legality of autonomous weapon systems (AWS), along with the call for a ban, primarily focus on the nebulous concept of fully autonomous AWS. These are AWS capable of target selection and engagement absent human supervision or control. This paper argues that such a conception of autonomy is divorced from both military planning and decision-making operations; it also ignores the design requirements that govern AWS engineering and the subsequent tracking and tracing of moral responsibility. To show how military operations can be coupled with design ethics, this paper marries two different kinds of meaningful human control (MHC) termed levels of abstraction. Under this two-tiered understanding of MHC, the contentious notion of ‘full’ autonomy becomes unproblematic. |
 | Umbrello, Steven Can humans dream of electric sheep? Journal Article In: Metascience, vol. 30, no. 2, pp. 269-271, 2021. @article{Umbrello2021h,
title = {Can humans dream of electric sheep?},
author = {Steven Umbrello},
url = {https://link.springer.com/article/10.1007/s11016-021-00629-0},
doi = {10.1007/s11016-021-00629-0},
year = {2021},
date = {2021-02-26},
journal = {Metascience},
volume = {30},
number = {2},
pages = {269-271},
abstract = {As an idea, transhumanism has received increasing attention in recent years and across numerous domains. Despite presidential candidates such as Zoltan Istvan, who ran on an explicitly Transhumanist platform in 2016 but later dropped out to endorse Hillary Clinton, transhumanism has taken root more recently in the conspiratorial imaginations of the dark web. Given the philosophy’s central emphasis on technology as an inherent good, imaginations in supposed alt-right internet circles have criticised it as an ideological gateway to global, fully automated Communism. This is not to say that such discussions on transhumanism are exclusively siloed and on the margins of society. Related discussions are happening at various well-known institutions and research centres such as the Institute for Ethics and Emerging Technologies, a non-profit think tank dedicated to techno-progressivism where I have been managing director for half a decade. What I mean to say here is that transhumanism is not monolithic. It is best described as multi-faceted and existing in different instantiations across multiple domains. James Michael MacFarlane’s recent book, Transhumanism as a New Social Movement: The Techno-Centred Imagination, is an attempt to trace the history, meaning, and practices that characterise this variegated term.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
As an idea, transhumanism has received increasing attention in recent years and across numerous domains. Despite presidential candidates such as Zoltan Istvan, who ran on an explicitly Transhumanist platform in 2016 but later dropped out to endorse Hillary Clinton, transhumanism has taken root more recently in the conspiratorial imaginations of the dark web. Given the philosophy’s central emphasis on technology as an inherent good, imaginations in supposed alt-right internet circles have criticised it as an ideological gateway to global, fully automated Communism. This is not to say that such discussions on transhumanism are exclusively siloed and on the margins of society. Related discussions are happening at various well-known institutions and research centres such as the Institute for Ethics and Emerging Technologies, a non-profit think tank dedicated to techno-progressivism where I have been managing director for half a decade. What I mean to say here is that transhumanism is not monolithic. It is best described as multi-faceted and existing in different instantiations across multiple domains. James Michael MacFarlane’s recent book, Transhumanism as a New Social Movement: The Techno-Centred Imagination, is an attempt to trace the history, meaning, and practices that characterise this variegated term. |
 | Umbrello, Steven; van de Poel, Ibo Mapping value sensitive design onto AI for social good principles Journal Article In: AI and Ethics, vol. 1, no. 3, pp. 283–296, 2021. @article{Umbrello2021e,
title = {Mapping value sensitive design onto AI for social good principles},
author = {Umbrello, Steven and van de Poel, Ibo},
url = {https://link.springer.com/article/10.1007/s43681-021-00038-3},
doi = {10.1007/s43681-021-00038-3},
year = {2021},
date = {2021-02-01},
urldate = {2021-02-01},
journal = {AI and Ethics},
volume = {1},
number = {3},
pages = {283–296},
abstract = {Value sensitive design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML may lead to AI systems adapting in ways that ‘disembody’ the values embedded in them. To address this, we propose a threefold modified VSD approach: (1) integrating a known set of VSD principles (AI4SG) as design norms from which more specific design requirements can be derived; (2) distinguishing between values that are promoted and respected by the design to ensure outcomes that not only do no harm but also contribute to good, and (3) extending the VSD process to encompass the whole life cycle of an AI technology to monitor unintended value consequences and redesign as needed. We illustrate our VSD for AI approach with an example use case of a SARS-CoV-2 contact tracing app.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Value sensitive design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML may lead to AI systems adapting in ways that ‘disembody’ the values embedded in them. To address this, we propose a threefold modified VSD approach: (1) integrating a known set of VSD principles (AI4SG) as design norms from which more specific design requirements can be derived; (2) distinguishing between values that are promoted and respected by the design to ensure outcomes that not only do no harm but also contribute to good, and (3) extending the VSD process to encompass the whole life cycle of an AI technology to monitor unintended value consequences and redesign as needed. We illustrate our VSD for AI approach with an example use case of a SARS-CoV-2 contact tracing app. |
 | Umbrello, Steven Reckoning with assessment: can we responsibly innovate? Journal Article In: Metascience, pp. 1-3, 2021. @article{Umbrello2021d,
title = {Reckoning with assessment: can we responsibly innovate?},
author = {Steven Umbrello},
url = {https://link.springer.com/article/10.1007/s11016-021-00605-8},
doi = {10.1007/s11016-021-00605-8},
year = {2021},
date = {2021-01-15},
journal = {Metascience},
pages = {1-3},
abstract = {Assessment of Responsible Innovation argues, contrary to common imagination, that the profit motive underpinning private sector decision-making about innovation neither excludes—nor is even necessarily in tension with—responsible innovation. Responsible innovation is not a clear-cut thing, principle, or clearly formulated grouping of practices. Rather, it consists in a plurality of engagements, strategies, and interactions oriented around the general goal of technological development towards socially desirable ends. The assessment of responsible innovation faces a lacuna partly due to this plurality, and partly because responsible research and innovation (RRI) has primarily been the domain of research institutions, higher education, and public sector entities—those who are not responsible for the majority of innovations. There is thus a gap between past RRI research and the actual nexus of innovation programmes.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Assessment of Responsible Innovation argues, contrary to common imagination, that the profit motive underpinning private sector decision-making about innovation neither excludes—nor is even necessarily in tension with—responsible innovation. Responsible innovation is not a clear-cut thing, principle, or clearly formulated grouping of practices. Rather, it consists in a plurality of engagements, strategies, and interactions oriented around the general goal of technological development towards socially desirable ends. The assessment of responsible innovation faces a lacuna partly due to this plurality, and partly because responsible research and innovation (RRI) has primarily been the domain of research institutions, higher education, and public sector entities—those who are not responsible for the majority of innovations. There is thus a gap between past RRI research and the actual nexus of innovation programmes. |
 | Doorn, Neelke; Michelfelder, Diane; Barrella, Elise; Bristol, Terry; Dechesne, Francien; Fritzsche, Albrecht; Johnson, Gearold; Poznic, Michael; Robison, Wade; Sain, Barbara; Stone, Taylor; Rodriguez-Nikl, Tonatiuh; Umbrello, Steven; Vermaas, Pieter E; Wilson, Richard L Reimagining the future of engineering Book Section In: Doorn, Neelke; Michelfelder, Diane P (Ed.): Routledge Handbook of Philosophy of Engineering, Taylor & Francis, 2021, ISBN: 9781138244955. @incollection{Doorn2021,
title = {Reimagining the future of engineering},
author = {Neelke Doorn and Diane Michelfelder and Elise Barrella and Terry Bristol and Francien Dechesne and Albrecht Fritzsche and Gearold Johnson and Michael Poznic and Wade Robison and Barbara Sain and Taylor Stone and Tonatiuh Rodriguez-Nikl and Steven Umbrello and Pieter E Vermaas and Richard L Wilson},
editor = {Neelke Doorn and Diane P Michelfelder},
url = {https://www.routledge.com/The-Routledge-Handbook-of-the-Philosophy-of-Engineering/Michelfelder-Doorn/p/book/9781138244955},
isbn = {9781138244955},
year = {2021},
date = {2021-01-01},
booktitle = {Routledge Handbook of Philosophy of Engineering},
publisher = {Taylor & Francis},
abstract = {Reimagining suggests the idea of opening up new, unconventional spaces of possibilities for an activity or an entity that already exists. At its most transformative, the activity of reimagining develops spaces of possibilities that alter the very definition of that activity or entity. What then would it be to reimagine the future of engineering? Such a question cannot be addressed by a single individual but rather requires the combined perspectives and insights of a number of individuals. The tentative answer presented in this chapter had its beginnings in a workshop on this topic which took place at a meeting of the Forum on Philosophy, Engineering and Technology (fPET) at the University of Maryland, College Park, in 2018. Because participants in the workshop came from the fPET community, they included philosophers and engineers from both inside and outside the academy. On this account, reimagining the future of engineering is a matter of reimagining and redrawing the spaces of engineering itself: spaces for designing, action, problem framing, professional and disciplinary identity, and for the training of future engineers. },
keywords = {},
pubstate = {published},
tppubtype = {incollection}
}
Reimagining suggests the idea of opening up new, unconventional spaces of possibilities for an activity or an entity that already exists. At its most transformative, the activity of reimagining develops spaces of possibilities that alter the very definition of that activity or entity. What then would it be to reimagine the future of engineering? Such a question cannot be addressed by a single individual but rather requires the combined perspectives and insights of a number of individuals. The tentative answer presented in this chapter had its beginnings in a workshop on this topic which took place at a meeting of the Forum on Philosophy, Engineering and Technology (fPET) at the University of Maryland, College Park, in 2018. Because participants in the workshop came from the fPET community, they included philosophers and engineers from both inside and outside the academy. On this account, reimagining the future of engineering is a matter of reimagining and redrawing the spaces of engineering itself: spaces for designing, action, problem framing, professional and disciplinary identity, and for the training of future engineers. |
 | Umbrello, Steven Conceptualizing Policy in Value Sensitive Design: A Machine Ethics Approach Book Section In: Thompson, Steven John (Ed.): Machine Law, Ethics, and Morality in the Age of Artificial Intelligence, pp. 108-125, IGI Global, Hershey, Pennsylvania, USA, 2021, ISBN: 9781799848943. @incollection{Umbrello2021a,
title = {Conceptualizing Policy in Value Sensitive Design: A Machine Ethics Approach},
author = {Steven Umbrello},
editor = {Steven John Thompson},
url = {https://www.igi-global.com/chapter/conceptualizing-policy-in-value-sensitive-design/265716},
doi = {10.4018/978-1-7998-4894-3.ch007},
isbn = {9781799848943},
year = {2021},
date = {2021-01-01},
booktitle = {Machine Law, Ethics, and Morality in the Age of Artificial Intelligence},
pages = {108-125},
publisher = {IGI Global},
address = {Hershey, Pennsylvania, USA},
chapter = {7},
abstract = {The value sensitive design (VSD) approach to designing transformative technologies for human values is taken as the object of study in this chapter. VSD has traditionally been conceptualized as another type of technology or, instrumentally, as a tool. The various parts of VSD's principled approach would then aim to discern the policy requirements that any given technological artifact under consideration would implicate. Yet, little to no consideration has been given to how laws, regulations, policies and social norms engage within VSD practices, or to how the interactive nature of the VSD approach can, in turn, influence those directives. This is exacerbated when we consider machine ethics policies that have global consequences outside their development spheres. What constructs and models will position AI designers to engage in policy concerns? How can the design of AI policy be integrated with technical design? How might VSD be used to develop AI policy? How might laws, regulations, social norms, and other kinds of policy regarding AI systems be engaged within value sensitive design? This chapter takes VSD as its starting point and aims to determine how laws, regulations and policies come to influence how value trade-offs can be managed within VSD practices. It shows that the iterative and interactional nature of VSD both permits and encourages existing policies to be integrated early on and throughout the design process. The chapter concludes with some potential future research programs.},
keywords = {},
pubstate = {published},
tppubtype = {incollection}
}
The value sensitive design (VSD) approach to designing transformative technologies for human values is taken as the object of study in this chapter. VSD has traditionally been conceptualized as another type of technology or, instrumentally, as a tool. The various parts of VSD's principled approach would then aim to discern the policy requirements that any given technological artifact under consideration would implicate. Yet, little to no consideration has been given to how laws, regulations, policies and social norms engage within VSD practices, or to how the interactive nature of the VSD approach can, in turn, influence those directives. This is exacerbated when we consider machine ethics policies that have global consequences outside their development spheres. What constructs and models will position AI designers to engage in policy concerns? How can the design of AI policy be integrated with technical design? How might VSD be used to develop AI policy? How might laws, regulations, social norms, and other kinds of policy regarding AI systems be engaged within value sensitive design? This chapter takes VSD as its starting point and aims to determine how laws, regulations and policies come to influence how value trade-offs can be managed within VSD practices. It shows that the iterative and interactional nature of VSD both permits and encourages existing policies to be integrated early on and throughout the design process. The chapter concludes with some potential future research programs. |
2020
|
 | Umbrello, Steven Maurizio Balistreri, Sex robot: l'amore al tempo delle macchine Journal Article In: Filosofia, no. 65, pp. 191–193, 2020, ISSN: 2704-8195. @article{Umbrello2020,
title = {Maurizio Balistreri, Sex robot: l'amore al tempo delle macchine},
author = {Steven Umbrello},
url = {https://www.ojs.unito.it/index.php/filosofia/article/view/5245},
doi = {10.13135/2704-8195/5245},
issn = {2704-8195},
year = {2020},
date = {2020-10-30},
journal = {Filosofia},
number = {65},
pages = {191--193},
abstract = {A new book by Maurizio Balistreri, "Sex robot. L'amore al tempo delle macchine", is reviewed. Sex robots not only exacerbate social, ethical and cultural issues that already exist, but also come with emergent and novel ones. This book is intended to build on recent research on robotics and the growing scholarship on sex robots more generally, though with greater attention to the philosophical issues of how to deal with these new artefacts and to steps for living among such systems in the future.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
A new book by Maurizio Balistreri, "Sex robot. L'amore al tempo delle macchine", is reviewed. Sex robots not only exacerbate social, ethical and cultural issues that already exist, but also come with emergent and novel ones. This book is intended to build on recent research on robotics and the growing scholarship on sex robots more generally, though with greater attention to the philosophical issues of how to deal with these new artefacts and to steps for living among such systems in the future. |
 | Umbrello, Steven Combinatory and Complementary Practices of Values and Virtues in Design: A Reply to Reijers and Gordijn Journal Article In: Filosofia, no. 65, pp. 107–121, 2020, ISSN: 2704-8195. @article{Umbrello2020a,
title = {Combinatory and Complementary Practices of Values and Virtues in Design: A Reply to Reijers and Gordijn},
author = {Steven Umbrello},
url = {https://www.ojs.unito.it/index.php/filosofia/article/view/5236},
doi = {10.13135/2704-8195/5236},
issn = {2704-8195},
year = {2020},
date = {2020-10-30},
journal = {Filosofia},
number = {65},
pages = {107--121},
abstract = {The purpose of this paper is to review and critique Wessel Reijers and Bert Gordijn's paper “Moving from value sensitive design to virtuous practice design”. In doing so, it draws on recent literature on developing value sensitive design (VSD) to show how the authors' virtuous practice design (VPD), at minimum, is not mutually exclusive with VSD. This paper argues that virtuous practice is not exclusive to the basic methodological underpinnings of VSD and can therefore strengthen, rather than exclude, the VSD approach. Likewise, this paper not only presents a critique of what was offered as a “potentially fruitful alternative to VSD” but also clarifies and contributes to the VSD scholarship by extending its potential methodological practices and scope. It is concluded that VPD does not appear to offer any original contribution that more recent instantiations of VSD have not already proposed and implemented.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The purpose of this paper is to review and critique Wessel Reijers and Bert Gordijn's paper “Moving from value sensitive design to virtuous practice design”. In doing so, it draws on recent literature on developing value sensitive design (VSD) to show how the authors' virtuous practice design (VPD), at minimum, is not mutually exclusive with VSD. This paper argues that virtuous practice is not exclusive to the basic methodological underpinnings of VSD and can therefore strengthen, rather than exclude, the VSD approach. Likewise, this paper not only presents a critique of what was offered as a “potentially fruitful alternative to VSD” but also clarifies and contributes to the VSD scholarship by extending its potential methodological practices and scope. It is concluded that VPD does not appear to offer any original contribution that more recent instantiations of VSD have not already proposed and implemented. |
 | Longo, Francesco; Padovano, Antonio; Umbrello, Steven Value-Oriented and Ethical Technology Engineering in Industry 5.0: A Human-Centric Perspective for the Design of the Factory of the Future Journal Article In: Applied Sciences, vol. 10, no. 12, pp. 4182, 2020, ISSN: 2076-3417. @article{Longo2020,
title = {Value-Oriented and Ethical Technology Engineering in Industry 5.0: A Human-Centric Perspective for the Design of the Factory of the Future},
author = {Francesco Longo and Antonio Padovano and Steven Umbrello},
url = {https://www.mdpi.com/2076-3417/10/12/4182},
doi = {10.3390/app10124182},
issn = {2076-3417},
year = {2020},
date = {2020-06-01},
journal = {Applied Sciences},
volume = {10},
number = {12},
pages = {4182},
institution = {University of Turin; University of Calabria},
abstract = {Although manufacturing companies are currently situated at a transition point in what has been called Industry 4.0, a new revolutionary wave—Industry 5.0—is emerging as an ‘Age of Augmentation' in which human and machine reconcile and work in perfect symbiosis with one another. Recent years have indeed drawn attention to the human-centric design of Cyber-Physical Production Systems (CPPS) and to the genesis of the ‘Operator 4.0', two novel concepts that raise significant ethical questions regarding the impact of technology on workers and society at large. This paper argues that value-oriented and ethical technology engineering in Industry 5.0 is an urgent and sensitive topic, as demonstrated by a survey administered to industry leaders from different companies. The Value Sensitive Design (VSD) approach is proposed as a principled framework to illustrate how technologies enabling human–machine symbiosis in the Factory of the Future can be designed to embody elicited human values and to outline actionable steps that engineers and designers can take in their design projects. Use cases based on real solutions and prototypes discuss how a design-for-values approach aids in the investigation and mitigation of ethical issues emerging from the implementation of technological solutions and, hence, supports the migration to a symbiotic Factory of the Future.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Although manufacturing companies are currently situated at a transition point in what has been called Industry 4.0, a new revolutionary wave—Industry 5.0—is emerging as an ‘Age of Augmentation' in which human and machine reconcile and work in perfect symbiosis with one another. Recent years have indeed drawn attention to the human-centric design of Cyber-Physical Production Systems (CPPS) and to the genesis of the ‘Operator 4.0', two novel concepts that raise significant ethical questions regarding the impact of technology on workers and society at large. This paper argues that value-oriented and ethical technology engineering in Industry 5.0 is an urgent and sensitive topic, as demonstrated by a survey administered to industry leaders from different companies. The Value Sensitive Design (VSD) approach is proposed as a principled framework to illustrate how technologies enabling human–machine symbiosis in the Factory of the Future can be designed to embody elicited human values and to outline actionable steps that engineers and designers can take in their design projects. Use cases based on real solutions and prototypes discuss how a design-for-values approach aids in the investigation and mitigation of ethical issues emerging from the implementation of technological solutions and, hence, supports the migration to a symbiotic Factory of the Future. |
 | Umbrello, Steven Imaginative Value Sensitive Design: Using Moral Imagination Theory to Inform Responsible Technology Design Journal Article In: Science and Engineering Ethics, vol. 26, no. 2, pp. 575–595, 2020, ISSN: 1353-3452. @article{Umbrello2020cb,
title = {Imaginative Value Sensitive Design: Using Moral Imagination Theory to Inform Responsible Technology Design},
author = {Steven Umbrello},
url = {http://link.springer.com/10.1007/s11948-019-00104-4},
doi = {10.1007/s11948-019-00104-4},
issn = {1353-3452},
year = {2020},
date = {2020-04-01},
journal = {Science and Engineering Ethics},
volume = {26},
number = {2},
pages = {575--595},
abstract = {Safe-by-Design (SBD) frameworks for the development of emerging technologies have become an ever more popular means by which scholars argue that transformative emerging technologies can safely incorporate human values. One such popular SBD methodology is called Value Sensitive Design (VSD). A central tenet of this design methodology is to investigate stakeholder values and design those values into technologies during early stage research and development (R&D). To accomplish this, the VSD framework mandates that designers consult the philosophical and ethical literature to best determine how to weigh moral trade-offs. However, the VSD framework also concedes the universalism of moral values, particularly the values of freedom, autonomy, equality, trust, privacy and justice. This paper argues that the VSD methodology, particularly applied to nano-bio-info-cogno (NBIC) technologies, has an insufficient grounding for the determination of moral values. As such, the value investigations of VSD are deconstructed to illustrate both the approach's strengths and weaknesses. This paper also provides possible modalities for the strengthening of the VSD methodology, particularly through the application of moral imagination and a discussion of how moral imagination exceeds the boundaries of moral intuitions in the development of novel technologies.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Safe-by-Design (SBD) frameworks for the development of emerging technologies have become an ever more popular means by which scholars argue that transformative emerging technologies can safely incorporate human values. One such popular SBD methodology is called Value Sensitive Design (VSD). A central tenet of this design methodology is to investigate stakeholder values and design those values into technologies during early stage research and development (R&D). To accomplish this, the VSD framework mandates that designers consult the philosophical and ethical literature to best determine how to weigh moral trade-offs. However, the VSD framework also concedes the universalism of moral values, particularly the values of freedom, autonomy, equality, trust, privacy and justice. This paper argues that the VSD methodology, particularly applied to nano-bio-info-cogno (NBIC) technologies, has an insufficient grounding for the determination of moral values. As such, the value investigations of VSD are deconstructed to illustrate both the approach's strengths and weaknesses. This paper also provides possible modalities for the strengthening of the VSD methodology, particularly through the application of moral imagination and a discussion of how moral imagination exceeds the boundaries of moral intuitions in the development of novel technologies. |
 | Umbrello, Steven; Torres, Phil; Bellis, Angelo F De The future of war: could lethal autonomous weapons make conflict more ethical? Journal Article In: AI & SOCIETY, vol. 35, no. 1, pp. 273–282, 2020, ISSN: 0951-5666. @article{Umbrello2020d,
title = {The future of war: could lethal autonomous weapons make conflict more ethical?},
author = {Steven Umbrello and Phil Torres and Angelo F {De Bellis}},
url = {http://link.springer.com/10.1007/s00146-019-00879-x},
doi = {10.1007/s00146-019-00879-x},
issn = {0951-5666},
year = {2020},
date = {2020-03-01},
journal = {AI & SOCIETY},
volume = {35},
number = {1},
pages = {273--282},
abstract = {Lethal Autonomous Weapons (LAWs) are robotic weapons systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Specific interest is given particularly to ethical LAWs, which are artificially intelligent weapons systems that make decisions within the bounds of their ethics-based code. To ensure that a wide, but not exhaustive, survey of the implications of employing such ethical devices to replace humans in warfare is taken into account, this paper will engage on matters related to current scholarship on the rejection or acceptance of LAWs—including contemporary technological shortcomings of LAWs to differentiate between targets and the behavioral and psychological volatility of humans—and current and proposed regulatory infrastructures for developing and using such devices. After careful consideration of these factors, this paper will conclude that only ethical LAWs should be used to replace human involvement in war, and, by extension of their consiste},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lethal Autonomous Weapons (LAWs) are robotic weapons systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Specific interest is given particularly to ethical LAWs, which are artificially intelligent weapons systems that make decisions within the bounds of their ethics-based code. To ensure that a wide, but not exhaustive, survey of the implications of employing such ethical devices to replace humans in warfare is taken into account, this paper will engage on matters related to current scholarship on the rejection or acceptance of LAWs—including contemporary technological shortcomings of LAWs to differentiate between targets and the behavioral and psychological volatility of humans—and current and proposed regulatory infrastructures for developing and using such devices. After careful consideration of these factors, this paper will conclude that only ethical LAWs should be used to replace human involvement in war, and, by extension of their consiste |
 | Umbrello, Steven Values, Imagination, and Praxis: Towards a Value Sensitive Future with Technology Journal Article In: Science and Engineering Ethics, vol. 26, no. 1, pp. 495–499, 2020, ISSN: 1353-3452. @article{Umbrello2020b,
title = {Values, Imagination, and Praxis: Towards a Value Sensitive Future with Technology},
author = {Steven Umbrello},
url = {http://link.springer.com/10.1007/s11948-019-00122-2},
doi = {10.1007/s11948-019-00122-2},
issn = {1353-3452},
year = {2020},
date = {2020-02-01},
journal = {Science and Engineering Ethics},
volume = {26},
number = {1},
pages = {495--499},
abstract = {A new book by Batya Friedman and David G. Hendry, Value Sensitive Design: Shaping Technology with Moral Imagination, is reviewed. Value Sensitive Design is a research program into the ethical and design issues that emerge during the engineering of new technologies. This book is intended to build on over two decades of value sensitive design research, though with a greater emphasis on the development of the theoretical underpinnings of the approach, as well as on initial steps that designers can employ to put the method into practice.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
A new book by Batya Friedman and David G. Hendry, Value Sensitive Design: Shaping Technology with Moral Imagination, is reviewed. Value Sensitive Design is a research program into the ethical and design issues that emerge during the engineering of new technologies. This book is intended to build on over two decades of value sensitive design research, though with a greater emphasis on the development of the theoretical underpinnings of the approach, as well as on initial steps that designers can employ to put the method into practice. |
 | Umbrello, Steven Nihilism and Technology Journal Article In: Prometheus: Critical Studies in Innovation, vol. 36, no. 4, 2020, ISSN: 0810-9028. @article{Umbrello,
title = {Nihilism and Technology},
author = {Steven Umbrello},
issn = {0810-9028},
year = {2020},
date = {2020-01-01},
journal = {Prometheus: Critical Studies in Innovation},
volume = {36},
number = {4},
abstract = {At times uncanny, yet thoroughly unsettling, Nolen Gertz's Nihilism and Technology is an unquestionable synthesis of Nietzschean philosophy of nihilism brought to bear on our often overlooked uses and co-construction of technologies. What Nihilism and Technology is, more often than not, is a forceful analysis of how the human-technosocial world is becoming ever more nihilistic.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
At times uncanny, yet thoroughly unsettling, Nolen Gertz's Nihilism and Technology is an unquestionable synthesis of Nietzschean philosophy of nihilism brought to bear on our often overlooked uses and co-construction of technologies. What Nihilism and Technology is, more often than not, is a forceful analysis of how the human-technosocial world is becoming ever more nihilistic. |
 | Friedman, Batya; Hendry, David G; Umbrello, Steven; Hoven, Jeroen Van Den; Yoo, Daisy The Future of Value Sensitive Design Proceedings Article In: Borondo, Jorge Pelegrín; Oliva, Mario Arias; Murata, Kiyoshi; Palma, Ana María Lara (Ed.): 18th International Conference ETHICOMP 2020, pp. 217–220, Universidad de La Rioja, Logroño, Spain, 2020, ISBN: 978-84-09-20272-0. @inproceedings{Friedman2020,
title = {The Future of Value Sensitive Design},
author = {Batya Friedman and David G Hendry and Steven Umbrello and Jeroen {Van Den Hoven} and Daisy Yoo},
editor = {Jorge Pelegrín Borondo and Mario Arias Oliva and Kiyoshi Murata and Ana María Lara Palma},
isbn = {978-84-09-20272-0},
year = {2020},
date = {2020-01-01},
booktitle = {18th International Conference ETHICOMP 2020},
pages = {217--220},
publisher = {Universidad de La Rioja},
address = {Logroño, Spain},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
|
 | Umbrello, Steven Meaningful Human Control over Smart Home Systems: A Value Sensitive Design Approach Journal Article In: Humana.Mente Journal of Philosophical Studies, vol. 13, no. 37, pp. 40–65, 2020. @article{Umbrello2020c,
title = {Meaningful Human Control over Smart Home Systems: A Value Sensitive Design Approach},
author = {Steven Umbrello},
url = {https://www.humanamente.eu/index.php/HM/article/view/315},
year = {2020},
date = {2020-01-01},
journal = {Humana.Mente Journal of Philosophical Studies},
volume = {13},
number = {37},
pages = {40--65},
abstract = {The last decade has witnessed the mass distribution and adoption of smart home systems and devices powered by artificial intelligence, ranging from household appliances like fridges and toasters to more background systems such as air and water quality controllers. The pervasiveness of these sociotechnical systems makes analyzing their ethical implications necessary during the design phases of these devices, not only to ensure sociotechnical resilience, but also to design them with human values in mind and thus preserve meaningful human control over them. This paper engages in a conceptual investigation of how meaningful human control over smart home devices can be attained through design. The value sensitive design (VSD) approach is proposed as a way of attaining this level of control. In the proposed framework, values are identified and defined, stakeholder groups are investigated and brought into the design process, and the technical constraints of the technologies in question are considered. The paper concludes with some initial examples that illustrate a more adoptable way forward for both ethicists and engineers of smart home devices.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
The last decade has witnessed the mass distribution and adoption of smart home systems and devices powered by artificial intelligence, ranging from household appliances like fridges and toasters to more background systems such as air and water quality controllers. The pervasiveness of these sociotechnical systems makes analyzing their ethical implications necessary during the design phases of these devices, not only to ensure sociotechnical resilience, but also to design them with human values in mind and thus preserve meaningful human control over them. This paper engages in a conceptual investigation of how meaningful human control over smart home devices can be attained through design. The value sensitive design (VSD) approach is proposed as a way of attaining this level of control. In the proposed framework, values are identified and defined, stakeholder groups are investigated and brought into the design process, and the technical constraints of the technologies in question are considered. The paper concludes with some initial examples that illustrate a more adoptable way forward for both ethicists and engineers of smart home devices. |
 | Gazzaneo, Lucia; Padovano, Antonio; Umbrello, Steven Designing Smart Operator 4.0 for Human Values: A Value Sensitive Design Approach Journal Article In: Procedia Manufacturing, vol. 42, pp. 219–226, 2020, ISSN: 23519789. @article{Gazzaneo2020,
title = {Designing Smart Operator 4.0 for Human Values: A Value Sensitive Design Approach},
author = {Lucia Gazzaneo and Antonio Padovano and Steven Umbrello},
url = {https://www.sciencedirect.com/science/article/pii/S2351978920306375 https://linkinghub.elsevier.com/retrieve/pii/S2351978920306375},
doi = {10.1016/j.promfg.2020.02.073},
issn = {23519789},
year = {2020},
date = {2020-01-01},
journal = {Procedia Manufacturing},
volume = {42},
pages = {219--226},
publisher = {Elsevier},
address = {Rende, CS},
abstract = {Emerging technologies such as cloud computing, augmented and virtual reality, artificial intelligence and robotics, among others, are transforming the field of manufacturing and industry as a whole in unprecedented ways. This fourth industrial revolution is consequently changing how operators, who have been crucial to industry success, go about their practices in industrial environments. This paper briefly introduces how a novel way of conceptualizing the human operator necessarily implicates human values in the technologies that constitute it. Similarly, the design methodology known as value sensitive design (VSD) is drawn upon to discuss how these Operator 4.0 technologies can be designed for human values.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Emerging technologies such as cloud computing, augmented and virtual reality, artificial intelligence and robotics, among others, are transforming the field of manufacturing and industry as a whole in unprecedented ways. This fourth industrial revolution is consequently changing how operators, who have been crucial to industry success, go about their practices in industrial environments. This paper briefly introduces how a novel way of conceptualizing the human operator necessarily implicates human values in the technologies that constitute it. Similarly, the design methodology known as value sensitive design (VSD) is drawn upon to discuss how these Operator 4.0 technologies can be designed for human values. |
2019
|
 | Umbrello, Steven Atomically Precise Manufacturing and Responsible Innovation: A Value Sensitive Design Approach to Explorative Nanophilosophy Journal Article In: International Journal of Technoethics, vol. 10, no. 2, pp. 1–21, 2019, ISSN: 1947-3451. @article{Umbrello2019a,
title = {Atomically Precise Manufacturing and Responsible Innovation: A Value Sensitive Design Approach to Explorative Nanophilosophy},
author = {Steven Umbrello},
url = {http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/IJT.2019070101},
doi = {10.4018/IJT.2019070101},
issn = {1947-3451},
year = {2019},
date = {2019-07-01},
journal = {International Journal of Technoethics},
volume = {10},
number = {2},
pages = {1--21},
institution = {Institute for Ethics and Emerging Technologies},
abstract = {Although continued investments in nanotechnology are being made, atomically precise manufacturing (APM) is to date still regarded as a speculative technology. APM, also known as molecular manufacturing, is a token example of a converging technology and has great potential to impact, and be affected by, other emerging technologies such as artificial intelligence, biotechnology, and ICT. The development of APM can thus have drastic global impacts depending on how it is designed and used. This article argues that the ethical issues that arise from APM - as either a standalone technology or a converging one - affect the roles of stakeholders in such a way as to warrant an alternate means of furthering responsible innovation in APM research. This article introduces a value-based design methodology called value sensitive design (VSD) that may serve as a suitable framework to adequately cater to the values of stakeholders. Ultimately, it is concluded that VSD is a strong candidate framework for addressing the moral concerns of stakeholders during the preliminary stages of technological development.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Although continued investments in nanotechnology are being made, atomically precise manufacturing (APM) is to date still regarded as a speculative technology. APM, also known as molecular manufacturing, is a token example of a converging technology and has great potential to impact, and be affected by, other emerging technologies such as artificial intelligence, biotechnology, and ICT. The development of APM can thus have drastic global impacts depending on how it is designed and used. This article argues that the ethical issues that arise from APM - as either a standalone technology or a converging one - affect the roles of stakeholders in such a way as to warrant an alternate means of furthering responsible innovation in APM research. This article introduces a value-based design methodology called value sensitive design (VSD) that may serve as a suitable framework to adequately cater to the values of stakeholders. Ultimately, it is concluded that VSD is a strong candidate framework for addressing the moral concerns of stakeholders during the preliminary stages of technological development. |
 | Umbrello, Steven; Sorgner, Stefan Lorenz Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence Journal Article In: Philosophies, vol. 4, no. 2, pp. 24, 2019, ISSN: 2409-9287. @article{Umbrello2019d,
title = {Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence},
author = {Steven Umbrello and Stefan Lorenz Sorgner},
url = {https://www.mdpi.com/2409-9287/4/2/24},
doi = {10.3390/philosophies4020024},
issn = {2409-9287},
year = {2019},
date = {2019-05-01},
journal = {Philosophies},
volume = {4},
number = {2},
pages = {24},
abstract = {Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and, similarly, machine consciousness. N. Katherine Hayles's novel conception of nonconscious cognition in her analysis of the human cognition-consciousness connection is discussed in relation to how nonconscious cognition can be envisioned and exacerbated in embodied AI. Similarly, this paper offers a way of understanding the concept of suffering that differs from the conventional sense of attributing it to either a purely physical state or a conscious state, instead grounding at least a type of suffering in this form of cognition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and, similarly, machine consciousness. N. Katherine Hayles's novel conception of nonconscious cognition in her analysis of the human cognition-consciousness connection is discussed in relation to how nonconscious cognition can be envisioned and exacerbated in embodied AI. Similarly, this paper offers a way of understanding the concept of suffering that differs from the conventional sense of attributing it to either a purely physical state or a conscious state, instead grounding at least a type of suffering in this form of cognition. |