
BEYOND TECHNOLOGY: THE ETHICS OF ARTIFICIAL INTELLIGENCE

Ellen Taricani1* and Nicholas Saris1

1 Department of Communication Arts and Sciences, Pennsylvania State University, University
Park, PA USA
*Corresponding author, Ellen Taricani, email: ext2@psu.edu

Article information:
Volume 1, issue 2, article number 17
Article first published online: September 18, 2020
https://doi.org/10.46473/WCSAJ27240606/18-09-2020-0017
Review paper

ABSTRACT

This paper considers the intersection of human and technological issues and the influence each exerts on the other. Artificial intelligence (AI) technologies present many situations to contemplate, with numerous impacts on society, both positive and negative. These new and innovative functions will bring with them additional questions and many overt and covert ethical considerations. Along with the rise of big data, many believe that we have exceeded our worst fears about ceding control and allowing manipulation of our private information. This leads us to the question of whether technologies are empowering us or subjugating us. Technologies are becoming more and more capable of performing tasks previously assigned to humans. In many cases this is beneficial, eliminating routine human tasks. Even in the preliminary phases of understanding AI, there are countless questions and concerns. Given these concerns, it is imperative that we consider all questions with equal attention and care. Debates will certainly continue over fundamental issues such as the changes to our employment and daily life.

Keywords: Artificial Intelligence, ethics, innovation.

Artificial Intelligence lacks common sense and the ability to reason—even if it can also make incredible discoveries that no human could, such as detecting third- or higher-order interactions (when three or more variables must interact in order to have an effect) in complex biological networks.
Jonathan Shaw (2019)

1. Introduction

The future of artificial intelligence (AI) technologies will bring considerable impacts on society and the way we operate and interact. These new and innovative functions will bring with them new questions and many overt and covert ethical considerations. AI has the potential to ease the human resources crisis in healthcare by facilitating diagnostics, decision-making, big data analytics and administration, among other tasks (Meskó, Hetényi and Győrffy, 2018). In considering the rise of big data, many believe that we have exceeded our worst fears about Big Brother and the future of technological domination. This leads us to the question of whether technologies are empowering us or enslaving us. Technologies are becoming more and more capable of performing tasks previously assigned to humans, and humans are becoming more dependent on the capabilities of technology (Gertz, 2018, p. 2). In all this we must first tackle the technological, ethical and legal obstacles. The human resource crisis is widening worldwide, and it is obvious that it is not possible to provide care without a knowledgeable workforce. How can disruptive technologies in healthcare help solve the variety of human resource problems? Will technology empower physicians or replace them? How can the medical curriculum, including post-graduate education, prepare professionals for the meaningful use of technology? Many fear that AI technologies will replace human work. There is intelligent power in many AI devices that can work more efficiently than humans in some cases. Will these devices ever match human intelligence, and can they process ethical issues? These questions have been mounting for decades. All of life will be altered in some manner.

2. Moving Forward

Every second post-millennial believes that they will work together with robots and artificial intelligence (AI) within 10 years (Meskó, Hetényi and Győrffy, 2018). We examine what this expectation means for the future of the workforce, and whether it has any implications for the healthcare industry. Many robots are already being used to assist in surgery and other health-related needs. As a game-changing technology, robotics will naturally create ripple effects through society, and some of them may become devastating storms. As clearly stated by Lin, Abney and Jenkins (2017): "It is no surprise that 'robot ethics' - the study of these effects on ethics, law and policy - has caught the attention of governments, industry, and the broader society, especially in the past several years." As robots are granted more autonomy, they accumulate more data and find more applications. Among other bizarre events, a robot car has killed its driver, and a kamikaze police robot bomb has killed a sniper. Given these new and evolving worries, we now enter the second generation of these debates: robot ethics 2.0. Without presuming much familiarity with either robotics or ethics, there are many discussions about accessibility and about issues relevant to policymakers and the broader public, as well as academic audiences. Moving beyond tangible needs, there are new use cases for robots and new challenges: not just robot cars, but also space robots, AI, and the Internet of Things (as massively distributed robots), all affecting many global perspectives. Many of these provide access more quickly and efficiently.

As robots leave their origins of creation, the industrial production facilities, their diversity is increasing, and humans will probably interact with many different kinds of robots. Robots have infiltrated many global scenes and are integrated into societies. There is great diversity among present and future robots because of evolving needs, continued research and expanding possibilities of use. The very recent phenomenon of the Internet of Things (IoT) represents a unique and special case of robots. There are claims that, because of the IoT's nature, two layers of ethical concern emerge. Safety is the first layer, because of the physical nature of the IoT; the second layer concerns the ethics of information: its production, storage and usage. Together, these two layers of ethical concern make the IoT important for every future user, designer and policymaker. In many ways the most pressing concerns are the possible implications and dangers of artificial beings superior to human mental capacities (Waldheuser, 2018). Some of the new designs include companion robots, therapeutic robots and robots serving as a kind of colleague at work. These designs assimilate anthropomorphic traits, including factors such as being trustworthy and friendly.

As AI devices exhibit more human traits, each device becomes more easily integrated into daily tasks. Other areas of impact include education and expanded possibilities for learning. These technologies, along with those emerging within the new cyber-enabled landscape of social networking and advancing neural computing in a global technological workforce, are disrupting traditional educational practice, producing new learning processes, environments and tools, and expanding scientific discovery beyond anything this world has ever seen (Psotka, 2013). Technological simulations and games provide new methods of learning and practical ways to implement ideas. In the future, robots and other virtual tools will be used to extend the possibilities of education. Tested learning algorithms open many new possibilities: each step of experience is potentially used in many weight updates, which allows for greater data efficiency (Mnih et al., 2015), and each new iteration can generalize past experience to new situations (see the sketch below). These often replicate the human processes of learning. Productivity of learning and the resulting outputs are among the most important aspects of education, and each can be improved by integrating immersive technologies and AI devices, fostering a kind of revolution in the manner in which they are used and implemented.
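To make the data-efficiency point concrete, here is a minimal sketch of the experience-replay mechanism that Mnih et al. (2015) describe: transitions are stored in a buffer and sampled repeatedly, so one step of real experience can feed many weight updates. Only the buffer follows the published idea in outline; the commented loop and the names env, step_and_observe and q_update are hypothetical placeholders, not part of any particular library.

```python
# A minimal sketch of the experience-replay idea behind Mnih et al. (2015):
# transitions are stored once but sampled many times, so each step of real
# experience can contribute to many weight updates.
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done) tuples."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions evicted first

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Each call draws a fresh random batch, so over the course of training
        # a single stored transition is typically reused in many updates.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

# Illustrative use (env, step_and_observe and q_update are hypothetical
# placeholders, not part of any particular library):
#   buffer = ReplayBuffer()
#   for step in range(total_steps):
#       buffer.add(*env.step_and_observe())   # one step of new experience
#       q_update(buffer.sample())             # one weight update from replay
```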

3. A Bumpy Road to Revolution

A revolution can bring about significant change to a society. In the early days of desktop computing, many feared the idea of replacing employees with computer technologies. A revolution is often a sudden change that displaces and modifies populations and ways of life. Producers and innovators see the changes as very positive and important, while others fear the many possibilities and the displacement of what is considered "normal". The first industrial revolution, at the end of the 18th century, spread mechanization through society. The second revolution resulted from massive technological advancement in industry and the emergence of new sources of energy; some of the new developments included automobiles and planes. The next revolution, in the second half of the 20th century, was the rise of electronics, telecommunications and computers, with significant developments in space expeditions, research and biotechnology. Currently, many consider that the fourth revolution includes aspects such as the Internet, artificial intelligence (AI) and virtual reality. Many use social media platforms to connect, learn and share information. An important trend is the development of technology-enabled platforms that give people more control over buying, selling and investing. Using these very accessible devices and platforms also provides plenty of data for others to use and analyze. This vulnerability of openly accessible data causes some to fear the uncertainty of its use.

When you think of artificial intelligence, what comes to mind first? Is it robotics? Big data? Or maybe its potential for industry disruption? For many, the initial thought of AI is limited to its technological applications and the possible ways it could change our lives. Just as the initial introduction of personal computers caused anxiety about replacing human interaction and work, the new developments in AI cause people to fret about being replaced. Workers could be displaced by a much more efficient manner of completing basic work tasks. Even in education, a robot or other device could be teaching and demonstrating techniques. This is not necessarily a "bad thing", as it is important for society to begin thinking about a future with AI. That said, there is much more to this emerging technology that requires attention and consideration.

Speculating about future technologies makes it very complex to determine the impact of AI. A critical point in the discussion of AI is its ethical implications. The ethics of AI are often ignored in consideration of the topic, posing a fundamental dilemma for the field. Broad theoretical speculation about AI has produced a sense of public fear surrounding the technology, instilling the idea that self-learning machines will eventually consume the human race. Though this may sound a bit silly, the idea in itself is not far off. Given that the practical reach of AI is relatively unknown, the public is left to speculate about its societal reach and potential consequences. It is critical that we confront this uncertainty with knowledge and assurance, guiding creativity in a positive direction. Deep learning is among the most successful machine learning techniques for solving a variety of tasks, including language translation, image identification and generation, and mapping. Many challenges are coming to light in the development of these technologies, and an emphasis on ethics in AI can help mitigate concerns in the years to come; some of these concerns include the disclosure of private information. Prioritizing ethics in AI will not only give structure to the technology and its application but will simultaneously pave the way for more effective and trustworthy systems that will greatly benefit our world.

There are many factors that play into the creation of an AI-based system. One of the most important components necessary for the technology's effective establishment is the utilization of data. Data is the driving force in the development of AI, serving as the foundational basis of its construction. Without data, AI systems would be practically useless, as a deep learning algorithm would have no information to use for computation. Furthermore, AI tools require large amounts of data to produce the most accurate outcomes possible. Given the importance of data in developing AI technologies, we ask ourselves, "How will we gather all of this information?" It is within this relationship that we find our first ethical crossroads. Technology today has given us the ability to track and record high volumes of data, often without users knowing who holds the data or how it is used. This raises a big red flag for individual privacy, as user information is practically free for public consumption. The issue at hand is one of broad public concern. In an article published by Big Four accounting firm Deloitte, Dalmia and Schatsky (2019) highlight how private sector companies and governments alike have placed user privacy in the back seat, taking advantage of data availability to accomplish certain activities. They state that "Citizens face widespread threats to their privacy, such as data collected on smartphones, while governments could potentially examine a citizen's online activity. Law enforcement agencies worldwide are deploying facial recognition technology, and retail outlets have begun cataloging shoppers with facial recognition, which can be matched to their credit cards—often without customers' awareness or consent". The two examples provided by Deloitte are just a glimpse into how privacy is endangered by the accessibility of user data.

In the spring of 2020, there was great worldwide concern about the COVID-19 virus (coronavirus). The impact was great, and infections and deaths mounted. In light of this, many countries decided to push tracking apps to help slow the spread and monitor people as they moved from place to place. India decided to force people to use its COVID-19 app, unlike any other democracy (O'Neill, 2020). Millions of Indians have no choice but to download the country's tracking technology if they want to keep their jobs or avoid reprisals. It tracks Bluetooth contact events and location, as many other apps do, but also gives each user a color-coded badge showing infection risk. Aarogya Setu (which means "a bridge to health" in Hindi) also offers access to telemedicine, an e-pharmacy, and diagnostic services. It is whitelisted by all Indian telecom companies, so using it does not count against mobile data limits. The Australian government launched a smartphone app called COVIDSafe to find and alert the contacts of people infected with the coronavirus. The idea is that such digital contact tracing will identify people potentially exposed to the coronavirus who should self-isolate, and that they will voluntarily do so. Skeptics worry the apps will amount to a high-tech distraction. Advocates of centralized apps say the design makes it easy to check whether the right people are getting notifications: researchers can see all the phones that got an alert and whether those users later reported symptoms or a positive test through the app. Privacy issues used to center on evading online activity trackers as they follow you around with ads for things you don't want (or do you?). Now exposed as central to all too many political and ethical scandals, data privacy has become one of the defining social and cultural issues of our era (Meehan, 2019). In many cases there are travel restrictions as well as tracking of individuals to monitor all movement. Some of this is important for health concerns, but some may only promote an underlying agenda.
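As an illustration only, the sketch below shows how a contact-tracing app of this kind might convert Bluetooth contact events into a color-coded risk badge. The ContactEvent fields, weights, and thresholds are invented for the example; they do not represent the actual scoring logic of Aarogya Setu, COVIDSafe, or any real app.

```python
# A hypothetical sketch of how a contact-tracing app might turn Bluetooth
# contact events into a color-coded risk badge. The fields, weights, and
# thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class ContactEvent:
    duration_minutes: float      # how long the two phones stayed in range
    distance_estimate_m: float   # rough distance inferred from signal strength
    other_user_infected: bool    # did the contact later report a positive test?

def risk_badge(events):
    """Map a user's recent contact events to a green/yellow/red badge."""
    score = 0.0
    for e in events:
        if not e.other_user_infected:
            continue
        # Longer, closer contacts with infected users contribute more risk.
        proximity_weight = 1.0 if e.distance_estimate_m < 2.0 else 0.3
        score += e.duration_minutes * proximity_weight
    if score >= 15:
        return "red"     # advise isolation and testing
    if score > 0:
        return "yellow"  # some exposure; monitor symptoms
    return "green"       # no known exposure

events = [ContactEvent(30, 1.0, True), ContactEvent(5, 4.0, False)]
print(risk_badge(events))  # -> "red"
```

Real apps weigh many more signals, and where such scoring happens (on the phone or on a central server) is precisely what divides the centralized and decentralized designs discussed above.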

While we could go on about the global issue of data privacy and its impact on AI, there are other, equally important, ethical predicaments that the technology faces. One of these surrounds the creation of algorithms in an AI system. By definition, an algorithm is "a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer". In other words, an algorithm is a series of steps placed together to accomplish something and produce an output. In the development of an AI system, algorithms are what data scientists and technicians use to apply data and execute machine learning capabilities. Why are algorithms relevant to the ethics of AI? Though AI systems are centered around the idea of autonomy, human intervention is required to write the algorithm and develop the technology into a functioning model. This has raised concern about potential biases and discrimination. In addition, AI systems can develop biases on their own, based on the information they are given. Whether through the data gathered, the way an algorithm is constructed, or the results of deep learning, there is significant room for error in the process, potentially hindering the effectiveness of AI.

The task of reducing bias in data and algorithms is complex, as its problematic source runs deeper than the technical perspective. While AI systems do a sufficient job of achieving their purpose as computing models, they often fail to account for non-quantitative factors that play into measurement beyond numbers. In an article featured in the MIT Technology Review, Karen Hao (2019) identifies this as a "lack of societal context." In formulating this viewpoint, Hao references a recently published paper from the Data & Society Research Institute that offers a real-world scope for the analysis: the way in which computer scientists are taught to frame problems often is not compatible with the best way to think about social problems. For example, Andrew Selbst, a postdoc at the Data & Society Research Institute, identifies what he calls the "portability trap." Within computer science, it is considered good practice to design a system that can be used for different tasks in different contexts. "But what that does is ignore a lot of social context. You can't have a system designed in Utah and then applied in Kentucky directly because different communities have different versions of fairness. Or you can't have a system that you apply for 'fair' criminal justice results then applied to employment. How we think about fairness in those contexts is just totally different" (Selbst et al., 2019, p. 2). The point here is that AI systems require careful specialization in order to produce accurate output. Because an algorithm at its core is unable to account for differences in demographics and context, its overall effectiveness is called into question.
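Selbst's portability trap can be made concrete with a small, self-contained sketch. The two "communities" below are synthetic and purely illustrative, each encoding a different version of the "right" answer; the point is only that a rule fitted in one context can look near-perfect there and still degrade sharply when ported unchanged to another.

```python
# A toy illustration of Selbst's "portability trap": a model fitted in one
# context is ported to another where the underlying relationship differs.
# The data here are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_context(n, threshold):
    """In each context, the 'correct' decision depends on a different cutoff."""
    x = rng.uniform(0, 10, size=n)
    y = (x > threshold).astype(int)
    return x, y

x_a, y_a = make_context(1000, threshold=3.0)   # community A's notion of the rule
x_b, y_b = make_context(1000, threshold=7.0)   # community B's notion differs

# "Train" the simplest possible model on context A: pick the best cutoff there.
candidate_cutoffs = np.linspace(0, 10, 101)
accuracy_a = [((x_a > c).astype(int) == y_a).mean() for c in candidate_cutoffs]
best_cutoff = candidate_cutoffs[int(np.argmax(accuracy_a))]

# Port the learned rule, unchanged, to context B.
acc_on_a = ((x_a > best_cutoff).astype(int) == y_a).mean()
acc_on_b = ((x_b > best_cutoff).astype(int) == y_b).mean()
print(f"cutoff learned in A: {best_cutoff:.1f}")
print(f"accuracy in A: {acc_on_a:.2f}, accuracy in B: {acc_on_b:.2f}")
# The same system that looks near-perfect in A performs far worse in B,
# because each context encodes a different version of the "right" answer.
```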

The issues surrounding artificial intelligence with regard to data management and process control are clear and prescriptive in their own right, but what about the broader predicament the technology has incurred? Governance and policy are critical elements that encapsulate all of the activities and methodologies associated with AI. An appropriate legal framework covering the technology is essential to its development and deployment, both in the present and in the future. While there are existing laws that apply to AI, there is a lack of AI-specific legislation on a global scale. As corporations across a variety of industries begin to move in on AI and other emerging technologies, it is vital that international guidelines be set to steer business practices in the right direction. This area of concern is one of the most commonly discussed topics in relation to AI, and the private sector is at the forefront of the conversation. In a white paper focused on the issues of AI governance, Google writes, "To date, self- and co-regulatory approaches informed by current laws and perspectives from companies, academia, and associated technical bodies have been largely successful at curbing inopportune AI use. We believe in the vast majority of instances such approaches will continue to suffice, within the constraints provided by existing governance mechanisms (e.g., sector-specific regulatory bodies). However, this does not mean that there is no need for action by government. To the contrary, this paper is a call for governments and civil society groups worldwide to make a substantive contribution to the AI governance discussion" (Google, 2019). As a leader within the AI community, Google has identified the need for a higher level of engagement in tackling the issue. The tech giant is not alone, as several other companies have published similar literature to bring attention to the topic.

4. Ethics as an Antidote

Artificial intelligence faces an array of conflicts as it continues to bloom within society. Fortunately, we are early enough in the technology's lifespan to address these obstructions without harming its advancement. In our quest to get ahead of AI, many of the solutions we seek may be found in the prioritization of ethics. Unlike a technical approach, ethics is concerned with morality and the behavioral principles that differentiate between what is "good" and "bad." For AI, this introduces an aspect of the technology entirely separate from its core composition. With its dominant focus on the numeric consumption of information, there are many things that AI as a technology is unable to account for. In an article featured in Scientific American, Shohini Kundu (2019) recognizes this quandary, stating, "Consistency is indispensable to ethics and integrity. Our decisions must adhere to a standard higher than statistical accuracy; for centuries, the shared virtues of mutual trust, harm reduction, fairness and equitability have proved to be essential cornerstones for the survival of any system of reasoning. Without internal logical consistency, AI systems lack robustness and accountability—two critical measures for engendering trust in a society. By creating a rift between moral sentiment and logical reasoning, the inscrutability of data-driven decisions forecloses the ability to engage critically with decision-making processes." Kundu's observation highlights how applying ethics to AI can bring a positive infusion of humanistic cognition into its developmental process.

Given the general novelty of artificial intelligence, industry leaders within the private sector have answered the call to apply ethics to its construction. This has taken forms ranging from general literacy materials to explicit guidelines for how to utilize the technology in the best ways possible. IBM, known for its promotion of AI, recently published a document for public consumption that focuses solely on the importance of ethics in the AI creation process. In the paper, the company bridges ethics and AI, stating, "As designers and developers of AI systems, it is an imperative to understand the ethical considerations of our work. A tech-centric focus that solely revolves around improving the capabilities of an intelligent system doesn't sufficiently consider human needs. An ethical, human-centric AI must be designed and developed in a manner that is aligned with the values and ethical principles of a society or the community it affects" (IBM, 2019). Beyond comprehensive knowledge, companies are offering more business-driven perspectives to inform the public. In an article titled "Confronting the Risks of Artificial Intelligence," tier-one consulting firm McKinsey & Company highlights ethics as a vital consideration in the use of AI within a business process, stating, "Another imperative is to engage in a serious debate about the ethics of applying AI and where to draw lines that limit its use. Collective action, which could involve industry-level debate about self-policing and engagement with regulators, is poised to grow in importance as well. Organizations that nurture those capabilities will be better positioned to serve their customers and society effectively; to avoid ethical, business, reputational, and regulatory predicaments; and to avert a potential existential crisis that could bring the organization to its knees" (Cheatham, Javanmardian and Samandari, 2020). These articles, like many others, give insight into how companies are approaching AI, and how society can apply a holistic perspective to the technology in an effective manner.

5. Conclusion

In today's realm of Artificial Intelligence, uncertainty hovers over its foundation as we continue to learn its reach and limitations. Even in the preliminary phases of understanding AI, we as a society have generated countless questions and concerns surrounding the topic. In light of this observation, it is imperative that we consider all of these questions with equal attention and care. The average person's engagement with AI is dominated by theoretical visions of the technology, letting the science-fiction portion of our minds run wild. Applied at a larger scale, this results in a potential neglect of understanding exactly how these technologies can be created and made to mirror human interaction. Ethics in the formation of AI systems is just as important as their technical attributes, serving as the foundation for all of the "exterior" considerations in their application. Debates continue about the future, such as the impact on our lives of losing control over our data. There is much to consider in the use of new biotechnology and AI. All of these innovations redefine human roles and rights, and they change overall personal views of life span and health, bringing many to the point of seeking new moral and ethical boundaries.
Society often overlooks or even fails to consider the significance of ethics in AI until the technology has already been implemented and integrated, and this must change. As we continue our journey in developing AI, it is the responsibility of academia, industry leaders, and government institutions to further inform the general public of the ethical implications of AI, creating a structured and sustainable path for growth in the future.


References

Cheatham, B., Javanmardian, K. and Samandari, H. (2020), "Confronting the Risks of Artificial Intelligence", McKinsey Quarterly, available at: https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/confronting-the-risks-of-artificial-intelligence (accessed 15 June 2020).

Dalmia, N. and Schatsky, D. (2019), "The Rise of Data and AI Ethics", available at: https://www2.deloitte.com/us/en/insights/industry/public-sector/government-trends/2020/government-data-ai-ethics.html#endnote-sup-17 (accessed 15 June 2020).

Gertz, N. (2018), Nihilism and Technology. Rowman & Littlefield, London.

Hao, K. (2019), “This is How AI Bias Really Happens–And Why It’s So Hard to Fix”, MIT Technology Review, February 4, available at: www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/ (accessed 25 April 2020).

O'Neill, P. H. (2020), "India is Forcing People to Use its Covid App, Unlike Any Other Democracy", MIT Technology Review, available at: https://www.technologyreview.com/2020/05/07/1001360/india-aarogya-setu-covid-app-mandatory/ (accessed 23 June 2020).

Kundu, S. (2019), "Ethics in the Age of Artificial Intelligence", Scientific American, available at: https://blogs.scientificamerican.com/observations/ethics-in-the-age-of-artificial-intelligence/ (accessed 15 June 2020).

Google (2019), "Perspectives on Issues in AI Governance", available at: https://ai.google/static/documents/perspectives-on-issues-in-ai-governance.pdf (accessed 15 June 2020).

IBM (2019), “Everyday Ethics for Artificial Intelligence”, available at: https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf (accessed 15 June 2020).

Meehan, M. (2019), “Data Privacy Will Be the Most Important Issue In The Next Decade”, Forbes, 26 November, available at: https://www.forbes.com/sites/marymeehan/2019/11/26/data-privacy-will-be-the-most-important-issue-in-the-next-decade/#56a453071882 (accessed 15 June 2020).

Meskó, B., Hetényi, G. and Győrffy, Z. (2018), "Will Artificial Intelligence Solve the Human Resource Crisis in Healthcare?", BMC Health Services Research, Vol. 18, 545. doi: 10.1186/s12913-018-3359-4.

Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., … and Petersen, S. (2015), "Human-level Control Through Deep Reinforcement Learning", Nature, Vol. 518, pp. 529-533. doi: 10.1038/nature14236.

Lin, P., Abney, K. and Jenkins, R. (2017), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, Oxford University Press, Oxford.

Psotka, J. (2013), "Educational Games and Virtual Reality as Disruptive Technologies", Educational Technology & Society, Vol. 16 No. 2, pp. 69-80.

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S. and Vertesi, J. (2019), "Fairness and Abstraction in Sociotechnical Systems", in Chouldechova, A. and Diaz, F. (Eds), FAT* '19: Proceedings of the Conference on Fairness, Accountability, and Transparency, ACM, New York, pp. 59-68.

Shaw, J. (2019), "Artificial Intelligence and Ethics", Harvard Magazine, Vol. 30, pp. 1-11.

Waldheuser, A. (2018), Review of Lin, P., Abney, K. and Jenkins, R. (Eds), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, Ethical Theory and Moral Practice, Vol. 21, pp. 751-753. doi: 10.1007/s10677-018-9909-3.
