
Contextualizing Artificially Intelligent Morality: A Meta-Ethnography of Top-Down, Bottom-Up, and Hybrid Models for Theoretical and Applied Ethics in Artificial Intelligence

Content License: cc-by-nc-sa


Introduction

As a meta-ethnography, this paper takes an anthropological approach to the culture and development of artificial intelligence (AI) ethics and practices. It is in no way exhaustive, but it magnifies some of the key angles and tensions in the ethical AI field. We use the previously published framework of top-down and bottom-up ethics in AI and examine what this means in three different contexts: theoretical, technical, and political. Strategies for artificial morality have been discussed within top-down, bottom-up, and hybrid frameworks in order to create a foundation for philosophical and technical reflections on AI system development. Although distinctions can be made between ethics and morality, we use the terms interchangeably. Top-down ethics in AI can be described as a rule-based system of ethics. The rules can come from philosophical moral theories (the theoretical perspective), from top-down programming (the technical perspective), or from principles designated by authorities (the political perspective). Bottom-up ethics in AI contrasts with top-down approaches and works without overarching rules. It can come from learned experiences (the theoretical perspective), from machine learning and reinforcement learning (the technical perspective), or from everyday people and users of technology calling for ethics (the political perspective). Hybrid versions of top-down and bottom-up methods are a mixture of the two, or else somewhere in the middle, and can have various outcomes. The conclusions from this analysis show that ethics around AI is complex, especially when AI is deployed globally, and its implementation needs to be considered from multiple angles: there is no one correct way to make AI that is ethical. Rather, care must be taken at every turn to right the wrongs of society through the application of AI ethics, which could create a new way of looking at what ethics means in our current digital age.

This paper contextualizes top-down and bottom-up ethics in AI by analyzing theoretical, technical, and political implementations. Section 2 contains a literature review that states our research contributions, explains how we build on existing research, and outlines the framework used throughout. Section 3 describes the first angle of the framework, the theoretical moral ethics viewpoint: ethics can be formulated from the top down, coming from rules and philosophies, or from the bottom up, mirroring the behaviors of people and what is socially acceptable to individuals and groups, which varies greatly by culture. Section 3.1 uses fairness as an example of the complexity of theoretical ethics in applied AI. Section 4 concerns the technical perspective, exemplified by programming from the top and applied machine learning from the bottom: essentially, how to think about implementing algorithmic policies with balanced data that will lead to fair and desirable outcomes. Section 5 examines top-down ethics dictated by the powers that be and bottom-up ethics derived from the demands of the people: we call this the political perspective. Section 6 reconnects and reintegrates these perspectives, examining how they intertwine, first through the bottom-up method of AI being taught ethics via the example of reinforcement learning (section 6.1). Section 7 combines the perspectives from the top down, illustrated in section 7.1 with examples of principles of AI ethics. Section 8 develops an understanding of hybrid ethics in AI, which incorporates top-down and bottom-up methods, followed by case studies on data mining in Africa (section 8.1) and the varying efficiency of contact-tracing apps for COVID-19 in Korea and Brazil (section 8.2). A discussion follows in section 9, and the paper concludes in section 10. This is an exercise in exploration to reach a deeper understanding of morality around AI. How ethics for AI works in reality is a blend of all of these theories and ideas acting on and in conjunction with one another. The aim of this qualitative analysis is to provide a framework for understanding and thinking critically about AI ethics, and our hope is that it will influence the development of ethical AI in the future.

Literature Review

The purpose of this paper is to deepen the understanding of the development of ethics in AI by cross-referencing a political perspective into the framework of top-down, bottom-up, and hybrid approaches, which previously covered only theoretical and technical angles. Adding the third, political perspective fills some of the gaps left by viewing AI ethics only theoretically and technically. The political perspective allows for an understanding of where the influence of power affects ethics in AI. Issues of power imbalance are often systemic, and systems of oppression can now be replicated without oversight through the use of AI, which learns from past human behaviour, for example by favoring white faces and names in algorithms used for hiring and loans. Furthermore, it is important not to ignore the fact that AI serves to increase wealth and power for large corporations and governments, who have the most political influence. While the influence of politics on the development of AI ethics has been described previously, it has never been discussed in comparison with the technical and theoretical lenses while also utilizing the top-down, bottom-up, and hybrid frameworks for development.

By engaging with and expanding on the framework of top-down, bottom-up, and hybrid ethics in AI, we can gain a deeper understanding of how ethics could be applied to technology. There has been debate about whether ethical AI is even possible, and not only due to programming restrictions but because AI exists in an unethical world which values wealth and power for the people already in power above all else. This is why the political perspective is so vital to this discussion. By political, we do not refer to any particular political parties or sides, but rather to a way of talking about power, both corporate and governmental. As defined in the Merriam-Webster dictionary, political affairs or business refer to ‘competition between competing interest groups or individuals for power and leadership (as in a government)’. The articles we chose to include in this literature review paint a picture of the trends in AI ethics development over the past two decades. Table 1 features a list of primary research contributions and the frameworks they discuss. The first mention of the framework we use to talk about ethics in AI from the top down, the bottom up, and a mix of the two (hybrid) originated with Allen, Smit, and Wallach in 2005. The same authors penned a second article on this framework for machine morality in 2008. They described two different angles of approaching AI morality: the theoretical angle and the technical angle (but not the political angle). In 2017, an article was written that explored humanity’s ethics and its influence on AI ethics, for better or worse. Although the authors did not acknowledge politics or power directly, they did discuss the difference between legality and personal ethics. Next we see a great deal of focus on AI principles, exemplified by the work of Whittlestone et al. in 2019, which discusses political tensions and other ethical tensions in principles of AI ethics outright. This is important because it questions where power is situated, a central tenet of our addition to this area of research. What these papers describing the political lens and the tensions of power and political influence lack is the framework of top-down, bottom-up, and hybrid ethical AI implementations.

Table 1: Literature contributions including frameworks cross-referenced. (Table not reproduced here; see the original paper.)

Our methodology utilizes the top-down, bottom-up, and hybrid framework for the development of AI ethics and cross-references it with the technical, theoretical, and political lenses which we will explore more deeply in the following sections. Table 2 lists these cross-references in simplified terms for the reader.

Table 2: Framework for contextualizing AI ethics. (Table not reproduced here; see the original paper.)

Theoretical AI Morality: Top-Down vs Bottom-Up

The first area to consider is ethics from a theoretical moral perspective. The primary point in this part of the analysis is that ethics has historically been made for people, and people are complex in how they understand and apply ethics, especially top-down ethics. At an introductory glance, “top-down” ethical theories amount to rule-utilitarianism and deontological ethics, where “bottom-up” refers to case-based reasoning. Theoretical ethics from a top-down perspective includes examples such as the Golden Rule, the Ten Commandments, consequentialist or utilitarian ethics, Kant’s moral imperative and other duty-based theories, Aristotle’s virtues, and Asimov’s laws of robotics. These come from a wide range of perspectives, including literature, philosophy, and religion. Most of these collections of top-down ethical principles are made solely for humans. The one exception on this list that does not apply to people is, of course, Asimov’s laws of robotics, which were written precisely for AI. However, Asimov himself said that they were flawed. Asimov used storytelling to demonstrate that his three laws, plus the ‘zeroth’ law added in 1985, had problems of prioritization and potential deadlock. He showed that ultimately the laws would not work, despite the surface-level appearance of putting humanity’s interest above that of the individual. This has been echoed by other theorists regarding any rule-based system implemented for ethical AI. Asimov’s laws of robotics seemingly encapsulate a wide array of ethical concerns, giving the impression of being intuitive and straightforward; yet in each of his stories, we can see how they fail time after time. Much of science fiction does not predict the future as much as warn us against its possibilities.

The top-down rule-based approach to ethics presents different challenges for AI than for the human systems in which it originated. As humans, we learn ethics as we go, through observation of our families and communities, including how we react to our environment and how others react to us. Etzioni and Etzioni made the case that humans first acquire moral values from those who raise them, although it can be argued that individuals make decisions based on their chosen philosophies. As people are exposed to various inputs from new groups, cultures, and subcultures, they modify their core value systems, gradually developing their own personal moral matrix. This personal moral mix could be thought of as a hybrid model of ethics for humans. The question is, how easy and practical is it to take human ethics and apply them to machines? Some would say it is impossible to teach AI right and wrong, if we could even agree on how to define those terms in the first place. Researchers have stressed the importance of differences in the details of ethical systems across cultures and between individuals, even though shared values exist that transcend cultural differences. There simply is not one set of ethical rules that will be inclusive of everyone.

The Example of Fairness in AI

Many of the common systems of values that experts agree need to be considered in AI include fairness, or, taken further, justice. We will work with this concept as an example. We have never had a fair and just world, so teaching an AI to be fair and just does not seem possible. But what if it were? We get into gray areas when we imagine the open-ended potential future of AI. Imagining how it could actually improve the state of the world, as opposed to imagining how it could lead to further destruction of humanity, could be what propels us in a more positive direction.

Artificial intelligence should be fair. The first step is to agree on what a word like fairness means when designing AI. Many people pose the question: Fairness for whom?

Then there is the question of how to teach fairness to AI. AI systems, as we know them, are machines. Machines are good at math. Mathematical fairness and social fairness are two very different things. How can this be codified? Can an equation that solves or tests for fairness between people be developed?
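To see what a mathematical proxy for fairness can look like, consider demographic parity, one of the most widely discussed statistical fairness criteria. The sketch below is our own illustration with hypothetical loan data, not a formula from the original paper:

```python
# A minimal sketch of one common attempt to "codify" fairness:
# demographic parity, which asks whether a model's positive-outcome
# rate is similar across groups. All data here is hypothetical.

def demographic_parity_gap(outcomes, groups):
    """Absolute difference in positive-outcome rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    values = list(rates.values())
    return abs(values[0] - values[1])

# Hypothetical loan decisions (1 = approved) for two groups, A and B.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(outcomes, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

Even this tiny example shows the limits of codification: a small parity gap says nothing about justice in individual cases, and different statistical criteria of fairness can be mathematically impossible to satisfy simultaneously.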

Most AI is built to solve problems of convenience and to automate tedious or monotonous tasks in order to free up our time and make more money. There is a disconnect between what Allen et al. refer to as the spiritual worldviews from which much of our ethical understanding originates and the materialistic worldview of computers and robots; the two are not completely compatible.

This disconnect can be seen every day in the algorithms that discriminate and codify what retailers think we are most likely to consume. For example, the ads shown to us as we scroll often contain elements that do not align with our values but rather appeal to our habits of consumption. At the core level, these values are twisted to benefit the current capitalistic systems and have little to do with actually improving our lives. We cannot expect AI to jump from corporate materialism to social justice, or to reach a level of fairness, simply by tweaking the algorithms.

Teaching ethics to AI is extremely challenging – if not impossible – on multiple fronts. In order to have ethical AI, we need first to evaluate our own ethics. Douglas Rushkoff, a well-known media theorist, author, and Professor of Media at City University of New York, wrote:

“…the reason why I think AI won’t be developed ethically is because AI is being developed by companies looking to make money – not to improve the human condition. . . My concern is that even the ethical people still think in terms of using technology on human beings instead of the other way around. So, we may develop a ‘humane’ AI, but what does that mean? It extracts value from us in the most ‘humane’ way possible?” This is a major consideration for AI ethics, and the realities of capitalism don’t align with ethics and virtues such as fairness. One of the biggest questions when considering ethics for AI is how to implement something so complex and philosophical into machines that are, by contrast, good at precision. Some say this is impossible: “Ethics is not a technical enterprise, there are no calculations or rules of thumb that we could rely on to be ethical. Strictly speaking, an ethical algorithm is a contradiction in terms.” (Vachnadze, 2021) We turn next to the potential for technical applications of AI morality.

Technical AI Morality: Top-Down vs Bottom-Up

One way to think about top-down AI from the technical perspective, as noted by Eckart, is to think of it as a decision tree, often implemented in the form of a call-center chat bot. The chat bot guides the user through a defined set of options depending on the answers inputted. Eckart continues by describing bottom-up AI as what we typically think of when we hear ‘artificial intelligence’: systems utilizing machine learning and deep learning. As examples, we can think of the AI used for diagnostic systems in healthcare and for self-driving cars. These bottom-up systems can learn automatically, without explicit programming from the start. Top-down systems can be very useful for some tasks that machines can be programmed to do, like the chat bot example above. However, if they are not monitored, they can make mistakes, and it is up to us as people to catch and correct those mistakes, which is not always possible when black boxes are in effect. There may also be a lack of exposure to sufficient data to make a decision or prediction in order to solve a problem, leading to system failure. Herein lies the value of having a ‘human in the loop’. This gets complicated even further when we move into attempting to program the more theoretical concepts of ethics.
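A minimal sketch of the decision-tree chat bot described above may help; the tree, prompts, and answers here are hypothetical:

```python
# A minimal sketch of "top-down" AI in this sense: a hand-authored
# decision tree, as in a call-center chat bot. Every behavior is an
# explicit rule; nothing is learned. (Illustrative only.)

DECISION_TREE = {
    "question": "Is your issue about billing or a technical problem?",
    "billing": {
        "question": "Do you want a refund or to update payment details?",
        "refund": "Routing you to the refunds team.",
        "update payment details": "Opening the payment settings page.",
    },
    "technical problem": "Routing you to technical support.",
}

def run_chatbot(node, answers):
    """Walk the tree using scripted user answers; return the outcome."""
    while isinstance(node, dict):
        print("BOT:", node["question"])
        answer = answers.pop(0)
        print("USER:", answer)
        node = node[answer]  # an unrecognized answer would need a fallback rule
    return node

print("BOT:", run_chatbot(DECISION_TREE, ["billing", "refund"]))
```

The explicit rules are what make such a system auditable, but also brittle whenever an input falls outside the tree.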

Bottom-up AI from the technical perspective, described in depth below, follows the definition of machine learning. The system is given data to learn from, and it uses that information from the past to predict and make decisions about the future. This can work quite well for many tasks. It can also have many flaws built in, because the world it learns from is flawed. Consider the classic example of harmful biases being learned and propagated through a system, for instance in who gets a job or a loan, because the data from the past reflects biased systems in our society. Technical top-down and bottom-up ethics in AI primarily concerns how AI learns ethics. Machines don’t learn like people do. They learn from the data that is fed to them, and they are very good at certain narrow tasks, such as memorization or data collection. However, AI systems can fall short in areas such as objective reasoning, which is at the core of ethics. Whether coming from the top down or the bottom up, the underlying concern is that teaching ethics to AI is extremely difficult, both technically and socially.
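The bias-propagation point can be made concrete with a deliberately tiny, hypothetical sketch: a “model” that simply learns approval rates from past hiring records will faithfully reproduce their skew:

```python
# A minimal sketch of how a bottom-up learner reproduces historical bias.
# The "model" learns approval frequencies from hypothetical past hiring
# records and replicates their skew. Illustrative only.

historical_records = [
    # (neighborhood, hired) -- neighborhood acting as a proxy attribute
    ("north", 1), ("north", 1), ("north", 1), ("north", 0),
    ("south", 0), ("south", 0), ("south", 1), ("south", 0),
]

def fit(records):
    """Learn P(hired | neighborhood) from past decisions."""
    model = {}
    for key in {r[0] for r in records}:
        outcomes = [hired for nbhd, hired in records if nbhd == key]
        model[key] = sum(outcomes) / len(outcomes)
    return model

model = fit(historical_records)
print(model)  # e.g. {'north': 0.75, 'south': 0.25} -- the bias is now "learned"
```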

Ethics and morality prove difficult to reach an actual consensus on. We live in a very polarized world. What is fair to some will undoubtedly be unfair to others. There are several hurdles to overcome. Wallach et al. describe three specific challenges to address in this matter, paraphrased below:

  • Scientists must break moral decision-making down into its component parts, which presents an engineering task of building autonomous systems that safeguard basic human values.

  • It must be recognized which decisions can be codified and managed by mechanical systems and which cannot.

  • We must learn to design effective cognitive systems capable of managing ambiguity and conflicting perspectives.

Here we also include the hybrid model of top-down and bottom-up ethics for AI, which has a base of rules or instructions but is also fed data to learn from as it goes. This method claims to be the best of both worlds and covers some of the shortcomings of both the top-down and bottom-up models. For instance, self-driving cars can be programmed with laws and rules of the road and can also learn from observing human drivers, as sketched below. In the next section we will explore the political angle of this debate.
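As an illustration of the hybrid pattern (our own sketch, with a random stand-in for the learned component): a bottom-up policy proposes an action, and a top-down rule layer constrains it, loosely analogous to the self-driving example above:

```python
# A minimal sketch of a hybrid architecture: a learned component proposes
# actions (bottom-up), and a hand-written rule layer constrains them
# (top-down). Illustrative only; the "learned policy" is a stand-in.

import random

HARD_RULES = {
    "max_speed": 50,  # a legal speed limit the system may never exceed
}

def learned_policy(observed_speeds):
    """Stand-in for a learned model: imitate observed human driving."""
    return random.choice(observed_speeds)

def hybrid_controller(observed_speeds):
    proposal = learned_policy(observed_speeds)     # bottom-up proposal
    return min(proposal, HARD_RULES["max_speed"])  # top-down constraint

human_speeds = [45, 48, 55, 62]  # humans sometimes speed
print(hybrid_controller(human_speeds))  # never exceeds 50
```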

Political AI Morality: Top-Down vs Bottom-Up

We use the term political to talk about where the power and decision making is coming from, which then has an effect that radiates outward and influences systems, programmers, and users alike. As an example of top-down from a political perspective, this paper will largely concern itself with principles of ethics in AI, often stated by corporations and organizations. Bottom-up ethics in AI from a political standpoint concerns the perspectives of individuals and groups who are not in positions of power, yet still need a voice.

The Asilomar AI Principles are an example of a top-down model, and they have their critiques. This comprehensive list of rules was put out by the powers that be in tech and AI in the hope of offering guidelines for developing ethics in AI. Published in 2017, it is one key example of top-down ethics from officials, including 1,797 AI/robotics researchers and 3,923 other endorsers affiliated with the Future of Life Institute. These principles outline ethics and values that the use of AI must respect, provide guidelines on how research should be conducted, and offer important considerations for thinking about long-term issues. Congruently, another set of seven principles, for Algorithmic Transparency and Accountability, was published by the US Association for Computing Machinery (ACM), addressing a narrower but closely related set of issues. Since then we have seen an explosion of lists of principles for AI ethics. A deeper discussion of principles can be found in section 7.1 of this paper.

The bottom-up side of the political perspective is not as prevalent, but it could look like crowd-collected considerations about ethics in AI, such as from employees at a company, students on a campus, or online communities. The key feature of bottom-up ethics from a political perspective is determination by everyday people, mainly the users of the technology. MIT’s Moral Machine (which collected data from millions of people on their decisions in a game-like program to assess what a self-driving vehicle should do in life-or-death situations) is one example of this. However, it still has top-down implications, such as obeying traffic laws imposed by municipalities. A pure bottom-up, community-driven ethics initiative could include guidelines, checklists, and case studies specific to the ethical challenges of crowdsourced tasks. Even when utilizing bottom-up “crowdsourcing” and employing the moral determination of the majority, these systems often fail to serve minority participants. In a roundtable discussion from the Open Data Institute (ODI), participants found that marginalized communities are uniquely placed to understand and identify the contradictions and tensions of the systems we all operate in. Their unique perspectives could be leveraged to create change. If a system works for the majority, which is often the goal, it may be invisibly dysfunctional for people outside of the majority. This insight is invaluable for alleviating ingrained biases.
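The majority-rule failure mode is easy to state formally. In the hypothetical sketch below, aggregating crowd judgments by simple majority gives a minority position zero influence, even though the system nominally “listened to everyone”:

```python
# A minimal, hypothetical sketch of the failure mode described above:
# aggregating crowd judgments by simple majority erases a minority
# position entirely.

crowd_votes = {
    "majority_group": ["optionA"] * 90,
    "minority_group": ["optionB"] * 10,
}

all_votes = [v for votes in crowd_votes.values() for v in votes]
winner = max(set(all_votes), key=all_votes.count)

print(winner)  # "optionA": the minority's preference has zero influence
```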

There is an assumption that bottom-up data institutions will represent everyone in society and always be benign. Alternatively, there is a counter-argument that their narrow focus leads to niche datasets and lacks applicability to societal values. In the best light, bottom-up data institutions are viewed as revolutionary mechanisms that could rebalance power between big tech companies and communities. An important point to keep in mind when thinking about bottom-up ethics is that there will always be different ideals coming from different groups of people, and the details of the applications are where the disagreements abound.

The Bottom-up Method of AI Being Taught Ethics through Reinforcement Learning

Recombining the theoretical, technical, and political perspectives on bottom-up ethics for AI is a useful analytical thought experiment. Allen et al. describe bottom-up approaches to ethics in AI as those which learn through experience and strive to create environments where appropriate behavior is selected or rewarded, instead of functioning under a specific moral theory. These approaches learn either through the unconscious mechanistic trial and failure of evolution, through engineers or programmers adjusting to new challenges they encounter, or through the learning machine’s own educational development. The authors explain the difficulties of evolving and developing strategies that hold the promise of a rise in skills and standards that are integral to the overall design of the system. Trial and error are the fundamental tenets of evolution and learning, which rely heavily on learning from unsuccessful strategies and mistakes. Even in the fast-paced world of computer processing and evolutionary algorithms, this is an extremely time-consuming process. Additionally, we need safe spaces for these mistakes to be made and learned from, where ethics can be developed without real-world consequences.

Reinforcement Learning as a Methodology for Teaching AI Ethics

Reinforcement learning (RL) is a machine learning technique in which an agent learns by trial and error in an interactive environment, utilizing feedback from its own actions and experiences. Reinforcement learning differs from other forms of learning that rely on top-down rules. Rather, this system learns as it goes, making many mistakes but learning from them, and it adapts by sensing the environment. RL is commonly used in training algorithms to play games, as in AlphaGo and chess programs. When it originated, RL was studied in animals as well as in early computers. The trial-and-error beginnings of this technique have origins in the psychology of animal learning in the early 1900s (Pavlov), as well as in some of the earliest work in AI; these threads coalesced in the 1980s into the modern field of reinforcement learning. RL utilizes a goal-oriented approach, as opposed to having explicit rules of operation. A ‘rule’ in RL can come about as a temporary side effect as the agent attempts to solve the problem, but if the rule proves ineffective later on, it can be discarded. The function of RL is to compensate for machine learning drawbacks by mimicking a living organism as much as possible. This style of learning, which throws the rule book out the window, could be promising for something like ethics, where the rules are not consistent or even agreed upon. Ethics is situation-dependent, so teaching a broad rule is not always sufficient. It is worth investigating whether RL could be methodized to integrate ethics into AI.
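For readers unfamiliar with the mechanics, the following is a minimal tabular Q-learning sketch of the trial-and-error pattern just described: no rules are supplied, only a reward signal, and the learned values play the role of discardable “temporary rules”. The corridor environment is our own toy example, not anything from the original paper:

```python
# A minimal tabular Q-learning sketch: the agent learns to walk right
# along a 5-cell corridor to reach a goal, guided only by reward.

import random

N_STATES, ACTIONS = 5, [-1, +1]        # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != N_STATES - 1:                  # goal is the last cell
        if random.random() < epsilon:             # explore...
            action = random.choice(ACTIONS)
        else:                                     # ...or exploit learned values
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, every non-goal state should prefer +1 (move right).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```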

The problems addressed by RL consist of learning what to do and how to map situations into actions in order to maximize a numerical reward signal. The three most important distinguishing features of RL are: first, that it is essentially a closed loop; second, that the agent is not given direct instructions on what actions to take; and third, that the consequences (reward signals) play out over extended periods of time. Turning ethics into numerical rewards poses many challenges, but it may be a hopeful consideration for programming ethics into AI systems. Critically, the agent must be able to sense its environment to some degree, and it must be able to take actions that affect the state. One of the ways that RL can work in an ethical sense, and avoid pitfalls, is by utilizing systems that keep a human in the loop. “Interactive learning constitutes a complementary approach that aims at overcoming these limitations by involving a human teacher in the learning process.” Keeping a human in the loop is critical for many issues, including those around transparency. Moral uncertainty also needs to be considered, purely because ethics is an area of vast uncertainty and is not an answerable math problem with predictable results. Could an RL program eventually learn how to compute all the different ethical possibilities?
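One way the moral-uncertainty idea is sometimes formalized is by weighting each ethical theory with a credence and choosing the action with the highest expected choice-worthiness. The theories, credences, and scores below are entirely hypothetical placeholders, sketched only to show the arithmetic:

```python
# A minimal sketch of one proposal from the moral-uncertainty literature:
# weight each ethical theory by a credence and pick the action with the
# highest expected "choice-worthiness". All values are stipulated.

credences = {"utilitarian": 0.5, "deontological": 0.3, "virtue": 0.2}

# How choice-worthy each theory rates each candidate action (hypothetical).
choice_worthiness = {
    "disclose_data": {"utilitarian": 0.8, "deontological": 0.2, "virtue": 0.5},
    "withhold_data": {"utilitarian": 0.3, "deontological": 0.9, "virtue": 0.6},
}

def expected_cw(action):
    return sum(credences[t] * choice_worthiness[action][t] for t in credences)

best = max(choice_worthiness, key=expected_cw)
print(best, {a: round(expected_cw(a), 2) for a in choice_worthiness})
```

Whether ethical “scores” of this kind are meaningful at all is exactly the contested point; the sketch shows how the computation would run, not that it should.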

This may take a lot of experimentation. It is important to know the limitations while also remaining open to being surprised. We worry a lot about the unknowns of AI: Will it truly align with our values? Only through experimentation can we find out. Researchers stress that RL systems need a ‘safe learning environment’ where they can learn without causing harm to humans, assets, or the external environment. The gap between simulated and actual environments, however, complicates this issue, particularly when it comes to differentiating societal and human values.

The Top-Down Method of AI Being Taught Ethics

Summarizing top-down ethics for AI brings together the philosophical principles, programming rules, and authoritative control in this area. A common thread among all sets of top-down principles is ensuring AI is used for “social good” or “the benefit of humanity”. These phrases carry few if any real commitments; hence, a great majority of people can agree on them. However, many of these proposed principles for AI ethics are simply too broad to be action-guiding. Furthermore, if these principles are administered by Big Tech or the government in a top-down political manner, a lot could slip under the radar because it sounds good. Relating to the earlier example in section 3.1, ‘fairness’ is something we can all agree is good, but we can’t all agree on what it means. What is fair to one person or group could be deeply unfair to another.

According to Wallach et al., the price of top-down theories can amount to static definitions which fail to accommodate new conditions, or which may potentially be hostile. The authors note that the meaning and application of principled goals can be subject to debate because they are overly vague and abstract. This is a problem that will need to be addressed going forward. A machine does not implicitly know what ‘fairness’ means. So how can we teach it a singular definition when fairness holds a different context for everyone? Next, we turn to principles of AI ethics to explore the top-down method further.

Practical Principles for AI Ethics

Principles of AI are a top-down approach to ethics for artificial intelligence. In the last few years, we have been seeing lists of principles for AI ethics emerging prolifically. These lists are very useful, not only for AI and its impact, but also on a larger social level. Because of AI, people are thinking about ethics in a whole new way: How do we define and digest ethics in order to codify it?

Principles can be broken into two categories: principles for the people who program AI systems to follow, and principles for the AI itself. Some of the principles for people, mainly programmers and data scientists, read like commandments. For instance, The Institute for Ethical AI and ML has a list of eight principles geared toward technologists, which can be viewed in Table 3.

Table 3: Principles and their commitments for technologists to develop machine learning systems responsibly, as described in the practical framework to develop AI responsibly by The Institute for Ethical AI & Machine Learning. (Table not reproduced here; see the original paper.)

Other lists of principles are geared toward the ethics of AI systems themselves and what they should adhere to. One such list consists of four principles published by the National Institute of Standards and Technology (NIST) and intended to promote explainability. These can be viewed in Table 4.

Table 4: Principles and their commitments for responsible machine learning and AI systems. (Table not reproduced here; see the original paper.)

Many of the principles overlap across corporations and agencies. A detailed graphic and write-up published by the Berkman Klein Center for Internet and Society at Harvard gives an overview of forty-seven principles that various organizations, corporations, and other entities are adopting, including where they overlap and their definitions. The authors provide many lists and descriptions of ethical principles for AI and categorize them into eight thematic trends, listed below:

  • Privacy

  • Accountability

  • Safety and security

  • Transparency and explainability

  • Fairness and non-discrimination

  • Human control of technology

  • Professional responsibility

  • Promotion of human values

One particular principle missing from these lists regards taking care of the natural world and non-human life. As Boddington states in her book, Toward a Code of Ethics for Artificial Intelligence (2018), “. . . we are changing the world, AI will hasten these changes, and hence, we’d better have an idea of what changes count as good and what count as bad.” We will all have different opinions on this, but it needs to be part of the discussion. We can’t continue to destroy the planet while trying to create super AI and still be under the illusion that our ethics principles are saving the world.

Many of these principles are theoretically sound, yet they act as a veil that presents the illusion of ethics. This can be dangerous because it makes us feel like we are practicing ethics while business carries on as usual. Part of the reason is that the field of ethical AI development is so new; more research must be done to ensure the overall impact is a benefit to society. “Despite the proliferation of these ‘AI principles,’ there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends.” Principles are a two-sided coin. On one hand, making the stated effort to follow a set of ethical principles is good. It is beneficial for people to be thinking about doing what is right and ethical, and not just blindly entering code that could be detrimental in unforeseen ways. Some principles are simple in appearance yet incredibly challenging in practice. For example, if we look at the commonly adopted principle of transparency, there is quite a difference between saying that algorithms and machine learning should be explainable and actually developing ways to see inside the black box. As datasets get bigger, this presents more and more technical challenges. Furthermore, some of the principles can conflict with each other, which can land us in a less ethical place than where we started. For example, transparency can conflict with privacy, another popular principle. We can run into many complex problems around this, and they need to be addressed quickly and thoroughly as we move into the future.

Overall, we want these concepts in people’s minds: fairness, accountability, and transparency. These are the core tenets and namesake of the FAccT conference, which addresses these principles in depth. It is incredibly important for corporations and programmers to be concerned about the commonly addressed themes of bias, discrimination, oppression, and systemic violence. Yet these principles can make us feel like we are doing the right thing; how much does writing out these ideals actually change things?

In order for AI to be ethical, a great deal has to change, and not just in the tech world. There seems to be an omission of the unspoken principles: the value of money for corporations and those in power, and of convenience for those who can afford it. If we are aiming to create fairness, accountability, and transparency in AI, we need to do some serious work on society to adjust our core principles away from money and convenience and toward taking care of everyone’s basic needs and the Earth.

Could AI be a tool that has a side effect of starting an ethics revolution?

How do we accomplish this? The language that we use is important, especially when it comes to principles. Moss and Metcalf pointed out the importance of using market-friendly terms: if we want morality to win out, we need to justify the organizational resources necessary, when more often than not companies will choose profit over social good. Whittlestone et al. describe the need to focus on areas of tension in AI ethics and point out the ambiguity of terms like ‘fairness’, ‘justice’, and ‘autonomy’. The authors prompt us to question how these terms might be interpreted differently across various groups and contexts. They go on to say that principles need to be formalized into standards, codes, and ultimately regulation in order to be useful in practice. Attention is drawn to the importance of acknowledging tensions between the high-level goals of ethics, which can differ from and even contradict each other. In order to be effective, it is vital to include a measure of guidance on how to resolve different scenarios. In order to reflect genuine agreement, there must be acknowledgement and accommodation of different perspectives and values as much as possible. The authors then introduce four reasons that discussing tensions is beneficial and important for AI ethics:

  • Bridging the gap between principles and practice

  • Acknowledging differences in values

  • Highlighting areas where new solutions are needed

  • Identifying ambiguities and knowledge gaps

Each of these needs ongoing consideration, as these tensions do not get solved overnight. In particular, creating a bridge between principles and practice is important.

“We need to balance the demand to make our moral reasoning as robust as possible, with safeguarding against making it too rigid and throwing the moral baby out with the bathwater by rejecting anything we can’t immediately explain. This point is highly relevant both to drawing up codes of ethics, and to the attempts to implement ethical reasoning in machines.” Codes of ethics and ethical principles for AI are important and help start important conversations. However, it can’t stop there. The future will see more and more ways that these principles are put into action, bringing technologists and theorists together to investigate ways to make them function efficiently and ethically. We must open minds to ideas beyond making money for corporations and creating conveniences, and rather toward addressing tensions and truly creating a world that works for everyone.

The Hybrid of Bottom-Up and Top-Down Ethics for AI

We have reviewed the benefits and flaws of a top-down approach to ethics in AI, and visited the upsides and pitfalls of the bottom-up approach as well. Many argue that the solution lies somewhere in between, in a hybrid model.

“If no single approach meets the criteria for designating an artificial entity as a moral agent, then some hybrid will be necessary. Hybrid approaches pose the additional problems of meshing both diverse philosophies and dissimilar architectures.” Many agree that a hybrid of top-down and bottom-up would be the most effective model for ethical AI. Further, some argue that we need to question the ethics of people, both as the producers and the consumers of technology, before we can start to assess fairness in AI.

Researchers state that hybrid AI combines the most desirable aspects of bottom-up AI, such as neural networks, and top-down AI, also referred to as symbolic AI. Neural networks extract patterns from huge combined data sets; information can then be manipulated and retrieved by rule-based systems utilizing algorithms that manipulate symbols. Further research has observed the complementary strengths and weaknesses of bottom-up and top-down strategies. Raza et al. developed a hybrid program synthesis approach, improving top-down inference by utilizing bottom-up analysis. When we apply this to ethics and values, ethical concerns that arise from outside the entity are emphasized by top-down approaches, while the cultivation of implicit values arising from within the entity is addressed by bottom-up approaches. While the authors stated that hybrid systems lacking effective or advanced cognitive faculties will be functional across many domains, they noted how essential it is to recognize when additional capabilities will be required. Theoretically, hybrid ethics for AI, featuring the best of top-down and bottom-up methods in combination, is incredibly promising; in reality, many of the semi-functional or non-functional applications of supposedly ethical AI prove challenging and have unforeseen side effects. Many real-world examples could be seen as hybrids of ethics in AI, and not all have the beaming qualities of top-down and bottom-up ethics; rather, they represent the messiness and variance of life. Next we will explore a selection of case studies, which reflect some ethical AI concerns in real-world examples from across the globe.

Data Mining Case Study: The African Indigenous Context

Data sharing, or data mining, is a prime example of conflicting principles of AI ethics. On one hand, it is the epitome of transparency and a crucial element of scientific and economic growth. On the other hand, it brings up serious concerns about privacy, intellectual property rights, organizational and structural challenges, cultural and social contexts, unjust historical pasts, and potential harms to marginalized communities. We can reflect on this as a hybrid of top-down and bottom-up ethics in AI, since it utilizes top-down politics and bottom-up data collection, and is theoretically a conflict between the principles of the researchers and those of the researched communities.

The term data colonialism can be used to describe some of the challenges of data sharing, or data mining, which reflect historical and present-day colonial practices, such as in African and Indigenous contexts. When we use terms such as ‘mining’ to discuss how data is collected from people, the question remains: who benefits from the data collection? The use of data can paradoxically be harmful to the communities it is collected from. Trust is challenging due to the historical actions taken by data collectors while mining data from Indigenous populations. What barriers exist that prevent collected data from being of benefit to African people? We must address the entrenched legacies of power disparities and the challenges they present for modern data sharing. One problematic example is of a non-government organization (NGO) that tried to ‘fix’ problems for marginalized ethnic groups and ended up causing more harm than good. In this case, a European-based NGO planned to address the problem of access to clean potable water in Buranda while simultaneously testing new water-accessibility technology and online monitoring of resources. The NGO failed to understand the community’s perspective on the true central issues and potential harms. Sharing the data publicly, including geographic locations, put the community at risk, as collective privacy was violated. In the West, privacy is often thought of as a personal concern; however, collective identity is of great importance to a multitude of African and Indigenous communities. This introduced trust issues due to the disempowerment of local communities in the decision-making process.

Another case study, in Zambia, observed that up to 90% of health research funding comes from external funders, meaning the bargaining power leaves little room for negotiation for Zambian scholars. In the study, power imbalances were reported in everything from funding to agenda setting, data collection, analysis, interpretation, and reporting of results. This example further illustrates that trust cannot be formed on a foundation of such imbalances of power.

Due to this lack of trust, many researchers have run into hurdles when collecting data from marginalized communities. Many of these research projects lead with good intentions, yet there is a lack of forethought about the ethical use of data during and after the project, which can create unforeseen and irreparable harms to the wellbeing of communities. This creates a hostile environment for building relationships of respect and trust. To conclude this case study in data mining, we can pose the ethical question: is data sharing good and beneficial? First and foremost, local communities must be the primary beneficiaries of responsible data-sharing practices. It is important to specify who benefits from data sharing and to make sure that it does no harm to the people behind the data.

Contact Tracing for COVID-19 Case Study

Another complex example of ethics in AI can be seen in the use of contact tracing during the COVID-19 pandemic. Contact tracing can be centralized or non-centralized, which directly relates to top-down and bottom-up methods. The centralized approach is what was deployed in South Korea, where, by law and for the purposes of infectious disease control, the national authority is permitted to collect and use information on all COVID-19 patients and their contacts. In 2020, Germany and Israel tried and failed to adopt centralized approaches, due to a lack of exceptions for public health emergencies in their privacy laws. Getting past the legal barriers can be a lengthy and complex process, not conducive to applying a centralized contact-tracing system during an outbreak. Non-centralized approaches to contact tracing are essentially smartphone apps which track proximal coincidence using less invasive data collection methods. These approaches have thus been adopted by many countries; they do not face the same cultural and political obstacles as centralized approaches, avoiding legal pitfalls and legislative reform. Justin Fendos, a professor of cell biology at Dongseo University in Busan, South Korea, wrote that, in supporting the public health response to COVID-19, Korea had the political willingness to use technological tools to their full potential. The Korean government had collected massive amounts of transaction data to investigate tax fraud even before the COVID-19 outbreak. Korea’s government databases hold records of literally every credit card and bank transaction, and this information was repurposed during the outbreak to retroactively track individuals. In Korea, 95% of adults own a smartphone, and many use cashless tools everywhere they go, including on buses and subways. Hence, contact tracing in Korea was extremely effective.
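The non-centralized apps mentioned above typically work by exchanging ephemeral random tokens between nearby phones and matching them locally. The following is a simplified sketch of that idea, not the protocol of any particular country or vendor:

```python
# A minimal sketch of decentralized, proximity-based contact tracing:
# phones exchange ephemeral random tokens; matching against tokens later
# published by diagnosed users happens on the device. Simplified.

import secrets

class Phone:
    def __init__(self):
        self.my_tokens = set()     # tokens this phone has broadcast
        self.heard_tokens = set()  # tokens heard from nearby phones

    def broadcast(self):
        token = secrets.token_hex(8)  # ephemeral, unlinkable identifier
        self.my_tokens.add(token)
        return token

    def near(self, other):
        self.heard_tokens.add(other.broadcast())

    def check_exposure(self, published_positive_tokens):
        # Matching is local; no central authority sees the contact graph.
        return bool(self.heard_tokens & published_positive_tokens)

alice, bob = Phone(), Phone()
alice.near(bob)                             # a proximity event
print(alice.check_exposure(bob.my_tokens))  # True: exposure detected
```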

Public opinion about surveillance in Korea has been reported to be overwhelmingly positive. Fatalities in Korea due to COVID-19 were a third of the global average as of April 2020, when it was also said that Korea was one of the few countries to have successfully flattened the curve. Despite the success, there have been concerns regarding the level of personal detail released by health authorities, which have motivated updated surveillance guidelines for sensitive information. Turning to the other side of the planet, a very different picture can be painted. One study focused on the three Brazilian cities that had the most deaths from COVID-19 up to the first half of 2021 (São Paulo, Rio de Janeiro, and Manaus). The researchers provided a methodology for applying data mining as a public health management tool, including identifying variables of climate and air quality in relation to the number of COVID-19 cases and deaths, and they provided rules-based forecasting models of new COVID-19 cases and daily deaths in the three cities studied. However, the researchers noted that the counting of cases in Brazil was affected by high underreporting due to low testing, as well as technical and political problems; hence the study stated that cases may have been up to 12 times greater than investigations indicated. This shows us that the same technology cannot necessarily be scaled to work for all people in all places across the globe, and that individual care must be taken when looking for the best solutions for everyone.

Discussion

In the primary paper that this research builds on, titled Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches, the authors lead by stating: “Artificial morality shifts some of the burden for ethical behavior away from designers and users, and onto the computer systems themselves.” This is a questionable claim. Machines cannot ever be held responsible for what they learn from people. Machines do not have an inherent conscience or morality as humans do. Moreover, AI can act as a mirror, and the problems that arise in AI often reflect the problems we have in society. People need to assume responsibility, both as individuals and as a society at large. Corporations and governments need to cooperate, and individual programmers and technologists should continually question and evaluate these systems and their morality. In this way, we can use AI technology in an effort to improve society and create a more sustainable world for everyone.

The approach of moral uncertainty is intriguing because there is never one answer or solution to an ethical question, and admitting uncertainty leaves it open to continued questioning that can lead us to answers that may be complex and decentralized. This path could possibly create a system that can adapt to meet the ethical considerations of everyone involved. Ultimately, societal ethics need to be considered, as AI does not exist in a vacuum. A large consideration is technology in service of making money, primarily for big corporations, rather than for improving lives and the world. As long as this is the backbone driving AI and other new technologies, we cannot reach true ethics in this field. Given our tendency toward individualism over collectivism, who gets to decide which codes of ethics AI follows? If it is influenced by Big Tech, which is often the case, it will serve to support the ethics of a company, which generally has the primary goal of making money for that company. The value of profit over all else needs to shift.

“Big Tech has transformed ethics into a form of capital — a transactional object external to the organization, one of the many ‘things’ contemporary capitalists must tame and procure. . . By engaging in an economy of virtue, it was not the corporation that became more ethical, but rather ethics that became corporatised. That is, ethics was reduced to a form of capital — another industrial input to maintain a system of production, which tolerated change insofar as it aligned with existing structures of profit-making.” This reflects the case study of data mining in African communities, whose researchers set out to do good but were still working within old frameworks built around mining resources for personal gain, reproducing colonialism. Until we can break free from these harmful systems, efforts to build ethical AI will either continue to be co-opted and re-capitalized or, possibly, find a way to create brand new systems in which AI can truly be ethical, creating a world where other worlds are possible.

To leave us with a final thought: “Ethical issues are never solved, they are navigated and negotiated as part of the work of ethics owners.”

Conclusion

We have explored ethics in AI implementation in three ways: theoretically, technically, and politically, as described through top-down, bottom-up, and hybrid frameworks. Within this paper, we reviewed reinforcement learning as a bottom-up example and principles of AI ethics as a top-down example. The concept of fairness as a key ethical value for AI was discussed throughout. Case studies were reviewed to exemplify just how complex and variable ethics in AI can be in different cultures and at different times. The conclusion is that ethics in AI needs much more research and work, and needs to be considered from multiple angles while being continuously monitored for unforeseen side effects and consequences. Furthermore, societal ethics need to be accounted for. Our hope is that, at the least, those who intend to build and deploy ethical AI systems will consider all angles and blind spots, including who might be marginalized or harmed by the technology, especially when it aims to help. By continuing to work on the seemingly impossible task of creating ethical AI, we will radiate this out to society, and ethics will become more and more a power in itself, with wider implications for the betterment of all.

Bibliography

@misc{Williams2021ItsCraze,
  month = {7},
  author = {Williams, Joe},
  booktitle = {www.protocol.com/},
  year = {2021},
  title = {{'It’s almost like software equals AI': Inside tech’s latest funding craze}},
}

@article{Suresh2022AModel,
  issn = {2722-2578},
  doi = {10.11591/ijece.v12i2.pp1831-1838},
  volume = {12},
  pages = {1831},
  month = {4},
  number = {2},
  author = {Suresh, Tamilarasi and Assegie, Tsehay Admassu and Rajkumar, Subhashni and Komal Kumar, Napa},
  journal = {International Journal of Electrical and Computer Engineering (IJECE)},
  year = {2022},
  title = {{A hybrid approach to medical decision-making: diagnosis of heart disease with machine-learning model}},
}

@incollection{vanRysewyk2015AData,
  doi = {10.1007/978-3-319-08108-3{\_}7},
  pages = {93--110},
  author = {van Rysewyk, Simon Peter and Pontier, Matthijs},
  year = {2015},
  title = {{A Hybrid Bottom-Up and Top-Down Approach to Machine Medical Ethics: Theory and Data}},
}

@misc{Galindo-Rueda2021AAI-related,
  month = {7},
  author = {Galindo-Rueda, Fernando and Cairns, Stephanie},
  booktitle = {oecd.ai},
  year = {2021},
  title = {{A new approach to measuring government investment in AI-related}},
}

@misc{2022ACMConference,
  month = {1},
  booktitle = {https://facctconference.org/},
  year = {2022},
  title = {{ACM FAccT Conference}},
}

@misc{Mehta2021AITime,
  month = {12},
  author = {Mehta, Bijal and Mousavizadeh, Alexandra and Darrah, Kim},
  booktitle = {www.tortoisemedia.com},
  year = {2021},
  title = {{AI Boom Time}},
}

@misc{2022AIInstitute,
  month = {1},
  booktitle = {https://futureoflife.org/2017/08/11/ai-principles/},
  year = {2022},
  title = {{AI Principles - Future of Life Institute}},
}

@misc{Garychl2018ApplicationsWorld,
  month = {8},
  author = {{Garychl}},
  booktitle = {towardsdatascience.com},
  year = {2018},
  title = {{Applications of Reinforcement Learning in Real World}},
}

@article{Allen2005ArtificialApproaches,
  issn = {1388-1957},
  doi = {10.1007/s10676-006-0004-4},
  volume = {7},
  pages = {149--155},
  month = {9},
  number = {3},
  author = {Allen, Colin and Smit, Iva and Wallach, Wendell},
  journal = {Ethics and Information Technology},
  year = {2005},
  title = {{Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches}},
}

@misc{2022BerkmanLibrary,
  month = {1},
  booktitle = {https://wilkins.law.harvard.edu/events/Misc/2018-10-04{\_}networkpropaganda/},
  year = {2022},
  title = {{Berkman Klein Center Media Library}},
}

@inproceedings{Shmueli2021BeyondCrowdsourcing,
  doi = {10.18653/v1/2021.naacl-main.295},
  address = {Stroudsburg, PA, USA},
  publisher = {Association for Computational Linguistics},
  pages = {3758--3769},
  author = {Shmueli, Boaz and Fell, Jan and Ray, Soumya and Ku, Lun-Wei},
  booktitle = {Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
  year = {2021},
  title = {{Beyond Fair Pay: Ethical Implications of NLP Crowdsourcing}},
}

@article{Maxmen2018CanLives,
  month = {5},
  author = {Maxmen, Amy},
  journal = {Nature News},
  year = {2018},
  title = {{Can tracking people through phone-call data improve lives?}},
}

@article{Barcellos2021DataMetropolis,
  issn = {2045-2322},
  doi = {10.1038/s41598-021-04029-6},
  volume = {11},
  pages = {24491},
  month = {12},
  number = {1},
  author = {Barcellos, Demian da Silveira and Fernandes, Giovane Matheus Kayser and de Souza, Fábio Teodoro},
  journal = {Scientific Reports},
  year = {2021},
  title = {{Data based model for predicting COVID-19 morbidity and mortality in metropolis}},
}

@article{Phan2021EconomiesTech,
  issn = {0950-5431},
  doi = {10.1080/09505431.2021.1990875},
  pages = {1--15},
  month = {11},
  author = {Phan, Thao and Goldenfein, Jake and Mann, Monique and Kuch, Declan},
  journal = {Science as Culture},
  year = {2021},
  title = {{Economies of Virtue: The Circulation of ‘Ethics’ in Big Tech}},
}

@article{Martin2019EthicalAlgorithms,
  issn = {0167-4544},
  doi = {10.1007/s10551-018-3921-3},
  volume = {160},
  pages = {835--850},
  month = {12},
  number = {4},
  author = {Martin, Kirsten},
  journal = {Journal of Business Ethics},
  year = {2019},
  title = {{Ethical Implications and Accountability of Algorithms}},
}

@unpublished{ExperimentalismDocs.,
  institution = {University of Sussex Open Data Institute (ODI)},
  title = {{Experimentalism and the Fourth Industrial Revolution {\#}OPEN Roundtable Summary Note: Experimentalism - Le Guin Part 2. Google Docs.}},
}

@misc{Rainie2021ExpertsDecade,
  month = {6},
  author = {Rainie, Lee and Anderson, Janna and Vogels, Emily A.},
  booktitle = {Pew Research Center: Internet, Science {\&} Tech},
  year = {2021},
  title = {{Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade}},
}

@techreport{Phillips2021FourIntelligence,
  doi = {10.6028/NIST.IR.8312},
  address = {Gaithersburg, MD},
  institution = {National Institute of Standards and Technology},
  month = {9},
  author = {Phillips, P. Jonathon and Hahn, Carina A. and Fontana, Peter C. and Yates, Amy N. and Greene, Kristen and Broniatowski, David A. and Przybocki, Mark A.},
  year = {2021},
  title = {{Four Principles of Explainable Artificial Intelligence}},
}

@inproceedings{Montoya2019GovernmentCaribbean,
  doi = {10.1109/ISTAS48451.2019.8937869},
  isbn = {978-1-7281-5480-0},
  publisher = {IEEE},
  pages = {1--8},
  month = {11},
  author = {Montoya, Laura and Rivas, Pablo},
  booktitle = {2019 IEEE International Symposium on Technology and Society (ISTAS)},
  year = {2019},
  title = {{Government AI Readiness Meta-Analysis for Latin America And The Caribbean}},
}

@article{Bezuidenhout2018HiddenScientists,
  issn = {1128-7462},
  doi = {10.1080/11287462.2018.1441780},
  volume = {29},
  pages = {39--54},
  month = {1},
  number = {1},
  author = {Bezuidenhout, Louise and Chakauya, Ereck},
  journal = {Global Bioethics},
  year = {2018},
  title = {{Hidden concerns of sharing research data by low/middle-income country scientists}},
}

@article{Fendos2020HowResponse,
  month = {4},
  author = {Fendos, Justin},
  journal = {Brookings Tech Stream},
  year = {2020},
  title = {{How surveillance technology powered South Korea's COVID-19 response}},
}

@article{Etzioni2017IncorporatingIntelligence,
  issn = {1382-4554},
  doi = {10.1007/s10892-017-9252-2},
  volume = {21},
  pages = {403--418},
  month = {12},
  number = {4},
  author = {Etzioni, Amitai and Etzioni, Oren},
  journal = {The Journal of Ethics},
  year = {2017},
  title = {{Incorporating Ethics into Artificial Intelligence}},
}

@article{Hanson2012IndigenousMethodologies,
  issn = {1837-0144},
  doi = {10.5204/ijcis.v5i1.97},
  volume = {5},
  pages = {93--95},
  month = {1},
  number = {1},
  author = {Hanson, Cindy},
  journal = {International Journal of Critical Indigenous Studies},
  year = {2012},
  title = {{Indigenous Research Methodologies}},
}

@inproceedings{Rodriguez2019IntroducingLearning,
  publisher = {https://spidercenter.org/wp-content/blogs.dir/437/files/2019/05/RAIA2019{\_}paper{\_}7.pdf},
  author = {Rodriguez, Manel and Lopez-Sanchez, Maite and Rodriguez-Aguilar, Juan Antonio},
  booktitle = {Responsible Artificial Intelligence Agents 2019},
  year = {2019},
  title = {{Introducing Ethical Reinforcement Learning}},
}

@misc{Gonfalonieri2018InverseLearning,
  month = {12},
  author = {Gonfalonieri, Alexandre},
  booktitle = {https://towardsdatascience.com/},
  year = {2018},
  title = {{Inverse reinforcement learning}},
}

@article{Batuo2018LinkagesAfrica,
  issn = {02755319},
  doi = {10.1016/j.ribaf.2017.07.148},
  volume = {45},
  pages = {168--179},
  month = {10},
  author = {Batuo, Michael and Mlambo, Kupukile and Asongu, Simplice},
  journal = {Research in International Business and Finance},
  year = {2018},
  title = {{Linkages between financial development, financial instability, financial liberalisation and economic growth in Africa}},
}

@article{Wallach2008MachineFaculties,
  issn = {0951-5666},
  doi = {10.1007/s00146-007-0099-0},
  volume = {22},
  pages = {565--582},
  month = {4},
  number = {4},
  author = {Wallach, Wendell and Allen, Colin and Smit, Iva},
  journal = {AI {\&} SOCIETY},
  year = {2008},
  title = {{Machine Morality: Bottom-up and Top-down Approaches for Modelling Human Moral Faculties}},
}
 80
 81@misc{2022BerkmanLibrary,
 82month = {1},
 83booktitle = {https://wilkins.law.harvard.edu/events/Misc/2018-10-04{\_}networkpropaganda/},
 84year = {2022},
 85title = {{Berkman Klein Center Media Library}},
 86}
 87
 88@inproceedings{Shmueli2021BeyondCrowdsourcing,
 89doi = {10.18653/v1/2021.naacl-main.295},
 90address = {Stroudsburg, PA, USA},
 91publisher = {Association for Computational Linguistics},
 92pages = {3758--3769},
 93author = {Shmueli, Boaz and Fell, Jan and Ray, Soumya and Ku, Lun-Wei},
 94booktitle = {Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
 95year = {2021},
 96title = {{Beyond Fair Pay: Ethical Implications of NLP Crowdsourcing}},
 97}
 98
 99@article{Maxmen2018CanLives,
100month = {5},
101author = {Maxmen, Amy},
102journal = {Nature News},
103year = {2018},
104title = {{Can tracking people through phone-call data improve lives?}},
105}
106
107@article{Barcellos2021DataMetropolis,
108issn = {2045-2322},
109doi = {10.1038/s41598-021-04029-6},
110volume = {11},
111pages = {24491},
112month = {12},
113number = {1},
114author = {Barcellos, Demian da Silveira and Fernandes, Giovane Matheus Kayser and de Souza, Fábio Teodoro},
115journal = {Scientific Reports},
116year = {2021},
117title = {{Data based model for predicting COVID-19 morbidity and mortality in metropolis}},
118}
119
120@article{Phan2021EconomiesTech,
121issn = {0950-5431},
122doi = {10.1080/09505431.2021.1990875},
123pages = {1--15},
124month = {11},
125author = {Phan, Thao and Goldenfein, Jake and Mann, Monique and Kuch, Declan},
126journal = {Science as Culture},
127year = {2021},
128title = {{Economies of Virtue: The Circulation of ‘Ethics’ in Big Tech}},
129}
130
131@article{Martin2019EthicalAlgorithms,
132issn = {0167-4544},
133doi = {10.1007/s10551-018-3921-3},
134volume = {160},
135pages = {835--850},
136month = {12},
137number = {4},
138author = {Martin, Kirsten},
139journal = {Journal of Business Ethics},
140year = {2019},
141title = {{Ethical Implications and Accountability of Algorithms}},
142}
143
144@unpublished{ExperimentalismDocs.,
145institution = {University of Sussex Open Data Institute (ODI)},
146title = {{Experimentalism and the Fourth Industrial Revolution {\#}OPEN Roundtable Summary Note: Experimentalism - Le Guin Part 2. Google Docs.}},
147}
148
149@misc{Rainie2021ExpertsDecade,
150month = {6},
151author = {Rainie, Lee and Anderson, Janna and Vogels, Emily A.},
152booktitle = {Pew Research Center: Internet, Science {\&} Tech},
153year = {2021},
154title = {{Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade}},
155}
156
157@article{Rainie2021ExpertsDecade.,
158month = {6},
159author = {Rainie, Lee and Anderson, Janna and Vogels, Emily},
160journal = {Pew Research Center: Internet, Science {\&} Tech},
161year = {2021},
162title = {{Experts doubt ethical AI design will be broadly adopted as the norm within the next decade. }},
163}
164
165@techreport{Phillips2021FourIntelligence,
166doi = {10.6028/NIST.IR.8312},
167address = {Gaithersburg, MD},
168institution = {National Institute of Standards and Technology},
169month = {9},
170author = {Phillips, P. Jonathon and Hahn, Carina A. and Fontana, Peter C. and Yates, Amy N. and Greene, Kristen and Broniatowski, David A. and Przybocki, Mark A.},
171year = {2021},
172title = {{Four Principles of Explainable Artificial Intelligence}},
173}
174
175@inproceedings{Montoya2019GovernmentCaribbean,
176doi = {10.1109/ISTAS48451.2019.8937869},
177isbn = {978-1-7281-5480-0},
178publisher = {IEEE},
179pages = {1--8},
180month = {11},
181author = {Montoya, Laura and Rivas, Pablo},
182booktitle = {2019 IEEE International Symposium on Technology and Society (ISTAS)},
183year = {2019},
184title = {{Government AI Readiness Meta-Analysis for Latin America And The Caribbean}},
185}
186
187@article{Bezuidenhout2018HiddenScientists,
188issn = {1128-7462},
189doi = {10.1080/11287462.2018.1441780},
190volume = {29},
191pages = {39--54},
192month = {1},
193number = {1},
194author = {Bezuidenhout, Louise and Chakauya, Ereck},
195journal = {Global Bioethics},
196year = {2018},
197title = {{Hidden concerns of sharing research data by low/middle-income country scientists}},
198}
199
200@article{Fendos2020HowResponse,
201month = {4},
202author = {Fendos, Justin},
203journal = {Brookings Tech Stream},
204year = {2020},
205title = {{How surveillance technology powered South Korea's COVID-19 response}},
206}
207
208@article{Etzioni2017IncorporatingIntelligence,
209issn = {1382-4554},
210doi = {10.1007/s10892-017-9252-2},
211volume = {21},
212pages = {403--418},
213month = {12},
214number = {4},
215author = {Etzioni, Amitai and Etzioni, Oren},
216journal = {The Journal of Ethics},
217year = {2017},
218title = {{Incorporating Ethics into Artificial Intelligence}},
219}
220
221@article{Hanson2012IndigenousMethodologies,
222issn = {1837-0144},
223doi = {10.5204/ijcis.v5i1.97},
224volume = {5},
225pages = {93--95},
226month = {1},
227number = {1},
228author = {Hanson, Cindy},
229journal = {International Journal of Critical Indigenous Studies},
230year = {2012},
231title = {{Indigenous Research Methodologies}},
232}
233
234@inproceedings{Rodriguez2019IntroducingLearning,
235publisher = {https://spidercenter.org/wp-content/blogs.dir/437/files/2019/05/RAIA2019{\_}paper{\_}7.pdf},
236author = {Rodriguez, Manel and Lopez-Sanchez, Maite and Rodriguez-Aguilar, Juan Antonio},
237booktitle = {Responsible Artificial Intelligence Agents 2019},
238year = {2019},
239title = {{Introducing Ethical Reinforcement Learning}},
240}
241
242@misc{Gonfalonieri2018InverseLearning,
243month = {12},
244author = {Gonfalonieri, Alexandre},
245booktitle = {https://towardsdatascience.com/},
246year = {2018},
247title = {{Inverse reinforcement learning}},
248}
249
250@article{Batuo2018LinkagesAfrica,
251issn = {02755319},
252doi = {10.1016/j.ribaf.2017.07.148},
253volume = {45},
254pages = {168--179},
255month = {10},
256author = {Batuo, Michael and Mlambo, Kupukile and Asongu, Simplice},
257journal = {Research in International Business and Finance},
258year = {2018},
259title = {{Linkages between financial development, financial instability, financial liberalisation and economic growth in Africa}},
260}
261
262@article{Wallach2008MachineFaculties,
263issn = {0951-5666},
264doi = {10.1007/s00146-007-0099-0},
265volume = {22},
266pages = {565--582},
267month = {4},
268number = {4},
269author = {Wallach, Wendell and Allen, Colin and Smit, Iva},
270journal = {AI {\&} SOCIETY},
271year = {2008},
272title = {{Machine morality: bottom-up and top-down approaches for modelling human moral faculties}},
273}
274
275@misc{Merriam-Webster2022Merriam-Webster,
276author = {{Merriam-Webster}},
277booktitle = {Merriam-Webster},
278year = {2022},
279title = {{Merriam-Webster}},
280}
281
282@inproceedings{Abebe2021NarrativesAfrica,
283doi = {10.1145/3442188.3445897},
284author = {Abebe, Rediet and Aruleba, Kehinde and Birhane, Abeba and Kingsley, Sara and Obaido, George and Remy, Sekou L. and Sadagopan, Swathi},
285booktitle = {FAccT 2021 - Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency},
286year = {2021},
287title = {{Narratives and counternarratives on data sharing in Africa}},
288}
289
290@misc{Constantouros2020OverviewFunding,
291month = {5},
292author = {Constantouros, Jason},
293booktitle = {lao.ca.gov},
294year = {2020},
295title = {{Overview of Federal COVID-19 Research Funding}},
296}
297
298@article{Fendos2020PARTReplicate,
299month = {10},
300author = {Fendos, Justin},
301journal = {Georgetown Journal of International Affairs},
302year = {2020},
303title = {{PART I: COVID-19 Contact Tracing: Why South Korea’s Success is Hard to Replicate}},
304}
305
306@article{Fjeld2020PrincipledAI,
307issn = {1556-5068},
308doi = {10.2139/ssrn.3518482},
309author = {Fjeld, Jessica and Achten, Nele and Hilligoss, Hannah and Nagy, Adam and Srikumar, Madhulika},
310journal = {SSRN Electronic Journal},
311year = {2020},
312title = {{Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI}},
313}
314
315@misc{Bhatt2018Reinforcement101,
316month = {3},
317author = {Bhatt, Shweta},
318booktitle = {https://towardsdatascience.com/reinforcement-learning-101-e24b50e1d292},
319year = {2018},
320title = {{Reinforcement learning 101}},
321}
322
323@inproceedings{Abel2016ReinforcementMaking,
324author = {Abel, David and MacGlashan, James and Littman, Michael L.},
325booktitle = {The Workshops of the Thirtieth AAAI Conference on Artificial Intelligence AI, Ethics, and Society: Technical Report},
326year = {2016},
327title = {{Reinforcement Learning as a Framework for Ethical Decision Making}},
328}
329
330@article{Ecoffet2020ReinforcementUncertainty,
331arxivid = {2006.04734},
332month = {6},
333author = {Ecoffet, Adrien and Lehman, Joel},
334year = {2020},
335title = {{Reinforcement Learning Under Moral Uncertainty}},
336}
337
338@article{Najar2021ReinforcementSurvey,
339issn = {2296-9144},
340doi = {10.3389/frobt.2021.584075},
341volume = {8},
342month = {6},
343author = {Najar, Anis and Chetouani, Mohamed},
344journal = {Frontiers in Robotics and AI},
345year = {2021},
346title = {{Reinforcement Learning With Human Advice: A Survey}},
347}
348
349@book{Sutton2018ReinforcementEdition,
350publisher = {MIT Press},
351edition = {2},
352author = {Sutton, Richard and Barto, Andrew},
353year = {2018},
354title = {{Reinforcement Learning, Second Edition}},
355}
356
357@misc{Vachnadze2021ReinforcementMachines.,
358month = {2},
359author = {Vachnadze, Giorgi},
360booktitle = {Medium.com},
361year = {2021},
362title = {{Reinforcement learning: Bottom-up programming for ethical machines.}},
363}
364
365@article{Rahwan2018Society-in-the-loop:Contract,
366issn = {1388-1957},
367doi = {10.1007/s10676-017-9430-8},
368volume = {20},
369pages = {5--14},
370month = {3},
371number = {1},
372author = {Rahwan, Iyad},
373journal = {Ethics and Information Technology},
374year = {2018},
375title = {{Society-in-the-loop: programming the algorithmic social contract}},
376}
377
378@inproceedings{Noothigattu2019TeachingOrchestration,
379doi = {10.24963/ijcai.2019/891},
380isbn = {978-0-9992411-4-1},
381address = {California},
382publisher = {International Joint Conferences on Artificial Intelligence Organization},
383pages = {6377--6381},
384month = {8},
385author = {Noothigattu, Ritesh and Bouneffouf, Djallel and Mattei, Nicholas and Chandra, Rachita and Madan, Piyush and Varshney, Kush R. and Campbell, Murray and Singh, Moninder and Rossi, Francesca},
386booktitle = {Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence},
387year = {2019},
388title = {{Teaching AI Agents Ethical Values Using Reinforcement Learning and Policy Orchestration}},
389}
390
391@misc{TheInstituteforEthicalAiMachineLearning2021TheSystems.,
392month = {12},
393author = {{The Institute for Ethical Ai {\&} Machine Learning}},
394booktitle = {https://ethical.institute/principles.html},
395year = {2021},
396title = {{The 8 principles for responsible development of AI {\&} Machine Learning systems.}},
397}
398
399@book{Couldry2019TheCapitalism,
400address = {Stanford},
401publisher = {Stanford University Press},
402author = {Couldry, Nick and Mejias, Ulises A.},
403year = {2019},
404title = {{The Costs of Connection How Data Is Colonizing Human Life and Appropriating It for Capitalism}},
405}
406
407@misc{Moss2019TheCompanies.,
408month = {11},
409author = {Moss, Emmanuel and Metcalf, Jacob},
410booktitle = {Harvard Business Review},
411year = {2019},
412title = {{The ethical dilemma at the heart of Big Tech Companies.}},
413}
414
415@misc{2022TheSystems,
416month = {1},
417booktitle = {https://standards.ieee.org/},
418year = {2022},
419title = {{The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems}},
420}
421
422@article{Awad2018TheExperiment,
423issn = {0028-0836},
424doi = {10.1038/s41586-018-0637-6},
425volume = {563},
426pages = {59--64},
427month = {11},
428number = {7729},
429author = {Awad, Edmond and Dsouza, Sohan and Kim, Richard and Schulz, Jonathan and Henrich, Joseph and Shariff, Azim and Bonnefon, Jean-François and Rahwan, Iyad},
430journal = {Nature},
431year = {2018},
432title = {{The Moral Machine experiment}},
433}
434
435@article{Awad2018TheExperimentb,
436issn = {0028-0836},
437doi = {10.1038/s41586-018-0637-6},
438volume = {563},
439pages = {59--64},
440month = {11},
441number = {7729},
442author = {Awad, Edmond and Dsouza, Sohan and Kim, Richard and Schulz, Jonathan and Henrich, Joseph and Shariff, Azim and Bonnefon, Jean-François and Rahwan, Iyad},
443journal = {Nature},
444year = {2018},
445title = {{The Moral Machine experiment}},
446}
447
448@inproceedings{Whittlestone2019TheEthics,
449doi = {10.1145/3306618.3314289},
450isbn = {9781450363242},
451address = {New York, NY, USA},
452publisher = {ACM},
453pages = {195--200},
454month = {1},
455author = {Whittlestone, Jess and Nyrup, Rune and Alexandrova, Anna and Cave, Stephen},
456booktitle = {Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society},
457year = {2019},
458title = {{The Role and Limits of Principles in AI Ethics}},
459}
460
461@misc{Eckart2020Top-downAI,
462month = {2},
463author = {Eckart, Peter},
464booktitle = {10eqs.com},
465year = {2020},
466title = {{Top-down AI: The Simpler, Data-Efficient AI}},
467}
468
469@book{Boddington2017TowardsIntelligence,
470doi = {10.1007/978-3-319-60648-4},
471isbn = {978-3-319-60647-7},
472address = {Cham},
473publisher = {Springer International Publishing},
474author = {Boddington, Paula},
475year = {2017},
476title = {{Towards a Code of Ethics for Artificial Intelligence}},
477}
478
479@book{ONeil2016WeaponsDemocracy,
480publisher = {Crown Publishing Group},
481author = {O’Neil, Cathy},
482year = {2016},
483title = {{Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy}},
484}
485
486@inproceedings{Raza2020WebInference,
487doi = {10.1145/3318464.3380608},
488isbn = {9781450367356},
489address = {New York, NY, USA},
490publisher = {ACM},
491pages = {1967--1978},
492month = {6},
493author = {Raza, Mohammad and Gulwani, Sumit},
494booktitle = {Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data},
495year = {2020},
496title = {{Web Data Extraction using Hybrid Program Synthesis: A Combination of Top-down and Bottom-up Inference}},
497}
498
499@incollection{Bragg2018WhatLearning,
500doi = {10.1007/978-3-319-99229-7{\_}35},
501pages = {418--430},
502author = {Bragg, John and Habli, Ibrahim},
503year = {2018},
504title = {{What Is Acceptably Safe for Reinforcement Learning?}},
505}
506
507@article{SAGAR2021WhatAI,
508month = {7},
509author = {SAGAR, RITIKA},
510journal = {Analytics India Magazine},
511year = {2021},
512title = {{What is Hybrid AI?}},
513}
514
515@misc{Hooker2018WhyPrecision.,
516month = {7},
517author = {Hooker, Sara},
518booktitle = {https://towardsdatascience.com},
519year = {2018},
520title = {{Why “data for good” lacks precision.}},
521}
522
523@article{Walsh2016TheResearch,
524issn = {1475-9276},
525doi = {10.1186/s12939-016-0488-4},
526volume = {15},
527pages = {204},
528month = {12},
529number = {1},
530author = {Walsh, Aisling and Brugha, Ruairi and Byrne, Elaine},
531journal = {International Journal for Equity in Health},
532year = {2016},
533title = {{“The way the country has been carved up by researchers”: ethics and power in north–south public health research}},
534}
535
536@article{AnaneSarpong2018YouDatasharing,
537issn = {1471-8731},
538doi = {10.1111/dewb.12159},
539volume = {18},
540pages = {394--405},
541month = {12},
542number = {4},
543author = {Anane‐Sarpong, Evelyn and Wangmo, Tenzin and Ward, Claire Leonie and Sankoh, Osman and Tanner, Marcel and Elger, Bernice Simone},
544journal = {Developing World Bioethics},
545year = {2018},
546title = {{“You cannot collect data using your own resources and put It on open access”: Perspectives from Africa about public health data‐sharing}},
547}

Attribution

arXiv:2204.07612v2 [cs.AI]
License: cc-by-nc-sa-4.0
