Introduction
Artificial Intelligence (AI) is becoming indispensable across a vast array of industries, including health, manufacturing, agriculture, and banking. AI technologies have the potential to substantially transform society and offer various technical and societal benefits, chiefly through gains in productivity and efficiency. In line with this, the ethical guidelines presented by the independent High-Level Expert Group on Artificial Intelligence (AI HLEG) highlight that:
“AI is not an end in itself, but rather a promising means to increase human flourishing, thereby enhancing individual and societal well-being and the common good, as well as bringing progress and innovation.”
However, the promising advantages of AI technologies have been tempered by worries that these complex and opaque systems might bring more social harm than benefit. People have started thinking beyond the operational capabilities of AI technologies and investigating the ethical aspects of developing powerful and potentially life-consequential technologies. For example, the US government and many private companies will not rely on decision-making systems in health, criminal justice, employment, and creditworthiness without ensuring that these systems are not coded intentionally or unintentionally with structural biases.
Concomitant with advances in AI systems, we are witnessing ethical failure scenarios. For example, a high rate of unsuccessful job applications processed by the Amazon recruitment system was later traced to selection criteria biased against women applicants, triggering discrimination concerns. Since decisions and recommendations made by AI systems may deeply affect people's lives, the need for pertinent policies and principles addressing the ethical aspects of AI systems is crucial. Otherwise, the harms caused by AI systems will jeopardize people's control, safety, livelihood, and rights. AI systems are not only a technical effort; they also need to consider social, political, legal, and intellectual aspects. However, the current state of AI ethics is broadly unknown to the public, practitioners, policymakers, and lawmakers.
Broadly, an ethically aligned AI system should meet the following three components throughout its entire life cycle: 1) compliance with all applicable laws and regulations, 2) adherence to ethical principles and values, and 3) technical and social robustness. To the best of our knowledge, there is a dearth of empirical studies uncovering these core components from the viewpoint of industrial practitioners and lawmakers. For instance, as will be elaborated in Section 7, Vakkuri et al. conducted a survey study to determine industrial perceptions based on only four AI ethics principles. Lu et al. conducted interviews with researchers and practitioners to understand the implications of AI ethics principles and the motivation for rooting these principles in design practices. Similarly, Leikas et al. mainly focused on AI ethics guidelines. Given the lack of empirical studies exploring the principles and challenges associated with AI ethics, we strive to answer the following research questions:
RQ1: What are the practitioners' and lawmakers' insights on AI ethics principles and challenges?
Rationale: RQ1 aims to distill the perceptions of practitioners and lawmakers in order to empirically evaluate the AI ethics principles and challenges identified in our systematic literature review (SLR) study. The answer to RQ1 provides a better understanding of the most common AI ethics principles and challenges from the practitioners' and lawmakers' points of view.
RQ2: What would be the severity impacts of the identified challenges across the AI ethics principles?
Rationale: RQ2 aims to measure the severity impacts of the challenging factors across the AI ethics principles. The answer to RQ2 would inform practitioners of the most severe challenges before they initiate AI ethics activities.
RQ3: How are these challenges and principles perceived differently by practitioners and lawmakers?
Rationale: The empirical data were collected from two populations (practitioners and lawmakers). The answer to RQ3 would portray a better understanding of the significant differences between the opinions of the targeted populations regarding AI ethics principles and challenges.
To address these RQs, we conducted a survey study encapsulating the views and opinions of practitioners and lawmakers regarding AI ethics principles and challenges, collecting data from 99 respondents across 20 different countries.
Background
Generally, AI ethics is classified under the umbrella of applied ethics, which mainly concerns the ethical issues associated with developing and using AI systems. It focuses on how an AI system could raise worries related to human autonomy, freedom in a democratic society, and quality of life. Ethical reflection across AI technologies could serve multiple societal purposes. For instance, it can stimulate a focus on innovations that foster ethical values and bring collective well-being. Ethically aligned or trustworthy AI technologies can foster sustainable well-being in society by bringing prosperity, wealth maximization, and value creation.
It is vital to understand the development, deployment, and use of AI technologies to ensure that everyone can build a better future and live a thriving life in an AI-based world. However, the increasing popularity of AI systems has raised concerns about the reliability and impartiality of their decision-making. We need to make sure that the decision-making support provided by AI technologies follows an accountable process, so that their actions remain ethically aligned with human values that must not be compromised.
In this regard, different organizations and technology giants have formed committees to draft AI ethics guidelines. Google and SAP presented guidelines and policies for developing ethically aligned AI systems. Similarly, the Association for Computing Machinery (ACM), Access Now, and Amnesty International jointly proposed principles and guidelines for developing ethically mature AI systems. In Europe, the AI HLEG guidelines were developed to promote trustworthy AI. The Ethically Aligned Design (EAD) guidelines, presented by IEEE, consist of a set of principles and recommendations that focus on the technical and ethical values of AI systems. In addition, the joint ISO/IEC international standards committee proposed the ISO/IEC JTC 1/SC 42 standard, which covers the entire AI ecosystem, including trustworthiness, computational approaches, governance, standardization, and social concerns.
However, various researchers claim that the extant AI ethics guidelines and principles are not effectively adopted in industrial settings. McNamara et al. conducted an empirical study to understand the influence of the ACM code of ethics on the software engineering decision-making process. Surprisingly, the findings reveal no evidence that the ACM code of ethics regulates decision-making activities. Vakkuri et al. conducted multiple interviews to assess the status of ethical practices in the AI industry. Their findings uncover the fact that various guidelines are available; however, their deployment in industrial domains is far from mature. The gap between AI ethics research and practice remains an ongoing challenge. To bridge this gap, we previously conducted an SLR study providing a comprehensive, state-of-the-art overview of AI ethics principles and challenges. The present study extends the SLR findings with empirical insights into the significance of AI ethics principles and challenges and their impact, encapsulating the views of AI practitioners and lawmakers.
Methodology
We deemed two groups of research participants relevant to this survey: AI practitioners and lawmakers. On one hand, practitioners often make the design decisions of complex autonomous systems, frequently with limited ethical knowledge, and thus carry higher ethical responsibilities than others. The magnitude of risks in AI systems makes practitioners responsible for understanding ethical attributes. To achieve reliable outcomes, it is essential to know practitioners' understanding of AI ethics principles and challenges.
On the other hand, law resolves everyday conflicts and sustains order in social life. People consider law an information source, as it shapes social norms and values. The aim of including this population (lawmakers) is to understand how the law applies to AI ethics. The data collected from legislation personnel address the question of whether existing AI ethics principles are sufficient or whether new standards are needed.
We used industrial collaboration contacts to identify AI practitioners and sent them a formal invitation to participate in this survey. Moreover, various law forums across the world were contacted and requested to participate in this study. The targeted populations were approached using social media networks, including LinkedIn, WeChat, ResearchGate, and Facebook, and via personal email addresses. An overview of the research methodology is depicted in Figure fig-research-methodology.
[Figure fig-research-methodology: Overview of the research methodology]
The survey instrument consisted of four core sections: 1) demographics, 2) AI ethics principles, 3) challenges, and 4) the impact of the challenges on the principles. The questionnaire also included open-ended questions to capture novel principles and challenges not identified in the SLR study. A Likert scale was used to evaluate the significance of each principle and challenge and to assess the severity level of the challenging factors. The survey instrument was prepared in both English and Chinese. The software industry in China is flourishing like never before, with AI taking the front seat, and the country is home to some of the world's leading technology giants, such as Huawei, Alibaba, Baidu, Tencent, and Xiaomi. However, collecting data from the Chinese industry can be challenging because of language barriers: Mandarin is the national and official language in China, unlike in India, where English is commonly used for official purposes. Therefore, a Chinese version of the survey instrument was developed to cover a major portion of the targeted population. Both the English and Chinese versions of the survey instrument are available online for replication.
The questionnaire was piloted with three external subject/domain experts. Their suggestions mainly concerned the overall design and the understandability of the survey questions. The suggested changes were incorporated, and the survey instrument was finalized based on the authors' consensus (see Figure fig-research-methodology). The final survey instrument was deployed online using Google Forms (English version) and Tencent Questionnaire (Chinese version). The first two authors engaged in the data collection process, while the remaining co-authors frequently monitored and screened the participants' responses. Data collection started in September 2021 and ended in April 2022, with 107 initial responses. It should be noted that we provided consent details in the information sheet of the survey questionnaire and only considered the responses of participants who agreed.
A manual review revealed that eight responses were incomplete, so we considered only 99 responses in the final data analysis. The third author mainly extracted and analysed the survey data. The descriptive data were analyzed using the frequency analysis approach, and frequency-based tables and charts were created for the identified AI ethics principles and challenges (see Section sec-results). Frequency analysis is well suited to analyzing a group of variables and handles both numeric and ordinal data. The significance of the identified AI ethics principles and challenges was evaluated based on the level of agreement between the two populations (AI practitioners and lawmakers) (see Section sec-statistical-inferences-rq3). The same data analysis approach has been used in other studies of a similar nature. Finally, Zoom consensus meetings were held with all the authors to review the study results and provide feedback. The study replication package is provided online.
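For illustration, the sketch below shows how such a frequency analysis of Likert responses can be computed with pandas. The principle names and responses are hypothetical placeholders, not the actual survey data.

```python
import pandas as pd

# Hypothetical Likert responses: each row is one respondent, each column
# one AI ethics principle (the real survey covers more of both).
responses = pd.DataFrame({
    "transparency":   ["strongly agree", "agree", "neutral", "agree"],
    "accountability": ["agree", "strongly agree", "disagree", "agree"],
})

likert_levels = ["strongly disagree", "disagree", "neutral",
                 "agree", "strongly agree"]

# Frequency table per principle: percentage of each Likert level.
for principle in responses.columns:
    counts = (responses[principle]
              .value_counts()
              .reindex(likert_levels, fill_value=0))
    share = (100 * counts / counts.sum()).round(1)
    print(principle, share.to_dict())
```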
Results and Discussions
We now present the final results and discussion of the survey findings, based on the final agreement of all the authors, covering (i) the demographic details of the survey participants, (ii) the participants' perceptions of AI ethics principles and challenges, (iii) the severity impact of the identified challenges across the AI ethics principles, and (iv) the statistically significant differences between the opinions of the two populations (practitioners and lawmakers) regarding the identified principles and challenges.
Demographic details
Frequency analysis was performed to organize the descriptive data; it is well suited to analyzing a group of variables with both numeric and ordinal data. In total, 99 respondents from 20 countries across 5 continents, holding 9 roles and coming from 10 different backgrounds, participated in the survey study (see Figure fig-demographics(a-c)). The organizational size (number of employees) of the survey participants most commonly ranges from 50 to 249, accounting for 28% of the total responses (see Figure fig-demographics(d)). Of all the respondents, the majority (48%) have 3-5 years of experience working on AI-focused projects as practitioners or lawmakers (see Figure fig-demographics(e)).
Participants were asked about the perceived importance of ethics in AI systems within their organization. The majority responded positively: 77% mentioned that their organizations consider ethical aspects in AI processes or develop policies for AI projects, 12% answered negatively, and 10% were not sure (see Figure fig-demographics(f)). We mapped the respondents' roles across nine categories using thematic mapping (see Figure fig-demographics(b)); most respondents (29%) fall into the law practitioner category. Similarly, the working domains of the participants' organizations were conceptually framed into 10 core categories, and the results revealed that most (19%) of the organizations work on smart systems (see Figure fig-demographics(c)).
[Figure fig-demographics: Demographic details of survey participants]
AI ethics principles and challenges (RQ1)
The survey responses are classified as average agree, neutral, and average disagree (see Figure fig-surveyfindings(a-b)). We observed that approximately 65% of the respondents positively confirmed the AI ethics principles and challenges identified in the SLR study.
[Figure fig-surveyfindings: Survey participants' perceptions of AI ethics principles and challenges]
AI ethics principles
The results illustrate that the majority of the survey participants (approx. 64%) agreed with the identified list of AI ethics principles (see Figure fig-surveyfindings(a)). For instance, one survey participant mentioned that:
“The listed AI ethics principles are comprehensive and extensive to cover various aspects of ethics in AI.”
We noticed that 77.8% of the survey respondents regarded transparency as the most significant AI ethics principle. This is an interesting observation, as transparency is equally confirmed as one of the seven essential requirements by the AI HLEG for realizing "trustworthy AI". Transparency provides detailed explanations of logical AI models and decision-making structures in terms understandable to system stakeholders. Moreover, it concerns public perceptions and understanding of how AI systems work. Broadly, it is a societal and normative ideal of "openness".
The second most significant principle for the survey participants was accountability (71.7%). It refers to the expectations or requirements that organizations or individuals must ensure throughout the lifecycle of an AI system: they should be accountable, according to their roles and the applicable regulatory frameworks, for the system's design, development, deployment, and operation, by providing documentation on the decision-making process or conducting regular auditing with proper justification. Privacy is the third most frequently occurring principle, supported by 69.7% of the survey participants. It refers to preventing harm to a fundamental right that is specifically affected by decision-making systems. Privacy compels data governance throughout the system lifecycle, covering data quality, integrity, application domain, access protocols, and the capability to process data in a way that safeguards privacy. It must be ensured that data collected and manipulated by an AI system is not used unlawfully or to unfairly discriminate against human beings. For example, one of the respondents mentioned that
“The privacy of hosted data used in AI applications and the risk of data breaches must be considered.”
In general, the survey findings on AI ethics principles are confirmatory of the widely adopted accountability, responsibility, and transparency (ART) framework and the findings of an industrial empirical study conducted by Ville et al. Both studies jointly considered transparency and accountability as the core AI ethics principles, which is consistent with the findings of this survey. On the contrary, we noticed that privacy was overlooked in both of those studies, yet it is placed as the third most significant principle in this survey. The reason might be that, as more and more AI systems move online, the significance of privacy and data protection is increasingly recognized. Presently, various countries have embarked on legislation to ensure the protection of data and privacy.
AI ethics challenges
Further, the results reveal that the majority of the survey respondents (approx. 66%) confirmed the identified challenging factors (see Figure fig-surveyfindings(b)). Lack of ethical knowledge is the most frequently cited challenge, reported by 81.8% of the survey participants. This shows that knowledge of the ethical aspects of AI systems is largely ignored in industrial settings, and that there is a significant gap between research and practice in AI ethics. Extant guidelines and policies devised by researchers and regulatory bodies discuss different ethical goals for AI systems; however, these goals have not been widely adopted in the industrial domain because of limited knowledge of how to scale them in practice. These findings agree with the results of the industrial study conducted by Ville et al., which concluded that the ethical aspects of AI systems are not explicitly considered, mainly because of a lack of knowledge, awareness, and personal commitment. We noticed that no legal frameworks (69.7%) ranks as the second most common challenge to considering ethics in the AI domain. The proliferation of AI technologies in high-risk areas is mounting pressure to design ethical and legal standards and frameworks to govern them. This highlights the nuances of the debate on AI law and lays the groundwork for a more inclusive AI governance framework. Such a framework should focus on the most pertinent ethical issues raised by AI systems, the use of AI across industry and government organisations, and economic displacement (i.e., the ethical reply to the loss of jobs as a result of AI-based automation). The third most common challenging factor, highlighted by 68.7% of the survey participants, is lacking monitoring bodies. It refers to the lack of regulatory oversight to assess ethics in AI systems, and raises the issue of empowering public bodies to monitor and audit the enforcement of ethical concerns in AI technologies by domain (e.g., health, transport, education). One survey respondent mentioned that
“I believe it shall be mandatory for the industry to get standard approval from monitoring bodies to consider ethics in the development process of AI systems.”
Monitoring bodies extensively promote and observe ethical values in society and evaluate technology development associated with the ethical aspects of AI. They would be tasked with advocating and defining responsibilities and with developing rules, regulations, and practices for situations in which a system takes decisions autonomously. The monitoring group should ensure "ethics by, in and for design", as mentioned in the AI HLEG guidelines.
Additionally, the survey participants elaborated on new challenging factors. For instance, one of the participants mentioned that
“Implicit biases in AI algorithms such as data discrimination and cognitive biases could impact system transparency.”
Similarly, another respondent reported that
“Biases in the AI system’s design might bring distress to a group of people or individuals.”
Moreover, a survey respondent explicitly considered the lack of tools for ethical transparency and AI biases as significant challenges to AI ethics. We noticed that AI biases is the most commonly reported additional challenge. It will be interesting to further explore (i) the types of biases that might be embedded in AI algorithms, (ii) the causes of these biases, and (iii) corresponding countermeasures to minimize their negative impact on AI ethics.
Severity impacts of identified challenges (RQ2)
We selected the seven most frequently reported challenging factors and six principles discussed in our SLR study. The aim is to investigate the severity impact of the seven challenges (i.e., lack of ethical knowledge, vague principles, highly general principles, conflict in practice, interpret principles differently, lack of technical understanding, and extra constraints) across the six AI ethics principles (i.e., transparency, privacy, accountability, fairness, autonomy, and explainability). The survey participants were asked to rate the severity impact on a Likert scale grouped into short-term (insignificant, minor, moderate) and long-term (major, catastrophic) impacts (see Figure fig-surveyfindings(c)). The results revealed that most challenges have long-term (major or catastrophic) impacts on the principles.
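As an illustration of this grouping, the following sketch aggregates severity ratings into short-term and long-term shares. The sample ratings are invented to mirror one figure reported below (50% major, 27% catastrophic), not taken from the raw data.

```python
from collections import Counter

# Likert severity levels grouped as in the paper: the first three are
# treated as short-term impacts, the last two as long-term impacts.
SHORT_TERM = {"insignificant", "minor", "moderate"}
LONG_TERM = {"major", "catastrophic"}

def impact_split(ratings):
    """Percentage of short-term vs long-term ratings for one
    challenge-principle pair."""
    counts = Counter(ratings)
    total = sum(counts.values())
    short = sum(counts[r] for r in SHORT_TERM)
    long_ = sum(counts[r] for r in LONG_TERM)
    return 100 * short / total, 100 * long_ / total

# Hypothetical ratings for 'interpret principles differently' on
# transparency: 50 major + 27 catastrophic + 23 moderate responses.
ratings = ["major"] * 50 + ["catastrophic"] * 27 + ["moderate"] * 23
print(impact_split(ratings))  # -> (23.0, 77.0)
```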
For the transparency principle, we noticed that the challenging factor interpret principles differently has a significant long-term impact; 77% (i.e., 50% major and 27% catastrophic) of the survey participants agreed. The interpretation of ethical concepts can vary across groups and individuals. For instance, practitioners might perceive transparency differently (more focused on technical aspects) than law and policymakers, who have broader social concerns. Furthermore, lack of ethical knowledge has a short-term impact on the transparency principle, as evident from the survey findings, supported by 52% of responses (7% insignificant, 25% minor, and 20% moderate). The knowledge gap could be quickly closed by attaining knowledge, understanding, and awareness of transparency concepts.
Conflict in practice is deemed the most significant challenge to the privacy principle: 74% (i.e., 53% major and 21% catastrophic) of the survey respondents considered it a long-term severe challenge. Various groups, organizations, and individuals might have conflicting opinions about privacy in AI ethics, so it is critical to interpret and understand privacy conflicts in practice. We noticed that 82% of the survey participants considered the challenging factor extra constraints as the most severe (long-term) challenge for both the accountability and fairness principles. Situational constraints, including organizational politics, lack of information, and management interruption, could interfere with accountability and fairness measures, and could negatively impact employees' motivation and interest in explicitly considering ethical aspects across AI activities. Interestingly, 79% of the survey respondents considered conflict in practice the most common (long-term) challenge for the autonomy and explainability principles.
Overall, we can interpret conflict in practice as the most severe challenge; its average occurrence is >60% across all the principles. This suggests proposing specific solutions focused on tackling the opinion conflicts regarding the real-world implications of AI ethics principles. The results further reveal that lack of ethical knowledge has an average (28%) short-term impact across the selected AI ethics principles. This knowledge gap could be closed by conducting training sessions, workshops, and certification, and by encouraging social awareness of AI ethics. Knowledge increases the likelihood that AI ethics succeeds and is accepted in the best practice of the domain.
Statistical inferences (RQ3)
We performed non-parametric statistical analysis to evaluate the significant differences and similarities between the opinions of lawmakers and software practitioners; the same non-parametric analysis has previously been performed in other studies of a similar nature. The frequency-based ranking of both datasets was identified for the AI ethics principles (see Table tab-rank-principles) and challenges (see Table tab-ai-ethics-challenges-ranks) to set common measures for the non-parametric Spearman's rank-order correlation coefficient. The coefficient captures the monotonic dependence between two ranked variables, ranging from rs (correlation coefficient) = +1 to -1, where +1 indicates total positive dependency.
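A minimal sketch of this computation with scipy.stats is shown below; the two rank vectors are hypothetical stand-ins for the frequency-based ranks in Table tab-rank-principles.

```python
from scipy.stats import spearmanr

# Hypothetical frequency-based ranks of the same principles as assigned
# by the two populations (the paper derives these from Likert agreement).
practitioner_ranks = [1, 2, 3, 4, 6, 5, 7]
lawmaker_ranks     = [1, 2, 3, 5, 4, 6, 7]

# Spearman's rank-order correlation between the two rankings.
rs, p_value = spearmanr(practitioner_ranks, lawmaker_ranks)
print(f"rs = {rs:.3f}, p = {p_value:.3f}")  # rs near +1 => strong agreement
```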
Significant differences for AI ethics principles
The Spearman’s Rank-order correlation test was applied to statistically evaluate the significant differences between the practitioners and lawmakers perceptions on AI ethics principles. We obtained the Spearman’s Rank-order correlation coefficient value (rs$=$0.819), which is statistically significant (p$=$0.000) (see Table tab-correlation-principles). The value (rs$=$0.819) and the scatter plot given in Figure fig-scatter-ranks-principles show the strong correlation between the ranks of both datasets (lawmakers and software practitioners). The identified principles are widely discussed across multiple AI ethics guidelines, and it might be the reason why both practitioners and lawmakers equally agreed with the significance and implications of these principles. For example, transparency is a common AI ethics principle, and practitioners and lawmakers ranked it in the first position. However, we also noticed significant differences (p$=$0.000) between both types of the population. For instance, lawmakers ranked fairness at position five as the most important principle; however, the software practitioners placed it at position seven. It shows that fairness across AI activities is relatively important based on lawmakers’ perceptions. It is because fairness is a non-technical and more socially used term. Laws like EU GDPR impose concrete requirements on AI development organizations to safeguard fairness in AI system design, deployment, and data processing. The low-ranked placement of fairness by the practitioners might be because of limited knowledge and understanding of interpreting fairness technically, e.g., fairness in AI by design.
Practitioners' and lawmakers' perception correlation for AI ethics principles
Table Label: tab-correlation-principles
[Figure fig-scatter-ranks-principles: Scatter plot of ranks for AI ethics principles]
In addition to Spearman’s Rank order co-relation, we also applied the independent t-test to compare the mean differences of the ranks obtained from both types of population (see Table tab-ttest-principples and Table tab-groupstatistics-principles). Since Levene’s Test is slightly significant (i.e., p$=$0.051$>$0.05), therefore, we assume that the variances are approximately equal. Based on this assumption, the results of t-test (i.e., t $=$ 0.942, p $=$ 0.661 $>$ 0.05) show that there is no high-level significant differences between both variables. The results show that the degree of agreement between lawmakers and practitioners concerning AI ethics principles is positive. It means that both populations (lawmakers and software practitioners) equally consider the importance of AI ethics principles. The group statistics for both variables are given in Table tab-groupstatistics-principles.
Independent samples t-test of AI ethics principles
Table Label: tab-ttest-principples
AI ethics principles group statistics
Table Label: tab-groupstatistics-principles
Significant differences for AI ethics challenges
As with the AI ethics principles, the identified challenges were ranked (see Table tab-ai-ethics-challenges-ranks), and Spearman's rank-order correlation coefficient test was applied to measure the significant differences. The correlation coefficient (rs = 0.628) shows a positive and statistically significant (p = 0.012) correlation between the two populations (see Table tab-corr-challenges), indicating a moderate and statistically significant agreement between the opinions of lawmakers and practitioners concerning the AI ethics challenges (see Figure fig-scatter-ranks-challenges). Some differences remain; for example, lacking monitoring bodies is ranked second by the practitioners and fifth by the lawmakers. Practitioners mainly engage in team-oriented activities and are more concerned about human bias. Continuous socio-technical monitoring helps ensure reliable, unbiased, and fair outcomes, while forgoing proper monitoring is deemed to bring high ethical harm for practitioners and to increase reputational risk.
[Figure fig-scatter-ranks-challenges: Scatter plot of ranks for AI ethics challenges]
We also applied the independent samples t-test (see Table tab-ttest-challenges and Table tab-group-stat-challenges) to assess the mean differences between the two populations with respect to the AI ethics challenges. The calculated significance value of Levene's test is p = 0.051 > 0.05; therefore, we assume equal variances (see Table tab-ttest-challenges). The t-test results under this assumption (t = 1.291, p = 0.207 > 0.05) show that practitioners and lawmakers consider the significance of the identified challenges equally. We can suppose that practitioners and lawmakers are equally aware of the reported challenges and understand their importance. The group statistics for both variables are provided in Table tab-group-stat-challenges.
Practitioners' and lawmakers' perception correlation for AI ethics challenges
Table Label: tab-corr-challenges
Independent samples t-test of AI ethics challenges
Table Label: tab-ttest-challenges
AI ethics challenges group statistics
Table Label: tab-group-stat-challenges
Overall, we believe that practitioners and lawmakers are on the same page in considering AI ethics principles and challenges, although their perceptions of the challenges differ slightly. We noticed that the various AI ethics principles and guidelines released in the private and public sectors are very abstract and incoherent for the stakeholders who must implement them, and the challenges of interpreting these vague principles differ across targeted stakeholder groups, e.g., industrial versus legislative perspectives. In conclusion, there is a gap between high-minded principles and industrial practices, which calls for alternative approaches based on mutual industrial and legislative consensus.
Key findings and implications
We now outline the key findings of the study answering the RQs: AI ethics principles, challenges, severity impacts, and the statistically significant differences between the perceptions of practitioners and lawmakers. We also report the research and practical implications of the findings.
Summary and interpretation of the key findings
The results of each RQ are thoroughly discussed in Section sec-results, and a summary of the core findings is presented in Table tab-summary-of-key-findings, addressing RQ1 (confirming the identified AI ethics principles and challenges), RQ2 (measuring the severity impacts of the challenges across the principles), and RQ3 (practitioners' and lawmakers' perceptions of AI ethics principles and challenges). For RQ1, the results highlight that both practitioners and lawmakers empirically confirm the AI ethics principles and challenges identified in our recent SLR study: ≥60% of the survey participants agreed with the reported principles and challenges. The participants further ranked the identified principles and challenges on a five-point Likert scale, which indicates that transparency, accountability, and privacy are the most critical principles, while lack of ethical knowledge, no legal frameworks, and lacking monitoring bodies emerge as the most frequent challenging factors. For RQ2, the findings reveal that conflict in practice is the most severe challenging factor for the identified AI ethics principles (see Table tab-summary-of-key-findings). In certain cases, AI ethics principles come into conflict and their practical value becomes unrealistic: prioritizing one might inadvertently compromise another. Whittlestone et al. argue that thorough exploration is required to encounter and articulate the conflicts and tensions across AI ethics principles. For RQ3, we noticed that the perceptions of the two populations (practitioners, lawmakers) are correlated, with statistically significant differences only for specific challenges. This is because the existing principles are too vague, generic, and conceptual, and are no match for specific and complex AI problems; stakeholders perceive the challenges of implementing the generic AI ethics principles differently. A broader consensus of multiple stakeholders (practitioners, lawmakers, and regulatory bodies) is required to define domain-specific principles and guidelines. Overall, the summary in Table tab-summary-of-key-findings is self-explanatory and encapsulates the core results discussed in Section sec-results.
Research implications
- We found that most survey respondents agreed with the reported AI ethics principles, with transparency, accountability, and privacy identified as the most common. The study findings complement the existing literature by revealing the most critical principles and call for future research to define the best solutions for scaling these highly significant principles in practice.
- Regarding the challenges of AI ethics, the survey results confirm the findings of our recent SLR study and identify lack of ethical knowledge, no legal frameworks, and lacking monitoring bodies as the highest-ranked barriers. The identified challenges are core focus areas that need further research to explore their root causes and the best practices to mitigate them.
- The study findings indicate that conflict in practice is the most severe challenge to AI ethics principles. This opens the door for action-guiding future research: AI ethics principles must be contextualized to balance the conflicts, and they need to be structured as standards, regulations, and codes to resolve the conflicting tensions.
Overall, the study findings complement the emerging research on AI ethics, particularly by recognizing the perceptions of two different populations (practitioners and lawmakers). Researchers can quickly look up the study results and develop new hypotheses to streamline the mentioned gaps, e.g., solutions to scale the identified principles in practice, exploring the causes and mitigation practices of the reported challenges, and tailoring the existing principles to fit specific scenarios.
Practical implications
- The study findings provide an overview of AI ethics principles and challenges, which practitioners can draw on to define ethically mature AI processes.
- Manifesting AI ethics principles in practice is hard because of various challenging factors. However, we measured the severity impacts of these challenges, revealing the most severe barriers practitioners need to tackle before embarking on ethics in AI.
- In general, the study results can help practitioners get an overview of, and analyze, the extent to which the reported principles and challenges can be leveraged to support AI ethics in industrial settings.
AI ethics in practice is still a widely unexplored research area. We invite researchers from academia and practitioners from industry to contribute jointly by sharing their experiences and presenting potential solutions to AI ethics problems. This effort will help bridge the gap between academic and industrial practices.
Threats to validity
Various threats could affect the validity of the study findings. We followed the guidelines presented in the literature and categorized the potential threats into the following four types:
Internal validity
Internal validity refers to particular factors that impact methodological rigor. The first internal validity threat in this study is the understandability of the survey instrument: the participants might have understood the survey content differently. However, the instrument was piloted based on experts' opinions to improve the readability and understandability of the questions (see Section settingthestage). Moreover, the domain expertise of the survey participants could be a potential internal validity threat. We mitigated this threat by exploring various social media networks and using personal contacts to approach the most suitable candidates, and we explicitly described the characteristics of prospective participants in the survey information sheet. Interpersonal bias in the data collection and analysis process could also threaten the internal validity of the findings; however, the survey data were collected, analyzed, organized, and reported based on the final consensus of all the authors (see Section settingthestage and Section sec-results).
Construct validity
Construct validity is the extent to which the study constructs are well interpreted and defended. In this study, the AI ethics principles and challenges are the core constructs. The reliability and authenticity of the selected data sources (platforms) is a possible construct validity threat. This threat was alleviated by searching social media and professional research networks to identify the most relevant groups and individuals: we thoroughly read the group discussions to ensure that the members mainly discussed AI ethics issues, and we likewise explored the profile details and interests of the targeted individuals.
External validity
External validity is the extent to which findings based on a particular data sample can be generalized to other contexts. The survey sample size might not be representative enough to provide a concrete foundation for generalizing the study findings. However, we received 99 valid responses from 20 countries across 5 continents, covering a diverse range of experience, working domains, roles, and organization sizes (see Figure fig-demographics). We concede that the findings cannot be generalized at a large scale, nor should the identified principles and challenges be assumed to hold for all types of AI systems. Nevertheless, considering the demographics of the survey respondents (see Figure fig-demographics), we believe the results support overall generalizability to some extent.
Conclusion validity
Conclusion validity is the extent to which certain factors affect the validity of conclusions in empirical research. To lessen this threat, the first two authors mainly participated in the data collection process, while the other authors participated in the consent meetings to share feedback and review the survey activities (see Section settingthestage). Similarly, the third author conducted the data analysis, and the final results were presented based on the feedback shared by all the authors (see Section sec-results). Finally, all the authors were invited to brainstorming sessions to discuss the core findings and draw concrete conclusions.
Related Work
We review the most relevant existing work, classified into two categories: i) AI ethics principles and guidelines, and ii) AI ethics frameworks. A conclusive summary at the end positions the scope and contributions of the proposed study.
AI ethics principles and guidelines
Lu et al. interviewed 21 practitioners and verified that the existing AI ethics principles are broad and do not provide tangible guidance for developing ethically aligned AI systems. Their findings uncover the fact that AI ethics practices are often ad hoc and that continuous learning is ignored. Based on the interview findings, Lu et al. proposed a list of patterns and processes that can be embedded as product features to design responsible AI systems; the proposed design patterns mainly support the core AI ethics principles mentioned by the interview participants. Lu et al. also conducted an SLR study and defined a software engineering roadmap for developing AI systems. The proposed roadmap covers the entire process life cycle, focusing on responsible AI governance, defining process-oriented practices, and presenting architectural patterns, styles, and methods to build responsible AI systems by design.
Vakkuri et al. conducted an industrial survey with 249 practitioners to understand and verify the aforementioned research gap based on the EU AI ethics guidelines. The survey results highlight that most companies ignore the societal and environmental requirements of developing AI systems. Moreover, the surveyed participants largely considered product customers as the only stakeholders from an AI ethics perspective; however, this view is too narrow for the AI domain, whose stakeholders include customers, regulatory bodies, practitioners, and society. Consequently, the focus should be on multiple AI ethics principles, e.g., accountability, responsibility, and transparency.
Ibanez and Olmeda conducted semi-structured interviews with 22 practitioners and two focus groups to learn how software development organizations address ethical concerns in AI systems. The interviews raised various issues related to AI ethics principles and practice, including governance, accountability, privacy, fairness, and explainability. Moreover, the participants offered suggestions for operationalizing AI ethics, e.g., promoting domain-focused standardization, embracing a data-driven organizational culture, presenting a particular code of ethics, and fostering AI ethics awareness. In conclusion, Ibanez and Olmeda called for a set of actions to distinguish project stakeholders, develop socio-technical project teams, and regularly evaluate AI project practices, processes, and policies.
AI ethics frameworks
Vakkuri et al. developed the ECCOLA framework to provide a tool for implementing ethics in AI. ECCOLA aims to assist practitioners and AI-focused software development organizations in adopting ethically aligned development processes. The framework supports iterative development and consists of a deck of cards (modules) that can be tailored to a specific context; the cards define various AI ethics themes identified in the existing AI ethics guidelines. ECCOLA has been evaluated in both academic and industrial settings to understand its real-world implications and limitations.
Floridi et al. proposed the AI4People framework, comprising five principles and twenty recommendations, to lay the foundation for a "Good AI Society". The available sets of AI ethics principles were comparatively synthesized to understand their commonalities and significant differences. The comparison revealed four AI ethics principles (beneficence, non-maleficence, autonomy, justice), to which a new principle (explicability) was added to structure the AI4People framework. Finally, twenty action points were devised to scale the principles in practice. The overall aim of the framework is to move the dialogue forward from theoretical principles to in-action policies that shield human autonomy, increase social empowerment, and decrease inequality.
Leikas et al. presented a framework that focuses on ethics by design in decision-making systems. Current design approaches, practices, theories, and concepts of autonomous intelligent systems were reviewed to structure the proposed ethical framework, which can be used to recommend a set of AI ethics principles and practices for a specific scenario. The framework captures the human-centric details of a particular case study and uses them to identify the ethical requirements of the concerned stakeholders and transfer them to design goals. Leikas et al. called for future studies to evaluate the real-world significance of the framework in industrial scenarios.
Conclusive summary
The reported studies are grounded in empirical findings and fine-grained analysis of extant AI ethics principles and guidelines. To complement empiricism in exploring AI ethics principles and challenges, this study explicitly analyzed and discussed the principles based on the perceptions of two different populations (practitioners and lawmakers). Several studies have designed frameworks to operationalize the AI ethics principles; however, no research has yet been done to streamline the plethora of challenges in adopting the widely defined principles and frameworks. Our study primarily focused on survey-driven validation of AI ethics principles and challenges by practitioners and lawmakers, complementing the body of research comprising recent industrial studies on AI ethics principles and frameworks.
Conclusions and future work
This empirical study explored the perceptions of representative practitioners and lawmakers regarding AI ethics principles and potential challenges. We outline the following observations, based on data collected from 99 respondents working in 20 different countries in various roles and across diverse working domains:
Emerging roles: Besides practitioners, policy and lawmakers also play an important role in defining ethical solutions for AI-based systems. To the best of our knowledge, this study is the first effort to encapsulate the views and opinions of both populations.
Confirmatory findings: This study empirically confirms the AI ethics principles and challenging factors discussed in our published SLR study. Based on the survey findings, most participants agreed that the identified principles and challenges should be taken into consideration when defining ethics in AI.
Adherence to AI principles and challenges: The most common principles (e.g., transparency, privacy, accountability) and challenges (e.g., lack of ethical knowledge, no legal frameworks, lacking monitoring bodies) must be carefully addressed in AI ethics. Companies should consider these common principles and challenges when defining ethically aligned design methods and frameworks in practice.
Risk-aware AI ethics: The challenging factors mainly have long-term severity impacts across the AI ethics principles. This opens a new research call to identify the causes of the most severe challenging factors and propose solutions for minimizing or mitigating their impacts.
Practitioners' and lawmakers' perceptions: The identified principles and challenges were statistically analyzed to understand the significant differences between practitioners' and lawmakers' perceptions. We noticed that the opinions of both populations are positively and significantly correlated. In the long term, these findings could be used to develop AI ethics solutions that are lawful (complying with applicable laws), robust (technically and socially), and adherent to ethical principles.
Future research: Our final catalogue (see Figure fig-surveyfindings) of principles and challenging factors can be used as a guideline for defining ethics in the AI domain, and it is a starting point for further research on AI ethics. It is essential to mention that the identified principles and challenging factors reflect only the perceptions of 99 practitioners and lawmakers in 20 countries. A deeper and more comprehensive empirical investigation with wider groups of practitioners, discussing the causes of and solutions to the identified challenges, would be needed to generalize the study findings at a large scale. This, together with proposing a robust solution (an AI ethics maturity model) for integrating ethical aspects into AI design and process flows, will be part of our future work.
Attribution
arXiv:2207.01493v2 [cs.CY]. License: CC BY 4.0.