Responsible AI Research Needs Impact Statements Too


All types of research, development, and policy work can have unintended, adverse consequences—work in responsible artificial intelligence (RAI), ethical AI, or ethics in AI is no exception.

The work of the responsible AI community has illustrated how the design, deployment, and use of computational systems—including machine learning (ML), artificial intelligence (AI), and natural language processing (NLP) systems—can engender a range of adverse impacts. As a result, in recent years the authors of ML, AI, and NLP research papers have been required to include reflections on possible unintended consequences and negative social impacts, whether as dedicated sections or extensive checklists. Even though this requirement traces its roots to work done within the responsible AI community, responsible AI conferences and publication venues such as FAccT,[ACM Conference on Fairness, Accountability, and Transparency: https://facctconference.org/ ] AIES,[AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society: https://www.aies-conference.com ] FORC,[Symposium on Foundations of Responsible Computing: https://responsiblecomputing.org ] and EAAMO[ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization: https://eaamo.org ] have yet to explicitly enforce similar requirements. Surprisingly, many papers on responsible AI, ethical AI, ethics in AI, or related topics[Throughout this viewpoint, we use the acronym RAI to broadly refer to work on responsible AI/computing, ethical AI/computing, trustworthy AI/computing, ethics in AI/computing, or any related topics.] do not include similar reflections on possible adverse impacts. RAI research and work is often taken to be inherently beneficial, with little to no potential for harm, and can thus paradoxically fail to consider the adverse consequences it may give rise to. This is also the case for many RAI artifacts, which have been found to, e.g., “not contend with the organizational, labor, and political implications of AI ethics work in practice.”

This trend of failing to reflect on the possible negative impact of our own work should concern all of us: the research we conduct and the artifacts we build are more often than not value-laden, and thus encode all kinds of implicit practices, assumptions, norms, and values. Like our colleagues in other research communities, we—RAI researchers and practitioners—can and often do suffer from similar “failures of imagination” when it comes to the impact of our own work, and we should at least hold ourselves to the same standard that we expect other communities to adhere to.

We believe responsible AI research needs impact statements, too.

Requiring adverse impact statements for RAI research is long overdue. There have been growing concerns about how our own work has routinely failed to engage with and address deeper patterns of injustice and inequality, often assuming that many elements of the status quo are immutable. We know that common RAI values may conflict in certain deployment settings and that different groups assess and prioritize responsible AI values differently, with RAI research still largely centering Westernized and US-centric perspectives. All of these can have profound implications for which problems and solutions end up being prioritized.

Even well-intentioned applications, policies, or interventions to mitigate known issues can and often do lead to harm. Scrutiny is required even when a system, practice, or framework seems to address real needs stakeholders might have, as there can be subtle patterns of problematic uses, system behavior, or outcomes that are harder to discern. For instance, prior work discusses how fixating on certain notions of fairness can reinforce existing dynamics and exacerbate harms. Indeed, blindly adhering to some RAI frameworks without considering what exactly we are trying to make, e.g., fair or transparent, can lead to these frameworks being used to legitimize harmful, absurd technologies and to a “checkbox culture” in which researchers and practitioners do not meaningfully engage with RAI considerations or with the social, economic, and political origins of these considerations.

Furthermore, a focus on bias and fairness claims often assumes that these issues are due to poor implementation of a system and centers the algorithmic systems themselves, distracting from both basic validation of functionality and the factors that led to injustices in the first place. RAI research, like much of AI research, may inadvertently take for granted that AI systems work or that they are inevitable, failing to reflect on whether techno-solutions are even justifiable. Similarly, RAI interventions targeting the design phase of the AI life-cycle tend to ignore important contextual factors that determine the outcomes resulting from the implementation, deployment, and use of AI systems, as many algorithms developed to help guarantee, e.g., “fairness” requirements are developed “without policy and societal contexts in mind.”[https://www.wired.com/story/bias-statistics-artificial-intelligence-healthcare/ ]

There are also concerns about how RAI practice and research risks facilitating lip service to the issues it ostensibly aims to address, rather than driving meaningful change. RAI work might ignore organizational power dynamics and structures that are critical to enacting change, as well as the fact that, in practice, the responsibility for doing this work often falls on the shoulders of individuals from marginalized backgrounds and/or of time-constrained and untrained practitioners. Raising concerns and performing RAI work can also take a psychological toll on RAI practitioners, as they might, e.g., be exposed to harmful content or need to take great personal risks.

Examples of how RAI research and work can thus also inadvertently lead to harmful outcomes abound.

What are other research communities doing? Following the call by Hecht et al. for researchers to disclose possible negative consequences of their work, conferences like the Conference on Neural Information Processing Systems (NeurIPS) and the International Conference on Machine Learning (ICML) have started requiring authors to address possible risks: “whenever there are risks associated with the proposed methods, methodology, application or data collection and data usage, authors are expected to elaborate on the rationale of their decision and potential mitigations.”[NeurIPS Code of Ethics: https://neurips.cc/public/EthicsGuidelines ] These requirements have evolved over time, from dedicated statements on the “potential broader impact of their work, including its ethical aspects and future societal consequences” to a detailed paper checklist.[NeurIPS Paper Checklist Guidelines: https://neurips.cc/public/guides/PaperChecklist ] Similarly, the International Conference on Learning Representations (ICLR) encourages authors to include an Ethics Statement in their papers covering reflections about “potentially harmful insights, methodologies and applications.”[Author guide for the International Conference on Learning Representations: https://iclr.cc/Conferences/2024/AuthorGuide ] The current Association for Computational Linguistics (ACL) rolling review call for papers—used by most ACL venues—explicitly encourages authors “to discuss the limitations of their work in a dedicated section” and to “devote a section of their paper to concerns about the ethical impact of the work and to a discussion of broader impacts of the work,”[https://aclrollingreview.org/cfp ] while also providing a responsible NLP research checklist.[https://aclrollingreview.org/responsibleNLPresearch/ ] The conference on Empirical Methods in Natural Language Processing (EMNLP) made the discussion of limitations mandatory in 2023, while also encouraging authors to include “an optional broader impact statement or other discussion of ethics.”[https://2023.emnlp.org/calls/main_conference_papers/ ] To nudge authors to be comprehensive in their discussions of limitations, ethical considerations, and adverse impacts, these venues typically do not count these sections towards the page limit.

What do RAI venues do? Recent calls for papers of the ACM Conference on Fairness, Accountability, and Transparency (FAccT) have mainly pointed authors to the ACM Code of Ethics and Professional Conduct,[The ACM Code of Ethics and Professional Conduct notes that computing professionals’ responsibilities include: “Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks” and “Foster public awareness and understanding of computing, related technologies, and their consequences.”] asking them to “adhere to precepts of ethical research and community norms.” Similarly, past calls for papers of other RAI venues such as FORC and EAAMO only briefly note, respectively, that authors “are encouraged to reflect on relevant ethics guidelines” such as the ACM Code of Ethics, and that “papers should include a discussion of ethical impacts and precautions taken, including disclosure regarding whether the study was approved by an Institutional Review Board (IRB).” AIES’ call for papers[https://www.aies-conference.com/2022/call-for-papers/index.html ] does not seem to include any language requiring or encouraging papers to include ethical considerations, limitations, or impact statements. Overall, these CFPs do not feature explicit calls for authors to reflect on possible adverse impacts their work might give rise to, do not explicitly enforce such requirements, and do not provide explicit guidance or incentives for doing so (e.g., extra pages, checklists).

Suggestions for More Meaningful Engagement with the Impact of RAI Research

To help others understand not only the benefits or positive outcomes but also the possible harmful outcomes or adverse impacts of our own research, we believe RAI papers should go one step beyond what other research communities are currently doing and include:

  • reflections on how the researchers’ disciplinary background, lived experiences, and goals might affect the way they approach their work (as part of researcher positionality statements),

  • a description of the ethical concerns the authors grappled with and mitigated while conducting the work (as part of ethical considerations statements),

  • reflections on the limitations of their methodological choices (as part of a discussion of limitations), and—informed by the researchers’ positionality, known ethical concerns, and known limitations—

  • reflections on possible adverse impacts the work might lead to once published (as part of adverse impact statements).

By distinguishing between these four elements of research practice and outcomes—which have at times been conflated—without being too prescriptive, we hope to provide some clarity and guidance about what each of these statements could include. In doing so, we draw on emerging practices in other communities. However, we recognize that the RAI community comes from diverse disciplinary backgrounds, and some of these elements might be unfamiliar or less applicable to some types of work than to others.

  1. RAI papers should include researcher positionality statements. Our research, development, and policy work necessarily relies on various (explicit and implicit) assumptions that we make and that are shaped by our values, disciplinary backgrounds, knowledge, and lived experiences. We collectively hold a variety of goals and theories of change that motivate and guide our work. Positionality statements are meant to provide added transparency and scaffold readers’ understanding of how our backgrounds and experiences influence or inform our work, and how our perspectives might as a result differ from those of others. If the authors believe that their worldview does not affect their work, that by itself reflects a position that the authors operate under, and they could simply state that in their positionality statement.

We recognize, however, that authors might be concerned about how such statements may end up disclosing axes of their identity that could negatively impact how their work is perceived and evaluated. Positionality statements do not necessarily need to disclose demographic or other sensitive attributes, or “include an identity disclosure.” They can instead focus on any other aspects that help the reader understand where the authors are coming from, by providing clarity about the lenses they used when conducting the work. As a starting point, we recommend the researcher guide by Holmes and the thoughtful suggestions and examples provided by Liang.

  2. RAI papers should include ethical considerations statements. By its very nature, RAI work centers humans. It is thus critical that ethical considerations remain top of mind for researchers and practitioners, who should carefully consider how the use of data or the way the research was conducted affects the individual autonomy, agency, and well-being of, e.g., those producing or represented in datasets, those involved in (or excluded from) any other part of the research and development process (e.g., study participants, researchers, engineers, content moderators, red teamers), or those expected to benefit from or engage with the research outcomes. Ethical considerations statements should especially cover ethical concerns the authors grappled with and mitigated while conducting the work. These statements could note whether the authors obtained an IRB’s approval for any human subject research and which concerns the IRB covered. However, while IRBs to some extent set common standards and provide researchers with a framework to reflect critically about risks and benefits, and about whether these risks and benefits are justly apportioned, ethical considerations statements should not necessarily be limited to them.

  3. RAI papers should include discussions of limitations. Reflecting on and making explicit any data and methodological limitations can further help illustrate the issues these limitations (and the resulting work) might lead to. Such limitations can include aspects related to research design choices, such as problem framing or data and methodological choices, or aspects related to constraints that researchers need to navigate, such as access to participants, computing, or other resources. The discussion of limitations could, for instance, include reflections on the assumptions that a given problem framing or methodological approach makes and when those assumptions might not hold, or on the ways that data biases or lack of data coverage limit the insights that can be drawn. It could also include considerations related to internal, external, or construct validity. If the authors believe their work has no limitations, they could note this. While discussions of methodological limitations are more commonly included in research papers across disciplines, we believe it is worth foregrounding them here as well, if only to clarify how they differ from ethical concerns and adverse impacts. The work by Smith et al. might provide a useful starting point for thinking about limitations.

  4. RAI papers should include adverse impact statements. While statements about possible adverse impacts can be informed by the researcher positionality, ethical considerations, and discussions of methodological limitations, they are not the same. For example, positionality statements are important when thinking about impacts because they help contextualize how authors prioritize problems, and thus help readers understand possible blind spots the authors might have. In a good impact statement, authors critically reflect not only on the impact of how the work was done (which might be covered by the ethical considerations), but also on the impact the work will have once it is put out into the world and used by others—e.g., work using crowd judges to label harmful content raises concerns not only while the research is conducted (which could go under ethical considerations), but also because it may implicitly recommend that others do the same. Adverse impact statements could also include reflections on how unintended consequences could be handled, including recourse mechanisms and possible checks and balances that might help identify such consequences early on.

Decoupling the anticipation of adverse impacts (e.g., ideating about what harms our work could give rise to) from their mitigation (e.g., how we could mitigate these possible harms) might also help authors avoid conflating the two and avoid hyper-focusing only on issues they already know how to mitigate. For those unfamiliar with this practice, the guide for writing NeurIPS impact statements by Ashurst et al. might provide a helpful starting point, along with papers that have examined such practices at NeurIPS and ACL.

                                  ***

We recommend that all four reflections and discussions center marginalized and vulnerable communities, particularly those at the intersection of race, ethnicity, class, gender, nationality, and other characteristics that historically and at present have led to marginalization. For instance, a domain that has historically motivated a large portion of algorithmic fairness research is that of technologies and algorithms used by police, prisons, and/or judicial authorities. Adverse impact, ethical considerations, limitations, and researcher positionality statements are particularly critical and urgent for research motivated by, conducted in, or impacting situations in which the capacity to exercise one’s rights might be diminished, especially for marginalized and vulnerable populations such as prison inmates, heavily policed communities, migrants, and asylum seekers. Authors should also remember that those conducting RAI research, developing RAI policies and practices, or being expected to enforce RAI policies and practices (e.g., practitioners who volunteer to do RAI, red teamers, content moderators) can and often do come from more marginalized backgrounds.

Concluding Reflections

We echo the growing body of work and the calls for embracing critical, dissenting voices and self-reflection in our own community. We believe our community should do more to critically reflect on and mitigate the possible risks and harms that RAI research and work might also give rise to. We hope this perspective provides a starting point and some guidance on how authors of RAI research could more meaningfully engage with the possible adverse impacts of their own work.

Adverse impact statement. Articulating, writing, and sharing this viewpoint is not without risks either. Anticipating harms or unintended consequences is hard even when guidance is provided, and researchers and practitioners often lack training and are time- and resource-constrained. Thus, while we believe there are benefits to requiring reflections 1) on how our backgrounds shape our work, 2) on what ethical issues we identified while conducting the work and how we engaged with them, 3) on the limitations of our work, and 4) on the types of unintended consequences that the resulting work can have, we also recognize that these are not a panacea, as many factors affect by whom, whether, and when adverse impacts are foreseeable. These practices might, in fact, end up merely reflecting and promoting the status quo perspectives and values of those already shaping the work our community does and what it prioritizes.

There is also a risk of overwhelming researchers and practitioners with too many requirements, and thus disincentivizing them from meaningfully engaging with the task of ideating about adverse impacts, from reporting on ethical concerns and limitations, or even from conducting RAI research at all. As other communities are already doing, it might be worthwhile for our community to explore various formats for how authors could report limitations, ethical concerns, and possible adverse impacts.

We also want to acknowledge once more that there are concerns surrounding how positionality statements could inadvertently affect marginalized researchers and practitioners. Authors might be affected not only by being perceived as belonging to a group but also by being assumed not to be part of a group, as “careless requests for such statements or using them in absolutist ways that control who can and cannot do the work can cause some of the very same harms that those who request them are hoping to mitigate.” There is also a risk of misguided deference, where the authors’ identity is used to misconstrue their position as representative of an entire marginalized group. Positionality statements might also accidentally de-anonymize authors during peer review if they disclose attributes that are shared by only a few, and venues should explore how these statements might interact with specific anonymization requirements.

Finally, our perspective might fail to foresee situations in which it is not appropriate to ask authors to include some or any of the statements we highlighted earlier. RAI researchers and practitioners might also face more opposition and friction while conducting their work, and these requirements might unwittingly and unnecessarily further strain already overwhelmed researchers and practitioners.

Positionality statement. The research, disciplinary background, and personal views of the lead author, AO, have significantly influenced this perspective, as her own work has examined how our choices of what problems to prioritize and work on, of how we do our work, and of how we interpret research results are often shaped by unstated or implicit values, norms, goals, practices, and assumptions, as well as by our own “failures of imagination.” ME similarly draws from several years of efforts to bridge different communities, particularly RAI and the recommendation and information retrieval (IR) communities, and from his use of the pedagogical idea of “scaffolding” to model and advocate for continuous improvement in the quality of RAI work in these communities and for that work’s attention to the needs of and impact on marginalized communities (including shifts in his own research methods and writing). CC is influenced by a perspective centered on computing research, computing applications, and computing education, and by the specific concerns of the FAccT conference, with which they have been involved for the past five years. JS draws from her research at the intersection of technology and human well-being, where she examines the role of technologies, design choices, and the values embedded in them in shifting power dynamics and improving individual and organizational well-being. In relation to the perspective presented in this article, she draws on her research on worker well-being, especially surrounding the invisible forms of labor that underlie the creation and deployment of technologies.

                                  ***

While these two statements are imperfect examples of adverse impact and researcher positionality statements, we hope they illustrate how even a viewpoint such as this one can benefit from them. Our critique very much applies to our own work as well, including this perspective: we might have failed to recognize and highlight possible ethical concerns and limitations of both how our perspective that “RAI research needs impact statements too” came together and of what it currently covers.


Acknowledgements We would like to thank Alexandra Chouldechova for early discussions about impact statements for RAI research, and Reuben Binns for insightful feedback about positionality statements.

Bibliography

Boyarskaya, M., Olteanu, A., and Crawford, K. (2020). Overcoming failures of imagination in AI infused system development and deployment. arXiv preprint arXiv:2011.13416.
Keyes, O., Hutson, J., and Durbin, M. (2019). A mulching proposal: Analysing and improving an algorithmic system for turning the elderly into high-nutrient slurry. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 1–11.
Prunkl, C. E. A., Ashurst, C., Anderljung, M., Webb, H., Leike, J., and Dafoe, A. (2021). Institutionalizing ethics in AI through broader impact requirements. Nature Machine Intelligence, 3(2), 104–110.
Nanayakkara, P., Hullman, J., and Diakopoulos, N. (2021). Unpacking the expressed consequences of AI research in broader impact statements. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 795–806.
Ashurst, C., Hine, E., Sedille, P., and Carlier, A. (2022). AI ethics statements: Analysis and lessons learnt from NeurIPS broader impact statements. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2047–2056.
Balayn, A., Yurrita, M., Yang, J., and Gadiraju, U. (2023). “Fairness toolkits, a checkbox culture?” On the factors that fragment developer practices in handling algorithmic harms. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 482–495.
Young, M., Katell, M., and Krafft, P. M. (2022). Confronting power and corporate capture at the FAccT Conference. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1375–1386.
Buçinca, Z., Pham, C. M., Jakesch, M., Ribeiro, M. T., Olteanu, A., and Amershi, S. (2023). AHA!: Facilitating AI impact assessment by generating examples of harms. arXiv preprint arXiv:2306.03280.
Ali, S. J., Christin, A., Smart, A., and Katila, R. (2023). Walking the walk of AI ethics: Organizational challenges and the individualization of risk among ethics entrepreneurs. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 217–226.
Widder, D. G., Zhen, D., Dabbish, L., and Herbsleb, J. (2023). It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them? In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 467–479.
Birhane, A., Ruane, E., Laurent, T., Brown, M. S., Flowers, J., Ventresque, A., and Dancy, C. L. (2022). The forgotten margins of AI ethics. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 948–958.
Rakova, B., Yang, J., Cramer, H., and Chowdhury, R. (2021). Where responsible AI meets reality: Practitioner perspectives on enablers for shifting organizational practices. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1–23.
Gansky, B., and McDonald, S. (2022). CounterFAccTual: How FAccT undermines its organizing principles. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1982–1992.
Septiandri, A. A., Constantinides, M., Tahaei, M., and Quercia, D. (2023). WEIRD FAccTs: How Western, Educated, Industrialized, Rich, and Democratic is FAccT? In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 160–171.
Bennett, C. L., and Keyes, O. (2020). What is the point of fairness? Disability, AI and the complexity of justice. ACM SIGACCESS Accessibility and Computing, (125), 1.
Robertson, R. E., Olteanu, A., Diaz, F., Shokouhi, M., and Bailey, P. (2021). “I can’t reply with that”: Characterizing problematic email reply suggestions. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–18.
Olteanu, A., Castillo, C., Diaz, F., and Kıcıman, E. (2019). Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data, 2, 13.
Sandvig, C., Hamilton, K., Karahalios, K., and Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. Data and Discrimination: Converting Critical Concerns into Productive Inquiry, 22, 4349–4357.
Liang, C. (2021). Reflexivity, positionality, and disclosure in HCI.
Stahl, B. C., Antoniou, J., Bhalla, N., Brooks, L., Jansen, P., Lindqvist, B., Kirichenko, A., Marchal, S., Rodrigues, R., Santiago, N., et al. (2023). A systematic review of artificial intelligence impact assessments. Artificial Intelligence Review, 1–33.
Smith, J. J., Amershi, S., Barocas, S., Wallach, H., and Wortman Vaughan, J. (2022). REAL ML: Recognizing, exploring, and articulating limitations of machine learning research. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 587–597.
Táíwò, O. (2020). Being-in-the-room privilege: Elite capture and epistemic deference. The Philosopher, 108(4), 61–70.
Liu, D., Nanayakkara, P., Sakha, S. A., Abuhamad, G., Blodgett, S. L., Diakopoulos, N., Hullman, J. R., and Eliassi-Rad, T. (2022). Examining responsibility and deliberation in AI impact statements and ethics reviews. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 424–435.
Blodgett, S. L., Lopez, G., Olteanu, A., Sim, R., and Wallach, H. (2021). Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 1004–1015.
Jacobs, A. Z., and Wallach, H. (2021). Measurement and fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 375–385.
Jakesch, M., Buçinca, Z., Amershi, S., and Olteanu, A. (2022). How different groups prioritize ethical values for responsible AI. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 310–323.
Benotti, L., and Blackburn, P. (2022). Ethics consideration sections in natural language processing papers. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 4509–4516.
Zhou, K., Blodgett, S. L., Trischler, A., Daumé III, H., Suleman, K., and Olteanu, A. (2022). Deconstructing NLG evaluation: Evaluation practices, assumptions, and their implications. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 314–324.
Wong, R. Y., Madaio, M. A., and Merrill, N. (2023). Seeing like a toolkit: How toolkits envision the work of AI ethics. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), 1–27.
Brest, P. (2010). The power of theories of change. Stanford Social Innovation Review.
Abebe, R., Barocas, S., Kleinberg, J., Levy, K., Raghavan, M., and Robinson, D. G. (2020). Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 252–260.
Raji, I. D., Kumar, I. E., Horowitz, A., and Selbst, A. (2022). The fallacy of AI functionality. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 959–972.
Hecht, B., Wilcox, L., Bigham, J. P., Schöning, J., Hoque, E., Ernst, J., Bisk, Y., De Russis, L., Yarosh, L., Anjum, B., et al. (2018). It’s time to do something: Mitigating the negative impacts of computing through a change to the peer review process. ACM Future of Computing Blog.
Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411–437.
Barocas, S., Biega, A. J., Fish, B., Niklas, J., and Stark, L. (2020). When not to design, build, or deploy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 695.
Green, B., and Viljoen, S. (2020). Algorithmic realism: Expanding the boundaries of algorithmic thought. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 19–31.
Ashurst, C., Anderljung, M., Prunkl, C., Leike, J., Gal, Y., Shevlane, T., and Dafoe, A. (2020). A guide to writing the NeurIPS impact statement. Centre for the Governance of AI.
Laufer, B., Jain, S., Cooper, A. F., Kleinberg, J., and Heidari, H. (2022). Four years of FAccT: A reflexive, mixed-methods analysis of research contributions, shortcomings, and future prospects. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 401–426.
Wilkinson, D., Ekstrand, M., Vertesi, J. A., and Olteanu, A. (2023). Theories of change in responsible AI. CRAFT session at the 2023 ACM Conference on Fairness, Accountability, and Transparency.
Matthews, J. (2022). Embracing critical voices. Communications of the ACM, 65(7), 7.
Heikkilä, M. (2022). Responsible AI has a burnout problem. MIT Technology Review, October 2022.
Liang, C. A., Munson, S. A., and Kientz, J. A. (2021). Embracing four tensions in human-computer interaction research with marginalized people. ACM Transactions on Computer-Human Interaction, 28(2), 1–47.
Hecht, B. (2020). Suggestions for writing NeurIPS 2020 broader impacts statements. Medium.
Buolamwini, J., and Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency, 77–91. PMLR.
Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., and Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–16.
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
Angwin, J., Larson, J., Mattu, S., and Kirchner, L. (2016). Machine bias. ProPublica.
Pinney, C., Raj, A., Hanna, A., and Ekstrand, M. D. (2023). Much ado about gender: Current practices and future recommendations for appropriate gender-aware information access. In Proceedings of CHIIR ’23, 269–279.
Holmes, A. G. D. (2020). Researcher positionality: A consideration of its influence and place in qualitative research. A new researcher guide. Shanlax International Journal of Education, 8(4), 1–10.

Attribution

arXiv:2311.11776v1 [cs.AI]
License: cc-by-4.0
