
Responsible Machine Learning Systems

Unlock the potential of AI responsibly! Explore the critical role of AI governance in managing risks and maximizing value for organizations of all sizes and industries.

Content License: CC-BY-SA-4.0

Introduction

In this position paper, we share our insights on AI Governance in companies. AI Governance connects the various aspects and properties of trustworthy and socially responsible Machine Learning: security, robustness, privacy, fairness, ethics, interpretability, transparency, and more.

For a long time, artificial intelligence (AI) was something only enterprise organizations could adopt, owing to the vast resources at their fingertips. Today, smaller companies are able to take advantage of AI thanks to newer technologies, e.g. cloud software, that are significantly more affordable than what was available in the past. AI adoption has been on an upward trajectory in recent years and will increase significantly over the next several years. However, every investment has its pros and cons. Unfortunately, the cons associated with AI adoption stem from its inherent uncertainties and from builders of AI systems who do not take the necessary steps to avoid problems down the road. Note that, in this work, AI comprises modern Machine Learning (ML) and Deep Learning (DL) systems, but not the software around them, which presents its own threats and vulnerabilities.

Problems within Industry

Some popular applications of AI are anomaly detection and forecasting, recommender systems, medical diagnosis, natural sciences, and search engines.

However, the application of AI in industry is still in its infancy. With that said, many problems have arisen since its adoption, which can be attributed to several factors:

Lack of risk awareness and management: Too much attention is given to applications of AI and its potential success and not enough attention is given to its potential pitfalls and risks.

AI adoption is moving too fast: According to a 2021 survey by KPMG, many respondents noted that AI technology is moving too fast for their comfort in industrial manufacturing (55%).

AI adoption needs government intervention: According to the same KPMG survey, an overwhelming percentage of respondents agreed that governments should be involved in regulating AI technology in industrial manufacturing (94%).

Companies are still immature when it comes to adopting AI: Some companies are not prepared for business conditions to change once a ML model is deployed into the real world.

Many of these problems can be avoided with proper governance mechanisms. AI without such mechanisms is a dangerous game with detrimental outcomes due to its inherent uncertainty. With that said, adding governance to applications of AI is imperative to ensure safety in production.

(Left) "Rate of AI adoption skyrocketed during COVID-19" by KPMG. (Right) IBM Global AI Adoption Index 2022. We refer the reader to the surveys for more details.

AI Governance

What is AI Governance (AIG)?

AI Governance is a framework to operationalize responsible artificial intelligence at organizations. This framework encourages organizations to curate and use bias-free data, consider societal and end-user impact, and produce unbiased models; the framework also enforces controls on model progression through deployment stages. The potential risks associated with AI need to be considered when designing models, before they affect the quality of models and algorithms. If left unmonitored, AI may not only produce undesirable results, but can also have a significant adverse impact on the organization.

In order for organizations to realize the maximum value from AI projects and develop consistency for organization-wide adoption of AI, while managing significant risks to their business, they must implement AI governance. This enables organizations not only to develop AI projects in a responsible way, but also to ensure that there is consistency across the entire organization and that the business objective stays front and center. With AI governance implemented (as illustrated in the figure below), the following benefits can be realized:

Alignment and Clarity: teams would be aware of, and aligned on, the industry, international, regional, local, and organizational policies that need to be adhered to.

Thoughtfulness and Accountability: teams would put deliberate effort into justifying the business case for AI projects, and put conscious effort into thinking about end-user experience, adversarial impacts, public safety & privacy. This also places greater accountability on the teams developing their respective AI projects.

Consistency and Organizational Adoption: teams would have a more consistent way of developing and collaborating on their AI projects, leading to increased tracking and transparency for their projects. This also provides an overarching view of all AI projects going on within the organization, leading to increased visibility and overall adoption.

Process, Communication, and Tools: teams would have a complete understanding of the steps required to move an AI project to production and start realizing business value. They would also be able to leverage tools that take them through the defined process, while being able to communicate with the right stakeholders through the tool.

Trust and Public Perception: as teams build out their AI projects more thoughtfully, they will inherently build trust amongst customers and end users, and thereby foster a positive public perception.

AI governance requires the following:

  1. A structured organization that gives AIG leaders the correct information they need to establish policies and accountability for AI efforts across their entire organization. For smaller organizations, this might require a more phased approach in which they will work towards the desired structural framework of AIG. For larger organizations, this process might be more attainable due to resources alone, e.g. people, IT infrastructure, larger budgets.

  2. A concrete and specific AI workflow that collects information needed by AIG leaders will help enforce the constructed policies. This provides information to various parties in a consumable manner. Having such information can be used to minimize mistakes, errors, and bias, amongst other things.

The requirements for AI governance manifest in a framework that an organization must work towards developing. The components of this framework need to be transparent and comprehensive to achieve a successful implementation of AIG. Specifically, this should focus on organizational and use case planning, AI development, and AI "operationalization", which come together to make a four-stage AI life cycle.
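
To make the second requirement concrete, the sketch below shows the kind of structured record such a workflow might collect for each AI project, and how stage transitions can be gated on a named approver. The class, field names, and stage names are purely illustrative assumptions, not a reference to any particular tool.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical lifecycle stages for a governed AI project (illustrative only).
STAGES = ["use_case_planning", "development", "validation", "deployment", "decommissioned"]

@dataclass
class GovernanceRecord:
    """Minimal metadata an AIG workflow could collect for each AI project."""
    use_case: str                      # business problem and hypothesis
    owner: str                         # accountable team or person
    applicable_policies: List[str]     # regulations, laws, internal policies
    stage: str = "use_case_planning"   # current position in the life cycle
    approvals: List[str] = field(default_factory=list)  # reviewers who signed off

    def advance(self, next_stage: str, approver: str) -> None:
        # Enforce a simple control: a project moves forward one stage at a time
        # and only with a named approver on record.
        if STAGES.index(next_stage) != STAGES.index(self.stage) + 1:
            raise ValueError(f"cannot jump from {self.stage} to {next_stage}")
        self.approvals.append(f"{approver}:{next_stage}")
        self.stage = next_stage

# Example: a churn model moving from planning to development after review.
record = GovernanceRecord(
    use_case="Predict customer churn to prioritize retention offers",
    owner="customer-analytics",
    applicable_policies=["GDPR", "internal-model-risk-policy"],
)
record.advance("development", approver="aig-review-board")
print(record.stage, record.approvals)
```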

Illustration of the AI Governance application towards responsible AI in companies.

Stages of a Governed AI Life cycle

Organizational Planning

An AI Governance Program should be organized in such a way that (a) there is a comprehensive understanding of regulations, laws, and policies amongst all team members; (b) resources and help are available for team members who encounter challenges; and (c) there is a lightweight, yet clear, process to assist team members.

  1. Regulations, Laws, Policies

    • Laws and regulations that apply to a specific entity should be identified, documented, and available for others to review and audit. These regulations, laws, and policies vary across industry and sometimes by geographical location. Organizations should, if applicable, develop policies for themselves which reflect their values and ethical views; this enables teams to be more autonomous and make decisions with confidence.
  2. Organization (Center of Competency)

    • Establishing groups within an organization that provide support to teams with AI projects can prove to be quite beneficial. This includes a group that is knowledgeable about regulations, laws, and policies and can answer any questions that AI teams may have; a group that is able to share best practices across different AI teams within the organization; and a group that is able to audit the data, model, process, etc. to ensure there isn't a breach or non-compliance. For more information, we refer the reader to the survey by Floridi et al.
  3. Process

    • Developing a light-weight process that provides guidelines to AI teams can help with their efficiency, rather than hinder their progress and velocity. This involves identifying what the approval process and incident response would be for data, model, deployments, etc.

Use Case Planning

Building use cases involves establishing business value, technology stack, and model usage. The group of people involved in this process can include: subject matter experts, data scientists/analysts/annotators and ML engineers, IT professionals, and finance departments.

Business Value Framework. The AI team should ensure that the motivation for the AI use case is documented and communicated amongst all stakeholders. This should also include the original hypothesis, and the metrics that would be used for evaluating the experiments.

Tools, Technology, Products. The AI team should either select from a set of pre-approved tools and products from the organization or get a set of tools and products approved before using them in an AI use case. If tools for AI development are not governed, it not only leads to high costs and unmanageable tooling (as existing IT teams are well aware), but also prevents repeatability and traceability in AI models.

Model Usage. Once a sense of value is attached to the use case, the next step is to break the use case down into its sub-components, which include, but are not limited to, identifying the consumer of the model, the model's limitations, and potential bias that may exist within the model, along with its implications. One would also want to ensure inclusiveness of the target population, public safety and user privacy, and identification of the model interface needed for the intended use case.

AI Development

This stage covers the development of a machine learning model, including data handling and analysis, modeling, generating explanations, bias detection, accuracy and efficacy analysis, security and robustness checks, model lineage, validation, and documentation.

  1. Data Handling, Analysis and Modeling

    • The first technical step in any AI project is the procurement and analysis of data, which is critical as it lays the foundation for all work going forward. Once the data is analyzed, one must decide whether modeling is needed for the use case at hand. If it is, the application of AI can take place; such an application is an iterative process spanning many different roles.
  2. Explanations and Bias

    • The goal of model explanations is to relate feature values to model predictions in a human-friendly manner. What one does with these explanations breaks down into three personas: the modeler, the intermediary user, and the end user. The modeler would use explanations for model debugging and for understanding the model they just built. The intermediary user would use what the modeler produced for actionable insights. Finally, the end user is the person the model affects directly. For these reasons, Explainable Artificial Intelligence (XAI) is a very active research topic (a minimal permutation-importance sketch appears after this list).

    • Bias, whether intentional (disparate treatment) or unintentional (disparate impact), is a cause for concern in many applications of AI. Common things to investigate when it comes to preventing bias include the data source used for the modeling process, performance issues amongst different demographics, disparate impact (see the disparate impact sketch after this list), known limitations and potential adverse implications, and the model's impact on public safety. We refer the reader to the survey by Mehrabi et al. for more details about bias and fairness in AI.

  3. Accuracy, Efficacy, & Robustness

    • Accuracy of a machine learning model is critical for any business application in which predictions drive potential actions. However, it is not the most important metric to optimize. One must also consider the efficacy of a model, i.e. is the model making the intended business impact?

    • When a model is serving predictions in a production setting, the data can be slightly or significantly different from the data that the project team had access to. Although model drift and feature drift can capture this discrepancy, they are lagging indicators, and by that time the model has already made predictions. This is where robustness comes in: project teams can proactively test for model robustness using "out of scope" data to understand how the model behaves when inputs are perturbed (see the perturbation sketch after this list). The out-of-scope data can be a combination of manual generation (toggling feature values by hand) and automatic generation (the system toggles feature values).

  4. Security

    • ML systems today are subject to general attacks that can affect any public-facing IT system (Papernot, 2018); specialized attacks that exploit insider access to data and ML code; attacks that exploit external access to ML prediction APIs and endpoints; and trojans that can hide in third-party ML artifacts. Such attacks must be accounted for and tested against before sending a machine learning model out into the real world.
  5. Documentation & Validation

    • An overall lineage of the entire AI project life cycle should be documented to ensure transparency and understanding, which will be useful for the AI team working on the project as well as future teams who must reference this project for their own applications.

    • Model validation is the set of processes and activities carried out by a third party with the intent to verify that models are robust and performing as expected, in line with the business use case. It also identifies the impact of potential limitations and assumptions. From a technical standpoint, the following should be considered: (i) sensitivity analysis; (ii) in-sample vs. out-of-sample performance; (iii) replication of results from the model development team; and (iv) stability analysis. Model "validators" should document all of their findings and share them with relevant stakeholders.
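
As a minimal illustration of the explanation step (item 2 above), the sketch below uses permutation importance, one common model-agnostic way to relate features to a model's predictions. The synthetic data and model choice are assumptions for the example only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data standing in for a tabular business dataset.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Permutation importance relates each feature to the model's predictions:
# shuffle one feature at a time and measure how much held-out accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: drop in accuracy = {result.importances_mean[i]:.3f}")
```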
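
The bias bullet (item 2) mentions disparate impact. The following sketch computes a disparate impact (adverse impact) ratio across groups and applies the common four-fifths rule as a screening heuristic; the counts are fabricated for illustration, and a flag here is a prompt for review, not a legal determination.

```python
import pandas as pd

# Hypothetical model decisions for two demographic groups; in practice these
# would come from the model's predictions on an evaluation set.
scores = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "approved": [1] * 62 + [0] * 38 + [1] * 44 + [0] * 56,
})

selection_rates = scores.groupby("group")["approved"].mean()
reference = selection_rates.max()

# Disparate impact ratio: selection rate of each group relative to the most
# favored group. The "four-fifths rule" flags ratios below 0.8 for review.
for group, rate in selection_rates.items():
    ratio = rate / reference
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```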
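
For the robustness bullet (item 3), the sketch below generates crude "out of scope" inputs by toggling each feature well beyond its observed range and measuring how much the predictions move; this is one simple way a project team, or a validator running a sensitivity analysis, might probe brittleness before deployment. The data, model, and 1.5x perturbation size are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-in for the project team's training data.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
baseline = model.predict_proba(X)[:, 1]

# "Out of scope" data: push each feature well beyond its observed range and
# measure how far the predicted probabilities move. Large shifts suggest the
# model is brittle in regions it may still encounter in production.
for j in range(X.shape[1]):
    X_perturbed = X.copy()
    span = X[:, j].max() - X[:, j].min()
    X_perturbed[:, j] += rng.choice([-1, 1], size=len(X)) * 1.5 * span
    shifted = model.predict_proba(X_perturbed)[:, 1]
    print(f"feature {j}: mean |delta probability| = {np.abs(shifted - baseline).mean():.3f}")
```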

AI Operationalization

Deploying a machine learning model into production (i.e. MLOps) is the first step to potentially receiving value out of it. The steps that go into the deployment process should include the following:

Review-Approval Flow: Model building in an AI project will go through various stages: experimentation, model registration, deployment, and decommissioning. Moving from one stage to the next would require "external" reviewer(s) who will vet the work and provide feedback.

Monitoring & Alerts: Once a model is deployed, it must be monitored for various metrics to ensure there is not any degradation in the model. The cause for a model degrading when deployed can include the following: feature and/or target drift, lack of data integrity, and outliers, amongst other things. In terms of monitoring, accuracy, fairness, and explanations of predictions are of interest* * .

Decision Making: The output of a machine learning model is a prediction, but that output must be turned into a decision. How to decide? Will it be autonomous? Will it involve a human in the loop? The answers to these questions vary across applications, but the idea remains the same: ensure decisions are made in a way that decreases risk for everyone involved.
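
Below is a sketch of one way to encode such a decision policy, with an explicit human-in-the-loop route for uncertain scores. The thresholds and action names are illustrative assumptions that would, in practice, be agreed with business owners and recorded in the governance documentation.

```python
def decide(probability: float, auto_threshold: float = 0.9, review_threshold: float = 0.6) -> str:
    """Turn a model score into a decision with a human-in-the-loop escape hatch."""
    if probability >= auto_threshold:
        return "auto-approve"           # high confidence: act autonomously
    if probability >= review_threshold:
        return "route-to-human-review"  # uncertain: a person makes the call
    return "auto-decline"               # low score: safe default action

for score in (0.95, 0.72, 0.30):
    print(score, "->", decide(score))
```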

Incident Response and Escalation Process: With AI models being used in production, there is always going to be a chance for issues to arise. Organizations should have an incident response plan and escalation process documented and known to all project teams.

Companies that successfully implement AI governance for their AI applications will make highly impactful use of artificial intelligence, while those that fail to do so risk catastrophic outcomes and an arduous road to recovery, as shown in the following use case.

AIG Use Case

In this section we describe a recent use case where, we believe, AI Governance could have avoided a terrible outcome.

In 2021, the online real estate technology giant Zillow shut down its AI-powered house-flipping business. At its core, this line of business relied heavily on forecasts from their machine learning models. Zillow found itself overpaying for homes due to overestimating the price of a property. Such overestimation of property values led to a loss of $569 million and 28% of their valuation. The monetary loss and the layoff of 2,000 employees also came with a reputational cost.

This AI failure raises two questions: 'What mistakes did Zillow make?' and 'Could this have been prevented?'. The answer to the second question is a firm 'yes'. The answer to the first has many facets, but we break down the key mistakes below:

  1. Removing the "Human in the Loop"

    • In its early days, the company hired local real estate agents and property experts to verify the output of its home price prediction ML algorithm; however, as the business scaled, the human verification process around the algorithms was minimized and the offer-making process was automated, which helped cut expenses and increase acquisitions. Removing human verification in such a volatile domain meant predictions were taken at face value, which played a big role in the overpricing of properties.

    • Keeping the human in the loop can keep companies from relying on overestimated and biased predictions. In an ML ecosystem for organizational decision-making, a human in the loop is a safety harness and should be used in any high-stakes decision making.

  2. Not Accounting for Concept Drift and Lack of Model Monitoring

    • Looking back at the timeline of events, it appears that the ML algorithms were not adjusted to reflect the real state of the market. The algorithms continued to assume that the market was still "hot" and overestimated home prices.

    • To avoid problems of drift, specifically concept drift, companies should leverage tools for monitoring and maintaining the quality of AI models. Ideally, in this example, they should have set up an infrastructure that automatically alerts data science teams when there is drift or performance degradation, supports root cause analysis, and informs model updates with humans in the loop.

  3. Lack of Model Validation

    • Before organizations deploy any of their algorithms, they should enact a set of processes and activities intended to verify that their models are performing as expected and in line with the business use case. Effective validation ensures that models are robust. It also identifies potential limitations and assumptions, and it assesses their possible impact. A lack of model validation played a crucial role in this case and led to the overestimation of home prices. It should be noted that, ideally, model validation should be carried out by a third party that did not take part in the model-building process.
  4. Lack of Incident Response

    • It is not clear when the company started to realize that its models were degrading and producing erroneous results. The lack of an incident response plan can cost businesses an exorbitant amount of money, time, and human resources. In retrospect, had proper AI Governance been implemented, the company could have initiated an incident response, i.e. incident identification and documentation, human review, and redirection of model traffic. These three steps alone could have prevented a lot of damage to their business.

This industry use case is an example of what can happen without proper AI governance. The mistakes that were made could have been avoided if proper AI governance principles had been taken into account from the inception of the use case. Specifically, if the company had continued to rely on its subject matter experts, accounted for various types of drift, added model validation, and implemented concrete model monitoring with proper incident response planning, then this whole situation could have been avoided or the damage would have been of a much smaller magnitude.

Conclusion

AI systems are used today to make life-altering decisions about employment, bail, parole, and lending, and the scope of decisions delegated to AI systems seems likely to expand in the future. The pervasiveness of AI across many fields is something that will not slow down anytime soon, and organizations will want to keep up with such applications. However, they must be cognisant of the risks that come with AI and have guidelines around how they approach applications of AI in order to avoid those risks. By establishing a framework for AI Governance, organizations will be able to harness AI for their use cases while avoiding risks and having plans in place for risk mitigation, which is paramount.

Social Impact

As we discuss in this paper, governance and a degree of control over AI applications in organizations should be mandatory. AI Governance aims to enable and facilitate connections between the various aspects of trustworthy and socially responsible machine learning systems, and therefore it accounts for security, robustness, privacy, fairness, ethics, and transparency. We believe the implementation of these ideas should have a positive impact on society.

Acknowledgements

We thank the Trustworthy and Socially Responsible Machine Learning (TSRML) Workshop at NeurIPS 2022. This work was supported by H2O.ai.

Attribution

@misc{gill2022brief,
  title = {A Brief Overview of AI Governance for Responsible Machine Learning Systems},
  author = {Navdeep Gill and Abhishek Mathur and Marcos V. Conde},
  year = {2022},
  eprint = {2211.13130},
  archivePrefix = {arXiv},
  primaryClass = {cs.CY}
}

Bibliography

% Intro
@misc{IBM,
  title = {{IBM} {G}lobal {AI} {A}doption {I}ndex},
  note = {URL: \url{https://www.ibm.com/downloads/cas/GVAGA3JP}},
  author = {IBM},
  year = {2022}
}

@misc{PwC,
  title = {{PwC} 2022 {AI} {B}usiness {S}urvey},
  note = {URL: \url{https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-business-survey.html}},
  author = {PwC (PriceWaterhouseCoopers)},
  year = {2022}
}

@misc{KPMG,
  title = {{AI} adoption accelerated during the pandemic but many say it's moving too fast: {KPMG} survey},
  note = {URL: \url{https://info.kpmg.us/news-perspectives/technology-innovation/thriving-in-an-ai-world/ai-adoption-accelerated-during-pandemic.html}},
  author = {KPMG},
  year = {2021}
}

@article{hbr,
  author = {Joe McKendrick},
  journal = {Harvard Business Review},
  title = {{AI} {A}doption {S}kyrocketed {O}ver {t}he {L}ast 18 {M}onths},
  note = {URL: \url{https://hbr.org/2021/09/ai-adoption-skyrocketed-over-the-last-18-months}},
  year = {2021}
}

@misc{ai_incidents,
  title = {{AI} {I}ncident {D}atabase},
  author = {Artificial Intelligence Incident Database},
  year = {2022},
  note = {URL: \url{https://incidentdatabase.ai/}}
}

% Security
@article{security_of_ml,
  title = {The {S}ecurity of {M}achine {L}earning},
  author = {Barreno, Marco and Nelson, Blaine and Joseph, Anthony D and Tygar, J Doug},
  journal = {Machine Learning},
  volume = {81},
  number = {2},
  pages = {121--148},
  year = {2010},
  publisher = {Springer},
  note = {URL: \url{https://people.eecs.berkeley.edu/~adj/publications/paper-files/SecML-MLJ2010.pdf}}
}

@inproceedings{papernot2018marauder,
  title = {A {M}arauder's {M}ap of {S}ecurity and {P}rivacy in {M}achine {L}earning: {A}n overview of current and future research directions for making machine learning secure and private},
  author = {Papernot, Nicolas},
  booktitle = {Proceedings of the 11th ACM Workshop on Artificial Intelligence and Security},
  year = {2018},
  organization = {ACM},
  note = {URL: \url{https://arxiv.org/pdf/1811.01134.pdf}}
}

@inproceedings{membership_inference,
  title = {Membership {I}nference {A}ttacks {A}gainst {M}achine {L}earning {M}odels},
  author = {Shokri, Reza and Stronati, Marco and Song, Congzheng and Shmatikov, Vitaly},
  booktitle = {2017 IEEE Symposium on Security and Privacy (SP)},
  pages = {3--18},
  year = {2017},
  organization = {IEEE},
  note = {URL: \url{https://arxiv.org/pdf/1610.05820.pdf}}
}

@inproceedings{model_stealing,
  title = {Stealing {M}achine {L}earning {M}odels via {P}rediction {API}s},
  author = {Tram{\`e}r, Florian and Zhang, Fan and Juels, Ari and Reiter, Michael K and Ristenpart, Thomas},
  booktitle = {25th {USENIX} Security Symposium ({USENIX} Security 16)},
  pages = {601--618},
  year = {2016},
  note = {URL: \url{https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_tramer.pdf}}
}

% Explanations
@book{molnar,
  title = {{I}nterpretable {M}achine {L}earning},
  author = {Christoph Molnar},
  year = {2022},
  note = {URL: \url{https://christophm.github.io/interpretable-ml-book/}}
}

@article{gunning2019xai,
  title = {XAI—Explainable artificial intelligence},
  author = {Gunning, David and Stefik, Mark and Choi, Jaesik and Miller, Timothy and Stumpf, Simone and Yang, Guang-Zhong},
  journal = {Science Robotics},
  volume = {4},
  number = {37},
  pages = {eaay7120},
  year = {2019},
  publisher = {American Association for the Advancement of Science}
}

@article{arrieta2020explainable,
  title = {Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI},
  author = {Arrieta, Alejandro Barredo and D{\'\i}az-Rodr{\'\i}guez, Natalia and Del Ser, Javier and Bennetot, Adrien and Tabik, Siham and Barbado, Alberto and Garc{\'\i}a, Salvador and Gil-L{\'o}pez, Sergio and Molina, Daniel and Benjamins, Richard and others},
  journal = {Information Fusion},
  volume = {58},
  pages = {82--115},
  year = {2020},
  publisher = {Elsevier}
}

% Bias
@misc{bias,
  title = {{F}airness and {M}achine {L}earning: {L}imitations and {O}pportunities},
  author = {Barocas, S. and Hardt, M. and Narayanan, A.},
  year = {2022},
  note = {URL: \url{https://fairmlbook.org/pdf/fairmlbook.pdf}}
}

@misc{bias_nist,
  author = {Reva Schwartz and Apostol Vassilev and Kristen Greene and Lori Perine and Andrew Burt and Patrick Hall},
  title = {Towards a Standard for Identifying and Managing Bias in Artificial Intelligence},
  year = {2022},
  publisher = {Special Publication (NIST SP), National Institute of Standards and Technology, Gaithersburg, MD},
  url = {https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=934464},
  doi = {10.6028/NIST.SP.1270},
  language = {en}
}

@article{mehrabi2021survey,
  title = {A survey on bias and fairness in machine learning},
  author = {Mehrabi, Ninareh and Morstatter, Fred and Saxena, Nripsuta and Lerman, Kristina and Galstyan, Aram},
  journal = {ACM Computing Surveys (CSUR)},
  volume = {54},
  number = {6},
  pages = {1--35},
  year = {2021},
  publisher = {ACM New York, NY, USA}
}

% Model validation and documentation
@misc{SR117,
  title = {{S}upervisory {G}uidance on {M}odel {R}isk {M}anagement, {SR} {L}etter 11-7},
  year = {2011},
  url = {https://www.federalreserve.gov/supervisionreg/srletters/sr1107a1.pdf},
  pages = {6--16},
  author = {{Board of Governors of the Federal Reserve System} and {Office of the Comptroller of the Currency}}
}

@article{landry1983model,
  title = {Model validation in operations research},
  author = {Landry, Maurice and Malouin, Jean-Louis and Oral, Muhittin},
  journal = {European Journal of Operational Research},
  volume = {14},
  number = {3},
  pages = {207--220},
  year = {1983},
  publisher = {Elsevier}
}

@inproceedings{Mitchell_2019,
  doi = {10.1145/3287560.3287596},
  url = {https://doi.org/10.1145/3287560.3287596},
  year = {2019},
  publisher = {ACM},
  author = {Margaret Mitchell and Simone Wu and Andrew Zaldivar and Parker Barnes and Lucy Vasserman and Ben Hutchinson and Elena Spitzer and Inioluwa Deborah Raji and Timnit Gebru},
  title = {Model Cards for Model Reporting},
  booktitle = {Proceedings of the Conference on Fairness, Accountability, and Transparency}
}

@misc{data_cards,
  doi = {10.48550/ARXIV.2204.01075},
  url = {https://arxiv.org/abs/2204.01075},
  author = {Pushkarna, Mahima and Zaldivar, Andrew and Kjartansson, Oddur},
  title = {Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}

% Adoption
@article{alsheibani2018artificial,
  title = {Artificial intelligence adoption: AI-readiness at firm-level},
  author = {AlSheibani, Sulaiman and Cheung, Yen and Messom, Chris},
  year = {2018},
  journal = {PACIS 2018 Proceedings}
}

@article{cubric2020drivers,
  title = {Drivers, barriers and social considerations for AI adoption in business and management: A tertiary study},
  author = {Cubric, Marija},
  journal = {Technology in Society},
  volume = {62},
  pages = {101257},
  year = {2020},
  publisher = {Elsevier}
}

@article{duan2019artificial,
  title = {Artificial intelligence for decision making in the era of Big Data--evolution, challenges and research agenda},
  author = {Duan, Yanqing and Edwards, John S and Dwivedi, Yogesh K},
  journal = {International Journal of Information Management},
  volume = {48},
  pages = {63--71},
  year = {2019},
  publisher = {Elsevier}
}

@article{floridi2018ai4people,
  title = {AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations},
  author = {Floridi, Luciano and Cowls, Josh and Beltrametti, Monica and Chatila, Raja and Chazerand, Patrice and Dignum, Virginia and Luetge, Christoph and Madelin, Robert and Pagallo, Ugo and Rossi, Francesca and others},
  journal = {Minds and Machines},
  volume = {28},
  number = {4},
  pages = {689--707},
  year = {2018},
  publisher = {Springer}
}

@article{marr2018artificial,
  title = {Is Artificial Intelligence dangerous? 6 AI risks everyone should know about},
  author = {Marr, Bernard},
  journal = {Forbes},
  year = {2018}
}

% AI Governance
@article{wang2018artificial,
  title = {Artificial intelligence: A study on governance, policies, and regulations},
  author = {Wang, Weiyu and Siau, Keng},
  journal = {MWAIS 2018 Proceedings},
  volume = {40},
  year = {2018}
}

@article{reddy2020governance,
  title = {A governance model for the application of AI in health care},
  author = {Reddy, Sandeep and Allan, Sonia and Coghlan, Simon and Cooper, Paul},
  journal = {Journal of the American Medical Informatics Association},
  volume = {27},
  number = {3},
  pages = {491--497},
  year = {2020},
  publisher = {Oxford University Press}
}

@article{dafoe2018ai,
  title = {AI governance: a research agenda},
  author = {Dafoe, Allan},
  journal = {Governance of AI Program, Future of Humanity Institute, University of Oxford: Oxford, UK},
  volume = {1442},
  pages = {1443},
  year = {2018}
}

@article{gasser2017layered,
  title = {A layered model for AI governance},
  author = {Gasser, Urs and Almeida, Virgilio AF},
  journal = {IEEE Internet Computing},
  volume = {21},
  number = {6},
  pages = {58--62},
  year = {2017},
  publisher = {IEEE}
}

@article{kuziemski2020ai,
  title = {AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings},
  author = {Kuziemski, Maciej and Misuraca, Gianluca},
  journal = {Telecommunications Policy},
  volume = {44},
  number = {6},
  pages = {101976},
  year = {2020},
  publisher = {Elsevier}
}

% AI applications
@article{vaishya2020artificial,
  title = {Artificial Intelligence (AI) applications for COVID-19 pandemic},
  author = {Vaishya, Raju and Javaid, Mohd and Khan, Ibrahim Haleem and Haleem, Abid},
  journal = {Diabetes \& Metabolic Syndrome: Clinical Research \& Reviews},
  volume = {14},
  number = {4},
  pages = {337--339},
  year = {2020},
  publisher = {Elsevier}
}

% AI uncertainty
@article{bresina2012planning,
  title = {Planning under continuous time and resource uncertainty: A challenge for AI},
  author = {Bresina, John and Dearden, Richard and Meuleau, Nicolas and Ramkrishnan, Sailesh and Smith, David and Washington, Richard},
  journal = {arXiv preprint arXiv:1301.0559},
  year = {2012}
}

@incollection{zadeh1986probability,
  title = {Is probability theory sufficient for dealing with uncertainty in AI: A negative view},
  author = {Zadeh, Lotfi A},
  booktitle = {Machine Intelligence and Pattern Recognition},
  volume = {4},
  pages = {103--116},
  year = {1986},
  publisher = {Elsevier}
}

% AI Operationalization
@incollection{zhu2022ai,
  title = {AI and Ethics—Operationalizing Responsible AI},
  author = {Zhu, Liming and Xu, Xiwei and Lu, Qinghua and Governatori, Guido and Whittle, Jon},
  booktitle = {Humanity Driven AI},
  pages = {15--33},
  year = {2022},
  publisher = {Springer}
}

@inproceedings{ehsan2021operationalizing,
  title = {Operationalizing human-centered perspectives in explainable AI},
  author = {Ehsan, Upol and Wintersberger, Philipp and Liao, Q Vera and Mara, Martina and Streit, Marc and Wachter, Sandra and Riener, Andreas and Riedl, Mark O},
  booktitle = {Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
  pages = {1--6},
  year = {2021}
}

@article{amodei2016concrete,
  title = {Concrete problems in AI safety},
  author = {Amodei, Dario and Olah, Chris and Steinhardt, Jacob and Christiano, Paul and Schulman, John and Man{\'e}, Dan},
  journal = {arXiv preprint arXiv:1606.06565},
  year = {2016}
}

@article{leike2017ai,
  title = {AI safety gridworlds},
  author = {Leike, Jan and Martic, Miljan and Krakovna, Victoria and Ortega, Pedro A and Everitt, Tom and Lefrancq, Andrew and Orseau, Laurent and Legg, Shane},
  journal = {arXiv preprint arXiv:1711.09883},
  year = {2017}
}

@book{treveil2020introducing,
  title = {Introducing MLOps},
  author = {Treveil, Mark and Omont, Nicolas and Stenac, Cl{\'e}ment and Lefevre, Kenji and Phan, Du and Zentici, Joachim and Lavoillotte, Adrien and Miyazaki, Makoto and Heidmann, Lynn},
  year = {2020},
  publisher = {O'Reilly Media}
}

@incollection{alla2021mlops,
  title = {What is MLOps?},
  author = {Alla, Sridhar and Adari, Suman Kalyan},
  booktitle = {Beginning MLOps with MLFlow},
  pages = {79--124},
  year = {2021},
  publisher = {Springer}
}

@article{mittelstadt2019principles,
  title = {Principles alone cannot guarantee ethical AI},
  author = {Mittelstadt, Brent},
  journal = {Nature Machine Intelligence},
  volume = {1},
  number = {11},
  pages = {501--507},
  year = {2019},
  publisher = {Nature Publishing Group}
}

@article{bloomberg,
  author = {Patrick Clark},
  journal = {Bloomberg},
  title = {Zillow's Algorithm-Fueled Buying Spree Doomed Its Home-Flipping Experiment},
  note = {URL: \url{https://www.bloomberg.com/news/articles/2021-11-08/zillow-z-home-flipping-experiment-doomed-by-tech-algorithms}},
  year = {2021}
}

@article{bloomberg2,
  author = {Patrick Clark and Sridhar Natarajan and Heather Perlberg},
  journal = {Bloomberg},
  title = {Zillow Seeks to Sell 7,000 Homes for \$2.8 Billion After Flipping Halt},
  note = {URL: \url{https://www.bloomberg.com/news/articles/2021-11-01/zillow-selling-7-000-homes-for-2-8-billion-after-flipping-halt}},
  year = {2021}
}

@article{sr711,
  author = {Board of Governors of the Federal Reserve System},
  title = {SR 11-7 attachment: Supervisory Guidance on Model Risk Management},
  note = {URL: \url{https://www.federalreserve.gov/supervisionreg/srletters/sr1107a1.pdf}},
  year = {2011}
}

% Software security
@article{alhazmi2007measuring,
  title = {Measuring, analyzing and predicting security vulnerabilities in software systems},
  author = {Alhazmi, Omar Hussain and Malaiya, Yashwant K and Ray, Indrajit},
  journal = {Computers \& Security},
  volume = {26},
  number = {3},
  pages = {219--228},
  year = {2007},
  publisher = {Elsevier}
}

@book{dowd2006art,
  title = {The art of software security assessment: Identifying and preventing software vulnerabilities},
  author = {Dowd, Mark and McDonald, John and Schuh, Justin},
  year = {2006},
  publisher = {Pearson Education}
}

@article{mcgraw2004software,
  title = {Software security},
  author = {McGraw, Gary},
  journal = {IEEE Security \& Privacy},
  volume = {2},
  number = {2},
  pages = {80--83},
  year = {2004},
  publisher = {IEEE}
}

@article{conde2022general,
  title = {General Image Descriptors for Open World Image Retrieval using ViT CLIP},
  author = {Conde, Marcos V and Aerlic, Ivan and J{\'e}gou, Simon},
  journal = {arXiv preprint arXiv:2210.11141},
  year = {2022}
}

@article{conde2021weakly,
  title = {Weakly-Supervised Classification and Detection of Bird Sounds in the Wild. A BirdCLEF 2021 Solution},
  author = {Conde, Marcos V and Shubham, Kumar and Agnihotri, Prateek and Movva, Nitin D and Bessenyei, Szilard},
  journal = {arXiv preprint arXiv:2107.04878},
  year = {2021}
}

@article{conde2022few,
  title = {Few-shot Long-Tailed Bird Audio Recognition},
  author = {Conde, Marcos V and Choi, Ui-Jin},
  journal = {arXiv preprint arXiv:2206.11260},
  year = {2022}
}

@inproceedings{sultani2018real,
  title = {Real-world anomaly detection in surveillance videos},
  author = {Sultani, Waqas and Chen, Chen and Shah, Mubarak},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages = {6479--6488},
  year = {2018}
}

@article{portugal2018use,
  title = {The use of machine learning algorithms in recommender systems: A systematic review},
  author = {Portugal, Ivens and Alencar, Paulo and Cowan, Donald},
  journal = {Expert Systems with Applications},
  volume = {97},
  pages = {205--227},
  year = {2018},
  publisher = {Elsevier}
}

@article{gordeev2020backtesting,
  title = {Backtesting the predictability of COVID-19},
  author = {Gordeev, Dmitry and Singer, Philipp and Michailidis, Marios and M{\"u}ller, Mathias and Ambati, SriSatish},
  journal = {arXiv preprint arXiv:2007.11411},
  year = {2020}
}

@article{henkel2021recognizing,
  title = {Recognizing bird species in diverse soundscapes under weak supervision},
  author = {Henkel, Christof and Pfeiffer, Pascal and Singer, Philipp},
  journal = {arXiv preprint arXiv:2107.07728},
  year = {2021}
}

@article{makridakis2018statistical,
  title = {Statistical and Machine Learning forecasting methods: Concerns and ways forward},
  author = {Makridakis, Spyros and Spiliotis, Evangelos and Assimakopoulos, Vassilios},
  journal = {PLoS ONE},
  volume = {13},
  number = {3},
  pages = {e0194889},
  year = {2018},
  publisher = {Public Library of Science San Francisco, CA USA}
}

@article{esteva2021deep,
  title = {Deep learning-enabled medical computer vision},
  author = {Esteva, Andre and Chou, Katherine and Yeung, Serena and Naik, Nikhil and Madani, Ali and Mottaghi, Ali and Liu, Yun and Topol, Eric and Dean, Jeff and Socher, Richard},
  journal = {NPJ Digital Medicine},
  volume = {4},
  number = {1},
  pages = {1--9},
  year = {2021},
  publisher = {Nature Publishing Group}
}

@inproceedings{conde2021clip,
  title = {CLIP-Art: Contrastive pre-training for fine-grained art classification},
  author = {Conde, Marcos V and Turgutlu, Kerem},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages = {3956--3960},
  year = {2021}
}

@article{suzuki2017overview,
  title = {Overview of deep learning in medical imaging},
  author = {Suzuki, Kenji},
  journal = {Radiological Physics and Technology},
  volume = {10},
  number = {3},
  pages = {257--273},
  year = {2017},
  publisher = {Springer}
}