How to Comment on NTIA AI Open Model Weights RFC
- Justin Riddiough
- February 27, 2024
The National Telecommunications and Information Administration (NTIA) is asking for public comments on the implications of “open-weight” AI models. These models have the potential to democratize AI tools and accelerate innovation, but they also raise concerns about safety and misuse.
Update: The comment period closed on March 27, 2024.
Why should you comment?
The NTIA, an agency advising the President on communication technology, will use the public comments to inform its recommendations to the President. These recommendations will then be considered by policymakers as they develop regulations or guidelines for open-weight AI models. Your comments can help ensure these policies strike a balance between promoting the benefits of open AI, such as innovation and accessibility, and mitigating potential risks like misuse and safety concerns.
- Balanced policy decisions: Sharing your perspective can inform regulations that promote both the benefits and responsible use of open-weight AI.
- A focus on societal good: Emphasize the potential for open models to benefit individuals, communities, and small businesses.
- Prioritizing safety and security: Address concerns about potential misuse and advocate for safeguards like responsible research practices and clear licensing terms.
Talking points for advocating open weights and open source AI:
- Increased accessibility and innovation: Open models allow smaller players and researchers to access powerful tools, fostering innovation and competition.
- Transparency and accountability: Openness allows for scrutiny and improvement of AI models, leading to more trustworthy and reliable systems.
- Faster advancement of AI research: Open sharing of knowledge and resources can accelerate research and development in the field of AI.
- Economic growth and job creation: Open AI tools can empower businesses of all sizes, leading to economic opportunities and job creation.
Steps to submit a comment:
1: Visit the NTIA docket on Regulations.gov
https://www.regulations.gov/document/NTIA-2023-0009-0001
2: Click “Comment Now” to submit your comment.
3: Provide your input on the questions NTIA is seeking comment on. You can find the questions below.
Questions the NTIA is Seeking Comment On
- Is there evidence or historical examples suggesting that weights of models similar to currently-closed AI systems will, or will not, likely become widely available? If so, what are they?
- Is it possible to generally estimate the timeframe between the deployment of a closed model and the deployment of an open foundation model of similar performance on relevant tasks? How do you expect that timeframe to change? Based on what variables? How do you expect those variables to change in the coming months and years?
- Should “wide availability” of model weights be defined by level of distribution? If so, at what level of distribution (e.g., 10,000 entities; 1 million entities; open publication; etc.) should model weights be presumed to be “widely available”? If not, how should NTIA define “wide availability?”
- Do certain forms of access to an open foundation model (web applications, Application Programming Interfaces (API), local hosting, edge deployment) provide more or less benefit or more or less risk than others? Are these risks dependent on other details of the system or application enabling access?
- Are there promising prospective forms or modes of access that could strike a more favorable benefit-risk balance? If so, what are they?
- What, if any, are the risks associated with widely available model weights? How do these risks change, if at all, when the training data or source code associated with fine tuning, pretraining, or deploying a model is simultaneously widely available?
- Could open foundation models reduce equity in rights and safety-impacting AI systems (e.g. healthcare, education, criminal justice, housing, online platforms, etc.)?
- What, if any, risks related to privacy could result from the wide availability of model weights?
- Are there novel ways that state or non-state actors could use widely available model weights to create or exacerbate security risks, including but not limited to threats to infrastructure, public health, human and civil rights, democracy, defense, and the economy?
- How do these risks compare to those associated with closed models?
- How do these risks compare to those associated with other types of software systems and information resources?
- What, if any, risks could result from differences in access to widely available models across different jurisdictions?
- Which are the most severe, and which the most likely, risks described in answering the questions above? How do these sets of risks relate to each other, if at all?
- What benefits do open model weights offer for competition and innovation, both in the AI marketplace and in other areas of the economy? In what ways can open dual-use foundation models enable or enhance scientific research, as well as education/training in computer science and related fields?
- How can making model weights widely available improve the safety, security, and trustworthiness of AI and the robustness of public preparedness against potential AI risks?
- Could open model weights, and in particular the ability to retrain models, help advance equity in rights and safety-impacting AI systems (e.g. healthcare, education, criminal justice, housing, online platforms etc.)?
- How can the diffusion of AI models with widely available weights support the United States’ national security interests? How could it interfere with, or further the enjoyment and protection of human rights within and outside of the United States?
- How do these benefits change, if at all, when the training data or the associated source code of the model is simultaneously widely available?
- What model evaluations, if any, can help determine the risks or benefits associated with making weights of a foundation model widely available?
- Are there effective ways to create safeguards around foundation models, either to ensure that model weights do not become available, or to protect system integrity or human well-being (including privacy) and reduce security risks in those cases where weights are widely available?
- What are the prospects for developing effective safeguards in the future?
- Are there ways to regain control over and/or restrict access to and/or limit use of weights of an open foundation model that, either inadvertently or purposely, have already become widely available? What are the approximate costs of these methods today? How reliable are they?
- What, if any, secure storage techniques or practices could be considered necessary to prevent unintentional distribution of model weights?
- Which components of a foundation model need to be available, and to whom, in order to analyze, evaluate, certify, or red-team the model? To the extent possible, please identify specific evaluations or types of evaluations and the component(s) that need to be available for each.
- Are there means by which to test or verify model weights? What methodology or methodologies exist to audit model weights and/or foundation models?
- In which ways is open-source software policy analogous (or not) to the availability of model weights? Are there lessons we can learn from the history and ecosystem of open-source software, open data, and other “open” initiatives for open foundation models, particularly the availability of model weights?
- How, if at all, does the wide availability of model weights change the competition dynamics in the broader economy, specifically looking at industries such as but not limited to healthcare, marketing, and education?
- How, if at all, do intellectual property-related issues—such as the license terms under which foundation model weights are made publicly available—influence competition, benefits, and risks? Which licenses are most prominent in the context of making model weights widely available? What are the tradeoffs associated with each of these licenses?
- Are there concerns about potential barriers to interoperability stemming from different incompatible “open” licenses, e.g., licenses with conflicting requirements, applied to AI components? Would standardizing license terms specifically for foundation model weights be beneficial? Are there particular examples in existence that could be useful?
- What security, legal, or other measures can reasonably be employed to reliably prevent wide availability of access to a foundation model’s weights, or limit their end use?
- How might the wide availability of open foundation model weights facilitate, or else frustrate, government action in AI regulation?
- When, if ever, should entities deploying AI disclose to users or the general public that they are using open foundation models either with or without widely available weights?
- What role, if any, should the U.S. government take in setting metrics for risk, creating standards for best practices, and/or supporting or restricting the availability of foundation model weights?
- Should other government or non-government bodies, currently existing or not, support the government in this role? Should this vary by sector?
- What should the role of model hosting services (e.g. HuggingFace, GitHub, etc.) be in making dual-use models with open weights more or less available? Should hosting services host models that do not meet certain safety standards? By whom should those standards be prescribed?
- Should there be different standards for government as opposed to private industry when it comes to sharing model weights of open foundation models or contracting with companies who use them?
- What should the U.S. prioritize in working with other countries on this topic, and which countries are most important to work with?
- What insights from other countries or other societal systems are most useful to consider?
- Are there effective mechanisms or procedures that can be used by the government or companies to make decisions regarding an appropriate degree of availability of model weights in a dual-use foundation model or the dual-use foundation model ecosystem? Are there methods for making effective decisions about open AI deployment that balance both benefits and risks? This may include responsible capability scaling policies, preparedness frameworks, et cetera.
- Are there particular individuals/entities who should or should not have access to open-weight foundation models? If so, why and under what circumstances?
- How should these potentially competing interests of innovation, competition, and security be addressed or balanced?
- Noting that E.O. 14110 grants the Secretary of Commerce the capacity to adapt the threshold, is the amount of computational resources required to build a model, such as the cutoff of 10^26 integer or floating-point operations used in the Executive Order, a useful metric for thresholds to mitigate risk in the long-term, particularly for risks associated with wide availability of model weights?
- Are there more robust risk metrics for foundation models with widely available weights that will stand the test of time? Should we look at models that fall outside of the dual-use foundation model definition?
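For context on the scale of the Executive Order's 10^26-operation threshold, a common back-of-envelope estimate from the scaling-law literature puts total training compute at roughly 6 × parameters × training tokens. The sketch below applies that heuristic; the model sizes are illustrative assumptions, not figures for any actual system:

```python
# Rough training-compute estimate via the common 6*N*D heuristic
# (FLOPs ~ 6 * parameter_count * training_tokens). The example model
# configurations below are illustrative assumptions, not official numbers.

EO_THRESHOLD = 1e26  # E.O. 14110 reporting threshold, in operations


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs using the 6*N*D rule of thumb."""
    return 6 * params * tokens


examples = {
    "70B params, 2T tokens": training_flops(70e9, 2e12),
    "400B params, 15T tokens": training_flops(400e9, 15e12),
}

for name, flops in examples.items():
    side = "above" if flops > EO_THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 1e26 threshold)")
```

Under this heuristic, even fairly large hypothetical training runs land below 10^26 operations, which is one reason commenters debated whether a fixed compute cutoff is a durable risk metric.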
Good to Know Before You Comment
Instructions from Federal Register
Through this Request for Comment, we hope to gather information on the following questions. These are not exhaustive, and commenters are invited to provide input on relevant questions not asked below. Commenters are not required to respond to all questions. When responding to one or more of the questions below, please note in the text of your response the number of the question to which you are responding.
Commenters should include a page number on each page of their submissions.
Commenters are welcome to provide specific actionable proposals, rationales, and relevant facts.
Please do not include in your comments information of a confidential nature, such as sensitive personal information or proprietary information. All comments received are a part of the public record and will generally be posted to Regulations.gov without change. All personal identifying information (e.g., name, address) voluntarily submitted by the commenter may be publicly accessible.
Overview
A comment can express simple support or dissent for a regulatory action. However, a constructive, information-rich comment that clearly communicates and supports its claims is more likely to have an impact on regulatory decision making.
These tips are meant to help the public submit comments that have an impact and help agency policy makers improve federal regulations.
Summary
- Read and understand the regulatory document you are commenting on
- Feel free to reach out to the agency with questions
- Be concise but support your claims
- Base your justification on sound reasoning, scientific evidence, and/or how you will be impacted
- Address trade-offs and opposing views in your comment
- There is no minimum or maximum length for an effective comment
- The comment process is not a vote – one well-supported comment is often more influential than a thousand form letters
Comment periods close at 11:59 p.m. Eastern Time on the date comments are due - begin work well before the deadline.
Attempt to fully understand each issue; if you have questions or do not understand a part of the regulatory document, you may ask for help from the agency contact listed in the document.
Note: Although the agency contact can answer your questions about the document’s meaning, official comments must be submitted through the comment form.
Clearly identify the issues within the regulatory action on which you are commenting. If you are commenting on a particular word, phrase or sentence, provide the page number, column, and paragraph citation from the federal register document.
If a rule raises many issues, do not feel obligated to comment on every one – select those issues that concern and affect you the most and/or you understand the best.
Agencies often ask specific questions or raise issues in rulemaking proposals on subjects where they are actively looking for more information. While the agency will still accept comments on any part of the proposed regulation, please keep these questions and issues in mind while formulating your comment.
Although agencies receive and appreciate all comments, constructive comments (either positive or negative) are the most likely to have an influence.
If you disagree with a proposed action, suggest an alternative (including not regulating at all) and include an explanation and/or analysis of how the alternative might meet the same objective or be more effective.
The comment process is not a vote. The government is attempting to formulate the best policy, so when crafting a comment it is important that you adequately explain the reasoning behind your position.
Identify credentials and experience that may distinguish your comments from others. If you are commenting in an area in which you have relevant personal or professional experience (e.g., scientist, attorney, fisherman, businessman), say so.
Agency reviewers look for sound science and reasoning in the comments they receive. When possible, support your comment with substantive data, facts, and/or expert opinions. You may also provide personal experience in your comment, as may be appropriate. By supporting your arguments well you are more likely to influence the agency decision making.
Consider including examples of how the proposed rule would impact you negatively or positively.
Comments on the economic effects of rules that include quantitative and qualitative data are especially helpful.
Include the pros and cons and trade-offs of your position and explain them. Your position could consider other points of view, and respond to them with facts and sound reasoning.
If you are uploading more than one attachment to the comment web form, it is recommended that you use the following file titles:
- Attachment1_
- Attachment2_
- Attachment3_
This standardized file naming convention will help agency reviewers distinguish your submitted attachments and aid in the comment review process. Keep a copy of your comment in a separate file – this practice helps ensure that you will not lose your comment if you have a problem submitting it using the Regulations.gov web form.
Posted Comments
After submission, your comment will be processed by the agency and posted to Regulations.gov. At times, an agency may choose not to post a submitted comment. Reasons for not posting the comment can include:
- The comment is part of a mass submission campaign or is a duplicate.
- The comment is incomplete.
- The comment is not related to the regulation.
- The comment has been identified as spam.
- The comment contains Personally Identifiable Information (PII) data.
- The comment contains profanity or other inappropriate language.
- The submitter requested the comment not be posted.
Form Letters
Organizations often encourage their members to submit form letters designed to address issues common to their membership. Organizations including industry associations, labor unions, and conservation groups sometimes use form letters to voice their opposition or support of a proposed rulemaking. Many in the public mistakenly believe that their submitted form letter constitutes a “vote” regarding the issues concerning them. Although public support or opposition may help guide important public policies, agencies make determinations for a proposed action based on sound reasoning and scientific evidence rather than a majority of votes. A single, well-supported comment may carry more weight than a thousand form letters.
* Throughout this document, the term “Comment” is used in place of the more technically accurate term “Public Submission” in order to make the recommendations easier to read and understand.
Disclaimer: This document is intended to serve as a guide; it is not intended and should not be considered as legal advice. Please seek counsel from a lawyer if you have legal questions or concerns.
Additional Resources
- https://crfm.stanford.edu/open-fms/
- https://heathermeeker.com/2023/06/08/toward-an-open-weights-definition/
- https://github.com/Open-Weights/Definition
- https://opensource.org/deepdive
- https://opensource.org/blog/ntia-engages-civil-society-on-questions-of-open-foundation-models-for-ai-hears-benefits-of-openness-in-the-public-interest