How to Comment on NTIA AI Open Model Weights RFC

The National Telecommunications and Information Administration (NTIA) is asking for public comments on the implications of “open-weight” AI models. These models have the potential to democratize AI tools and accelerate innovation, but they also raise concerns about safety and misuse.

The comment period closed on March 27, 2024.

Why should you comment?

The NTIA, an agency advising the President on communication technology, will use the public comments to inform its recommendations to the President. These recommendations will then be considered by policymakers as they develop regulations or guidelines for open-weight AI models. Your comments can help ensure these policies strike a balance between promoting the benefits of open AI, such as innovation and accessibility, and mitigating potential risks like misuse and safety concerns.

  • Balanced policy decisions: Sharing your perspective can inform regulations that promote both the benefits and responsible use of open-weight AI.
  • A focus on societal good: Emphasize the potential for open models to benefit individuals, communities, and small businesses.
  • Prioritizing safety and security: Address concerns about potential misuse and advocate for safeguards like responsible research practices and clear licensing terms.

Talking points for advocating open weights and open-source AI:

  • Increased accessibility and innovation: Open models allow smaller players and researchers to access powerful tools, fostering innovation and competition.
  • Transparency and accountability: Openness allows for scrutiny and improvement of AI models, leading to more trustworthy and reliable systems.
  • Faster advancement of AI research: Open sharing of knowledge and resources can accelerate research and development in the field of AI.
  • Economic growth and job creation: Open AI tools can empower businesses of all sizes, leading to economic opportunities and job creation.

Steps to submit a comment:

1: Visit the NTIA RFC page on Regulations.gov

https://www.regulations.gov/document/NTIA-2023-0009-0001

2: Click “Comment Now” to submit your comment.


3: Provide your input on the questions NTIA is seeking comment on. You can find the questions below.


Questions the NTIA is Seeking Comment On

  1. Is there evidence or historical examples suggesting that weights of models similar to currently-closed AI systems will, or will not, likely become widely available? If so, what are they?
  2. Is it possible to generally estimate the timeframe between the deployment of a closed model and the deployment of an open foundation model of similar performance on relevant tasks? How do you expect that timeframe to change? Based on what variables? How do you expect those variables to change in the coming months and years?
  3. Should “wide availability” of model weights be defined by level of distribution? If so, at what level of distribution (e.g., 10,000 entities; 1 million entities; open publication; etc.) should model weights be presumed to be “widely available”? If not, how should NTIA define “wide availability?”
  4. Do certain forms of access to an open foundation model (web applications, Application Programming Interfaces (API), local hosting, edge deployment) provide more or less benefit or more or less risk than others? Are these risks dependent on other details of the system or application enabling access?
    1. Are there promising prospective forms or modes of access that could strike a more favorable benefit-risk balance? If so, what are they?

  1. What, if any, are the risks associated with widely available model weights? How do these risks change, if at all, when the training data or source code associated with fine tuning, pretraining, or deploying a model is simultaneously widely available?
  2. Could open foundation models reduce equity in rights and safety-impacting AI systems (e.g. healthcare, education, criminal justice, housing, online platforms, etc.)?
  3. What, if any, risks related to privacy could result from the wide availability of model weights?
  4. Are there novel ways that state or non-state actors could use widely available model weights to create or exacerbate security risks, including but not limited to threats to infrastructure, public health, human and civil rights, democracy, defense, and the economy?
    1. How do these risks compare to those associated with closed models?
    2. How do these risks compare to those associated with other types of software systems and information resources?
  5. What, if any, risks could result from differences in access to widely available models across different jurisdictions?
  6. Which are the most severe, and which the most likely, risks described in answering the questions above? How do these sets of risks relate to each other, if at all?

  1. What benefits do open model weights offer for competition and innovation, both in the AI marketplace and in other areas of the economy? In what ways can open dual-use foundation models enable or enhance scientific research, as well as education/training in computer science and related fields?
  2. How can making model weights widely available improve the safety, security, and trustworthiness of AI and the robustness of public preparedness against potential AI risks?
  3. Could open model weights, and in particular the ability to retrain models, help advance equity in rights and safety-impacting AI systems (e.g. healthcare, education, criminal justice, housing, online platforms etc.)?
  4. How can the diffusion of AI models with widely available weights support the United States’ national security interests? How could it interfere with, or further the enjoyment and protection of human rights within and outside of the United States?
  5. How do these benefits change, if at all, when the training data or the associated source code of the model is simultaneously widely available?

  1. What model evaluations, if any, can help determine the risks or benefits associated with making weights of a foundation model widely available?
  2. Are there effective ways to create safeguards around foundation models, either to ensure that model weights do not become available, or to protect system integrity or human well-being (including privacy) and reduce security risks in those cases where weights are widely available?
  3. What are the prospects for developing effective safeguards in the future?
  4. Are there ways to regain control over and/or restrict access to and/or limit use of weights of an open foundation model that, either inadvertently or purposely, have already become widely available? What are the approximate costs of these methods today? How reliable are they?
  5. What, if any, secure storage techniques or practices could be considered necessary to prevent unintentional distribution of model weights?
  6. Which components of a foundation model need to be available, and to whom, in order to analyze, evaluate, certify, or red-team the model? To the extent possible, please identify specific evaluations or types of evaluations and the component(s) that need to be available for each.
  7. Are there means by which to test or verify model weights? What methodology or methodologies exist to audit model weights and/or foundation models?

  1. In which ways is open-source software policy analogous (or not) to the availability of model weights? Are there lessons we can learn from the history and ecosystem of open-source software, open data, and other “open” initiatives for open foundation models, particularly the availability of model weights?
  2. How, if at all, does the wide availability of model weights change the competition dynamics in the broader economy, specifically looking at industries such as but not limited to healthcare, marketing, and education?
  3. How, if at all, do intellectual property-related issues—such as the license terms under which foundation model weights are made publicly available—influence competition, benefits, and risks? Which licenses are most prominent in the context of making model weights widely available? What are the tradeoffs associated with each of these licenses?
  4. Are there concerns about potential barriers to interoperability stemming from different incompatible “open” licenses, e.g., licenses with conflicting requirements, applied to AI components? Would standardizing license terms specifically for foundation model weights be beneficial? Are there particular examples in existence that could be useful?

  1. What security, legal, or other measures can reasonably be employed to reliably prevent wide availability of access to a foundation model’s weights, or limit their end use?
  2. How might the wide availability of open foundation model weights facilitate, or else frustrate, government action in AI regulation?
  3. When, if ever, should entities deploying AI disclose to users or the general public that they are using open foundation models either with or without widely available weights?
  4. What role, if any, should the U.S. government take in setting metrics for risk, creating standards for best practices, and/or supporting or restricting the availability of foundation model weights?
    1. Should other government or non-government bodies, currently existing or not, support the government in this role? Should this vary by sector?
  5. What should the role of model hosting services (e.g. HuggingFace, GitHub, etc.) be in making dual-use models with open weights more or less available? Should hosting services host models that do not meet certain safety standards? By whom should those standards be prescribed?
  6. Should there be different standards for government as opposed to private industry when it comes to sharing model weights of open foundation models or contracting with companies who use them?
  7. What should the U.S. prioritize in working with other countries on this topic, and which countries are most important to work with?
  8. What insights from other countries or other societal systems are most useful to consider?
  9. Are there effective mechanisms or procedures that can be used by the government or companies to make decisions regarding an appropriate degree of availability of model weights in a dual-use foundation model or the dual-use foundation model ecosystem? Are there methods for making effective decisions about open AI deployment that balance both benefits and risks? This may include responsible capability scaling policies, preparedness frameworks, et cetera.
  10. Are there particular individuals/entities who should or should not have access to open-weight foundation models? If so, why and under what circumstances?

  1. How should these potentially competing interests of innovation, competition, and security be addressed or balanced?
  2. Noting that E.O. 14110 grants the Secretary of Commerce the capacity to adapt the threshold, is the amount of computational resources required to build a model, such as the cutoff of 10²⁶ integer or floating-point operations used in the Executive Order, a useful metric for thresholds to mitigate risk in the long term, particularly for risks associated with wide availability of model weights? (For a sense of scale, see the illustrative compute estimate after this list.)
  3. Are there more robust risk metrics for foundation models with widely available weights that will stand the test of time? Should we look at models that fall outside of the dual-use foundation model definition?
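
For readers weighing the compute threshold referenced above: a widely used rule of thumb estimates total training compute as roughly 6 × (parameter count) × (training tokens). The short Python sketch below applies that heuristic to a few hypothetical model sizes to show what scale of training run approaches the Executive Order's 10²⁶-operation cutoff. The model sizes, token counts, and the 6ND approximation itself are illustrative assumptions, not part of the NTIA's request.

# Illustration only (not from the RFC): rough training-compute estimate using the
# common "FLOPs ~ 6 * parameters * training tokens" rule of thumb, compared against
# the 10^26-operation threshold cited in E.O. 14110. All model sizes and token
# counts below are hypothetical examples.

EO_THRESHOLD = 1e26  # integer/floating-point operations cited in the Executive Order

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs with the 6*N*D heuristic."""
    return 6 * params * tokens

examples = [
    ("7B parameters, 2T tokens", 7e9, 2e12),
    ("70B parameters, 15T tokens", 70e9, 15e12),
    ("1T parameters, 20T tokens", 1e12, 20e12),
]

for label, n, d in examples:
    flops = estimated_training_flops(n, d)
    status = "above" if flops > EO_THRESHOLD else "below"
    print(f"{label}: ~{flops:.1e} FLOPs ({status} the 1e26 threshold)")

By this rough estimate, a hypothetical 1-trillion-parameter model trained on 20 trillion tokens would land near 1.2 × 10²⁶ operations, just above the threshold, while models at the scale of today's commonly released open-weight systems fall well below it.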

Good to Know Before You Comment

Instructions from Federal Register

Through this Request for Comment, we hope to gather information on the following questions. These are not exhaustive, and commenters are invited to provide input on relevant questions not asked below. Commenters are not required to respond to all questions. When responding to one or more of the questions below, please note in the text of your response the number of the question to which you are responding.

Commenters should include a page number on each page of their submissions.

Commenters are welcome to provide specific actionable proposals, rationales, and relevant facts.

Please do not include in your comments information of a confidential nature, such as sensitive personal information or proprietary information. All comments received are a part of the public record and will generally be posted to Regulations.gov without change. All personal identifying information (e.g., name, address) voluntarily submitted by the commenter may be publicly accessible.

Overview

A comment can express simple support or dissent for a regulatory action. However, a constructive, information-rich comment that clearly communicates and supports its claims is more likely to have an impact on regulatory decision making.

These tips are meant to help the public submit comments that have an impact and help agency policy makers improve federal regulations.

Summary

  • Read and understand the regulatory document you are commenting on
  • Feel free to reach out to the agency with questions
  • Be concise but support your claims
  • Base your justification on sound reasoning, scientific evidence, and/or how you will be impacted
  • Address trade-offs and opposing views in your comment
  • There is no minimum or maximum length for an effective comment
  • The comment process is not a vote – one well supported comment is often more influential than a thousand form letters

  1. Comment periods close at 11:59 p.m. Eastern Time on the date comments are due, so begin work well before the deadline.

  2. Attempt to fully understand each issue; if you have questions or do not understand a part of the regulatory document, you may ask for help from the agency contact listed in the document.

    Note: Although the agency contact can answer your questions about the document’s meaning, official comments must be submitted through the comment form.

  3. Clearly identify the issues within the regulatory action on which you are commenting. If you are commenting on a particular word, phrase, or sentence, provide the page number, column, and paragraph citation from the Federal Register document.

  4. If a rule raises many issues, do not feel obligated to comment on every one – select those issues that concern and affect you the most and/or you understand the best.

  5. Agencies often ask specific questions or raise issues in rulemaking proposals on subjects where they are actively looking for more information. While the agency will still accept comments on any part of the proposed regulation, please keep these questions and issues in mind while formulating your comment.

  6. Although agencies receive and appreciate all comments, constructive comments (either positive or negative) are the most likely to have an influence.

  7. If you disagree with a proposed action, suggest an alternative (including not regulating at all) and include an explanation and/or analysis of how the alternative might meet the same objective or be more effective.

  8. The comment process is not a vote. The government is attempting to formulate the best policy, so when crafting a comment it is important that you adequately explain the reasoning behind your position.

  9. Identify credentials and experience that may distinguish your comments from others. If you are commenting in an area in which you have relevant personal or professional experience (e.g., scientist, attorney, fisherman, business owner), say so.

  10. Agency reviewers look for sound science and reasoning in the comments they receive. When possible, support your comment with substantive data, facts, and/or expert opinions. You may also provide personal experience in your comment, as may be appropriate. By supporting your arguments well you are more likely to influence the agency decision making.

  11. Consider including examples of how the proposed rule would impact you negatively or positively.

  12. Comments on the economic effects of rules that include quantitative and qualitative data are especially helpful.

  13. Include the pros and cons and trade-offs of your position and explain them. Your position could consider other points of view, and respond to them with facts and sound reasoning.

  14. If you are uploading more than one attachment to the comment web form, it is recommended that you use the following file titles: Attachment1_, Attachment2_, Attachment3_. This standardized file-naming convention will help agency reviewers distinguish your submitted attachments and aid in the comment review process.

  15. Keep a copy of your comment in a separate file – this practice helps ensure that you will not lose your comment if you have a problem submitting it using the Regulations.gov web form.

Posted Comments

After submission, your comment will be processed by the agency and posted to Regulations.gov. At times, an agency may choose not to post a submitted comment. Reasons for not posting the comment can include:

  • The comment is part of a mass submission campaign or is a duplicate.
  • The comment is incomplete.
  • The comment is not related to the regulation.
  • The comment has been identified as spam.
  • The comment contains Personally Identifiable Information (PII) data.
  • The comment contains profanity or other inappropriate language.
  • The submitter requested the comment not be posted.

Form Letters

Organizations often encourage their members to submit form letters designed to address issues common to their membership. Organizations including industry associations, labor unions, and conservation groups sometimes use form letters to voice their opposition to or support for a proposed rulemaking. Many in the public mistakenly believe that their submitted form letter constitutes a “vote” regarding the issues concerning them. Although public support or opposition may help guide important public policies, agencies make determinations for a proposed action based on sound reasoning and scientific evidence rather than a majority of votes. A single, well-supported comment may carry more weight than a thousand form letters.

* Throughout this document, the term “Comment” is used in place of the more technically accurate term “Public Submission” in order to make the recommendations easier to read and understand.

Disclaimer: This document is intended to serve as a guide; it is not intended and should not be considered as legal advice. Please seek counsel from a lawyer if you have legal questions or concerns.

