
OpenAI Launches Safety Fellowship to Fund External AI Safety Research
- By John K. Waters
- 04/16/26
OpenAI is broadening its safety efforts beyond its walls with a new Safety Fellowship that will fund external researchers to study AI risks. The OpenAI Safety Fellowship will run for six months, from September 2026 to February 2027, according to a news announcement, expanding the company’s involvement in alignment and safety work. The effort comes as AI companies face increasing scrutiny over how they manage risks related to rapidly advancing systems.
The program is open to researchers, engineers, and practitioners from outside the company. Participants will receive stipends, access to OpenAI models, and technical assistance to carry out research in areas such as robustness, privacy, agent oversight, and misuse prevention. Fellows are expected to produce outputs such as research papers, benchmarks, or datasets.
OpenAI said the fellowship is meant to “support high-impact research on the safety and alignment of advanced AI systems” and to expand the range of people working on technical safety challenges. The program reflects a broader trend among major AI developers to fund external research through fellowships, residencies, and academic partnerships.
For instance, Anthropic, a rival AI company focused on safety, runs a comparable fellows program that supports independent researchers working on alignment, interpretability, and AI security. The program provides funding, mentorship, and compute resources, with participants typically producing publicly available research.
Google and its DeepMind unit operate a range of student researcher and fellowship programs that place participants on research teams for several months. These programs cover a broad range of AI topics, including safety-related work, though they are not always explicitly branded as alignment-focused.
Microsoft and Meta have likewise expanded funding for external AI research through academic partnerships, grants, and residency-style programs, often aimed at advancing work on responsible AI and system reliability.
Together, these efforts form a growing ecosystem of externally funded research connected to leading AI labs.
OpenAI said the priority areas for its fellowship include “agentic oversight” and “high-severity abuse domains,” reflecting concerns about systems capable of taking multi-step actions with limited human intervention. Recent advances in AI capabilities have enabled systems to perform more complex tasks, including coding, research assistance, and workflow automation. This has shifted some safety concerns from harmful outputs toward the potential for unintended or harmful actions taken by autonomous or semi-autonomous systems.
The growth of fellowship programs comes amid increasing demand for AI safety researchers, a relatively small but expanding field. Companies are offering competitive compensation and access to computing resources to attract talent as they compete to develop advanced models. At the same time, governments and regulators are increasing pressure on AI developers to demonstrate that systems can be deployed safely and reliably.
While external programs may broaden participation in safety work, they do not change internal decision-making processes at AI companies. Researchers participating in fellowships typically do not have direct authority over product releases. Their work is usually advisory, focused on identifying risks and proposing mitigation strategies. Responsibility for deploying AI systems remains with the companies that build and operate them.
OpenAI said the fellowship is part of a broader effort to support research and improve understanding of AI risks, but did not provide details on how findings from the program would be integrated into product decisions.
The first cohort of the OpenAI Safety Fellowship is expected to be selected later this year. For more details, visit the OpenAI site.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He’s been writing about cutting-edge technologies and the culture of Silicon Valley for more than 20 years, and he’s written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].