Center for AI Safety (CAIS) Review

The Center for AI Safety (CAIS) is a non-profit organization that conducts research, builds the field of AI safety, and advocates for safety standards to reduce societal-scale risks associated with AI.

  • Founded in 2022 by Dan Hendrycks and Oliver Zhang, CAIS is a San Francisco-based non-profit.
  • CAIS notably organized the May 2023 statement on AI risk of extinction, signed by hundreds of AI leaders.
  • The organization offers free access to its compute cluster for AI safety researchers.
  • In March 2026, the CAIS Action Fund endorsed the 'Chip Security Act'.

Center for AI Safety (CAIS) at a Glance

| Attribute | Value |
|---|---|
| Best For | AI safety research, field-building, and advocacy |
| Pricing | Freemium |
| Key Features | Technical and conceptual safety research, free compute cluster, fellowships |
| Integrations | See website |
| Alternatives | See comparison section |


Embed "Featured on Stork" Badge

HTML:

```html
<a href="https://www.stork.ai/en/center-for-ai-safety-cais" target="_blank" rel="noopener noreferrer"><img src="https://www.stork.ai/api/badge/center-for-ai-safety-cais?style=dark" alt="Center for AI Safety (CAIS) - Featured on Stork.ai" height="36" /></a>
```

Markdown:

```markdown
[![Center for AI Safety (CAIS) - Featured on Stork.ai](https://www.stork.ai/api/badge/center-for-ai-safety-cais?style=dark)](https://www.stork.ai/en/center-for-ai-safety-cais)
```


What is Center for AI Safety (CAIS)?

The Center for AI Safety (CAIS) is a non-profit organization that enables researchers, engineers, policymakers, and the public to conduct research, build the field of AI safety, and advocate for safety standards that reduce societal-scale risks from AI. It was founded in 2022 by Dan Hendrycks and Oliver Zhang.

CAIS is a San Francisco-based non-profit organization dedicated to mitigating high-consequence, societal-scale risks posed by artificial intelligence, which it categorizes alongside global priorities such as pandemics and nuclear war. The organization operates through three primary pillars: advancing safety research, field-building, and advocacy.

Its research initiatives encompass both technical and conceptual approaches. Technical research focuses on developing foundational benchmarks and methodologies to identify and address AI safety issues, including the removal of dangerous behaviors, the study of deceptive AI, the training of AIs for moral conduct, and the improvement of system reliability and security. Conceptual research examines AI safety from multidisciplinary perspectives, integrating insights from safety engineering, complex systems theory, international relations, and philosophy to construct frameworks for understanding challenges and societal risks.

For field-building, CAIS aims to expand the AI safety research ecosystem by providing funding, research infrastructure, and educational resources. This includes offering free access to its compute cluster for qualified researchers, administering multidisciplinary fellowships such as the AI and Society Fellowship and the Philosophy Fellowship, and developing educational content like the "Intro to ML Safety" course.

In its advocacy role, CAIS advises industry leaders, policymakers, and governmental bodies to elevate public awareness of AI risks, inform policy development, and establish guidelines for the safe and responsible deployment of AI. A significant advocacy effort was the May 2023 statement on AI risk of extinction, which garnered signatures from over 1,000 AI professors, industry leaders including Sam Altman and Elon Musk, and public figures.


Quick Facts

| Attribute | Value |
|---|---|
| Developer | Center for AI Safety (CAIS) |
| Business Model | Non-profit |
| Pricing | Freemium |
| Platforms | Web |
| API Available | No |
| Founded | 2022 |
| HQ | San Francisco, USA |


Key Features of Center for AI Safety (CAIS)

The Center for AI Safety (CAIS) provides a range of features designed to advance AI safety research, field development, and policy advocacy.

  • Conducts technical research to develop benchmarks and methods for identifying and addressing AI safety issues.
  • Performs conceptual research integrating multidisciplinary perspectives on AI safety.
  • Builds the AI safety field by providing funding and research infrastructure.
  • Offers free access to its compute cluster for AI safety researchers.
  • Runs multidisciplinary fellowships, including the AI and Society Fellowship and the Philosophy Fellowship.
  • Develops educational resources such as the "Intro to ML Safety" course.
  • Advocates for AI safety standards and advises policymakers and industry leaders.
  • Treats reducing societal-scale risks from AI, including extinction risk, as its primary mission.
  • Does not offer a public API for its services.


Who Should Use Center for AI Safety (CAIS)?

Center for AI Safety (CAIS) serves various stakeholders interested in the responsible development and deployment of artificial intelligence, particularly concerning high-consequence risks.

  • Engineers and researchers dedicated to the safe development of advanced AI and the mitigation of AI risks.
  • Academics and scholars seeking to engage in multidisciplinary research on AI safety, ethics, and governance.
  • Policymakers and governmental bodies requiring expert advice and frameworks for AI regulation and standards.
  • Organizations and individuals interested in understanding and contributing to the reduction of societal-scale risks from advanced AI.
  • Students and professionals looking for educational resources and fellowships in the field of machine learning safety.


Center for AI Safety (CAIS) Pricing & Plans

The Center for AI Safety (CAIS) operates on a freemium model, consistent with its non-profit status and mission to build the AI safety field. Core resources and educational materials are provided without direct cost to users.

  • Freemium: Access to CAIS research publications, educational courses like "Intro to ML Safety," and the compute cluster for qualified researchers is provided free of charge. The organization is funded through donations and grants to support its operational costs and initiatives.


Center for AI Safety (CAIS) vs Competitors

The Center for AI Safety (CAIS) distinguishes itself within the AI safety ecosystem through its explicit focus on societal-scale and extreme risks, its blend of technical and conceptual research, and its direct advocacy efforts.

1. Responsible AI Institute

Focuses on operationalizing responsible AI through benchmarking, conformity assessments, and certification for AI systems.

While CAIS emphasizes foundational research and advocacy for AI safety standards, Responsible AI Institute provides practical tools, frameworks, and verification programs for organizations to implement and demonstrate responsible AI practices, including compliance with global standards.

2. Partnership on AI

Convenes a diverse, global community of academic, civil society, industry, and media organizations to create solutions for responsible AI development and deployment.

Similar to CAIS in its non-profit, field-building, and advocacy aspects, Partnership on AI distinguishes itself by its broad multi-stakeholder collaborative model, bringing together a wide array of entities to develop actionable guidance and inform public policy, whereas CAIS focuses more on technical research and specific risk mitigation.

3. AI Risk Mitigation Fund

A non-profit that provides grants to support technical research, policy development, and training programs aimed at reducing catastrophic risks from advanced AI.

Unlike CAIS, which conducts its own research and field-building, the AI Risk Mitigation Fund primarily functions as a grant-making organization, financially supporting other entities and researchers within the AI risk mitigation ecosystem.

4. The AI Safety Foundation

Increases awareness and scientific understanding of catastrophic AI risks by advancing solutions-oriented education and research.

The AI Safety Foundation shares CAIS's core mission of increasing awareness and conducting research on catastrophic AI risks. AISF also places a strong emphasis on solutions-oriented education and public understanding, including through initiatives like journalism awards, aligning with CAIS's advocacy efforts.

5. Centre for the Governance of AI (GovAI)

Dedicated to understanding and managing the risks and opportunities posed by advanced AI, with a specific focus on informing policymakers and shaping AI governance.

Both CAIS and GovAI conduct research and advocate for AI safety and standards. However, GovAI has a more explicit and dedicated focus on the policy and governance aspects of AI, aiming to directly inform and influence decision-makers in government and industry, while CAIS has a broader scope encompassing technical research and general field-building alongside advocacy.

Frequently Asked Questions

What is Center for AI Safety (CAIS)?

The Center for AI Safety (CAIS) is a non-profit organization that enables researchers, engineers, policymakers, and the public to conduct research, build the field of AI safety, and advocate for safety standards that reduce societal-scale risks from AI. It was founded in 2022 by Dan Hendrycks and Oliver Zhang.

Is Center for AI Safety (CAIS) free?

Yes, the Center for AI Safety (CAIS) operates on a freemium model. As a non-profit, it provides access to its research, educational materials, and compute cluster for qualified researchers free of charge. The organization is supported by donations and grants.

What are the main features of Center for AI Safety (CAIS)?

The main features of the Center for AI Safety (CAIS) include conducting technical and conceptual AI safety research, building the AI safety field through funding and infrastructure, advocating for AI safety standards and policy, offering free access to its compute cluster for researchers, running multidisciplinary fellowships, and developing educational resources like the "Intro to ML Safety" course. Its primary mission is to reduce societal-scale risks associated with AI.

Who should use Center for AI Safety (CAIS)?

Center for AI Safety (CAIS) is intended for engineers and researchers working on AI safety and risk mitigation, academics and scholars engaged in multidisciplinary AI safety research, policymakers seeking guidance on AI governance, organizations focused on mitigating societal-scale AI risks, and students or professionals interested in AI safety education and fellowships.

How does Center for AI Safety (CAIS) compare to alternatives?

Center for AI Safety (CAIS) differentiates itself from alternatives by its explicit focus on extreme, societal-scale AI risks, its blend of technical and conceptual research, and its direct advocacy. For instance, unlike the Responsible AI Institute which focuses on operationalizing responsible AI through certification, CAIS prioritizes foundational research and advocacy. Compared to the AI Risk Mitigation Fund, which is a grant-making body, CAIS conducts its own research and field-building. While sharing goals with The AI Safety Foundation and Centre for the Governance of AI (GovAI), CAIS maintains a broader scope encompassing both technical research and general field-building alongside its policy-focused advocacy.