
CivAI Joins the U.S. AI Safety Institute Consortium

CivAI collaborates with leading AI stakeholders in a consortium under the new U.S. AI Safety Institute established at NIST.

BERKELEY, Calif., Feb. 8, 2024 -- CivAI, a nonprofit dedicated to preparing state and local governments for AI security risks, is proud to announce its founding membership in the U.S. Artificial Intelligence Safety Institute Consortium (AISIC). Established by the Department of Commerce's National Institute of Standards and Technology (NIST), the AISIC brings together Fortune 500 companies, academic teams, nonprofit organizations, state and local governments, and other U.S. government agencies to enable safe, trustworthy AI systems and underpin future standards and policies.

CivAI brings to the consortium its focus on state and local governments and its experience operationalizing AI best practices for them. Through its GenAI Toolkit, CivAI has provided concrete steps for state and local governments that are looking to adopt the recommendations of the NIST AI Risk Management Framework. Additionally, CivAI contributes its experience creating interactive educational content, which has recently explored the effects of AI on cybersecurity and public policy.

"We have tremendous respect for the work NIST has undertaken in setting a foundation for AI safety and security thus far, and we are excited to support them in that endeavor through the AISIC. We will work hard to ensure that state and local governments have the tools and guidance they need to navigate the many impacts of AI."

— Lucas Hansen, Co-Founder of CivAI

"The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do. Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly."

— U.S. Secretary of Commerce Gina Raimondo

CivAI's participation as a founding member of the AISIC underscores its commitment to advancing AI security for state and local governments. By contributing to the AISIC, CivAI aims to further its mission of equipping state and local governments with the knowledge, tools, and support they need to protect themselves and their constituents against AI security risks.

About CivAI

CivAI is a nonprofit that disseminates knowledge about AI capabilities and dangers in a unique way. Instead of research papers and declarative statements, we produce interactive software experiences that provide a deep, intuitive sense of where AI is and where it’s going. We present these demos to targeted audiences and make scalable versions available to the public.

CivAI aims for a world where people deeply grasp AI capabilities and make better decisions as a result. CivAI's work brings a new kind of evidence to the discourse: simple, intuitive, and incontrovertible, because users can interact with it firsthand.
