AI Governance

Christopher Denq

AI Governance Researcher, European Parliament
Data Scientist (NSIN Fellow), US Department of Defense
BA Philosophy and Art History, University of Pennsylvania

I am an AI governance researcher currently staffed to Dr. Koen Holtman (Dutch representative on CEN-CENELEC JTC21, the committee responsible for drafting technical standards for the EU's AI Act). I have 1 year of AI governance research experience. I am committed to developing a robust, international policy base that properly safeguards our AI-intertwined future from malicious actors. Currently, I am interested in contributing to US (NIST) and EU governance efforts, with hopes for UK policy collaboration in the near future. My ultimate goal is to leverage these user-centered safety regulations not just in my own data science work, but also toward building a safety-conscious AI culture across industry.

My research interests lie in GPAI risk management systems in both soft- and hard-law contexts (e.g., auditing, guidelines, regulation). Overall, I am targeting GPAI regulation, with particular impact on safety measures for generative AI such as LLMs and foundation models.

I have a bachelor's degree in philosophy from the University of Pennsylvania and am pursuing a second bachelor's degree in applied mathematics from the University of Illinois Springfield. I have 2 years of experience in the private sector, specifically developing NLP and LLM AI products, as well as 2 years of technical AI research experience, specifically in interpretability. My multidisciplinary education and experience enable me not only to effectively incorporate industry best practices into regulatory processes, but also to understand what kinds of technical regulations and policy strategies are feasible for AI systems.

Last updated: March 27, 2024.

Education • Research • Field-Building • Conferences • Grants • Publications

NOTE: If you are a member of the EA, rationalist, or AI safety communities, please reach out! I am always interested in building long-lasting relationships with others who act to improve our world.

Education

University of Pennsylvania

Bachelor of Arts, Philosophy and Art History

Concentration in Aesthetics

University of Illinois Springfield

Bachelor of Arts, Applied Mathematics and Philosophy
Minor in Computer Science

Concentration in Ethics

Research Experience
(1 yr 5 mos)

Senior AI Standards Contributor

European Parliament, AI Standards Lab & AISC8
Staffed to: Dr. Koen Holtman (EU CEN-CENELEC JTC21 Committee)

Mar 2023 - Mar 2024  •  1 yr 1 mo
Brussels, Belgium & Eindhoven, Netherlands  •  Remote

• Developing minimally acceptable best practices for AI risk management systems and legal drafts for the EU's AI Act
• Reviewing junior contributors' legal formats and paper content; advising junior contributors on relevant AI safety sources
• Specializing in information hazards and risk levels of unregulated GPAI/LLM models at both the user and societal levels


Skills: AI Governance (EU) • AI Risk/AI Safety • Policy Strategy • Contract Law • Risk Management Systems • Data Science • Legal Writing

CAISH Policy Fellow

Cambridge University, Cambridge AI Safety Hub
Supervisor: Gabor Fuisz (CambridgeEA)

Nov 2023 - Dec 2023  •  2 mos
Cambridge, England  •  Remote

• Handpicked as a top-16% finalist (50 of 300) for the AI Governance Fellowship, a 2-month reading group on AI governance
• Engaged in six 3-hour open forums on AI safety topics in governance, regulation, and ethics
• Topics included alignment theory, time horizons/takeoff speeds, frontier AI regulation, model evaluations, compute monitoring, safe deployment strategies (e.g., the Windfall Clause), and societal harms


Skills: Python (Matplotlib, statsmodels, NumPy, pandas) • Paper Writing • AI Governance (US)

AI Governance Researcher

Stanford University, Supervised Program for Alignment Research (SPAR 2023)
Supervisor: Robi Rahman (EpochAI, Ex-Stanford Center for Human-Centered AI)

Jan 2023 - Mar 2023  •  3 mos
Stanford, CA  •  Remote

• Handpicked as one of 2 out of 25 applicants for Stanford AI Alignment's SPAR: Public Policy project
• Analyzed historical AI metrics (parameter count, compute, and dataset size) across 31 countries to verify regional forecasting trends and inform governance strategies


Skills: Python (Matplotlib, statsmodels, NumPy, pandas) • Paper Writing • AI Governance (US)

AI Governance Intern

University of California Berkeley, Good Futures Initiative (GFI 2022)
Advisor: Lauren Aris Richardson (RAND Corporation, Ex-SERI MATS)

Dec 2022 - Jan 2023  •  2 mos
Berkeley, CA  •  Remote

• Performed a literature review of auditing policies across 6 industries (hardware, software, robotics, cybersecurity, pharmaceutical, and financial) with the goal of establishing precedents for AI-specific auditing frameworks


Skills: AI Governance (US) • Corporate Policy Research • Paper Writing

Conferences

Research Presentations

Stanford: Stanford AI Alignment (SAIA) Research Symposium 2023 Mar 2023
UC Berkeley: Good Futures Initiative (GFI) Research Symposium 2023 Jan 2023

Conferences, Conventions, Workshops

Virtual AI Safety Unconference Aug 2023
UC Berkeley CLTC: NIST Second Draft Framework Workshop (invite only) Aug 2023
UC Berkeley CLTC: NIST First Draft Framework Workshop (invite only) May 2023
EAG Bay Area 2023 Feb 2023
AI Alignment Retreat (AIR) 2022 Dec 2022
EAGx Berkeley 2022 Dec 2022

Publications

Papers, Articles, and Legal Drafts

Infohazards and Privacy Risk from LLMs* | JTC21, EU Parliament Aug 2023
Trends in landmark machine learning model training* | Stanford AI Index Mar 2023
Survey review of AI-adjacent audit regulations | UC Berkeley GFI Showcase Jan 2023
*Pending review

Pending

Informational risk management systems for LLMs | JTC21, EU Parliament May 2024
Trends in landmark machine learning model training (follow-up) | Stanford AI Index May 2024