AI Governance Researcher, European Parliament
Data Scientist (NSIN Fellow), US Department of Defense
BA Philosophy and Art History, University of Pennsylvania
I am an AI governance researcher currently staffed with Dr. Koen Holtman (Dutch representative on CEN-CENELEC JTC21, the committee
responsible for drafting technical standards supporting the EU's AI Act). I have 1 year of AI governance research experience.
I am committed to developing a robust, international policy base that properly safeguards our AI-intertwined future
from malicious actors. Currently, I am interested in contributing to US (NIST) and EU-based governance efforts,
with hope for UK policy collaboration in the near future. My ultimate goal is to leverage these user-centered safety regulations
not just in my own data science work, but also toward building a safety-conscious AI culture across industry.
My research interests lie in GPAI risk management systems in both soft and hard law contexts (e.g., auditing, guidelines, regulation).
Overall, I am targeting GPAI regulation, with particular impact on safety measures for generative AI such as LLMs and foundation models.
I have a bachelor's degree in philosophy from the University of Pennsylvania and am pursuing a second bachelor's in applied mathematics
from the University of Illinois Springfield. I have 2 years of experience in the private sector, specifically developing NLP and LLM AI products, as well as 2 years of technical AI research,
specifically within interpretability.
My multidisciplinary education and experience enable me not only to effectively incorporate industry best practices into regulatory processes, but also to understand
what kinds of technical regulations and policy strategies are feasible for AI systems.
Last updated: March 27, 2024.
Education • Research • Field-Building • Conferences • Grants • Publications
NOTE: If you are a member of the EA, rationalist, or AI safety communities, please reach out! I am always interested in building long-lasting relationships with others who act to improve our world.
• Developing minimally acceptable best practices for AI risk management systems and legal drafts for the EU's AI Act
• Reviewing legal formats and paper content of junior contributors; advising junior contributors on relevant AI safety sources
• Specialized in information hazards and risk levels from unregulated GPAI/LLM models at both the user and societal level
• Handpicked as a top 16% finalist (50 of 300) to participate in an AI Gov. Fellowship (2-month reading group for AI governance)
• Engaged in six 3-hour open forums on AI safety topics in governance, regulation, and ethics
• Topics included alignment theory, time horizon/takeoff speeds, frontier AI regulation, model evaluations, compute monitoring, safety deployment strategies (e.g. Windfall Clause), and societal harms
• Handpicked as one of 2 of 25 applicants to participate in Stanford AI Alignment’s SPAR: Public Policy project
• Analyzed historical AI metrics (parameter, compute, dataset) across 31 countries to verify regional forecasting trends and inform governance strategies
• Performed literature review of auditing policies across 6 industries (hardware, software, robotics, cybersecurity, pharmaceutical, and financial) with goal of establishing precedents for AI-specific auditing frameworks
Stanford: Stanford AI Alignment (SAIA) Research Symposium 2023 | Mar 2023 |
UC Berkeley: Berkeley Good Futures Initiative (GFI) Research Symposium 2023 | Jan 2023 |
Virtual AI Safety Unconference | Aug 2023 |
UC Berkeley CLTC: NIST Second Draft Framework Workshop (invite only) | Aug 2023 |
UC Berkeley CLTC: NIST First Draft Framework Workshop (invite only) | May 2023 |
EAG Bay Area 2023 | Feb 2023 |
AI Alignment Retreat (AIR) 2022 | Dec 2022 |
EAGx Berkeley 2022 | Dec 2022 |
Infohazards and Privacy Risk from LLMs* | JTC21, EU Parliament | Aug 2023 |
Trends in landmark machine learning model training* | Stanford AI Index | Mar 2023 |
Survey review of AI-adjacent audit regulations | UC Berkeley GFI Showcase | Jan 2023 |
Informational risk management systems for LLMs | JTC21, EU Parliament | May 2024 |
Trends in landmark machine learning model training (follow-up) | Stanford AI Index | May 2024 |