Faeze Brahman
Research Scientist, Ai2

Allen Institute for AI
Seattle, WA
I am currently a research scientist at the Allen Institute for AI (Ai2). Until recently, I was a post-doctoral researcher at the Allen Institute for AI and the University of Washington, working with Yejin Choi. I did my Ph.D. (2022) in Computer Science at the University of California, Santa Cruz, working with Snigdha Chaturvedi. I hold a master's degree (2018) in Computer Science, as well as a master's (2014) and a bachelor's (2012) degree in Electrical Engineering. I have a broad set of research interests:
- Understanding language models' capabilities and limitations in the wild, under unseen or changing situations, and developing efficient algorithms that address these limitations beyond just scaling. This also includes designing better collaborative strategies and HCI interfaces that combine the best of human and AI abilities.
- Building human-centered AI systems that are more reliable and safer to use in various contexts through better alignment techniques.
- Developing robust and meaningful evaluation frameworks to investigate emergent behaviors of LLMs that are challenging to measure.
Please feel free to get in touch if you want to chat, collaborate or brainstorm!
Previously, I interned at Microsoft Research, working on controllable grounded text generation, and at Ai2, working on unsupervised rationale generation for non-monotonic reasoning.
news
Apr, 2025 | 🎤 Keynote talk at the 2025 Singapore Symposium on NLP (SSNLP). |
Feb, 2025 | 🎤 Invited talk at UCLA NLP Seminar Series. |
Jan, 2025 | Our papers on Reliable LLM-based evaluation with human agreement guarantee and Benchmarking LLMs using real-world user queries have been accepted to appear at ICLR 2025. |
Jan, 2025 | Our work studying the Utility-Truthfulness trade-off in LLM agents has been accepted to appear at NAACL 2025. |
Dec, 2024 | I am attending NeurIPS’24 in 🍁Vancouver to present WildTeaming (@ main), CoCoNoT (@ D&B Track), HAICOSYSTEM and AI-LieDar (@ Safety & Trustworthy Agents workshop). Come say 👋🏼! |
Nov, 2024 | We released Tülu 3: a family of post-trained models with every step in the pipeline open. |
Oct, 2024 | New papers on multi-turn safety evaluation (HAICOSYSTEM) and preference annotation (HybridPref)! |
Oct, 2024 | Selected as a Rising Star to attend the GenAI Workshop at UMass Amherst! |
Sep, 2024 | Serving as Area Chair for COLING 2025. |
Sep, 2024 | WildTeaming and CoCoNoT are now accepted at NeurIPS 2024! |
Aug, 2024 | Super excited about our recent work on Cascaded Selective Evaluation! |
Jul, 2024 | New paper on Contextual Noncompliance 🥥! |
Jul, 2024 | New papers on WildTeaming at Scale and WildBench 🦁! |
May, 2024 | I will be attending ICLR’24 in 🇦🇹 Vienna to present 3 papers! Check them out here |
May, 2024 | Invited talk at Bocconi University on Creativity and Constrained Problem Solving! slides |
Apr, 2024 | MacGyver, led by my intern Yufei Tian, was accepted at NAACL 2024. |
Apr, 2024 | Invited talk at UBC NLP Group on Creativity and Constrained Problem Solving! slides |
Oct, 2023 | I’m honored to serve as a Senior Area Chair for NAACL 2024! |
Oct, 2023 | Our workshop on Narrative Understanding has been accepted to EMNLP 2024! |
May, 2023 | REV, led by our intern Hanjie Chen, has been accepted at ACL 2023! |
Mar, 2023 | Talk at UMass NLP seminars! |
Feb, 2023 | Guest lectures at the University of Washington (CSE 599)! |
selected publications
- To appear at ICLR, 2025
- Proceedings of NAACL, 2024
- International Conference on Learning Representations, 2024
- International Conference on Learning Representations, 2024
- Proceedings of EMNLP, 2023
- International Conference on Learning Representations, 2023
- Proceedings of EMNLP, 2022