I am a Senior Research Scientist at the National Research Council Canada and an Adjunct Professor at the University of Ottawa. My research focuses on projects that leverage AI-enabled technologies to address societal challenges. I am also dedicated to incorporating the principles of responsible AI into digital solutions so that ethical considerations are integrated into technological advancements. Previously, I built and deployed machine learning systems for a range of applications, including biomedical signal, image, speech, and text processing, in both academic and industrial environments. My current work is mainly in natural language processing, and in a few projects I work on multimodal systems that combine images and text. I have served as an area chair at top-tier NLP conferences, including ACL 2023, EMNLP 2023, and EACL 2023, and I am a committee member of several ACL workshops, including WOAH, TrustNLP, and PrivateNLP. Outside work, I enjoy cooking, gardening, and weight training.

Latest News

June 2024 - We organized a tutorial, Risks of General-Purpose LLMs for Settling Newcomers in Canada, at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) 2024. Read the Report

I co-organized this tutorial with Maryam Molamohammadi and Samir Bakhtawar. Our intern, Kosar Hemmati, also put enormous effort into supporting this tutorial.

Abstract: The non-profit settlement sector in Canada supports newcomers in achieving successful integration. This sector faces increasing operational pressures amidst rising immigration targets, which highlights a need for enhanced efficiency and innovation, potentially through reliable AI solutions. The ad-hoc use of general-purpose generative AI, such as ChatGPT, might become a common practice among newcomers and service providers to address this need. However, these tools are not tailored for the settlement domain and can have detrimental implications for immigrants and refugees. We explore the risks that these tools might pose to newcomers in order to, first, warn against the unguarded use of generative AI and, second, incentivize further research and development in creating AI literacy programs as well as customized LLMs that are aligned with the preferences of the impacted communities. Crucially, such technologies should be designed to integrate seamlessly into the existing workflow of the settlement sector, ensuring human oversight, trustworthiness, and accountability.


February 2024 - I was an invited participant at Frontiers of Science: Artificial Intelligence, held at the gorgeous Château Laurier in Ottawa.

The Royal Society of Canada organized this event, a fantastic multidisciplinary meeting focused on early- to mid-career researchers in artificial intelligence. The meeting brought together 20 researchers from Canada and 20 from the UK, with diverse backgrounds in computer science, history, communication, ethics, and policymaking. The flexible structure of the meeting gave us the opportunity to hear eye-opening talks from pioneering thinkers in the field and to discuss these ideas in the context of our own work in group discussions. For me, the main takeaway was that we will not achieve the much-needed regulations and safeguards around AI unless we break the vicious cycle of fast, profit-oriented science. A safe and reliable AI that benefits all is hard to achieve but worth the fight.


November 2023 - I co-organized a workshop at the Pathways to Prosperity national conference in Montreal.

Title of workshop: Responsible AI in Settlement Services: Challenges, Social Context, and Ethical AI Solutions

Description of workshop: In this workshop, we explored the potential and limitations of AI language technologies in the immigration context. We first examined the current landscape of settlement services that can be facilitated, assessed, or audited by language technologies. We then assessed the potential for responsible adoption of these technologies in the Canadian settlement service sector; examples include translation services, structuring data, detecting patterns at an unparalleled scale, automatic reporting, and auditing fairness in decision-making processes. We concluded by discussing the risks and challenges of deploying these technologies, emphasizing the importance of fostering fair and inclusive technologies.

Workshop chair:

Anna Jahn, Director of Public Policy and Learning, MILA - Quebec Artificial Intelligence Institute

Speakers:

Isar Nejadgholi, Senior Research Scientist, National Research Council Canada
Presentation title: Promises of Language Technologies for Digital Transformation of the Immigration and Settlement Sector

Maryam Molamohammadi, Responsible AI Advisor, MILA - Quebec Artificial Intelligence Institute
Presentation title: Inclusive and Fair Digital Transformation: Responsible Use of Language Technologies in the Immigration Sector


July 2023 - Best Paper Award

Our paper, Aporophobia: An Overlooked Type of Toxic Language Targeting the Poor, won the Best Paper Award at the 7th Workshop on Online Abuse and Harms (WOAH). [Paper]


July 2023 - We were at ACL 2023 in Toronto and presented the following papers:

  • Svetlana Kiritchenko, Georgina Curto, Isar Nejadgholi, and Kathleen C. Fraser. (2023). Aporophobia: An Overlooked Type of Toxic Language Targeting the Poor. In Proceedings of the 7th Workshop on Online Abuse and Harms (WOAH). [Paper]

  • Kathleen C. Fraser, Svetlana Kiritchenko, Isar Nejadgholi, and Anna Kerkhof. (2023). What Makes a Good Counter-Stereotype? Evaluating Strategies for Automated Responses to Stereotypical Text. In Proceedings of the First Workshop on Social Influence in Conversations (SICon). [Paper]

  • Isar Nejadgholi, Svetlana Kiritchenko, Kathleen C. Fraser, and Esma Balkir. (2023). Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers. In Proceedings of the 7th Workshop on Online Abuse and Harms (WOAH). [Paper]

  • Hamideh Ghanadian, Isar Nejadgholi, and Hussein Al Osman. (2023). ChatGPT for Suicide Risk Assessment on Social Media: Quantitative Evaluation of Model Performance, Potentials and Limitations. In Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA at ACL 2023). [Paper]


June 2023 - Best Short Paper Award

Our paper, Diversity is Not a One-Way Street: Pilot Study on Ethical Interventions for Racial Bias in Text-to-Image Systems, won the Best Short Paper Award at the 14th International Conference on Computational Creativity (ICCC) in Waterloo. [Paper]