I am a Senior Research Scientist at the National Research Council Canada and an Adjunct Professor at the University of Ottawa. My research focuses on projects that leverage AI-enabled technologies to address societal challenges, and I am dedicated to incorporating the principles of responsible AI into digital solutions so that ethical considerations are integrated into technological advancements. Previously, I built and deployed machine learning systems across a range of applications, including biomedical signal, image, speech, and text processing, in both academic and industrial environments. My current work is mainly in natural language processing, and in a few projects I work on multimodal systems that combine images and text. I have served as an area chair at top-tier NLP conferences, including ACL 2023, EACL 2023, EMNLP 2023, and EMNLP 2024, and I am a program committee member of several ACL workshops, including WOAH, TrustNLP, and PrivateNLP. Outside work, I enjoy cooking, gardening, and weight training.
Latest News
November 2024 - I have two papers being presented at EMNLP this week.
The first paper evaluates eight large language models on moral reasoning. This is joint work with Rongchen Guo, Hillary Dawkins, Kathleen C. Fraser, and Svetlana Kiritchenko. It shows how LLMs, despite their built-in safeguards, can easily be misused to justify harmful language, and it also highlights how they might be used to understand the roots of harmful beliefs and design well-informed interventions. Check out the full paper Here.
The second paper is on Gender Resolution in Speaker-Listener Dialogue Roles and is joint work with Hillary Dawkins and Jackie Lo. Check out this paper Here.
November 2024 - Our work on the role of language technologies in immigration settlement has been featured in the Knowledge Mobilization for Settlement Series Here and Here and also in The McGill Daily.
Many thanks to Marco Campana and Raihana Kamal for spreading the word!
October 2024 - I attended the 7th AAAI/ACM Conference on AI, Ethics, and Society in San Jose, California.
I presented our work on Human-Centered AI Applications for Canada’s Immigration Settlement Sector. I also learned about many fascinating projects at the intersection of AI and society, featured in the conference Proceedings.
September 2024 - I was invited to the “Peel-Halton Professional Development and AI Webinar Series for Newcomer-Serving Staff and Partner Organizations” to talk about the “Role and Implications of Generative AI and Large Language Models in Supporting Newcomers.”
More than 150 people attended this webinar, and we had a very interesting Q&A session at the end. Link to the webinar
Abstract: A successful settlement requires access to accurate information at the right time and a reasonable cost. To enhance the efficiency of information delivery, the settlement sector can significantly benefit from AI-enabled tools. However, while AI research has previously focused on screening and selection processes in immigration, the potential applications of AI in the settlement sector remain understudied. In this talk, I will first explore various applications of AI-enabled language technologies that could be developed for the settlement sector and integrated into its existing service structures. Then, given the foundational role of large language models (LLMs) in developing language technologies, I will caution against the ad-hoc use of general-purpose LLMs and present examples of biases, hallucinations, and functional disparities that could negatively impact newcomers. The talk will conclude with recommendations for creating LLM-based tools specifically designed for the settlement sector and ensuring they are empowering, inclusive, and safe.
June 2024 - We organized a tutorial on Risks of General-Purpose LLMs for Settling Newcomers in Canada at the ACM Fairness, Accountability and Transparency (FAccT) 2024 conference. Read the Report
I co-organized this tutorial with Maryam Molamohammadi and Samir Bakhtawar. Our intern, Kosar Hemmati, also put enormous effort into supporting this tutorial.
Abstract: The non-profit settlement sector in Canada supports newcomers in achieving successful integration. This sector faces increasing operational pressures amidst rising immigration targets, which highlights a need for enhanced efficiency and innovation, potentially through reliable AI solutions. The ad-hoc use of general-purpose generative AI, such as ChatGPT, might become a common practice among newcomers and service providers to address this need. However, these tools are not tailored for the settlement domain and can have detrimental implications for immigrants and refugees. We explore the risks that these tools might pose to newcomers, first, to warn against the unguarded use of generative AI, and second, to incentivize further research and development in creating AI literacy programs as well as customized LLMs that are aligned with the preferences of the impacted communities. Crucially, such technologies should be designed to integrate seamlessly into the existing workflow of the settlement sector, ensuring human oversight, trustworthiness, and accountability.
February 2024 - I was an invited participant of Frontiers of Science: Artificial Intelligence held in the gorgeous Chateau Laurier in Ottawa.
The Royal Society of Canada organized this event, a fantastic multidisciplinary meeting focused on early- to mid-career researchers in artificial intelligence. The meeting brought together 20 researchers from Canada and 20 from the UK, with diverse backgrounds in computer science, history, communication, ethics, and policymaking. The flexible structure of the meeting gave us the opportunity to listen to eye-opening talks from pioneering thinkers in the field and to discuss their ideas in the context of our own work in group discussions. For me, the main takeaway was that we will not succeed in achieving much-needed regulations and safeguards around AI unless we break the vicious cycle of fast, profit-oriented science. A safe and reliable AI that benefits all is hard to achieve but worth the fight.
November 2023 - I co-organized a workshop at the Pathway to Prosperity national conference in Montreal.
Title of workshop: Responsible AI in Settlement Services: Challenges, Social Context, and Ethical AI Solutions
Description of workshop: In this workshop, we explored the potential and limitations of AI language technologies in the immigration context. We first surveyed the current landscape of settlement services that can be facilitated, assessed, or audited by language technologies. Then, we assessed the potential for a responsible adoption of these technologies in the Canadian settlement service sector. Examples include translation services, structuring data, detecting patterns at an unparalleled scale, automatic reporting, and auditing fairness in decision-making processes. We concluded by discussing the risks and challenges of deploying these technologies, emphasizing the importance of fostering fair and inclusive technologies.
Workshop chair:
Anna Jahn, Director of Public Policy and Learning, MILA - Quebec Artificial Intelligence Institute
Speakers:
Isar Nejadgholi, Senior Research Scientist, National Research Council Canada
Presentation title: Promises of Language Technologies for Digital Transformation of the Immigration and Settlement Sector
Maryam Molamohammadi, Responsible AI Advisor, MILA - Quebec Artificial Intelligence Institute
Presentation title: Inclusive and Fair Digital Transformation: Responsible Use of Language Technologies in the Immigration Sector
July 2023 - Best Paper Award
Our paper, Aporophobia: An Overlooked Type of Toxic Language Targeting the Poor, won the best paper award at the 7th Workshop on Online Abuse and Harms (WOAH). [Paper]
July 2023 - We were at ACL2023 in Toronto and presented the following papers:
Svetlana Kiritchenko, Georgina Curto, Isar Nejadgholi, and Kathleen C. Fraser. (2023) Aporophobia: An Overlooked Type of Toxic Language Targeting the Poor. In Proceedings of the 7th Workshop on Online Abuse and Harms (WOAH). [Paper]
Kathleen C. Fraser, Svetlana Kiritchenko, Isar Nejadgholi, and Anna Kerkhof. (2023) What Makes a Good Counter-Stereotype? Evaluating Strategies for Automated Responses to Stereotypical Text. In Proceedings of the First Workshop on Social Influence in Conversations (SICon). [Paper]
Isar Nejadgholi, Svetlana Kiritchenko, Kathleen C. Fraser, and Esma Balkir. (2023) Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers. In Proceedings of the 7th Workshop on Online Abuse and Harms (WOAH). [Paper]
Hamideh Ghanadian, Isar Nejadgholi, and Hussein Al Osman. (2023) ChatGPT for Suicide Risk Assessment on Social Media: Quantitative Evaluation of Model Performance, Potentials and Limitations. In Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA at ACL 2023). [Paper]
June 2023 - Best Short Paper Award
Our paper, Diversity is Not a One-Way Street: Pilot Study on Ethical Interventions for Racial Bias in Text-to-Image Systems, won the best short paper award at the 14th International Conference on Computational Creativity (ICCC) in Waterloo. [Paper]