I am a Senior Research Scientist at the National Research Council Canada and an Adjunct Professor at the University of Ottawa. My research focuses on applying machine learning models to create positive change in society and on ensuring that AI systems are designed and used responsibly. Previously, I built and deployed machine learning systems across a range of applications, including biomedical signal, image, speech, and text processing, in both academic and industrial settings. My current work is mainly on natural language processing, and in a few projects I work on multimodal systems that combine images and text. I serve as a reviewer for ACL conferences and was an area chair at ACL2023, EMNLP2023, and EACL2023. I am also a committee member of several ACL workshops, including WOAH, TrustNLP, and PrivateNLP. Outside work, I practice Persian choir singing with the Bahar Choir Group.

Latest News

February 2024 - I was an invited participant at Frontiers of Science: Artificial Intelligence, held at the gorgeous Chateau Laurier in Ottawa.

The Royal Society of Canada organized this event, a fantastic multidisciplinary meeting focused on early- to mid-career researchers in artificial intelligence. It brought together 20 researchers from Canada and 20 from the UK, with diverse backgrounds in computer science, history, communication, ethics, and policy making. The flexible structure of the meeting gave us the opportunity to hear eye-opening talks from pioneering thinkers in the field and to discuss these ideas in the context of our own work in group sessions. For me, the main takeaway was that we will not achieve the much-needed regulations and safeguards around AI unless we break the vicious cycle of fast, profit-oriented science. Safe and reliable AI that benefits everyone is hard to achieve but worth the fight.


November 2023 - I co-organized a workshop at the Pathways to Prosperity national conference in Montreal.

Title of workshop: Responsible AI in Settlement Services: Challenges, Social Context, and Ethical AI Solutions

Description of workshop: In this workshop, we explore the potential and limitations of AI language technologies in the immigration context. We first survey the current landscape of settlement services that can be facilitated, assessed, or audited by language technologies. We then assess the potential for responsible adoption of these technologies in the Canadian settlement service sector; examples include translation services, structuring data, detecting patterns at an unparalleled scale, automatic reporting, and auditing fairness in decision-making processes. We conclude by discussing the risks and challenges of deploying these technologies, emphasizing the importance of fostering fair and inclusive technologies.

Workshop chair:

Anna Jahn, Director of Public Policy and Learning, MILA - Quebec Artificial Intelligence Institute

Speakers:

Isar Nejadgholi, Senior Research Scientist, National Research Council Canada
Presentation title: Promises of Language Technologies for Digital Transformation of the Immigration and Settlement Sector

Maryam Mollamohammadi, Responsible AI Advisor, MILA - Quebec Artificial Intelligence Institute
Presentation title: Inclusive and Fair Digital Transformation: Responsible Use of Language Technologies in the Immigration Sector


July 2023 - We were at ACL2023 in Toronto and presented the following papers:

  • Svetlana Kiritchenko, Georgina Curto, Isar Nejadgholi, and Kathleen C. Fraser (2023). Aporophobia: An Overlooked Type of Toxic Language Targeting the Poor. In Proceedings of the 7th Workshop on Online Abuse and Harms (WOAH). [Paper]

  • Kathleen C. Fraser, Svetlana Kiritchenko, Isar Nejadgholi, and Anna Kerkhof (2023). What Makes a Good Counter-Stereotype? Evaluating Strategies for Automated Responses to Stereotypical Text. In Proceedings of the First Workshop on Social Influence in Conversations (SICon). [Paper]

  • Isar Nejadgholi, Svetlana Kiritchenko, Kathleen C. Fraser, and Esma Balkir (2023). Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers. In Proceedings of the 7th Workshop on Online Abuse and Harms (WOAH). [Paper]

  • Hamideh Ghanadian, Isar Nejadgholi, and Hussein Al Osman (2023). ChatGPT for Suicide Risk Assessment on Social Media: Quantitative Evaluation of Model Performance, Potentials and Limitations. In Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA). [Paper]


June 2023 - Best Short Paper Award

Our paper, Diversity is Not a One-Way Street: Pilot Study on Ethical Interventions for Racial Bias in Text-to-Image Systems, won the Best Short Paper Award at the 14th International Conference on Computational Creativity (ICCC) in Waterloo. [Paper]