PALM Lab at USF 🌴
Data-centric AI, AI Safety, Multimodal Generative AI, Social Media AI
University of South Florida
The Pioneering Advancements in Learning Methods (PALM 🌴) Lab is a team of researchers at the University of South Florida who work on holistically improving current and next-generation Artificial Intelligence models (e.g., Large Language Models) across multiple facets of performance, such as utility, fairness, robustness, safety, and security. The team is especially interested in developing data-centric learning approaches that can benefit a large class of AI models in domains such as Natural Language Processing and Computer Vision. Beyond core AI/ML problems, the team also works to translate these ideas to impact real-world models deployed in production systems (such as social media AIs), which are typically closed-source. More recently, we have also been working on problems related to Multimodal Generative AI and Cooperative AI systems.
Join us! The PALM Lab is always interested in recruiting talented and self-motivated PhD students who are interested in working on cutting-edge AI problems. If interested, please read our recent work and reach out to Prof. Anshuman Chhabra (PI) via email with your specific interests to inquire about the opportunities available within the lab.
You can find our current and past group members here. More details on our recent research can be found here. We are located in sunny and beautiful Tampa Bay, FL.
News:
1/22/2025: Our paper "Watching the AI Watchdogs: A Fairness and Robustness Analysis of AI Safety Moderation Classifiers" was accepted at the NAACL 2025 Main Conference [pdf][code]
1/22/2025: Our paper "Assessing LLMs for Zero-shot Abstractive Summarization Through the Lens of Relevance Paraphrasing" was accepted at the NAACL 2025 Conference (Findings) [pdf][code]
9/24/2024: Our submission (PQN) to the Prosocial Ranking Challenge organized by UC Berkeley (Berkeley Center for Human-Compatible AI) was selected as one of the top three ranker finalists in the competition! [blog post]
8/12/2024: Our paper "Incentivizing News Consumption on Social Media Platforms Using Large Language Models and Realistic Bot Accounts" was accepted for publication in PNAS Nexus [pdf]
3/13/2024: Our paper "Revisiting Zero-Shot Abstractive Summarization in the Era of Large Language Models from the Perspective of Position Bias" was accepted at the NAACL 2024 Main Conference as an oral talk [pdf] [code]
1/16/2024: Our paper "What Data Benefits My Classifier? Enhancing Model Performance and Interpretability Through Influence-Based Data Selection" was accepted as an oral talk (top 1.2% of papers) at ICLR 2024 [pdf] [code]
12/1/2023: Invited to attend a research convening on LLMs and social media interventions at Google NYC organized by Google/Jigsaw and Prosocial Design Network [blog post]
11/29/2023: Our paper "Towards Fair Video Summarization" has been accepted for publication in Transactions on Machine Learning Research (TMLR) [pdf] [code]
9/21/2023: Our paper "Auditing YouTube’s Recommendation System for Ideologically Congenial, Extreme, and Problematic Recommendations" was accepted for publication in PNAS [pdf] [code]
1/20/2023: Our paper "Robust Fair Clustering: A Novel Fairness Attack and Defense Framework" was accepted at the ICLR 2023 Main Conference [pdf] [code] [poster]
10/15/2022: Prof. Chhabra was invited by Prof. Hongfu Liu to give a seminar talk on Robust Clustering at Brandeis University, Boston
9/14/2022: Our paper "On the Robustness of Deep Clustering Models: Adversarial Attacks and Defenses" was accepted at the NeurIPS 2022 Main Conference [pdf] [supplementary] [code] [poster]
6/15/2022: Our paper on "Updatable Clustering via Patches" was accepted as a poster at the Updatable Machine Learning (UpML) workshop @ ICML 2022 [pdf]
1/25/2022: Invited by Kyle Polich as a guest on the Data Skeptic podcast [Spotify link]
10/21/2021: Our paper on "Fair Clustering Using Antidote Data" was accepted at the AFCR workshop @ NeurIPS 2021 for a contributed talk (top 6 papers), and is published in PMLR [pdf] [supplementary]
10/21/2021: Our paper on "Fairness Degrading Adversarial Attacks Against Clustering" was accepted at the AFCR workshop @ NeurIPS 2021 as a poster [pdf] [supplementary] [code]
10/11/2021: Invited keynote at MTD workshop @ ACM CCS 2021 for our paper on MTD for adversarial machine learning (talk by Prof. Mohapatra) [pdf] [supplementary]
9/17/2021: Our survey paper on fairness in clustering was accepted for publication in IEEE Access [pdf]
1/10/2020: Our paper "Suspicion-Free Adversarial Attacks Against Clustering Algorithms" was accepted in the AAAI 2020 Main Technical Conference [pdf] [code] [poster]
1/25/2019: Prof. Chhabra was invited by Ryan Turner to talk about our research on adversarial attacks against clustering at Uber AI in SF
12/1/2018: Presented our paper on the Tensorflex framework at the MLOSS workshop @ NeurIPS 2018 [pdf] [code]