
A team of ECE researchers, comprising doctoral student Umit Yigit Basaran, Prof. Amit Roy-Chowdhury, and Prof. Basak Guler, has developed a new method that allows artificial intelligence models to erase private or copyrighted information without access to the original training data. The method, called "source-free certified unlearning," is detailed in the paper "A Certified Unlearning Approach without Access to Source Data," presented in July at the International Conference on Machine Learning (ICML) in Vancouver, a leading conference in the field of machine learning. The technique uses a surrogate dataset along with carefully calibrated noise to guide machine learning models to "forget" targeted information while maintaining their predictive performance. The team collaborated with Sk Miraj Ahmed of Brookhaven National Laboratory, a former doctoral student at UC Riverside.
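The paper itself specifies the exact update rule and noise calibration; as a rough illustration only, the sketch below shows the general shape of this family of certified-unlearning methods on a simple convex model (L2-regularized logistic regression): a Newton-style correction that approximately cancels the forget set's gradient contribution, with the curvature estimated on a surrogate dataset in place of the unavailable source data, followed by calibrated Gaussian noise on the parameters. All function names, the update form, and the fixed noise_scale here are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def logreg_grad(w, X, y):
    """Average gradient of the unregularized logistic loss at w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def logreg_hess(w, X, lam):
    """Hessian of the L2-regularized average logistic loss at w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return (X.T * (p * (1.0 - p))) @ X / len(X) + lam * np.eye(X.shape[1])

def source_free_unlearn(w, X_forget, y_forget, X_surrogate,
                        n_train, lam=1e-2, noise_scale=0.05, seed=0):
    """Approximately remove a forget set's influence from trained weights w.

    The Hessian is estimated on surrogate data because the original training
    set is assumed unavailable; in a real certified method, noise_scale would
    be calibrated to cover the surrogate-vs-source approximation error.
    """
    rng = np.random.default_rng(seed)
    m = len(y_forget)
    # Summed gradient contribution of the points to be forgotten.
    g_forget = logreg_grad(w, X_forget, y_forget) * m
    # Curvature estimated from the surrogate dataset only (source-free).
    H_surr = logreg_hess(w, X_surrogate, lam)
    # Newton-style correction that approximately cancels the forget set's
    # influence on the trained parameters.
    w_new = w + np.linalg.solve(H_surr, g_forget) / (n_train - m)
    # Calibrated Gaussian noise masks whatever influence the correction misses,
    # which is what yields the certified, privacy-style guarantee.
    return w_new + rng.normal(scale=noise_scale, size=w.shape)
```

In this style of method, the noise level is what carries the certificate: it must be large enough to cover the error introduced by approximating the inaccessible source data with the surrogate, while staying small enough to preserve the model's predictive performance.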
The team validated their method on synthetic and real-world datasets, demonstrating performance guarantees comparable to retraining a model from scratch at a much lower computational cost. While the method currently applies to simpler models, the team plans to extend it to more complex systems and to build tools that make the technology broadly available, with potential benefits for sectors that handle sensitive data, such as media, finance, and healthcare.
Link to the paper: https://icml.cc/virtual/2025/poster/46268
Link to UCR news: https://news.ucr.edu/articles/2025/08/28/ucr-pioneers-way-remove-private-data-ai-models