I am a Ph.D. student at The University of British Columbia (UBC), advised by Prof. Prashant Nair. My research focuses on optimizing Recommender Systems, spanning the areas of system optimization and hardware acceleration.
I received my Master of Applied Science (M.A.Sc.) from the Department of Electrical and Computer Engineering (ECE) at UBC, also advised by Prof. Prashant Nair.
news
Jul 7, 2022
I received the VLDB Endowment Travel (SPEND) Award for attending the VLDB 2022 conference.
Oct 22, 2021
I completed my M.A.Sc. in Electrical and Computer Engineering at UBC.
Aug 15, 2021
Our paper “Accelerating Recommendation System Training by Leveraging Popular Choices” was accepted to VLDB 2022.
selected publications
High-Performance Training by Exploiting Hot-Embeddings in Recommendation Systems
Muhammad Adnan, Yassaman Maboud, Divya Mahajan, and Prashant J. Nair
In Proceedings of the 48th International Conference on Very Large Data Bases (VLDB) 2022
Recommender models are commonly used to suggest relevant items to users in e-commerce and online advertising applications. These models use massive embedding tables to store numerical representations of items’ and users’ categorical variables (memory intensive) and employ neural networks (compute intensive) to generate final recommendations. Training these large-scale recommendation models requires ever-increasing data and compute resources. The highly parallel neural-network portion of these models can benefit from GPU acceleration; however, large embedding tables often cannot fit in the limited-capacity GPU device memory. Hence, this paper dives deep into the semantics of training data and obtains insights about the feature access, transfer, and usage patterns of these models. We observe that, due to the popularity of certain inputs, accesses to the embeddings are highly skewed, with a few embedding entries being accessed up to 10000X more often. This paper leverages this asymmetrical access pattern to offer a framework, called FAE, and proposes a hot-embedding-aware data layout for training recommender models. This layout uses the scarce GPU memory to store the most frequently accessed embeddings, thus reducing data transfers from CPU to GPU. At the same time, FAE engages the GPU to accelerate the execution of these hot embedding entries. Experiments on production-scale recommendation models with real datasets show that FAE reduces overall training time by 2.3X and 1.52X in comparison to XDL CPU-only and XDL CPU-GPU execution, respectively, while maintaining baseline accuracy.
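The core mechanism described above, splitting embedding rows by access popularity so that hot rows live in GPU memory while the long tail stays on the CPU, can be illustrated with a short sketch. This is a minimal PyTorch illustration of the idea, not the authors’ FAE implementation; the function name, the 1% hot fraction, and the synthetic access counts are assumptions made for exposition.

```python
# Minimal sketch of hot-embedding partitioning (assumed, illustrative code;
# not the FAE framework itself). Profile access counts, pin the most
# frequently accessed rows on the GPU, keep the rest on the CPU.
import torch

def partition_by_popularity(access_counts: torch.Tensor, hot_fraction: float = 0.01):
    """Return index tensors for hot (GPU-resident) and cold (CPU-resident) rows."""
    num_hot = max(1, int(hot_fraction * access_counts.numel()))
    hot_ids = torch.topk(access_counts, num_hot).indices  # most popular rows
    mask = torch.ones_like(access_counts, dtype=torch.bool)
    mask[hot_ids] = False
    cold_ids = mask.nonzero(as_tuple=True)[0]
    return hot_ids, cold_ids

# Illustrative setup: a 1M-row embedding table with synthetic skewed counts.
num_rows, dim = 1_000_000, 64
counts = torch.randint(1, 10_000, (num_rows,)).float()
hot_ids, cold_ids = partition_by_popularity(counts)

device = "cuda" if torch.cuda.is_available() else "cpu"
hot_table = torch.nn.Embedding(len(hot_ids), dim).to(device)  # scarce GPU memory
cold_table = torch.nn.Embedding(len(cold_ids), dim)           # stays on CPU

# Lookups that hit hot rows avoid CPU-to-GPU transfers; cold rows are fetched
# on demand. A real system would also maintain a mapping from global row ids
# to positions within each partitioned table.
```

In practice, because the access distribution is heavily skewed, even a small hot fraction captures most lookups, which is what lets a limited GPU memory budget eliminate the bulk of CPU-to-GPU embedding traffic.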
@inproceedings{adnan2022hotembeddings,
  title      = {High-Performance Training by Exploiting Hot-Embeddings in Recommendation Systems},
  author     = {Adnan, Muhammad and Maboud, Yassaman and Mahajan, Divya and Nair, Prashant J.},
  booktitle  = {Proceedings of the 48th International Conference on Very Large Data Bases (VLDB)},
  journal    = {Proc. VLDB Endow.},
  volume     = {15},
  number     = {1},
  year       = {2021},
  issue_date = {September 2021},
  publisher  = {VLDB Endowment},
  doi        = {10.14778/3485450.3485462},
  issn       = {2150-8097},
}