Yongjun Chen


About Me: I am currently a Senior Research Engineer at Salesforce Research. I am interested in researching and productizing machine learning solutions to real-world problems. Currently, I work on Recommender Systems, Applied Machine Learning, and AutoML. I obtained my M.S. degree from the Computer Science Department at Washington State University, advised by Prof. Shuiwang Ji. Before that, I received my bachelor's degree from the Mathematics and Statistics Department at Huazhong University of Science and Technology (HUST) in China.

Contact: yongjunchen1995@gmail.com

[LinkedIn] [Google Scholar] [GitHub]

Publications

(* denotes equal contribution)

Conference

ELECRec: Training Sequential Recommenders as Discriminators
Yongjun Chen, Jia Li, Caiming Xiong
The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2022
Intent Contrastive Learning for Sequential Recommendation
Yongjun Chen, Zhiwei Liu, Jia Li, Julian McAuley, Caiming Xiong
The Web Conference (WWW), 2022
Modeling Dynamic Attributes for Next Basket Recommendation
Yongjun Chen, Jia Li, Chenghao Liu, Chenxi Li, Markus Anderle, Julian McAuley, Caiming Xiong
Context-Aware Recommender Systems Workshop at ACM Conference on Recommender Systems (CARS@RecSys), 2021
Interpreting Deep Models for Text Analysis via Optimization and Regularization Methods
Hao Yuan, Yongjun Chen, Xia Hu and Shuiwang Ji
The 33rd AAAI Conference on Artificial Intelligence (AAAI), 2019
Learning Graph Pooling and Hybrid Convolutional Operations for Text Representations
Hongyang Gao, Yongjun Chen, and Shuiwang Ji
The Web Conference (WWW), 2019
Dense Transformer Networks
Jun Li, Yongjun Chen, Lei Cai, Ian Davidson, and Shuiwang Ji
The 28th International Joint Conference on Artificial Intelligence (IJCAI), 2019
The key idea of current deep learning methods for dense prediction is to apply a model on a regular patch centered on each pixel to make pixel-wise predictions. These methods are limited in the sense that the patches are determined by the network architecture instead of learned from data. In this work, we propose the dense transformer networks, which can learn the shapes and sizes of patches from data. The dense transformer networks employ an encoder-decoder architecture, and a pair of dense transformer modules are inserted into each of the encoder and decoder paths. The novelty of this work is that we provide technical solutions for learning the shapes and sizes of patches from data and efficiently restoring the spatial correspondence required for dense prediction. The proposed dense transformer modules are differentiable, so the entire network can be trained end-to-end. We apply the proposed networks on natural and biological image segmentation tasks and show that superior performance is achieved in comparison to baseline methods.
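The differentiable sampling step that such transformer modules rely on can be illustrated with a minimal NumPy sketch. This is an illustrative bilinear sampler, not the paper's implementation; the function name, shapes, and single-channel setup are assumptions for demonstration. Sampling feature values at fractional, data-dependent coordinates is what allows patch shapes and sizes to be learned rather than fixed by the architecture:

```python
import numpy as np

def bilinear_sample(feat, coords):
    # feat: (H, W) feature map; coords: (N, 2) fractional (row, col)
    # sampling locations, e.g. produced by a learned transformation.
    # Returns bilinearly interpolated values; because the interpolation
    # weights are smooth in the coordinates, gradients can flow back
    # through the sampling locations during training.
    H, W = feat.shape
    r = np.clip(coords[:, 0], 0, H - 1)
    c = np.clip(coords[:, 1], 0, W - 1)
    r0 = np.floor(r).astype(int)
    c0 = np.floor(c).astype(int)
    r1 = np.minimum(r0 + 1, H - 1)
    c1 = np.minimum(c0 + 1, W - 1)
    wr = r - r0  # vertical interpolation weight
    wc = c - c0  # horizontal interpolation weight
    top = feat[r0, c0] * (1 - wc) + feat[r0, c1] * wc
    bot = feat[r1, c0] * (1 - wc) + feat[r1, c1] * wc
    return top * (1 - wr) + bot * wr
```

For example, sampling the 2x2 map [[0, 1], [2, 3]] at the center (0.5, 0.5) averages all four values.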
Voxel Deconvolutional Networks for 3D Brain Image Labeling
Yongjun Chen, Hongyang Gao, Lei Cai, Min Shi, Dinggang Shen and Shuiwang Ji
The 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2018
Deep learning methods have shown great success in pixel-wise prediction tasks. One of the most popular methods employs an encoder-decoder network in which deconvolutional layers are used for up-sampling feature maps. However, a key limitation of the deconvolutional layer is that it suffers from the checkerboard artifact problem, which harms prediction accuracy. This is caused by the independence among adjacent pixels on the output feature maps. Previous work only solved the checkerboard artifact issue of deconvolutional layers in 2D space. Since the number of intermediate feature maps needed to generate a deconvolutional layer grows exponentially with dimensionality, it is more challenging to solve this issue in higher dimensions. In this work, we propose the voxel deconvolutional layer (VoxelDCL) to solve the checkerboard artifact problem of deconvolutional layers in 3D space. We also provide an efficient approach to implement VoxelDCL. To demonstrate the effectiveness of VoxelDCL, we build four variations of voxel deconvolutional networks (VoxelDCN) based on the U-Net architecture with VoxelDCL. We apply our networks to volumetric brain image labeling tasks using the ADNI and LONI LPBA40 datasets. The experimental results show that the proposed iVoxelDCNa achieves improved performance in all experiments. It reaches 83.34% in terms of dice ratio on the ADNI dataset and 79.12% on the LONI LPBA40 dataset, an increase of 1.39% and 2.21%, respectively, over the baseline. In addition, all the variations of VoxelDCN we proposed outperform the baseline methods on the above datasets, which demonstrates the effectiveness of our methods.
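The interleaving step at the heart of this construction can be sketched in a few lines of NumPy. In 3D, upsampling by a factor of 2 requires 2^3 = 8 intermediate feature maps, one per offset in the 2x2x2 output pattern, which are woven together into the upsampled volume; generating those maps so they depend on one another is what removes the checkerboard artifact. This is an illustrative "voxel shuffle" sketch, not the paper's implementation; the function name and the single-channel layout are assumptions:

```python
import numpy as np

def voxel_shuffle(maps):
    # maps: (8, D, H, W) array of eight intermediate feature maps,
    # one per offset (dz, dy, dx) in the 2x2x2 upsampling pattern.
    # Interleaves them into a single (2D, 2H, 2W) volume, so each
    # output voxel comes from a different intermediate map.
    assert maps.shape[0] == 8
    _, D, H, W = maps.shape
    out = np.empty((2 * D, 2 * H, 2 * W), dtype=maps.dtype)
    idx = 0
    for dz in range(2):
        for dy in range(2):
            for dx in range(2):
                # Strided assignment places map `idx` at every output
                # voxel whose coordinates have parity (dz, dy, dx).
                out[dz::2, dy::2, dx::2] = maps[idx]
                idx += 1
    return out
```

If the eight maps were produced independently (as in a plain deconvolution), adjacent output voxels would be uncorrelated, which is the source of the checkerboard pattern; generating them sequentially from shared inputs makes neighbors consistent.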

Preprints

Contrastive Self-supervised Sequential Recommendation with Robust Augmentation
Zhiwei Liu*, Yongjun Chen*, Jia Li, Philip S. Yu, Julian McAuley, Caiming Xiong
Preprint, 2021

Honors & Awards

Teaching Experiences

Teaching Assistant

Washington State University, WA, USA

Education