PlcH6HJku4@OpenReview

#1 Counterfactual Implicit Feedback Modeling

Authors: Chuan Zhou, Lina Yao, Haoxuan Li, Mingming Gong

In recommendation systems, implicit feedback data can be recorded automatically and is far more common than explicit feedback data. However, implicit feedback poses two challenges for relevance prediction: (a) positive-unlabeled (PU): the absence of positive feedback does not necessarily imply low relevance, and (b) missing not at random (MNAR): items that are popular or frequently recommended tend to receive more clicks than other items, even when the user has no significant interest in them. Existing methods either overlook the MNAR issue or fail to account for the inherent mechanism of the PU issue; as a result, they may produce inaccurate relevance predictions or inflated bias and variance. In this paper, we formulate the implicit feedback problem as a counterfactual estimation problem with missing treatment variables. Predicting relevance from implicit feedback is then equivalent to answering the counterfactual question "would a user click a specific item if exposed to it?" To answer this question, we propose the Counterfactual Implicit Feedback (Counter-IF) prediction approach, which divides user-item pairs into four disjoint groups: definitely positive (DP), highly exposed (HE), highly unexposed (HU), and unknown (UN). Specifically, Counter-IF first performs missing treatment imputation with different confidence levels from the raw implicit feedback, then estimates the counterfactual outcomes via causal representation learning that combines a pointwise loss and a pairwise loss based on the user-item pair stratification. Theoretically, we derive the generalization bound of the learned model. Extensive experiments on publicly available datasets demonstrate the effectiveness of our approach. The code is available at https://github.com/zhouchuanCN/NeurIPS25-Counter-IF.
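The four-group stratification and the combined pointwise/pairwise objective from the abstract might be sketched roughly as below. The thresholds, function names, and the specific loss forms (BCE-style pointwise, BPR-style pairwise) are illustrative assumptions, not the paper's exact formulation; see the linked repository for the actual method.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def stratify(clicked, exposure_prob, hi=0.8, lo=0.2):
    """Assign a user-item pair to one of the four groups named in the
    abstract. `exposure_prob` is an imputed exposure probability; the
    thresholds `hi`/`lo` are illustrative, not taken from the paper."""
    if clicked:
        return "DP"  # definitely positive: a click implies exposure and relevance
    if exposure_prob >= hi:
        return "HE"  # highly exposed: likely shown but not clicked
    if exposure_prob <= lo:
        return "HU"  # highly unexposed: likely never shown to the user
    return "UN"      # unknown: exposure too uncertain to impute

def combined_loss(dp_scores, he_scores):
    """Combine a pointwise loss (BCE with DP pairs as positives and HE
    pairs as imputed negatives) with a pairwise BPR-style loss that ranks
    DP above HE. A guess at the general shape, not the paper's objective."""
    point = -sum(math.log(sigmoid(s)) for s in dp_scores)
    point -= sum(math.log(1.0 - sigmoid(s)) for s in he_scores)
    pair = -sum(math.log(sigmoid(sp - sn))
                for sp in dp_scores for sn in he_scores)
    return point + pair
```

For example, an unclicked pair with imputed exposure probability 0.9 falls into HE and is treated as a confident negative, while one at 0.5 falls into UN and contributes to neither loss term in this sketch.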

Subject: NeurIPS.2025 - Poster