📝 Highlighted Research

Large Language Model Tuning

ACL 2025 Findings

MoRE: A Mixture of Low-Rank Experts for Adaptive Multi-Task Learning

Dacao Zhang, Kun Zhang*, Shimao Chu, Le Wu, Xin Li, Si Wei

  • This work focuses on the multi-task fine-tuning of LLMs and develops a novel Mixture of Low-Rank Experts (MoRE) for efficient LLM tuning.
  • MoRE treats each rank of a LoRA module as an expert, realizing MoE-style tuning within a single LoRA module; this reduces computation cost while preserving LLM capability across various downstream tasks (see the sketch below).
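
A minimal PyTorch-style sketch of the rank-as-expert idea described above. The module name `MoRELinear`, the softmax gate, and the initialization are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Sketch: one frozen linear layer plus a single LoRA pair (A, B) whose rank
# dimensions are treated as experts and mixed by an input-conditioned gate.
# All names and design details here are assumptions, not the paper's code.
import torch
import torch.nn as nn


class MoRELinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)           # frozen pretrained weight
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.gate = nn.Linear(in_features, rank)          # router over rank "experts"
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, in_features)
        gate_w = torch.softmax(self.gate(x), dim=-1)               # (batch, seq, rank)
        low_rank = torch.einsum("bsd,rd->bsr", x, self.lora_A)     # project to rank space
        low_rank = low_rank * gate_w                               # weight each rank expert
        update = torch.einsum("bsr,or->bso", low_rank, self.lora_B)
        return self.base(x) + self.scaling * update


if __name__ == "__main__":
    layer = MoRELinear(64, 64, rank=8)
    print(layer(torch.randn(2, 5, 64)).shape)  # torch.Size([2, 5, 64])
```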

Sentence Semantic Representation

IEEE TNNLS 2023

Description-Enhanced Label Embedding Contrastive Learning for Text Classification

Kun Zhang, Le Wu, Guangyi Lv, Enhong Chen, Shulan Ruan, Jing Liu, Zhiqiang Zhang, Jun Zhou, Meng Wang

  • The preliminary version of this work, R$^2$-Net, was accepted by AAAI 2021.
  • This work proposes a novel self-supervised learning framework that makes full use of label information to generate high-quality sentence representations and support relation inference (see the sketch below).
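
A hedged sketch of a label-embedding contrastive objective in the spirit of the description above: each sentence representation is pulled toward the embedding of its own (description-enhanced) label and pushed away from the other labels. The loss form and temperature are assumptions for illustration, not the paper's exact formulation.

```python
# Assumption-labeled sketch, not the authors' implementation.
import torch
import torch.nn.functional as F


def label_contrastive_loss(sent_emb: torch.Tensor,
                           label_emb: torch.Tensor,
                           labels: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """sent_emb:  (batch, dim)        sentence representations
    label_emb: (num_labels, dim)    label/description representations
    labels:    (batch,)             gold label indices
    """
    sent_emb = F.normalize(sent_emb, dim=-1)
    label_emb = F.normalize(label_emb, dim=-1)
    logits = sent_emb @ label_emb.t() / temperature   # (batch, num_labels)
    # Cross-entropy over label similarities = contrastive pull/push.
    return F.cross_entropy(logits, labels)
```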

Causal Inference-based Debiasing

AI Open 2024

Label-aware Debiased Causal Reasoning for Natural Language Inference

Kun Zhang*, Dacao Zhang, Le Wu, Richang Hong, Ye Zhao, Meng Wang

  • This work proposes that label information can guide the identification of spurious correlations. It therefore treats label information as a variable in the causal graph and uses counterfactual inference to remove the spurious correlations introduced by human annotations, realizing debiased and robust natural language inference (see the sketch after this list).
  • We also extended this work to multi-modal scenarios and published a high-quality paper in the Journal of Computer Research and Development.
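
A hedged sketch of counterfactual debiasing at inference time: subtract the prediction of a bias-only branch (e.g. a hypothesis-only classifier in NLI) from the full model's prediction. The subtraction-style fusion and the choice of bias branch are assumptions made for illustration, not the paper's exact causal graph.

```python
# Assumption-labeled sketch of counterfactual logit subtraction.
import torch


def debiased_logits(full_logits: torch.Tensor,
                    bias_only_logits: torch.Tensor,
                    lam: float = 1.0) -> torch.Tensor:
    """Remove the effect the bias branch alone would produce
    (a simple counterfactual subtraction)."""
    return full_logits - lam * bias_only_logits


if __name__ == "__main__":
    full = torch.tensor([[2.0, 0.5, -1.0]])   # full-model logits
    bias = torch.tensor([[1.5, 0.1, -0.2]])   # bias-only branch logits
    print(debiased_logits(full, bias).softmax(dim=-1))
```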