Master's Candidate in Artificial Intelligence
Nanjing University
I am currently a Master's candidate at the School of Artificial Intelligence, Nanjing University (NJU), and a member of the NLP research group led by Prof. Jiajun Chen. My advisor is Prof. Xinyu Dai.
My research is dedicated to advancing the reasoning capabilities of Large Language Models (LLMs). I focus on complex reasoning tasks, including mathematical reasoning, logical inference, table-based reasoning, and retrieval-augmented generation, and I explore methodologies such as In-Context Learning (ICL) and Reinforcement Learning (RL) to enhance models' reasoning abilities.
My research centers on enhancing the reasoning capabilities of Large Language Models (LLMs), spanning a range of reasoning tasks and methodological approaches.
Developing techniques for models to learn and adapt from contextual examples without parameter updates, enhancing few-shot reasoning capabilities.
Applying RL techniques to improve LLM reasoning through reward-based optimization and iterative learning from feedback.
Authors: Jiang-Zhou Ju, Yun-Lin Mao, Zhen Wu, Yu-Fei Chen, Xin-Yu Dai, Jia-Jun Chen
This paper presents a novel approach to numerical question-answering tasks involving both text and tables, introducing a multi-granularity cell contrast method to improve the model's ability to distinguish between relevant and irrelevant information in tabular data.
Location
Nanjing University, China
Affiliation
School of Artificial Intelligence, NLP Research Group
Part of the Natural Language Processing research group at Nanjing University
Visit Group Website