Enhancing Large Language Model Reasoning with Graph Neural Networks

Large Language Models (LLMs) have demonstrated exceptional performance on tasks such as text generation, question answering, and summarization. However, they often struggle with tasks that require multi-step reasoning, relational reasoning, and factual consistency. Graph Neural Networks (GNNs), known for their ability to learn and infer over graph-structured data, offer a promising way to address these limitations. By integrating GNNs with LLMs, we aim to enhance reasoning capabilities, enabling the model to better capture complex relationships, perform multi-hop reasoning, and produce fewer hallucinations in its responses.

This project aims to design and evaluate a hybrid architecture that combines GNNs with LLMs to improve reasoning over structured and semi-structured knowledge.
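To make the idea concrete, one possible direction is prefix-style conditioning: a GNN encodes a knowledge subgraph into a few "soft prompt" vectors that are prepended to the LLM's input embeddings. The sketch below is only illustrative, not the project's fixed design; the class name `GraphPrefixEncoder`, the choice of GCN layers, and all dimensions are assumptions made for the example.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool


class GraphPrefixEncoder(nn.Module):
    """Encode a (sub)graph into a few 'soft prompt' vectors for an LLM."""

    def __init__(self, node_dim, hidden_dim, llm_dim, num_prefix_tokens=4):
        super().__init__()
        self.conv1 = GCNConv(node_dim, hidden_dim)    # 1st message-passing hop
        self.conv2 = GCNConv(hidden_dim, hidden_dim)  # 2nd message-passing hop
        self.num_prefix_tokens = num_prefix_tokens
        self.llm_dim = llm_dim
        # Map the pooled graph embedding to k virtual tokens in the LLM's
        # embedding space.
        self.proj = nn.Linear(hidden_dim, llm_dim * num_prefix_tokens)

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        g = global_mean_pool(h, batch)            # one vector per graph
        prefix = self.proj(g)                     # (num_graphs, k * llm_dim)
        return prefix.view(g.size(0), self.num_prefix_tokens, self.llm_dim)


# Toy usage: a 3-node graph with edges 0->1 and 1->2, encoded as 4 virtual
# tokens that could be prepended to the LLM's input embeddings.
x = torch.randn(3, 16)                            # node features
edge_index = torch.tensor([[0, 1], [1, 2]])       # COO format, shape (2, E)
batch = torch.zeros(3, dtype=torch.long)          # all nodes belong to graph 0
encoder = GraphPrefixEncoder(node_dim=16, hidden_dim=32, llm_dim=64)
soft_prompt = encoder(x, edge_index, batch)       # shape: (1, 4, 64)
```

Compared with serializing the graph as text, conditioning on learned graph embeddings keeps the relational structure explicit and does not spend context length on verbose triple listings; evaluating such trade-offs would be part of the project.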

We will hold weekly meetings to address questions, discuss progress, and develop ideas for future directions.

Requirements

Strong programming skills (ideally in Python) and a good knowledge of machine learning. Previous experience with PyTorch and other common deep-learning libraries is a plus.

Contact

Interested? Please reach out with a brief description of your motivation for the project, along with any relevant courses or prior projects (personal or academic) that demonstrate your background in the area.

Illustration: an LLM monster ingesting a knowledge graph.
