Reference Number: DEVCOM-091
Project Description
This research aims to develop comprehensive benchmarks and evaluation frameworks to assess how well current LLMs understand causal relationships, including their ability to distinguish correlation from causation and to reason about interventions. A key focus is identifying systematic biases in how these models approach causal questions, and developing training methodologies that enhance their causal reasoning without compromising their general language capabilities.
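As a rough illustration of what one benchmark item might look like, the sketch below defines a minimal item format and accuracy scorer in Python. The item texts, labels, and the `CausalItem`/`score` names are illustrative assumptions, not part of any existing benchmark.

```python
# Minimal sketch of a correlation-vs-causation benchmark item and scorer.
# Item text, labels, and names here are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class CausalItem:
    prompt: str      # question posed to the model
    gold_label: str  # "causal" or "correlational"

def score(items, predictions):
    """Fraction of predicted labels that match the gold labels."""
    correct = sum(p == it.gold_label for it, p in zip(items, predictions))
    return correct / len(items)

items = [
    CausalItem("Ice cream sales and drowning rates rise together in summer. "
               "Does ice cream cause drowning?", "correlational"),
    CausalItem("Randomly assigned patients who took the drug recovered faster. "
               "Did the drug cause faster recovery?", "causal"),
]

# Hypothetical model outputs for the two items above.
predictions = ["correlational", "causal"]
print(score(items, predictions))  # → 1.0
```

A full framework would also cover intervention questions (e.g. "what happens to Y if we set X?") and report per-category error patterns rather than a single accuracy number.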
Technical Skills
- Python
- SQL
- Flask