Olaf Werner
LLMs are a new frontier technology and are becoming increasingly popular. However, this technology is still imperfect when it comes to reasoning. While LLMs are capable of reasoning through chain-of-thought (CoT) prompting, this reasoning is unreliable and can fail, especially when the reasoning can take multiple paths. One way to address this and make reasoning more transparent is to use reasoning engines such as Z3. Their main limitation, however, is that logical rules must be supplied in a dedicated formal form, and LLMs were not explicitly trained to produce it. A unique contribution of this work is fine-tuning an LLM specifically for this task.
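To illustrate what "a dedicated formal form" means in practice, the following is a minimal sketch (not the pipeline of this work) assuming the z3-solver Python bindings: the natural-language rule "all humans are mortal; Socrates is a human" must be restated as logical constraints before Z3 can check whether the conclusion follows.

```python
# Minimal illustrative sketch using the z3-solver Python bindings.
# Variable names are hypothetical; the encoding is proof by refutation:
# if the negated conclusion is unsatisfiable, the conclusion follows.
from z3 import Bool, Solver, Implies, Not, unsat

socrates_is_human = Bool("socrates_is_human")
socrates_is_mortal = Bool("socrates_is_mortal")

s = Solver()
s.add(Implies(socrates_is_human, socrates_is_mortal))  # rule: human -> mortal
s.add(socrates_is_human)                               # fact
s.add(Not(socrates_is_mortal))                         # negated conclusion

print("conclusion follows" if s.check() == unsat else "conclusion does not follow")
```

Producing such encodings from free-form text is exactly the translation step that LLMs were not explicitly trained for.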