A recent Apple publication argued that Large Language Models (LLMs) cannot effectively reason. While there is some merit to this claim regarding out-of-the-box performance, this article demonstrates that with proper application, LLMs can indeed solve complex reasoning problems.
We set out to test LLM reasoning capabilities using Einstein's puzzle, a complex logic problem involving 5 houses with different characteristics and 15 clues to determine who owns a fish. Our initial tests with leading LLMs showed mixed results:
· OpenAI's model correctly guessed the answer, but without clear reasoning
· Claude provided an incorrect answer
· When we modified the puzzle with new elements (cars, hobbies, drinks, colors, and jobs), both models failed significantly
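To see why the puzzle is hard to solve by guessing, it helps to look at the size of the search space. A minimal Python sketch (attribute names are illustrative; the modified puzzle described above uses cars, hobbies, drinks, colors, and jobs):

```python
from itertools import permutations

# Sketch of the puzzle's structure: each attribute assigns one of five
# values to each of houses 1..5, i.e. a permutation of the five values.
# (Attribute values here are illustrative, not the actual puzzle data.)
ATTRIBUTES = {
    "color": ["Red", "Green", "White", "Yellow", "Blue"],
    "pet":   ["Dogs", "Birds", "Cats", "Horses", "Fish"],
}

orderings_per_attribute = len(list(permutations(range(5))))  # 5! = 120
search_space = orderings_per_attribute ** 5  # five independent attributes
print(orderings_per_attribute, search_space)
```

With five attributes of five values each, there are roughly 24.9 billion raw arrangements, which is why the clues must be combined through multi-step inference rather than enumeration or guesswork.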
We implemented our Tree of Thoughts approach, where the model would:
1. Make guesses about house arrangements
2. Use critics to evaluate rule violations
3. Feed this information back for the next round
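The guess–critique–feedback loop above can be sketched as runnable Python. In the real system both roles are LLM calls; here a brute-force proposer and a rule-checking critic stand in (a hypothetical toy, not our actual implementation), so the control flow can actually execute:

```python
from itertools import permutations

def critique(arrangement, rules):
    """Critic: return the names of the rules the candidate violates."""
    return [name for name, check in rules if not check(arrangement)]

def tree_of_thoughts(candidates, rules):
    """Propose a guess, critique it, and feed violations back each round."""
    feedback = []
    for arrangement in candidates:        # proposer: next guess
        violations = critique(arrangement, rules)
        if not violations:                # critic found no rule violations
            return arrangement, feedback
        feedback.append((arrangement, violations))  # fed into the next round
    return None, feedback

# Toy puzzle: order three pets so that Fish is not first and Cats is last.
rules = [
    ("fish not first", lambda a: a[0] != "Fish"),
    ("cats last",      lambda a: a[-1] == "Cats"),
]
solution, history = tree_of_thoughts(permutations(["Fish", "Dogs", "Cats"]), rules)
print(solution)
```

The `history` list plays the role of the feedback channel: each rejected arrangement is stored together with the rules it broke, so the next proposal can be conditioned on past mistakes.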
However, this revealed several interesting failures in reasoning:
The critics often struggled with basic logical concepts. For example, when evaluating the rule "The Plumber lives next to the Pink house," we received this confused response:
"The Plumber lives in House 2, which is also the Pink house. Since the Plumber lives in the Pink house, it means that the Plumber lives next to the Pink house, which is House 1 (Orange)."
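For contrast, "next to" becomes unambiguous the moment it is written as code: two houses are adjacent exactly when their numbers differ by one. A minimal check, applied to the house numbers from the quoted example:

```python
def next_to(i, j):
    """Houses i and j are adjacent iff their numbers differ by exactly one."""
    return abs(i - j) == 1

# The critic's claim was that the Plumber in House 2 "lives next to" the
# Pink house, which is also House 2. A house is not adjacent to itself:
print(next_to(2, 2))  # False
print(next_to(2, 1))  # True: House 2 is next to House 1
```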
The models sometimes inserted unfounded biases into their reasoning. For instance:
"The Orange house cannot be in House 1 because the Plumber lives there and the Plumber does not drive a Porsche."
The models also made assumptions about what music Porsche drivers would listen to, demonstrating how internal biases can interfere with pure logical reasoning.
While direct reasoning showed limitations, we discovered that LLMs could excel when used as code generators. We asked SCOTi to write MiniZinc code to solve the puzzle, resulting in a well-formed constraint programming solution. The key advantages of this approach were:
1. Each rule could be cleanly translated into code statements
2. The resulting code was highly readable
3. MiniZinc could solve the puzzle efficiently
The MiniZinc code demonstrated elegant translation of puzzle rules into constraints. For instance:
```
% Statement 11: The man who enjoys Music lives next to the man who drives a Porsche
% Note: /\ means AND in MiniZinc
constraint exists(i,j in 1..5)(abs(i-j) == 1 /\ hobbies[i] = Music /\ cars[j] = Porsche);
```
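The same constraint can be re-expressed in ordinary Python, which makes it easy to check a candidate solution directly (a sketch with made-up data; the `hobbies` and `cars` lists mirror the MiniZinc arrays, but use 0-based indices):

```python
# Statement 11 in Python: some pair of adjacent houses has Music in one
# and a Porsche in the other. (Illustrative data, not the puzzle's answer.)
hobbies = ["Reading", "Music", "Gardening", "Painting", "Dancing"]
cars    = ["Porsche", "Tesla", "BMW", "Audi", "Ford"]

def statement_11(hobbies, cars):
    return any(
        abs(i - j) == 1 and hobbies[i] == "Music" and cars[j] == "Porsche"
        for i in range(5)
        for j in range(5)
    )

print(statement_11(hobbies, cars))  # Music in house 2, Porsche in house 1 (1-based)
```

The point of handing this to MiniZinc rather than checking it ourselves is that a constraint solver searches for an assignment satisfying all fifteen statements simultaneously, instead of verifying one guess at a time.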
If you would like to get the full MiniZinc code, please DM me.
This experiment reveals several important insights about LLM capabilities:
1. Direct reasoning with complex logic can be challenging for LLMs
2. Simple rule application works well, but performance degrades when multiple steps of inference are required
3. LLMs excel when used as agents to generate code for solving logical problems
4. The combination of LLM code generation and traditional constraint solving tools creates powerful solutions
The key takeaway is that while LLMs may struggle with certain types of direct reasoning, they can be incredibly effective when properly applied as components in a larger system. This represents a significant advancement in software development capabilities, demonstrating how LLMs can be transformative when used strategically rather than as standalone reasoning engines.
This study reinforces the view that LLMs are best understood as transformational software components rather than complete reasoning systems. Their impact on software development and problem-solving will continue to evolve as we better understand how to leverage their strengths while working around their limitations.
This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.