1 comment
  • Title: Machine intuition: Uncovering human-like intuitive decision-making in GPT-3

    Authors: Thilo Hagendorff, Sarah Fabi, and Michal Kosinski

    Word Count: Approximately 10,200 words

    Estimated Read Time: 35-40 minutes

    Summary:

    The paper investigates whether large language models (LLMs) such as GPT-3 exhibit human-like intuitive behavior and the cognitive biases that accompany it. The authors probe a range of LLMs with the Cognitive Reflection Test (CRT) and with semantic illusions, tasks originally designed to study intuitive decision-making in humans.
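
    To make the setup concrete, a minimal scoring sketch is shown below, assuming a plausible three-way split of model responses into correct, intuitive, and atypical; the helper name and the example item are illustrative, not taken from the paper's own code.

    ```python
    # Hypothetical scoring helper for CRT-style probes. The three labels reflect
    # a plausible correct / intuitive / atypical split; this is not the authors'
    # actual evaluation code.
    def score_response(model_answer: str, correct: str, intuitive: str) -> str:
        """Label a model's free-text answer to one CRT item."""
        answer = model_answer.strip().lower()
        if correct in answer:
            return "correct"    # deliberate, reflective answer
        if intuitive in answer:
            return "intuitive"  # the tempting but wrong answer
        return "atypical"       # anything else, e.g. off-topic or garbled

    # Classic CRT item: "A bat and a ball cost $1.10 in total. The bat costs
    # $1.00 more than the ball. How much does the ball cost?"
    print(score_response("The ball costs 10 cents.",
                         correct="5 cents", intuitive="10 cents"))  # -> intuitive
    ```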

    The results show that early LLMs lack the mathematical ability and knowledge needed to attempt these tasks. However, as LLMs grow more complex, they begin to show human-like intuitive behavior and commit the same errors as humans. GPT-3 in particular shows a strong inclination toward intuitive responses on the CRT and semantic illusions, answering correctly in only around 10% of cases.
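
    For a sense of what an "intuitive error" means here, the best-known CRT item is the bat-and-ball problem; the short check below is just a worked illustration of why the tempting answer fails, not output from the paper.

    ```python
    # Bat-and-ball CRT item: a bat and a ball cost $1.10 together, and the bat
    # costs $1.00 more than the ball. The intuitive answer (10 cents) violates
    # the constraints; the correct answer is 5 cents.
    for ball in (0.10, 0.05):
        bat = ball + 1.00                         # "the bat costs $1.00 more"
        satisfied = abs((bat + ball) - 1.10) < 1e-9
        print(f"ball=${ball:.2f}  bat=${bat:.2f}  total=${bat + ball:.2f}  "
              f"constraints satisfied: {satisfied}")
    # ball=$0.10 gives a total of $1.20 (intuitive but wrong);
    # ball=$0.05 gives $1.10 (correct).
    ```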

    Newer LLMs such as ChatGPT and GPT-4, however, overcome these intuitive errors, responding correctly in around 80% and 97% of cases respectively. The authors attribute this to improvements in the reasoning capabilities of these models.

    The authors explore methods to reduce GPT-3's intuitive behavior, such as offering multiple-choice answer options, eliciting deliberate reasoning, and providing training examples. These interventions are effective, bringing GPT-3's performance close to ChatGPT's.
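
    A rough sketch of what those three mitigation strategies can look like as prompt variants is given below; the wording and the few-shot example are hypothetical, not copied from the paper's materials.

    ```python
    # Illustrative prompt variants for the three strategies mentioned above:
    # multiple-choice options, eliciting deliberate reasoning, and worked
    # training examples (few-shot prompting). All wording is hypothetical.
    ITEM = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
            "than the ball. How much does the ball cost?")

    PROMPTS = {
        # 1. Offer explicit answer options instead of a free-form completion.
        "multiple_choice": f"{ITEM}\nAnswer with (a) 10 cents or (b) 5 cents.",
        # 2. Ask for step-by-step reasoning before the final answer.
        "deliberate_reasoning": f"{ITEM}\nLet's think step by step before answering.",
        # 3. Prepend a worked example so the model can imitate the pattern.
        "few_shot": (
            "Q: A notebook and a pen cost $2.20 in total. The notebook costs "
            "$2.00 more than the pen. How much does the pen cost?\n"
            "A: The pen costs 10 cents.\n\n"
            f"Q: {ITEM}\nA:"
        ),
    }

    for name, prompt in PROMPTS.items():
        print(f"--- {name} ---\n{prompt}\n")
    ```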

    The findings suggest that LLMs can develop probability distributions over language that mimic human intuition, even though they lack the underlying cognitive mechanisms. The authors argue that investigating LLMs with methods from psychology can reveal behavior that would otherwise go unnoticed.

    In summary, the paper demonstrates that LLMs gradually develop the ability to make human-like intuitive decisions and errors. However, the newest LLMs seem to overcome these tendencies, suggesting major improvements in their reasoning capabilities. The findings highlight the value of using methods from psychology to study the abilities and behaviors of LLMs.

    The findings could inform the development of LLMs that are designed to avoid intuitive errors and reason more robustly. The methods used here to study human-like behavior could also be applied to new models as they appear. The results also underline the need for careful scrutiny of LLMs before they are deployed in real-world applications.