morrowind @lemm.ee
Posts 2
Comments 1
arxiv.org Chain of Draft: Thinking Faster by Writing Less

Large Language Models (LLMs) have demonstrated remarkable performance in solving complex reasoning tasks through mechanisms like Chain-of-Thought (CoT) prompting, which emphasizes verbose, step-by-step reasoning. However, humans typically employ a more efficient strategy: drafting concise intermedia...
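The core idea is to replace a verbose chain-of-thought instruction with a "draft" instruction that caps each reasoning step at a few words. Below is a minimal sketch of how the two prompting styles might be set up; the `call_llm` helper and the exact instruction wording are illustrative assumptions, not the paper's verbatim prompts.

```python
# Minimal sketch contrasting Chain-of-Thought and Chain-of-Draft style prompts.
# `call_llm` is a hypothetical stand-in for whatever chat-completion client you use;
# the instruction text below is illustrative, not the paper's exact wording.

COT_SYSTEM = (
    "Think step by step to answer the question. "
    "Explain each step fully, then give the final answer after '####'."
)

COD_SYSTEM = (
    "Think step by step, but keep only a minimal draft of each step, "
    "at most a few words per step. Give the final answer after '####'."
)

def call_llm(system: str, user: str) -> str:
    """Placeholder for a real chat-completion call (e.g. an OpenAI-compatible API)."""
    raise NotImplementedError("wire this to your LLM client")

def answer(question: str, concise: bool = True) -> str:
    system = COD_SYSTEM if concise else COT_SYSTEM
    reply = call_llm(system, question)
    # Both styles place the final answer after '####', so extraction is shared.
    return reply.split("####")[-1].strip()
```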


Atom of Thoughts (AOT): lifts gpt-4o-mini to 80.6% F1 on HotpotQA, surpassing o3-mini and DeepSeek-R1

bsky.app Sung Kim (@sungkim.bsky.social)

Atom of Thoughts (AOT): lifts gpt-4o-mini to 80.6% F1 on HotpotQA, surpassing o3-mini and DeepSeek-R1! For each reasoning step, it:

1. Decomposes the question into a DAG of subquestions
2. Contracts the subquestions into a new, simpler question
3. Iterates until reaching an atomic question
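The three steps above describe an iterative decompose-and-contract loop. Here is a minimal sketch of that loop; the `llm_decompose`, `llm_contract`, and `llm_answer` helpers are hypothetical prompted LLM calls for illustration, not the authors' reference implementation.

```python
# Sketch of the Atom of Thoughts loop: decompose into a DAG, contract, iterate.
# All llm_* helpers are hypothetical placeholders for prompted LLM calls.

from typing import Dict, List

def llm_decompose(question: str) -> Dict[str, List[str]]:
    """Split `question` into subquestions with dependency edges (a DAG)."""
    raise NotImplementedError

def llm_contract(question: str, dag: Dict[str, List[str]]) -> str:
    """Answer the independent subquestions and fold them back into a
    new, simpler question covering whatever remains."""
    raise NotImplementedError

def llm_answer(question: str) -> str:
    """Directly answer an atomic question."""
    raise NotImplementedError

def is_atomic(dag: Dict[str, List[str]]) -> bool:
    # Treat the question as atomic once it no longer splits into multiple parts.
    return len(dag) <= 1

def atom_of_thoughts(question: str, max_iters: int = 5) -> str:
    for _ in range(max_iters):
        dag = llm_decompose(question)           # 1. decompose into a DAG
        if is_atomic(dag):
            break
        question = llm_contract(question, dag)  # 2. contract into a simpler question
    return llm_answer(question)                 # 3. answer the atomic question
```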
