Chain-of-Thought (CoT) Prompting

2024. 7. 14. 20:16 | AI/Prompt Engineering

Chain-of-Thought prompting is a technique that has large language models (LLMs) break a complex problem into multiple intermediate steps and reason through them.

It is reported to improve a model's reasoning ability, helping it solve more complex problems.

 

Consider the example from [2201.11903] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (arxiv.org):

 


[Standard Prompting]

Model Input:

Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?

A: The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?

 

Model Output:

A: The answer is 27.

 

The correct answer is 9, so this response is wrong.


 

[Chain-of-Thought Prompting]

Model Input:

Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?

 

Model Output:

A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9.

 

In this case, because the prompt demonstrates the reasoning steps that lead to the answer, the model arrives at the correct answer of 9.


 

Note that in the current, latest ChatGPT-4o, the example above already produces the correct answer even with standard prompting (tested several times).

 

 

Sources:

https://www.ibm.com/topics/chain-of-thoughts (What is Chain of Thoughts (CoT)? | IBM)

https://deepgram.com/learn/chain-of-thought-prompting-guide (Chain-of-Thought Prompting: Helping LLMs Learn by Example | Deepgram)

https://ar5iv.labs.arxiv.org/html/2201.11903 (Chain-of-Thought Prompting Elicits Reasoning in Large Language Models)

https://attri.ai/generative-ai-wiki/chain-of-thought-prompting (Chain-of-Thought Prompting | Generative AI Wiki)

https://arxiv.org/abs/2201.11903 (Chain-of-Thought Prompting Elicits Reasoning in Large Language Models)

 
