Sunday, April 2, 2023

Ask HN: How are you testing your LLM prompts?

How do you make sure a prompt change works well? How do you ensure it doesn't break anything that was already working?

Changing prompts in small ways can lead to unpredictable behavior. That's even more concerning as we build larger apps on something like LangChain, which requires the output to follow a rigid format.

My instinct would be to run a unit test suite on every prompt change. Is there an existing framework for this? Or otherwise, how are you testing your changes?
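One way the unit-test instinct could look in practice: assert structural properties of the model's output (valid JSON, expected keys, value ranges) rather than exact wording, so the test survives harmless phrasing changes. This is only a hedged sketch; `call_llm` and `PROMPT_TEMPLATE` are hypothetical names standing in for a real API call and a real prompt.

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real completion API call; stubbed so the
    # sketch is self-contained and runnable.
    return '{"sentiment": "positive", "confidence": 0.9}'

# Hypothetical prompt under test: asks for a rigid JSON shape.
PROMPT_TEMPLATE = (
    'Classify the sentiment of the following review. Respond only with '
    'JSON containing the keys "sentiment" and "confidence":\n{review}'
)

def test_output_is_valid_json_with_expected_keys():
    output = call_llm(PROMPT_TEMPLATE.format(review="Great product!"))
    data = json.loads(output)  # test fails here if output isn't JSON
    assert set(data) == {"sentiment", "confidence"}
    assert data["sentiment"] in {"positive", "negative", "neutral"}
    assert 0.0 <= data["confidence"] <= 1.0
```

Run under pytest (or any test runner) on every prompt edit; with a real API behind `call_llm`, you would also want to run each test several times, since the same prompt can produce different outputs across calls.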


Comments URL: https://news.ycombinator.com/item?id=35414060

Points: 4

# Comments: 0



from Hacker News: Newest https://ift.tt/5IR2ODG
via IFTTT

