Large Language Model (LLM) Settings

When working with prompts, you interact with the LLM directly or through an API. You can configure a few settings to get different results from your prompts.

Temperature - In short, the lower the temperature, the more deterministic the results: the highest-probability next token is always picked. Raising the temperature adds randomness, which encourages more diverse and creative outputs by giving more weight to the other possible tokens. In practice, a lower temperature suits tasks like fact-based QA, where you want accurate and concise responses, while a higher temperature can help with poem writing and other creative tasks.
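To make the mechanism concrete, here is a minimal sketch of temperature-scaled sampling over a toy set of logits, written in Python with NumPy. It is not any provider's actual implementation; the vocabulary size and scores are made up for illustration.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw logits after temperature scaling.

    Lower temperature sharpens the distribution so the most likely token
    dominates; higher temperature flattens it, giving other tokens more
    weight. Temperature 0 is treated as greedy (argmax) decoding.
    """
    rng = rng or np.random.default_rng()
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = np.array(logits, dtype=float) / temperature
    scaled -= scaled.max()                       # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Toy vocabulary of 4 tokens with made-up scores from the model
logits = [4.0, 2.0, 1.0, 0.5]
print(sample_next_token(logits, temperature=0.2))  # almost always token 0
print(sample_next_token(logits, temperature=1.5))  # other tokens appear more often
```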

Top_p - Similarly, top_p, a sampling technique used alongside temperature and known as nucleus sampling, lets you control how deterministic the model's responses are. With top_p, only the tokens whose combined probability mass reaches the top_p value are considered for the next token. Keep this value low for exact, factual answers; raise it to allow a more diverse range of responses.
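Here is a minimal sketch of nucleus (top_p) sampling in the same spirit, again with made-up probabilities: the smallest set of tokens whose cumulative probability reaches top_p is kept, and the next token is drawn only from that set.

```python
import numpy as np

def nucleus_sample(probs, top_p=0.9, rng=None):
    """Sample from the smallest set of tokens whose cumulative probability
    reaches top_p (the 'nucleus'), renormalizing within that set."""
    rng = rng or np.random.default_rng()
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]                   # most likely first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1   # size of the nucleus
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=nucleus_probs))

# Token probabilities after softmax (illustrative values)
probs = [0.55, 0.25, 0.12, 0.05, 0.03]
print(nucleus_sample(probs, top_p=0.5))   # only the top token is eligible
print(nucleus_sample(probs, top_p=0.95))  # nearly the whole vocabulary is eligible
```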

The general recommendation is to adjust one of these parameters at a time rather than both. As you work through some basic examples, keep in mind that your results may vary depending on the version of the LLM you are using; a usage sketch follows below.
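As a usage sketch, this is how the two settings might be passed in an API call, assuming the OpenAI Python SDK; the model name and prompts are placeholders, and other providers expose equivalent temperature and top_p parameters.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Fact-based QA: low temperature, top_p left at its default
factual = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "In what year was the Eiffel Tower completed?"}],
    temperature=0.2,
)

# Creative writing: higher temperature, top_p still untouched
creative = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a short poem about the sea."}],
    temperature=1.0,
)

print(factual.choices[0].message.content)
print(creative.choices[0].message.content)
```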
