
Understanding the Llama-2-7b-chat model

Model overview

Attribute              Details
Provider               Meta
Model Name             llama-2-7b-chat
Compatible Instances   H100 (FP16, FP8) - L4 (FP16, FP8)
Context size           4,096 tokens

Model names

meta/llama-2-7b-chat:fp16
meta/llama-2-7b-chat:fp8

Compatible Instances

  • H100 (FP16, FP8)
  • L4 (FP16, FP8)

Model introduction

This is the Llama-2-7b-chat model, developed by Meta and fine-tuned on instruction data to make it a better chatbot.

Why you will love it

The Llama-2-7b-chat model is versatile, knowledgeable, creative, constantly learning, and friendly, making it a valuable conversational companion and source of assistance.

How to use it

Sending LLM Inference requests

To perform inference tasks with your Llama-2 deployment at Scaleway, use the following command:

curl -s \
-H "X-Auth-Token: <IAM API key>" \
-H "Content-Type: application/json" \
--request POST \
--url "https://<Deployment UUID>.ifr.fr-par.scw.cloud/" \
--data '{"text_input": "[INST]There's a llama in my garden, what should I do? [/INST]", "max_tokens": 200, "temperature": 0.2, "random_seed": 1, "top_p": 0.9}' | jq -r .text_output

Make sure to replace <IAM API key> and <Deployment UUID> with your actual IAM API key and the Deployment UUID you are targeting.
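To avoid pasting credentials into every command, you can keep the key and deployment UUID in environment variables. The sketch below is only illustrative: the variable names SCW_IAM_API_KEY and DEPLOYMENT_UUID are not required by the API, they simply parameterize the same request.

# Store the credentials once, then reuse them in each request.
export SCW_IAM_API_KEY="<IAM API key>"
export DEPLOYMENT_UUID="<Deployment UUID>"

curl -s \
-H "X-Auth-Token: ${SCW_IAM_API_KEY}" \
-H "Content-Type: application/json" \
--request POST \
--url "https://${DEPLOYMENT_UUID}.ifr.fr-par.scw.cloud/" \
--data '{"text_input": "[INST] Hello, who are you? [/INST]", "max_tokens": 100, "temperature": 0.2, "top_p": 0.9}' | jq -r .text_output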

Note

Ensure that the text_input data is properly formatted according to the model’s input requirements.

Prompt engineering

Here is an example of the prompt format used to define system and instruction prompts, configuring the model as a virtual assistant that delivers only constructive and respectful responses.

<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
There's a llama in my garden, what should I do?
[/INST]
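
As a minimal sketch (not an official example), you can send this multi-line template as text_input by building the JSON body with jq, which handles escaping of quotes and newlines; the shortened system prompt and the sampling parameters below are only illustrative.

# Bash: read the multi-line prompt into a variable, then let jq produce valid JSON.
read -r -d '' PROMPT <<'EOF'
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.
<</SYS>>
There is a llama in my garden, what should I do?
[/INST]
EOF

jq -n --arg prompt "$PROMPT" '{text_input: $prompt, max_tokens: 200, temperature: 0.2, top_p: 0.9}' | curl -s \
-H "X-Auth-Token: <IAM API key>" \
-H "Content-Type: application/json" \
--request POST \
--url "https://<Deployment UUID>.ifr.fr-par.scw.cloud/" \
--data @- | jq -r .text_output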

Receiving Inference responses

Once you send the HTTP request to the public or private endpoint exposed by your deployment, you will receive an inference response from the managed LLM Inference server. Process the output data according to your application's needs. The response contains the output generated by the LLM model based on the input provided in the request.
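The following is a minimal sketch, assuming the same endpoint and a placeholder payload as above: it saves the JSON body to a file, checks the HTTP status code reported by curl, and only extracts text_output when the request succeeded.

# Capture the HTTP status code while writing the response body to response.json.
STATUS=$(curl -s -o response.json -w "%{http_code}" \
-H "X-Auth-Token: <IAM API key>" \
-H "Content-Type: application/json" \
--request POST \
--url "https://<Deployment UUID>.ifr.fr-par.scw.cloud/" \
--data '{"text_input": "[INST] Hello, who are you? [/INST]", "max_tokens": 100}')

if [ "$STATUS" = "200" ]; then
  # Extract only the generated text from the JSON body.
  jq -r .text_output response.json
else
  echo "Request failed with HTTP status $STATUS" >&2
  cat response.json >&2
fi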

Note

Despite efforts to ensure accuracy, generated text may contain inaccuracies or hallucinations. Always verify the generated content independently.
