
Understanding the Mixtral-8x7B-Instruct-v0.1 model

Model overview

Attribute              Details
Provider               Mistral
Model Name             mixtral-8x7b-instruct-v0.1
Compatible Instances   H100 (INT8) - H100-2 (FP16)
Context size           32k tokens

Model names

mistral/mixtral-8x7b-instruct-v0.1:int8
mistral/mixtral-8x7b-instruct-v0.1:fp16

Compatible Instances

  • H100 (INT8)
  • H100-2 (FP16)

Model introduction

Mixtral-8x7B-Instruct-v0.1, developed by Mistral, is tailored for instructional platforms and virtual assistants. Trained on vast instructional datasets, it provides clear and concise instructions across various domains, enhancing user learning experiences.

Why you will love it

Mixtral-8x7B-Instruct-v0.1, trained on the Nabuchodonosor supercomputer, delivers high-quality instruction generation with exceptional performance. This model excels in code generation and understanding multiple languages, making it an ideal choice for developing virtual assistants or educational platforms that require reliability and excellence.

How to use it

Sending Inference requests

To perform inference tasks with your Mixtral model deployed at Scaleway, use the following command:

curl -s \
  -H "X-Auth-Token: <IAM API key>" \
  -H "Content-Type: application/json" \
  --request POST \
  --url "https://<Deployment UUID>.ifr.fr-par.scw.cloud/" \
  --data '{"text_input": "<s>[INST] Sing me a song about Scaleway [/INST]", "max_tokens": 200, "temperature": 0.5, "random_seed": 1, "top_p": 0.9}' | jq -r .text_output

Make sure to replace <IAM API key> and <Deployment UUID> with your actual IAM API key and the UUID of the deployment you are targeting. Note that the prompt ends at [/INST], so that the model generates the answer itself.

Note

Ensure that the input data is properly formatted according to the model’s input requirements.
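If you prefer calling the endpoint from code, the following minimal Python sketch sends the same request with the requests library. The placeholders, header, and payload mirror the curl example above; nothing here is a dedicated SDK.

import requests

# Placeholders: replace with your deployment UUID and IAM API key,
# exactly as in the curl command above.
ENDPOINT = "https://<Deployment UUID>.ifr.fr-par.scw.cloud/"
API_KEY = "<IAM API key>"

payload = {
    "text_input": "<s>[INST] Sing me a song about Scaleway [/INST]",
    "max_tokens": 200,
    "temperature": 0.5,
    "random_seed": 1,
    "top_p": 0.9,
}

response = requests.post(
    ENDPOINT,
    headers={"X-Auth-Token": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
# The generated text is returned in the text_output field.
print(response.json()["text_output"])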

Prompt engineering

To prompt Mixtral-8x7B-Instruct effectively and get optimal outputs, it is recommended to use the following chat template:

<s>[INST] Instruction [/INST] Model answer</s>[INST] Follow-up instruction [/INST]
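To illustrate how the template composes over multiple turns, here is a small helper (a hypothetical sketch, not part of any Scaleway or Mistral SDK) that assembles a prompt from (instruction, answer) pairs:

def build_prompt(turns):
    """Assemble a Mixtral chat prompt from (instruction, answer) pairs.

    The answer of the final turn may be None, meaning the model is
    expected to generate it after the closing [/INST] tag.
    """
    prompt = "<s>"
    for instruction, answer in turns:
        prompt += f"[INST] {instruction} [/INST]"
        if answer is not None:
            # Completed turns are closed with </s>, per the template.
            prompt += f" {answer}</s>"
    return prompt

# First turn: the model generates the answer.
print(build_prompt([("Sing me a song about Scaleway", None)]))
# Follow-up turn: the previous answer is included as context.
print(build_prompt([
    ("Sing me a song about Scaleway", "La la la..."),
    ("Now make it rhyme", None),
]))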

Receiving LLM Inference responses

Once you send an HTTP request to the public or private endpoint exposed by your deployment, the managed LLM Inference server returns an inference response. Process the output data according to your application's needs: the response contains the text generated by the model from the input provided in the request.
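For example, the body can be handled along these lines, assuming the JSON response carries the generated text in the text_output field (the same field the jq filter above extracts):

import json

def extract_text(raw_body: str) -> str:
    """Pull the generated text out of an inference response body."""
    body = json.loads(raw_body)
    try:
        # Same field the `jq -r .text_output` filter reads above.
        return body["text_output"]
    except KeyError:
        raise ValueError(f"unexpected response shape: {body}") from None

# Example with a made-up response body:
print(extract_text('{"text_output": "Scaleway, oh Scaleway..."}'))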

Note

Despite best efforts for accuracy, generated text may contain inaccuracies or hallucinations. Always verify generated content independently.
