Understanding the Mixtral-8x7b-instruct-v0.1 model
Model overview
| Attribute | Details |
|---|---|
| Provider | Mistral |
| Compatible Instances | H100 (FP8), H100-2 (BF16) |
| Context size | 32k tokens |
Model names
- mistral/mixtral-8x7b-instruct-v0.1:fp8
- mistral/mixtral-8x7b-instruct-v0.1:bf16

The model name allows Scaleway to put your prompts in the expected format.
Compatible Instances
| Instance type | Max context length |
|---|---|
| H100 | 32k (FP8) |
| H100-2 | 32k (BF16) |
Model introduction
Mixtral-8x7b-instruct-v0.1, developed by Mistral, is tailored for instructional platforms and virtual assistants. Trained on vast instructional datasets, it provides clear and concise instructions across various domains, enhancing user learning experiences.
Why is it useful?
Mixtral-8x7b-instruct-v0.1, trained on the Nabuchodonosor supercomputer, delivers high-quality instruction generation with exceptional performance. This model excels in code generation and understanding multiple languages, making it an ideal choice for developing virtual assistants or educational platforms that require reliability and excellence.
How to use it
Sending Inference requests
To perform inference tasks with your Mixtral model deployed on Scaleway, use the following command:
curl -s \
  -H "Authorization: Bearer <IAM API key>" \
  -H "Content-Type: application/json" \
  --request POST \
  --url "https://<Deployment UUID>.ifr.fr-par.scaleway.com/v1/chat/completions" \
  --data '{"model":"mistral/mixtral-8x7b-instruct-v0.1:fp8", "messages":[{"role": "user","content": "Sing me a song about Scaleway"}], "max_tokens": 200, "top_p": 1, "temperature": 1, "stream": false}'
Make sure to replace `<IAM API key>` and `<Deployment UUID>` with your actual IAM API key and the Deployment UUID you are targeting.
Ensure that the `messages` array is properly formatted with roles (system, user, assistant) and content.
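The same request can also be sent from Python. The sketch below uses the `requests` library and mirrors the curl example above; the API key and Deployment UUID are placeholders, and the system message is a hypothetical addition included only to illustrate the role structure.

```python
import requests

# Placeholders: replace with your IAM API key and Deployment UUID.
API_KEY = "<IAM API key>"
DEPLOYMENT_UUID = "<Deployment UUID>"

url = f"https://{DEPLOYMENT_UUID}.ifr.fr-par.scaleway.com/v1/chat/completions"

payload = {
    "model": "mistral/mixtral-8x7b-instruct-v0.1:fp8",
    # Each entry in "messages" pairs a role (system, user, assistant) with content.
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},  # hypothetical system prompt
        {"role": "user", "content": "Sing me a song about Scaleway"},
    ],
    "max_tokens": 200,
    "top_p": 1,
    "temperature": 1,
    "stream": False,
}

response = requests.post(
    url,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json())
```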
Receiving Managed Inference responses
Upon sending the HTTP request to the public or private endpoints exposed by the server, you will receive inference responses from the Managed Inference server. Process the output data according to your application’s needs. The response will contain the output generated by the LLM based on the input provided in the request.
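Assuming the response body follows the OpenAI-style chat completions schema commonly served on a `/v1/chat/completions` route, the generated text can be read from the first entry of the `choices` array. Continuing from the Python sketch above:

```python
data = response.json()

# Assumption: the body mirrors the OpenAI chat completions schema,
# with the generated message nested under choices[0].message.content.
generated_text = data["choices"][0]["message"]["content"]
print(generated_text)
```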
Despite efforts to ensure accuracy, the generated text may contain inaccuracies or hallucinations. Always verify the generated content independently.