
Next-level performance

A faster chip, more RAM, and more storage: our most powerful Mac mini configuration available.

Ready in a click

Launch your Mac mini M4 Pro in a click from our Public Cloud Console and access it right away through VNC or CLI.

Hosted in Europe

Rely on our Mac mini M4 Pro as a Service, hosted in our European datacenter in Paris, France, for a unique sovereign offer.

Technical specifications

  • CPU: Apple M4 Pro chip (14-core CPU, 20-core GPU, 16-core Neural Engine)

  • Memory: 64 GB LPDDR5X

  • Storage: 2 TB

  • Disk type: SSD

  • Bandwidth: Up to 10 Gbps

  • OS: Latest macOS versions available

Mac mini M4 Pro for Artificial Intelligence

High-performance AI at lower cost

The M4-XL (64 GB) lets you run models of up to 24B parameters at 16-bit precision for €0.49/h, whereas an L4 or L40S GPU would cost two to three times more (up to €1.72/h). A 24B-parameter model at 16 bits needs roughly 48 GB for its weights, which fits comfortably in the 64 GB of unified memory.

Easy access

The Mac mini M4 Pro is ideal for batch and CLI workloads. With llm-mlx, download and run models straight from the Hugging Face Hub using their ID, as in the sketch below.
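
As an illustration, here is a minimal sketch of that flow using llm's Python API with the llm-mlx plugin (assumptions: the plugin is installed, the model, one of those benchmarked below, has already been downloaded, and the prompt is a placeholder):

```python
# One-time setup in a terminal (assumed already done):
#   llm install llm-mlx
#   llm mlx download-model mlx-community/DeepSeek-R1-Distill-Llama-8B-4bit
import llm

# llm-mlx registers downloaded models under their Hugging Face Hub ID
model = llm.get_model("mlx-community/DeepSeek-R1-Distill-Llama-8B-4bit")

# Placeholder prompt; run any batch of prompts the same way
response = model.prompt("Summarize the advantages of unified memory in two sentences.")
print(response.text())
```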

Discover the performance

Model       | Model ID                                        | Input tokens/sec | Output tokens/sec | Peak memory (GB)
70B, 4 bits | mlx-community/Llama-3.3-70B-Instruct-4bit       | 29.1             | 6.2               | 39.8
8B, 8 bits  | mlx-community/DeepSeek-R1-Distill-Llama-8B-8bit | 75.7             | 30                | 8.7
8B, 4 bits  | mlx-community/DeepSeek-R1-Distill-Llama-8B-4bit | 124.3            | 53.6              | 4.7
12B, 8 bits | mlx-community/Mistral-Nemo-Instruct-2407-8bit   | 60.2             | 19.5              | 13.1
27B, 8 bits | mlx-community/gemma-2-27b-it-8bit               | 15.6             | 8.4               | 29.5
32B, 8 bits | mlx-community/Qwen2.5-32B-Instruct-8bit         | 48.2             | 7.1               | 34.9

Apple Intelligence at your fingertips

Apple frameworks

Train your models with MLX on Apple Silicon CPU & GPU and leverage the Foundation Models Framework to directly integrate Apple AI into your applications.
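
For instance, a minimal MLX training loop in Python might look like the sketch below (the tiny network, random batch, and hyperparameters are placeholders; the mlx package is assumed to be installed):

```python
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

# Placeholder two-layer network standing in for a real model
class MLP(nn.Module):
    def __init__(self, in_dims: int, hidden: int, out_dims: int):
        super().__init__()
        self.l1 = nn.Linear(in_dims, hidden)
        self.l2 = nn.Linear(hidden, out_dims)

    def __call__(self, x):
        return self.l2(nn.relu(self.l1(x)))

def loss_fn(model, x, y):
    return mx.mean(nn.losses.cross_entropy(model(x), y))

model = MLP(64, 128, 10)
optimizer = optim.Adam(learning_rate=1e-3)
loss_and_grad = nn.value_and_grad(model, loss_fn)

# Random placeholder batch; substitute your own data pipeline
x = mx.random.normal((32, 64))
y = mx.random.randint(0, 10, (32,))

for step in range(100):
    loss, grads = loss_and_grad(model, x, y)      # forward + backward
    optimizer.update(model, grads)                # apply the gradient step
    mx.eval(model.parameters(), optimizer.state)  # force lazy evaluation

print(f"final loss: {loss.item():.4f}")
```

Computation runs on the Apple Silicon GPU by default, with all tensors living in unified memory.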

Daily boost

Execute LLM queries, summarize articles, structure your notes, or generate code in Xcode.

Integrated productivity

Intelligent assistants, autonomous agents, coding intelligence: Apple AI easily integrates into your tools.

A dedicated Apple server for heavy workloads

High performance

The M4 Pro chip provides significant performance improvements that make it well suited for demanding tasks such as machine learning, data processing, and 3D rendering.

Unified memory architecture

Apple’s unified memory system allows the CPU, GPU, and Neural Engine to access the same memory pool, which boosts performance and efficiency for AI workloads that require fast data exchange.
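
A short MLX sketch of what this means in practice (assuming the mlx package is installed): the same arrays can be processed by the CPU or the GPU without copying anything between separate device memories.

```python
import mlx.core as mx

# Arrays are allocated once in unified memory; no explicit device placement
a = mx.random.normal((4096, 4096))
b = mx.random.normal((4096, 4096))

# The same buffers are visible to both processors, so no transfer is needed
c_gpu = mx.matmul(a, b, stream=mx.gpu)  # execute on the GPU
c_cpu = mx.matmul(a, b, stream=mx.cpu)  # execute on the CPU

mx.eval(c_gpu, c_cpu)  # MLX is lazy; force both computations
print(mx.mean(mx.abs(c_gpu - c_cpu)).item())  # only tiny float rounding differences
```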

Optimized macOS ecosystem

macOS is increasingly optimized for AI and ML workflows, with native support for frameworks such as Core ML and Create ML, and compatibility with popular libraries through native Apple Silicon builds.
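
For example, a model trained in PyTorch can be converted with coremltools and then served through Core ML on the machine (a minimal sketch; the tiny network is a placeholder, and torch and coremltools are assumed to be installed):

```python
import torch
import coremltools as ct

# Placeholder network standing in for a real trained model
net = torch.nn.Sequential(torch.nn.Linear(10, 5), torch.nn.ReLU())
net.eval()

# Core ML conversion works from a traced (or scripted) TorchScript module
example = torch.rand(1, 10)
traced = torch.jit.trace(net, example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=example.shape)],
    convert_to="mlprogram",          # modern Core ML model format
)
mlmodel.save("TinyModel.mlpackage")  # loadable from Swift, Objective-C, or Python
```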

Full access and autonomy

All the performance is dedicated to your use, with enhanced security fully in your hands. Customize settings, leverage your own tools, and manage your Mac mini to suit your needs and preferences.

Get started with tutorials