Golem.ai is a fast-growing startup in the AI sector. With its linguistic-universals approach, the startup stands out for its low-carbon, explainable and sovereign technology.
Over 500 users analyze documents with Golem.ai’s two flagship products:
- InboxCare: analyzes, classifies and routes all types of incoming messages (emails, text messages, social networks, chatbots, etc.) to offer users the most suitable response and to accelerate the handling of incoming requests for companies
- DocuChecker: analyzes documents to identify and extract key information (for example, from a call for tenders)
At the start of 2021, Golem.ai’s technical teams decided it was time to change their cloud services provider (CSP).
A lack of support, recurring technical issues and gaps in their current CSP's product catalog pushed Golem.ai to consider migrating to another provider.
Building a sovereign and scalable AI solution
While Golem.ai was beginning to scale up, the technical teams decided to carry out a benchmark of possible solutions. The following criteria were brought into play:
- Most of Golem.ai’s clients are companies in the public sector, retail and insurance. Data governance was non-negotiable – all data needed to be hosted in France with a sovereign provider
- The new cloud provider needed to offer services such as managed Kubernetes, Object Storage and Managed Databases
- Golem.ai wanted to avoid any non-standard features exclusive to a single hosting provider, which would inevitably add maintenance costs to their on-premise offering
With these criteria in mind, Golem.ai had three options to consider – staying with their current provider, or migrating to OVH or to Scaleway.
Not just a cloud provider, but also a partner
Scaleway decided to accompany the ambitious startup on an adventure that would surely include many challenges. Scaleway’s teams worked alongside Golem.ai’s via dedicated support. During the test phase, Golem.ai’s developers appreciated several advantages of Scaleway:
- Having access to the latest versions of Kubernetes
- Using Dedibox dedicated servers to manage temporary load increases and avoid overloading
- The easy implementation of a hybrid cloud, combining a SaaS cluster hosted by Golem.ai with an on-premise cluster at a client’s site
The test phase was also carried out with OVH in parallel.
While the prices of the two cloud providers are similar, environmental impact, values and certifications also weigh heavily in a client’s decision-making process.
At the end of the two months, the products and developer experience at Scaleway proved to be the winning combination, and marked the start of the partnership between Golem.ai and the cloud provider.
Shifting to a resilient, and more environmentally friendly architecture
Following the successful test phase, Golem.ai migrated their infrastructure to Scaleway and is already envisioning new use cases, such as those enabled by Kosmos, Scaleway’s multi-cloud Kubernetes engine.
This new infrastructure is also a way for Golem.ai to pursue its environmental ambitions. Artificial intelligence, and machine learning in particular, currently gets bad press for its energy consumption, so Golem.ai wants to build an environmentally friendly AI solution. The first step was to create Frugal AI; the next was to host data in Scaleway’s fr-par-2 data center, which has the lowest PUE (Power Usage Effectiveness) on the market.
Thanks to the two companies’ shared values of transparency, ethics and evangelism, this partnership was built on a solid foundation.
Key indicators of success
Golem.ai has now been one of Scaleway’s partners for over a year, and the startup considers this migration to be a success for three reasons:
- The stability of Scaleway’s services
- The supportive network and sharing within Scaleway’s Slack community, and the proximity of their technical teams
- Scaleway’s teams’ innovation and constant development of new services
Boosted by this success, Golem.ai plans to test Serverless Functions next, in order to optimize the processing of textual data.