We’re Neon, and we’re redefining the database experience with our cloud-native serverless Postgres solution. If you’ve been looking for a database for your RAG apps that adapts to your application load, you’re in the right place: Neon scales your AI apps to millions of users with pgvector. Learn more about Neon, give it a try, and let us know what you think. In this post, Raouf tells you what you need to know about Mistral Large, the most advanced LLM by Mistral AI.
Mistral AI recently unveiled its most advanced large language model (LLM) yet, Mistral Large, alongside Le Chat (beta), its ChatGPT competitor. Le Chat also gives access to other models, such as Mistral Next and Mistral Small, so you can explore Mistral AI’s capabilities.
For those waiting to get their hands on Le Chat but stuck in the queue, this guide will show you how to deploy Mistral Large on Azure and start using it immediately with LangChain.
Before we dive into the deployment process, let’s briefly explore Mistral Large.
Mistral Large
Mistral Large is Mistral AI’s most advanced model, with strong reasoning capabilities across multiple languages, including French, Spanish, German, and Italian. Its generous 32k-token context window makes it interesting for Retrieval-Augmented Generation (RAG) applications.
And most importantly, Mistral Large is pretty good at coding and math. The model ranks among the top performers on the Mostly Basic Programming Problems (MBPP) benchmark, which covers a wide range of difficulty levels and programming concepts and is designed to evaluate models on several fronts, including accuracy and efficiency.
Mistral Large also scores highly on GSM8K, a benchmark of grade-school math word problems that tests multi-step mathematical reasoning.
But don’t just take the benchmarks’ word for it. Next, we’ll deploy the Mistral Large model to Azure and try it for ourselves.
Deploy your own Mistral Large model to Azure
As part of the launch, Mistral AI announced its partnership with Microsoft, making the Mistral Large model available on Azure. Below are the steps to deploy the model:
- Access Azure AI Studio: Sign into your Azure account and navigate to AI Studio.
- Deploy Mistral Large: Look for the “Deploy” option and select Mistral Large for deployment.
- Create a Project: If you haven’t already, set up a new project, opting for the Pay-As-You-Go plan and choosing France Central as your region.
- Review and Create: Double-check your resource information before finalizing your AI project.
- Finalize Deployment: After creating your AI project, proceed to deploy Mistral Large. Choose a deployment name; it will be displayed on, and identify, your inference endpoint.
Congratulations 🎉 You’ve successfully deployed Mistral Large on Azure!
How to use Mistral Large with LangChain
After deployment, you’ll receive an API endpoint URL and a key for making inference calls. We’ll use both below.
To use Mistral Large with LangChain, follow these steps:
- Create project
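For example, from your terminal (the directory name is illustrative):

```shell
# Create a project directory and move into it
mkdir mistral-large-langchain
cd mistral-large-langchain
```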
- Create and activate a Python environment: Run the following commands to create and activate a virtual environment.
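On macOS or Linux, this looks like:

```shell
# Create a virtual environment in the .venv directory
python3 -m venv .venv

# Activate it (on Windows, use: .venv\Scripts\activate)
source .venv/bin/activate
```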
- Install packages and project dependencies:
- Create a LangChain conversation: First, create a main.py file:
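```shell
touch main.py
```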
Here’s an example of how to create a LangChain conversation chain with Mistral Large:
Copy the code above into the main.py file and run `python main.py`.
If everything is set up correctly, the model’s response is printed to your terminal.
Conclusion
There has never been a better time to develop AI-powered applications. With rapid deployments to robust and scalable infrastructures such as Azure’s, developers can create applications that are more intelligent, interactive, and impactful.
If you are building a RAG application, or simply need a Postgres database that scales, Neon with its autoscaling capabilities offers elastic vector search and fast index builds with pgvector, making your AI apps fast and scalable to millions of users.
Start building with Neon for free today, join us on Discord, and let us know what you’re working on and how we can help you build better apps.