Syscovery AI Center

Language Models (LLMs)

The Syscovery AI Center offers a wide range of powerful language models (LLMs) that can be flexibly integrated to meet your company's specific requirements. Our models can be operated compliantly in the Azure AI Service in EU regions, on dedicated GPU cloud servers within our cloud offering, or on-premises on your own GPU servers. Our experienced team will support and advise you in selecting, implementing and optimising the models that suit you best.

OpenAI

GPT-4o

Versatile model that seamlessly integrates text and images.

OpenAI GPT-4o provides a rich user experience with high efficiency and excellent performance in image processing and non-English languages. It enables enhanced customer service interactions and improves decision making through advanced data analytics capabilities.

Integration & Operation:

  • Azure AI Service only (compliant in EU regions).
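For models operated exclusively in the Azure AI Service, integration typically goes through an Azure OpenAI endpoint. The following Python sketch illustrates one possible call, assuming the official openai package; the endpoint, API key, API version and deployment name are placeholders for your own values, not fixed parts of our offering.

    from openai import AzureOpenAI  # pip install openai

    # Placeholder configuration: replace with your own Azure OpenAI resource values.
    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",
        api_key="<your-api-key>",
        api_version="2024-06-01",  # example API version
    )

    # "model" is the name of your GPT-4o deployment, not the model family itself.
    response = client.chat.completions.create(
        model="<your-gpt-4o-deployment>",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarise this support ticket in two sentences."},
        ],
    )
    print(response.choices[0].message.content)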
OpenAI

GPT-4o-mini

Cost-efficient variant for fast and scalable applications.

OpenAI GPT-4o-mini is ideal for customer service chatbots and real-time applications due to its strong performance with parallel model calls. It supports a variety of input and output formats and offers improved handling of non-English texts.

Integration & Operation:

  • Azure AI Service only (compliant in EU regions).
OpenAI

o1-mini

Specialised model for complex problem solving and coding.

OpenAI o1-mini is particularly effective for scientific applications and technical support, while offering a more cost-effective solution. It supports developers in the creation and optimisation of code and in mastering complex technical tasks.

Integration & Operation:

  • Azure AI Service only (in the EU, prompt processing possible in the US).
Mistral AI
Mistral

Small

Highly efficient model for language tasks with low latency.

Mistral Small is ideal for real-time applications and offers outstanding capabilities in multilingual environments as well as in coding. It is specially optimised for high efficiency and low latency for large language tasks.

Integration & Operation:

  • Azure AI Service (in the EU, prompt processing possible in the US).
  • Compliant on a dedicated GPU VM.
  • Compliant on an on-premises GPU server.
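For the dedicated GPU VM and on-premises options listed above, open-weights Mistral models can be served with standard inference tooling. The sketch below uses the vLLM library; the model identifier and sampling settings are illustrative assumptions rather than fixed recommendations.

    from vllm import LLM, SamplingParams  # pip install vllm

    # Illustrative checkpoint; any open-weights Mistral model available on the
    # GPU server can be loaded the same way.
    llm = LLM(model="mistralai/Mistral-Small-Instruct-2409")

    params = SamplingParams(temperature=0.2, max_tokens=256)
    outputs = llm.generate(
        ["Summarise our returns policy in two sentences."],
        params,
    )
    print(outputs[0].outputs[0].text)

vLLM can also expose an OpenAI-compatible HTTP endpoint, so existing client code can be pointed at the local server instead of a cloud API.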
Mistral

Large

Advanced model with first-class skills in reasoning, knowledge and coding.

Mistral Large is perfect for complex analysis tasks and multilingual dialogue applications. It offers state-of-the-art performance in mathematical and logical tasks as well as in the support of advanced coding projects.

Integration & Operation:

  • Azure AI Service (in the EU, prompt processing possible in the US).
  • Compliant on a dedicated GPU VM.
  • Compliant on an on-premises GPU server.
Mistral

Nemo

Powerful multimodal model with broad language support.

Mistral Nemo supports over 100 languages and offers outstanding capabilities in visual comprehension and complex linguistic tasks. It is ideal for multilingual image analyses and visual question answering.

Integration & Operation:

  • Azure AI Service (in the EU, prompt processing possible in the US).
  • Compliant on a dedicated GPU VM.
  • Compliant on an on-premises GPU server.
Mistral

Ministral 3B

Compact model, optimised for edge computing and real-time applications.

Mistral Ministral 3B is ideal for local translations, smart assistants without an internet connection and autonomous robotics. It delivers fast and efficient results for applications with high real-time requirements.

Integration & Operation:

  • Azure AI Service (in the EU, prompt processing possible in the US).
  • Compliant on a dedicated GPU VM.
  • Compliant on an on-premises GPU server.
Microsoft
Microsoft

Phi-4

State-of-the-art model focussing on precise instructions and robust safety measures.

Microsoft Phi-4 is ideal for demanding applications in areas such as science, law and technical fields. It ensures precise processing of instructions and high security standards.

Integration & Operation:

  • Azure AI Service (in the EU, prompt processing possible in the US).
  • Compliant on a dedicated GPU VM.
  • Compliant on an on-premises GPU server.
Microsoft

Phi-3.5 Vision

Lightweight multimodal model for processing text and images.

Microsoft Phi-3.5 Vision is ideal for applications that require both visual and textual data, such as image descriptions and visual analyses. It combines high efficiency with excellent performance in image processing.

Integration & Operation:

  • Azure AI Service (in the EU, prompt processing possible in the US).
  • Compliant on a dedicated GPU VM.
  • Compliant on an on-premises GPU server.
Meta
Meta

Llama 3.1-405B

Multilingual optimised model for global dialogue applications.

Meta Llama 3.1 is also available in 8B and 70B sizes, and the 405B variant outperforms many existing models in industry benchmarks. It is ideal for a wide range of communication requirements in multiple languages.

Integration & Operation:

  • Azure AI Service (in the EU, prompt processing possible in the US).
  • Compliant on a dedicated GPU VM.
  • Compliant on an on-premises GPU server.
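Non-OpenAI models deployed through the Azure AI Service are usually reached via a model-specific inference endpoint. The sketch below uses the azure-ai-inference Python package against a hypothetical Llama deployment; the endpoint URL and key are placeholders.

    from azure.ai.inference import ChatCompletionsClient  # pip install azure-ai-inference
    from azure.ai.inference.models import SystemMessage, UserMessage
    from azure.core.credentials import AzureKeyCredential

    # Placeholder endpoint and key for a Llama deployment in the Azure AI Service.
    client = ChatCompletionsClient(
        endpoint="https://<your-llama-endpoint>",
        credential=AzureKeyCredential("<your-api-key>"),
    )

    response = client.complete(
        messages=[
            SystemMessage(content="You answer customer questions in German and English."),
            UserMessage(content="What are your delivery times for orders within the EU?"),
        ],
    )
    print(response.choices[0].message.content)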
Meta

Llama 3.3-70B

Refined model for multilingual dialogues with excellent performance.

Meta Llama 3.3-70B supports a wide range of languages and offers outstanding performance in multilingual communication scenarios. It is particularly suitable for complex and diverse dialogue applications.

Integration & Operation:

  • Azure AI Service (in the EU, prompt processing possible in the US).
  • Compliant on a dedicated GPU VM.
  • Compliant on an on-premises GPU server.
Meta

Llama 3.2-11B Vision

Multimodal model for enhanced visual understanding.

Meta Llama 3.2-11B Vision combines text and image processing for applications such as image description, visual question answering and multilingual image analyses. It offers strong capabilities in the integration of visual and textual data.

Integration & Operation:

  • Azure AI Service (in the EU, prompt processing possible in the US).
  • Compliant on a dedicated GPU VM.
  • Compliant on an on-premises GPU server.
Meta

Llama distilled models

Smaller, cost-efficient models for specific applications.

Meta Llama distilled model variants in sizes 1B, 3B, 7B, 8B and 13B are well suited to tasks that require fewer languages and specialised functions, while still offering strong performance. They are ideal for clearly defined application scenarios where efficiency, language coverage and cost are important.

Integration & Operation:

  • Some variants in the Azure AI Service (in the EU, prompt processing possible in the US).
  • Compliant on a dedicated GPU VM.
  • Compliant on an on-premises GPU server.
Deepseek
Deepseek

R1-671B

Impressive model for sophisticated reasoning and complex tasks.

DeepSeek R1-671B is perfect for scientific research, extensive data analyses and advanced coding activities. It offers state-of-the-art performance in logical and mathematical tasks.

Integration & Operation:

  • Azure AI Service only (in the EU, prompt processing possible in the US).
Deepseek

R1 distilled models

Smaller, cost-efficient models for specific applications.

Deepseek R1 distilled model variants in sizes 1.5B, 7B, 8B, 14B, 32B and 70B are well suited to tasks that require fewer languages and specialised functions, while still offering strong performance. They are ideal for clearly defined application scenarios where efficiency, language coverage and costs play a role.

Integration & Operation:

  • Compliant on a dedicated GPU VM.
  • Compliant on an on-premises GPU server.
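Because the distilled R1 variants run only on dedicated or on-premises GPU hardware, a simple way to evaluate them is with the Hugging Face transformers library. The sketch below is only an illustration: the checkpoint name is one of the publicly released distilled variants, and the generation settings are arbitrary.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers accelerate

    # Illustrative checkpoint; the other distilled sizes follow the same pattern.
    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        device_map="auto",  # spreads the weights across the available GPUs
    )

    messages = [{"role": "user", "content": "Explain the difference between precision and recall."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))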

More models

Cohere

Cohere offers large language models for search and advanced text generation.

Hugging Face

Hugging Face offers thousands of models covering categories from text generation to image analysis.

Stability AI

Stability AI provides models for generating images and videos from text, including Stable Diffusion and Stable Video Diffusion.

Is your favourite model not listed?

Our team is at your disposal to help you choose the language models best suited to your specific business requirements and to ensure smooth integration into your existing systems. Contact us to find out more about the possibilities.