AI Model Database
Comprehensive database of open-source AI models from leading providers. Compare specifications and architectures, and find optimal deployment configurations.
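Choosing a deployment configuration usually starts from a back-of-the-envelope memory estimate: weight memory scales with parameter count times bits per weight, which is why quantization matters so much for fitting large models on consumer GPUs. A minimal sketch, where the function name and the 20% overhead factor are illustrative assumptions, not measured values:

```python
def estimate_vram_gb(params_billions, bits_per_weight, overhead=1.2):
    """Rough VRAM needed to serve a model.

    The overhead factor (a hypothetical 20%) stands in for activations,
    KV cache, and framework buffers; real usage varies by runtime.
    """
    weight_gb = params_billions * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return weight_gb * overhead

# A 70B model at 4-bit quantization vs. full 16-bit precision:
print(round(estimate_vram_gb(70, 4), 1))   # 42.0
print(round(estimate_vram_gb(70, 16), 1))  # 168.0
```

The 4x gap between 4-bit and 16-bit is what moves a 70B model from multi-GPU territory onto a pair of 24 GB consumer cards.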
AI21 Labs (2 models)

Mamba-Transformer Hybrid
Hybrid Mamba-Transformer model with a 256K-token context window.

Mamba-Transformer Hybrid
Compact hybrid model with a 256K-token context window.
Alibaba (11 models)

Transformer
Alibaba's flagship model with strong multilingual and coding capabilities.

Transformer
Mid-size Qwen model with excellent coding and reasoning capabilities.

Transformer
Consumer-GPU-friendly model with strong multilingual support.

Transformer
Efficient multilingual model suitable for edge deployment.

Transformer
Compact multilingual model for mobile and edge devices.

Transformer
Ultra-lightweight multilingual model for minimal hardware.

Transformer
Specialized coding model with state-of-the-art code generation capabilities.

Transformer
Efficient coding model suitable for consumer hardware.

Transformer
Lightweight coding model for edge deployment.

Transformer
Specialized mathematical-reasoning model with exceptional problem-solving capabilities.

Vision Transformer
Vision-language model with exceptional document and image understanding.
BigCode/ServiceNow (3 models)

Transformer
Open code generation model with fill-in-the-middle capabilities.

Transformer
Efficient code generation model.

Transformer
Compact code generation model for edge deployment.
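Fill-in-the-middle (FIM) means the model completes a span between existing code, rather than only continuing from the end; it is what powers in-editor completion. A sketch of the prompt layout, assuming the StarCoder-style special tokens `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` (verify against the tokenizer config of the specific checkpoint before relying on them):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange code around a gap so the model generates the missing middle.

    Token names follow the StarCoder convention; other FIM-trained
    models may use different special tokens.
    """
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

code_before = "def add(a, b):\n    return "
code_after = "\n\nprint(add(2, 3))\n"
prompt = build_fim_prompt(code_before, code_after)
print(prompt.startswith("<fim_prefix>"))  # True
```

The model's completion is everything it emits after `<fim_middle>`, which the editor splices back into the gap.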
Cohere (3 models)

Transformer
Cohere's most capable model, with advanced RAG and tool-use capabilities.

Transformer
Efficient model with strong RAG and tool-use capabilities.

Transformer
Compact model with RAG and tool-use capabilities; Apache 2.0 licensed.
Databricks (1 model)

Mixture of Experts
Databricks' open MoE model with strong coding and reasoning capabilities.
DeepSeek (7 models)

Mixture of Experts
DeepSeek V3 is a 671B-parameter MoE model with only 37B parameters activated per token, delivering exceptional efficiency.

Mixture of Experts
Open-source reasoning model matching OpenAI o1 performance on math, code, and logic tasks.

Transformer
Reasoning capabilities distilled into a 70B model, offering excellent reasoning at a manageable size.

Transformer
Compact reasoning model with impressive performance for its size.

Transformer
Consumer-GPU-friendly reasoning model with strong performance.

Transformer
Edge-friendly reasoning model suitable for consumer hardware.

Transformer
Ultra-lightweight reasoning model for minimal hardware.
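The 671B-total / 37B-active split is the core MoE trade-off: all experts must stay resident in memory, but each token only runs through the routed subset, so per-token compute scales with active parameters rather than total. A rough arithmetic sketch (the 2-FLOPs-per-weight rule of thumb and the 8-bit assumption are illustrative, not DeepSeek's published serving numbers):

```python
def moe_footprint(total_b, active_b, bits_per_weight=8):
    """Memory is set by TOTAL parameters (all experts resident);
    per-token compute is set by ACTIVE parameters (routed experts only)."""
    weight_gb = total_b * bits_per_weight / 8   # every expert must be stored
    flops_per_token = 2 * active_b * 1e9        # ~2 FLOPs per active weight
    return weight_gb, flops_per_token

moe_mem, moe_flops = moe_footprint(671, 37)
dense_mem, dense_flops = moe_footprint(671, 671)  # hypothetical dense 671B model

print(moe_mem == dense_mem)               # True: same memory bill as dense
print(round(dense_flops / moe_flops, 1))  # 18.1 (x fewer FLOPs per token)
```

In short: an MoE model pays dense-model memory costs but roughly `total/active` times less compute per token, which is why quantization and multi-GPU sharding matter even more for these architectures.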
EleutherAI (3 models)

Transformer
One of the first truly open large language models.

Transformer
20B-parameter open model from EleutherAI.

Transformer
Research-focused model designed for interpretability studies.
Google (7 models)

Transformer
Google's 27B model with knowledge distillation from larger models.

Transformer
Efficient 9B model with impressive performance for its size.

Transformer
Ultra-lightweight model for edge and mobile deployment.

Transformer
Multimodal model with vision capabilities and extended context.

Transformer
Efficient multimodal model with vision capabilities.

Transformer
Compact multimodal model suitable for edge deployment.

Transformer
Ultra-lightweight multimodal model for minimal hardware.
IBM (4 models)

Transformer
IBM's enterprise-focused model with strong tool use and safety features.

Transformer
Compact enterprise model for edge deployment.

Transformer
Specialized code model with enterprise-grade capabilities.

Transformer
Efficient code model suitable for enterprise deployment.
Meta (12 models)

Mixture of Experts
Llama 4 Maverick is a 400B-parameter Mixture-of-Experts model with native multimodal capabilities, supporting text, image, and video understanding.

Mixture of Experts
Llama 4 Scout features an industry-leading 10M-token context window, making it ideal for long-document analysis and code understanding.

Transformer
Llama 3.3 70B delivers performance comparable to Llama 3.1 405B with significantly reduced computational requirements.

Vision Transformer
Llama 3.2 90B Vision is Meta's most capable vision model, supporting image understanding and visual reasoning tasks.

Vision Transformer
Efficient vision-language model optimized for edge deployment while maintaining strong multimodal capabilities.

Transformer
Lightweight model designed for mobile and edge devices with surprisingly strong performance.

Transformer
Ultra-lightweight model for the most constrained environments.

Transformer
Meta's largest open model, with state-of-the-art reasoning, tool use, and multilingual capabilities.

Transformer
Highly capable model balancing performance and efficiency; one of the most popular open models.

Transformer
Accessible model with strong performance for its size and full multilingual support.

Transformer
Meta's third-generation Llama model, with significant improvements in reasoning and coding.

Transformer
The most popular small open model, excellent for fine-tuning and edge deployment.
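Context windows like Llama 4 Scout's 10M tokens are dominated at serving time by the KV cache, which grows linearly with sequence length. A sketch of the standard estimate (the 48-layer / 8-KV-head / head-dim-128 GQA configuration below is hypothetical, chosen only to make the arithmetic concrete; it is not Scout's published architecture):

```python
def kv_cache_gb(seq_len, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    """Keys + values: 2 cached tensors per layer, each of shape
    (seq_len, n_kv_heads, head_dim), at fp16 by default."""
    total_bytes = 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem
    return total_bytes / 1e9

# Hypothetical GQA config: 48 layers, 8 KV heads, head_dim 128, fp16.
print(round(kv_cache_gb(128_000, 48, 8, 128), 1))     # 25.2  (GB at 128K tokens)
print(round(kv_cache_gb(10_000_000, 48, 8, 128), 1))  # 1966.1 (GB at 10M tokens)
```

The jump from tens of gigabytes at 128K to roughly two terabytes at 10M is why very long contexts imply multi-node serving, cache quantization, or cache-eviction schemes rather than a single GPU.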
Microsoft (6 models)

Transformer
Microsoft's Phi 4, with exceptional reasoning and math capabilities from synthetic-data training.

Mixture of Experts
Mixture-of-Experts model with 42B parameters and a 128K context window.

Transformer
Compact model with a 128K context window for edge deployment.

Transformer
Mid-size Phi model with strong reasoning capabilities.

Transformer
Efficient Phi model with strong performance for its size.

Transformer
Compact model that started the Phi 3 series with impressive capabilities.
Mistral AI (9 models)

Transformer
Mistral's most capable model, with strong multilingual and coding performance.

Transformer
Latency-optimized model with excellent performance for its size; Apache 2.0 licensed.

Vision Transformer
Mistral's flagship vision-language model with advanced document understanding.

Vision Transformer
Efficient vision-language model with strong multimodal capabilities.

Mixture of Experts
Sparse MoE model with 141B total parameters but only 39B active per token.

Mixture of Experts
Groundbreaking MoE model that started the open MoE revolution.

Transformer
Specialized coding model with fill-in-the-middle capabilities.

Transformer
12B model with a 128K context window and strong multilingual capabilities.

Transformer
The model that proved small models can be highly capable.
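The "141B total, 39B active" sparsity in Mistral's MoE models comes from a learned router: per token, a small gating network scores all experts and only the top-k (top-2 in Mixtral-style layers) run, with their outputs blended by renormalized gate weights. A minimal pure-Python sketch of top-2 gating over hypothetical logits:

```python
import math

def top2_route(gate_logits):
    """Pick the 2 highest-scoring experts and renormalize their gate
    weights with a softmax; all other experts stay idle for this token."""
    top2 = sorted(range(len(gate_logits)),
                  key=lambda i: gate_logits[i], reverse=True)[:2]
    exps = [math.exp(gate_logits[i]) for i in top2]
    z = sum(exps)
    return [(i, e / z) for i, e in zip(top2, exps)]

# 8 experts, as in Mixtral-style layers; only 2 process this token.
logits = [0.1, 2.0, -1.0, 0.5, 1.5, -0.3, 0.0, 0.2]
routes = top2_route(logits)
print([i for i, _ in routes])  # [1, 4]
```

Each token can pick a different pair of experts, so over a batch every expert sees work, but any single token only pays for two expert forward passes.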
NVIDIA (2 models)

Transformer
NVIDIA's largest open model, designed for synthetic data generation and reward modeling.

Transformer
Efficient model for synthetic data generation.
Snowflake (1 model)

Mixture of Experts
Enterprise-focused MoE model optimized for SQL and coding tasks.
Stability AI (2 models)

Transformer
Stability AI's language model with multilingual support.

Transformer
Lightweight coding model with fill-in-the-middle support.