Revolutionizing Bilingual Text Embeddings with Multi-Task Contrastive Learning

Our new paper explores how our Spanish-English and German-English models use multi-task contrastive learning and a sophisticated data pipeline to achieve strong target-language understanding and efficient cross-lingual performance for texts of up to 8192 tokens.


In our recent paper, Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings, we detailed our development of German-English and Spanish-English bilingual text embedding models.

Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings
We introduce a novel suite of state-of-the-art bilingual text embedding models that are designed to support English and another target language. These models are capable of processing lengthy text inputs with up to 8192 tokens, making them highly versatile for a range of natural language processing tasks such as text retrieval, clustering, and semantic textual similarity (STS) calculations. By focusing on bilingual models and introducing a unique multi-task learning objective, we have significantly improved model performance on STS tasks, outperforming existing multilingual models in both target-language understanding and cross-lingual evaluation tasks. Moreover, our bilingual models are more efficient, requiring fewer parameters and less memory due to their smaller vocabulary needs. Furthermore, we have expanded the Massive Text Embedding Benchmark (MTEB) to include benchmarks for German and Spanish embedding models. This integration aims to stimulate further research and advancement in text embedding technologies for these languages.
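The STS tasks mentioned in the abstract are typically scored by embedding each sentence of a pair and comparing the vectors with cosine similarity. Here is a minimal sketch of that scoring step; the toy vectors stand in for model outputs, since any embedding model would plug into the same computation:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for model outputs; a real pipeline would embed an
# English/German (or English/Spanish) sentence pair with the bilingual
# model and score the pair the same way.
emb_en = np.array([0.20, 0.80, 0.10])
emb_de = np.array([0.25, 0.75, 0.05])

score = cosine_similarity(emb_en, emb_de)
```

A model's Spearman correlation between these scores and human similarity judgments is what STS benchmarks report.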

Our approach combines multi-task contrastive learning with an advanced data curation pipeline, focusing on bilingual capabilities while extending input support to 8192 tokens. This method allows our models to excel both in understanding target languages and in conducting cross-lingual evaluations efficiently.
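The contrastive component of such training is commonly a symmetric InfoNCE objective: each text in a batch is pulled toward its paired translation or relevant passage and pushed away from every other in-batch example. The sketch below is a generic NumPy illustration of that idea, not the paper's exact loss; the temperature value is illustrative:

```python
import numpy as np

def info_nce_loss(q: np.ndarray, p: np.ndarray, temperature: float = 0.05) -> float:
    """Symmetric InfoNCE over a batch: q[i] and p[i] form a positive pair,
    and all other in-batch rows act as negatives."""
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    logits = q @ p.T / temperature  # (batch, batch) similarity matrix
    labels = np.arange(len(q))      # positives sit on the diagonal

    def xent(l: np.ndarray) -> float:
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return float(-log_probs[labels, labels].mean())

    # cross-entropy in both directions (query->passage and passage->query)
    return (xent(logits) + xent(logits.T)) / 2
```

A multi-task setup would mix batches of this pairwise loss with batches drawn from other objectives (e.g. STS-style regression), which is the high-level idea the paper builds on.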

Aquí Se Habla Español: Top-Quality Spanish-English Embeddings and 8k Context
Jina AI’s new bilingual Spanish-English embedding model brings the state-of-the-art in AI to half a billion Spanish speakers.
Ich bin ein Berliner: German-English Bilingual Embeddings with 8K Token Length
Jina AI introduces a German/English bilingual embedding model, featuring an extensive 8,192-token length, specifically designed to support German businesses thriving in the U.S. market.

In addition to the bilingual models covered in the paper, we have also developed bilingual Chinese-English and monolingual English models. These additions showcase our commitment to covering a broad spectrum of linguistic needs and furthering our capabilities in language processing.

8K Token-Length Bilingual Embeddings Break Language Barriers in Chinese and English
The first bilingual Chinese-English embedding model with 8192 token-length.
Jina AI Launches World’s First Open-Source 8K Text Embedding, Rivaling OpenAI
Jina AI introduces jina-embeddings-v2, the world's first open-source model boasting an 8K context length. Matching the prowess of OpenAI's proprietary models, this innovation is now publicly accessible on Hugging Face, signaling a significant milestone in the landscape of text embeddings.

Our bilingual models are characterized by their efficiency, operating with optimized vocabulary sizes to require fewer parameters and less memory. This efficiency underscores our dedication to creating powerful yet resource-efficient tools for language processing.
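The vocabulary-driven savings are easy to see with back-of-the-envelope arithmetic: the token embedding matrix has `vocab_size × hidden_dim` parameters, so a vocabulary covering only two languages can be much smaller than a broadly multilingual one. All numbers below are hypothetical illustrations, not the actual vocabulary sizes of any particular model:

```python
# Embedding-table size scales linearly with vocabulary size.
hidden_dim = 768               # illustrative hidden dimension
multilingual_vocab = 250_000   # typical order for broad multilingual models
bilingual_vocab = 60_000       # a two-language vocabulary can be far smaller

saved = (multilingual_vocab - bilingual_vocab) * hidden_dim
print(f"Embedding-table parameters saved: {saved:,}")  # 145,920,000
```

Those saved parameters translate directly into a smaller memory footprint at load time and during inference.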

Following the release of our paper, we expanded the Massive Text Embedding Benchmark (MTEB) to include benchmarks for our English-German and English-Spanish embedding models. This expansion is part of our effort to stimulate further research and advancements in text embedding technologies for non-English languages.

At Jina AI, our aim is to enhance the processing and understanding of multiple languages, contributing to the NLP field with our developments in bilingual and monolingual text embedding models.