Translate text accurately between over 100 languages using state-of-the-art neural machine translation models. Perfect for content localization, document translation, and real-time communication.
Request Body

text
Text content to translate (up to 50,000 characters per request).

sourceLanguage
Source language code, or "auto" for automatic detection:
- auto - Automatic language detection
- en - English
- es - Spanish
- fr - French
- de - German
- it - Italian
- pt - Portuguese
- ru - Russian
- ja - Japanese
- ko - Korean
- zh - Chinese (Simplified)
- zh-TW - Chinese (Traditional)
- ar - Arabic
- hi - Hindi
- ...and 85+ more languages

targetLanguage
Target language code for translation (same codes as sourceLanguage).
model
string, default: "neural-mt-v2"
Translation model to use:
- neural-mt-v2 - Latest neural machine translation model
- neural-mt-v1 - Previous-generation model (faster)
- specialized-{domain} - Domain-specific models (legal, medical, technical)
- conversational - Optimized for chat and informal text
- literary - Optimized for creative and literary content
context
Additional context to improve translation accuracy (e.g., document type, subject matter).

tone
Desired tone for the translation:
- neutral - Standard, balanced tone
- formal - Professional and formal
- informal - Casual and conversational
- friendly - Warm and approachable
- business - Professional business tone
- academic - Scholarly and precise

preserveFormatting
Whether to maintain original text formatting (line breaks, spacing, etc.).

glossary
Custom terminology for consistent translation. Each entry specifies the source term, the preferred translation for the term, and the context where this translation should be used.

transliterateName
Whether to transliterate proper names and place names.

Alternative translations for ambiguous phrases can also be requested.
Response

translatedText
The translated text.

sourceLanguage
Detected or specified source language.

confidence
Translation confidence score (0.0 to 1.0).

detectedLanguage
Language detection results (when sourceLanguage is "auto"): the detected language, detection confidence (0.0 to 1.0), and alternative language possibilities with confidence scores.

alternatives
Alternative translations (if requested). Each alternative includes a confidence score and the context where it might be preferred.

wordCount
Word count statistics: word counts for the source text and the translated text.

metadata
Translation metadata: the model used for translation, time taken to translate in seconds, and number of characters processed.
Example
curl -X POST "https://api.tensorone.ai/v2/ai/translation" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"text": "Hello, how are you today? I hope you are having a wonderful day!",
"sourceLanguage": "en",
"targetLanguage": "es",
"model": "neural-mt-v2",
"tone": "friendly",
"preserveFormatting": true
}'
{
  "translatedText": "¡Hola! ¿Cómo estás hoy? ¡Espero que tengas un día maravilloso!",
  "sourceLanguage": "en",
  "targetLanguage": "es",
  "confidence": 0.95,
  "detectedLanguage": {
    "language": "en",
    "confidence": 0.98,
    "alternatives": []
  },
  "wordCount": {
    "source": 13,
    "target": 12
  },
  "metadata": {
    "model": "neural-mt-v2",
    "processingTime": 1.2,
    "charactersProcessed": 68
  }
}
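The request body also supports alternative translations for ambiguous phrases, though none of the examples below exercise that option. A minimal sketch of constructing such a payload follows; note that the flag name `includeAlternatives` is an assumption based on the request-body description above, not a confirmed field name, so check the live API schema before relying on it:

```python
# Hypothetical sketch: build a request payload that asks for alternative
# translations. "includeAlternatives" is an assumed field name.
def build_translation_request(text, target_lang, source_lang="auto",
                              want_alternatives=False):
    payload = {
        "text": text,
        "sourceLanguage": source_lang,
        "targetLanguage": target_lang,
        "model": "neural-mt-v2",
    }
    if want_alternatives:
        payload["includeAlternatives"] = True  # assumed field name
    return payload

payload = build_translation_request("Time flies like an arrow.", "de",
                                    want_alternatives=True)
```

The resulting dict can be passed as the `json=` argument of the `requests.post` calls shown throughout this page.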
Document Translation
Translate entire documents while preserving structure:
import requests

def translate_document(document_text, target_lang, document_type="general"):
    response = requests.post(
        "https://api.tensorone.ai/v2/ai/translation",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "text": document_text,
            "sourceLanguage": "auto",
            "targetLanguage": target_lang,
            "model": "neural-mt-v2",
            "context": f"{document_type} document",
            "tone": "formal",
            "preserveFormatting": True,
            "transliterateName": True
        }
    )
    return response.json()
# Translate a business proposal
proposal_text = """
# Business Proposal: Sustainable Energy Solutions

## Executive Summary
Our company proposes to develop renewable energy infrastructure
that will reduce carbon emissions by 40% over the next five years.

### Key Benefits:
- Cost reduction of 25%
- Environmental impact mitigation
- Energy independence

Contact: John Smith, CEO
Email: john.smith@company.com
"""

translated_proposal = translate_document(proposal_text, "es", "business proposal")
print("Translated Document:")
print(translated_proposal['translatedText'])
Batch Translation
Translate multiple texts in different language pairs:
def batch_translate(texts_and_targets):
    """
    Translate multiple texts to different target languages.
    texts_and_targets: list of (text, target_language) tuples
    """
    results = []
    for text, target_lang in texts_and_targets:
        response = requests.post(
            "https://api.tensorone.ai/v2/ai/translation",
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            json={
                "text": text,
                "sourceLanguage": "en",
                "targetLanguage": target_lang,
                "model": "neural-mt-v2",
                "tone": "neutral"
            }
        )
        result = response.json()
        result['original_text'] = text
        results.append(result)
    return results

# Batch translate product descriptions
product_descriptions = [
    ("High-quality wireless headphones with noise cancellation", "es"),
    ("High-quality wireless headphones with noise cancellation", "fr"),
    ("High-quality wireless headphones with noise cancellation", "de"),
    ("High-quality wireless headphones with noise cancellation", "ja")
]

batch_results = batch_translate(product_descriptions)
for result in batch_results:
    print(f"{result['targetLanguage'].upper()}: {result['translatedText']}")
Custom Glossary Translation
Use custom terminology for consistent brand and technical translations:
def translate_with_glossary(text, target_lang, custom_terms):
    # Convert custom terms to glossary format
    glossary = [
        {
            "source": source_term,
            "target": target_term,
            "context": "brand/technical terminology"
        }
        for source_term, target_term in custom_terms.items()
    ]

    response = requests.post(
        "https://api.tensorone.ai/v2/ai/translation",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "text": text,
            "sourceLanguage": "en",
            "targetLanguage": target_lang,
            "model": "neural-mt-v2",
            "glossary": glossary,
            "tone": "business",
            "preserveFormatting": True
        }
    )
    return response.json()

# Technical documentation with custom terms
tech_text = """
Our CloudSync platform integrates with DataFlow API to provide
real-time analytics through the SmartDash interface.
The TensorCore engine processes data at 10x speed.
"""

custom_terms = {
    "CloudSync": "CloudSync",        # Keep brand name
    "DataFlow API": "API DataFlow",  # Adapt to target language structure
    "SmartDash": "SmartDash",        # Keep brand name
    "TensorCore": "TensorCore"       # Keep technical term
}

spanish_translation = translate_with_glossary(tech_text, "es", custom_terms)
print("Technical Translation:")
print(spanish_translation['translatedText'])
Real-time Translation
For chat and real-time communication:
def real_time_translate(message, target_lang, source_lang="auto"):
    response = requests.post(
        "https://api.tensorone.ai/v2/ai/translation",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "text": message,
            "sourceLanguage": source_lang,
            "targetLanguage": target_lang,
            "model": "conversational",   # Optimized for chat
            "tone": "informal",
            "preserveFormatting": False  # More flexible for chat
        }
    )
    return response.json()

# Simulate chat translation
messages = [
    "Hey! How's it going?",
    "I'm doing great, thanks for asking!",
    "Want to grab lunch later?",
    "Sure! How about that new Italian place?"
]

print("English -> Spanish Chat Translation:")
for msg in messages:
    translated = real_time_translate(msg, "es", "en")
    print(f"EN: {msg}")
    print(f"ES: {translated['translatedText']}")
    print()
Language Detection
Identify languages in multilingual content:
def detect_language(text, return_alternatives=True):
    response = requests.post(
        "https://api.tensorone.ai/v2/ai/language-detection",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "text": text,
            "returnAlternatives": return_alternatives,
            "confidenceThreshold": 0.5
        }
    )
    return response.json()

# Detect language of mixed content
mixed_text = "Hello! ¿Cómo estás? Je suis très bien, merci."
detection_result = detect_language(mixed_text, True)

print("Language Detection Results:")
print(f"Primary Language: {detection_result['primaryLanguage']}")
print(f"Confidence: {detection_result['confidence']}")

if 'segments' in detection_result:
    print("\nLanguage Segments:")
    for segment in detection_result['segments']:
        print(f"'{segment['text']}' -> {segment['language']} ({segment['confidence']})")
Domain-Specific Translation
Use specialized models for different domains:
def translate_specialized(text, target_lang, domain):
    model_map = {
        "medical": "specialized-medical",
        "legal": "specialized-legal",
        "technical": "specialized-technical",
        "financial": "specialized-financial",
        "academic": "specialized-academic"
    }

    response = requests.post(
        "https://api.tensorone.ai/v2/ai/translation",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "text": text,
            "sourceLanguage": "en",
            "targetLanguage": target_lang,
            "model": model_map.get(domain, "neural-mt-v2"),
            "context": f"{domain} document",
            "tone": "formal"
        }
    )
    return response.json()

# Medical text translation
medical_text = """
The patient presented with acute myocardial infarction.
Electrocardiogram showed ST-elevation in leads II, III, and aVF.
Cardiac enzymes were elevated with troponin I at 15.2 ng/mL.
"""

medical_translation = translate_specialized(medical_text, "es", "medical")
print("Medical Translation:")
print(medical_translation['translatedText'])

# Legal text translation
legal_text = """
The party of the first part hereby agrees to indemnify and hold harmless
the party of the second part from any claims arising from breach of contract.
"""

legal_translation = translate_specialized(legal_text, "fr", "legal")
print("\nLegal Translation:")
print(legal_translation['translatedText'])
Quality Assessment
Evaluate translation quality and get improvement suggestions:
def assess_translation_quality(source_text, translated_text, source_lang, target_lang):
    response = requests.post(
        "https://api.tensorone.ai/v2/ai/translation/quality-assessment",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "sourceText": source_text,
            "translatedText": translated_text,
            "sourceLanguage": source_lang,
            "targetLanguage": target_lang,
            "assessFluency": True,
            "assessAccuracy": True,
            "checkCulturalAdaptation": True
        }
    )
    return response.json()

# Assess translation quality
source = "The early bird catches the worm."
translated = "El que mucho madruga, Dios le ayuda."  # Different but culturally equivalent

quality_assessment = assess_translation_quality(source, translated, "en", "es")

print("Translation Quality Assessment:")
print(f"Fluency Score: {quality_assessment['fluencyScore']}")
print(f"Accuracy Score: {quality_assessment['accuracyScore']}")
print(f"Cultural Adaptation: {quality_assessment['culturalAdaptation']}")
print(f"Overall Score: {quality_assessment['overallScore']}")

if 'suggestions' in quality_assessment:
    print("\nImprovement Suggestions:")
    for suggestion in quality_assessment['suggestions']:
        print(f"- {suggestion}")
Advanced Features
Translation Memory
Store and reuse previous translations for consistency:
# Create translation memory
def create_translation_memory ( translation_pairs , name ):
response = requests.post(
"https://api.tensorone.ai/v2/ai/translation/memory" ,
headers = { "Authorization" : "Bearer YOUR_API_KEY" },
json = {
"name" : name,
"translationPairs" : translation_pairs,
"sourceLanguage" : "en" ,
"targetLanguage" : "es"
}
)
return response.json()
# Use translation memory
def translate_with_memory ( text , memory_id , target_lang ):
response = requests.post(
"https://api.tensorone.ai/v2/ai/translation" ,
headers = { "Authorization" : "Bearer YOUR_API_KEY" },
json = {
"text" : text,
"targetLanguage" : target_lang,
"translationMemoryId" : memory_id,
"leverageMemory" : True ,
"memoryMatchThreshold" : 0.8
}
)
return response.json()
# Create memory from previous translations
translation_pairs = [
{ "source" : "user account" , "target" : "cuenta de usuario" },
{ "source" : "dashboard" , "target" : "panel de control" },
{ "source" : "settings" , "target" : "configuración" }
]
memory = create_translation_memory(translation_pairs, "UI_Translations_ES" )
memory_id = memory[ 'memoryId' ]
# Use memory for consistent translation
new_text = "Access your user account through the dashboard settings."
consistent_translation = translate_with_memory(new_text, memory_id, "es" )
print ( "Consistent Translation:" )
print (consistent_translation[ 'translatedText' ])
Post-editing Integration
Get suggestions for improving translations:
def get_post_editing_suggestions(source_text, translated_text, source_lang, target_lang):
    response = requests.post(
        "https://api.tensorone.ai/v2/ai/translation/post-edit",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "sourceText": source_text,
            "translatedText": translated_text,
            "sourceLanguage": source_lang,
            "targetLanguage": target_lang,
            "suggestionTypes": ["grammar", "fluency", "terminology", "style"]
        }
    )
    return response.json()

# Get improvement suggestions
original = "The product will be launched next quarter."
machine_translation = "El producto será lanzado el próximo trimestre."

suggestions = get_post_editing_suggestions(original, machine_translation, "en", "es")

print("Post-editing Suggestions:")
for suggestion in suggestions['suggestions']:
    print(f"Type: {suggestion['type']}")
    print(f"Original: {suggestion['original']}")
    print(f"Suggested: {suggestion['suggested']}")
    print(f"Reason: {suggestion['reason']}")
    print()
Supported Languages
TensorOne supports translation between 100+ languages including:
European Languages: English, Spanish, French, German, Italian, Portuguese, Dutch, Polish, Russian, Ukrainian, Czech, Hungarian, Romanian, Bulgarian, Croatian, Serbian, Slovak, Slovenian, Estonian, Latvian, Lithuanian, Maltese, Irish, Welsh, Basque, Catalan
Asian Languages: Chinese (Simplified & Traditional), Japanese, Korean, Hindi, Bengali, Tamil, Telugu, Marathi, Gujarati, Punjabi, Urdu, Thai, Vietnamese, Indonesian, Malay, Filipino, Burmese, Khmer, Lao
Middle Eastern & African: Arabic, Hebrew, Persian (Farsi), Turkish, Swahili, Yoruba, Igbo, Hausa, Amharic, Somali
And many more regional and minority languages
Use Cases
Content Localization
Website Translation: Localize websites for global markets
App Localization: Translate mobile app interfaces and content
Marketing Materials: Adapt campaigns for different regions
Documentation: Translate user manuals and help content
Business Communication
Email Translation: Communicate with international clients
Contract Translation: Translate legal and business documents
Meeting Transcripts: Translate multilingual meeting notes
Customer Support: Provide support in multiple languages
Education and Research
Academic Papers: Translate research papers and publications
Learning Materials: Create multilingual educational content
Language Learning: Generate translation exercises and examples
Cross-cultural Studies: Translate surveys and research materials
Media and Entertainment
Subtitle Translation: Create subtitles for videos and films
Book Translation: Translate literature and non-fiction
News Translation: Localize news content for different markets
Social Media: Translate posts for global audiences
Best Practices
Text Preparation
Clean Text: Remove unnecessary formatting and artifacts
Context: Provide context when translating ambiguous terms
Segmentation: Break very long texts into manageable chunks
Encoding: Ensure proper text encoding for special characters
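The segmentation tip above can be sketched as a small client-side helper (not part of the API) that splits long input on paragraph boundaries while keeping each chunk under the 50,000-character request limit:

```python
def chunk_text(text, max_chars=50_000):
    """Split text into chunks of at most max_chars, preferring paragraph breaks."""
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        # Hard-split any single paragraph that exceeds the limit.
        while len(para) > max_chars:
            if current:
                chunks.append(current)
                current = ""
            chunks.append(para[:max_chars])
            para = para[max_chars:]
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= max_chars:
            current = candidate
        else:
            chunks.append(current)
            current = para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be sent as a separate request; preserving paragraph boundaries helps the model keep sentence-level context intact.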
Quality Optimization
Choose Appropriate Model: Use domain-specific models when available
Custom Glossaries: Define important terms for consistency
Tone Selection: Match tone to intended use and audience
Review Output: Always review translations, especially for critical content
Cultural Adaptation
Local Conventions: Consider local date, number, and address formats
Cultural References: Adapt idioms and cultural references appropriately
Regulatory Compliance: Ensure translations meet local legal requirements
Currency and Units: Convert measurements and currency when needed
Pricing
Standard Translation: $0.02 per 1K characters
Specialized Models: $0.03 per 1K characters (medical, legal, technical)
Real-time/Conversational: $0.015 per 1K characters
Batch Processing: 25% discount for 100+ requests
Custom Models: Available for enterprise customers
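As a quick sanity check on the rates above, a small estimator can project the charge for a batch of requests, applying the 25% batch discount at 100 or more requests. This is an illustrative sketch only; actual billing may round or aggregate differently:

```python
# Per-1K-character rates from the pricing table above.
RATES_PER_1K_CHARS = {
    "standard": 0.02,
    "specialized": 0.03,
    "conversational": 0.015,
}

def estimate_cost(char_counts, model_tier="standard"):
    """Estimate USD cost for a batch of requests; char_counts is a list
    of character counts, one per request. Applies the 25% batch
    discount when the batch contains 100+ requests."""
    rate = RATES_PER_1K_CHARS[model_tier]
    total = sum(n / 1000 * rate for n in char_counts)
    if len(char_counts) >= 100:
        total *= 0.75  # batch discount
    return round(total, 4)
```

For example, the 68-character request shown earlier would cost roughly `estimate_cost([68])` dollars on the standard tier.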
Quality Metrics
Translation quality is measured using multiple metrics:
BLEU Score: Bilingual evaluation of translation quality
Fluency: How natural the translation sounds
Adequacy: How well meaning is preserved
Cultural Appropriateness: Adaptation to target culture
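To make the BLEU idea concrete, here is a deliberately simplified BLEU-1: clipped unigram precision combined with a brevity penalty. Real BLEU geometrically averages n-gram precisions up to 4-grams; this sketch is for intuition only and is not the metric TensorOne uses internally:

```python
import math
from collections import Counter

def bleu1(candidate_tokens, reference_tokens):
    """Simplified BLEU-1: clipped unigram precision times brevity penalty."""
    cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
    # Clipping stops a repeated word from inflating the score.
    overlap = sum(min(count, ref[tok]) for tok, count in cand.items())
    precision = overlap / max(len(candidate_tokens), 1)
    # Brevity penalty discourages translations shorter than the reference.
    c, r = len(candidate_tokens), len(reference_tokens)
    bp = 1.0 if c >= r else (math.exp(1 - r / c) if c > 0 else 0.0)
    return bp * precision
```

An exact match scores 1.0, while a candidate that just repeats one reference word scores much lower because of clipping.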
Translation accuracy is highest for language pairs with large training datasets (e.g., English-Spanish, English-French). Less common language pairs may have slightly lower accuracy.
For best results with technical or specialized content, use domain-specific models and provide custom glossaries with key terminology. Consider post-editing for critical translations.