Tinker API: Mira Murati's AI Fine-Tuning Platform Launch
https://www.bivashvlog.com/2025/10/tinker-api-launch-mira-muratis-new-ai.html
Mira Murati, former CTO of OpenAI, has launched Tinker API from her startup Thinking Machines Lab - a Python-based API for fine-tuning large language models with unprecedented control and flexibility.
Product Status: Tinker API is currently in private beta and free to use, with fees planned for the future.
Product Overview
Key Features
- Founder: Mira Murati (former OpenAI CTO)
- Company: Thinking Machines Lab
- Valuation: $12 billion (raised $2 billion)
- Approach: Python-native, developer-focused API
- Core Technology: LoRA-based tuning, distributed training
- Target Users: Researchers, AI developers
Supported Technologies
- Large Language Models (LLMs)
- Mixture-of-Experts Models
- LoRA-based Tuning
- Reinforcement Learning
- Distributed GPU Training
- Open-weight Models
How Tinker Differs from Existing Services
Tinker API: Developer-Focused & Flexible
- Python-native API with custom training loops
- Full control over losses and training logic
- Supports advanced fine-tuning algorithms
- Open-source Tinker Cookbook with recipes
- Transparent, open approach to AI customization
- Abstracts distributed training complexity
Traditional Services: Drag-and-Drop & Black-Box
- Limited customization options
- Closed, proprietary model ecosystems
- Restricted training logic modification
- Simplified turnkey solutions
- Opaque fine-tuning processes
- Limited support for advanced research
Example: Tinker's forward_backward API

The snippet below is this article's illustrative sketch of a custom training loop; exact names in the shipped SDK may differ.

```python
# Example Python code using Tinker's forward_backward API (illustrative)
from tinker import TinkerTrainer, Model, Dataset

# Initialize model (open-weight or custom)
model = Model.load_pretrained("some-large-open-model")

# Prepare dataset
dataset = Dataset("your-dataset-path")

# Create trainer instance
trainer = TinkerTrainer(model=model)

def forward_backward(batch):
    # Forward pass: get model outputs
    outputs = model(batch["inputs"])
    # Compute loss based on outputs and targets
    loss = model.compute_loss(outputs, batch["targets"])
    # Backward pass: compute gradients
    loss.backward()
    # Optionally return loss for logging
    return loss.item()

# Run fine-tuning loop with custom forward_backward function
trainer.train(dataset=dataset, forward_backward_func=forward_backward)
```
Technical Comparison
| Feature | Tinker API | Traditional Fine-Tuning Services |
|---|---|---|
| Customization Level | Full control over training logic and losses | Limited to predefined options |
| Programming Interface | Python-native API | Drag-and-drop or limited APIs |
| Model Support | Small to large open-weight models, including MoE | Typically limited to proprietary models |
| Infrastructure Management | Automatic GPU management and error recovery | Manual infrastructure setup often required |
| Cost Efficiency | LoRA-based tuning for resource sharing | Often more expensive with less optimization |
| Research Focus | Designed for cutting-edge AI research | Focused on enterprise applications |
Key Differentiators
Tinker's Developer-Centric Approach
Tinker provides an intuitive Python-native API where researchers write custom training loops, explicitly controlling losses, training logic, and workflows. This flexibility lets them experiment with advanced fine-tuning and reinforcement learning algorithms.
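To make this concrete, here is a minimal sketch of the kind of custom loss logic such a loop enables: a single REINFORCE-style policy-gradient update written in plain PyTorch. This is illustrative only, not Tinker's actual SDK; the model is assumed to follow the common Hugging Face convention of returning `.logits`.

```python
import torch
import torch.nn.functional as F

def policy_gradient_step(model, optimizer, input_ids, response_ids, rewards):
    """One REINFORCE-style update (illustrative, not Tinker's API).

    input_ids:    (batch, prompt_len)  prompt tokens
    response_ids: (batch, resp_len)    sampled response tokens
    rewards:      (batch,)             scalar reward per sampled response
    """
    # Concatenate prompt and response; the model predicts each next token.
    tokens = torch.cat([input_ids, response_ids], dim=1)
    logits = model(tokens).logits  # (batch, seq_len, vocab)

    # Logits at position i predict token i+1, so the response tokens are
    # predicted by positions prompt_len-1 .. seq_len-2.
    resp_logits = logits[:, input_ids.size(1) - 1 : -1, :]
    log_probs = F.log_softmax(resp_logits, dim=-1)
    token_logp = log_probs.gather(-1, response_ids.unsqueeze(-1)).squeeze(-1)

    # REINFORCE objective: scale each sequence's log-likelihood by its reward.
    loss = -(rewards.unsqueeze(1) * token_logp).sum(dim=1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point is not this particular algorithm but that the loss itself is ordinary user code, which is exactly the degree of control that drag-and-drop services withhold.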
Infrastructure Abstraction
The API abstracts the complexities of distributed training, GPU infrastructure management, and error handling behind the scenes, making large-scale distributed training accessible without the usual operational overhead.
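For a sense of what that abstraction removes, here is roughly the boilerplate a team would otherwise write themselves using standard PyTorch DistributedDataParallel. This is plain PyTorch, unrelated to Tinker's internals, and it covers only process setup; fault tolerance and error recovery add considerably more.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def setup_distributed(model, dataset, batch_size=8):
    # Every worker must join the process group; launchers like torchrun
    # populate RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Replicate the model on each GPU and synchronize gradients.
    model = DDP(model.to(local_rank), device_ids=[local_rank])

    # Shard the dataset so each rank trains on a distinct slice.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler)
    return model, loader
```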
Frequently Asked Questions (FAQs)
What is Tinker API?
Tinker is a Python-based API designed to simplify and enhance the fine-tuning of large language models (LLMs). It allows researchers and developers to have direct control over training workflows while abstracting infrastructure complexities.
Who created Tinker API?
Tinker was created by Mira Murati, former CTO of OpenAI, through her new startup Thinking Machines Lab, which raised $2 billion at a $12 billion valuation earlier in 2025.
How does Tinker differ from other fine-tuning services?
Tinker provides a low-level, developer-focused approach with full control over training logic, unlike drag-and-drop or black-box interfaces. It supports advanced features like custom training loops and reinforcement learning while handling infrastructure automatically.
What is the Tinker Cookbook?
The Tinker Cookbook is an open-source companion resource with pre-built tuning recipes, encouraging research-level customization rather than simplified turnkey solutions.
Is Tinker API available to the public?
Tinker is currently in private beta and free to use, with fees planned for the future. Major research teams from institutions like Princeton and Stanford are already using it.
What types of models does Tinker support?
Tinker supports both small and large open-weight models, including advanced mixture-of-experts models, and uses LoRA-based tuning for cost-efficient resource sharing.
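As background on why LoRA makes this cheap: instead of updating a full weight matrix W, LoRA trains a low-rank correction BA and computes Wx + (alpha/r)·BAx, so only a small adapter is trainable while the pretrained weights stay frozen. A minimal sketch of the idea in PyTorch (a conceptual illustration, not Tinker's internal implementation):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base layer plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay fixed

        # A projects down to rank r, B projects back up; B starts at zero
        # so the adapted layer initially matches the base layer exactly.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)

# A rank-8 adapter on a 4096x4096 projection trains 2 * 8 * 4096 = 65,536
# parameters instead of ~16.8 million.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
```

Because the frozen base model is shared and only the small adapters differ per user, many fine-tuning jobs can be batched onto the same hardware, which is the resource-sharing benefit described above.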
