Blog
Research, comparisons, and updates on finetuning and inference — plus product and company news from the Maniac team.
Platform Landscape · Jan 21, 2026
Limitations of Together and Fireworks finetuning (and why autonomous finetuning can win)
Managed finetuning reduces setup time, but it can bottleneck iteration and portability. Here’s what breaks in practice—and how autonomous finetuning can lower total cost, including inference.
Inference Stacks · Jan 19, 2026
Inference stacks compared: vLLM, TGI, TensorRT-LLM, llama.cpp, and SGLang
A practical guide to choosing an inference stack based on latency targets, model size, and operational tradeoffs.
Platform Landscape · Jan 15, 2026
The finetuning platform landscape: how teams compare providers
A neutral map of the finetuning ecosystem, with decision criteria that apply across managed providers and open-source stacks.
Company Updates · Jun 25, 2025
Maniac raises $2M pre-seed round from PearX and a16z / speedrun
Maniac raised $2M pre-seed from PearX and a16z Speedrun to build a self-optimizing AI dev platform for finetuning, optimization, and automatic routing.