Mixture of Experts: The Efficiency Trick Behind Modern AI
2025-12-08 · 7 min read
Tags: machine learning, moe, efficiency, llm, architecture, mixtral, deepseek, neural networks

Mixtral uses 46.7B parameters but activates only about 13B per token. This architectural trick, called Mixture of Experts, powers Gemini 1.5, DeepSeek V3, and more. Learn how MoE works, its hidden costs, and when to use it.
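The gap between total and active parameters comes from sparse routing: a small learned router scores every expert for each token, and only the top-k experts (two of eight in Mixtral) actually run. The snippet below is a toy sketch of that idea with made-up dimensions and class names, not Mixtral's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy sparse MoE layer: a router picks the top-k experts per token,
    so only a fraction of the layer's parameters run in each forward pass."""

    def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                         # x: (tokens, d_model)
        logits = self.router(x)                   # (tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)      # renormalize over the chosen experts only
        out = torch.zeros_like(x)
        for slot in range(self.k):                # dispatch each token to its selected experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out
```

With eight experts and k=2, each token touches only a quarter of the expert parameters, which is the same effect that lets a 46.7B-parameter model spend roughly 13B per token.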
Why Your LLM Only Uses 10-20% of Its Context Window (And How TITANS Fixes It)
2025-12-08 · 15 min read
Tags: ai, machine learning, transformers, memory architectures, long context, titans, miras, neural networks

GPT-4's 128K context window? It uses only about 10% of it effectively. Google's TITANS architecture introduces test-time memory learning and outperforms GPT-4 on long-context tasks with 70x fewer parameters.