Engram: How DeepSeek Added a Second Brain to Their LLM
2026-01-13 · 18 min read
Tags: #deep learning #llm architecture #memory #mixture of experts #deepseek #sparse computation
A technical deep dive into DeepSeek's Engram architecture, which introduces conditional memory as a new axis of sparsity for large language models.
ReasoningBank Explained: How AI Agents Are Finally Learning to Remember
2024-10-25 · 13 min read
Tags: #ai #agents #memory #reasoningbank #machine-learning #google-research
Google's ReasoningBank framework solves agent amnesia by distilling experience into reusable strategies, enabling AI agents to learn from both successes and failures.