Building AI agents, exploring cognitive science, unraveling bioinformatics
Creative engineering where philosophy meets technology
Latest Posts
When 62 Days of Compute Becomes 3: Diffusion Models as Fast Surrogates for Agent-Based Biological Simulations
How generative diffusion models can serve as fast surrogates for expensive biological simulations, achieving a 22x speedup while preserving the stochastic diversity that makes these models scientifically useful.
When the Algorithm Can't Explain Itself: ML Interpretability in Precision Oncology
Machine learning models now outperform FDA-approved biomarkers at predicting treatment response, but the best-performing models often resist explanation. Here's how precision oncology is navigating the trade-off between performance and interpretability.
Project Silicon: What If We Could Do Gradient Descent on Assembly Code?
A deep dive into Project Silicon's proposal to build differentiable CPU simulators, enabling gradient-based optimization of assembly code and opening a new frontier in neural algorithm synthesis.
Biological World Models: The Projects You're Not Building (But Should Be)
Why computational biologists should stop building embeddings and start building simulators, with three tractable project ideas you can implement today using flow matching, Neural ODEs, and cell-fate trajectory modeling.
Benchmarks vs RL Environments: Why the Distinction Actually Matters
Knowing whether you're working with an environment or a benchmark changes how you design experiments, interpret results, and communicate findings. This guide covers the practical differences every RL practitioner should know.
Why 1000-Layer Networks Finally Work for Reinforcement Learning
Recent research shows 1024-layer networks achieve 2x to 50x improvements in goal-conditioned RL. Here's why extreme depth works now, and when you should consider it for your own agents.