# Deep Learning
9 posts tagged with "Deep Learning"
## How AlphaGenome Models Gene Regulation: 2D Embeddings, Splicing, and the Race to Read Non-Coding DNA

A technical look at AlphaGenome's architecture, its 2D pairwise embeddings for splicing prediction, and what the model means for clinical variant interpretation.
## A Bioinformatician's Guide to Choosing Genomic Foundation Models

A practical guide to selecting genomic foundation models for bioinformatics tasks. Covers ESM-2, DNABERT-2, HyenaDNA, Nucleotide Transformer, scGPT, and Evo, with specific recommendations for DNA sequence analysis, protein structure prediction, and single-cell analysis based on hardware requirements, inference speed, and task type.
## Why 1000-Layer Networks Finally Work for Reinforcement Learning

Recent research shows 1024-layer networks achieve 2x to 50x improvements in goal-conditioned RL. Here's why extreme depth works now, and when you should consider it for your own agents.
## Tensor Logic: One Equation to Rule Them All

Pedro Domingos proposes that neural networks and symbolic AI are the same mathematical operation: a logical rule can be written equivalently as a tensor equation in Einstein summation notation. If true, we've been building separate tools for problems that share identical structure.
## When Machines Design Their Own Learning Algorithms

A machine trained on simple grid worlds beat every hand-designed RL algorithm on Atari. Through meta-learning, DeepMind's DiscoRL discovers algorithms that outperform DQN, PPO, and A3C, methods humans spent decades developing.