<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Links on Monad Alpha</title><link>https://monadalpha.com/tags/links/</link><description>Recent content in Links on Monad Alpha</description><generator>Hugo</generator><language>en-gb</language><lastBuildDate>Sat, 11 Apr 2026 11:00:00 +0100</lastBuildDate><atom:link href="https://monadalpha.com/tags/links/feed.xml" rel="self" type="application/rss+xml"/><item><title>Worth reading</title><link>https://monadalpha.com/2026/04/worth-reading/</link><pubDate>Sat, 11 Apr 2026 11:00:00 +0100</pubDate><guid>https://monadalpha.com/2026/04/worth-reading/</guid><description>&lt;p&gt;Found a great paper on transformer efficiency. The key insight: you can prune attention heads during inference without retraining.&lt;/p&gt;
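&lt;p&gt;To make that concrete, here is a minimal NumPy sketch of the idea (my own illustration, not the paper's code; all names and shapes are assumptions): at inference, zeroing a head's output before the output projection removes that head's contribution without touching any weights, so no retraining is involved.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal sketch (illustrative, not the paper's method): prune attention
# heads at inference by masking their outputs before the output projection.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, w_qkv, w_out, n_heads, head_mask):
    """x: (seq, d_model); head_mask: (n_heads,) bool, False = pruned."""
    seq, d_model = x.shape
    d_head = d_model // n_heads
    qkv = x @ w_qkv                       # (seq, 3*d_model)
    q, k, v = np.split(qkv, 3, axis=-1)
    # Reshape each to (n_heads, seq, d_head).
    q = q.reshape(seq, n_heads, d_head).transpose(1, 0, 2)
    k = k.reshape(seq, n_heads, d_head).transpose(1, 0, 2)
    v = v.reshape(seq, n_heads, d_head).transpose(1, 0, 2)
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head))
    out = attn @ v                        # (n_heads, seq, d_head)
    # Pruning step: zero the output of masked heads. No weights change,
    # so the pruned heads simply stop contributing.
    out = out * head_mask[:, None, None]
    out = out.transpose(1, 0, 2).reshape(seq, d_model)
    return out @ w_out

rng = np.random.default_rng(0)
d_model, n_heads, seq = 64, 8, 10
x = rng.standard_normal((seq, d_model))
w_qkv = rng.standard_normal((d_model, 3 * d_model)) / np.sqrt(d_model)
w_out = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
keep = np.ones(n_heads, dtype=bool)
keep[[2, 5]] = False                      # prune heads 2 and 5
y = multi_head_attention(x, w_qkv, w_out, n_heads, keep)
print(y.shape)                            # (10, 64)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Deciding which heads are safe to drop is the interesting part; the sketch just hard-codes two.&lt;/p&gt;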
&lt;p&gt;&lt;a href="https://example.com/paper"&gt;Link to paper →&lt;/a&gt;&lt;/p&gt;</description></item></channel></rss>