Our content-intelligence pipeline has been built around per-platform scrape stages. Apify for Instagram. The cowork-youtube-research skill for YouTube. Custom scrapers for Reddit, Hacker News, X. Each one was built when we needed it, against the constraints we had at the time. Each one works, mostly. The aggregate is a stack of platform-specific code that would not exist if a single tool had covered all the platforms when we started.
mvanhorn/last30days-skill is the consolidation. v3.1.0 shipped April 23, 2026. 24,200 stars, MIT license, ranked #1 trending on GitHub the day it was published. It does what our intelligence-agent stage does, plus what we never got around to building. Engagement-ranked research across thirteen platforms in a single skill invocation.
The thirteen platforms
The interesting two are Polymarket and Bluesky. We have no Polymarket scraping today, and the closest we have to Bluesky is hand-checking accounts. The other eleven (Instagram, TikTok, YouTube, Twitter, Reddit, Hacker News, Threads, Mastodon, LinkedIn, GitHub trending, ProductHunt) overlap with stages we already run. The skill replaces them rather than adding to them. The output format is normalized across platforms. Engagement-ranked, deduplicated, summarized. The same shape regardless of which platform contributed the content.
This is the integration shape that justifies replacement rather than addition. If the skill only gave us Polymarket and Bluesky on top of what we have, we would integrate it as one more stage in the pipeline. Because it normalizes output across platforms, with the engagement ranking we would otherwise have had to build ourselves, it replaces the per-platform stages outright. We lose the ability to tweak any one platform's logic in isolation, but we have not actually tweaked any of them in three months. The platform-specific code was never the value. The aggregation across platforms was.
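The aggregation pattern described above can be sketched in a few lines. This is a minimal sketch, not the skill's implementation: the record shape and field names (platform, url, engagement) are our own assumptions about what a normalized item looks like.

```python
from dataclasses import dataclass

@dataclass
class Item:
    platform: str      # source platform, e.g. "reddit" or "bluesky" (hypothetical field)
    url: str           # canonical URL, used here as the dedup key
    title: str
    engagement: float  # platform-normalized engagement score (hypothetical field)

def aggregate(per_platform: list[list[Item]]) -> list[Item]:
    # Merge every platform's results into one list: dedupe by URL,
    # keeping the highest-engagement copy, then rank by engagement.
    best: dict[str, Item] = {}
    for items in per_platform:
        for item in items:
            seen = best.get(item.url)
            if seen is None or item.engagement > seen.engagement:
                best[item.url] = item
    return sorted(best.values(), key=lambda i: i.engagement, reverse=True)
```

The per-platform scrapers stay dumb; all the value lives in this merge step, which is the part our stack never had.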
The install
gh skill install mvanhorn/last30days-skill --agent claude-code
That command works because of the gh skill CLI we covered separately this week. The single-CLI install path is what makes this kind of skill replacement frictionless. Without gh skill, the install would have been a /plugin marketplace add flow that drifted out of sync with upstream the moment it landed. With gh skill, the version is pinned and gh skill update is a one-liner.
What to do after installing
Run it once against the same scope as your existing intelligence-agent stage. Diff the output. The places where last30days surfaces content the existing stage missed are where the replacement is winning. The places where the existing stage had nuance the replacement strips are the parts to keep, either as a post-processing step or as a separate stage that runs after.
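A first-pass diff can be as crude as comparing URL sets between the two outputs. A sketch, assuming both stages can emit a flat JSON array of items with a "url" field; the paths are placeholders:

```python
import json

def diff_outputs(old_path: str, new_path: str) -> tuple[set[str], set[str]]:
    # Each file: a JSON array of objects with a "url" field.
    with open(old_path) as f:
        old = {item["url"] for item in json.load(f)}
    with open(new_path) as f:
        new = {item["url"] for item in json.load(f)}
    # Returns (only-in-new, only-in-old): what the replacement
    # surfaces that the old stage missed, and what it drops.
    return new - old, old - new
```

The only-in-old set is the list of candidates for a post-processing step worth keeping.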
The Polymarket and Bluesky outputs are net-new. Treat them as inputs you are seeing for the first time, not as existing inputs you are validating. Polymarket signal is leading on a few specific question classes (election dynamics, regulatory timelines, AI release windows). Bluesky signal is mostly tech-press accounts moving from Twitter, useful for dev-tooling launches that get less Twitter traction than they used to.
Why this earned the [INSTALL NOW] tag
The skill clears the three filters that decide what we install. It is allowlist-clear: 24k stars, MIT, an official-looking maintenance pattern, version-pinnable through gh skill. It does not overlap with anything we already do well; the platform-specific stages we already had were not adding value over the aggregation. And the install cost is one CLI command plus a one-time output diff against the existing stage. The cost of skipping is that our content-intelligence pipeline stays a per-platform stack while the public answer converges on the aggregation pattern.
The publish date matters. v3.1.0 is the version that hit MIT license and added the engagement-ranked normalization. Earlier versions were per-platform, like ours. The skill caught up to the pattern we needed in the same week we found out we needed it. Install it.