Daywave: An Entire Newsletter, Automated End to End
2026-03-18
14 min read
About a month ago, I launched daywave, a daily AI newsletter for builders. It covers the models, tools, launches, and business moves that matter if you're actually building things with AI. One five-minute read, every morning.
Here's what most readers don't know: the entire thing is automated. Every stage. Research, writing, editorial, graphics, site deployment, email delivery. No human touches it between the time the pipeline kicks off and the time the brief lands in your inbox.
I wanted to see how far I could push it.
The Problem I Was Solving For Myself
I was spending hours every day reading about AI. Not the research papers, the practical stuff: what shipped, what broke, what tools are worth trying. The signal was scattered across Reddit threads, Hacker News arguments, Product Hunt launches, Twitter hot takes, GitHub trending repos, and company blog posts buried under SEO garbage.
I'd open a dozen tabs every morning, skim hundreds of posts, mentally filter for what actually mattered, and come away with maybe five or six things worth knowing. Then I'd do it again the next day.
That's a system. Systems can be automated.
What Daywave Actually Produces
Every issue follows a consistent editorial format:
The Big Picture - a thesis that connects the day's stories into a narrative
Stories That Matter - four deep-dive breakdowns with "So What" callouts explaining why each story matters to builders
Ideas to Watch - emerging patterns and trends worth tracking
Tools and Products - specific things you can go try, tagged as "Try it," "Know it," or "Watch it"
Quick Hits - rapid-fire bullets for everything else worth a mention
One Thought - a closing editorial take
The editorial voice is specific and consistent. Direct, opinionated, builder-focused. Each "So What" callout distills a story into one sentence framed from the reader's perspective: not "this happened" but "here's what it means for you." That voice took iteration to get right, but once it's defined clearly enough, it's surprisingly reproducible.
The Pipeline
I'll share the architecture without giving away every detail, because part of the fun is that it works and I'd like to keep some of the how to myself.
Collect - Reddit, HN, GitHub, Product Hunt, X
Score - relevance, novelty, trends
Write - editorial voice + structure
Design - hero images + OG cards
Build - HTML, meta, archive, RSS
Ship - deploy + email subscribers
Runs every morning, no human in the loop.
Stages 1-3: From Raw Signal to Editorial
The system pulls from multiple sources every day. Reddit, Hacker News, GitHub trending, Product Hunt, X, and major AI company blogs. It's not a simple RSS reader. It fetches, parses, and normalizes content from fundamentally different platforms into a common format it can reason about.
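As a rough illustration of what that normalization might look like (the field names and adapters here are my own assumptions, not daywave's actual schema), each source adapter maps a platform-specific payload into one common item type the rest of the pipeline can reason about:

```python
from dataclasses import dataclass

# Hypothetical common format: every source adapter emits Item objects.
@dataclass
class Item:
    source: str      # "reddit", "hn", "github", ...
    title: str
    url: str
    engagement: int  # upvotes, points, stars -- whatever the platform exposes
    raw: dict        # original payload, kept around for the scoring stage

def normalize_hn(post: dict) -> Item:
    """Map a Hacker News API item into the common format."""
    return Item(
        source="hn",
        title=post.get("title", ""),
        url=post.get("url", ""),
        engagement=post.get("score", 0),
        raw=post,
    )

def normalize_reddit(post: dict) -> Item:
    """Map a Reddit listing entry into the common format."""
    d = post.get("data", {})
    return Item(
        source="reddit",
        title=d.get("title", ""),
        url="https://reddit.com" + d.get("permalink", ""),
        engagement=d.get("ups", 0),
        raw=d,
    )
```

Once everything is an `Item`, the scoring stage never has to know which platform an entry came from.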
Raw collection produces hundreds of items. Most of it is noise. The system scores each item across multiple dimensions: relevance to builders, novelty, significance, and whether it represents a trend or an isolated event. Items that don't clear the threshold get dropped. This is the stage that took the most iteration. The difference between a useful newsletter and a forgettable one is entirely about what you cut.
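The actual scoring dimensions and weights aren't public, so the following is only a sketch of the shape such a stage could take, with invented weights and an invented threshold:

```python
# Illustrative weights and threshold -- the real system's scoring is not public.
WEIGHTS = {"relevance": 0.4, "novelty": 0.25, "significance": 0.25, "trend": 0.1}
THRESHOLD = 0.6

def score(signals: dict) -> float:
    """Weighted sum of per-dimension signals, each assumed to be in [0, 1]."""
    return sum(WEIGHTS[dim] * signals.get(dim, 0.0) for dim in WEIGHTS)

def filter_items(items: list[dict]) -> list[dict]:
    """Keep only items that clear the editorial threshold, best first."""
    kept = [i for i in items if score(i["signals"]) >= THRESHOLD]
    return sorted(kept, key=lambda i: score(i["signals"]), reverse=True)
```

The interesting work isn't in the arithmetic, it's in producing the per-dimension signals and tuning the cutoff until the survivors consistently deserve a reader's attention.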
The scored and filtered items then feed into the writing stage. This isn't "summarize these links." The system produces original editorial content with a specific voice and structure. It synthesizes across sources, draws connections between stories, and produces the opinionated "So What" analysis that makes each story actionable.
r/LocalLLaMA - "Ollama now supports web search plugins for local models. Been waiting for this..." (2,847 upvotes)
Hacker News - "Show HN: I built an autonomous AI hacker that found 9 CVEs" (342 points)
Product Hunt - "My Computer by Manus AI - Desktop app that brings AI agents to your machine" (454 votes)
r/ChatGPT - "Just switched to Claude after the Pentagon deal. Anyone else? The responses feel..." (1,205 upvotes)
@kaboraai - "Cursor just hit $2B ARR. Let that sink in. A code editor. Two billion dollars." (89 RTs)
GitHub - codex-subagents: Parallel specialized AI workers for complex coding tasks (2.1k stars)
...plus 247 more items from today's crawl
Hundreds of items collected from Reddit, HN, GitHub, Product Hunt, and X
The Big Picture section is the hardest part. It requires looking at a day's worth of disconnected events and finding the thread that connects them into a coherent narrative. Some days that thread is obvious. Some days the system has to work for it. A day where OpenAI launches a model, Google cuts prices, and a trending repo gets 2,000 stars might all tie together under a "the commodity era is here" thesis. The system has to find that.
Stage 4: Graphics
This is one of my favorite parts of the system. Every issue gets a unique hero image generated to match the theme of that day's brief.
Every issue gets a unique illustration generated to match its editorial theme (shown: the Mar 18 hero for "AI's Windows 95 Moment," one of seven in the set)
The images aren't stock photos or generic AI slop. They follow a consistent artistic direction: painterly, slightly abstract, warm palette. A story about AI security gets interlocking gears. A story about ethics gets a cracked compass. A story about platform maturity gets windows being thrown open. The visual language is coherent across issues while still being unique to each one.
Beyond the hero images, every issue also gets custom Open Graph images for social sharing. The pipeline takes each hero image and composites it with the daywave branding, the issue date, and the standard 1200x630 sizing, so it looks clean when someone drops the link on Twitter or LinkedIn.
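The geometry of that compositing step is simple enough to sketch. Below is the "cover"-style crop math that fits an arbitrary hero image onto the 1200x630 OG canvas with no letterboxing; this is an illustration of the technique, not daywave's actual code. With an imaging library like Pillow, you'd pass the returned box to `Image.crop()` and then `resize((1200, 630))` before pasting the logo and date on top.

```python
OG_W, OG_H = 1200, 630  # standard Open Graph card dimensions

def cover_crop(src_w: int, src_h: int) -> tuple[int, int, int, int]:
    """Return a centered crop box (left, top, right, bottom) matching the
    1200x630 aspect ratio, 'cover' style: fill the frame, trim the excess."""
    target = OG_W / OG_H
    if src_w / src_h > target:
        # Source is wider than the OG frame: trim the sides.
        new_w = int(src_h * target)
        left = (src_w - new_w) // 2
        return (left, 0, left + new_w, src_h)
    # Source is taller than the OG frame: trim top and bottom.
    new_h = int(src_w / target)
    top = (src_h - new_h) // 2
    return (0, top, src_w, top + new_h)
```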
Each issue gets a branded social card with logo and date, composited automatically
The entire graphics pipeline runs automatically. The system reads the finished editorial content, determines the visual theme, generates the hero image, composites the OG variant, and outputs both at the correct sizes and formats. No Figma. No Photoshop. No manual export step.
Stage 5: Assembly and Deployment
The written content and generated images get assembled into the final HTML pages. The site is static, which keeps it fast and cheap to host. Each issue becomes an HTML file with full SEO metadata, structured content sections with custom styling, share buttons, navigation, and an inline subscribe form. The archive page, homepage, RSS feed, and JSON index all rebuild automatically.
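As one example of what the feed rebuild can look like (daywave's actual generator isn't shown here; this sketch uses only the Python standard library), each finished issue can be rendered into an RSS `<item>` and the results concatenated into the feed:

```python
from datetime import datetime, timezone
from email.utils import format_datetime
from xml.sax.saxutils import escape

def rss_item(title: str, url: str, summary: str, published: datetime) -> str:
    """Render one RSS <item>; the full feed wraps these in <rss><channel>."""
    return (
        "<item>"
        f"<title>{escape(title)}</title>"
        f"<link>{escape(url)}</link>"
        f"<description>{escape(summary)}</description>"
        f"<pubDate>{format_datetime(published)}</pubDate>"  # RFC 2822 date
        "</item>"
    )
```

Because the site is static, the archive page, homepage, and JSON index are rebuilt the same way: pure functions from the issue data to strings, written to disk, nothing stateful to break.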
Deployment is automated. The pipeline finishes, the site updates, and the new issue is live.
Stage 6: Distribution
Once the site is live, subscribers get their email. The system handles subscription management through its own API, sends formatted emails with the full brief, and tracks delivery. There's also an RSS feed that updates automatically.
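A minimal sketch of that delivery step using Python's standard library follows. The sender address matches the one daywave mails from; the SMTP host, port, and subscriber-list handling are my assumptions, not the real infrastructure:

```python
import smtplib
from email.message import EmailMessage

def build_issue_email(subscriber: str, subject: str,
                      html_body: str, text_body: str) -> EmailMessage:
    """Assemble a multipart email: plain-text fallback plus the HTML brief."""
    msg = EmailMessage()
    msg["From"] = "daywave <daywave@daywave.co>"
    msg["To"] = subscriber
    msg["Subject"] = subject
    msg.set_content(text_body)                      # plain-text part
    msg.add_alternative(html_body, subtype="html")  # rendered brief
    return msg

def send_all(subscribers: list[str], subject: str, html: str, text: str,
             host: str = "localhost", port: int = 25) -> int:
    """Send the issue to every subscriber over one SMTP connection."""
    sent = 0
    with smtplib.SMTP(host, port) as smtp:
        for addr in subscribers:
            smtp.send_message(build_issue_email(addr, subject, html, text))
            sent += 1
    return sent
```

In practice a transactional email API with delivery tracking makes more sense than raw SMTP at any real list size, but the shape of the step is the same: render once, loop over the list, record what went out.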
From: daywave <daywave@daywave.co> · 6:00 AM · 3/18/26
AI's Windows 95 Moment: The Platform Is Here
The Big Picture
AI is quietly having its Windows 95 moment. While everyone obsessed over which model scored better on which benchmark, the infrastructure layer grew up. Builders are no longer asking "what can AI do?" They're asking "how do I wire this into everything?"
Meta dropped $27 billion on compute infrastructure while planning workforce cuts. Microsoft unified its Copilot engineering teams. Google expanded Personal Intelligence to all US users. The message is clear: this isn't experimental anymore. This is the new operating system for work, and companies are building for a world where AI isn't a feature; it's the foundation.
The shift isn't just in enterprise budgets. It's in developer behavior. Cursor hit $2B annualized revenue. GitHub is flooded with agent frameworks and MCP servers. Product Hunt launches are dominated by AI productivity tools and coding assistants. The infrastructure that makes AI useful is maturing faster than the models themselves.
Stories That Matter
1. OpenAI drops two new models, pricing war intensifies
GPT-5.4 mini and nano launched today, with mini hitting 54.4% on coding benchmarks and nano priced at $0.20 per million input tokens. The mini model runs 2x faster than the previous version and is now available to free users through ChatGPT's "thinking" option.
So What
OpenAI is quietly abandoning the consumer race and doubling down on developers who'll pay premium prices for coding-optimized models.
2. The Pentagon builds its own AI as Anthropic relationship sours
The Department of Defense is developing alternatives to Anthropic's Claude after their dramatic falling out over military applications. MIT Technology Review reports the Pentagon is planning secure environments where AI companies can train military-specific models on classified data.
So What
AI is splitting into civilian and military tracks, with the Pentagon signaling it won't depend on companies that can say no to defense contracts.
3. China's AI stocks surge as Jensen Huang calls OpenClaw "the next ChatGPT"
Nvidia's CEO praised OpenClaw during GTC, calling AI agents the next major breakthrough and specifically highlighting China's rapid adoption. Chinese AI companies like MiniMax and Zhipu saw shares jump as Huang noted OpenClaw usage in China has surpassed the US.
So What
China is treating AI agents as infrastructure, not experiments, while US companies are still figuring out pricing models.
4. Nvidia unveils DLSS 5 and gamers revolt
Nvidia announced DLSS 5, which uses AI to transform basic 3D frames into photorealistic imagery in real-time. Instead of traditional rendering, the technology analyzes game pixels and generates Hollywood-grade lighting and materials at 60fps. Gamers immediately pushed back, calling it "yassified."
So What
Nvidia is betting its future on AI-generated graphics while consumers push back, highlighting the tension between technical capability and user preference.
Ideas to Watch
Multi-agent coding workflows are replacing solo AI development
Developers are moving beyond single AI coding assistants to orchestrated agent teams. Tools like Codex Subagents and Angy are launching with specialized roles; one agent plans, another builds, a third tests.
Local-first AI infrastructure gains quiet momentum
While everyone watches cloud model races, builders are deploying capable systems entirely on local hardware. The movement isn't about privacy activism; it's about reliability and cost control for production workloads.
Voice AI skips command interfaces entirely
Google launched conversational Maps powered by Gemini, handling complex queries like "find EV charging near dog-friendly restaurants with outdoor seating."
Tools and Products
My Computer by Manus AI
Desktop app that brings AI agents onto your local machine for file organization, app automation, and Swift development without code.
Try it
Lightning Rod SDK
Turns real-world data into verified training datasets using Python. Processes news, filings, and documents without manual labeling.
Try it
Mistral Forge
Platform for building custom AI models from scratch using enterprise data.
Know it
Codex Subagents
Parallel specialized AI workers for complex coding tasks using TOML agent definitions. Prevents context rot through isolated agent roles.
Try it
Nemotron 3 Nano 4B
Compact 4B parameter model now available via Ollama, optimized for local inference and agent workflows.
Try it
Get Shit Done
Meta-prompting framework for spec-driven development with context engineering and Claude Code integration.
Watch it
ClickSay
Chrome extension that captures CSS selectors, styles, and component names with voice or text fixes for AI coding tools.
Try it
Quick Hits
Encyclopedia Britannica sued OpenAI claiming GPT-4 memorized nearly 100,000 articles and generates "near-verbatim" copies
Roche deployed 2,176 additional Nvidia GPUs, bringing total AI infrastructure to 3,500+ Blackwell chips
OpenAI signed AWS partnership to sell AI systems for classified and unclassified government work
UniCredit expects €400-500 million in AI cost reductions over five years through process automation
BuzzFeed launched AI-powered social apps at SXSW to mixed reactions
ICML rejected all papers from reviewers who used LLMs, despite agreeing not to
Chinese tech giants hiked AI prices up to 34% as demand surges
One Thought
The infrastructure is ready. While we've been debating model capabilities and safety alignment, the boring stuff quietly matured. APIs that don't break. Frameworks that scale. Desktop apps that actually work. The AI ecosystem isn't waiting for AGI anymore; it's building for the agents we have right now.
daywave · AI signal for builders. Every morning.
Subscribers get a formatted email every morning with the full brief
The whole pipeline runs without intervention. I wake up, and the brief is already published.
The Design
I spent real time on this part, because I think most automated content fails at the presentation layer. It reads like a machine wrote it and it looks like nobody cared.
Daywave's site is intentionally high-quality. Dark background with warm orange accents. Serif headlines in Instrument Serif. Subtle wave animations on the homepage. "So What" callouts in the post body with accent-colored left borders. Tool badges color-coded by type. None of that is accidental. The design communicates that this is a publication worth reading, not a bot dump.
The hero images reinforce that. When a reader sees a new painterly illustration every morning that actually relates to the content, it signals editorial intent. The fact that the system produces these automatically is the part I find most interesting.
What I Learned
Curation is harder than creation. Getting an AI to write is easy. Getting it to decide what's worth writing about is the real challenge. The scoring and filtering stage went through more iterations than everything else combined.
Consistency matters more than brilliance. Any single issue could probably be better if a human spent three hours on it. But a human can't spend three hours on it every single day, seven days a week, without burning out. The automation doesn't get tired, doesn't skip weekends, and doesn't phone it in on Friday.
Voice is buildable. It took work to get the editorial voice consistent, but it's solvable. The "So What" callouts are a good example. Every story gets a one-sentence takeaway framed from the reader's perspective. That pattern is repeatable once you define it clearly enough.
Graphics sell the credibility. The hero images and branded OG cards do more for Daywave's perceived quality than any amount of good writing. People make snap judgments about whether content is worth their time, and visuals drive that judgment before they read a single word.
The last 20% is still the last 20%. Getting a prototype working took a fraction of the time that getting it production-ready took. Error handling, edge cases, source reliability, image generation failures, deployment quirks. The boring stuff is still the majority of the work, even when AI is doing the creative parts.
The Vision
Daywave started as an experiment. I wanted to answer a question: can AI produce something that's genuinely useful to read every day, not as a novelty, but as something you'd actually subscribe to and rely on?
A month in, I think the answer is yes, with caveats. The output quality is surprisingly high when you invest in the pipeline rather than just the prompt. Most people who try to automate content focus on the generation step and ignore everything else. But the research, the filtering, the editorial structure, the visual identity, the distribution: that's where the real work lives. The writing is almost the easy part.
I'm interested in what happens when you treat AI not as a replacement for a person, but as the engine inside a well-designed system. A person couldn't read every AI subreddit, every HN thread, every Product Hunt launch, every company blog, every trending GitHub repo, score them all, and write a polished editorial brief about the best ones, every single morning. That's not a human-shaped task. But it is a system-shaped task.
The question I keep coming back to is: what other workflows look like this? Where the bottleneck isn't creativity or intelligence, but the sheer volume of inputs that need to be processed, filtered, and synthesized into something useful?
I think there are more of those than most people realize. And I think the people who figure out how to build those systems, rather than just chatting with AI in a text box, are going to build some interesting things.
daywave.co publishes every morning. You can subscribe for free.