AI for the Stack
For data engineers · Twice a week · Free

The newsletter that ships
working code —
not just takes.

AI for the Stack covers how to actually integrate LLMs into real data pipelines — dbt, BigQuery, Redshift, Airflow, GCP. Every Tuesday: a deep-dive workflow with copy-paste-ready code. Every Thursday: the best tools, workflow setups, and community signal worth your time this week.

Free. No spam. Unsubscribe anytime.

Read the archive first
Tuesday deep dive · Thursday roundup · Working code every issue · No AI-news digests

What you get

Two issues.
Two formats. Both useful.

Tuesday · 1,500–2,500 words

Deep Dive

One real-world use case. One concrete workflow. Every issue ships a working GitHub repo with code you can copy-paste into your stack before end of day — not example snippets, not pseudocode.

Real pipeline, not a toy demo
Copy-paste-ready code every issue
Honest take on where it breaks
$ dbt run --select ai_documented_models+
Thursday · 400–600 words

Roundup

Three to five items, scannable, opinionated. The tools worth evaluating right now, the workflow setups people are actually shipping, and the community signal you'd otherwise miss. Curated for practitioners, not observers.

Best tools available right now
Workflow setups worth stealing
What's actually working in production
3–5 picks · no filler · straight verdict

What's covered

Real workflows.
Concrete skills.

LLMs in your dbt workflow

Use Claude and GPT-4 to write, review, and document dbt models — with sensible guardrails for production.

$ dbt run --select ai_generated+
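One production guardrail can be as simple as a pre-commit check on every model file the LLM writes. A minimal sketch, assuming a keyword-blocklist approach; the function name and regex are illustrative, not the newsletter's actual tooling:

```python
import re

# Reject LLM-generated model SQL containing anything beyond a read-only
# SELECT/CTE before it ever reaches `dbt run`. Word boundaries keep column
# names like `updated_at` from tripping the `update` keyword.
FORBIDDEN = re.compile(
    r"\b(drop|delete|truncate|grant|alter|insert|update|merge)\b",
    re.IGNORECASE,
)

def is_safe_model_sql(sql: str) -> bool:
    """Allow only SELECT or WITH statements in generated model files."""
    stripped = sql.strip().lower()
    if not (stripped.startswith("select") or stripped.startswith("with")):
        return False
    return FORBIDDEN.search(sql) is None
```

A keyword blocklist is not a SQL parser — a column literally named `update` would false-positive — but it is a cheap first gate before human review.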

AI-assisted SQL debugging

Feed failing queries to an LLM with schema context and get fixes that actually account for your data model.

err → context → patch → PR
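The err → context → patch loop starts with prompt assembly: package the failing query, the error message, and the relevant schema into one prompt so the model's fix can match the actual data model. A minimal sketch with hypothetical names:

```python
def build_debug_prompt(query: str, error: str,
                       schema: dict[str, list[str]]) -> str:
    """Assemble an LLM prompt from a failing query plus schema context."""
    schema_lines = "\n".join(
        f"- {table}({', '.join(cols)})" for table, cols in schema.items()
    )
    return (
        "The following SQL query failed.\n\n"
        f"Query:\n{query}\n\n"
        f"Error:\n{error}\n\n"
        "Available tables and columns:\n"
        f"{schema_lines}\n\n"
        "Propose a corrected query that only references columns listed above."
    )

prompt = build_debug_prompt(
    query="SELECT user_nam FROM users",
    error='column "user_nam" does not exist',
    schema={"users": ["id", "user_name", "created_at"]},
)
```

The schema block is the part most debugging prompts skip — without it the model guesses column names instead of reading them.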

Automating data documentation

Generate and keep fresh column descriptions, lineage notes, and README files — on every schema change.

schema.yml ← auto-generated
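A minimal sketch of the on-schema-change step, with illustrative names: diff the live table columns against what schema.yml already documents, and send only the gaps to the model for a draft description.

```python
def columns_needing_docs(live_columns: list[str],
                         documented: dict[str, str]) -> list[str]:
    """Return columns that are new or have empty descriptions."""
    return [
        col for col in live_columns
        if not documented.get(col, "").strip()
    ]

todo = columns_needing_docs(
    live_columns=["id", "email", "signup_source"],
    documented={"id": "Primary key", "email": "User email", "signup_source": ""},
)
# todo == ["signup_source"]; only this column goes to the model.
```

Diffing first keeps the LLM from rewriting descriptions a human already approved — regenerate only what's missing.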

Agentic pipelines with n8n

Automate LLM calls, data quality checks, and alert routing without writing a full orchestration layer.

trigger → classify → notify

Self-healing pipeline patterns

Classify pipeline failures with an LLM, route them to the right fix path, and reduce on-call pages.

alert → classify → remediate
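The alert → classify → remediate route can be sketched with a stub in place of the LLM classifier; the categories and fix paths below are hypothetical examples, not a prescribed taxonomy:

```python
# Map failure categories to fix paths. In production, classify() would be
# an LLM call fed the alert text plus run context; a keyword stub stands
# in here so the routing logic is runnable.
REMEDIATIONS = {
    "schema_drift": "rerun with full-refresh",
    "late_data": "retry after upstream lands",
    "permission": "page on-call",  # an LLM can't fix IAM; escalate
}

def classify(alert_text: str) -> str:
    """Stub classifier standing in for the LLM call."""
    text = alert_text.lower()
    if "column" in text or "schema" in text:
        return "schema_drift"
    if "permission" in text or "denied" in text:
        return "permission"
    return "late_data"

def route(alert_text: str) -> str:
    return REMEDIATIONS[classify(alert_text)]

action = route("Compilation error: column 'order_total' not found")
# action == "rerun with full-refresh"
```

The point of the pattern is the routing table, not the classifier: every category maps to a known-safe remediation, and anything the model can't safely fix escalates to a human.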

When NOT to use AI

The takes nobody else publishes — where LLMs make your pipeline worse, slower, or harder to debug.

// strong opinions, held loosely

Why it exists

Most AI resources are written for two audiences: ML researchers, and non-technical people who want the highlights. There's almost nothing for the engineer in the middle — the one already running dbt and Airflow, who's been asked to "add AI" to a system they're responsible for keeping reliable.


AI for the Stack fills that gap. Every issue is written by a practitioner, for practitioners — with the assumption that you already know your stack and just need to know what actually works when LLMs meet production data pipelines.


No breathless AGI takes. No "Top 10 AI Tools" listicles written by someone who hasn't touched a pipeline. Just concrete patterns, honest verdicts, and working code.

Written by practitioners with production data engineering experience
Every deep dive ships a working GitHub repo
Roundup covers only tools the team has actually evaluated

Join the newsletter

Two issues a week.
Both worth your time.

Tuesday: a real-world workflow with working code you can ship the same day.
Thursday: the best tools, setups, and production signal — curated, not aggregated.

For data engineers integrating AI into real stacks. Not for beginners. Not a news digest.

Free. No spam. Unsubscribe anytime.