Progress Tracker

How to use: Fork or download this file. Check off items as you complete them. Items marked 🌟 are highest-priority for your learning path.


Curriculum Modules

- [ ] Orientation & Foundations
- [ ] Patterns & Production Skills
- [ ] Advanced & Agentic


Exercises

Module 1

- [ ] Exercise 1.1 — Prompt decomposition (five-component rewrite)
- [ ] Exercise 1.2 — Ambiguity identification
- [ ] Exercise 1.3 — Comparative analysis (naive vs. engineered)

Module 2

- [ ] Exercise 2.1 — Specificity audit on an existing prompt
- [ ] Exercise 2.2 — Decomposition design (intra- vs. inter-prompt)
- [ ] Exercise 2.3 — Iteration log (three revision cycles)

Module 3

- [ ] Exercise 3.1 — Pattern identification in production prompts
- [ ] Exercise 3.2 — Pattern selection for four tasks
- [ ] Exercise 3.3 — Few-shot vs. zero-shot design comparison

Module 4

- [ ] Exercise 4.1 — Token budget audit
- [ ] Exercise 4.2 — Prompt refactoring (eliminate duplication across prompt files)
- [ ] Exercise 4.3 — Anti-pattern identification and fix

Module 5

- [ ] Exercise 5.1 — RAG prompt design for a technical support chatbot
- [ ] Exercise 5.2 — Red-team your own prompt (three attack types)
- [ ] Exercise 5.3 — Evaluation pipeline design (5-case test suite)
- [ ] Exercise 5.4 — Cross-model portability audit

Module 6

- [ ] Exercise 6.1 — Plan-and-execute agent design
- [ ] Exercise 6.2 — Reflection loop applied to a previous exercise
- [ ] Exercise 6.3 — Multi-agent code review system design
- [ ] Exercise 6.4 — Memory management design for multi-session agent

Hands-On Labs


Deep-Dive Comparisons


Reference Guides

Architecture Decision Records


Research Extension Track (~25 hours total)

Track 1: Foundations of In-Context Learning

- [ ] Paper 1 — Language Models are Few-Shot Learners [Brown2020]
- [ ] Paper 2 — Training LMs with Human Feedback [Ouyang2022]

Track 2: Reasoning and Chain-of-Thought

- [ ] Paper 3 — Chain-of-Thought Prompting [Wei2022]
- [ ] Paper 4 — Large LMs are Zero-Shot Reasoners [Kojima2022]
- [ ] Paper 5 — Self-Consistency CoT [Wang2023]

Track 3: Agents and Tool Use

- [ ] Paper 6 — ReAct [Yao2023]
- [ ] Paper 7 — Reflexion [Shinn2023]
- [ ] Paper 8 — Generative Agents [Park2023]

Track 4: Safety and Robustness

- [ ] Paper 9 — Red Teaming LMs with LMs [Perez2022]
- [ ] Paper 10 — Indirect Prompt Injection [Greshake2023]

Track 5: Retrieval Augmentation and Evaluation

- [ ] Paper 11 — RAG for Knowledge-Intensive NLP [Lewis2020]
- [ ] Paper 12 — MT-Bench and Chatbot Arena [Zheng2023]

Track 6: Reasoning Models and Test-Time Compute

- [ ] Paper 13 — Test-Time Compute Scaling [Snell2024]
- [ ] Paper 14 — Process Supervision / Let's Verify Step by Step [Lightman2023]
- [ ] Paper 15 — System-2 Attention [Saha2024]

See Research Extension Track for full study guides.


Prompt Templates Used

Python Stack

- [ ] create-feature.prompt.md
- [ ] review-code.prompt.md
- [ ] debug-issue.prompt.md
- [ ] write-tests.prompt.md
- [ ] refactor-code.prompt.md
- [ ] generate-docs.prompt.md
- [ ] update-generate-readme.prompt.md

React + TypeScript Stack

- [ ] auditor-best-practices.prompt.md
- [ ] auditor-codebase-maturity.prompt.md
- [ ] auditor-cybersecurity-features.prompt.md
- [ ] auto-code-implementation.prompt.md
- [ ] create-chatbot-ollama.prompt.md
- [ ] safety-gate-llm.prompt.md

Node.js + TypeScript Stack

- [ ] create-api-endpoint.prompt.md
- [ ] review-code.prompt.md
- [ ] write-tests.prompt.md
- [ ] generate-openapi-spec.prompt.md

Last updated: February 2026. Check CHANGELOG.md for new additions.