How to Ship Software Without Touching Your Keyboard (Seriously)

Ryan Carson’s 2-file prompting system: A playbook for turning LLMs into reliable junior engineers and shipping features fast.


The Human

Ryan Carson is not a 22-year-old cracked FAANG engineer. 

He earned his compsci degree a quarter of a century ago and spent most of the time since building companies, not grinding LeetCode.

Through countless reps, he’s cracked a formula for shipping features fast without doing most of the typing himself. Ryan is a five-time founder and Founder in Residence at Amp. Before that, he started Treehouse, the coding school that’s taught more than a million people.

The Loop

This playbook dissects Ryan’s GitHub-famous, two-file AI agent system—the one he uses to ship actual product features in production.

All of it is tool-agnostic, with agents combing through your codebase like a junior dev hopped up on Celsius.

He speaks his first prompt into existence—straight from the dome to the terminal. Then he forces the agent to ask a few clarifying questions before it’s allowed to turn the conversation into a PRD a junior programmer could actually ship against. From there, the machine gets its marching orders: a two-level task list, relevant files, and bite-sized steps small enough for the agent to tackle without drifting.

The agent codes, tests, and checks unified logs in short loops, and Carson steps in to slap bad ideas out of the way and enforce product taste. 

“As an engineer, don’t outsource your thinking,” he says. The model does the typing; the human still owns the judgment. 

Use Cases

If you see yourself in any of these, this playbook applies:

  • Founder/early eng team: 
    • Don’t burn out your one senior engineer. This loop lets your humans focus on architecture and product calls.
  • Staff/principal engineer: 
    • You’ve said “AI code is slop” before. Turns out you were just one-shotting Cursor prompts in 2023. This system sets up bowling bumpers—PRDs, task lists, logs—so the agent behaves like a sane junior and delivers a strike.
  • Team with a huge, crusty codebase: 
    • You're sitting on a 15-year-old monolith. It's more like stinky cheese than fine wine. Ryan’s seen AI engineering used to replatform legacy stacks (e.g., Booking.com moving from Perl to modern code) faster than human-only teams.
  • Non-traditional/reborn engineers: 
    • You used to code, or never did. You want to become “AI-fluent” without going back to school. Ryan literally re-skilled himself and argues that anyone willing to suffer in order to grow can do the same.
“People thought agents would end the work of engineers. It’s actually the opposite—a full 180°. If you’re willing to be technical, you can do 10x more. And if you’re not, you’re absolutely going to be left behind.” 
— Ryan
Founder in Residence, Amp

01: Whisper Sweet Nothings

Your mouth writes better than your fingers.

First, know what feature you want. Got it? Good. 

The way Ryan brings every feature to life is absurdly lightweight: he hits a voice-to-text tool (Wispr Flow) and speaks a messy, natural description of what he wants. 

Why voice?

  • When we type, we over-optimize for brevity and precision.
  • When we talk, we naturally include more context, assumptions, and edge cases.
  • Agents love that extra context—it gives them surface area to ask good questions later.

Ryan pairs this with a simple rule: Talk to the agent when you’re brainstorming; fall back to structured prompts when the stakes are higher (like pre-deploy checks).

“Don’t be obsessed with every single little detail,” he says.

Human-Verified Prompt

“We need to ship a feature that [high-level outcome]. Our stack is [frameworks/languages/infra]. We’re using [auth/payments/key dependencies]. Generate a rough PRD for this feature, but do not finalize it yet. I’ll refine it after you ask clarifying questions.”


Pro Tip: Engineers love tools. Founders love crazy new stacks. Ryan’s take is as unsexy as picturing grandpa in the shower: keep your stack boring. It will help you fight analysis paralysis and just build, build, build.

02: Interrogate + Align

Good agents are like entry-level hires: they won’t start till they understand the assignment.

Most people fail with coding agents because they do this:

“Build X.”
*generates 200 lines of code…*
“This sucks. AI sucks. Bye.”

Ryan solves this with a strict but straightforward prompt from his (very popular) open-source repo, where the agent is not allowed to write a PRD until it asks 3–5 clarifying questions. His prompt below does half the work for you.

Human-Verified Prompt

"# Rule: Generating a Product Requirements Document (PRD) ‍ ## Goal To guide an AI assistant in creating a detailed Product Requirements Document (PRD) in Markdown format, based on an initial user prompt. The PRD should be clear, actionable, and suitable for a junior developer to understand and implement the feature. ‍ ## Process 1. **Receive Initial Prompt:** The user provides a brief description or request for a new feature or functionality. 2. **Ask Clarifying Questions:** Before writing the PRD, the AI *must* ask only the most essential clarifying questions needed to write a clear PRD. Limit questions to 3-5 critical gaps in understanding. The goal is to understand the "what" and "why" of the feature, not necessarily the "how" (which the developer will figure out). Make sure to provide options in letter/number lists so I can respond easily with my selections. 3. **Generate PRD:** Based on the initial prompt and the user's answers to the clarifying questions, generate a PRD using the structure outlined below. 4. **Save PRD:** Save the generated document as prd-[feature-name].md` inside the `/tasks` directory. ‍ ## Clarifying Questions (Guidelines) Ask only the most critical questions needed to write a clear PRD. Focus on areas where the initial prompt is ambiguous or missing essential context. Common areas that may need clarification: ‍ * **Problem/Goal:** If unclear - "What problem does this feature solve for the user?" * **Core Functionality:** If vague - "What are the key actions a user should be able to perform?" * **Scope/Boundaries:** If broad - "Are there any specific things this feature *should not* do?" * **Success Criteria:** If unstated - "How will we know when this feature is successfully implemented?" ‍ **Important:** Only ask questions when the answer isn't reasonably inferable from the initial prompt. Prioritize questions that would significantly impact the PRD's clarity. ‍ ### Formatting Requirements - **Number all questions** (1, 2, 3, etc.) - **List options for each question as A, B, C, D, etc.** for easy reference - Make it simple for the user to respond with selections like "1A, 2C, 3B" ‍ ### Example Format ``` 1. What is the primary goal of this feature? A. Improve user onboarding experience B. Increase user retention C. Reduce support burden D. Generate additional revenue ‍ 2. Who is the target user for this feature? A. New users only B. Existing users only C. All users D. Admin users only ‍ 3. What is the expected timeline for this feature? A. Urgent (1-2 weeks) B. High priority (3-4 weeks) C. Standard (1-2 months) D. Future consideration (3+ months) ``` ‍ ## PRD Structure The generated PRD should include the following sections: ‍ 1. **Introduction/Overview:** Briefly describe the feature and the problem it solves. State the goal. 2. **Goals:** List the specific, measurable objectives for this feature. 3. **User Stories:** Detail the user narratives describing feature usage and benefits. 4. **Functional Requirements:** List the specific functionalities the feature must have. Use clear, concise language (e.g., "The system must allow users to upload a profile picture."). Number these requirements. 5. **Non-Goals (Out of Scope):** Clearly state what this feature will *not* include to manage scope. 6. **Design Considerations (Optional):** Link to mockups, describe UI/UX requirements, or mention relevant components/styles if applicable. 7. 
**Technical Considerations (Optional):** Mention any known technical constraints, dependencies, or suggestions (e.g., "Should integrate with the existing Auth module"). 8. **Success Metrics:** How will the success of this feature be measured? (e.g., "Increase user engagement by 10%", "Reduce support tickets related to X"). 9. **Open Questions:** List any remaining questions or areas needing further clarification. ‍ ## Target Audience Assume the primary reader of the PRD is a **junior developer**. Therefore, requirements should be explicit, unambiguous, and avoid jargon where possible. Provide enough detail for them to understand the feature's purpose and core logic. ‍ ## Output * **Format:** Markdown (`.md`) * **Location:** `/tasks/` * **Filename:** `prd-[feature-name].md` ‍ ## Final instructions 1. Do NOT start implementing the PRD 2. Make sure to ask the user clarifying questions 3. Take the user's answers to the clarifying questions and improve the PRD"


“In the end, an agent is very simple. It's a loop with a fixed length of memory… Just like if a human walked into your office, you wouldn’t barf the entire information of the company on them, give them all the books, and expect them to be successful. You'd have to say, 'here’s the book you need for this task—now, rock and roll.'” 
— Ryan
Founder in Residence, Amp

03: Bend The Bot to Your Will

Once the PRD is solid, Ryan doesn’t say “OK, now build it.” That’s how you get wet spaghetti.

Instead, he uses his second file: generate-tasks.md. Its job is to read the PRD and output a checkable task list:

  • Relevant files: what to touch/create (code + tests).
  • Parent tasks: 0.0, 1.0, 2.0… with human-readable titles.
  • Subtasks: 1.1, 1.2… with concrete actions.
  • Progress model: every subtask is a checkbox the agent updates as it goes.

There are two key constraints baked into his prompt:

  • 0.0 is always “Create feature branch.” You keep new work isolated by default.
  • The agent must pause after creating high-level tasks and ask for a thumbs-up before breaking them into subtasks.
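
Condensed, the generated file might look like this (the feature and file names are hypothetical; the full template lives in the prompt below):

```markdown
## Relevant Files

- `app/profile/page.tsx` - Profile editing page.
- `app/profile/page.test.tsx` - Unit tests for the page.

## Tasks

- [x] 0.0 Create feature branch
  - [x] 0.1 `git checkout -b feature/profile-editing`
- [x] 1.0 Build the profile form UI
  - [x] 1.1 Add the form component with validation
- [ ] 2.0 Wire the form to the API
  - [ ] 2.1 Create the update endpoint
  - [ ] 2.2 Handle error and loading states
```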

Human-Verified Prompt

"# Rule: Generating a Task List from User Requirements ## Goal To guide an AI assistant in creating a detailed, step-by-step task list in Markdown format based on user requirements, feature requests, or existing documentation. The task list should guide a developer through implementation. ## Output - **Format:** Markdown (`.md`) - **Location:** `/tasks/` - **Filename:** `tasks-[feature-name].md` (e.g., `tasks-user-profile-editing.md`) ## Process 1. **Receive Requirements:** The user provides a feature request, task description, or points to existing documentation 2. **Analyze Requirements:** The AI analyzes the functional requirements, user needs, and implementation scope from the provided information 3. **Phase 1: Generate Parent Tasks:** Based on the requirements analysis, create the file and generate the main, high-level tasks required to implement the feature. **IMPORTANT: Always include task 0.0 "Create feature branch" as the first task, unless the user specifically requests not to create a branch.** Use your judgement on how many additional high-level tasks to use. It's likely to be about 5. Present these tasks to the user in the specified format (without sub-tasks yet). Inform the user: "I have generated the high-level tasks based on your requirements. Ready to generate the sub-tasks? Respond with 'Go' to proceed." 4. **Wait for Confirmation:** Pause and wait for the user to respond with "Go". 5. **Phase 2: Generate Sub-Tasks:** Once the user confirms, break down each parent task into smaller, actionable sub-tasks necessary to complete the parent task. Ensure sub-tasks logically follow from the parent task and cover the implementation details implied by the requirements. 6. **Identify Relevant Files:** Based on the tasks and requirements, identify potential files that will need to be created or modified. List these under the `Relevant Files` section, including corresponding test files if applicable. 7. **Generate Final Output:** Combine the parent tasks, sub-tasks, relevant files, and notes into the final Markdown structure. 8. **Save Task List:** Save the generated document in the `/tasks/` directory with the filename `tasks-[feature-name].md`, where `[feature-name]` describes the main feature or task being implemented (e.g., if the request was about user profile editing, the output is `tasks-user-profile-editing.md`). ‍ ## Output Format The generated task list _must_ follow this structure: ## Relevant Files - `path/to/potential/file1.ts` - Brief description of why this file is relevant (e.g., Contains the main component for this feature). - `path/to/file1.test.ts` - Unit tests for `file1.ts`. - ‘path/to/another/file.tsx` - Brief description (e.g., API route handler for data submission). - `path/to/another/file.test.tsx` - Unit tests for `another/file.tsx`. - `lib/utils/helpers.ts` - Brief description (e.g., Utility functions needed for calculations). - `lib/utils/helpers.test.ts` - Unit tests for `helpers.ts`. ### Notes - Unit tests should typically be placed alongside the code files they are testing (e.g., `MyComponent.tsx` and `MyComponent.test.tsx` in the same directory) - Use `npx jest [optional/path/to/test/file]` to run tests. Running without a path executes all tests found by the Jest configuration. ## Instructions for Completing Tasks **IMPORTANT:** As you complete each task, you must check it off in this markdown file by changing `- [ ]` to `- [x]`. This helps track progress and ensures you don't skip any steps. 
Example: - [ ] 1.1 Read file` → `- [x] 1.1 Read file` (after completing) Update the file after completing each sub-task, not just after completing an entire parent task. ## Tasks - [ ] 0.0 Create feature branch - [ ] 0.1 Create and checkout a new branch for this feature (e.g., `git checkout -b feature/[feature-name]`) - [ ] 1.0 Parent Task Title - [ ] 1.1 [Sub-task description 1.1] - [ ] 1.2 [Sub-task description 1.2] - [ ] 2.0 Parent Task Title - [ ] 2.1 [Sub-task description 2.1] - [ ] 3.0 Parent Task Title (may not require sub-tasks if purely structural or configuration) ## Interaction Model The process explicitly requires a pause after generating parent tasks to get user confirmation ("Go") before proceeding to generate the detailed sub-tasks. This ensures the high-level plan aligns with user expectations before diving into details. ## Target Audience Assume the primary reader of the task list is a **junior developer** who will implement the feature."


“The agent needs to be able to know everything I know.”
— Ryan
Founder in Residence, Amp

04: Guardrails

Ryan’s biggest guardrails are:

  1. Give your agent eyeballs
  2. Never one-shot

Give your agent eyeballs

A lot of 1x developers today run their agent, watch it crash, and then spend precious time fetching the dead bodies (copy-pasting error messages back into the model like some overworked medieval clerk). That’s clunky, slow, and beneath the work of a 10x engineer.

Ryan refuses to do that. He pipes both the Next.js frontend logs and the server logs into a single stream the agent can read, then gives it one simple tool: “Check the last [#] lines of logs.” 

Now, when something breaks, he just types /cl and the bot investigates its own mess. The AI sees the crash before he does—and fixes it without making him babysit.

The principle: As long as the feedback loop is machine-readable (logs, test output, snapshots), you can keep humans out of the clerical work.
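
Ryan’s exact wiring isn’t published here, so treat the following as a minimal Node/TypeScript sketch of the idea: it assumes a Next.js app started with `npx next dev` and a hypothetical `logs/unified.log` path, spawns the dev server, and appends everything it prints to one timestamped file the agent can read on demand.

```typescript
// unified-log.ts - a minimal sketch, not Ryan's actual setup.
// Spawns the Next.js dev server and appends its stdout/stderr to a single
// timestamped log file that an agent can read with one command.
// Assumption: true browser-side logs would need their own forwarding hook
// (e.g., a small /api/log route the client POSTs to), omitted here.
import { spawn } from "node:child_process";
import { appendFileSync, mkdirSync } from "node:fs";

const LOG_FILE = "logs/unified.log"; // hypothetical path
mkdirSync("logs", { recursive: true });

// Prefix each line with a timestamp and a source label so the agent can
// tell the streams apart when it tails the file.
function pipeToLog(label: string, stream: NodeJS.ReadableStream): void {
  stream.on("data", (chunk: Buffer) => {
    for (const line of chunk.toString().split("\n").filter(Boolean)) {
      appendFileSync(LOG_FILE, `${new Date().toISOString()} [${label}] ${line}\n`);
    }
  });
}

const dev = spawn("npx", ["next", "dev"], { shell: true });
pipeToLog("server", dev.stdout!);
pipeToLog("server-err", dev.stderr!);
```

With something like that in place, the agent’s log tool reduces to reading the tail of `logs/unified.log` (e.g., `tail -n 200 logs/unified.log`), which is all the prompt below asks it to do.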

Human-Verified Prompt

Adjust to your environment (Next.js, Rails, etc.).

“You have access to a log file at [path or command to read logs]. When I run /check-logs, you should:

1. Read at least the last 100–200 lines of the unified logs (both server and client).
2. Identify any errors, warnings, or stack traces relevant to the feature branch [branch-name].
3. Summarize the root-cause hypothesis in 2–3 sentences.
4. Propose 1–3 concrete code changes to fix the issue, referencing specific files and functions.
5. If logs are insufficient, tell me exactly what additional logging or instrumentation to add.

Do not ask me to paste logs; always read them via the logging command.”


Never One-Shot

You miss 100% of the one-shots you take. 

Do not one-shot anything. Not the PRD, not the plan, definitely not the implementation.

What does that mean?

  • If a feature can’t be shipped to prod in a day, it’s too big—break it down.
  • The agent has a to-do list, checking off tasks one by one.
  • The human jumps in to:
    • Remove useless tasks (AI will often over-document, so you gotta shave the yak).
    • Insert design checks (e.g., “Design the component UI before you wire it up”).
    • Stop runaway behavior early.

Use this when you’re ready for the agent to start coding:

Human-Verified Prompt

“You are an AI engineer working on branch [branch-name]. You have a markdown task list for this feature.

- Work one parent task at a time. Never start a new parent task until the current one is implemented and tested.
- After each subtask you complete, update the markdown file by changing - [ ] to - [x].
- After each parent task, run the relevant tests and /check-logs to verify nothing regressed.
- When you’re unsure about UX or behavior, pause and ask me a specific question instead of guessing.
- If you detect changes that should be a separate feature, propose a follow-up ticket instead of expanding scope.
- Keep the total scope to something that could reasonably be shipped to production in one day by a human engineer. If it exceeds that, propose how to split into smaller features.”


“Simplify. Build. Optimize. Ship. Don’t try to optimize as the first step. AI overwhelm is crazy.”
— Ryan
Founder in Residence, Amp

05: Spread Like A Virus

Take a quick sec for some thought-fluencing.

Anyone willing to do hard, sometimes painful work can become an AI engineer. You don’t need a wicked high IQ. You need curiosity, reps, and a willingness to be “elastic,” Ryan says.

If you’re leading an eng team, you shouldn’t just “hire some AI people,” he says. You need to:

  1. Learn this loop.
  2. Train your best ICs on it.
  3. Let them become internal coaches who help everyone else adopt the pattern.
  4. Move from “AI experiments” to “this is how we ship.”

The Takeaway

When you use this system the way it’s designed, engineering suddenly gets way more efficient. 

Instead of spending days grinding through boilerplate, an agent can push a day’s worth of work forward in a single loop—while you handle the judgment calls and inject taste into everything else.

The real unlock is leverage: it comes when you and your people stop drowning in repetitive work and start focusing on the judgment, architecture, and product taste only humans can supply.

Ready to Tenex your business?

If this playbook sparked ideas, imagine what you could build with our team behind you.

From AI engineering squads to full org-wide transformation, we help companies execute with speed, clarity, and impact.

Let's solve your biggest bottlenecks—together. Get started here.
