AI Product Management · Claude Code · Building With AI Tools — April 10, 2026 · 8 min read

From Browser Tab to Terminal: How AI Became My Co-Builder

A product manager's journey from copy-pasting ChatGPT responses to building apps, analyzing 660K chatbot sessions, and transforming team workflows, all from the command line.


I studied mechanical engineering, worked in supply chain at a multinational, then became a product manager. I have never written production code in my life.

Last month, I built an Android app that reads my bank SMS messages, auto-parses every transaction, and visualizes my spending in a dashboard. I also analyzed 660,000 AI chatbot sessions, building a multi-layered behavioral taxonomy with streaming data pipelines and automated classifiers.

All from my terminal. No IDE. No Stack Overflow tabs. Just me and an AI tool running in my command line.

  • 0 lines of code written
  • 660K sessions analyzed
  • 3 apps shipped
  • 4 PMs now demo-first

This post is about how I got here. Not overnight, but through three distinct phases of AI adoption that fundamentally changed what I can accomplish as a non-software engineer.

Phase 1: The Question Box (Late 2022 – 2024)
AI as a browser tab. Type a question, copy the answer, paste it somewhere else. Useful but disconnected from real work.
Tool: ChatGPT. Limitation: context lost on every tab switch; could not touch files or data.

Phase 2: The Thinking Partner (2024 – Early 2025)
Better reasoning, longer conversations, structured analysis. But still the same copy-paste workflow.
Tool: Claude (browser). Limitation: the copy-paste ceiling; better output, same friction.

Phase 3: The Co-Builder (Mid 2025 – Present)
AI lives in the terminal. Reads files, writes code, runs commands, builds things directly on your machine.
Tools: Claude Code, OpenCode, Gemini CLI.

Phase 1: The Question Box (Late 2022 - 2024)

I was one of the early ChatGPT users when it launched in November 2022. Back then, it could not browse the internet, could not search for anything, and had no knowledge of events after its training cutoff. It was a conversational assistant drawing from static training data.

But even in that limited form, it was immediately useful for product work:

  • Drafting emails and stakeholder communications
  • Brainstorming feature ideas and exploring edge cases
  • Learning how large language models actually worked under the hood
  • Creating first drafts of PRDs and strategy documents

The workflow was simple: think of a question, switch to the ChatGPT tab, type it in, read the response, copy what was useful, paste it into whatever document I was working on. Repeat.

What this phase taught me: Using AI daily as a consumer gave me an intuition for what large language models can and cannot do. I learned where they hallucinate, where they excel, what kind of prompts produce useful output, and what kind produce garbage.

That understanding turned out to be the most valuable thing I gained from Phase 1. Not the emails it drafted or the ideas it generated, but the mental model of how these systems work.

Why? Because I was about to build one.

When I started leading the development of a curriculum-aligned AI chatbot for millions of students, every design decision was informed by hundreds of hours as a daily AI user. I knew firsthand that generic LLMs would give confidently wrong answers about our national curriculum. I knew that students would try to push the model outside its boundaries. I knew that the difference between a useful AI response and a useless one often came down to how the system was prompted, not which model was used. Being a power user made me a better AI product builder.

Phase 2: The Thinking Partner (2024 - Early 2025)

Sometime in 2024, I started using Claude. The shift was not dramatic at first. It was still a browser tab. Still the same copy-paste workflow.

But the quality of reasoning was noticeably different. Claude could hold longer conversations without losing the thread. It could work through complex strategy problems step by step. It was better at analysis frameworks and at pushing back when my thinking had gaps.

I started using it for:

  • Breaking down complex product strategy problems
  • Building analysis frameworks for user behavior data
  • Working through prioritization decisions with structured reasoning
  • Longer, multi-turn conversations that built on previous context

The limitation I hit: The output improved, but the workflow stayed exactly the same. I was still switching between my terminal, my editor, my data tools, and this browser tab. Context was constantly lost. Every new conversation started from zero. And I still could not get AI to actually do anything with my files or data. I could only describe them in text and hope the response was applicable.

This was the copy-paste ceiling. And I did not realize how much it was holding me back until I broke through it.

Phase 3: The Co-Builder (Mid 2025 - Present)

The shift happened because I saw someone demo a CLI-based AI tool. I do not remember the exact video or post, but I remember the reaction: “Wait, it can just… read your files and make changes directly?”

Curiosity took over. I installed Claude Code first, then OpenCode, then Gemini CLI. Within a week, my relationship with AI had fundamentally changed.

The difference is structural, not incremental. In the browser, AI is a conversation partner. In the terminal, AI is a co-builder. It lives where your work lives.

Capability | Browser | CLI
Answer questions about code | ✓ | ✓
Read your project files | ✗ | ✓
Edit and create files directly | ✗ | ✓
Run commands and see output | ✗ | ✓
Maintain context across your codebase | ✗ | ✓
Chain multi-step operations | ✗ | ✓
Debug errors in real time | ✗ | ✓
Work without copy-pasting | ✗ | ✓

Here are three concrete things I built that would have been impossible with browser-based AI:

📊 660K Session Analysis: Built a streaming data pipeline to analyze 4.6 GB of AI chatbot conversations with a 4-layer behavioral taxonomy. (660K sessions · 1.2M conversations · 4.6 GB of data) Stack: Python, Claude Code, streaming pipeline.

📱 Spend Tracker App: Android app that reads bank SMS, auto-parses transactions (amount, merchant, date), and visualizes spending in a dashboard. (0 lines of code written · Kotlin · Android) Stack: Kotlin, SMS parsing, Claude Code.

🌐 Portfolio Website: Full portfolio site with interactive data visualizations, animated components, a blog with series support, and automated deployment. (Astro framework · React components · Vercel deployment) Stack: Astro, React, MDX, Tailwind.

1. Analyzing 660,000 AI Chatbot Sessions

For an AI product I built that serves over 200K users, I needed to analyze 660,000 sessions containing 1.2 million conversations. With 4.6GB of raw data, even loading it into a browser-based tool was impractical.

With Claude Code, I described what I wanted to understand and built this entire pipeline from my terminal:

Data Pipeline Built From the Terminal

  1. 📥 Raw data: 4.6 GB JSON · 660K sessions
  2. 🌀 Stream & join: memory-safe processing
  3. 🧠 Classify prompts: Bloom's taxonomy · Bangla + English
  4. 🔍 Detect patterns: 8 session behavior types
  5. 👤 User archetypes: learner profiles across sessions
  6. 📊 Visualize: charts for LinkedIn & portfolio

The entire pipeline was built through terminal conversations.
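To make the "stream & join" step concrete, here is a minimal Python sketch of how a file that size can be processed without loading it all into memory. It assumes a JSON Lines export; the file path and field names (`user_id`) are hypothetical stand-ins, not the real schema.

```python
import json
from collections import Counter

def stream_records(path):
    """Yield one parsed record per line so the full file never sits in memory."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                continue  # skip corrupt lines rather than crash mid-stream

def count_sessions_per_user(path):
    """Aggregate a per-user session count in a single streaming pass."""
    counts = Counter()
    for record in stream_records(path):
        counts[record.get("user_id", "unknown")] += 1
    return counts
```

Each downstream stage (classification, pattern detection) can consume the same generator, so memory use stays flat regardless of file size.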

The analysis revealed patterns I had not anticipated: rural female students showed more quick clarification behavior, struggling learner patterns were twice as common in rural areas, and students used the AI for fundamentally different purposes depending on their subject.

2. A Spend-Tracking Android App

I wanted a simple personal tool: read my Standard Chartered bank SMS, parse each transaction, and show spending in a dashboard. Instead of writing a PRD and handing it to a developer, I described what I wanted to an AI tool in my terminal.

It built the entire Android app. I did not write a single line of Kotlin. But I made every product decision.
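To illustrate the core parsing idea, here is a rough sketch in Python (the actual app does this in Kotlin). The SMS template and the regex are invented for illustration; they are not Standard Chartered's real message format, which needs one pattern per template the bank actually sends.

```python
import re
from datetime import datetime

# Hypothetical bank-SMS template: "BDT 1,250.50 spent at MERCHANT on 05-04-2026"
SMS_PATTERN = re.compile(
    r"BDT\s*(?P<amount>[\d,]+\.?\d*)\s+spent at\s+(?P<merchant>.+?)"
    r"\s+on\s+(?P<date>\d{2}-\d{2}-\d{4})"
)

def parse_transaction(sms: str):
    """Return (amount, merchant, date) from a matching SMS, or None."""
    m = SMS_PATTERN.search(sms)
    if not m:
        return None  # non-transaction SMS (OTPs, promos) are ignored
    return (
        float(m.group("amount").replace(",", "")),
        m.group("merchant"),
        datetime.strptime(m.group("date"), "%d-%m-%Y").date(),
    )
```

The parsed tuples then feed straight into the dashboard's aggregation layer.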

The Build Loop: How I Ship Without Writing Code

  1. 💬 Describe (PM)
  2. 🔨 Build (AI)
  3. 📲 Test (PM)
  4. 🔄 Refine (both)

↻ Repeat until it matches the vision.

The iterative loop was faster and more direct than anything I had experienced, even working with human developers. Describe a feature, test it on my phone, describe what to change, see it fixed immediately.

3. My Portfolio Website

The site you might be reading this on was built the same way. Astro, React islands, MDX content, Vercel deployment. The interactive charts on my project pages, the scroll-triggered animations, the data visualization components: all built through terminal conversations.

The architecture decisions were mine. The implementation was handled by AI. When something did not look right on mobile, I described the change and iterated until it matched my vision.

What It Did to My Team

The personal projects were impressive to me, but the biggest impact was on how my product team works.

We used to start every feature with a dense, multi-page PRD. Stakeholders would read it (or more often, skim it), try to mentally visualize the user flows, and give feedback based on their imagination of what the product would look like.

Now, we demo first.

Before: the PRD-first workflow
  1. Write PRD (hours of writing)
  2. Static flows (flowcharts & wireframes)
  3. Mental model (stakeholders imagine)
  4. Feedback (based on imagination)

After: the demo-first workflow
  1. Build demo (AI-assisted prototype)
  2. Click through (real user flows)
  3. Live feedback (immediate & specific)
  4. Document (after alignment)

Before any stakeholder review, the PM responsible builds a functioning prototype using whichever AI tool they prefer. Some use Cursor. Some use v0.dev. Some use Lovable. I use Claude Code. The tool does not matter. What matters is that everyone clicks through actual user flows during the meeting instead of reading about them.

The shift from “writing first” to “demo first” had a surprising side effect: it sharpened our product thinking. When you actually click through what you have designed, the gaps in the user journey become immediately obvious in ways that no document could reveal.

PRDs are not dead. They still matter for complex decisions, compliance requirements, and long-term documentation. What changed is the order: demo first, document later.

Getting Started: A Guide for Curious PMs

If you are a PM or non-engineer who wants to move beyond browser AI, here is what I would suggest:

  • Claude Code (beginner): CLI AI tool that understands your codebase and makes changes directly. The most intuitive starting point. Best for: full projects, data analysis, multi-file changes.
  • Cursor (beginner): AI-powered code editor. Familiar IDE experience with AI built in. Lower barrier than pure CLI. Best for: code editing, frontend work, visual projects.
  • v0.dev (beginner): Generates UI components from text descriptions. No setup required. Great for quick prototypes. Best for: UI prototypes, component design, stakeholder demos.
  • OpenCode (intermediate): Open-source terminal AI. Lightweight alternative with provider flexibility. Best for: quick tasks, model experimentation.

Start with one small personal project. Do not start with work. Build something for yourself: a personal site, a data analysis, a small utility. The stakes are low and the learning is high.

Learn to give good context. The biggest skill shift is not technical. It is learning to describe what you want clearly and completely. Think of it as writing a really good user story, but for an AI pair programmer.

Do not expect magic. AI tools make mistakes. They write buggy code. They misunderstand requirements. The skill is in reviewing, guiding, and iterating. You are the product manager of the conversation.

Try multiple tools. Each has strengths. Cursor feels familiar if you have used an IDE before. v0.dev requires zero setup for UI work. Claude Code and OpenCode give you the most control from the terminal. Experiment and find what fits your workflow.

Lessons Learned and What Is Next

The journey from browser to terminal was not smooth. Here is what I got wrong and where I think this is heading.

What I Got Wrong

⚠️ AI amplifies judgment; it does not replace it. If you cannot explain what the code should do in plain language, you are not ready to have AI write it. I shipped broken things early on because I trusted output I could not evaluate.

📚 The terminal has a learning curve. File systems, project structures, dependencies, build steps: these are prerequisites non-engineers must build before CLI tools become productive.

🎯 Expect inconsistency. The same prompt produces different-quality output depending on context window state and project complexity. Working with that variability is its own skill.

What Is Next

🚀 PMs will validate ideas independently. The boundary between defining what to build and building it is dissolving. PMs who can prototype will have more informed conversations with engineering teams.

👥 Demo-first is becoming a core PM skill. Every PM on my team is now expected to be comfortable building prototypes with AI tools. It is not a nice-to-have anymore.

🤖 From co-builder to autonomous agent. AI tools are moving toward handling multi-step workflows independently. We are in the co-builder phase now; the autonomous phase is next.

The question is no longer whether PMs should learn to use AI tools beyond the browser. The question is how quickly you can make the shift.


If you want to see what I have built, visit rifatbinalam.com. If you want to talk about AI tools for product teams, reach out on LinkedIn.

Rifat Bin Alam Rohit


Product Lead at Shikho with 5+ years building edtech and logistics products. Currently leading AI features used by 200K+ students. Teaching 1,700+ learners about product management and data storytelling.