This AI Codes From Your Phone

A new system unveiled by AI expert Cole Medin lets you manage and deploy code from anywhere, using apps like Telegram and GitHub. It's a radical shift that turns your AI assistant into a true remote employee.


Your IDE Is Now an Option, Not a Prison

Most developers still live and die by a single machine. Your keyboard, your local clone, your carefully tuned VS Code or JetBrains setup—walk away from that, and your productivity drops to near zero. Remote desktop hacks, underpowered laptops, and half-baked web IDEs only remind you how tightly your workflow is chained to one box.

AI was supposed to loosen those chains, but today’s tools mostly add new ones. GitHub Copilot lives in your editor. ChatGPT and Claude sit in browser tabs, cut off from your filesystem and build tools. Copilot-style plugins in VS Code, Zed, or JetBrains give you autocomplete and chat, but each one stays siloed inside its host, with its own context, its own quirks, and no shared memory of how you actually ship software.

Every time you switch devices, you rebuild the same fragile stack. You re-auth your AI tools, re-open the right repo, re-teach the assistant your architecture, and hope your extensions sync correctly. Want to fix a production bug from your phone or kick off a refactor from a tablet? You screenshot logs into an AI chat and manually paste patches back into GitHub, praying you don’t miss a file.

Developers feel this as constant, low-grade friction. Your “AI pair programmer” can’t follow you into Slack, your terminal, or your CI system. You juggle:

  • A desktop IDE plugin
  • A browser-based AI chat
  • A separate mobile experience that knows nothing about your code

A different model is starting to emerge: remote agentic coding. Instead of AI trapped in a plugin, you get a persistent agent that lives next to your repos and tools, reachable from anywhere. You talk to it from Slack, Telegram, or a browser on your phone; it talks to Git, your test runner, and your editor, no matter where you are.

Cole Medin is pushing that model to its logical extreme. His new remote agentic coding system, unveiled in a “Live Unveiling” stream to viewers from Greece, Brazil, Wyoming, and beyond, reframes the IDE as just one optional client. The desktop stops being a prison; it becomes a window into an AI-driven workflow that starts wherever you happen to be.

The Unveiling: Code from a Coffee Shop


Snow fell outside Cole Medin’s Minnesota window as 257 viewers packed into YouTube chat, dropping locations from Greece and Brazil to Wyoming. Medin’s voice was still hoarse from a week of workshops, but the energy was dialed to launch-mode: a “remote agentic coding system” he’d been hyping for weeks was finally going live.

Instead of another abstract AI concept, Medin opened GitHub on stream, flipped a private repo to public in real time, and pasted the link straight into chat. He framed it as a one-shot drop: available only during the Live Unveiling, with a brief encore window on Cyber Monday for people in impossible time zones.

At the center of his pitch sits a deceptively simple idea: connect any app, to any AI coding assistant, to any codebase. If you live in Telegram, Slack, or GitHub, you kick off work from there. If you prefer Claude Code, Gemini, or something homegrown, the system routes your request to that assistant, wrapped in the right project context.

Medin argues this breaks the lock-in of traditional IDE workflows. Instead of being shackled to a single machine and editor, your “IDE” becomes a thin endpoint: a chat app, a terminal, or a web UI that speaks a shared agentic protocol back to the system.

The live demo made that concrete. Medin pulled out his phone, opened Telegram, and fired off a natural-language request to modify a real codebase wired into the system. On stream, viewers watched the agent receive the task, analyze the repository, generate changes, and surface diffs the way a human collaborator might.

No remote desktop, no SSH juggling, no cloud IDE login. A Telegram message from a phone triggered a full coding workflow on a machine miles away, with the AI assistant handling file edits, reasoning, and validation.

Medin repeatedly stressed that this is not a slide deck fantasy. The repo shipped with runnable code, setup instructions, and a working pipeline that viewers could clone and adapt. For all the hype around “agentic coding,” this demo planted a flag: remote AI pair programming can already leave the lab and run from your pocket.

It's Not the Prompt, It's the Protocol

Context, not clever wording, drives Cole Medin’s remote agentic coding system. He calls his approach Context Engineering, and he treats it less like prompting and more like designing an API contract between developer, tools, and model.

Basic prompting asks an LLM to “add OAuth” or “fix this bug” with a few sentences of guidance. Context Engineering instead feeds the agent a structured dossier: project architecture, dependency graph, coding standards, test strategy, and concrete examples of “good” and “bad” changes.

Medin’s system wires this context into every request. Before the model writes a single line, it knows the monorepo layout, shared libraries, feature flags, and how CI enforces quality.
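
To make that concrete, here is a minimal sketch of what such a dossier might look like as a structured payload. The field names are illustrative guesses, not Medin’s actual schema; the point is that the model receives structure, not a one-line prompt:

```python
# Illustrative Context Engineering bundle -- field names and example
# PRs are hypothetical, not Medin's actual schema.
context_bundle = {
    "task": "Add OAuth login to the web client",
    "architecture": "Monorepo: apps/web (React), services/api (FastAPI), libs/shared",
    "coding_standards": [
        "Use existing validators in libs/shared/validation",
        "Every new endpoint needs a unit test",
    ],
    "examples": {
        "good_change": "Reused SessionStore instead of adding a new cache",
        "bad_change": "Duplicated retry logic already in libs/shared/http",
    },
    "ci_gates": ["pytest", "ruff", "type-check"],
}
```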

That structure turns the assistant from autocomplete-on-steroids into a production-ready collaborator. Rather than hallucinating new patterns, it reuses existing abstractions, updates related modules, and edits tests in the same PR.

Medin pushes this further with Agentic RAG, which he frames as the antidote to “snippet amnesia.” Traditional RAG sprays the model with loosely related chunks; Agentic RAG sends an agent to hunt down exactly what matters.

Agents run targeted searches over the filesystem, docs, and git history, then assemble a coherent narrative: how an auth middleware works, why a migration added a column, which feature flags gate a flow. The model sees a storyline, not a pastebin.

That distinction matters in large codebases. A login change might touch HTTP handlers, shared validators, SSO adapters, and front-end forms; Agentic RAG surfaces all four, so the agent patches the real system instead of a single file.
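
A deliberately naive sketch shows the retrieval half of that idea: targeted lookups over files and git history instead of one-shot similarity search. In Medin’s system an agent would also decide follow-up queries from each hit, which this stub omits:

```python
import subprocess
from pathlib import Path

def search_code(query: str, root: str = ".") -> list[str]:
    """Naive targeted search: paths of files whose text mentions the query."""
    hits = []
    for path in Path(root).rglob("*.py"):
        try:
            if query.lower() in path.read_text(errors="ignore").lower():
                hits.append(str(path))
        except OSError:
            continue
    return hits

def search_git_history(query: str) -> list[str]:
    """Commit lines whose messages mention the query."""
    out = subprocess.run(
        ["git", "log", "--oneline", "-i", f"--grep={query}"],
        capture_output=True, text=True,
    )
    return out.stdout.splitlines()

def gather_context(seed_queries: list[str]) -> dict[str, list[str]]:
    """One targeted pass per lead, merged into a single dossier."""
    return {q: search_code(q) + search_git_history(q) for q in seed_queries}

# e.g. gather_context(["auth middleware", "session cookie", "feature flag"])
```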

Underneath, standardized protocols make this portability possible. Model Context Protocol (MCP) defines how tools expose capabilities—filesystem access, search, test runners—so any compliant agent can plug in.

Agent Communication Protocol (ACP) handles how agents coordinate across environments. One agent can run inside a cloud workspace, another inside Zed, a third on a CI worker, all negotiating over a shared protocol instead of bespoke glue code.
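
The official MCP Python SDK makes the tool side easy to picture. A minimal, hypothetical server might expose a test runner like this (the server name and tool are illustrative):

```python
# Minimal MCP server sketch using the Python SDK's FastMCP helper.
# "repo-tools" and run_tests are illustrative names; any MCP-compliant
# agent can discover and call the exposed tool.
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("repo-tools")

@mcp.tool()
def run_tests(path: str) -> str:
    """Run pytest on a path and return the tail of its output."""
    result = subprocess.run(["pytest", path, "-q"],
                            capture_output=True, text=True)
    return result.stdout[-2000:]

if __name__ == "__main__":
    mcp.run()
```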

Medin’s demo showed Claude Code and Gemini CLI threads operating on the same repo, mediated by these protocols, while a developer on a phone approved or rejected changes in real time. No IDE lock-in, just protocol-level interoperability.

Researchers exploring similar architectures, like the Matsen Group’s “Agentic Coding from First Principles,” echo the same thesis: protocols and context, not prompts alone, unlock serious agentic development.

Under the Hood: A Universal AI Translator

Forget the glossy demo; Medin’s remote agentic coding system lives or dies on a surprisingly simple architecture. At its center sits a persistent server that does one job: listen for structured commands, translate them, and route them to whatever AI and tooling stack you’ve wired in.

That central process exposes a clean JSON-based protocol. Every request becomes a standardized “intent” object: who asked, what they want, what repo or project it touches, and which tools are allowed. Every response flows back through the same pipe, whether it came from Claude, a shell script, or a GitHub Action.
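
As a rough illustration—these field names are guesses at the shape, not Medin’s actual schema—an intent might look like:

```python
# Hypothetical intent object; the chat id and repo are invented examples.
intent = {
    "source": {"app": "telegram", "user": "cole"},
    "action": "modify_code",
    "request": "Add unit tests for the auth module",
    "target": {"repo": "acme/web-app", "branch": "main"},
    "allowed_tools": ["read_file", "write_file", "run_tests", "git"],
    "reply_to": "telegram:chat/48213",
}
```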

Hanging off that core are application connectors—thin adapters that convert real-world events into those intents. Medin demoed webhooks for Telegram chats, GitHub Actions for repo events, and simple HTTP endpoints that anything else can hit. A Telegram message like “add unit tests for auth” becomes a structured job the server can understand and dispatch.

On the other side sit the coding assistant wrappers. These are CLI-facing shims for models like Claude or Gemini that understand the protocol, call the model with rich context, and then execute file edits, git operations, or test runs on the remote environment. They behave more like programmable operators than chatbots, with flags for safety rails, dry runs, and review modes.
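
A sketch of such a shim, assuming Claude Code’s non-interactive `claude -p` print mode; the dry-run gate here is an illustrative safety rail of our own, not a documented flag:

```python
import subprocess

def run_assistant(intent: dict, workdir: str, dry_run: bool = True) -> str:
    """Turn an intent into a CLI invocation against a checked-out repo."""
    prompt = (
        f"{intent['request']}\n\n"
        f"Repo: {intent['target']['repo']} (checked out at {workdir}).\n"
        "Follow the project's standards and update tests alongside code."
    )
    if dry_run:
        # Review mode: show what would run instead of editing files.
        return f"[dry-run] claude -p <<\n{prompt}"
    result = subprocess.run(["claude", "-p", prompt],
                            capture_output=True, text=True, cwd=workdir)
    return result.stdout
```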

Everything talks the same protocol, which is where the “universal translator” analogy stops being marketing and starts being literal. The server mediates between human-friendly commands and the strict, tool-aware instructions AI models need to behave predictably. It also arbitrates conflicts, like two assistants trying to touch the same file, by serializing or rejecting operations.

Modularity falls out of that design. To add a new app, you only build a connector that can:

  • Receive an event or message
  • Map it into the shared intent format
  • Post results back to the user

To add a new AI assistant, you write a wrapper that:

  • Consumes intents
  • Calls the model with Medin’s Context Engineering payloads
  • Applies or proposes changes in the target environment

Because every piece is swappable, you can chain multiple models, rotate providers, or stand up parallel assistants per repo without rewriting your workflows.
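
Sketched as structural interfaces—the names are ours, not the system’s—those two seams reduce the server’s dispatch job to a few lines:

```python
from typing import Protocol

class Connector(Protocol):
    def to_intent(self, raw_event: dict) -> dict: ...
    def post_result(self, intent: dict, result: str) -> None: ...

class AssistantWrapper(Protocol):
    def handle(self, intent: dict) -> str: ...

def dispatch(event: dict, connector: Connector, wrapper: AssistantWrapper) -> None:
    """The server's core job: translate in, execute, translate out."""
    intent = connector.to_intent(event)
    result = wrapper.handle(intent)
    connector.post_result(intent, result)
```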

Your GitHub Issues Now Write Their Own Code


GitHub stops being just a code host in Cole Medin’s world and becomes the orchestration layer for a swarm of remote coding agents. Issues, branches, pull requests, and CI checks turn into a control plane that AI can read and act on without you opening an IDE.

A typical workflow starts where modern development already lives: a bug report. Someone files a GitHub issue, tags it with a label like `agent:fix`, and a developer assigns it to the remote agent with a single comment command, often something as simple as `/agent take`.
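
The trigger side is easy to picture as a small webhook receiver. This sketch uses the article’s example label and command; `enqueue_intent` is a hypothetical handoff to the orchestration server, stubbed here so the snippet runs:

```python
from fastapi import FastAPI, Request

app = FastAPI()

def enqueue_intent(intent: dict) -> None:
    """Hypothetical handoff to the orchestration server."""
    print("queued:", intent)

@app.post("/webhooks/github")
async def on_issue_comment(request: Request) -> dict:
    event = await request.json()
    issue = event.get("issue", {})
    comment = event.get("comment", {}).get("body", "")
    labels = {label["name"] for label in issue.get("labels", [])}
    # Dispatch only when the issue is labeled for the agent AND a
    # human has explicitly assigned it with the comment command.
    if "agent:fix" in labels and comment.strip() == "/agent take":
        enqueue_intent({
            "action": "fix_issue",
            "target": {"repo": event["repository"]["full_name"],
                       "issue": issue["number"]},
            "request": issue.get("title", ""),
        })
    return {"ok": True}
```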

From there the system behaves like a disciplined junior engineer who never sleeps. The agent spins up a new branch off `main`, pulls the repo into its own environment, and uses Medin’s Context Engineering stack to ingest project structure, coding standards, and recent changes.

Instead of spraying speculative patches, the agent walks the GitHub issue thread, stack traces, and linked PRs to form a plan. It then edits the codebase file by file, running tests as it goes, and pushes commits back to the branch with detailed messages that map directly to the issue description.

Once the agent believes it has a fix, it opens a pull request that looks indistinguishable from a human one. You get a PR title tied to the issue, a checklist of changes, inline comments explaining non-obvious decisions, and links back to the original bug report for traceability.

Human oversight stays central by design. Developers shift from line-by-line authoring to review and governance: checking diffs, running local repro steps, and deciding whether the agent’s solution meets team standards before hitting Merge.

Because everything flows through GitHub, existing CI/CD pipelines stay untouched. The agent’s PR automatically triggers the same test matrix, static analysis, security scans, and deployment previews you already wired into GitHub Actions, CircleCI, or Jenkins.

If CI fails, the system does not stall at a red X. The agent reads the failing logs, updates the code, and pushes follow-up commits to the same branch, iterating until the checks go green or it flags the issue as needing human intervention.
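
The control flow of that retry loop is simple to sketch. Here `check_ci`, `fetch_failing_logs`, `ask_agent_for_fix`, `apply_edits`, and `flag_for_human` are hypothetical stand-ins for GitHub Checks API calls and a model round-trip; only the loop structure is the point:

```python
import subprocess
import time

def iterate_until_green(branch: str, max_attempts: int = 3) -> bool:
    """Push fixes until CI passes or the attempt budget runs out."""
    for attempt in range(1, max_attempts + 1):
        while (status := check_ci(branch)) == "pending":  # poll the Checks API
            time.sleep(60)
        if status == "success":
            return True
        fix = ask_agent_for_fix(fetch_failing_logs(branch))  # model round-trip
        apply_edits(fix)                                     # write files locally
        subprocess.run(["git", "commit", "-am", f"fix: CI attempt {attempt}"])
        subprocess.run(["git", "push", "origin", branch])
    flag_for_human(branch)  # escalate instead of looping forever
    return False
```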

This tight loop turns GitHub into a remote control surface for Medin’s agentic coding system. You orchestrate work with labels and comments, your pipelines enforce quality, and the AI quietly does the heavy lifting between issue opened and PR approved.

The 'Any App, Any Agent' Promise

Context engineering gives Cole Medin’s system a superpower: it doesn’t care where a task starts or which model finishes it. Any event that can throw a webhook can, in principle, kick off a coding run. That means a Slack message, a Jira ticket, or a Notion database entry can all become first-class triggers for remote agentic coding.

Picture a Slack channel where a PM types “/ship hotfix-1243” and walks away. Behind the scenes, the system grabs the linked GitHub issue, pulls logs from your observability stack, and hands a fully structured context bundle to an AI agent. Jira can do the same when a ticket moves to “Ready for Dev,” or Notion when a row in a “Backend Tasks” table flips to “Implement.”

Medin’s architecture treats these apps as interchangeable front doors. The heavy lifting happens in a unified orchestration layer that speaks one internal protocol and fans out to whatever tools you already use. Slack, Jira, Notion, Linear, or custom in-house dashboards all just map to the same “create coding task” primitive.
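
As an illustration of that flattening—payload fields loosely mirror each platform’s webhook shapes, and `create_coding_task` is a hypothetical name—three front doors might normalize like this:

```python
def from_slack(payload: dict) -> dict:
    # Slack slash-command payloads carry the command text and user_id.
    return {"action": "create_coding_task",
            "request": payload["text"],  # e.g. "hotfix-1243"
            "source": {"app": "slack", "user": payload["user_id"]}}

def from_jira(payload: dict) -> dict:
    # Jira issue webhooks nest the summary under issue.fields.
    fields = payload["issue"]["fields"]
    return {"action": "create_coding_task",
            "request": fields["summary"],
            "source": {"app": "jira", "user": fields["creator"]["displayName"]}}

def from_notion(row: dict) -> dict:
    # Notion rows expose a title property as a list of rich-text parts.
    title = row["properties"]["Task"]["title"][0]["plain_text"]
    return {"action": "create_coding_task",
            "request": title,
            "source": {"app": "notion", "user": "notion-automation"}}
```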

Backend flexibility is where the “Any App, Any Agent” promise gets real. Today you might wire it to Claude Code; tomorrow you might prefer an OpenCode fork, a Gemini-based assistant, or a closed-source in-house model. You swap the model adapter and keep every trigger, GitHub workflow, and review loop exactly the same.

That abstraction line attacks the worst part of modern dev tooling: fragmentation. Instead of juggling half a dozen browser tabs—one for your IDE, one for GitHub Copilot Chat, another for a Claude web IDE—you get a single, protocol-driven interface for AI-assisted development. For teams worried about safety and governance, resources like “What Is Agentic Coding? Risks & Best Practices” map neatly onto this design, because policy lives at the orchestration layer, not inside any one model.

The Billion-Dollar Question: What's the Cost?

Cost is the first thing every developer asks about custom AI setups, usually right after they see a flashy demo. Medin knows the horror stories: a weekend experiment that quietly burns through 2 million tokens, or a “quick” refactor that leaves a surprise $600 bill on the company card. Agentic workflows amplify that fear, because autonomous agents happily loop, call tools, and re-query models until someone pulls the plug.

Medin’s remote agentic coding system sidesteps that anxiety by refusing to be yet another metered middleman. Instead of proxying every request through a bespoke backend, it leans on the CLIs of tools you already pay for—Claude Pro, Gemini, or other model-specific command-line clients. The system orchestrates workflows; your existing subscriptions handle the actual model calls.

Practically, that means cost tracks almost one-to-one with what you would spend using those services directly. If Claude Pro gives you a fixed monthly quota, this setup just consumes that same quota, whether your agent is editing a React app from a coffee shop or triaging GitHub issues at 2 a.m. from your phone. No extra per-token markup, no opaque “platform usage” line item.

Because the system operates as a universal AI translator and router, not a billing layer, developers can scale up their agentic ambitions without scaling up financial risk. Want an agent to monitor GitHub Issues, open branches, run tests, and raise pull requests while you commute? The cost profile stays bounded by your existing plan rather than some runaway API meter.

That design choice matters more than any single feature demo. It turns advanced, always-on agentic workflows from a luxury for well-funded teams into something a solo dev can justify. You get the power of remote, protocol-driven coding agents, with the same bill you were already willing to pay—no surprise spike, no budgeting spreadsheet required.

Why You Can't Just 'git clone' It


Scarcity started as a stunt. During the Live Unveiling stream, Cole Medin flipped his remote agentic coding system from private to public on GitHub, dropped the link in chat for roughly 260 people, and set a hard deadline: once the stream ended, the repo vanished again, with a brief reopening window on Cyber Monday at 4 p.m. Central for about an hour.

That wasn’t a gimmick; it was a distribution model. Medin made clear that the two-commit public repo was just a snapshot, a port of the version he had been evolving inside his Dynamis courses and workshops, not the living system itself.

Ongoing development now happens behind the walls of his private Dynamis AI community. That’s where the system’s context templates, ACP wiring, GitHub workflows, and “any app, any agent” integrations keep changing weekly as new tools, models, and protocols land.

Instead of chasing GitHub stars, Medin is chasing tight feedback loops. Dynamis members hit the system with real workloads—enterprise monorepos, messy legacy services, multi-agent workflows—and their failures and edge cases feed directly into the next iteration.

The course alone spans 71 lessons and about 18 hours of content, but the more important number is cadence. Medin runs frequent live workshops, ships new agent templates, and refactors the remote coding stack as Anthropic, OpenAI, and Google quietly tweak their APIs and rate limits.

GitHub still matters, but as an orchestration layer, not as the primary community. Issues, PRs, and Actions become triggers for agentic workflows that only fully exist if you’re inside Dynamis, where members test new flows like:

  • Auto-responding to issues with working patches
  • Spinning up per-branch remote dev agents
  • Routing tasks across Claude, Gemini, and local models

Anyone can fork the frozen public snapshot from that livestream recording. Almost no one can keep it competitive without the private playbook that explains how to maintain prompts, update protocols, and re-balance cost vs. latency as the model landscape shifts.

Scarcity here functions as a moat and a filter. If you want the current, battle-tested version of the system—and a say in where it goes next—you don’t “git clone”; you join the room where the agents are actually being trained.

A Blueprint for AI-Human Collaboration

AI that codes from your phone stops being a novelty once you treat it as infrastructure, not a toy. Medin’s remote agentic system functions less like a chatbot and more like a distributed teammate wired into your repos, terminals, and notification streams. That shift hints at what software development looks like when AI agents follow shared rules instead of ad hoc prompts.

Rather than inventing yet another proprietary bot, Medin effectively prototypes a standardized framework for agents. His use of patterns similar to ACP, the emerging Agent Communication Protocol, turns model calls into messages that any compliant agent can parse, route, and act on. That means a GitHub issue, a Slack thread, and a CLI command can all trigger the same underlying behavior.

Current AI helpers usually live in silos: a Claude Code tab here, a Cursor window there, maybe an Aider CLI session on the side. Guides like “Agentic Coding Tools Explained: Complete Setup Guide for Claude Code, Aider, and Cursor” show how fragmented this ecosystem still is. Medin’s system treats those tools as interchangeable front ends speaking into one orchestrated brain.

Framed this way, remote agents become first-class team members instead of autocomplete on steroids. They accept tickets, estimate work, open PRs, and wait for human review like a junior engineer who never sleeps. The difference is that their “onboarding” lives in structured Context Engineering files instead of tribal knowledge.

That structure buys something black-box copilots rarely offer: traceability. Every action flows through explicit protocols, logged requests, and serialized tool calls, so developers can see not just what the agent did, but why it thought that was the right move. When an agent runs a migration or refactors a module, its reasoning sits in the commit history, not hidden behind a vendor dashboard.

Control shifts back toward the developer. Teams can pin models, cap tool scopes, or swap out providers without rewriting their entire workflow because the protocol surface stays stable. In a world where AI systems increasingly feel opaque and centralized, a protocol-first, remote agentic model looks less like magic and more like a blueprint for sustainable AI-human collaboration.

Will Your Next PR Be Written by an AI?

Pull out your phone, open GitHub, and imagine your next pull request already waiting for review—tests green, description written, edge cases handled by a swarm of background agents that never touched your laptop. That is the quiet but radical shift Cole Medin’s remote agentic coding system points toward: developers as orchestrators of AI workflows, not just authors of lines of code.

Instead of replacing engineers, this stack aggressively reframes their job. You become the person who defines architecture, codifies standards, and curates context, while AI agents grind through boilerplate, refactors, and integration glue at 3 a.m. from a server farm you never see.

A “normal” day in this world looks different. You kick off a feature from your phone on the train, tagging a GitHub issue with a structured template that encodes requirements, constraints, and acceptance tests. Agents fan out: one plans the change set, another edits code via ACP, another runs the CI pipeline, and a final one drafts the PR with a rationale and risk analysis.

By the time you sit down with coffee, you are not starting work—you are reviewing it. You skim a PR that links back to the originating issue, references design docs retrieved through Agentic RAG, and includes automatically generated benchmarks. Your job is to veto, redirect, or approve; the system’s job is to propose concrete, testable diffs.

That shift makes developers look less like typists and more like staff engineers managing a team that just happens to be synthetic. You decide which agents get access to which repos, which tools they can invoke, and which workflows run on autopilot versus requiring human sign-off. Governance, not keystrokes, becomes the scarce skill.

None of this lands overnight. Teams will phase it in the way they adopted CI/CD: first as an experiment on side projects, then as a helper for tests and docs, and finally as the default path for routine implementation work. Resistance will not come from the models, which already handle complex refactors, but from habits built in an era when your IDE felt like a prison instead of a switchboard.

Change is compounding quickly; Medin’s 18-hour agentic coding curriculum already feels dense for a field that only coalesced around MCP and ACP-style protocols in the last year. If your next PR is not written mostly by AI, odds are high the one after that will be—and the real question is whether you are the person directing those agents, or the one still waiting for their laptop to finish booting.

Frequently Asked Questions

What is a remote agentic coding system?

It's a framework that allows a developer to interact with an AI coding assistant from any application (like Telegram or Slack) on any device, instructing it to perform complex coding tasks within a remote codebase, such as fixing bugs or adding features via GitHub.

How does Cole Medin's system work without expensive API keys?

The system cleverly leverages your existing subscriptions to services like Claude Pro or Codex. It acts as a bridge, using the command-line interface (CLI) of these tools on a remote server, so you aren't charged per token via a direct API.

Is this system a replacement for a traditional IDE?

Not entirely. It's more of a powerful extension. It excels at delegating well-defined tasks remotely, but developers still use an IDE for complex debugging, initial architecture, and code review. It changes where and how you work, but doesn't eliminate the IDE.

What is 'Context Engineering'?

Context Engineering is Cole Medin's methodology for providing an AI with a comprehensive, structured understanding of a project. Instead of a simple prompt, it gives the AI access to architecture diagrams, coding rules, best practices, and examples to ensure it generates high-quality, consistent code.

Tags

#agentic-coding #dev-tools #ai-agents #future-of-coding #github
