personal project · 2026

AI Use Tracker

Building a native iOS app as a solo designer using Claude Code. From first prompt to a working build on my iPhone in seven hours. Product thinking, UX design, and SwiftUI development by one person.

role: Solo designer-developer
project: Personal project
timeline: 7 hours · single sitting
platform: iOS · SwiftUI
stack: SwiftUI · ActivityKit · WidgetKit · Keychain · Claude Code (Sonnet 4.5)
status: built · shelved (ToS constraint)
AI Use Tracker iOS app showing countdown timers and usage tracking for Claude and ChatGPT
fig.01 · AI Use Tracker · cover composition with Dynamic Island countdown cover · png
Build time
7h
Single sitting, end-to-end
Swift files
33
Fully functional iOS app
Figma files
0
Concept straight to code
UsageLiveActivity.swift SwiftUI
import ActivityKit
import WidgetKit
import SwiftUI

// Live Activity drives Lock Screen + Dynamic Island
struct UsageLiveActivity: Widget {
  var body: some WidgetConfiguration {
    ActivityConfiguration(for: UsageAttributes.self) { context in
      LockScreenView(state: context.state)
    } dynamicIsland: { context in
      DynamicIsland {
        // expanded · trailing · countdown pill
        DynamicIslandExpandedRegion(.trailing) {
          CountdownPill(reset: context.state.resetAt)
        }
      } compactLeading: {
        // compactLeading is required by DynamicIsland's initializer
        Image(systemName: "gauge")
      } compactTrailing: {
        Text(context.state.shortLabel)
      } minimal: {
        Text("%")
      }
    }
  }
}
§ 00 / 12

From design to code in 7 hours

I designed and built a native iOS app from scratch. No engineering team, no bootcamp, no prior Swift experience.

AI Use Tracker monitors token consumption across Claude and ChatGPT with countdown timers, Live Activities, and Dynamic Island integration. The entire project, from first prompt to a working build on my iPhone, took seven hours in a single sitting, using Claude Code as my engineering partner.

Then I discovered I couldn't ship it.

This case study isn't about the app itself. It's about what happens when a designer with a decade of product experience stops waiting for engineering resources, builds the thing, and finds the boundaries that only become visible when you go all the way to the metal.

§ 01 / 12

Solo designer-developer, 7-hour sprint

Every meaningful product decision was mine. Claude Code translated those decisions into Swift.

my decisions human
Product thinking, UX design, visual design, information architecture, iOS development direction, API integration strategy, and QA. Every meaningful product decision was mine.
claude code execution agent
SwiftUI syntax, code generation, Swift pattern translation, and boilerplate implementation. Claude Code handled the syntax. I handled every decision.
§ 02 / 12

The invisible limits

Every designer who uses AI tools daily knows the frustration. You're mid-flow, deep in a conversation, and suddenly you hit a usage limit you didn't see coming.

There's no countdown, no warning, no visibility into how close you are to the wall. You're forced to switch context, lose momentum, and wait.

I use Claude and ChatGPT professionally for research, prototyping, copywriting, code generation, and design critique. Across both platforms, usage resets happen on different schedules with different rules, and neither platform surfaces this information in a useful way.

The tools I rely on to work faster were, paradoxically, slowing me down by being opaque about their own limits.

§ 03 / 12

No visibility, no control

Both Claude and ChatGPT have usage limits for consumer accounts, but neither platform provides the information power users need.

  • A real-time view of remaining capacity
  • Predictable countdown timers for session and weekly resets
  • Glanceable status without opening the app
  • Proactive notifications before hitting limits

The information exists. Buried in account dashboards, hidden behind multiple clicks, formatted for billing teams rather than power users. For someone who depends on these tools for six to eight hours daily, this is a workflow gap with real productivity cost.

Claude Settings page showing usage data buried in account dashboards, hidden behind multiple clicks
fig.02 · Claude settings · usage data buried behind multiple clicks platform · audit
OpenAI Codex Usage dashboard showing usage breakdown, also buried deep in platform settings
fig.03 · OpenAI Codex usage · same pattern, different surface platform · audit
§ 04 / 12

Why build instead of design

The fastest path to solving the problem. Not a point to prove.

I've spent my career creating Figma files that communicate intent to engineering teams. Prototypes that approximate behaviour. Specs that try to capture every state. But this project didn't have an engineering team to hand off to. It was a personal tool for a personal problem.

The decision to build it myself wasn't about proving a point. It was the fastest path to solving the problem. A Figma prototype wouldn't refresh my usage data. A clickable mockup wouldn't tap me on the wrist through Dynamic Island.

Claude Code changed the calculus. The question stopped being "can I code this?" and became "can I make the right product decisions fast enough?"

The question stopped being "can I code this?" and became "can I make the right product decisions fast enough?"
§ 05 / 12

Design thinking, engineering execution

Three operating principles for the seven hours. Each is a designer's instinct applied to code.

Start with the interface, not the architecture.

As a designer, I think in screens, states, and transitions, not in class hierarchies. So I started where I start every project: what does the user see?

I sketched the core views on paper first. Three tabs (All, Claude, ChatGPT), each showing a countdown timer and a usage bar. The information hierarchy was clear from day one because I've spent ten years making complex data glanceable.
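That paper sketch translates almost directly into SwiftUI. A minimal sketch of the hierarchy, where `Provider`, `UsageCard`, and `RootView` are illustrative names rather than the app's actual types:

```swift
import SwiftUI

// Hypothetical model for one provider's usage window.
struct Provider: Identifiable {
  let id = UUID()
  let name: String
  let usedFraction: Double   // 0.0–1.0 of the current window
  let resetAt: Date          // next usage reset
}

// One card: name, live countdown, usage bar — the glanceable unit.
struct UsageCard: View {
  let provider: Provider

  var body: some View {
    VStack(alignment: .leading, spacing: 8) {
      Text(provider.name).font(.headline)
      // SwiftUI drives the countdown itself; no manual Timer needed.
      Text(timerInterval: Date.now...provider.resetAt, countsDown: true)
        .font(.title2.monospacedDigit())
      ProgressView(value: provider.usedFraction)
    }
    .padding()
  }
}

// Three tabs from the paper sketch: All, Claude, ChatGPT.
struct RootView: View {
  let providers: [Provider]

  var body: some View {
    TabView {
      List(providers) { UsageCard(provider: $0) }
        .tabItem { Label("All", systemImage: "square.grid.2x2") }
      ForEach(providers) { p in
        UsageCard(provider: p)
          .tabItem { Label(p.name, systemImage: "gauge") }
      }
    }
  }
}
```

The point of the sketch: the information hierarchy decided on paper (name, countdown, bar) maps one-to-one onto the view tree.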

Use AI as a pair programmer, not a code generator.

Claude Code didn't write the app. I made every architectural decision: MVVM pattern, Keychain for token storage, ActivityKit for Live Activities, separate framework for shared logic between app and widget. Claude Code translated those decisions into Swift. When it suggested patterns I didn't understand, I asked it to explain. When its suggestions conflicted with my UX requirements, I overruled it.

The dynamic was closer to a senior designer directing a fast junior developer than to "AI building an app."

Development environment showing Xcode alongside the working AI Use Tracker app running on iPhone
fig.04 · Development environment · Xcode side-by-side with device build workspace · png

Design system thinking from the start.

Even for a personal project, I couldn't help myself. Consistent spacing, a deliberate colour system (claudeOrange, chatgptBlue), SF Symbols for iconography, and Liquid Glass effects for iOS 26 with proper fallbacks. The muscle memory from building design systems at Evoke across three brands carried over directly.

AI Use Tracker full app view in dark mode showing Claude and ChatGPT usage cards, countdown timers, usage bars, and tab navigation
fig.05 · App home · dark mode · dual provider cards with countdowns screen · dark
AI Use Tracker in light mode with Liquid Glass effects, showing both providers with session and weekly usage bars
fig.06 · Light mode · Liquid Glass · session and weekly bars screen · light
AI Use Tracker dark mode with single provider connected, showing Claude usage details and Add Provider option
fig.07 · Single-provider state · empty slot for second provider screen · state
§ 06 / 12

Technical decisions are design decisions

The most important feature isn't inside the app. It's outside it.

Live Activities + Dynamic Island UX
Live Activities put the countdown timer on the Lock Screen and in the Dynamic Island, meaning I never need to open the app to check my remaining usage. The best interface is the one you don't have to think about opening. Building this required a Widget Extension with its own data model, a LiveActivityManager, and careful thinking about compact vs. expanded states. The pill toggles between timer and percentage. A tiny interaction detail that only matters when you live with the product daily.
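The manager itself can stay small. A hedged sketch of what a LiveActivityManager might look like, with `UsageAttributes` fields assumed from the snippet at the top of this page:

```swift
import ActivityKit

// Assumed shape of the shared attributes — mirrors the widget snippet above.
struct UsageAttributes: ActivityAttributes {
  struct ContentState: Codable, Hashable {
    var usedFraction: Double
    var resetAt: Date
    var shortLabel: String
  }
  var provider: String
}

// Starts, updates, and ends the Live Activity that feeds the Lock Screen
// and Dynamic Island. Sketch only — error handling trimmed.
final class LiveActivityManager {
  private var activity: Activity<UsageAttributes>?

  func start(provider: String, state: UsageAttributes.ContentState) throws {
    guard ActivityAuthorizationInfo().areActivitiesEnabled else { return }
    activity = try Activity.request(
      attributes: UsageAttributes(provider: provider),
      content: .init(state: state, staleDate: state.resetAt)
    )
  }

  func update(_ state: UsageAttributes.ContentState) async {
    await activity?.update(.init(state: state, staleDate: state.resetAt))
  }

  func end() async {
    await activity?.end(nil, dismissalPolicy: .immediate)
  }
}
```

Setting `staleDate` to the reset time means the system knows exactly when the countdown stops being trustworthy.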
Dual authentication architecture SYS
Claude uses OAuth PKCE, a proper, standards-based flow. ChatGPT required a different approach: cookie-based session extraction through a WebView. Two completely different auth patterns, unified under one AuthManager that abstracts the complexity from the UI layer. Different backends, consistent frontend.
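The abstraction can be sketched as a protocol with two conformers. All names here are illustrative, not the app's actual code:

```swift
import Foundation

// One credential shape for two very different auth flows.
struct Credentials {
  let token: String
  let expiresAt: Date?
}

// The UI layer only ever sees this protocol — whether the token came from
// an OAuth PKCE exchange or a WebView cookie is an implementation detail.
protocol AuthProvider {
  var id: String { get }
  func authenticate() async throws -> Credentials
}

struct ClaudeOAuthProvider: AuthProvider {
  let id = "claude"
  func authenticate() async throws -> Credentials {
    // 1. Generate PKCE verifier + challenge
    // 2. Open the authorization URL, intercept the redirect
    // 3. Exchange the code for tokens
    fatalError("sketch only")
  }
}

struct ChatGPTCookieProvider: AuthProvider {
  let id = "chatgpt"
  func authenticate() async throws -> Credentials {
    // Extract the session cookie from a WKWebView login flow
    fatalError("sketch only")
  }
}

// Unifies both providers behind one entry point for the UI.
final class AuthManager {
  private let providers: [String: AuthProvider]

  init(_ providers: [AuthProvider]) {
    self.providers = Dictionary(uniqueKeysWithValues: providers.map { ($0.id, $0) })
  }

  func signIn(provider id: String) async throws -> Credentials? {
    try await providers[id]?.authenticate()
  }
}
```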
iPhone Lock Screen showing Live Activity with Claude usage countdown in compact widget view
fig.08 · Lock Screen · compact Live Activity with countdown live activity · compact
iPhone Lock Screen showing expanded Live Activity with Claude usage details and notification shortcuts
fig.09 · Lock Screen · expanded Live Activity with quick actions live activity · expanded
The 10-year advantage shows up here. Decisions like "separate the Core framework so the Widget Extension can share models" or "use Keychain not UserDefaults for tokens" come from years of seeing how products break at scale.
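The Keychain decision in practice, sketched against the Security framework's C API (a minimal wrapper for illustration, not the app's actual code):

```swift
import Foundation
import Security

// Minimal Keychain wrapper for storing provider tokens. Unlike
// UserDefaults, items are encrypted at rest rather than sitting in a
// plain plist inside the app container.
enum TokenStore {
  static func save(_ token: Data, account: String) -> Bool {
    let query: [String: Any] = [
      kSecClass as String: kSecClassGenericPassword,
      kSecAttrAccount as String: account,
    ]
    // Remove any stale item first, then add the fresh one.
    SecItemDelete(query as CFDictionary)
    var attrs = query
    attrs[kSecValueData as String] = token
    attrs[kSecAttrAccessible as String] = kSecAttrAccessibleAfterFirstUnlock
    return SecItemAdd(attrs as CFDictionary, nil) == errSecSuccess
  }

  static func load(account: String) -> Data? {
    let query: [String: Any] = [
      kSecClass as String: kSecClassGenericPassword,
      kSecAttrAccount as String: account,
      kSecReturnData as String: true,
      kSecMatchLimit as String: kSecMatchLimitOne,
    ]
    var result: CFTypeRef?
    guard SecItemCopyMatching(query as CFDictionary, &result) == errSecSuccess
    else { return nil }
    return result as? Data
  }
}
```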
§ 07 / 12

The build · 7 hours, one sitting

From first prompt to a working build on iPhone. Each phase tracked against the wall clock.

00:00
Hours 1–2 · Foundation
Core architecture, data models, authentication flows, basic UI scaffold. The hardest part was getting the OAuth flow right. Intercepting redirects in WKWebView required understanding both the web standard and iOS's security model. Claude Code helped with the Swift syntax; I directed the security decisions.
02:00
Hours 3–4 · Core features
Usage tracking, countdown timers, notification scheduling, settings. The ResetTracker service became the central nervous system. I refactored twice because the first data flow felt wrong. Knowing when something "feels wrong" architecturally is the same instinct as knowing when a layout has bad hierarchy.
04:00
Hours 5–6 · Polish + Live Activities
Dynamic Island integration, Lock Screen widget, OLED mode, Liquid Glass effects. This phase was almost entirely design work expressed through code. Tweaking timing, colours, information density, and interaction states.
06:00
Hour 7 · Ship to device
Final QA, build to iPhone via Xcode, edge case fixes. The difference between "it compiles" and "it feels right on glass" is the same gap between a Figma prototype and a shipped product. That last hour was the most valuable.
§ 08 / 12

Then reality hit

It worked. Then I started the audit.

I compiled the final build, installed it on my iPhone, and watched the Dynamic Island light up with my Claude usage data for the first time. It worked. Seven hours earlier, this app didn't exist, and I'd never written a line of Swift in my life.

Then I started the audit.

I shelved the project that same evening.
!
The app couldn't ship. Both providers prohibit programmatic extraction of usage data from consumer services.
Anthropic terms of service A
"You may not use automated or programmatic methods to extract data from Consumer Services." The app required OAuth redirect interception to access usage data. No public API exists for consumer token consumption.
OpenAI terms of use O
"You may not scrape, crawl, or use automated means to collect data from our services." The app required cookie-based session extraction through a WebView. No public API exists for consumer usage limits.
Add Provider screen showing Claude and ChatGPT connection options, the dual authentication interface that violated platform terms
fig.10 · Add provider screen · the interface that violated platform terms screen · auth

But here's the thing: the app not shipping doesn't diminish what happened in those seven hours. It amplifies it.

For the first time in ten years, after hundreds of Figma files handed off to engineering teams, after thousands of design specs translated by someone else into code, after a decade of being one step removed from the thing that actually reaches the user, I built something end to end. From the first sketch to a working build on my own phone. No engineer. No sprint planning. No handoff document. No waiting.

And the discovery that the app couldn't ship? That was itself the kind of finding that only comes from building. A Figma prototype would never have surfaced the ToS constraints. A spec document would never have revealed the authentication complexity. You have to go all the way to the metal to understand where the real boundaries are.

The fact that power users need to reverse-engineer access to their own usage data is itself a design failure from the platforms. There's a clear product opportunity for both Anthropic and OpenAI to expose consumer usage APIs, giving users and developers legitimate access to information that's already theirs. Understanding these platform constraints (regulatory, technical, and legal) is core to my work. At Evoke, I navigated UKGC gambling regulations. At Google, I worked within Gemini's safety guardrails. Knowing where the boundaries are is part of designing responsibly.

You have to go all the way to the metal to understand where the real boundaries are.
§ 09 / 12

What I learned

Design experience transfers directly. Three observations that surprised me.

designers who code L01
When you're the one implementing, you make decisions a spec can't capture. The countdown shows 2d 14h when over 24 hours, but switches to 14:32:07 under 24h. Not a spec item. Something I changed at 11pm because it felt wrong. Those micro-decisions compound into a product that feels considered.
AI doesn't replace judgement L02
Claude Code could generate SwiftUI views, network layers, and data models. It could not decide what to show in the Dynamic Island's compact vs. expanded state, or whether the usage bar should show "used" or "remaining." Every meaningful decision was mine. The AI accelerated execution, not judgement.
the 10-year advantage L03
A junior designer using Claude Code would have built a different, worse app. Not because of coding ability. The AI handles that. But because decisions like "separate the Core framework so the Widget Extension can share models" or "use Keychain not UserDefaults for tokens" come from years of seeing how products break at scale.
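That 11pm countdown decision reduces to a few lines. A sketch of the rule, not the app's actual function:

```swift
import Foundation

// Over 24h remaining: coarse "2d 14h". Under 24h: precise "14:32:07".
// Precision tracks urgency — the further away the reset, the less the
// seconds matter.
func countdownLabel(_ remaining: TimeInterval) -> String {
  let total = Int(remaining)
  if total >= 24 * 3600 {
    let days = total / 86400
    let hours = (total % 86400) / 3600
    return "\(days)d \(hours)h"
  }
  let h = total / 3600, m = (total % 3600) / 60, s = total % 60
  return String(format: "%02d:%02d:%02d", h, m, s)
}

// countdownLabel(2 * 86400 + 14 * 3600)   → "2d 14h"
// countdownLabel(14 * 3600 + 32 * 60 + 7) → "14:32:07"
```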
§ 10 / 12

What this means for design

The project is shelved, but the capability isn't.

The project is shelved, but the capability isn't.

What excites me most isn't the app itself. It's what it represents. We're at the beginning of an era where the designer doesn't just draw the interface and hope the implementation matches. The designer sits at the table with the people who build, carrying a decade of knowledge about users, business logic, and systems thinking directly into the engineering process.

A designer who understands authentication flows makes better login screens. A designer who's wrestled with API limitations makes better error states. A designer who's built the data layer makes better dashboards.

The future isn't designers who code replacing engineers. It's designers who can prototype at the level of fidelity that used to require a full team, testing real APIs, real authentication, real device capabilities, and bringing that depth of understanding back into every design decision they make.

Seven hours. Thirty-three Swift files. One shelved project. And the clearest signal I've had in ten years that the role of the designer is expanding in the most exciting direction possible.
§ 11 / 12

Results

Impact, in four lines.

build
33 Swift files in 7h
Fully functional iOS app with Live Activities, Dynamic Island, dual-provider authentication, Keychain storage, and local notifications. What would traditionally require a small team across two to three sprints, done in a single sitting.
design
0 Figma files used
For the first time in my career, concept to working product without opening my primary design tool.
find
Critical discovery
The app cannot be distributed due to third-party ToS constraints. A finding that only emerged through building, not designing.
skill
New capability
I now prototype at implementation level in hours, fundamentally changing how I approach early-stage product exploration.
§ 12 / 12

Tools and skills

Stack used end to end during the seven-hour build.

SwiftUI · Xcode · Claude Code (Sonnet 4.5) · ActivityKit · WidgetKit · WKWebView · Keychain Services · UserDefaults · SF Symbols · Product thinking · UX design · Visual design · Information architecture · iOS development · API integration · AI-assisted development