Building a Wellness App with AI: Part 3
AI Started Making UX Decisions (And I Had to Learn to Guide It)
Summary: Moving from static screens to interactive experiences revealed the true power (and limitations) of AI-assisted development. While Claude Code excelled at implementing animations, state, and core functionality, it also made assumptions about user behavior that needed a human in the loop. This is where the designer's role shifts from pixel-pusher to experience architect, guiding AI through the nuanced decisions that make apps feel alive.
So, remember how I ended the last post saying I was going to tackle interactions and functionality? Yeah, well, that turned into an adventure.
I thought I knew what I was getting into. Static screens? Claude Code and Figma MCP pretty much handled those. But making things actually work and feel good? That's where things got really interesting. And by interesting, I mean both amazing and occasionally head-scratching.
This is part 3 of my series documenting how I'm building a wellness app using AI-assisted development. If you missed the earlier chaos, check out part 1 and part 2.
When Things Got a Little Too Smart
Okay, so I started simple. Or what I thought was simple. I asked Claude Code via Figma MCP to "make the breathing guide SVG animate with a 4-second inhale, 4-second hold, 4-second exhale pattern."
What I expected: an SVG that scales up and down with some basic timing.
What I got: a full breathing guidance system with smooth scaling animations, ease-in-out curves applied, text instructions, and haptic feedback.
I literally sat there staring at my screen like... I didn't ask for half of this stuff.
But it all worked. Beautifully. On the first try. Although I would eventually dump some of these items and keep others.
This was my "oh shit" moment. Claude Code wasn't just following my instructions. It was making assumptions about what a breathing app should do based on... I don't know, every app it had ever seen?
Suddenly I realized I wasn't just directing code implementation. I was in a conversation with something that had opinions about user experience.
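For the curious, here's roughly what the core of that breathing animation looks like in Flutter. This is my own reconstruction, not Claude Code's actual output: the widget name and exact values are illustrative, and I've left out the text cues and haptics it also generated.

```dart
import 'package:flutter/material.dart';

// A rough sketch of a 4-4-4 breathing animation: scale up, hold, scale down,
// driven by a single AnimationController with gentle easing on each phase.
class BreathingGuide extends StatefulWidget {
  const BreathingGuide({super.key});

  @override
  State<BreathingGuide> createState() => _BreathingGuideState();
}

class _BreathingGuideState extends State<BreathingGuide>
    with SingleTickerProviderStateMixin {
  late final AnimationController _controller = AnimationController(
    vsync: this,
    duration: const Duration(seconds: 12), // 4s inhale + 4s hold + 4s exhale
  )..repeat();

  late final Animation<double> _scale = TweenSequence<double>([
    TweenSequenceItem(
      tween: Tween(begin: 1.0, end: 1.4)
          .chain(CurveTween(curve: Curves.easeInOut)),
      weight: 4, // inhale
    ),
    TweenSequenceItem(tween: ConstantTween(1.4), weight: 4), // hold
    TweenSequenceItem(
      tween: Tween(begin: 1.4, end: 1.0)
          .chain(CurveTween(curve: Curves.easeInOut)),
      weight: 4, // exhale
    ),
  ]).animate(_controller);

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return ScaleTransition(
      scale: _scale,
      // Stand-in for the actual breathing SVG from the Figma file.
      child: const Icon(Icons.circle, size: 120),
    );
  }
}
```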
The Stuff AI is Weirdly Good At
After I got over the whole 'wait, the AI has UX opinions' thing, I started noticing what it was actually good at. The technical implementation stuff? Claude Code crushed it. Way faster than I could have done it, and probably cleaner too.
Animations? The breathing circle wasn't just functional, it was smooth. Proper easing, handled interruptions gracefully, no weird jank.
Cross-platform stuff? When I mentioned wanting this on both iOS and Android, it just... handled the differences. Platform-specific things I wouldn't have even thought about. I'm guessing that's because it knows I'm using Flutter, so it could sort those out on its own.
But here's where it got weird: Claude Code was constantly making these tiny UX decisions. What happens if someone taps the screen mid-breath? What if they navigate away and come back? Should the haptic feedback be subtle or more pronounced?
These aren't coding questions. These are experience design questions. And the AI was answering them based on... its training? Its best guess? I honestly don't know.
Learning to Talk to AI About Feelings
I quickly figured out that asking AI to build interactions is completely different from asking it to match a Figma design. You can't just say "make this work." You have to describe what it should do and how it should feel.
Instead of "add navigation between screens," I learned to say something like "create gentle slide transitions that keep the calm, meditative vibe. Users should never feel rushed or jarred."
Instead of "make the timer work," it was more like "build a timer that shows progress without creating anxiety. Make it feel supportive, not stressful."
The more I described the emotional experience I wanted, the better Claude Code got at building it. It was like directing a really talented developer who could build literally anything, but needed you to explain the why behind every decision.
All that said, I still needed to go in and direct it even further, specifying animation speeds, durations, and so on.
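In practice, that meant handing the AI explicit numbers instead of adjectives. Something like this, where the names and values are purely illustrative of the kind of thing I'd pin down:

```dart
import 'package:flutter/animation.dart';

// Hypothetical timing constants I'd spell out up front instead of letting
// the AI pick them. The exact values are examples, not gospel.
const inhale = Duration(seconds: 4);
const hold = Duration(seconds: 4);
const exhale = Duration(seconds: 4);
const breathCurve = Curves.easeInOut; // gentle, never snappy
const screenTransition = Duration(milliseconds: 450); // slow enough to feel calm
```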
AI Getting it Wrong is Useful
Not everything Claude Code assumed matched my vision. That breathing screen I mentioned? It added a bunch of stuff I never asked for:
A "cycle count" (not terrible, but not what I wanted)
A social sharing thing after each session (completely wrong for my minimal approach)
This is where I realized the designer's job isn't going away, it's just changing. AI can build any experience you can describe and technology can support, but it can't read your mind about which experiences fit your design philosophy.
I found myself constantly course-correcting:
"Remove the skip button – I want people to commit to the full cycle."
"Kill the audio cues, just do subtle haptic feedback."
"No social features. This should feel personal and private."
Each time I corrected something, I got better at understanding both what the AI could do and what I actually wanted.
No "One-Shots"
After a few rounds of this "add features I didn't want" dance, I figured out something important: don't try to build everything in one go.
I was making the mistake of asking Claude Code to "create the complete breathing exercise screen with timer, animations, and user controls." And every time, it would interpret that as license to add whatever it thought a breathing exercise screen should have.
So I started breaking things down:
First: "Create just the breathing svg animation with the 4-second cycles."
Then: "Add a simple timer display that shows session progress."
Then: "Add start/stop controls, but keep them minimal."
Each smaller ask let me guide the experience more precisely. When Claude Code only had one thing to focus on, it did exactly what I asked for instead of trying to anticipate what else I might want.
This approach felt slower at first, but it actually saved time. No more removing unwanted features or explaining why the AI's suggestions didn't match my vision. Just clean, focused implementation of exactly what I designed.
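To make that concrete, here's roughly where the second and third asks above landed. Again, this is a sketch in the spirit of what got built, with names and layout of my own invention:

```dart
import 'dart:async';
import 'package:flutter/material.dart';

// A minimal session timer with start/stop and nothing else: progress shown
// as quiet text rather than an anxiety-inducing countdown bar.
class SessionTimer extends StatefulWidget {
  const SessionTimer({super.key, this.session = const Duration(minutes: 3)});

  final Duration session;

  @override
  State<SessionTimer> createState() => _SessionTimerState();
}

class _SessionTimerState extends State<SessionTimer> {
  Timer? _ticker;
  Duration _elapsed = Duration.zero;

  bool get _running => _ticker?.isActive ?? false;

  void _toggle() {
    if (_running) {
      _ticker?.cancel();
    } else {
      _ticker = Timer.periodic(const Duration(seconds: 1), (_) {
        setState(() {
          _elapsed += const Duration(seconds: 1);
          // Stop quietly once the session is done.
          if (_elapsed >= widget.session) _ticker?.cancel();
        });
      });
    }
    setState(() {}); // refresh the start/stop label
  }

  @override
  void dispose() {
    _ticker?.cancel();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    final remaining = widget.session - _elapsed;
    final minutes = remaining.inMinutes;
    final seconds = remaining.inSeconds % 60;
    return Column(
      mainAxisSize: MainAxisSize.min,
      children: [
        Text('$minutes:${seconds.toString().padLeft(2, "0")} remaining'),
        TextButton(
          onPressed: _toggle,
          child: Text(_running ? 'Pause' : 'Start'),
        ),
      ],
    );
  }
}
```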
The Debugging Reality
Of course, not everything worked perfectly. But Claude Code is really good at debugging its own work if you describe the problem clearly.
"The breathing animation gets choppy when someone returns to this screen" became a starting point for it to fix animation stuff.
"The breath count is not in sync with the animation" led to it completely refactoring the timing logic.
What's weird is how it approaches debugging. It doesn't just patch problems, it often finds root causes and makes the whole thing better. Sometimes the fixes improved stuff I didn't even know needed improving.
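If it helps to picture it, the shape of that timing fix (as I understand it) was to stop running a separate timer for the breath count and derive everything from the one controller that drives the animation. A sketch of the idea, with names that are mine, not the generated code's:

```dart
import 'package:flutter/material.dart';

// One source of truth for timing: derive the on-screen phase from the same
// controller value that drives the scaling, so the two can never drift apart.
String phaseFor(double t) {
  // t runs 0.0..1.0 over one 12-second cycle (4s inhale, 4s hold, 4s exhale).
  if (t < 4 / 12) return 'Inhale';
  if (t < 8 / 12) return 'Hold';
  return 'Exhale';
}

// Used inside the breathing widget's build method:
Widget buildPhaseLabel(AnimationController controller) {
  return AnimatedBuilder(
    animation: controller,
    builder: (context, _) => Text(phaseFor(controller.value)),
  );
}
```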
What I'm Doing Now
Three weeks into this experiment, my job has completely changed. I'm not pushing pixels anymore. Instead I'm:
Describing how things should feel, not just how they should look.
Learning to communicate design intent in ways AI can understand and run with.
Constantly evaluating whether AI-generated experiences match what users actually need.
Testing ideas in working code instead of static mockups.
This feels like what design was always supposed to be: focusing on the human experience and letting tools handle the technical implementation.
The Good, the Bad, and the Unexpected
What's working really well:
Complex animations that would have taken me forever
Cross-platform compatibility without me thinking too much about it
Testing interaction ideas super quickly
What's still tricky:
AI making assumptions about user behavior that don't match my vision
Figuring out when to trust AI suggestions vs. when to override them
Keeping the code organized as I add more features
Balancing AI's "helpfulness" with my actual design goals
Where This Is All Heading
Building this app is making something pretty clear: we're not heading toward AI replacing designers. We're heading toward designers becoming experience orchestrators, working with AI to bring ideas to life way faster and more completely than before.
The technical barriers between design and implementation are basically dissolving. The question isn't whether AI can code your designs (it can), but whether you can articulate the experiences you want to create clearly enough for AI to build them well.
Up Next: When Things Get Messy
Next post, I'm diving into the less glamorous side of this experiment. What happens when you need to refactor AI-generated code? How do you handle edge cases the AI completely missed? And what happens when you try to add features that don't fit neatly into the AI's idea of what a "wellness app" should be?
What questions do you have about working with AI on interactions? Drop them in the comments. I'm documenting all of this and happy to dig into specific challenges in future posts.