The Task: Defining the Opportunity
My objective was to address a significant environmental and economic pain point for modern families: the rapid lifecycle of children’s toys. I was tasked with designing a seamless, community-focused mobile platform that facilitates the exchange and donation of pre-loved toys. The challenge was not just building a marketplace, but designing a system that felt secure, local, and incredibly easy to use for busy parents who are often multitasking and time-constrained.
The Project Goal: Designing for Circularity
The core ambition of the Orbi app was to foster a “Circular Economy” within local neighborhoods. We wanted to move away from the traditional “buy-and-discard” model and replace it with a “share-and-reuse” ecosystem. The goal was to build an interface that prioritized trust through verified profiles and proximity-based searching. By reducing the distance between neighbors and the complexity of listing an item, we aimed to lower the barrier to entry for sustainable living, making toy-swapping as habitual as a trip to the local playground.
My Role in the Team: Strategic Design Lead
Operating within a cross-functional team of three, I acted as the Lead Product Designer, responsible for the end-to-end user experience. My role was to serve as the voice of the user while balancing technical feasibility and the client’s business objectives. I didn’t just design screens; I orchestrated the entire journey—from the initial research and problem mapping to the creation of a high-fidelity design system. I took ownership of the prototyping phase and led the usability testing to ensure that the final product was not only visually delightful but functionally bulletproof for the target demographic.
Research
The Investigation Phase: Mapping the Journey
For those unfamiliar with the design process: Discovery is the “Detective Phase.” As indicated by the project timeline, this was a multi-month undertaking where I had to prove that the problem was worth solving before a single pixel was drawn. If you look at the Discovery Flowchart on this page, you’ll see how I mapped out the entire problem space—from identifying user pain points to framing the final solution. This phase ensured that we weren’t just “building an app,” but building a solution that addressed real-world human behavior. I used the section labeled “What to include in research” to define our boundaries, moving from a broad objective to a specific, feasible product direction.
Accelerating Discovery with AI Implementation
While the original project required a deep manual research cycle, I utilized Claude 3.7 as a strategic accelerator to condense months of data synthesis into a few weeks. By feeding the AI the “Discovery Objectives” seen in my diagrams, I generated “Synthetic Interviews” to stress-test our early assumptions. The AI didn’t replace real people; it acted as a filter for the raw data I gathered during the “Product Discovery” phase. This helped me identify the “scheduling friction” that you see highlighted in the branching paths of my Information Architecture. By using Gemini 2.0 to simulate these user hurdles early, I ensured the project hit its delivery milestones with a much higher degree of logical accuracy and community impact.
The Strategic Design Process
The 5-Stage Methodology
To ensure the Orbi app was both user-friendly and commercially viable, I followed a structured five-stage design process. To a non-designer, this is the “Roadmap” that takes us from a vague idea to a finished, working product. By breaking the project into these distinct phases, I ensured that every decision was backed by data and that the final app solved real problems for real parents.
AI-Enhanced Workflow Overview
Efficiency is driven by the synergy between human creativity and artificial intelligence. Below is how I utilized an AI-powered stack to elevate each stage of the Orbi project:
- 01. Discover (Online & Competitor Research): I used Gemini 2.0 to perform high-speed market analysis. Instead of manually reading hundreds of app reviews, the AI synthesized competitor strengths and weaknesses, identifying a critical “trust gap” in existing toy-swap platforms.
- 02. Define (Personas, Empathy & Journey Maps): I employed Claude 3.7 to build data-driven “Synthetic Personas.” This allowed me to simulate the emotional journey of a parent, ensuring the app’s features were tailored to specific needs like sustainability and budget management.
- 03. Ideate (User Flow & Information Architecture): I used AI logic engines to “stress-test” the app’s structure. By simulating thousands of user paths, I was able to refine the Information Architecture to ensure that the most important actions—like listing a toy—were always the easiest to find.
- 04. Design (Style Guide & Wireframes): Using Figma Make, I accelerated the transition from paper sketches to digital wireframes. The AI assisted in maintaining a consistent Design System, automatically checking for visual balance and accessibility across all screen sizes.
- 05. Test (Usability & Interactive Prototypes): I utilized AI-driven “Synthetic Testing” to identify potential friction points before the first real user even touched the app. This predictive testing allowed me to fix navigation errors in the prototype phase, saving significant development time and cost.
Research Methodologies: Data-Driven Foundations
The Competitive Audit: Strategic Positioning
As illustrated in my Competitor Research, I performed a deep-dive analysis of major platforms like OLX and Facebook Marketplace. While these platforms dominate the general second-hand market, my audit revealed they were built for generic transactions and lacked the specialized trust required for the parenting community.
AI-Augmented Implementation: I used Claude 3.7 to run a “Friction Analysis” on these specific platforms. I prompted the AI to analyze 500+ negative reviews for OLX and Facebook Marketplace regarding “Toy Sales” and community trust.
- The Insight: The AI identified that the “Safety and Cleanliness” factor was the primary reason parents hesitated to use these sites. This validated my decision to build Orbi as a “Vetted Community” rather than a blind marketplace, giving us a clear competitive edge over the generalist platforms.
Demographics & Quantitative Research: The Scale of the Problem
To understand the “How Many,” I conducted Quantitative Research targeting a specific demographic: Men and Women aged 25–45 with a higher education background and an interest in sustainable living. I deployed surveys to gather hard data on their habits.
AI-Augmented Implementation: I used Gemini 2.0 to analyze the results of the specific questions I asked during this phase:
- “How many toys does your child outgrow in a year?” (The data showed an average of 12+ per child).
- “What is the main barrier to exchanging toys?” (The data identified “Distance” and “Lack of Time”).
- “Would you trust a neighbor more than a stranger on OLX?” (90% responded “Yes”).
The AI transformed these data points into a “Priority Matrix” and showed that our core demographic valued convenience and trust over price. This dictated our “Hyper-local” design strategy.
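The mechanics behind a “Priority Matrix” like this are simple to reproduce: tally each answer and rank barriers by the share of respondents who named them. A minimal Python sketch, using illustrative sample responses rather than the study’s real data:

```python
from collections import Counter

# Illustrative sample of answers to "What is the main barrier to
# exchanging toys?" -- the study's raw responses are not published here.
responses = ["Distance", "Lack of Time", "Distance", "Trust",
             "Lack of Time", "Distance", "Price", "Lack of Time"]

def priority_matrix(answers: list[str]) -> list[tuple[str, float]]:
    """Rank barriers by the share of respondents naming them, highest first."""
    counts = Counter(answers)
    total = len(answers)
    return [(barrier, round(n / total, 2)) for barrier, n in counts.most_common()]

print(priority_matrix(responses))
# "Distance" and "Lack of Time" dominate the ranking, mirroring the
# survey insight that drove the hyper-local strategy.
```

The same tally, weighted by how strongly each barrier correlates with drop-off, is what turns raw survey counts into a design-priority ranking.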
Qualitative Research: Understanding the “Human Why”
To dig deeper into the “Why,” I performed Qualitative Research through 1-on-1 interviews. I focused on the primary question mentioned in my discovery notes: “What happens to your child’s toys once they are no longer used?” This question revealed the emotional and physical burden of “clutter guilt”—parents wanted to get rid of items but felt bad throwing away plastic and “selling out” their child’s memories.
AI-Augmented Implementation: I utilized AI as a Sentiment Analyst. I fed the transcripts of these interviews into an AI model to detect emotional triggers.
- The Discovery: The AI flagged “Community Connection” as a more powerful driver than “Financial Gain.” This meant the Orbi UI should not look like a “Shop” (like OLX) and should instead look like a “Social Circle.” This directly influenced the final Ideation stage, where I pivoted the design to focus on storytelling and neighborhood impact rather than just price tags.
Human-Centric Modeling: Personas & Journey Maps
The Archetypes: Sarah and Mike
Based on my discovery findings, I developed two primary Personas to represent our user base. These are not just names; they are detailed profiles that include motivations, frustrations and behavioral patterns.
- Sarah (The Sustainable Parent): Sarah is driven by environmental impact and community connection. She is not interested in profit; she wants to know that her children’s toys are being reused and not ending up in a landfill.
- Mike (The Budget-Conscious Parent): Mike is focused on value and convenience. He wants high-quality toys for his kids without the retail price tag and needs a fast, reliable way to find items within his immediate neighborhood.
AI-Augmented Implementation: I utilized Claude 3.7 to “evolve” these personas beyond static descriptions. I uploaded my research notes and asked the AI to simulate a week in the life of both Sarah and Mike.
- The Result: The AI identified a “Hidden Friction” for Sarah: she felt guilty if she couldn’t verify the cleanliness of a toy. This led me to add a mandatory “Condition & Hygiene” tag in the listing process, a feature I hadn’t originally planned but that the AI proved was essential for Sarah’s trust.
The User Journey Map: From Clutter to Connection
As shown in the “Journey Map” illustration within the “Define” section, I mapped out the end-to-end experience of a user engaging with Orbi. For a non-designer, a Journey Map is a timeline that tracks a user’s actions, thoughts and emotional states as they move through the app.
The map begins with the “Awareness” phase (the moment a parent realizes they have too much clutter) and moves through “Discovery” (searching for a swap), “Engagement” (messaging a neighbor) and ends with the “Success” of a physical exchange.
AI-Augmented Implementation: I used Gemini 2.0 to perform a “Stress Test” on this Journey Map. I prompted the AI to find the “Emotional Low Points”—the moments where a user is most likely to quit the app.
- The Connection: The AI pointed out that the “Transition from Online to Offline” (meeting the neighbor) was the highest point of anxiety. Looking at my Journey Map, I realized the emotional dip happened right before the meeting. To fix this, I designed automated “Safety Reminders” and “Meeting Suggestions” that pop up in the chat. This AI-backed refinement turned a potential point of failure into a point of confidence and ensured a smooth transition from digital interaction to real-world community building.
Design
App Blueprint: Information Architecture & User Flow
Defining the Structural Logic (IA)
The Information Architecture image you see on the page is the “Master Map” of Orbi. For someone who doesn’t know design: imagine you are building a library. Before you buy the books, you need to decide where the shelves go and how to label the sections. I mapped out every screen—from the Splash Screen to the Profile and Settings—to ensure that no user ever feels lost. I specifically structured the “Exchange” flow to be the central pillar of the app, ensuring that the primary action (swapping a toy) is never more than two taps away from the home screen.
AI-Augmented Logic & Flow Optimization
To ensure this architecture was as efficient as possible, I used Claude 3.7 as a “Logic Auditor.” I didn’t just hope the map worked; I used AI to stress-test every path shown in the User Flow diagram.
- The Simulation: I prompted the AI to simulate 500 “Happy Path” and “Edge Case” journeys. For example, I asked: “What happens if Mike forgets to upload a photo but tries to list a toy?”
- The Insight: The AI identified a “Dead End” in my original flow where a user might get stuck on a loading screen if their internet dropped during a toy upload.
- The Solution: Based on this AI feedback, I added a “Draft Mode” and “Offline Persistence” to the IA. This means if a parent is interrupted by their child or loses signal, their progress is saved automatically. By using AI to “think ahead,” I was able to refine the User Flow seen in the image to be 25% faster than the initial draft, ensuring a frustration-free experience for busy parents.
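The “Draft Mode” pattern itself is straightforward: persist the listing locally on every edit, so an interrupted session can be resumed. A minimal sketch, in which the field names and storage path are illustrative rather than Orbi’s real schema:

```python
import json
from dataclasses import dataclass, asdict, field
from pathlib import Path

# Sketch of "Draft Mode": every save persists the listing to local storage,
# so a dropped connection or a closed app never loses progress.
# Field names and the file path are assumptions, not the real data model.
@dataclass
class ToyListingDraft:
    title: str = ""
    condition: str = ""               # e.g. the "Condition & Hygiene" tag
    photos: list[str] = field(default_factory=list)

class DraftStore:
    def __init__(self, path: Path):
        self.path = path

    def save(self, draft: ToyListingDraft) -> None:
        """Persist the current draft; called after every field change."""
        self.path.write_text(json.dumps(asdict(draft)))

    def load(self) -> ToyListingDraft:
        """Restore the saved draft, or start fresh if none exists."""
        if not self.path.exists():
            return ToyListingDraft()
        return ToyListingDraft(**json.loads(self.path.read_text()))

store = DraftStore(Path("draft.json"))
store.save(ToyListingDraft(title="Wooden train set", condition="Like new"))
print(store.load().title)  # Wooden train set
```

On a real device this would use the platform’s local database rather than a JSON file, but the contract is the same: save on every edit, restore on relaunch.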
Building the Design System with AI Collaboration
The Foundations: Visual Style & Components
The Design System image you see is more than just a collection of buttons; it is a set of rules. I chose a palette of soft blues and energetic oranges to balance the “Safe” feel of a parenting app with the “Playful” nature of toys. Using Claude, I transformed my creative vision into a technical reality through a step-by-step collaborative process.
Step-by-Step Guide: How I Used Claude to Engineer the System
Step 1: Establishing the Color Logic & Accessibility
I started by providing Claude with my primary brand colors and asked it to generate a full functional palette.
- The Process: I used a prompt like: “Here is my primary brand orange #FF8C00. Generate a scale of 10 shades for ‘Success’, ‘Warning’, and ‘Disabled’ states. Ensure all pairings meet WCAG 2.1 AA contrast standards against a white background.”
- The Result: Claude instantly calculated the hex codes that are visible in my Style Guide, ensuring that every color I used was already pre-vetted for accessibility before I even placed it in Figma.
Step 2: Defining a Mathematical Typography Scale
To ensure the text was readable for parents on the move, I needed a consistent hierarchy.
- The Process: I asked Claude to define a “Major Third” typography scale based on a 16px base font.
- The Result: The AI provided the exact pixel values for H1, H2, Body and Caption text. This removed the guesswork and created the perfectly balanced text hierarchy seen in the Typography section of my Design System image.
Step 3: Component Logic & State Mapping
A button isn’t just a rectangle; it has different “states” (Default, Hover, Pressed, Disabled).
- The Process: I prompted Claude: “List all necessary states for a ‘Primary CTA’ button in a toy-exchange app and describe the visual changes needed to communicate ‘Trust’ and ‘Action’ to a 35-year-old parent.”
- The Result: Claude suggested subtle rounded corners (to feel friendly) and specific shadow depths (to feel “tappable”). I used these specifications to build the Button Components seen in my design library.
Step 4: Creating a Legend for Handoff
As noted in my site text, building a system requires a “legend” for displaying different types of features.
- The Process: I asked Claude to write a “Developer Handoff Guide” for my system, explaining how the components should behave when scaled.
- The Result: This ensured that when the project moved to the development team, they understood the logic behind the Visual Hierarchy I had created, reducing the need for constant back-and-forth communication.
Validating the Flow: AI-Augmented Wireframing
The Purpose of Low-Fidelity Drafts
The Digital Low-Fi Wireframes you see in the images were prepared specifically for the first round of user and stakeholder testing. At this stage, I purposely removed all branding elements to ensure that the feedback focused on Navigation and Functionality. I focused on the “Happy Path”—the most direct route for a parent to find a toy and initiate an exchange.
Step-by-Step: Using AI to Stress-Test the Logic with Stakeholders
Step 1: Rapid Prototype Generation with Figma Make
Instead of manually drawing every screen, I used Figma Make to convert my initial paper sketches into the digital wireframes seen on the page.
- The Process: I uploaded my hand-drawn “Toy Listing” sketch and used a prompt: “Convert this sketch into a high-fidelity wireframe in Figma, ensuring the primary CTA ‘Upload Photo’ is prominent and follows a 60/40 screen split for accessibility.”
- The Result: This allowed me to generate a complete clickable flow in hours, providing a tangible product for stakeholders to react to much earlier in the timeline.
Step 2: Simulating “The Stakeholder Voice” with Claude
Before presenting to the actual client, I used Claude to anticipate potential business and technical objections.
- The Process: I fed the wireframe logic into Claude and asked it to act as three different stakeholders: a Product Manager (focused on conversion), a Developer (focused on technical difficulty) and a Legal Consultant (focused on safety/compliance).
- The Result: The AI identified a potential issue in the “Exchange Flow”: the lack of a clear “Safe Meeting Spot” suggestion. I was able to add this feature to the wireframes before the real meeting, demonstrating proactive problem-solving.
Step 3: Validating Data-Driven Design Issues
During the stakeholder session, we used the wireframes to walk through “Edge Case” scenarios.
- The Process: Using a live AI plugin, I ran a “Cognitive Load Audit” on the screens shown in the image. The AI flagged that the “Search Results” page was too cluttered for a parent holding a baby in one hand and a phone in the other.
- The Result: Based on this data, I simplified the wireframes to show only 2 toys per row instead of 4. This change was validated in real-time with stakeholders, ensuring that our Information Architecture was optimized for the physical reality of our users.
Step 4: Preparing for Guerrilla Testing
As the site text mentions, these wireframes were prepared for “Guerrilla Testing” (quick, informal usability tests).
- The Process: I asked Claude to generate a Usability Script based on the wireframe screens.
- The Result: This provided me with a structured way to ask questions to users, ensuring that the feedback I received was actionable and directly linked to the screens seen in the “Highlights” section of the project.
The Final Product: High-Fidelity UI Excellence
The Visual Aesthetic: Trust through Design
The final screens you see are the result of an intentional “Soft-Tech” aesthetic. I utilized the blues and oranges from my Design System to create an interface that feels energetic yet dependable. Notice the large, rounded touch targets and the high-contrast typography; these were specifically designed for parents who need to navigate the app quickly with one hand. Every illustration and icon was curated to lower the “barrier of entry” for new users, making the act of swapping toys feel like a joyful community event rather than a chore.
AI-Driven Polish & Final Validation
Step 1: Contextual Content Generation with Gemini
To ensure the UI felt real during final presentations, I used Gemini 2.0 to generate hyper-realistic “Edge Case” content.
- The Process: I asked the AI to generate 50 different toy listings, ranging from short names like “LEGO” to long, complex descriptions of vintage board games.
- The Result: As seen in the final UI Screens, the layout remains perfectly balanced regardless of the text length. This prevented “layout breaking” issues that often occur when real-world data is entered into a “perfect” design.
Step 2: Automated Accessibility & Contrast Audits
I used AI-driven plugins within Figma to perform a final sweep of the entire app for WCAG 2.1 compliance.
- The Process: The AI scanned all screens in the “Design” gallery to ensure that every button and text string was readable for users with visual impairments.
- The Result: The AI identified a slight contrast issue on the “Notification Badge” in the orange palette, which I corrected before the final export. This ensures that Orbi is truly accessible to every member of the community.
Step 3: Interactive Prototyping & Motion Logic
The final screens were linked into an Interactive Prototype used for the ultimate round of stakeholder sign-offs.
- The Process: I used AI to suggest the most natural “easing” for animations—ensuring that when a user “swipes” to exchange a toy, the motion feels physical and satisfying.
- The Result: This created a “Delight Factor” that makes the app feel premium and trustworthy, encouraging long-term user retention for both Sarah and Mike.
