
Created by author using Figma, April 2025

I Designed a Digital ID App for the Detroit Tigers in 3 Hours Using AI

Caitlin
14 min read · Apr 16, 2025


I challenged myself to design a full-featured digital ID app for Detroit Tigers season ticket holders in just 3 hours using AI tools at every step. From research to wireframes, icon generation to image prompts, I explore where AI accelerated my workflow, where it held me back, and why human judgment still makes or breaks a product. Here’s what worked, what didn’t, and what I’d do differently next time.

Designing an entire app in three hours sounds like a dare. But when my assignment was to build a Digital ID app for Detroit Tigers season ticket holders, complete with perks like parking updates, in-stadium discounts, and bragging rights, I took it as a challenge to see just how far AI tools could take me.

The goal? Create a fully designed app that lives on 3–5 screens.

The twist? Use AI across every stage of the design process.

From research to high-fidelity wireframes, I wanted to push AI to its limits and see where it actually made a difference. Spoiler: sometimes it did. Other times… I found myself thinking, I should’ve just done this myself.

Here’s what the three-hour sprint looked like, broken down phase by phase.

Research: Kickstarting Ideas with AI, Then Talking to People

Before designing, I needed to know: what do fans actually want in a digital ID app?

I started with AI, using ChatGPT and Perplexity to explore what other sports teams were doing, what perks mattered most, and what kinds of pain points fans face on game day. AI helped surface trends like loyalty integrations, in-app ticketing, and social sharing.

But it only got me so far. A lot of the insights felt vague, repetitive, or pulled from outdated sources. Some research even sounded totally made up, with fake personas, inaccurate stats, or insights that didn’t reflect real-world use.

So I did what I probably should’ve done first: I talked to a few people.


I asked friends and classmates what features they’d want in a sports team app. Their feedback helped me re-center on the user: they wanted simplicity, easy access to their perks, and a feeling of exclusivity as a season ticket holder.

AI helped frame the space. But real conversations helped define the product.

User Flows: Starting With AI, Finishing With My Brain

To sketch out the app’s user flow, I used Whimsical with a bit of help from ChatGPT. I started by asking AI to lay out how a typical user might move through the app: logging in, viewing parking updates, showing their digital ID, accessing discounts, and maybe sharing something on social media.

It gave me a decent outline, but it wasn’t in-depth enough. It started out strong, understanding how to log into the app, but when it got to the features themselves, no matter how many times I tried regenerating the user flow, it wouldn’t budge. That’s when I realized Whimsical wasn’t going to take the next step because I hadn’t created a decision point or potential choices for each feature.

Userflows were created by author in Whimsical

Next, I began adding more detail for each feature, explaining it and guiding Whimsical throughout the entire process, which looked like this:

Userflows were created by author in Whimsical, April 2025

This was a good start, but it was clunky. AI wanted to add multiple navigation layers, confirmation screens, and redundant clicks — far too much for this experience.

So I reversed the process: I drafted the flow myself, then asked AI to expand or simplify parts. That worked better.

I simplified everything down to a clean, top-down flow. For example, I prioritized “Next Game Info” at the very top, since most fans want to know when their next perk opportunity is, followed by the season ticket ID, parking, and shareable content.

Now, I know that user flows are supposed to follow certain shapes for every decision, every input, and every page, but my sister, who works as a Senior Product Designer at one of the FAANG companies, would beg to differ.

Her expert take on what separates a Junior Product Designer from a Senior Product Designer: it isn’t whether a designer makes a user flow, it’s how simplified that flow is.

In meetings, VPs and Directors have only a short window to get a strong grasp of a feature, and they do it by looking at its most concise form.

Given that I only had 3 hours, I decided to do the same.

Userflows were created by author in Figma, April 2025
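
For reference, here’s that concise flow written out as a nested outline in Python. This is just my shorthand for the Figma diagram above; the structure and the little helper are hypothetical, not an export from Whimsical or Figma.

```python
# A minimal sketch of the simplified top-down flow, assuming the screen
# priorities described above. My own shorthand, not a tool export.
flow = {
    "Login": {
        "Home": {
            "Next Game Info": {},                       # surfaced first
            "Season Ticket ID": {"Scan at Gate": {}},   # digital ID check-in
            "Parking": {"Reserve Spot": {}, "Confirmation": {}},
            "Share Status": {},                         # shareable content
        }
    }
}

def print_flow(node: dict, depth: int = 0) -> None:
    """Print the nested flow as an indented outline."""
    for screen, children in node.items():
        print("  " * depth + screen)
        print_flow(children, depth + 1)

print_flow(flow)
```

Even in text form, the whole experience fits on a handful of lines, which is exactly the kind of concision those VPs and Directors are scanning for.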

The takeaway? AI was helpful for breaking out of creative blocks, but simplifying the UX flow required a human touch.

Low-Fidelity Wireframes: AI Tools Were a Mixed Bag

I tested a few AI wireframing tools, with very different results:

  • Figma AI’s Wireframe Designer was pretty underwhelming. It generated generic layouts with little customization or context-awareness. Quite disappointing, actually. I ended up deleting almost every element it offered, because they had barely anything to do with the Digital ID app and the features I wanted to incorporate.
  • UX Pilot? Surprisingly good. I liked that it gave me enough credits to experiment. The interface felt clean, intuitive, and had a few smart layout ideas like a status-sharing feature for season ticket holders and a solid parking module. It wasn’t perfect (some sections like “Profile” felt too templated), but it sparked new ideas.
Low-fidelity wireframes were created by author using UX Pilot, April 2025

Prompt: Can you help me create a digital ID app for the Detroit Tigers, I’m interested in incorporating a few features such as 1) Parking Availability which allows ticket holders to reserve parking 2) Check out Discounts 3) Have PreGame Dugout Access 4) Bragging rights to share about their status. This app is targeted to season ticket holders for the Detroit Tigers.

The most valuable part wasn’t the layouts…it was how AI forced me to ask what matters most. What’s the first thing users should see? I decided it should be the next game info and a visual confirmation of their season ticket status. I rearranged the layout accordingly, giving hierarchy to the content fans actually care about.

AI gave me something to react to, but I made the calls.

Creating Icons

Material Symbols: Clean, Familiar, but Generic

Material Symbols (Google’s default icon set) are clean, modern, and universally recognizable. They’re safe. They’re accessible. And they work well for most projects.

But for this app, something meant to feel a little more premium and a little more exclusive, Material icons just felt a bit… flat. Too familiar. Too default. The spacing was rigid, the stroke weight felt a little heavy in places, and overall, it lacked the personality I wanted to bring to the experience.

Recraft.ai Icons: Unique, Consistent, and On-Brand

So I turned to Recraft.ai, and that’s when things started to click.

With Recraft.ai, I had more control over the aesthetic. I could specify stroke thickness, shapes, and styles. I was able to prompt for icons that were not just functional, but stylistically aligned with the rest of the interface.

The result? Icons that felt custom, not templated. They had smoother lines, better cohesion, and more visual balance across the bottom nav. Even though they were generated, they looked like they had been designed with the same hand.

And perhaps most importantly — they didn’t feel like any other app. They made this one feel mine.

Here’s an example of Recraft.ai nailing it on the first shot:

Prompt: Generate a scan icon for a digital ID app. The icon should feature a simple QR-code-like or barcode-inspired design with a frame, maintaining a consistent stroke with round edges.

Created by author using Recraft.ai, April 2025

Meanwhile, there were also times when the AI needed a few more prompts to get to the right tone for the app:

Prompt #1: Generate a minimalist home icon for a digital ID app. The icon should have a clean, modern aesthetic.

Created by author using Recraft.ai, April 2025

Prompt #2: Help me create a home button for a digital ID app, don’t make it too complicated, keep the style clean and easy to understand.

Created by author using Recraft.ai, April 2025

Even when I had to iterate (especially for a basic home icon), the results were still strong. If anything, Recraft saved me the most time and gave me the cleanest results of any AI tool I used.

The Difference Between Material Symbols and Recraft.ai

Still, I wanted to compare the two directly. This side-by-side comparison helped solidify something I’ve been learning through design: small visual choices have big emotional impact.

Icons were created by author using Material Symbols (left) and Recraft.ai (right), respectively

When users open the app, they’re not just scanning for features — they’re subconsciously forming an impression of quality, personality, and care. Recraft’s icons helped push this app from default utility to something more thoughtful and tailored.

Going forward, I’ll definitely use AI to prototype visual systems faster, but I’ll also remember to pause and ask:

“Do these visuals reinforce the story I’m trying to tell?”

In this case, Material Symbols said “template.”
Recraft AI said “intentionally designed.”

Guess which one I picked.

Images: Visually Interesting, Creatively Frustrating

This part? A headache.

I tried everything to create images for things like:

  • Funnel cake at a baseball game
  • Parking lots with marked availability
  • Fans holding up digital IDs
  • Headshots for a profile section

Sometimes, the output was weirdly beautiful. Other times, completely wrong.

Of all the image generation tools I experimented with, Adobe Firefly consistently delivered the most accurate and usable results. It wasn’t perfect, but it had a better understanding of composition, color balance, and context. The visuals felt more grounded, like they could actually exist in the world of my app.

DALL·E 2, on the other hand, was hit-or-miss. And by that, I mean mostly miss. While it’s great for surreal or artistic prompts, it really struggled with realistic scenarios, especially anything involving people, branded environments, or specific actions like scanning a digital ID or standing in a parking lot. The results were often weirdly abstract, with distorted faces or strange props that had no business being in a baseball stadium.

So, for the rest of the project, I decided to use Adobe Firefly.

I realized that I had to become a director, not just a designer. I had to control framing, lighting, mood, realism, angle, and subject in a text prompt. And even then, something was always off:

  • A smiling fan with seven fingers.
  • A parking lot that looked more like a mall.
  • A funnel cake morphing into soft-serve ice cream.
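
To wrangle all of those variables, I started treating each prompt like a fill-in-the-blanks template, one slot per decision a director would make. Here’s a hypothetical sketch of that mental model in Python; the slot names are my own shorthand, not parameters of any image generator’s API.

```python
# Hypothetical prompt template: one slot per "director" decision.
# The slots are my own shorthand, not any AI tool's parameters.
def build_image_prompt(subject: str, setting: str, lighting: str,
                       angle: str, style: str) -> str:
    """Join the director-style decisions into a single text prompt."""
    return ", ".join([subject, setting, lighting, angle, style])

print(build_image_prompt(
    subject="funnel cake on a paper plate",
    setting="concourse of a baseball stadium",
    lighting="warm evening light",
    angle="eye-level close-up",
    style="sharp focus, realistic",
))
```

Filling every slot deliberately cut down on the surprises, though it never eliminated them.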

Here are a few examples of how the prompts could still go sideways:

Generating Food

Prompt 8: Create funnel cake for the baseball game

Created by author using Adobe Firefly, April 2025

It felt a little off; the first picture seemed the closest, but still not funnel cake enough.

Prompt 9: Funnel cake yummy cinematic 8k at a baseball game.

Created by author using Adobe Firefly, April 2025

The result? Ice cream in a waffle bowl.

And sometimes… they wouldn’t change that much:

Generating Parking Lot Maps

Prompt 14: Show a zoom in of the parking lot with cars and spots that might be available and not available, sharp focus, realistic view, side with people

Created by author using Adobe Firefly, April 2025

Prompt 15: Show a zoom in of the parking lot with cars and spots that might be available and not available, sharp focus, realistic view, side with people getting ready for the baseball game

Created by author using Adobe Firefly, April 2025

Prompt 16: Show a zoom in of the parking lot with cars and spots that might be available and not available, sharp focus, realistic view, side with people getting ready for the baseball game, dynamic lighting

Created by author using Adobe Firefly, April 2025

This part of the process, generating images with AI, was definitely the most frustrating. While it technically saved time and money, it often came at the cost of control.

Personally, I’d rather use these as quick placeholders and just shoot the real thing myself. At least then, I’d know exactly how the lighting, composition, and emotion would come across because I’d be behind the lens, not a prompt.

And so, while Firefly didn’t always nail it on the first try, it got closer to what I was looking for far more often. It reminded me that not all AI tools are created equal, and sometimes knowing which tool to use is half the battle.

Generating Faces

Next came the challenge of generating people’s faces, and honestly, this was one of the trickiest parts. All I wanted was a simple, clean headshot for a digital ID. But AI made even that feel like a gamble.

Prompt 20: Headshot white background of a person for a digital id picture

Created by author using Adobe Firefly, April 2025

One image would have the eyes half-closed; another would have awkward hair placement or a slightly off expression that made it feel… uncanny. It was a constant game of almost right, but not quite. What should’ve been a straightforward task turned into a surprisingly delicate balancing act.

Generating Detroit Tigers Merch

Eventually, I learned a valuable trick when working with AI image generators: avoid using licensed names, especially something like “Detroit Tigers.” Every time I included the team name in a prompt, the results went off the rails. The AI would hallucinate logos, mash together tigers, or generate surreal, off-brand imagery that looked nothing like what I had in mind.

Here’s how that turned out:

Prompt 24: Person wearing merchandise from official Detroit Tigers Store, stylish, clothing advertisement, sleek, HDR, Backlit.

Created by author using Adobe Firefly, April 2025

This didn’t fit what I needed. I wasn’t literally looking for tiger fashion, but rather merchandise from the baseball game. Prompt 24 was off-brand, so I decided to emphasize Detroit Tigers colors in Prompt 25.

Prompt 25: Person wearing baseball merchandise with colors of orange, blue, white from official Detroit Tigers Store, stylish, clothing advertisement, sleek, HDR, Backlit

Created by author using Adobe Firefly, April 2025

This was better, but the images still looked like artificial models; I wanted them to feel realistic and believable, like real people.

So I pivoted. Instead of asking for “Detroit Tigers fans,” I started prompting with color schemes like “blue and orange” or using more generic phrases like “baseball stadium” or “ticket holder.” This simple change made the results way more usable and actually gave me more flexibility in styling the visuals around my own brand direction.

It was a small adjustment, but a helpful reminder that when working with generative tools, language is everything. Knowing what not to say became just as important as knowing what to include.

Would I use AI-generated images again?

Yes — but only for placeholders or early concept previews.

AI is great for quickly visualizing ideas, especially in the early stages. But when it comes to capturing specific, real-world moments, like a fan scanning their digital ID outside the stadium, it falls short. The images often feel slightly off, with awkward angles or uncanny expressions.

If this were a real product, I’d rather shoot the photos myself. I know exactly what I want to see. AI doesn’t (at least not yet). For now, it’s a helpful shortcut, but not a replacement for real visuals.

High-Fidelity Wireframes

I’ll be honest: by the time I got to this phase, I tried letting AI polish things, but it just made the low-fidelity ideas slightly shinier. Nothing substantial changed.

Prompt: Can you help me create a digital ID app for the Detroit Tigers, I’m interested in incorporating a few features such as 1) Parking Availability which allows ticket holders to reserve parking 2) Check out Discounts 3) Have PreGame Dugout Access 4) Bragging rights to share about their status. This app is targeted to season ticket holders for the Detroit Tigers.

High-fidelity wireframes were generated by author using UX Pilot, April 2025

In the end, I found that my original wireframes were actually better suited to the assignment. The AI-generated “high-fidelity” versions looked polished on the surface but felt empty underneath: too generic, lacking brand personality and design intention, and often introducing more problems than they solved, like inconsistent spacing, awkward typography, or illogical button behavior.

It became clear that these tools weren’t thinking about hierarchy, context, or user flow; they were just styling existing components. So I scrapped the high-fidelity AI designs and finished the screens myself. It took more effort, but the result felt cleaner, more purposeful, and far more aligned with the user experience I actually wanted to create.

Final Results After the 5 Phases

After moving through research, flows, wireframes, icons, and images, this was the final output of the Detroit Tigers Digital ID app.

Design mockups were created by the author in Figma
  • Home, Team, Watch, Live Cam: Designed to give fans quick access to core features like upcoming games, rankings, team recaps, and live selfie content.
Design mockups were created by the author in Figma
  • Parking Experience: A streamlined three-screen flow for reserving parking with real-time availability and a confirmation screen showing your car’s location.
Design mockups were created by the author in Figma
  • Scanning & Deals: A digital ID card for quick check-in and app-exclusive food discounts and order tracking.
Design mockups were created by the author in Figma
  • VIP Access: A reservable experience for events like the Pregame Dugout or Stadium Tours, with barcode integration and ticket transfers.

Each screen was designed with season ticket holders in mind — prioritizing ease, personalization, and perks. While AI accelerated parts of the process, it was the manual refinement that brought everything together.

Disclosure: I created more than 3–5 frames after the 3-hour mark for my portfolio.

What I Learned: AI is a Creative Catalyst, Not a Creative Director

This project pushed me to explore AI in ways I hadn’t before, and I learned a lot.

AI helped me move faster. It helped me break creative blocks. It filled in some blanks and opened up unexpected ideas. But at every step, I still had to be the one making the calls.


AI didn’t know what my users cared about. It didn’t understand what made the experience meaningful or what made a layout actually usable. That was my job.

Would I use AI again?

Definitely. But not to finish the project. Only to get it started.

AI isn’t a designer. It’s a co-pilot. And when you’re racing the clock on a three-hour assignment, having a co-pilot isn’t such a bad thing.

If you enjoyed this article, feel free to give it a clap and share your thoughts in the comments!

While working on this project, I used ChatGPT-4o to help structure my thoughts, refine sections, and smooth out transitions. I asked it things like, “Can you help me simplify this user flow explanation?” or “How can I make this paragraph more concise without losing my voice?” The AI helped speed up the process, but all final decisions, rewrites, and design perspectives were shaped by my own experiences, insights, and three-hour race against the clock.
