Drawzy AI - MobAI Hackathon Case Study

Case study for the AI part of Drawzy, covering sketch recognition, RL-style scoring, mobile integration, and team repo architecture.

Overview

Drawzy is a team-built drawing game prototype from the MobAI hackathon at ENSIA. This case study focuses on the AI contribution: a FastAPI backend for mobile sketch prediction and round scoring.

Problem

The AI had to fit a fast game loop: the mobile app exports a sketch, the backend normalizes it, and the gameplay UI needs a simple response it can render during the demo.

Context

The public organization has three repos: drawzy-ai for the FastAPI service, drawzy-mobile for the Flutter/Firebase app, and drawzy-landing for the Next.js site. This case study stays focused on the AI/backend part.

My Role

AI contributor focused on model serving, sketch preprocessing, the prediction API, and RL-style scoring. The mobile app and landing page are credited to the wider team.

Goals

  • Provide an AI backend that the mobile drawing screen could call during gameplay
  • Decode mobile canvas output sent as base64 image data
  • Preprocess sketches into the 28x28 grayscale shape expected by the classifier
  • Return drawing predictions from a trained CNN model
  • Expose a scoring route driven by timing and similarity-style round signals
  • Keep the architecture simple enough for a hackathon demo and later portfolio review

Technical Decisions

  • Use FastAPI and Pydantic for a small, explicit model-serving API
  • Separate prediction and scoring routes through routers so the AI service has clear responsibilities
  • Convert sketches to grayscale, crop to the detected drawing area, preserve aspect ratio, and pad to 28x28 before inference
  • Use a TensorFlow/Keras CNN model for sketch classification on a Quick, Draw!-style subset
  • Use a Gymnasium environment and Stable-Baselines3 PPO model for a prototype scoring mechanism
  • Have the Flutter app encode canvas output as PNG bytes and send it to the prediction endpoint as JSON
  • Keep Firebase-backed mobile sessions, chat/game state, and the landing page outside the AI backend boundary

Architecture

The Flutter app posts base64 canvas images to FastAPI. The backend decodes, preprocesses to a 28x28 grayscale tensor, calls the CNN, and returns the guessed class. For scoring, it builds a 61-value observation and calls the PPO model.
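The mobile-to-backend hop is a small JSON contract. Below is a hedged, standard-library-only sketch of the decode step; the field name `image_base64` and the data-URL prefix handling are assumptions for illustration, not necessarily the exact contract in drawzy-ai.

```python
import base64
import binascii
import json

def decode_canvas_payload(raw_body: bytes) -> bytes:
    """Pull PNG bytes out of the JSON body the Flutter app posts.

    Assumes a payload like {"image_base64": "<base64 PNG>"}; the exact
    field name in drawzy-ai may differ.
    """
    payload = json.loads(raw_body)
    data = payload["image_base64"]
    # Some canvas exporters prepend a data URL header; strip it if present.
    if data.startswith("data:"):
        data = data.split(",", 1)[1]
    try:
        return base64.b64decode(data, validate=True)
    except binascii.Error as exc:
        raise ValueError("image_base64 is not valid base64") from exc

# Round-trip example with fake PNG bytes standing in for a real canvas export.
png_bytes = b"\x89PNG\r\n\x1a\nfake"
body = json.dumps({"image_base64": base64.b64encode(png_bytes).decode()}).encode()
```

Validating the base64 up front keeps malformed mobile requests from surfacing as opaque image-decoding errors deeper in the pipeline.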

Architecture Map

```mermaid
flowchart LR
  Player["Mobile player"] --> Mobile["Flutter Drawzy app"]
  Mobile --> Canvas["Sketch canvas and game modes"]
  Canvas --> GuessRoute["FastAPI predict sketch route"]
  GuessRoute --> ImagePrep["Decode base64, crop, resize, pad"]
  ImagePrep --> GuessModel["CNN sketch guess model"]
  GuessModel --> Predictions["Class label and top five guesses"]
  Mobile --> ScoreRoute["FastAPI score route"]
  ScoreRoute --> Observation["Similarity timeline plus previous score"]
  Observation --> ScoreModel["PPO scoring model"]
  ScoreModel --> RoundScore["Round score"]
  Mobile --> Firebase["Firebase app services"]
  Landing["Next.js landing page"] --> Mobile
```
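The 61-value observation in the scoring path is the 60-step similarity timeline with the previous round score appended. A minimal sketch under stated assumptions: similarities are floats in [0, 1], short rounds are zero-padded, and the Stable-Baselines3 model is called through the usual `model.predict(obs)` API.

```python
import numpy as np

SIM_STEPS = 60  # fixed-length similarity timeline the scoring model expects

def build_observation(similarities: list[float], previous_score: float) -> np.ndarray:
    """Pack a round into the 61-value observation: 60 similarity steps + prev score."""
    timeline = np.zeros(SIM_STEPS, dtype=np.float32)
    clipped = np.clip(similarities[:SIM_STEPS], 0.0, 1.0)
    timeline[:len(clipped)] = clipped  # zero-pad short rounds (assumption)
    return np.append(timeline, np.float32(previous_score))

# At serve time the score route would hand this to the PPO policy, roughly:
#   action, _ = ppo_model.predict(build_observation(sims, prev), deterministic=True)
```

Fixing the observation length at the boundary keeps the route's contract stable even when rounds end early.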

Case Study Screenshots

  • Guess-model training curves from the AI backend for the sketch classifier
  • Mobile splash/brand screen from the Flutter app, giving the AI backend a visible product context
  • Mode-selection screen for offline, online, and multiplayer drawing game flows
  • Onboarding visual from the mobile repo, showing the prototype as a complete app experience

Key Features

  • Sketch prediction route under the FastAPI prediction router
  • Base64-to-image request flow from Flutter to Python
  • Drawing-area crop and padding logic that keeps sketches centered for inference
  • Top-class prediction path from the CNN model
  • RL-style scoring route using timing and a 60-step similarity sequence
  • Flutter drawing canvas with a visible prediction action and response state
  • Firebase-backed multiplayer/session work in the mobile repo
  • Landing repository that frames the hackathon app as a shareable product
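The top-class prediction path above is the standard pattern: run the CNN, softmax the logits, take the arg-sorted top five. A small NumPy sketch with a hypothetical label list; the real class set is defined by the trained model in drawzy-ai.

```python
import numpy as np

# Hypothetical label set for illustration; the trained model defines the real one.
LABELS = ["cat", "house", "tree", "car", "fish", "star"]

def top_k_guesses(logits: np.ndarray, k: int = 5) -> list[tuple[str, float]]:
    """Softmax the raw CNN logits and return the k most likely (label, prob) pairs."""
    z = logits - logits.max()            # subtract max to stabilise the exponentials
    probs = np.exp(z) / np.exp(z).sum()
    order = np.argsort(probs)[::-1][:k]  # indices in descending probability
    return [(LABELS[i], float(probs[i])) for i in order]
```

Returning (label, probability) pairs rather than a bare class index let the mobile UI show both the top guess and runner-up guesses from one response.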

Challenges

  • Making model input reliable when user sketches are sparse, off-center, or empty
  • Keeping the mobile/backend API contract understandable under hackathon time pressure
  • Turning ML artifacts into endpoints that the rest of the team could call
  • Presenting the project publicly without overstating prototype maturity
  • Explaining the three-repository organization clearly enough for portfolio visitors

Results / Outcomes

  • Produced a public case study from the MobAI hackathon organization
  • Extracted real visuals from the mobile and AI repositories instead of using placeholder screenshots
  • Documented the AI backend from mobile canvas export to FastAPI inference and scoring
  • Clarified the role as AI/backend contribution while explaining how the team prototype fits together

What I Learned

  • AI hackathon work is easier to evaluate when the model has a clear product surface calling it
  • Preprocessing can matter as much as the model when sketches come from real users
  • Short demos need small API contracts that frontend/mobile teammates can use quickly
  • Portfolio case studies should separate personal contribution from team-owned surfaces