Featured Research

Cheese Image Classification with Machine Learning

A 14-week undergraduate research project guided by Dr. Xiang Ma, focused on building a high-quality cheese image dataset and training modern vision models to classify cheese varieties using PyTorch and university HPC resources.

Summer–Fall 2025 Undergraduate Research

Conducted at the University of Wisconsin–Eau Claire following a structured weekly timeline from tools onboarding to final documentation.

View Project Write-up

Scope

An end-to-end pipeline, from dataset cleaning and curation to model design, training, and evaluation on cheese image data.

Weeks 1–14

Tooling

VS Code, Python, Git/GitHub, Google Colab, and PyTorch as the core stack for development and experiments.

ML Stack

Data

Multiple cheese datasets from Kaggle plus a small, curated dataset collected later in the project for evaluation.

Dataset Focus

Compute

Model training on the university's high-performance computing GPU servers after prototyping in Colab.

HPC GPU

Research Breakdown

The project follows a structured weekly plan: learning the tools, cleaning and understanding the cheese datasets, designing a model that fits the GPU cluster, and evaluating how well it generalizes.

Project Summary

This research project investigates how modern image classification models handle real-world cheese images. The work starts with tool ramp-up (VS Code, Python, Git, Colab), moves through classic datasets like MNIST to understand core ideas, and then applies those principles to cheese images sourced from Kaggle and a small custom dataset.

The emphasis is not just on accuracy, but on building a clean, reliable dataset and understanding where and why models fail.

Structured Weekly Plan

  • Weeks 1–2: Tools, Python, Git/GitHub, and ML basics.
  • Week 3: MNIST and CNNs as a reference point for image classification.
  • Weeks 4–5: Explore and clean existing cheese image datasets.
  • Weeks 6–7: Study SOTA models and design a cheese classifier.
  • Weeks 8–11: Implement, train, and refine the model on HPC.
  • Weeks 12–14: Collect a small cheese dataset, evaluate, and document.

Tools & Foundations

Early weeks focus on building a solid foundation: using VS Code as the main editor (with Python and Markdown support), following Python tutorials, and learning Git & GitHub for version control. MNIST is used as a “hello world” for computer vision to understand dense networks and convolutional neural networks before touching the cheese data.

Once the basics are solid, the work shifts to PyTorch on Google Colab for prototyping models and experimenting with training loops.
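
As a reference for that stage, here is a minimal PyTorch training-loop sketch of the kind one might prototype in Colab; the dataset (MNIST), model shape, and hyperparameters are illustrative placeholders, not the project's exact configuration.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Placeholder data pipeline: MNIST with a simple tensor transform.
train_ds = datasets.MNIST("data", train=True, download=True,
                          transform=transforms.ToTensor())
train_loader = DataLoader(train_ds, batch_size=64, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model: a small fully connected network.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                      nn.ReLU(), nn.Linear(128, 10)).to(device)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(3):  # small epoch count for quick Colab experiments
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch}: mean loss {running_loss / len(train_loader):.4f}")
```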

Data & Cleaning Strategy

  • Start from public cheese image datasets on Kaggle and inspect class labels and image quality.
  • Apply criteria for a good dataset: no mislabeled examples, consistent image sizes, minimal blur, and no duplicates.
  • Use tools like cleanlab/cleanvision and Google Gemini-based checks to flag images that don’t actually contain cheese or appear mislabeled (see the sketch after this list).
  • Resize all images to a consistent resolution so they feed into the model cleanly.
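
As referenced in the list above, a minimal sketch of how cleanvision can flag problem images; the folder name cheese_images/ is hypothetical, and the exact issue types surfaced depend on the cleanvision version.

```python
from cleanvision import Imagelab

# Point cleanvision at a folder of images ("cheese_images/" is hypothetical).
imagelab = Imagelab(data_path="cheese_images/")

# Scan for common problems: blur, exact/near duplicates, odd sizes, etc.
imagelab.find_issues()

# Summarize what was found; individual flagged files can then be
# reviewed by hand before being removed from the dataset.
imagelab.report()
```

Flagged images still go through manual review before deletion, since automated checks can produce false positives.
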
01

MNIST & Baseline CNN

Reproduce basic fully connected and CNN models on MNIST to understand training dynamics, loss curves, and what “good” performance looks like on a clean benchmark.
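
A sketch of the two kinds of baselines compared here, with illustrative layer sizes rather than the exact configurations used:

```python
import torch.nn as nn

# Baseline 1: fully connected network on flattened 28x28 images.
mlp = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Baseline 2: small CNN that keeps the 2D structure of the image.
cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 28x28 -> 14x14
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 10),
)
```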

02

Cheese Dataset Cleaning

Apply cleaning tools and manual inspection to remove mislabeled, non-cheese, blurry, or duplicate images. Align all images to a shared size to prepare them for training.
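
A minimal sketch of the shared-size step using Pillow; the folder names and the 224×224 target are assumptions for illustration, not the project's chosen resolution.

```python
from pathlib import Path
from PIL import Image

SRC = Path("cheese_images_raw")   # hypothetical input folder
DST = Path("cheese_images_224")   # hypothetical output folder
DST.mkdir(exist_ok=True)

for path in SRC.glob("**/*.jpg"):
    img = Image.open(path).convert("RGB")         # normalize color mode
    img = img.resize((224, 224), Image.BILINEAR)  # shared training size
    out = DST / path.relative_to(SRC)
    out.parent.mkdir(parents=True, exist_ok=True)
    img.save(out)
```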

03

Model Design & Implementation

Design a PyTorch model (starting from CNN backbones and transfer learning) sized to fit the university’s GPU cluster. Consider trade-offs between model size, accuracy, and compute cost. Large-scale transformer-style models are explored conceptually as future extensions.
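
One common pattern consistent with this description is transfer learning from a pre-trained torchvision backbone; the sketch below assumes ResNet-18 and a hypothetical class count.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 20  # hypothetical count of cheese varieties

# Load ImageNet-pretrained weights and replace the classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Optionally freeze the backbone so only the new head trains at first,
# which keeps the compute footprint small on shared GPU hardware.
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False
```

Freezing the backbone first keeps memory and compute needs modest on a shared cluster; it can be unfrozen later for full fine-tuning.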

04

HPC Training & Refinement

Train the model on the HPC GPU server, monitor performance, and refine via hyperparameter tuning, regularization, and architecture tweaks. Later in the project, evaluate the trained model on a small, newly collected cheese dataset.
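
A hedged sketch of the refinement knobs named above (weight decay for regularization, a learning-rate schedule, and best-checkpoint saving); `train_one_epoch` and `evaluate` are hypothetical helpers, and all values are illustrative.

```python
import torch

# Assumes `model` is defined as in the transfer-learning sketch above.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=1e-2)

# Cosine decay over a hypothetical 30-epoch budget.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=30)

best_acc = 0.0
for epoch in range(30):
    train_one_epoch(model, optimizer)   # hypothetical helper
    acc = evaluate(model)               # hypothetical helper
    scheduler.step()
    if acc > best_acc:                  # keep only the best checkpoint
        best_acc = acc
        torch.save(model.state_dict(), "best_cheese_model.pt")
```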

Early Observations

  • The quality of the cheese dataset (labels, duplicates, non-cheese images) significantly affects model performance, often more than the specific architecture choice.
  • Techniques that work well on MNIST do not transfer directly; the cheese domain has more visual variety, clutter, and ambiguity.
  • Using pre-trained CNN backbones in PyTorch accelerates convergence and provides strong baselines on limited hardware and time.

Project Wrap-Up & Future Directions

  • My contribution focused on data cleaning, baseline model design, and early experiments using PyTorch and the university’s HPC resources.
  • After my time on the project, the lab’s future directions include deeper error analysis, exploring more advanced architectures (such as vision transformers), and expanding the curated cheese dataset.
  • The documented weekly workflow (tools, datasets, model design, and evaluation) is intended to support future students who continue this line of research.