Jong Hoon (Sam) Park

About Me

I'm a recent graduate of Carnegie Mellon University, where I completed a Master's degree in Mechanical Engineering (2022–2024) and worked as a Graduate Research Assistant.

My work focuses on applying machine learning as a statistical predictor across robotics, autonomous driving, and aviation. I'm particularly interested in multimodal ML, self-supervised learning, on-device ML, and model compression.

Currently at Verdant Robotics, working on ML systems for agricultural robotics.

PyTorch · Computer Vision · Multimodal ML · Signal Processing

Selected Projects

RAG Chatbot

Q&A Chatbot via Retrieval Augmented Generation

Built a RAG-based Q&A system to mitigate LLM hallucination. Users upload PDF documents; the pipeline vectorizes them into a grounded database and performs semantic search to retrieve relevant context before generating responses with an LLM.

LLM · RAG · Vector DB · NLP
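The retrieval step of the pipeline can be sketched in miniature. This toy version uses bag-of-words vectors and an in-memory list in place of the learned embeddings and vector database the real system would use; the chunk texts and query are invented for illustration:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts (a real pipeline uses a learned encoder)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Semantic search: rank stored document chunks by similarity to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical PDF chunks already ingested into the "database".
chunks = [
    "The warranty covers parts for two years.",
    "Our robots spray crops at millimeter precision.",
    "Refunds are processed within five business days.",
]
context = retrieve("how long is the warranty", chunks, k=1)
# The retrieved context is then prepended to the LLM prompt, grounding the answer.
```

Grounding generation in retrieved chunks is what lets the system answer from the uploaded documents instead of hallucinating.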
Pilot Workload Estimation

Pilot Workload Estimation via Multimodal ML

Multimodal ML model estimating eVTOL pilot stress levels from 7 biometric modalities — heart rate, eye gaze, GSR, brain activity (fNIR), body pose, grip force, and response time. Ground truth collected via NASA Task Load Index (TLX).

Multimodal ML · Signal Processing · PyTorch · eVTOL
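One common way to combine several biometric streams is late fusion, sketched below. The per-modality scores and equal weights here are hypothetical placeholders; the actual model learns features from the raw signals rather than receiving pre-computed stress scores:

```python
# Hypothetical per-modality stress estimates in [0, 1] (illustrative values only).
modality_scores = {
    "heart_rate": 0.72, "eye_gaze": 0.55, "gsr": 0.80, "fnir": 0.60,
    "body_pose": 0.40, "grip_force": 0.65, "response_time": 0.70,
}

def late_fusion(scores: dict, weights: dict) -> float:
    """Weighted average of per-modality predictions; weights are learnable in practice."""
    total = sum(weights.values())
    return sum(scores[m] * weights[m] / total for m in scores)

# Equal weights for the sketch; a trained model would weight reliable modalities higher.
weights = {m: 1.0 for m in modality_scores}
fused = late_fusion(modality_scores, weights)
```

The fused score would then be regressed against NASA TLX ratings, which serve as the ground-truth workload label.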
On-Device ML

Large Generative Model On-Device Deployment & Optimization

Compressed a 72M-parameter virtual garment try-on model onto NVIDIA Jetson Nano 4GB via quantization, structured/unstructured pruning, and knowledge distillation. Conducted filter-wise sensitivity analysis to identify key-player filters.

Model Compression · Pruning · Knowledge Distillation · Jetson Nano
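Unstructured magnitude pruning, one of the compression techniques listed, can be sketched without any framework. This minimal version zeroes the smallest-magnitude fraction of a weight matrix (the project's actual pipeline would operate on PyTorch tensors, and structured pruning would remove whole filters instead of individual weights):

```python
def magnitude_prune(weights: list[list[float]], sparsity: float) -> list[list[float]]:
    """Unstructured pruning: zero out the smallest-magnitude `sparsity` fraction of weights."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    # Threshold at the k-th smallest magnitude; keep everything if sparsity is 0.
    threshold = flat[k - 1] if k > 0 else -1.0
    return [[0.0 if abs(w) <= threshold else w for w in row] for row in weights]

# Toy 2x2 weight matrix; at 50% sparsity the two smallest magnitudes are removed.
W = [[0.1, -0.5], [0.05, 0.9]]
pruned = magnitude_prune(W, sparsity=0.5)
```

Sensitivity analysis like the filter-wise study mentioned above determines how much sparsity each layer tolerates before accuracy degrades.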
Motion Prediction in Airports

Motion Prediction in Airports via Heterogeneous Map Representations

Studied rasterized vs. graph-based airport map representations for aircraft motion forecasting using a transformer-based multi-modal joint prediction model (GPT-2 encoder + GMM header). Trained on 200 days of FAA SWIM trajectory data from KSEA and KEWR airports.

Motion Forecasting · Transformer · Graph Neural Net · Aviation
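A GMM output head models several possible futures at once and is trained by minimizing the negative log-likelihood of the observed next position under the mixture. The isotropic 2-D sketch below illustrates that objective; the mixture parameters (mode weights, means, sigmas) are invented, whereas the real model predicts them from the encoded scene:

```python
from math import exp, log, pi

def gmm_nll(x: float, y: float, mixtures: list) -> float:
    """Negative log-likelihood of a 2-D point under an isotropic Gaussian mixture.

    mixtures: list of (weight, (mean_x, mean_y), sigma) tuples.
    """
    density = 0.0
    for w, (mx, my), sigma in mixtures:
        z = ((x - mx) ** 2 + (y - my) ** 2) / (2 * sigma ** 2)
        density += w * exp(-z) / (2 * pi * sigma ** 2)
    return -log(density)

# Hypothetical two-mode prediction for an aircraft on a taxiway:
mixtures = [
    (0.7, (10.0, 0.0), 1.0),  # mode 1: continue straight ahead
    (0.3, (8.0, 3.0), 1.5),   # mode 2: turn toward an exit
]
```

Points near a predicted mode score a low NLL, so the loss rewards placing a mode near where the aircraft actually went.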
Facial Emotion Recognition

Human Facial Emotion Recognition & Classification

Trained a CNN classifier on the AffectNet benchmark (291K images, 8 emotion labels) to recognize facial expressions. Achieved ~70% validation accuracy. Analyzed class imbalance via confusion matrix, precision, recall, and F1 scores.

CNN · Computer Vision · AffectNet · Classification
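The per-class metrics used in the imbalance analysis fall directly out of the confusion matrix. A minimal sketch, using an invented 2-class matrix rather than the actual 8-class AffectNet results:

```python
def per_class_metrics(cm: list[list[int]]) -> list[tuple]:
    """Per-class (precision, recall, F1) from a confusion matrix.

    cm[i][j] = number of samples with true class i predicted as class j.
    """
    n = len(cm)
    out = []
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[i][c] for i in range(n)) - tp  # predicted c, true something else
        fn = sum(cm[c]) - tp                       # true c, predicted something else
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        out.append((precision, recall, f1))
    return out

# Toy 2-class confusion matrix (illustrative counts only).
cm = [[8, 2],
      [1, 9]]
metrics = per_class_metrics(cm)
```

On an imbalanced dataset like AffectNet, low recall on a minority class shows up here even when overall accuracy looks healthy.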