COMPUTER SCIENCE PROJECT
A hybrid multimodal AI system combining DenseNet visual feature extraction with a LLaMA-based language model to deliver intelligent, image-aware dermatological guidance through a conversational interface.
The Skin Care Assistant integrates image analysis and conversational reasoning into a unified multimodal system. This semester, DenseNet acts as a vision encoder whose feature embeddings connect directly to a fine-tuned LLaMA-based language model, enabling the system to reason over visual patterns such as texture and lesion characteristics rather than relying on a single predicted label.
Design the AI core of the Skin Care Assistant by developing a hybrid multimodal model that integrates visual features and textual reasoning more tightly, and evaluate whether this integration improves diagnostic and conversational quality compared with the previous modular pipeline.
THE TEAM