Research
Emergency EHR Visualization
Research Assistant (Supervisor: Prof. Mai Dahshan) | August 2025 – Present
- Analyzed literature and existing models on event summarization, retrieval pipelines, and context-grounded reasoning to design a theoretically consistent AI summarization workflow.
- Explored RAG-based summarization frameworks for generating concise, context-aware summaries of evolving emergency events from heterogeneous data sources.
- Evaluated LLM reasoning reliability and retrieval consistency, analyzing trade-offs between factual grounding and adaptive summarization depth.
Agentic Reasoning Project
Research Assistant (Supervisor: Prof. Sheng Li) | May 2025 – Present
- Conducted a theoretical literature review on reasoning depth, confidence calibration, and agentic control, identifying key gaps between passive LLM reasoning and active metacognitive regulation.
- Examined project code and agent workflows to extract the decision-making mechanisms linking confidence, context, and tool calling.
- Developed a unifying theoretical framework connecting over-reasoning phenomena with agentic confidence regulation, outlining hypotheses for future implementation and evaluation.
Clarifying Question & Robot Uncertainty
Research Assistant (Supervisor: Prof. Yen-Ling Kuo) | August 2025 – Present
- Reviewed literature on uncertainty alignment and clarifying-question frameworks, synthesizing insights from KNOWNO (robotic uncertainty) and Double-Turn Preference (ICLR 2025) to model how LLM agents identify and verbalize uncertainty.
- Explored and extended the Clarifying Question model codebase, including its data pipeline and training logic, to replicate experimental workflows.
- Implemented a custom evaluation function to measure model performance and reasoning reliability, supporting research on adaptive clarification strategies in robotic and language agents.
Neuroscience in the Loop
Research Assistant (Supervisor: Prof. Mai Dahshan) | December 2024 – May 2025
- Applied image and text transformers to analyze multimodal neuroscience data, experimenting with cross-modal architectures for pattern recognition.
- Built and tested AI models on high-dimensional neural datasets to uncover correlations and improve model interpretability.
- Led early-stage evaluation and iteration cycles, optimizing transformer-based pipelines for accuracy and usability in downstream research.
Humanity Unleashed — Time-LLM Multivariate Forecasting
September 2024 – May 2025
- Integrated PatchTST into the Time-LLM model, enabling high-performance multivariate time series forecasting through transformer-based temporal encoding and patch-wise learning strategies.
- Extended Time-LLM by implementing DeepAR, introducing probabilistic forecasting capabilities to produce uncertainty-aware predictions for complex temporal data.