Vertically Integrated Projects

We aim to develop technologies for self-care and remote care for the chronically ill, the elderly, and patients who live too far from primary or specialty providers to access care easily.
VIP ChallENG research goals
We will develop and validate multimodal sensing technology that combines RGB cameras and millimeter-wave radar to enable robust human pose estimation, activity recognition, and interaction recognition. This technology will be designed for unobtrusive monitoring of individuals in real-world settings, with applications in healthcare, aged care, and rehabilitation. The goal is to support early detection of abnormal behaviours, enhance safety, and promote independent living, particularly for older adults and patients with chronic conditions.
The Connected Health VIP is supported by the Tyree Foundation Institute of Health Engineering (IHealthE). IHealthE is engaged in translational research and education under five themes. The first theme to launch, 'Connected Health', inspired the work of this VIP project on stroke. For more information, see the IHealthE website.
- Wearable electronic devices
- mmWave radars
- Infrared sensors
- Bio-signal processing
- Clinical decision support systems
- Predictive analytics, machine learning and deep learning
- Software design
- Smartphone application development
United Nations Sustainable Development Goals
Good Health and Well-being
Explore the Connected Health sub-teams
Students will be divided into the teams below. Each team will first conduct a literature review to understand current research in multimodal human monitoring. They will then collaborate with academic leads to complete their tasks.
Goal: Estimate 3D body poses of multiple individuals using a combination of mmWave radar and RGB video data.
Tasks:
- Develop deep learning models to predict anatomical keypoints (e.g., joints) for multiple people.
- Use knowledge distillation or attention-based fusion to combine radar and RGB features.
- Train and evaluate models using the annotated in-house dataset with diverse movements and subject types.
- Ensure robustness across real-world settings with occlusion, low light, and crowd density.
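One way to combine the two modalities, as the tasks above suggest, is attention-based fusion: per-keypoint RGB features attend over radar features, and the attended radar context is added back to the visual stream. The sketch below is a minimal illustration of that idea in plain numpy; the token counts, feature dimension, and function names are illustrative assumptions, not the project's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(rgb_feats, radar_feats):
    """Fuse RGB keypoint tokens with radar tokens: each RGB token
    attends over all radar tokens (scaled dot-product attention),
    and the attended radar context is added back residually."""
    d = rgb_feats.shape[-1]
    scores = rgb_feats @ radar_feats.T / np.sqrt(d)  # (K, R) attention logits
    weights = softmax(scores, axis=-1)               # each row sums to 1
    context = weights @ radar_feats                  # (K, d) radar context
    return rgb_feats + context                       # residual fusion

# Hypothetical sizes: 17 COCO-style keypoint tokens, 32 radar tokens.
K, R, d = 17, 32, 64
rng = np.random.default_rng(0)
fused = cross_attention_fuse(rng.normal(size=(K, d)), rng.normal(size=(R, d)))
print(fused.shape)  # (17, 64)
```

In a real model the attention weights would be learned (separate query/key/value projections); the point here is only the shape of the computation: the radar branch contributes evidence exactly where the RGB branch queries it.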
Goal: Recognise human activities (HAR) and interactions (HHIR) using RGB images and descriptive text, enabling zero-shot classification.
Tasks:
- Use the in-house dataset of RGB video and rich textual labels to fine-tune pretrained video-language models for activity understanding.
- Apply contrastive learning to align visual features (from RGB) with semantic labels.
- Classify both seen and unseen actions/interactions using zero-shot learning techniques to enable generalisation to novel scenarios.
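The zero-shot step above works because contrastive training (CLIP-style) places video and text embeddings in a shared space: classification reduces to picking the label whose text embedding is closest, by cosine similarity, to the video embedding, even for labels never seen in training. A minimal sketch with toy embeddings (the embeddings themselves would come from a pretrained video-language model; the vectors and label set here are fabricated for illustration):

```python
import numpy as np

def normalize(x):
    # L2-normalise embeddings along the last axis.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def zero_shot_classify(video_emb, label_embs, label_names):
    """Return the label whose text embedding has the highest
    cosine similarity with the video embedding."""
    sims = normalize(label_embs) @ normalize(video_emb)
    return label_names[int(np.argmax(sims))]

# Toy stand-ins for embeddings from a pretrained video-language model.
rng = np.random.default_rng(0)
labels = ["walking", "falling", "sitting down"]
label_embs = rng.normal(size=(3, 8))
video_emb = label_embs[1] + 0.1 * rng.normal(size=8)  # clip resembling "falling"

print(zero_shot_classify(video_emb, label_embs, labels))  # falling
```

Because the classifier is just nearest-neighbour search over text embeddings, adding a new action class at test time only requires embedding its textual description, which is what enables generalisation to novel scenarios.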
Goal: Enhance recognition of human activities and interactions by fusing radar data (heatmaps or point clouds) with RGB and text input.
Tasks:
- Convert mmWave radar signals into heatmaps or 3D point clouds representing motion and body shape.
- Fuse radar and RGB modalities using multimodal LLMs with attention or contrastive alignment.
- Detect subtle and complex activities and interactions in diverse environments.
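The first task above, turning raw mmWave signals into heatmaps, is conventionally done with a range-Doppler transform: an FFT over the fast-time samples of each chirp resolves range, and a second FFT across chirps resolves Doppler (radial velocity). A minimal sketch on a synthetic FMCW cube (the cube dimensions and the single simulated reflector are assumptions for illustration):

```python
import numpy as np

def range_doppler_heatmap(iq_cube):
    """Convert a raw FMCW radar cube (chirps x samples) into a
    range-Doppler magnitude heatmap in dB: FFT along samples
    gives range bins, FFT along chirps gives Doppler bins."""
    range_fft = np.fft.fft(iq_cube, axis=1)                       # range axis
    rd = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)   # Doppler axis
    return 20 * np.log10(np.abs(rd) + 1e-12)                      # magnitude (dB)

# Synthetic cube: 64 chirps of 128 complex samples containing one
# moving reflector (one range frequency plus one Doppler frequency).
chirps, samples = 64, 128
n = np.arange(samples)
c = np.arange(chirps)[:, None]
cube = np.exp(2j * np.pi * (0.2 * n + 0.1 * c))
heatmap = range_doppler_heatmap(cube)
print(heatmap.shape)  # (64, 128)
```

Stacks of such heatmaps (or 3D point clouds produced by thresholding and angle estimation) become the radar modality that the multimodal model fuses with RGB frames and text.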
Credit
✔ 6 UoC per course
Professional Development
•   Teamwork
•   Leadership
•   Design
•   Communication
•   Integrity
•   Innovation and excellence
•   Diversity
•   Respect
•   Resilience
Industry Partners
- Biomedical Engineering
- Computer Science and Engineering
- Software Engineering
- Artificial Intelligence