Research Projects
School of Science
Division of Life Science
Alzheimer's disease (AD) is the leading cause of dementia and poses an escalating public health crisis in our aging population. Recent progress in anti-Aβ monoclonal antibody treatments, such as Lecanemab and Donanemab, has shown promise in slowing disease progression by promoting the clearance of aggregated Aβ. However, these therapies are frequently associated with adverse effects, notably amyloid-related imaging abnormalities (ARIA), highlighting the urgent need for alternative or complementary therapeutic strategies beyond Aβ.
Apolipoprotein E (APOE) ε4 is the most significant genetic risk factor for AD, implicated in over 70% of cases. Consequently, targeting APOE represents a compelling therapeutic avenue. Emerging anti-apoE therapies have demonstrated the potential to modulate AD pathology with a significantly reduced incidence of ARIA in both preclinical and early clinical studies. Nonetheless, the efficacy of current apoE-targeted approaches remains limited and warrants further optimisation.
In this project, we aim to identify and develop novel protective apoE variants that can be delivered via viral vector-mediated expression to mitigate AD pathology. Specifically, we will characterise the biophysical and biochemical properties of both naturally occurring protective apoE variants and synthetic variants generated in-house. Using a series of biochemical assays and in vitro cellular models, we will systematically compare these variants to understand their structure-function relationships and neuroprotective potential in AD. This knowledge will guide the nomination and development of optimised apoE variants, laying the foundation for personalised gene therapies targeting APOE in AD.
1) Work with PG students and postdoctoral fellows to develop and execute this project.
2) Develop critical thinking skills and learn key experimental techniques.
1) Understand the scientific rationale, design, and main approaches of the project.
2) Understand the current research landscape of Alzheimer's disease and therapy.
3) Master critical laboratory techniques used for this project.
Moderate
2) Learn how to build magnetic tweezers.
3) Learn how to solve scientific questions systematically.
Plastics are widely used all over the world and slowly degrade into nanoplastics (<100 nm). The number of nanoplastic particles increases rapidly as particle size decreases, and these particles accumulate in the air. Nanoplastics can travel long distances in the atmosphere, causing transboundary pollution.
Airborne nanoplastics can be inhaled by humans, cross cellular barriers in the lungs, and accumulate in tissues and organs, causing adverse health effects. For example, nanoplastics can cause respiratory diseases including pulmonary infection and lung cancer. However, there is no effective way to prevent nanoplastics from entering or accumulating in lung cells. In this study, we will develop traditional Chinese medicines to clear these nanoplastics from the lungs and prevent pulmonary infection.
1) Read the papers about uptake of nanoplastics into lung.
2) Perform the uptake assays of nanoplastics in lung cells.
3) Find the optimal traditional Chinese medicine to clear nanoplastics in lung cells.
1) Understand how nanoplastics enter lung cells.
2) Understand how traditional Chinese medicines clear nanoplastics in lung cells.
Moderate
2) Prepare and evaluate EM samples.
2) Understand basic knowledge of protein science.
3) Apply protein expression and purification technique.
4) Conduct cryo-EM sample preparation and optimization.
5) Perform cryo-EM data acquisition and processing.
Challenging
Applicants need to have basic knowledge of biological science, particularly protein biochemistry.
2) Apply the developed AI program to other cancer cells to test potential applications.
2) Learn how to apply AI program to biological and medical questions.
2) Produce recombinant plasmids for CRISPR KO.
3) Edit genes.
4) Evaluate the KO efficiency.
2) Dissociate hippocampal neurons.
3) Perform the assay of synaptic vesicles in the presence of amyloid.
2) Gain a hands-on experience.
2) Interpret experimental data.
3) Find how the dynamics of G-quadruplex change under different conformations.
2) Study how the dynamics of G-quadruplex change under different conformations.
2) Study how nanoplastics affect neurotransmission in living neurons.
2) Interpret the results.
2) Evaluate different methods using cryo-ET and other EM techniques.
2) Gain hands-on experience of EM.
3) Conduct cryo-EM sample preparation.
4) Perform cryo-EM data acquisition and processing.
2) Perform experiments.
3) Analyze data.
4) Draw conclusions from the analyzed data.
2) Draw conclusions from experiments and data analysis.
2) Build the theoretical model.
2) Learn how to analyze data to check models.
3) Learn how to program.
Department of Mathematics
2) Reproduce the results in these references.
3) Apply the methods to new problems.
2) Apply the neural network methods to solve PDEs that arise from science and engineering.
Challenging
Students are expected to have a strong mathematics background, familiarity with programming, and knowledge of numerical methods.
Prerequisites: multivariable calculus, linear algebra, differential equations, and numerical analysis.
Chaotic systems are found throughout science and engineering—from fluid turbulence to weather patterns. Their defining feature is an extreme sensitivity to initial conditions, known as the "butterfly effect", which makes long-term, point-wise prediction fundamentally impossible. While their governing equations are often deterministic, they evolve in an erratic and seemingly random manner.
Given this complexity, it is often more fruitful to adopt a statistical perspective. For example, turbulence is simulated from the deterministic Navier-Stokes equations, yet its solutions are so intricate they are typically analyzed as a stochastic process. This project embraces that paradigm: we will treat the deterministic chaotic system itself as a stochastic process, focusing on its statistical behavior rather than its exact trajectory.
The goal is to use a data-driven approach to build a surrogate model from limited observational data. This machine learning model will learn the evolution of the system's probability distribution over time, capturing its essential statistical dynamics without knowledge of the governing equations. The ultimate aim is to create a model that, trained on a small dataset, can analyze the long-term behavior of the system, thereby bypassing the need for additional data or expensive numerical simulations.
1) Generate training and testing data by numerically solving the differential equations of a chosen chaotic system (e.g., the Lorenz system).
2) Implement the stochastic flow map learning framework using a deep learning library such as PyTorch or JAX.
3) Train the model on the generated time-series data; focus the validation on the accuracy of short-term predictions and on the model's ability to replicate the geometric and statistical features of the system's attractor.
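The data-generation stage in step 1 could be sketched as follows. The Lorenz parameters, step size, and trajectory length below are illustrative choices, and the second trajectory demonstrates the sensitivity to initial conditions described above:

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system (classic chaotic parameter values)."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def simulate_lorenz(x0, dt=0.01, n_steps=10_000):
    """Integrate with fourth-order Runge-Kutta; returns an (n_steps + 1, 3) trajectory."""
    traj = np.empty((n_steps + 1, 3))
    traj[0] = x0
    for i in range(n_steps):
        s = traj[i]
        k1 = lorenz_rhs(s)
        k2 = lorenz_rhs(s + 0.5 * dt * k1)
        k3 = lorenz_rhs(s + 0.5 * dt * k2)
        k4 = lorenz_rhs(s + dt * k3)
        traj[i + 1] = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

# Two trajectories from nearly identical initial conditions diverge completely
# (the butterfly effect), yet both stay on the same bounded attractor,
# which is exactly the statistical structure a surrogate model should capture.
traj_a = simulate_lorenz(np.array([1.0, 1.0, 1.0]))
traj_b = simulate_lorenz(np.array([1.0, 1.0, 1.0 + 1e-8]))
```

Pointwise prediction of `traj_a` fails quickly, but its long-run statistics (the attractor geometry) are stable and learnable, which is what the project's statistical perspective exploits.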
1) Develop a solid understanding of the fundamental principles of chaotic dynamical systems.
2) Gain hands-on experience in applying advanced machine learning and deep learning techniques to a challenging scientific problem.
3) Acquire practical skills in implementing and training neural networks, including residual networks and generative models, using modern deep learning frameworks.
4) Gain experience in data generation, scientific computing, and the evaluation of predictive models for complex systems.
5) Understand the emerging field of scientific machine learning and its potential to revolutionize scientific discovery and engineering design.
Challenging
Previous hands-on experience in machine learning is a must.
Familiarity with linear algebra and differential equations.
Prior coursework in machine learning or deep learning is beneficial but not strictly required.
A strong interest in interdisciplinary research at the intersection of machine learning, physics, and applied mathematics.
Department of Chemistry
2) Train deep learning models using orbital-based learning methods.
3) Develop and improve the current model architecture.
4) Apply the models to real-world problems, such as ML potential and molecular predictions.
2) Learn basic electronic structure theories, including Hartree-Fock, coupled cluster, GFN-xTB, and commonly used DFT functionals.
3) Become familiar with PySCF and be able to run it for feature and label generation.
4) Train deep learning models on GPUs.
5) Assist with improvements to the new model architecture.
6) Understand how to bridge computational chemistry and quantum simulations with real chemistry systems and applications.
2) Learn how to carry out the fs-TA spectroscopy and time-resolved fluorescence spectroscopy independently.
3) Work under the supervision of the PI, Prof. Tengteng Chen, and a PG student who will assist in acclimating to the lab environment.
4) Provide regular reports to Prof. Tengteng Chen.
2) Investigate nanoaggregation synthesis and characterization.
3) Work with advanced ultrafast spectroscopies for measurements and instrumentation.
4) Work efficiently on a research project and develop project management skills.
Department of Physics
2) Solve the model and find the phase diagram. It needs to be shown that the gapped phase is described by a Z2 gauge theory, i.e. the toric code.
3) Derive the phenomena that occur when one adiabatically traverses a non-contractible circle enclosing the gapless region of the model in parameter space. This will presumably be mapped onto a path of the mass term that gaps out the Majorana cone of the matter fields, from which a topological term associated with the evolution of the mass along a closed circle can be derived.
2) Understand the technique of partons, Dirac fermions and topological terms.
School of Engineering
Department of Computer Science and Engineering
2) Implement baseline multimodal models (physio + vision + scales) and iteratively harden them for real-world use via robustness tests and domain adaptation.
3) Design and execute application scenario surveys (screening, progress tracking, social skills training); support data collection, and pilot experiments with clear evaluation metrics.
2) Gain proficiency in PyTorch and transformer toolkits for multimodal fusion and deployment.
3) Understand open-source LLM and multimodal architectures, and their use for contextual integration and privacy-aware training.
2) Implement and benchmark baseline acceleration methods to evaluate latency, throughput, and energy efficiency for LLM inference on mobile platforms.
3) Design and prototype intelligent mobile GUI agents that autonomously operate device interfaces, leveraging LLM capabilities for efficient task automation.
4) Evaluate and optimize trade-offs among accuracy, latency, and resource consumption in mobile applications.
2) Develop hands-on skills with model compression and acceleration techniques, specifically for mobile deployment.
3) Learn to balance trade-offs among accuracy, latency, and resource consumption in resource-constrained environments.
4) Gain experience in prototyping intelligent mobile applications and integrating multimodal systems for enhanced real-time interaction.
2) Collect and analyze data.
3) Present results.
2) Understand the cognitive mechanisms underlying human action recognition and prediction, and how they can be implemented computationally.
Challenging
This is a Cognitive Science project suitable for students who are interested in interdisciplinary research across CSE and SOSC.
2) Learn theories and bridge them into deployable systems.
2) Develop a software system consisting of a database, intelligent algorithms, and IoMT components for real-life applications.
3) Research and develop advanced ML algorithms to tackle challenging AI + healthcare problems.
Moderate
The projects are software oriented, with a strong emphasis on machine learning and AI.
2) Apply machine learning, programming and computational skills.
2) Use statistics and optimization to automate data cleansing and denoising.
3) Mine user behavior from the cleansed data.
4) Make predictions and recommendations given user behavior.
This project blends human-computer interaction (HCI), information visualization, and visual analytics. We will work with UI controls/widgets, such as range sliders, radio buttons, and dropdown menus, that are often found in web-based applications to elicit user input. We believe these widgets can be enhanced to improve data analysis workflows. For example, what if these widgets could track users' interactions with them and also visualize those interactions, making users more aware of their current behavior and potentially changing subsequent behavior? In line with this belief, we have already built an open-source JavaScript library, ProvenanceWidgets (https://provenancewidgets.github.io), which tracks user interactions with the widgets and dynamically overlays them on the widget, all in situ and in real time. In this project, we will enhance and integrate these widgets into a data analysis tool (one where users upload a dataset, analyze data attributes and records, and create visualizations) to model and understand user behavior (e.g., whether the user is interacting in a suboptimal or biased manner), and then make users aware of these issues and help fix them. We will then conduct a user study to evaluate how and when these widgets can help users during analysis. The long-term vision is to build GuidanceWidgets: UI widgets that adaptively guide users during analysis, form-filling, or other related use cases on the web. This guidance may help a user become unstuck or enhance their ongoing workflow, to name a few use cases.
1) Contribute to the design and implementation of a web-based prototype data analysis system.
2) Develop proficiency in frontend web-development, in particular the React (or Angular) framework, and also backend scripting (e.g., using Python Flask).
3) Participate in planning and conducting a user study to assess the utility of the widgets in improving users' task performance and experience.
4) Engage in literature review, iterative design, and data analysis including scientific writing and presentation.
1) Understand foundational concepts of analytic provenance (i.e., tracking users' interactions and modeling their behavior).
2) Gain hands-on experience building an interactive visualization tool.
3) Develop skills in designing, conducting, and analyzing data collected from user studies.
4) Improve critical thinking about the role of guidance during data analysis.
5) Improve critical scientific writing and presentation skills.
Moderate
Prior web development experience is a bonus.
2) Learn to implement some of these techniques effectively. This will translate into widely applicable skills in software engineering and compiler research.
Moderate
Students applying to this project should have extensive programming experience and at least some experience in either functional programming or compiler design. For instance, a good way of obtaining this experience is to take the Principles of Programming Languages and/or Modern Compiler Design courses.
There exists a Python package, NL4DV, that takes as input a tabular dataset and a natural language query about that dataset. In response, the toolkit returns an analytic specification modeled as a JSON object containing data attributes, analytic tasks, and a list of Vega-Lite specifications relevant to the input query. In doing so, NL4DV aids visualization developers who may not have a background in NLP, enabling them to create new visualization NLIs or incorporate natural language input within their existing systems. Read more about this toolkit at https://nl4dv.github.io
This project involves utilizing text-based and vision-based language models to further enhance this toolkit's capabilities. I particularly want to support styling or authoring "multilingual" visualizations using natural language. For more information, email me.
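To make the toolkit's output concrete, the dict below sketches the general shape of the analytic specification described above, for a hypothetical query like "average horsepower by origin". The field names are simplified for illustration and may not match NL4DV's exact schema; consult the documentation linked above for the real format:

```python
# Illustrative shape of an NL4DV-style analytic specification (simplified):
# data attributes, inferred analytic tasks, and a list of Vega-Lite specs.
spec = {
    "attributes": ["Horsepower", "Origin"],
    "tasks": ["derived_value"],  # e.g., computing an aggregate per category
    "visList": [
        {
            "vlSpec": {  # a Vega-Lite specification renderable in the browser
                "mark": "bar",
                "encoding": {
                    "x": {"field": "Origin", "type": "nominal"},
                    "y": {
                        "field": "Horsepower",
                        "type": "quantitative",
                        "aggregate": "mean",
                    },
                },
            }
        }
    ],
}
```

A visualization developer consumes such a JSON object directly, rendering each `vlSpec` with a Vega-Lite runtime, without writing any NLP code themselves.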
2) Become proficient in frontend web-development, in particular the React (or Angular) framework, and also backend scripting (e.g., using Python Flask).
3) Participate in planning and conducting a user study to assess the utility of the toolkit in improving users' task performance and experience.
4) Engage in literature review, iterative design, and data analysis including scientific writing and presentation.
2) Develop skills in designing, conducting, and analyzing findings from user studies.
3) Improve critical thinking about the role of natural language in interacting with visualizations.
4) Improve critical scientific writing and presentation skills.
Moderate
Python is a must; web-development is a bonus.
2) Assist in relevant data analysis and report preparation.
2) Account for face or object recognition effects or deficits in human data, and perform relevant data analysis.
3) Present results and prepare reports.
Challenging
This project is suitable for students with computational/engineering backgrounds and an interest in human cognition.
2) Deploy the model on edge devices and optimize inference latency.
3) Assist in data collection and other related experiments.
2) Develop proficiency with the PyTorch deep learning framework and popular transformers toolkits.
3) Gain expertise in mainstream open-source computer vision algorithms.
ARTIQ comprises both hardware and software components. On the software side, it features a Python-to-assembly compiler, nicknamed NAC3, which is the focus of this project. The compiler and associated tooling are brand new and still require substantial development. For example, one goal is to reimplement the popular Numba compiler for numerical computing using NAC3.
In this project, students are guided to conduct research in the database field. Depending on their strengths, students are assigned particular database problems. For example, if they are good at geometry, candidate problems include "how to find the nearest restaurant from a given location" and "how to find the shortest path from a source to a destination". If they are good at programming, one candidate problem is "how to return results efficiently when users issue queries".
1) Study some important problems in the field.
2) Implement some important algorithms in the field.
3) Conduct research.
1) Learn how to implement some important algorithms in the field.
2) Learn how to conduct research.
Challenging
Students are required to have their GPA/CGA at least 3.7 (out of 4.0).
2) Learn about EEG and eye movement data analysis.
3) Explore decoding techniques using machine learning methods.
2) Experiment on AI models for evaluation or benchmarking purposes.
2) Acquire knowledge of methods for alignment, evaluation, or human-centric benchmarking of AI models.
3) Gain hands-on experience in comparison studies between humans and AI models.
Challenging
This is a Cognitive Science project suitable for students who are interested in interdisciplinary research across CSE and SOSC.
Basic knowledge of cognitive science/experimental psychology methods, quantitative data analysis, or machine learning/programming skills is required.
2) Perform relevant data analysis.
3) Assist in report preparation.
2) Acquire skills to perform human data processing and analysis, as well as eye movement data analysis using EMHMM.
3) Gain experience in preparing research reports.
More description on the technology of the project can be found at: http://mwnet.cse.ust.hk/wherami
2) Conduct simulation studies, research and development.
3) Write programs to develop a full system.
4) Perform trials with Android and iOS.
5) Conduct commercial and real-site trials.
2) Develop indoor localization techniques.
3) Conduct R&D in the area.
4) Write mobile programs (Android and/or iOS) and perform system administration.
In this project, students are guided to conduct research in the data mining field (also known as knowledge discovery). Depending on their strengths, students are assigned particular data mining problems. For example, if they are good at optimization, candidate problems include "how to find the best discount rate for a shop in order to attract more customers" and "how to promote products for a shop". If they are good at graphs, one candidate problem is "how to find similar friends on Facebook".
1) Study some important problems in the field.
2) Implement some important algorithms in the field.
3) Conduct research.
1) Learn how to implement some important algorithms in the field.
2) Learn how to conduct research.
Challenging
Students are required to have their GPA/CGA at least 3.7 (out of 4.0).
2) Implement preliminary code for machine learning model training and inference.
3) Conduct a survey on various application scenarios.
4) Assist in data collection and other related experiments.
2) Develop proficiency with the PyTorch deep learning framework and popular transformers toolkits.
3) Gain expertise in mainstream open-source LLM architectures and multimodal systems.
There is a vast space of possible design and implementation choices to make when creating a new language or improving an existing one. Because this space is too large and complex to explore exhaustively, a certain dose of creativity and consistent intuition building are important parts of PL research. Equally important is the rigorous analysis of programming language semantics, verifying that language designs exhibit desirable properties and that language implementations fulfill their specifications. Finally, relatively advanced programming and algorithmic skills are required to ensure the reliability and efficiency of compiler and type checker implementations.
The goal of this project is to explore each of these axes to varying extents, depending on each student's personal interests.
2) Formalize programming language semantics and derive mathematical correctness proofs.
3) Acquire new programming and algorithmic skills along the way.
Moderate
Students applying to this project should have extensive programming experience and at least some experience in either functional programming or compiler design. For instance, a good way of obtaining this experience is to take the Principles of Programming Languages and/or Modern Compiler Design courses.
2) Become proficient in or develop proficiency in frontend web-development, in particular the React (or Angular) framework, and also backend scripting (e.g., using Python Flask).
3) Participate in planning and conducting a user study to assess the utility of the tool in improving users' task performance and experience.
4) Engage in literature review, iterative design, and data analysis including scientific writing and presentation.
2) Develop skills in designing, conducting, and analyzing data from user studies.
3) Improve critical scientific writing and presentation skills.
Moderate
Prior web development experience is a bonus.
2) Develop preliminary code for deep learning model training and inference with public datasets.
3) Assist in data collection and other related experiments.
2) Develop proficiency with the PyTorch deep learning framework and popular transformers toolkits, especially in multimodal learning.
3) Gain expertise in mainstream BCI and other wearable sensor systems.
2) Support user/asset tracking, people sensing, crowd counting, etc., with algorithms and protocols. (This includes IoT design, sensing, camera innovations, edge AI, and data/video analytics for large-scale deployment.)
3) Help on research, prototyping, simulation, and experimental trials.
4) Be involved in industrial deployment of our research results to enable new retail, promote smart cities, and create new market opportunities.
5) Prepare documentation in the form of patents, papers, and presentations.
2) Conduct video analytics, network programming, machine learning and protocol design.
Department of Chemical and Biological Engineering
2) Write a program that uses computer vision to identify and classify defects.
2) Use AI tools that leverage computer vision.
2) Develop your own cathode material for lithium sulfur battery.
2) Characterise in-house developed materials for OECTs.
2) Understand basic electrochemistry.
3) Explore design of OECT devices and organic semiconductor material.
2) Analyze data.
3) Use machine learning and AI technologies.
2) Improve programming skills, both in reading and modifying code from others and in writing usable and maintainable code.
3) Process and reason with a large amount of data with sound statistical principles.
4) Gain exposure to scientific research, in particular in computational method development.
2) Fabricate a prototype featuring a p-n junction diode.
2) Work with a team of members from different backgrounds.
3) Learn the principles to handle an engineering project.
Department of Civil and Environmental Engineering
Global climate change and the resulting extreme temperatures have become an increasing threat to urban sustainability. Numerous novel engineering materials and building technologies have been proposed in recent years as strategies to improve the thermal environment and building energy efficiency. This project aims to review potential strategies in the literature and compare their impacts on building energy efficiency through numerical modeling. The effectiveness of different strategies will be assessed for urban neighborhoods in different cities at the annual scale.
1) Review implemented strategies and policies in global cities.
2) Conduct numerical simulations and analysis.
1) Understand urban climate and its interaction with buildings.
2) Develop numerical simulation skills for environmental analysis.
Moderate
Basic knowledge of Matlab/coding is helpful for the project.
Extreme climate hazards pose a severe threat to human beings, especially residents of urban environments. This project will focus on analyzing the magnitude and trends of various hazards (e.g., heatwaves, tropical cyclones) for global metropolitan areas under climate change. By integrating population data with climate model outputs, the objective is to quantify the impacts of hazardous weather on human health.
1) Collect and process multi-model climate projections.
2) Analyze trends of hazardous weather under climate change.
3) Conduct population exposure analysis via integration of population and climate data.
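The exposure analysis in step 3 is often summarized in person-days: the product of residents and days of hazardous weather, summed over grid cells. The sketch below uses an entirely made-up 2x2 grid to illustrate the arithmetic:

```python
import numpy as np

# Hypothetical 2x2 grid: residents per cell and projected annual heatwave
# days per cell (both made-up numbers for illustration only).
population = np.array([[120_000, 80_000],
                       [5_000, 300_000]])
heatwave_days = np.array([[12.0, 9.5],
                          [3.0, 18.5]])

# Exposure in person-days: residents times hazardous days, summed over cells.
exposure_person_days = float((population * heatwave_days).sum())
# 120000*12 + 80000*9.5 + 5000*3 + 300000*18.5 = 7,765,000 person-days
```

In the actual project the same elementwise product would be taken over gridded population datasets and multi-model climate projections rather than toy arrays.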
1) Develop data processing skills including basic coding.
2) Gain knowledge of urban climate hazards under climate change.
Basic
Department of Industrial Engineering and Decision Analytics
2) Conduct numerical experiments using synthetic or real-world datasets.
3) Visualize and interpret model performance under different uncertainty levels.
4) Review relevant literature on risk measures (VaR, CVaR, RVaR).
5) Present findings in a short written report and presentation.
2) Gain hands-on experience in modeling and solving optimization problems using Python and Gurobi.
3) Learn how to evaluate and visualize robustness and sensitivity of optimization results.
4) Develop the ability to connect mathematical models with real-world decision problems.
5) Practice effective research communication through technical writing and presentation.
Challenging
Basic knowledge of Python (NumPy/Pandas) is required.
An interest in optimization, data analytics, or operations research is preferred.
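For orientation on the risk measures named above: VaR at level alpha is the alpha-quantile of the loss distribution, and CVaR is the mean loss in the tail at or beyond it. A minimal empirical sketch (NumPy only; the simulated losses are illustrative, not project data):

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical Value-at-Risk (the alpha-quantile of the losses) and
    Conditional Value-at-Risk (the mean loss at or beyond VaR)."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)
    cvar = losses[losses >= var].mean()
    return var, cvar

# Illustrative losses: 100,000 draws from a standard normal distribution.
rng = np.random.default_rng(0)
losses = rng.normal(size=100_000)
var, cvar = var_cvar(losses, alpha=0.95)
# For N(0, 1) losses, VaR_95% is about 1.645 and CVaR_95% is about 2.06.
```

CVaR always lies at or above VaR, which is one reason it is preferred as a coherent risk measure in robust optimization formulations.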
Department of Electronic and Computer Engineering
2) Measure the performance of an existing Internet-scale system.
Moderate
Applicants are expected to have taken ELEC 3120/COMP 4621 or at least are taking them in the same semester.
Strong programming skills in C++ or FPGA development are preferred.
School of Business and Management
Department of Management
2) Conduct basic data processing and visualization.
3) Read and categorize participants' responses.
4) Participate in the ideation and design of experiments.
2) Learn concepts related to behavioral economics and judgment and decision making, with an emphasis on those relevant for organizations and policy.
Department of Economics
2) Conduct descriptive and econometric analysis using STATA or R.
3) Visualize key ecosystem indicators (funding, gender, and regulation) through charts or dashboards.
4) Review policy and regulatory documents to identify gaps and integration pathways with Hong Kong.
5) Contribute to short policy briefs or presentations summarizing preliminary findings.
2) Learn practical skills in survey data cleaning, statistical analysis, and visualization.
3) Gain exposure to comparative financial regulation and cross-border policy research.
4) Develop collaborative research and writing skills in a professional research environment.
5) Produce a research deliverable contributing to a public FinTech Ecosystem Report.
Challenging
STATA or R is required; knowledge of Russian or Central Asian languages is an advantage.
Department of Finance
The project aims to develop a better understanding of the trade policy toolkit—instruments such as sanctions or government aid—during international conflicts. In doing so, it aims to analyze the impact of various trade measures at the country and firm levels, quantifying policy effects and decomposing them into structural mechanisms.
1) Produce literature reviews.
2) Collect and clean data.
3) Run data analysis.
4) Write structural models.
5) Estimate model parameters with data.
1) Practice being part of a collaborative work process.
2) Develop skills in data analysis and structural estimation.
3) Get exposure to research project development and execution.
Moderate
Research tasks will be adjusted to the student's background.
Prior exposure to R and/or Python is a bonus.
School of Humanities and Social Science
Division of Social Science
2) Conduct literature review and data analysis.
3) Prepare experimental material.
4) Participate in recruitment and coordination.
5) Conduct hands-on experiments, both in-person and online.
6) Prepare posters and present at conferences.
2) Be able to prepare and generate experimental stimuli.
3) Get hands-on experience in conducting behavioral experiments.
4) Familiarize with basic data analysis tools and methods.
5) Build up ownership of projects and act as a lead person/team player.
Moderate
Students are expected to have some basic data analysis skills (Excel, R, SPSS). Programming knowledge (e.g., Python, PsychoPy, JSON) is recommended but not required.
2) Review relevant literature.
3) Prepare experimental material.
4) Participate in recruitment and coordination.
5) Conduct hands-on experiments in-person and online.
6) Conduct data analysis.
7) Prepare posters and present at conferences.
2) Get hands-on experience in conducting behavioral experiments.
3) Build up ownership of projects and become comfortable working in a team.
Moderate
Students are expected to have some basic data analysis skills (Excel, R, SPSS). Programming knowledge (e.g., Python, PsychoPy, JSON) is recommended but not required.
2) Review relevant literature.
3) Prepare experimental material.
4) Participate in recruitment and coordination.
5) Conduct hands-on experiments in-person and online.
6) Conduct data analysis.
7) Prepare posters and present at conferences.
2) Get hands-on experience in conducting behavioral experiments.
3) Build up ownership of projects and become comfortable working in a team.
Moderate
Students are expected to have some basic data analysis skills (Excel, R, SPSS). Programming knowledge (e.g., Python, PsychoPy, JSON) is recommended but not required.
2) Help with follow-up assessment design and online data collection.
3) Assist with the data cleaning or preliminary processing.
2) Develop an understanding of preschool-aged children's social-emotional competence.
3) Obtain first-hand experience of conducting a psychological study.
4) Obtain and practice basic data analysis skills.
2) Conduct literature reviews.
3) Prepare experimental stimuli.
4) Conduct experimental studies on children and adults.
5) Perform statistical analyses.
6) Generate research outcomes for conferences.
2) Get hands-on experience in conducting behavioral experiments.
3) Build up ownership of projects and be comfortable working in a team.
Division of Humanities
2) Organize, clean, and manage geospatial data for analysis.
3) Analyse maps to identify patterns in urban growth, land use, or infrastructure.
4) Experiment with GIS programming and emerging GeoAI techniques to improve workflows.
2) Gain experience in analysing spatial patterns from historical maps.
3) Explore computational and GeoAI approaches for historical geospatial research.
4) Communicate findings effectively through maps, visualisations, and written summaries.
Moderate
GIS and Python are required.
2) Develop coherent arguments concerning the origins and development of philosophy of science.
3) Write a cohesive philosophical essay on the topic.
2) Develop a sophisticated understanding of the history of philosophy of science.
3) Examine various philosophical views on scientific methodology and scientific change critically and independently.
2) Collect, clean, and organise biographical data.
3) Analyse data to identify patterns and support research questions.
4) Assist in improving database tools and documenting research processes.
2) Gain experience in working with structured historical data and basic analysis techniques.
3) Develop skills in interpreting patterns in social, political, or professional networks.
4) Learn to communicate research findings clearly in written, oral, and/or visual forms.
Moderate
Data analysis (R or Pandas) is required.
2) Develop coherent arguments concerning scientific progress and the development of the periodic table.
3) Write a cohesive philosophical essay on the topic.
2) Develop a sophisticated understanding of the development of the periodic table as well as the history of 19th century chemistry.
3) Examine various philosophical views on scientific change critically and independently.
2) Develop coherent arguments concerning the relation of scientific understanding to scientific progress in the development of the social sciences.
3) Write a cohesive philosophical essay on the topic.
2) Develop a sophisticated understanding of the history of the social sciences.
3) Examine various philosophical views on scientific change and scientific practice critically and independently.
Academy of Interdisciplinary Studies
Division of Integrative Systems and Design
2) Apply an iterative prototyping approach to design and manufacture actuated wearables that meet given motion requirements.
3) Validate the effectiveness of the mechanical design after integration of the AI for emotion recognition.
Smartphones play a central role in daily life, yet prolonged and unregulated use can lead to both physical fatigue (e.g., finger or hand strain) and mental fatigue (e.g., reduced attention and alertness). Current digital wellbeing tools, however, primarily rely on simplistic, time-based metrics that do not reflect how the device is used. This project seeks to develop a more intelligent, behavior-driven approach to estimating user fatigue by analyzing interaction patterns directly from smartphone usage.
We will investigate how touch dynamics (e.g., typing speed, tap rhythm, error rates), device motion (e.g., tremor, grip variation), and contextual factors (e.g., app switching frequency, time of day) can serve as implicit indicators of fatigue. Leveraging built-in smartphone sensors—and optionally lightweight external devices such as smartwatches—we will collect user interaction data alongside self-reported fatigue levels. This data will be used to train machine learning models capable of passively inferring fatigue states. The long-term objective is to support user wellbeing by identifying potential overuse or strain based on behavioral quality, contributing to next-generation digital wellbeing systems.
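The pipeline sketched above — deriving behavioral features from raw interaction data, then training a model against self-reported fatigue — can be illustrated with a toy example. Everything below (the inter-tap-interval features, the synthetic data, the logistic-regression choice) is an assumption for illustration, not the project's actual design.

```python
# Minimal sketch: simple touch-dynamics features (typing speed and rhythm
# variability from inter-tap intervals) fed to a classifier trained on
# self-reported fatigue labels. All names and data here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def tap_features(tap_times):
    """Mean and std of inter-tap intervals: speed and rhythm regularity."""
    intervals = np.diff(np.sort(tap_times))
    return [intervals.mean(), intervals.std()]

# Toy data: fast, regular tapping labeled "alert" (0);
# slow, erratic tapping labeled "fatigued" (1).
rng = np.random.default_rng(0)
X = ([tap_features(np.cumsum(rng.normal(0.25, 0.02, 50))) for _ in range(20)] +
     [tap_features(np.cumsum(rng.normal(0.45, 0.12, 50))) for _ in range(20)])
y = [0] * 20 + [1] * 20

model = LogisticRegression().fit(X, y)
sample = tap_features(np.cumsum(rng.normal(0.25, 0.02, 50)))
print(model.predict([sample]))
```

In the real project the feature set would be far richer (error rates, grip motion, app-switching context) and labels would come from in-the-wild self-reports rather than synthetic clusters, but the passive-inference structure is the same.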
1) Assist in the design and implementation of a smartphone-based data collection tool (e.g., Android/iOS app).
2) Participate in data collection and management, including pilot testing with users and ensuring data quality and privacy.
3) Extract features from behavioral and sensor data (e.g., typing speed, tap frequency, accelerometer patterns).
4) Support the development and evaluation of machine learning models for fatigue or user state estimation.
5) Contribute to the design and prototyping of fatigue-aware user interface feedback.
6) Engage in potential paper writing and submission.
1) Gain hands-on experience in human-computer interaction (HCI) research and mobile sensing technologies.
2) Learn to design, implement, and deploy mobile data collection tools for behavioral research.
3) Develop skills in processing and analyzing sensor data from smartphones.
4) Apply machine learning techniques to real-world behavioral datasets.
5) Understand user-centric design principles in the context of digital wellbeing and adaptive interfaces.
6) Improve communication and collaboration skills through participation in a multidisciplinary research environment.
Moderate
2) Use Rhino Grasshopper to build a parametric CAD model of the eel skin, and 3D-print the generated structures directly on fabric using state-of-the-art PolyJet or FDM additive manufacturing technology.
The collaborator Prof. Mason Dean will provide high-resolution, 3D, multi-scale characterizations of eel skin architecture using materials science and bio-imaging tools, and will characterize eel skin structure-function links via performance mechanics and wear tests.
2) Create materials with unique properties by 3D-printing directly on fabric using state-of-the-art PolyJet and FDM additive manufacturing technology.
3) Analyze the performance of a parametric family of bioinspired materials.
2) Tune the manufacturing equipment (e.g., laser cutter or 3D-printer) and potentially design new plug-ins for streamlined and reliable manufacturing of the actuators.
2) Acquire practical manufacturing skills such as the tuning of 3D-printing parameters to optimize the fabrication process.
2) Select suitable components such as hydraulic proportional valves and motors based on the robot arm requirements and manufacturer datasheets.
3) Implement waterproof design for mounting the robot arm control system onto an AUV.
4) Perform embedded programming for controlling the robot arm.
2) Fabricate the soft robotic finger with integrated color patterns and electronics.
3) Calibrate the sensor.
4) Characterize the actuator.
2) Use state-of-the-art multi-material additive manufacturing equipment to integrate materials of different color as well as electronic components in a single 3D-printed object.
3) Apply machine learning for calibrating proprioceptive and tactile sensors for soft robots.
Magnetically guided catheters are often steered with game controllers that were designed for camera or vehicle orientation—not for magnetic field manipulation near a patient. This creates a fundamental coordinate-frame mismatch among (a) the patient/magnet system, (b) the handheld controller, and (c) the catheter's moving tip. The result is non-intuitive control, increased cognitive load, and trial-and-error steering.
This project tackles that core usability problem through mechanical design and systems engineering. The goal is to create a frame-matched, haptic control concept that makes the desired bend direction physically obvious—so a first-time user can "move the handle where they want the tip to go". Work will emphasize embodiment of the patient/magnet axes in the mechanism, ergonomic affordances that guide valid motions, and a clean systems architecture that maps user input to magnetic field commands reliably and safely. Evaluation will be done on a benchtop setting with simple targets to compare intuitiveness and basic performance against conventional input methods.
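The coordinate-frame mismatch described above can be illustrated with a toy 2D transform: the same stick deflection maps to different tip-bend directions unless controller input is re-expressed in the patient/magnet frame. This is a minimal sketch; the 90-degree mounting angle and the 2D simplification are assumptions for illustration, not the project's actual kinematics.

```python
# Sketch of the controller-to-patient frame problem: without a frame-matched
# mapping, "right" on the handle need not mean "right" at the catheter tip.
# The 90-degree controller mounting angle below is an assumed example.
import math

def rotate2d(v, angle_rad):
    """Rotate a 2D vector; stands in for a controller-to-patient frame transform."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

stick = (1.0, 0.0)  # user pushes the stick "right" in the controller frame

# Naive mapping: the command simply copies the controller frame.
naive_cmd = stick

# Frame-matched mapping: account for how the controller sits relative to the
# patient/magnet axes (here, rotated 90 degrees), so handle motion and tip
# motion agree and no mental rotation is needed.
frame_matched_cmd = rotate2d(stick, math.radians(90))

print(naive_cmd, frame_matched_cmd)
```

A physical frame-matched handle embodies this transform mechanically, so the user never has to perform it in their head.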
1) Mechanical Concept Development: Generate and refine mechanisms that align user hand motion with the patient/magnet frame; consider ergonomics, constraints, and safety.
2) Prototyping and Fabrication: Build low- to mid-fidelity prototypes (e.g., 3D-printed or machined parts) to iterate on form, feel, and kinematics.
3) Systems Integration: Define sensor/actuator needs at a block-diagram level and integrate the mechanical interface with a basic software pipeline for interpreting inputs.
4) Test Planning and Execution: Design simple benchtop tasks and metrics (e.g., time-to-reach, directional accuracy, correction counts) to qualitatively assess intuitiveness.
5) Documentation and Communication: Maintain clear design rationale, trade-off records, and present results with figures, CAD excerpts, and short demos.
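The benchtop metrics named in step 4 can be computed directly from a logged trial of timestamped tip positions. A minimal sketch follows; the 5 mm success radius and the correction heuristic (sign changes in the distance-to-target trend) are illustrative assumptions, not project specifications.

```python
# Illustrative benchtop metrics from one logged trial:
# samples are (t_seconds, x_mm, y_mm); target is (x_mm, y_mm).
# The 5 mm success radius is an assumed value for this sketch.
import math

def trial_metrics(samples, target, success_radius=5.0):
    def dist(x, y):
        return math.hypot(x - target[0], y - target[1])

    # Time-to-reach: first timestamp within the success radius (None if never).
    time_to_reach = next(
        (t for t, x, y in samples if dist(x, y) <= success_radius), None)

    # Correction count: sign changes in the distance-to-target trend,
    # i.e. overshoots or direction reversals relative to the target.
    d = [dist(x, y) for _, x, y in samples]
    corrections = sum(1 for i in range(1, len(d) - 1)
                      if (d[i] - d[i - 1]) * (d[i + 1] - d[i]) < 0)
    return {"time_to_reach": time_to_reach, "corrections": corrections}

log = [(0.0, 30.0, 0.0), (0.5, 18.0, 0.0), (1.0, 8.0, 0.0),
       (1.5, 2.0, 0.0), (2.0, 4.0, 0.0), (2.5, 1.0, 0.0)]
print(trial_metrics(log, target=(0.0, 0.0)))
```

Comparing these numbers between the frame-matched interface and a conventional game controller gives a simple, fair basis for the intuitiveness claims.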
1) Mechanical Embodiment of Coordinate Frames: Translate abstract kinematic relationships into tangible mechanisms that "teach" the correct motion.
2) Human-Centered Mechanical Design: Apply ergonomics, affordances, and haptic cues to reduce cognitive load and support safe, intuitive operation.
3) Systems Thinking: Architect a clear signal path from user input → interface motion → interpreted command, including basic sensing and constraints.
4) Rapid Iteration and Prototyping: Practice fast, evidence-driven design cycles—from sketching and CAD to build, test, and refine.
5) Design for Safety and Reliability (Medical Context): Consider fail-safe behaviors, limits, and basic hygiene/cleanability in early-stage concepts.
6) Experimental Evaluation and Reporting: Plan simple, fair comparisons to legacy controls and communicate findings with concise visuals and narratives.
Challenging
2) Select suitable photovoltaics, communication, localization, and USV propulsion hardware based on a list of design specifications.
3) Create a working prototype of a PV-powered USV with communication and localization capabilities.
Division of Emerging Interdisciplinary Areas
This summer undergraduate research exchange will immerse students in the development and evaluation of DID:ART—the Decentralized Identifier Registry for Art. The project blends cutting-edge technology (blockchain, decentralized identity, verifiable credentials) with art business practices to enhance provenance, trust, and data management for artwork and media creations. Over 8 weeks, students will participate in researching, prototyping, and preparing test data, reports, and solutions that contribute to the design, deployment, and real-world application of DID:ART. Interdisciplinary tasks span technical development, user experience, stakeholder outreach, standards analysis, and public communication. This Art ID Registry project receives support from the EMIA and AMC divisions, and plans to launch the DID:ART prototype via a symposium or forum. The solution and architecture will be posted on GitHub under the W3C DID methods.
The proposed summer undergraduate research projects for DID:ART naturally group into five categories.
1) There is a significant focus on Literature and Standards Review, where students would conduct comprehensive surveys of decentralized identity technologies, art provenance standards such as C2PA and IIIF, and evaluate existing blockchain-based art registries. This foundational research will help map out integration opportunities and pitfalls relevant to the art ecosystem.
2) Students could engage in Technical Development and Prototyping, tackling tasks such as testing DID libraries, writing sample DID documents, developing API endpoints for art registration, simulating verifiable credentials issuance, and prototyping user interface components including wireframes and workflow designs. These activities provide hands-on experience with decentralized identity frameworks and software engineering.
3) The category of Security, Privacy, and Risk Analysis invites students to analyze potential vulnerabilities such as forgery or data breaches, review compliance with privacy laws like GDPR, and propose mitigation or governance solutions that balance transparency with confidentiality in the registry environment.
4) Projects under the theme of Governance and Stakeholder Engagement would have students map the art sector's diverse participants, research DAO governance models for decentralized registries, conduct interviews or surveys with art business professionals, and draft educational or communication materials to engage and onboard key stakeholders effectively.
5) The Testing, Evaluation, and Reporting category encompasses tasks related to user experience trials, verification workflows, documentation of bugs or usability issues in prototypes, comparative architectural analyses with other DID solutions, and preparation of final research summaries and recommendations for ongoing project development.
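For category 2 above, "writing sample DID documents" means producing JSON records in the shape defined by W3C DID Core. The sketch below builds one in Python; the method name "art", the identifier, the key value, and the provenance service entry are all hypothetical placeholders, not the project's actual DID:ART method design.

```python
# A minimal, illustrative DID document in the general shape of W3C DID Core.
# "did:art:..." and every identifier/key below are hypothetical placeholders.
import json

did = "did:art:example-artwork-0001"
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": did,
    "verificationMethod": [{
        "id": f"{did}#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": did,
        "publicKeyMultibase": "z6Mk-placeholder",  # not a real key
    }],
    "authentication": [f"{did}#key-1"],
    # Assumed extension point a registry might use for provenance links.
    "service": [{
        "id": f"{did}#provenance",
        "type": "ProvenanceRecord",
        "serviceEndpoint": "https://registry.example/provenance/0001",
    }],
}
print(json.dumps(did_document, indent=2))
```

Verifiable credentials about an artwork (attribution, sale history) would then reference this DID as their subject, which is where the provenance and trust benefits come in.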
1) Understand the foundational principles of decentralized identity (DID), blockchain technology, and verifiable credentials in the context of art provenance and management.
2) Develop practical skills in researching, designing, and prototyping registry infrastructure components, including DID document creation, API development, and user experience considerations.
3) Gain insights into the interplay between privacy, security, and governance in decentralized digital systems through risk analysis and participation in multi-stakeholder DAO governance modeling.
4) Apply interdisciplinary research methods to engage with real-world art business stakeholders, analyze existing standards, and contribute to the evolution of a sustainable, interoperable art identity ecosystem.
Moderate
Students are expected to have an understanding of data structures, at least one programming language, and the use of JSON and XML in APIs.
Division of Environment and Sustainability
2) Analyze air pollutant concentration data (including CO, NO, NO2 and O3) from air quality monitoring stations of the Environmental Protection Department of Hong Kong.
3) Analyze meteorological data (solar, wind speed and direction, etc.) from the Hong Kong Observatory.
4) Compare air pollutant data between urban and suburban sites during ozone episodes.
5) Investigate air mass transport by using Hybrid Single Particle Lagrangian Integrated Trajectory Model (HYSPLIT).
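In practice, tasks 2-4 amount to filtering station time series for episode hours and comparing pollutant levels across site types. A minimal pandas sketch is below; the column names, the toy values, and the 100 ppb episode threshold are illustrative assumptions — the actual EPD data fields and episode criteria should be taken from the monitoring network documentation.

```python
# Minimal sketch: flag ozone-episode hours and compare urban vs. suburban
# pollutant levels. Column names, values, and the 100 ppb threshold are
# illustrative only, not EPD definitions.
import pandas as pd

data = pd.DataFrame({
    "datetime": pd.date_range("2024-09-01 12:00", periods=6, freq="h").tolist() * 2,
    "site": ["urban"] * 6 + ["suburban"] * 6,
    "O3_ppb": [60, 85, 110, 130, 95, 70, 80, 105, 140, 150, 120, 90],
    "NO2_ppb": [45, 40, 30, 25, 35, 42, 20, 18, 12, 10, 15, 19],
})

episodes = data[data["O3_ppb"] >= 100]          # keep episode hours only
summary = episodes.groupby("site")[["O3_ppb", "NO2_ppb"]].mean()
print(summary)
```

The same episode mask can then select the dates for which HYSPLIT back-trajectories are run in task 5.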
2) Learn the methodology of data analysis and some data processing software.
3) Develop analytical skills and critical thinking ability.