I am a PhD candidate in Computer Engineering at the University of Toronto. My research focuses on improving the security and privacy of the web, systems, and applications through machine learning and large language models. I am currently seeking full-time opportunities starting in Summer/Fall 2025.
PhD, University of Toronto
MASc, University of Toronto
BSc, University of Toronto
Web security is a dynamic and evolving research area: malicious actors continuously develop new methods to extract or exchange undesirable information from clients, while detectors strive to identify such activities. This constant struggle is an arms race. Reliable detection benefits from creative client-side tools that gather contextual information about web requests to identify undesirable information exchanges. My research projects, vWitness and Duumviri, exemplify this approach by using contextual data to detect and block undesirable activities. For instance, vWitness leverages user-provided inputs, extracted from the series of screenshots leading up to a request, to uncover user-impersonating requests, while Duumviri examines the consequential effects of blocking a request to identify trackers. These are just the beginning; there remains significant potential to explore richer contextual signals, including browser state dynamics, fine-grained user interaction data, and even AI-powered behavioral models, to further strengthen web security.
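To illustrate the kind of contextual signal this line of work builds on, the sketch below loads a page twice, once normally and once with a candidate third-party request blocked, and compares the visible page state: if blocking changes nothing the user can see, the request is a tracker candidate. This is a toy illustration of the differential idea only, not Duumviri's actual pipeline; the URL, the blocked pattern, and the comparison heuristic are placeholders.

```python
# Toy sketch of differential tracker detection: compare page state with and
# without a candidate request blocked. Requires `pip install playwright` and
# `playwright install chromium`. URL and request pattern are placeholders.
from typing import Optional

from playwright.sync_api import sync_playwright


def page_snapshot(url: str, block_pattern: Optional[str] = None) -> dict:
    """Load `url`, optionally blocking requests matching `block_pattern`,
    and return a coarse summary of observable page state."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        if block_pattern:
            # Abort any request whose URL matches the pattern; others proceed.
            page.route(block_pattern, lambda route: route.abort())
        page.goto(url, wait_until="networkidle")
        snapshot = {
            "visible_text": page.inner_text("body"),
            "cookie_count": len(page.context.cookies()),
        }
        browser.close()
        return snapshot


if __name__ == "__main__":
    url = "https://example.com"              # placeholder page
    candidate = "**/tracker.example.net/**"  # placeholder request pattern
    baseline = page_snapshot(url)
    blocked = page_snapshot(url, block_pattern=candidate)
    # If the page looks the same without the request, it carried no
    # user-visible functionality, i.e. it is a tracker candidate.
    if baseline["visible_text"] == blocked["visible_text"]:
        print("No user-visible effect when blocked: tracker candidate")
    else:
        print("Blocking broke page content: likely a functional request")
```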
Bugs in software systems are inevitable, and vulnerabilities (exploitable bugs) can have devastating consequences when exploited, including data breaches, infrastructure failures, and financial losses. Detecting, verifying, and fixing vulnerabilities, and then validating those fixes, is a resource-intensive process that demands significant human expertise and is both time-consuming and error-prone. Large Language Models (LLMs) offer a promising way to augment these tasks with human-like decision-making. For example, LLMs can detect early signs of emerging vulnerabilities (e.g., from social media activity such as tweets), aggregate and synthesize information from diverse sources, assess a system's exploitability based on usage patterns, and even propose candidate fixes. By integrating LLMs into the software security lifecycle, organizations can make vulnerability management faster and more accurate, shrinking the window of opportunity for exploitation.
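As a small illustration of how an LLM might slot into such a workflow, the sketch below asks a model to triage a bug report: rate its likely exploitability and suggest a direction for a fix. It uses the OpenAI Python client as one possible backend; the model name, prompt, and report text are placeholders, and any output would still need review by a human analyst.

```python
# Minimal sketch: LLM-assisted triage of a bug report. Requires `pip install openai`
# and an OPENAI_API_KEY in the environment. Model name and prompt are placeholders.
import json

from openai import OpenAI

client = OpenAI()

BUG_REPORT = """\
Crash in image parser when the width field exceeds the allocated buffer;
attacker-controlled PNG files reach this code path via the upload endpoint.
"""  # placeholder report text

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": (
                "You are a security triage assistant. Given a bug report, reply "
                "with JSON containing: exploitability (low/medium/high), "
                "rationale, and suggested_fix_direction."
            ),
        },
        {"role": "user", "content": BUG_REPORT},
    ],
)

triage = json.loads(response.choices[0].message.content)
print(triage["exploitability"], "-", triage["suggested_fix_direction"])
```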