I study how machine learning systems behave in high-stakes, deployed environments. My technical work focuses on privacy-preserving ML and applied deep learning, with an emphasis on how design and evaluation choices shape the integrity of intelligent systems. In practice, this means looking beyond performance metrics, challenging assumptions and normative frameworks, and analyzing the downstream consequences of technical decisions.
I approach technology as a medium inseparable from the social, political, and economic contexts in which it operates, because systems can only be evaluated fairly when their impacts are understood.

About Me:

  • Project: Breaking the AI Trilemma: Unified Privacy, Robustness, & Explainability

    Researched how privacy-preserving machine learning systems fail under adversarial data poisoning, using explainability as a diagnostic tool. Focused on hybrid techniques combining Federated Learning and Differential Privacy.

    Conducted under the mentorship of Dr. Angela Newell (The University of Texas at Austin).

    [Click to view page]

  • Predicting Social Media Addiction with Deep Learning Models
    This work applies deep learning to model behavioral sequences and engagement patterns, aiming to predict clinically relevant levels of social media addiction from user interaction data.

    Identifying AI-generated Content on Social Media
    This applied ML project investigates the gap between benchmark and real-world performance for synthetic media detection, with a specific emphasis on robustness to imperfect data and systematic failure analysis in uncontrolled settings.

    [Click to view projects]

  • Classifying AI Cybersecurity Systems as Critical Infrastructure Software
    Examines the urgent need to classify AI cybersecurity systems as critical infrastructure, arguing that their current lack of binding safety standards creates a systemic vulnerability in national defense. This policy brief builds on insights and evaluation results from my cybersecurity research.

    Explaining the Black Box
    Analyzes the operational risks of opaque AI in security enforcement, making the case that mandatory explainability standards are essential to prevent catastrophic errors and build trustworthy, effective cyber defenses.

    [Click to view briefs]

  • Karate and Self-Defense Instructor
    United Studios of Self Defense
    (Dec 2024 - Present)
    Led group classes and provided private self-defense instruction for children across a range of ages and experience levels. Adapted instruction for neurodivergent students by breaking complex movements into structured, incremental steps.

    Programming Instructor
    theCoderSchool
    (Aug 2025 - Present)
    Taught Python, C#, and Blender to middle and high school students in one-on-one and small-group settings. Guided students through project-based learning, emphasizing debugging, decomposition, and clear sequential reasoning.

    [Click to view page]