
Ph.D. students advance research in responsible AI and assistive technology through Fellowship for Anti-Racism Research (FARR)

K. Kritika (Ph.D., Computational Media, ‘27)

Two UC Santa Cruz engineering graduate students are advancing anti-racist research with the support of the Baskin Engineering Fellowship for Anti-Racism Research (FARR). Each year, FARR supports two graduate students with $6,000 in funding to pursue research focused on making engineering and technology more inclusive. 

FARR reflects Baskin Engineering’s commitment to fostering a learning and research environment that serves a broad range of communities. Over the past few years, FARR has championed student projects that include sequencing Māori DNA, fostering inclusive open source software communities, and creating equitable medical technologies.

K. Kritika (Ph.D., Computational Media, ‘27) and Yaxuan Wang (Ph.D., Computer Science and Engineering, ‘28) are this year’s FARR fellows, advancing research in assistive technology and responsible AI.

A community-driven approach to assistive technology

Kritika is a third-year computational media Ph.D. student focused on the human and social aspects of technology. FARR enabled her to explore how neurodivergent people experience and navigate “masking,” a form of social adaptation often used to fit into neurotypical settings.

“Masking is a coping strategy that many neurodivergent individuals adopt to better integrate into environments that don’t support or understand their needs,” Kritika said. “This could look like forcing eye contact, or modifying speech patterns—all to avoid being judged or excluded.”

Kritika’s research used qualitative surveys and social media content analysis to understand how people talk about masking in different contexts—at school, work, or in family settings—and how these experiences relate to identity and cultural background.

By working directly with the neurodivergent community, her research revealed a more nuanced understanding of masking than is often presented in clinical literature. While some psychologists advocate for reducing masking, Kritika found that many participants described it as essential to navigating daily life.

“Participants expressed that while masking can lead to mental exhaustion, it’s also necessary to maintain jobs, relationships, and social safety,” Kritika said.

For example, one participant described discreetly carrying a stim toy—a fidget toy or sensory tool that aids self-regulation and sensory processing—hidden in their palm while masking in public. Such repetitive movements, known as stimming, help satisfy sensory needs.

This research has implications for how assistive technologies, wearables, and digital platforms can be designed to better support the needs of neurodivergent users.

“We need to shift how neurodivergent behavior is viewed—not as a problem to be fixed, but instead start designing technologies that work with people’s lived realities,” she said. 

Kritika’s work was recently accepted at CHI 2025, a leading international conference in the field of human-computer interaction. She is a member of the Misfit Lab, led by Professor of Computational Media Kate Ringland.

Retraining AI to unlearn bias

Yaxuan Wang (Ph.D., Computer Science and Engineering, ‘28)

Second-year computer science and engineering Ph.D. student Yaxuan Wang is working to make large language models (LLMs) like ChatGPT more trustworthy by retraining these systems to “unlearn” bias. 

“LLMs like ChatGPT are everywhere now, from job applications to health care,” Wang said. “But because these systems are trained on real-world data, they often learn real-world biases—like associating certain jobs with specific genders.”

With support from FARR, Wang developed a novel method for identifying and mitigating these embedded biases. Her approach involves fine-tuning pre-trained models using curated datasets designed to counteract biased patterns—without retraining the entire system from scratch. 
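The article does not detail Wang’s method, so the snippet below is only a minimal sketch of the general idea it describes—fine-tuning an existing pretrained language model on a small, curated counter-bias dataset rather than retraining from scratch. It assumes the Hugging Face transformers library, uses GPT-2 as a stand-in for a larger LLM, and the example sentences are invented placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model; Wang's work targets much larger LLMs.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder "curated" texts pairing occupations with varied genders,
# meant to counteract stereotyped associations learned during pretraining.
counter_bias_texts = [
    "The nurse finished his shift and reviewed the charts.",
    "The engineer presented her design to the board.",
    "The CEO thanked their team for the successful launch.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for text in counter_bias_texts:
        batch = tokenizer(text, return_tensors="pt")
        # Standard causal language-modeling objective: the labels are the
        # input ids themselves, so only the existing model weights are updated.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

In practice such fine-tuning would use a far larger curated dataset and would be evaluated on both bias benchmarks and standard tasks to confirm, as Wang reports, that overall performance is preserved.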

For example, Wang found that LLMs tend to show political preferences, likely due to the overrepresentation of certain types of news articles in training data. Her method successfully reduced this bias while maintaining the model’s performance.

“FARR was a perfect match for my interest in responsible AI,” Wang said. “It gave me the support and platform to develop technologies that ensure fairness for everyone.”

Looking ahead, Wang aims to apply her method to next-generation models and help inform new policy guidelines and industry standards for responsible AI—shaping how major technology companies like OpenAI, Google, and Meta develop and deploy these LLMs. 

“This work has direct implications for organizations using AI,” Wang said. “Fair and transparent AI builds public trust—and that’s essential as these technologies continue to shape our lives.”

Wang is advised by Assistant Professor of Computer Science and Engineering Yang Liu.
