Bias Detector - AI Tool for Social Good
- Aarav Doddapaneni
- Jun 30
- 2 min read
Recently, I built an AI-powered bias detector—an exciting project that I believe could have a meaningful impact. In this blog, I’ll walk you through how I approached it, from choosing the tools to building the model and creating the interface. The goal of this project was to identify biased or problematic language in everyday writing, including more subtle forms like microaggressions.
The first decision I had to make was which programming language to use. I narrowed it down to Python and Java. Java makes it easier to design graphical user interfaces (GUIs), but it’s harder to integrate artificial intelligence models. Python, on the other hand, has stronger libraries and tools for AI, though building a GUI in it takes a bit more work. Since the AI portion was the core of the project, I chose Python and set up my development environment in Visual Studio Code.
Once I was ready to code, I needed a good model to work with. There were many options—OpenAI, Kaggle, and more—but I ended up using a model from Hugging Face. I chose it because its training data was more relevant for analyzing text in the way I wanted. After importing the model, I wrote logic to recognize and flag biased language, using if/else conditionals to return results based on certain keywords and patterns.
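To give a feel for the keyword-and-pattern half of that logic, here is a minimal sketch. The keyword list and function name are illustrative, not the project's actual code, and the Hugging Face model call is omitted:

```python
# A hedged sketch of keyword-based flagging; the phrases below are
# hypothetical examples, not the project's real keyword list.
BIASED_PHRASES = ("bossy", "hysterical", "surprisingly articulate")

def flag_bias(text: str) -> list[str]:
    """Return every listed phrase that appears in the text (case-insensitive)."""
    lowered = text.lower()
    findings = []
    for phrase in BIASED_PHRASES:
        if phrase in lowered:  # simple conditional check, as described in the post
            findings.append(phrase)
    return findings
```

In the real tool, a check like this would run alongside the model's prediction, so that obvious keyword hits are caught even when the model is unsure.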

A key challenge was teaching the model to detect microaggressions—subtle comments that may not seem harmful at first but carry underlying bias. For example, the phrase “He is good at gymnastics for a boy” suggests a stereotype that boys aren’t expected to excel in gymnastics. I trained the model with example sentences and keywords so it could start identifying these hidden patterns in text. The more samples I gave it, the more accurate it became.
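One way to capture that kind of qualified praise is a pattern match. The sketch below encodes just the single "good at X for a Y" template from the example above; it is an illustration of the idea, not the trained model itself:

```python
import re

# Matches qualified praise of the form "good at <activity> for a/an <group>",
# e.g. "good at gymnastics for a boy" — the stereotype pattern from the post.
QUALIFIED_PRAISE = re.compile(r"\bgood at \w+ for an? \w+\b", re.IGNORECASE)

def is_qualified_praise(sentence: str) -> bool:
    """Return True if the sentence contains the qualified-praise pattern."""
    return bool(QUALIFIED_PRAISE.search(sentence))
```

A handful of templates like this can seed the training examples; the model then generalizes beyond the exact wording.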

Finally, I created a user interface using Tkinter, Python’s standard GUI toolkit. I kept the design simple but clear, with labels, titles, and a text input area using a scrollable text box. I also added a copy-paste button to make the tool easier to use. Overall, I tried to make the experience user-friendly and visually appealing. This project matters because bias—especially on social media—is everywhere, and tools like this can help people become more aware of it in their own words and in the content they read.
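The layout described above can be sketched in a few lines of Tkinter. Widget names and exact options here are assumptions for illustration, not the project's actual code:

```python
import tkinter as tk
from tkinter import scrolledtext

def build_app() -> tk.Tk:
    """Assemble a minimal window: a label, a scrollable text box, and a
    clipboard button, mirroring the layout described in the post."""
    root = tk.Tk()
    root.title("Bias Detector")

    tk.Label(root, text="Paste text to check for biased language:").pack(pady=4)

    # ScrolledText gives the scrollable input area mentioned in the post.
    text_box = scrolledtext.ScrolledText(root, width=60, height=12, wrap=tk.WORD)
    text_box.pack(padx=8, pady=4)

    def paste_from_clipboard():
        # Insert the clipboard contents at the cursor position.
        try:
            text_box.insert(tk.INSERT, root.clipboard_get())
        except tk.TclError:
            pass  # clipboard empty or unavailable

    tk.Button(root, text="Paste", command=paste_from_clipboard).pack(pady=4)
    return root

# To launch the window: build_app().mainloop()
```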
Check it out!
GitHub link: https://github.com/AaravD123/BiasDetector