Starguard Beta Program
Starguard intercepts harmful content before it goes public, using our patent-pending AI and human review system. Users can have their posts reviewed in real time and receive feedback on how to avoid harmful content, which empowers them to post across multiple platforms with confidence. The Starguard Beta Program is how we bring this technology into the world.

The goal of the program is to engage college students as flag bearers of free speech within their networks and to create a positive social impact. We will do so by dividing participating students into two groups: Users and Guardians. In the User role, students create social media content on our app; in the Guardian role, students assign a risk rating to each post after the AI has scored it. Over time, this project will help us onboard users onto the application so they consume social media ethically, and recruit Guardians for our ongoing projects.
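To make the two-stage review flow concrete, here is a minimal sketch of how a post might move from a User, through an AI risk score, to a Guardian rating. All names in it (Post, ai_risk_score, guardian_review, the score ranges) are hypothetical illustrations, not Starguard's actual API or model.

```python
# Hypothetical sketch of the User -> AI -> Guardian review flow described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    author_id: str
    text: str
    ai_score: Optional[float] = None        # 0.0 (safe) .. 1.0 (high risk), assumed scale
    guardian_rating: Optional[int] = None   # 1 (safe) .. 5 (high risk), assumed scale
    feedback: str = ""

def ai_risk_score(post: Post) -> Post:
    """Stage 1: placeholder AI pass that scores the post for risk."""
    flagged_terms = {"example-harmful-term"}  # stand-in for a real model
    hits = sum(term in post.text.lower() for term in flagged_terms)
    post.ai_score = min(1.0, hits * 0.5)
    return post

def guardian_review(post: Post, rating: int, feedback: str) -> Post:
    """Stage 2: a Guardian adds a risk rating and feedback after seeing the AI score."""
    post.guardian_rating = rating
    post.feedback = feedback
    return post

# Example: a User's draft passes through both stages before publication.
draft = Post(author_id="user-123", text="Hello world!")
draft = ai_risk_score(draft)
draft = guardian_review(draft, rating=1, feedback="Looks fine to publish.")
print(draft.ai_score, draft.guardian_rating)
```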