





AI Trusty
Intro
I led the design of AI security and trust features for the platform. Before this initiative, there was no AI Trusty-related design in place, and users often mistrusted the AI's opaque responses. For trainers, addressing issues in documents was tedious: they had to navigate to the Upload Library and manually locate the relevant files for revision. To address these challenges, I designed an interactive response system that makes AI-human interactions more transparent, thereby strengthening user trust.
Additionally, because Personal AI operates under role-based access control (RBAC), I carefully considered the experience of each role throughout the process, ensuring that trust and usability are maintained for all user types.
Personal AI
2024-2025
Lead Product Designer + QA
Sharon Z. (CTO, AI Engineer)
Imran K. (Front-end Engineer)
Ishaan P. (PM)
Miroslav V. (Backend Engineer)
Figma, Jira, Miro, Notion, After Effects
-
Empowering users to track and verify AI data sources through clear attribution, direct access to source files, and exportable logs.
-
Designing intuitive indicators and status communications that visibly convey AI processing, ensuring users always know what the system is doing.
Transparency:
-
How can we guide users to the correct data source at the smallest "chunk" level of information?
-
How can we support users in modifying data in real time?
-
How can we tailor the display of data sources based on different user roles?
Processing:
-
Given limited backend API support and unpredictable processing times, how can we provide appropriate status messages and guidance during uncertain or ongoing search states?