Insights on Adoption, Safety & Transformation with Kate Marshall, Founder of TheGr.ai

Podcast

Overview

Dive into this insightful conversation between Ashutosh Garg and Kate Marshall as they explore the fast-changing landscape of AI in the workplace, practical tips for professionals, and the future of work. Kate Marshall, founder of TheGr.ai and author of AI at Work, draws on more than two decades of expertise in AI adoption, cybersecurity, and training.

About Kate Marshall

Kate Marshall is an AI adoption strategist, cybersecurity expert, and founder of TheGr.ai. With more than two decades of experience in technology, risk, and digital transformation, she focuses on helping non-technical professionals and organizations confidently integrate AI into everyday work. Through training, advisory, and practical education, she advocates for responsible AI use, data safety, and building human-centered AI literacy in the modern workplace.

01:00 — What are the biggest challenges in teaching technology adoption and training?

  • Translating complex technical topics for non-technical audiences
  • Overcoming the “stickiness” of adoption in both cybersecurity and AI
  • Encouraging people to try new tools and experiment without fear

02:11 — What patterns do professionals show when struggling with new technology?

  • The biggest obstacle is the lack of time to learn and adapt
  • Constant updates and changes overwhelm users
  • Organizations that create dedicated time and space for experimentation see stronger transformation

03:10 — Why did you start TheGr.ai, and what problem does it solve?

  • Organizations were providing access to AI tools without teaching practical usage
  • Emphasis on “learning to drive” instead of understanding “how the car works”
  • Addressing the growing need for hands-on, practical AI training

04:31 — Why do so many professionals remain stuck between “should use” and “actually using” AI?

  • Uncertainty about what is safe, allowed, or off-limits
  • Poor communication regarding organizational policies and approved AI tools
  • Frustration caused by restricted tool choices, such as being forced to use only Copilot

08:32 — What does true AI literacy look like in practice?

  • AI fluency is now about human + AI augmentation, moving beyond simple prompting
  • The critical skill is orchestrating and optimizing tools with human input
  • AI literacy continues to evolve as new tools and features emerge

09:45 — How should professionals think about accountability when using AI tools?

  • “You own the outcome” — responsibility should never be delegated to AI
  • AI should be treated as a thinking partner, not as an authority
  • Reviewing, refining, and validating outputs is essential before sharing them

11:19 — What changes when someone commits to one LLM (Large Language Model) for a set period?

  • LLMs can learn tone, style, and user preferences over time
  • Separate AI accounts are recommended for work and personal life
  • Personal accounts help optimize context and learning
  • Users should always ask for sources and fact-check outputs to avoid hallucinations

15:49 — What is the biggest mistake professionals make regarding data safety?

  • Entering sensitive information, such as PII, into unprotected or non-enterprise AI models
  • Overlooking the risks of supporting tools like voice dictation and cloud services
  • Organizations need clear AI policies and boundaries

17:52 — How does the Trust-Check-Reject model help in practice?

  • Encourages quick data sensitivity assessments as a daily habit
  • Not every task requires deep scrutiny, but users should know what needs checking or rejecting
  • The framework formalizes instincts people already use subconsciously

RESOURCES:

Learn more about Kate Marshall: LinkedIn 

Enjoyed this podcast?

Share your thoughts in the comments and help spread these insights within your network.

Would you love to give us 5 stars? ⭐⭐⭐⭐⭐ If yes, we’d greatly appreciate your review. Help us reach more people as we talk to leaders, high achievers, and thought leaders from diverse backgrounds and nationalities. Excellence can come from anywhere. Stay in the know, and hear from emerging high achievers and experts.

Stay updated with what’s shaping the world today through the latest The Brand Called You Podcast episode. Follow us on iTunes, Spotify, and Anchor.fm.  

You can find us at:

Website: www.tbcy.in 

Instagram: http://bit.ly/3HO7N06 

Facebook: http://bit.ly/3YzJOaD 

Twitter: http://bit.ly/3wMBOXK 

LinkedIn: https://www.linkedin.com/company/tbcy/ 

YouTube: http://bit.ly/3jmBqfq 

Chingari: https://chingari.io/tbcypodcast 

Moj: http://bit.ly/3wOrmPv 

Josh: http://bit.ly/3WWP0nB 

Thanks for listening! 
