Episodes

  • 138. The Human Element: Why AI Shouldn't Replace Us with Lena Robinson
    2026/03/27

    Lena Robinson is a writer, creative host, and returning co-host whose conversation helps reopen Humans WithAI by grounding big AI questions in lived experience: burnout, mental health, and the messy reality of how people actually use these tools.

    In this episode of Humans WithAI, David and Lena return after a break to reflect on AI fatigue, the shift from hype to more practical use, the risks and promise of AI in mental health, and the danger of businesses talking more about tools than about the humans they serve.

    Lena leaves you with a simple challenge: keep hold of your own judgement. AI can be useful, supportive, and even revealing, but it should never replace human awareness, human care, or the responsibility to think for yourself.

    Links

    New study raises concerns about AI chatbots fueling delusional thinking: https://www.theguardian.com/technology/2026/mar/14/ai-chatbots-psychosis

    I’ve turned AI into my therapist. The results were pretty disquieting: https://www.theguardian.com/lifeandstyle/2026/feb/24/ive-turned-ai-into-my-therapist-the-results-were-pretty-disquieting

    1 hr 21 min
  • 137. New Season. New Format. Same AI Mayhem?
    2025/10/18

    We’re back after a long summer break with a new format, live recording, and a whole lot to say about the state of AI, tech, and creativity.

    In this episode, David, Jo, and Lena dig into:

    • Google’s decision to limit search results to 10: What does this mean for small businesses, SEO, and information access?
    • Is AI making us lazy, dumber, or just revealing how unoriginal we’ve always been?
    • The looming AI bubble: Are we headed for a dot-com-style crash?
    • Creativity vs. automation: Is originality dead, or just evolving?

    Also in the mix:

    • David’s ChatGPT agent that refuses to do its job
    • Jo’s trust issues with AI trip planners
    • Lena’s sharp take on Gen Z and critical thinking
    • And a detour into quantum computing, fusion power, and alien tech

    This one’s equal parts conversation, curiosity, and mild chaos — just how we like it.

    Subscribe, share, and stay tuned.

    New season. New format. Same mayhem.

    54 min
  • 136. Marco Ramilli: Understanding AI: The Importance of Detecting Fake Content
    2025/07/04

    Marco Ramilli joins us to discuss the urgent need for technology that can identify whether images and videos have been generated by artificial intelligence. He shares that the idea for his software arose from a viral image of the Pope in a designer jacket, which sparked widespread debate and confusion over its authenticity. As the digital landscape becomes increasingly cluttered with manipulated content, Marco emphasizes the critical importance of distinguishing reality from fabrication. He explains how his software leverages advanced AI models to analyze visual content and determine its origins. This conversation sheds light on the broader implications of AI-generated media and the challenges we face in maintaining trust in what we see online.

    Takeaways:

    • Marco Ramilli discusses the importance of distinguishing real images from AI-generated content, especially in today's digital world.
    • He shares how the viral fake image of the Pope in a puffer jacket inspired him to develop software for identifying AI-generated media.
    • The technology developed by Marco can analyze photos, videos, and sounds to determine their authenticity, which is crucial for preventing misinformation.
    • Marco emphasizes that the responsibility lies with technology developers to incorporate safeguards against misuse of AI-generated content.
    • He notes that the rise of fake content can dilute public trust and complicate issues surrounding information verification in society.
    • Marco believes that collaboration among companies is essential to address the challenges posed by the proliferation of AI-generated media.

    33 min
  • 135. Tim Carter & Simon Mirren: AI’s Missing Soul: Who’s Really Telling the Story?
    2025/06/20
    In this episode of Creatives WithAI, Lena Robinson and David Brown are joined by Tim Carter (CEO) and Simon Mirren (Creative Officer) of Karmanline, a newly launched company focused on integrating AI into the content production industry. Together, they dive into the provocations and potential of AI in storytelling and content creation, from the philosophical to the practical.

    Simon brings his decades of experience as a showrunner and storyteller (Criminal Minds, Versailles etc.) to interrogate whether machines can ever grasp the soul of a narrative. Tim, with a background in IP, tech, and ethics, unpacks how generative tools can (and should) be leveraged across the production pipeline without sidelining the deep craft and collaboration that makes filmmaking human.

    From fake AI startups and the dangers of anthropomorphising machines, to the creative chaos worth protecting in an increasingly optimised world, this episode is a must-listen for anyone working in, adjacent to, or even worried about AI’s influence on the future of media.

    Takeaways:

    • AI Creativity: A machine might generate content, but it can’t understand tension, soul, or satire. That still belongs to humans, at least for now.
    • Middle Ground Disruption: AI is widening the talent pool, but in doing so, it’s making life harder for average-skilled professionals.
    • Human-Centric Storytelling Matters: Technology can support storytelling, but it shouldn’t overwrite the stories of marginalised voices.
    • Collective Craft is Sacred: Every role on a film set, from grips to carpenters, holds meaning. Disregarding that in pursuit of “efficiency” is both arrogant and shortsighted.
    • Let’s Talk Back: The episode challenges us to stay involved, speak up, and resist the sanitisation of creativity through algorithmic convenience.

    (PS – We want to hear from you! Got a question for Lena and Dave to tackle in a future episode? Drop it in the comments on our socials and we might feature it on the show.)

    Find Tim and Simon online:

    • Tim Carter (CEO, Karmanline) – LinkedIn
    • Simon Mirren (Creative Officer, Karmanline; showrunner of Criminal Minds, Versailles etc.) – LinkedIn
    • Karmanline

    Links referenced in this episode:

    • 1st news article Tim mentioned: I tested Google's VEO 3 Myself: Here's what they don't show you in the keynote
    • 2nd news article Tim mentioned: Video of Emily M. Bender & Sébastien Bubeck at the Computer History Museum
    • Article Lena mentioned: 'Nobody wants a robot to read them a story!' The creatives and academics rejecting AI – at work and at home
    • Mentioned by Lena: Simon's LinkedIn post re “EI and IQ intelligence tests”

    People/companies mentioned in this episode and worth a look:

    • Justine Bateman (actress, director, writer, outspoken on AI in Hollywood) – IMDb
    • Emily M. Bender (linguist, AI critic) – Website
    • Dave Chappelle (comedian / example of creative nuance) – Website
    • Donald Glover (actor, writer, creator of Atlanta) – IMDb
    • Google Veo 3 (recent software release mentioned by Tim) – Website
    • Google Assistant (software mentioned by Dave) – Website
    • Criminal Minds (TV show Simon worked on) – IMDb
    • Versailles (TV show Simon worked on) – IMDb
    • CSI: Crime Scene Investigation (TV show Simon worked on) – IMDb
    • SpinVox (historical AI-voice startup referenced by Tim) – Wikipedia
    • Builder.AI (AI startup outed for using human labour behind the scenes) – Website
    • OpenAI (developers of ChatGPT) – Website
    • Anthropic (AI research company mentioned in the context of safety / AI blackmail) – Website
    • Computer History Museum (location of Emily Bender’s public AI debate) – Website
    • Kodak (film and digital photography company) – Website
    • Red Digital Cinema (RED cameras used in Simon’s productions) – Website
    1 hr 7 min
  • 134. Dr. Sonia Tiwari: Why AI Characters Need Empathy and Boundaries
    2025/06/10

    Dr. Sonia Tiwari joins Iyabo Oba on Relationships WithAI to explore how her work in design, education, and character creation intersects with AI, particularly in emotionally safe and ethical ways. Sonia shares how AI characters can foster learning, how her personal journey shaped her approach, and why foundational skills matter in AI collaboration. The conversation delves into topics like dual empathy, the dangers of parasocial AI relationships, and the mental health chatbot she created, Limona. Sonia calls for thoughtful design, cultural awareness, and clear guardrails to ensure AI supports rather than harms, especially in children’s lives.

    Top Three Takeaways:

    1. Design and Empathy Matter in AI - AI characters that feel relatable and emotionally safe can support learning and mental health, but their design must include ethical safeguards and clear limits.
    2. Foundational Skills Are Crucial - AI tools amplify existing expertise—they don’t replace it. Educators and designers with real-world experience use AI more responsibly and creatively.
    3. Guardrails Must Be Built In - Effective AI literacy and child safety require action on three levels: law, design, and culture. Without all three, AI can become emotionally manipulative or unsafe.

    Links and References

    • Limona chatbot – Sonia’s CBT-based AI support tool
    • Daniel Tiger’s Neighborhood and Mr. Rogers’ Neighborhood – character-led emotional learning
    • Buddy.ai – AI tutor for kids
    • Everyone AI – nonprofit working on AI and child safety
    • CBT overview – understanding cognitive behavioural therapy
    • Red teaming – stress-testing AI for safety flaws

    1 hr
  • 133. Rola Aina: Why Emotionally Intelligent Leaders Will Win With AI
    2025/05/27

    In this episode of Relationships WithAI, Iyabo Oba sits down with tech transformation consultant and TurnTroop founder Rola Aina for a wide-ranging conversation on leadership, purpose, and building with AI.

    Rola shares how her faith and upbringing shape her mission to make AI adoption both ethical and inclusive. She explains how TurnTroop is using African talent to help businesses implement AI responsibly, creating social impact while solving real enterprise problems. They discuss why generosity is not a soft skill but a strategic one, and how emotionally intelligent leadership can slow things down to build faster, fairer systems.

    Rola also reflects on her use of AI tools like ChatGPT and Claude, why nuance and judgment still belong to humans, and how real connection and kindness must remain at the heart of how we build and lead in an AI-driven world.

    Takeaways
    1. AI as a Tool for Equity and Empowerment: Rola sees AI as a powerful tool to level the global playing field, particularly through her startup TurnTroop, which helps businesses adopt AI responsibly while building talent pipelines in Africa. She believes Africa doesn’t need saviours or more charities—it needs CEOs and commerce rooted in dignity and purpose.
    2. Leadership Grounded in Purpose, Generosity, and Emotional Intelligence: Rola champions emotionally intelligent leadership, rejecting the “move fast and break things” culture. She promotes slowing down to reflect, empowering teams, and building systems that include everyone. Her values of generosity and purpose shape how she leads, builds tools, and envisions ethical AI.
    3. Human Connection Must Remain Central in an AI-Driven World: While Rola utilises AI tools like ChatGPT and Claude as “chiefs of staff,” she emphasises their limitations, particularly in terms of nuance, judgment, and emotional presence. She urges founders and leaders to stay human, stay kind, and stay emotionally connected, especially in distributed teams.

    Links

    https://www.turntroop.ai/

    https://www.linkedin.com/in/rola-aina/

    58 min
  • 132. Lyudmila Lugovskaya: The Hidden Costs of Creative Automation
    2025/05/15

    In this episode of Women WithAI, Dr. Lyudmila Lugovskaya brings her extensive experience in AI and data science to our conversation. We discuss how she helps companies navigate the complex landscape of generative AI to achieve real business outcomes. Lyudmila highlights the common misconceptions surrounding AI, particularly the overestimation of its capabilities and the importance of having clean, organised data. We explore the evolving job market as AI becomes more integrated into various industries, and the necessity for humans to adapt and maintain essential skills. Our discussion emphasises the potential of AI as a tool for innovation while also recognising the challenges and responsibilities that come with its use.

    Takeaways:

    • Dr. Lugovskaya emphasises the importance of having clean and organised data for successful AI implementations.
    • Companies often have unrealistic expectations about AI, believing it can instantly solve their business problems.
    • As generative AI rapidly evolves, continuous learning and adaptation are essential for professionals in the field.
    • The conversation highlights that while AI can automate tasks, it also requires human oversight and input for effective results.
    • Lyudmila suggests starting to use AI tools with detailed prompts to achieve better outcomes and efficiency.
    • The emergence of AI is likely to change job roles, creating new opportunities while rendering some tasks obsolete.

    Links referenced in this episode:

    • linkedin.com
    • arxiv.org
    • huggingface.co
    • hackernews.com

    36 min
  • Women: Unlocking AI - Ash Stearn on Building Agents for Everyone
    2025/05/11

    Ash Stearn is transforming how non-technical professionals engage with AI automation. In our conversation, she shares her journey from content writer to leader in the AI agent space. Ash emphasises the importance of community, particularly her woman-led AI agent building group that supports diverse voices in technology. We discuss the distinction between AI agents and traditional models like ChatGPT, particularly how agents can make autonomous decisions. Her insights provide a clear path for those interested in learning about AI, highlighting the significance of prompt engineering and the accessibility of tools like Relevance AI.

    Takeaways:

    • Ash Stearn emphasises the importance of making AI accessible to non-technical professionals.
    • AI agents are capable of making autonomous decisions, unlike traditional AI models like ChatGPT.
    • The Relevance AI platform enables users to create agents without extensive coding knowledge.
    • Learning prompt engineering is crucial for anyone starting with AI, as it drives effective results.
    • Ash's community focuses on collaborative learning, helping beginners build AI agents together.
    • AI agents should maintain a balance between autonomy and human oversight to ensure responsible use.

    Links referenced in this episode:

    • ashstearn.ai
    • relevance.ai
    • Ashleigh Stearn on LinkedIn

    37 min