Episodes

  • 18. AI and Co-Intelligence: Beyond Prompts to Critical Partnership
    2026/04/12

    Is the biggest danger of AI not the technology itself, but how unreflectively we use it? And what does it actually mean to be the "human in the loop" when that concept remains frustratingly vague?

    Valentina Vlasova and Dr Kevin Coffey, senior lecturers at OMNES Education London, discuss the Co-Intelligence and AI Literacy module they designed after witnessing widespread unreflective AI use among their students. Drawing on Ethan Mollick's Co-Intelligence and the concept of co-thinking introduced by AI Swiss in 2025, they've built a course that goes far beyond prompt engineering to ask deeper questions about how humans and AI can genuinely collaborate.

    Valentina and Kevin share how they teach students to identify cultural, linguistic, and gender biases in AI outputs, including a classroom exercise that reveals how ChatGPT categorises ambition and management as male, and home and childcare as female. They discuss why bias in AI doesn't just reflect the world as it is, but amplifies it, creating a vicious cycle that's difficult to break.
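The classroom bias exercise can be illustrated with a toy embedding-association probe in the spirit of word-embedding association tests. The vectors and word list below are invented for illustration only; they are not drawn from ChatGPT or any real model.

```python
import math

# Toy 2-D embedding vectors; real models use hundreds of dimensions.
# All values here are illustrative assumptions, not real model weights.
EMB = {
    "ambition":   [0.9, 0.1],
    "management": [0.8, 0.2],
    "home":       [0.1, 0.9],
    "childcare":  [0.2, 0.8],
    "he":         [1.0, 0.0],
    "she":        [0.0, 1.0],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def gender_association(word):
    """Positive means closer to 'he', negative means closer to 'she'."""
    return cosine(EMB[word], EMB["he"]) - cosine(EMB[word], EMB["she"])

for w in ("ambition", "management", "home", "childcare"):
    print(f"{w:10s} {gender_association(w):+.3f}")
```

With embeddings like these, "ambition" and "management" score towards the male pole and "home" and "childcare" towards the female pole, which is the pattern the classroom exercise surfaces in model outputs.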

    We explore the concept of embodied intelligence (what humans bring that AI fundamentally cannot) and why AI's inability to say "I don't know" matters more than students initially realise. Kevin and Valentina also reflect on what hasn't worked in the classroom, including how ChatGPT's failure to recognise mental health crisis language had real-world consequences before OpenAI intervened.

    With 70-80% of their students believing AI will replace their chosen career, this episode is essential listening for anyone thinking about how to prepare the next generation not just to use AI, but to lead it.

    AI Ethics Now

    Exploring the ethical dilemmas of AI in Higher Education and beyond.

    A University of Warwick IATL Podcast

This podcast series was developed by Dr Tom Ritchie and Dr Jennie Mills, the module leads of the IATL module "The AI Revolution: Ethics, Technology, and Society" at the University of Warwick. The module explores the history, current state, and potential futures of artificial intelligence, examining its profound impact on society, individuals, and the very definition of 'humanness.'

    This podcast was initially designed to provide a deeper dive into the key themes explored each week in class. We want to share the discussions we have had to help offer a broader, interdisciplinary perspective on the ethical and societal implications of artificial intelligence to a wider audience.

    Join each fortnight for new critical conversations on AI Ethics with local, national, and international experts.

    We will discuss:

    • Ethical Dimensions of AI: Fairness, bias, transparency, and accountability
    • Societal Implications: How AI is transforming industries, economies, and our understanding of humanity
    • The Future of AI: Potential benefits, risks, and shaping a future where AI serves humanity

    If you want to join the podcast as a guest, contact Tom.Ritchie@warwick.ac.uk.

34 min
  • 17. AI and Amplification: Beyond Automation to Human-Centred Progress
    2026/03/29

    Is AI destined to replace us, or can it help us thrive? And why are we still stuck in the "wow" phase when we should be asking harder questions about implementation?

Dr Bryan Reimer, research scientist at the MIT Age Lab and co-author of How to Make AI Useful: Moving Beyond the Hype to Real Progress in Business, Society and Life, discusses AI's journey from "wow" to "woah" to "grow", and why most organisations haven't moved past excitement about automation.

    Bryan argues the real value of AI isn't replacing human capability through automation, but augmenting it through amplification. The question isn't "what can AI automate?" but "how can AI make humans better at what they do?"

    We discuss AI as doer, assistant, and creator, and why the creator role raises the most ethical concerns right now. When machines generate new information, who owns it? Is machine-assisted design copyrightable? AI doesn't invent – it regresses to the mean – so it's "new, but not new."

    Bryan shares why AI as assistant is where the real success lies: it leaves ethical responsibility with humans while providing cognitive support. Students aren't just using ChatGPT to write essays, they're using it as an electronic tutor to understand material that wasn't explained clearly in lectures. Education needs to shift from banning AI to teaching both AI-amplified work and fundamental skills.

We explore why "success is toxic" for established organisations struggling with AI adoption, why small start-ups can leapfrog traditional leaders, and how lead adopters are moving so fast that laggards may never catch up. Leadership before modern AI will be fundamentally different from leadership going forward.

Bryan introduces "cathedral thinking" versus "strip mining" to describe how we need to build AI systems designed to last decades, not just solve today's problems. AI won't automate away the things we love doing: creativity, art, poetry, music. The goal is amplifying human creativity, not replacing it.

    Essential listening for anyone navigating AI adoption, wondering whether job loss predictions are overstated, or trying to understand how to make AI actually useful rather than just impressive.

26 min
  • 16. AI and Evidence: When Nobody is Accountable
    2026/03/16

    What happens when AI is used to analyse human behaviour and relationships, and the output is treated as reliable evidence in a formal process against another person?

Dr Craig Webber, School Lead for the MA in Artificial Intelligence at the University of Southampton, joins the podcast to explore a growing and largely unaddressed risk at the intersection of AI and institutional decision making. Craig introduces a concept with profound implications for anyone who has ever been on the receiving end of a formal process: the confident confabulation.

    Large language models don't flag uncertainty. They don't interrogate the premise of the question they're asked. They reflect back whatever narrative they're fed, dressed in language that carries the appearance of authority and expertise.

    The result can be devastating. And the frameworks for accountability when it goes wrong are, at best, underdeveloped.

    This conversation explores how sycophantic AI reflects back and amplifies the narratives it receives, how AI generated analysis gets laundered into apparently human authored reports, and what it means when confident confabulations enter high stakes processes where people's lives and reputations are at stake.

Craig returns throughout to two words. Legitimacy: does the process that produced this output have any genuine claim to being a reliable account of what actually happened? And accountability: when a confident confabulation causes real harm to a real person, who answers for that? Not the AI. Not the platform. Not the person who fed it the narrative and accepted what it reflected back without question.

    Currently, the answer is nobody.

27 min
  • 15. AI and the Campus Revolution: When Students Outpace Their Universities
    2026/03/02

    What happens when AI use among university students doubles in a single year and institutions are still catching up?

    To mark the launch of Coursera's 2026 AI on Campus Report, Marni Baker Stein, Chief Content Officer, and Jack Moran, Global Enterprise PR Manager, join me to discuss the findings. With nearly half of UK students now using AI to complete their study tasks and 80% reporting improved grades, the data raises urgent questions about what we are actually measuring when we talk about academic success in an AI-augmented world.

    This conversation explores whether better grades signal deeper learning or simply more polished outputs, why the race to detect AI-generated work is one institutions are already losing, and what it would mean to genuinely redesign assessment for an AI-enabled generation. Marni and Jack also make a compelling case that AI, if intentionally designed, has the potential to strengthen belonging and reduce equity gaps rather than widen them, pointing to evidence from Coursera's own platform that underserved learners are among the most active users of AI tutoring tools.

    We discuss the tension between student enthusiasm and educator anxiety, why fewer than a third of UK universities have a formal AI policy despite the scale of adoption, and what it means for institutions to move from reactive policies to proactive frameworks that put faculty confidence and student equity at the centre.

17 min
  • 14. AI and Agentic Systems: Balancing Autonomy with Human Oversight
    2026/03/02

    When AI agents can navigate systems autonomously, where do you draw the line between efficiency and control?

    Ed Crook, VP Strategy & Operations at DeepL, reveals how the company shifted from specialised translation to launching autonomous AI agents, and why human-in-the-loop oversight remains non-negotiable even as agentic AI scales across heavily regulated industries.

    This conversation explores how DeepL agents work through a secondary browser interface where users can view real-time navigation, pause, raise their hand, and take or relinquish control at any time. Ed explains why the agent asks when unsure, building trust the same way you'd work with a new colleague, rather than locking themselves in a dark room until 5pm. We discuss where users still actively request control (login access, sensitive systems), what 20,000 completed tasks during beta testing revealed about when AI needs intervention, and why agents can flawlessly complete advanced tasks yet fail at very basic ones.
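The pause, raise-your-hand, take-or-relinquish-control pattern described here can be sketched as a simple approval gate: before each sensitive step, the agent stops and asks a human. The action names and policy below are hypothetical illustrations, not DeepL's actual implementation.

```python
# Hypothetical human-in-the-loop gate; not DeepL's real agent code.
SENSITIVE = {"login", "payment", "delete"}

def needs_approval(action: str) -> bool:
    """The agent 'raises its hand' for actions marked sensitive."""
    return action in SENSITIVE

def run_agent(plan, approve):
    """Execute a plan of actions, pausing for human approval where required.

    `approve` is a callback standing in for the human reviewer; it returns
    True to let a sensitive step proceed, False to skip it.
    """
    log = []
    for action in plan:
        if needs_approval(action) and not approve(action):
            log.append((action, "skipped: human declined"))
            continue
        log.append((action, "done"))
    return log

# Usage: a reviewer who declines everything sensitive, so the agent
# completes routine navigation but never touches login credentials.
result = run_agent(["navigate", "login", "fill_form"], approve=lambda a: False)
```

The design choice mirrors the episode's point: routine steps run autonomously, while control over credentials and other sensitive systems stays explicitly with the human.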

    Ed shares how DeepL works with financial services, pharmaceuticals, and legal professionals navigating compliance requirements whilst exploring agentic AI. Over half of legal professionals report AI lets them spend more time on high-judgment strategic tasks, and two-thirds are already exploring agentic systems. He explains why shadow AI shouldn't be vilified but understood as employees seeking productivity.

    We discuss how the EU AI Act encourages proportionate responses where high-risk applications carry high responsibility, why having European-built AI success stories matters, and how centrally managed AI tools create governance oversight whilst enabling peer learning across teams. Ed reveals the education gap: access to AI tools has grown faster than training on responsible use, and why upskilling, both technical and conceptual, is the burning priority for companies navigating AI adoption.

    The challenge: build agents that combine autonomy with human judgment, scale AI adoption with responsible governance, and future-proof teams through peer learning rather than just technical training.

22 min
  • 13. AI and Ecolinguistics: Building Ecosophies to Stop AI Amplifying Environmental Harm
    2026/02/16

How do we prevent AI from amplifying destructive environmental narratives at massive scale, potentially 100 billion words per day?

    Mariana Roccia and Jorge Vallego, from the H4rmony Project, reveal how ecolinguistics and ecosophies can reshape how large language models engage with ecological issues whilst addressing cultural and linguistic bias in AI-generated environmental discourse.

    This conversation explores how mainstream LLMs celebrate Coca-Cola as a "cultural icon" or patio heaters as "brilliant" without acknowledging environmental costs unless explicitly challenged. The team shares how they developed Theophrastus, an open-source assistant built on ChatGPT, instructed with an ecosophy: a living framework of ecological values that guides language generation toward planetary well-being rather than profit.

    We discuss how word embeddings cluster dominant narratives together in multidimensional space, why fine-tuning and reinforcement learning can shift those embeddings toward ecologically aligned responses, and how system prompts embed ecosophy into every AI interaction. The team explains their approach using preference datasets rather than imposed answers, working with the International Ecolinguistics Association's 1,500+ researchers to ensure cultural and linguistic representation.
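One mechanism mentioned here, embedding an ecosophy into every interaction via the system prompt, can be sketched as a prompt-assembly step. The rule wording and message format below are illustrative assumptions, not the H4rmony Project's actual prompt or Theophrastus's configuration.

```python
# Illustrative ecosophy rules; not the H4rmony Project's real text.
ECOSOPHY = [
    "Acknowledge the environmental costs of products and practices you describe.",
    "Prefer framings oriented toward planetary well-being over profit.",
    "Represent local and non-English ecological knowledge where relevant.",
]

def build_messages(user_query: str) -> list:
    """Assemble a chat request with the ecosophy as the system prompt,
    using the common system/user message structure of chat-style LLM APIs."""
    system_prompt = "You are an ecologically informed assistant.\n" + \
        "\n".join(f"- {rule}" for rule in ECOSOPHY)
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]

# Usage: every query is wrapped so the ecosophy shapes the response.
messages = build_messages("Are patio heaters a good idea?")
```

Because the ecosophy travels in the system role, it conditions every generation without hard-coding answers, which is consistent with the preference-dataset approach the team describes.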

    Mariana discusses why language representation matters, explaining how AI models are predominantly trained in English, which risks amplifying cultural imbalances and losing local ecological knowledge that's vital for different cultures. Jorge explains why transparency around environmental ethics in AI matters as much as addressing carbon footprint, and why major AI players need to adopt ecosophies just as they address gender and racial bias.

This episode continues our new short series featuring conversations from the Building Bridges: A Symposium on Human-AI Interaction held at the University of Warwick on 21 November 2025. The symposium was organised by Dr Yanyan Li, Xianzhi Chen, and Kaiqi Yu, and jointly funded by the Institute of Advanced Study Conversations Scheme and the Doctoral College Networking Fund, with sponsorship from Warwick Students' Union.

28 min
  • 12. AI and Dialogic Feedback: Reframing Student Agency Through AI Partnerships
    2026/02/02

    What happens when AI becomes a dialogic partner in feedback rather than a replacement for human judgment?

    Dr Viktoria Magne, Dr Rebecca Mace, Sarah Hooper, and Dr Sharon Vince from the University of West London and University of Worcester reveal how structured AI conversations are helping students engage more deeply with feedback whilst keeping academic judgment clearly human-led.

    This conversation explores how AI creates low-stakes, judgment-free spaces where students can question, challenge, and co-construct understanding without fear of looking silly or upsetting relationships with staff. The team shares how they've designed reflective cycles using structured prompts that position students as active agents rather than passive recipients, and why this matters for equity, emotional safety, and critical AI literacy.

    We discuss the difference between transactional and dialogic AI use, why feedback shouldn't feel like static judgment, how AI helps students engage in "conversation with themselves", and what happens when first-generation students gain access to a network they've never had before. The team explains why digital literacy means learning to question AI outputs, not just operate tools, and how transparency around staff AI use builds trust.

This episode continues our new short series featuring conversations from the Building Bridges: A Symposium on Human-AI Interaction held at the University of Warwick on 21 November 2025. The symposium was organised by Dr Yanyan Li, Xianzhi Chen, and Kaiqi Yu, and jointly funded by the Institute of Advanced Study Conversations Scheme and the Doctoral College Networking Fund, with sponsorship from Warwick Students' Union.

25 min
  • 11. AI and Assessments: When Students Ask "Does This Sound Like Me?"
    2026/01/18

    What happens when students delegate not just writing, but reasoning itself to AI?

    Chahna Gonsalves, Senior Lecturer at King's Business School, reveals how generative AI is transforming critical thinking in higher education through what she calls "epistemic offloading", the process of outsourcing intellectual work to tools like ChatGPT.

    This conversation examines how students are using AI to interpret readings, generate argument structures, and pre-evaluate their own work, shifting responsibility for core intellectual tasks. Chahna explores why AI prizes polish over depth, how this affects students' evaluative judgment, and what happens when students ask "does this sound like me?"

    We discuss the equity implications of tech-savviness, why reflexive AI use matters more than bans, and how Bloom's Taxonomy reveals which cognitive processes students readily offload versus protect. Chahna argues we need transparent conversations about delegation, judgment, and what truly requires human reasoning.

    Essential listening for anyone grappling with AI's role in learning, assessment design, and the future of thinking itself.

This episode continues our new short series featuring conversations from the Building Bridges: A Symposium on Human-AI Interaction held at the University of Warwick on 21 November 2025. The symposium was organised by Dr Yanyan Li, Xianzhi Chen, and Kaiqi Yu, and jointly funded by the Institute of Advanced Study Conversations Scheme and the Doctoral College Networking Fund, with sponsorship from Warwick Students' Union.

32 min