Tag: ethics

    Who Gets Left Behind?

    Bias and Representation in AI-Powered Media

    Artificial Intelligence is transforming the media and communications landscape—from how press releases are written to how marketing content is targeted. But as we celebrate these advances, it’s equally important to ask: Who is being excluded, misrepresented, or harmed by the AI systems behind our media tools?

    As future PR and media professionals, we must examine how bias enters the equation—and what we can do about it.


    📌 The Problem: Biased Data = Biased Output

    AI tools learn by analyzing massive datasets—news articles, books, social media, and even historical ads. But as multiple researchers have shown, these sources often contain built-in social biases—including racism, sexism, and ableism.

    According to the landmark Gender Shades study by MIT researcher Joy Buolamwini and Timnit Gebru, commercial facial analysis systems misclassified darker-skinned women at error rates of up to 34.7%, compared with a maximum error rate of just 0.8% for lighter-skinned men. These systems weren’t deliberately designed to discriminate—but their training data skewed heavily white and male.

    The result? People from marginalized communities are more likely to be misidentified, misrepresented, or excluded altogether from AI-powered visual and language tools.
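    The Gender Shades finding shows why accuracy should be reported per demographic subgroup rather than as a single aggregate number. Here is a minimal sketch of that kind of disaggregated audit (the evaluation records below are invented purely for illustration):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately for each demographic group.

    records: list of (group, predicted_label, true_label) tuples.
    Returns a dict mapping group -> accuracy between 0.0 and 1.0.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation data: the aggregate accuracy (3 of 4 = 75%)
# hides a large gap between the two subgroups.
records = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "male", "female"),   # misclassified
    ("darker-skinned women", "female", "female"),
]
print(accuracy_by_group(records))
```

    Reporting results this way is exactly what surfaced the disparities the aggregate numbers had hidden.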


    🎙️ What This Means for PR and Media

    AI-driven tools like ChatGPT, DALL·E, and predictive analytics software are increasingly used in:

    • Press release writing
    • Media pitching
    • Visual content creation
    • Campaign targeting

    But if these tools are trained on biased data, they may:

    • Reinforce stereotypes in written or visual outputs
    • Exclude diverse audiences from campaign targeting
    • Misinterpret tone or language in multicultural contexts

    🛠️ What Can We Do About It?

    Here are steps communicators can take to use AI responsibly and inclusively:

    1. Audit Your AI Tools

    Check what datasets your tools are trained on. If that information isn’t disclosed, ask questions. Push for tools trained on diverse, representative data.
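    Even without access to a tool’s training data, you can audit its outputs. One rough first signal is counting how often different groups are represented across a batch of generated content. A minimal sketch (the term lists and sample outputs are hypothetical, and a real audit would use far more data and categories):

```python
import re

def representation_counts(texts, group_terms):
    """Count how many texts in a batch mention each group's terms --
    a rough first signal of skewed representation in AI output."""
    counts = {group: 0 for group in group_terms}
    for text in texts:
        lowered = text.lower()
        for group, terms in group_terms.items():
            # \b keeps "his" from matching inside "this", etc.
            if any(re.search(r"\b" + re.escape(t) + r"\b", lowered) for t in terms):
                counts[group] += 1
    return counts

# Hypothetical outputs from a press-release generator.
samples = [
    "The CEO thanked his team for the launch.",
    "She led the engineering effort behind the product.",
    "The chairman announced his retirement.",
]
terms = {
    "masculine": ["he", "his", "chairman"],
    "feminine": ["she", "her", "chairwoman"],
}
print(representation_counts(samples, terms))
```

    A heavy skew in counts like these doesn’t prove bias on its own, but it tells you where to look more closely.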

    2. Don’t Automate Without Oversight

    Human judgment is still essential. Always review AI-generated content for bias, misrepresentation, or tone-deaf phrasing—especially when representing underrepresented communities.

    3. Use Inclusive Design Standards

    Tools like Microsoft’s Inclusive Design Toolkit and OpenAI’s own bias mitigation strategies can guide content creators in producing more equitable work.

    4. Promote Algorithmic Transparency

    Support policies and practices that require AI platforms to disclose how decisions are made—whether in media coverage, hiring, or content distribution.


    💡 Looking Forward: AI as a Force for Equity?

    It’s not all bad news. AI can also help highlight bias and create space for voices that were historically ignored. Google Jigsaw’s Perspective API, for example, scores comments for toxicity to help moderators reduce abuse in online spaces. Other platforms are now developing models trained specifically on more inclusive datasets.
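    For a sense of how simple these tools are to adopt, Perspective API is a plain REST service: you POST a comment and request attribute scores such as TOXICITY. The sketch below only builds the request payload; actually sending it requires an API key, which is omitted here:

```python
import json

# Endpoint for Google's Perspective API (v1alpha1 at the time of writing).
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_toxicity_request(comment_text):
    """Build the JSON payload Perspective API expects for a TOXICITY score."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = build_toxicity_request("You are all idiots.")
print(json.dumps(payload, indent=2))

# To actually send it (requires an API key):
#   import requests
#   resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=payload)
#   score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

    Of course, toxicity classifiers inherit biases of their own, which is all the more reason to treat their scores as one input to human judgment, not a verdict.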

    Still, progress will depend on how we design, question, and use these systems. As communicators, we have an ethical responsibility not just to use AI—but to challenge it when it falls short.


    📣 Final Thoughts

    Bias in AI is a reflection of societal inequalities. And in the world of media and communications, those inequalities can get amplified at scale.

    Let’s be the generation of PR and media leaders who don’t just adopt AI—but shape it to be more just, inclusive, and human.


    How the AI for Good Foundation is Shaping the Future of Ethical AI

    As the role of artificial intelligence continues to expand across industries, it’s becoming clear that learning how to use AI responsibly is just as important as learning how to use it effectively. In this post, I’m taking a closer look at the AI for Good Foundation — a nonprofit that’s at the forefront of aligning AI technology with social impact. Their work provides a great example for anyone entering the communications or tech field who wants to engage with AI in a way that is both ethical and community-focused.

    What is the AI for Good Foundation?

    The AI for Good Foundation was founded in 2015 with a mission to use artificial intelligence to advance global social good. Their focus is on aligning AI development with the United Nations Sustainable Development Goals (SDGs), from reducing poverty to improving access to education. They work with researchers, nonprofits, corporations, and governments to create scalable, ethical solutions to some of the world’s biggest challenges.

    Some of their key projects include humanitarian tools like the Eureka Platform, which connects refugees to real-time information about food, shelter, and healthcare. They also run educational programs to help students and professionals integrate AI knowledge with a commitment to social impact. Another major part of their work is conducting AI audits to assess whether systems are being developed in ways that are ethical, inclusive, and sustainable.

    How They Meet Community Needs

    One thing that stands out about the AI for Good Foundation is their clear understanding of real-world needs. Instead of developing technology for technology’s sake, they focus on solving problems that directly impact vulnerable communities. For example, the Eureka Platform is not just a cool tech project — it’s a life-saving tool for people displaced by war or natural disasters.

    Their educational initiatives also address a major gap in the AI world: the lack of accessible, interdisciplinary learning opportunities. By offering programs like the AI+SDGs Launchpad, they’re helping to create a generation of technologists and communicators who think beyond profit and innovation to consider human rights and equity.

    My Perspective on Their Effectiveness

    From what I’ve seen, the AI for Good Foundation is incredibly effective because they blend technical innovation with social consciousness. Their work shows that AI isn’t just about coding smarter algorithms, but about creating systems that genuinely improve people’s lives. I especially appreciate their commitment to transparency and collaboration across sectors, which is something that’s often missing in the tech world.

    For students and young professionals entering fields like communications, public relations, tech, or policy, the AI for Good Foundation offers a powerful example of how AI can be used thoughtfully and responsibly. Engaging with their work can be a way to stay grounded in ethics while still being excited about all the opportunities AI can offer.

    Learn More