The generative AI question that marketers and others must consider
- Posted on May 3, 2023
- Estimated reading time 4 minutes
Generative AI – especially ChatGPT – has suddenly made the possibilities of artificial intelligence (AI) very real for organizations and individuals alike. Not surprisingly, this move to the mainstream has come alongside a deluge of media coverage and analyst forecasts about generative AI, much of it as cynical as sentiment in the early days of the internet. That said, the potential for generative AI to disrupt how marketing is done and to replace roles currently held by humans certainly deserves focus. Unfortunately, most of the dialogue fails to adequately address the key question marketers and content creators (and all professionals) should be considering right now: How do I need to work differently with AI as my copilot?
Recent research from MIT Sloan Management Review and Boston Consulting Group indicates 60 percent of employees already view AI as a coworker and not a job threat. With generative AI specifically, there is reason to be optimistic that – just like the internet more broadly – its impacts will largely be positive if marketers and content creators are adaptable. This includes preparing to spend less time on tasks like copywriting and analyzing sentiment data and to instead take on more strategic tasks like content strategy and acting on insights generated through natural language queries.
However, our greatest responsibility as marketers will inevitably be to help our organizations and coworkers capitalize on the attributes that make us uniquely human, notably trust and creativity. I recently co-presented with my colleague (and digital ethics guru) Chris McClean to Avanade’s global marketing community on some of the areas where marketers need to be ready to lead when it comes to using generative AI in the enterprise:
1. IP ownership/Responsible sourcing: Imagine the potential reputational damage if your creative team uses images from a generative AI tool for a new marketing campaign, and it’s then revealed that some of the images are nearly identical replicas of someone else’s work and may breach copyright. Without the appropriate checks, your organization could also suffer reputational damage from association with a partner that has used questionable labor practices to source information for or train large language models.
2. Transparency: Customers, employees or other stakeholders may lose trust if they discover that a personal message has been generated by a machine. We should always think of people first, treating generative AI as a copilot, especially in contexts where authenticity and the right tone of empathy and integrity are essential, such as executive communications. Marketers and content creators will also need to be custodians of appropriate transparency, explaining how insights and messages are produced by humans and generative AI, with the goal of maintaining trust among key stakeholders.
3. Accuracy: It’s impossible to know how authoritative or up-to-date the content sources are for most large language models (LLMs), so relying on these tools to explain a product or service will likely result in factual inaccuracies. Even the latest version of the OpenAI LLM that powers ChatGPT is only trained on content up to September 2021. Generative AI LLMs may also produce blatantly biased or otherwise inappropriate content, so you need to be vigilant about the risks of using generative AI for customer communications, especially real-time touchpoints like chatbots.
4. Information Security: Many don’t realize that information entered into generative AI prompts can be exposed to third parties. Avanade recommends using the Microsoft Azure OpenAI Service over public models like ChatGPT, because the Azure service takes advantage of Microsoft’s ongoing investments in enterprise security, privacy, and compliance. However, your organization should fully explore what access all generative AI services and partners have to your information. As marketers, we should be prepared to build on our experience with GDPR to help our organizations maintain appropriate customer data protections in an AI-first world.
To fulfill our responsibility as creative and ethical advocates for the needs of customers (and employees), we first need to improve our knowledge and understanding of their expectations. This knowledge and understanding will position marketers to help our organizations more effectively evaluate the ‘why’ of using emerging technologies like generative AI, rather than getting distracted by cool imagery and slick content. It will also better equip us to play a leadership role as responsible AI ambassadors, influencing our organizations to choose experiments and use cases wisely, based on critical factors like the availability of quality, diverse data.
Finally, just as every organization is becoming an AI-first organization, we must become AI-first marketers. Accenture research finds 40 percent of all working hours may be impacted by LLMs like GPT-4, meaning each of us could soon have an AI copilot on a constant basis. That means we need to become savvier about the ‘back end’ of digital technologies, especially the data and behaviors that inform the AI systems and LLMs we work with. This learning can’t wait. Now is the time to start discussing with your colleagues and partners what AI copilots will mean for your organization and industry. Be intentional about conducting small experiments with generative AI to test the opportunities and risks of your ideas and to increase your teams’ readiness for an AI-first world.
If you find it challenging to keep on top of emerging technologies, I invite you to sign up for Avanade’s monthly Digital IQ webcast series, which uses plain speak and real-world stories to help busy executives remain digitally savvy through continual change.