On Tuesday, February 14 in San Francisco, I joined a packed house of 1,800+ entrepreneurs, marketing and PR professionals who were attending Jasper’s GenAI conference to learn more about incorporating generative AI into their workstreams to work more efficiently.
AI and ChatGPT have taken the mainstream consciousness by storm. You don’t need a technical background to understand the tremendous promise of AI, which not only can instantly generate plausible and engaging language to support functions like sales and marketing but can also help humans develop more efficient workflows.
While the idea of generative AI sparks uneasiness in some corners of corporate America—among those who balk at a vision of machines ruling the world, or who feel threatened by the efficiencies it enables and fear for their jobs—developers, marketers, writers and creators widely view generative AI as a breakthrough that can spark or reignite creativity while streamlining the steps needed to achieve outstanding results.
For example, using AI, at the stroke of a key, you can create images by describing a concept that you want to see and iterate on those images with simple word changes. You can create a first draft of a social media post, blog or essay in a few minutes by entering a few key words, phrases and concepts. Once the AI spits out a first draft, you can quickly iterate and revise the content. (Let’s face it: Most of us are better editors than first-draft writers.) Using AI software with a different focus, non-experts can generate computer code just by telling the AI tool what type of software they want to create. But the real beauty of this technology is its ability to continually improve by “learning” your style of writing, making images or coding. For example, Jasper AI will soon release an update to its content platform to streamline the work of brand marketers who are rolling out their brands across multiple properties and marketing assets with the same identity—saving companies potentially millions of dollars and a great deal of the time usually spent on brand projects.
However, as we look past all the hype around the benefits of generative AI, potential snags come into view, especially in certain industries where verbal precision is paramount. In the legal world, for instance, some practitioners are exploring the use of AI to find or generate model clauses for legal or compliance projects, but this poses serious risks unless very close attention is paid to the accuracy and currency of the data being used. The validity of generative AI output is highly dependent on the quality of the data fed into it. Remaining for a moment in the legal domain, it can be difficult to find the most updated statutes in the 50 jurisdictions and it may not be immediately clear whether you have the correct version of a statute. Users in these instances need to leverage rule engines that are able to parse out the correct data when the data set is not whole. We also need to be acutely aware that tools like ChatGPT can spit out entirely wrong data. When that happens, tools like Rover AI should be used to help define and clean up the searches as well as help users understand what information ChatGPT is missing.
A further consideration in the use of AI, one that was only touched upon during the conference, is the need for guardrails to protect consumers against errors and risks. For example, one of the concerns raised at the conference was how schoolchildren might use ChatGPT. In part because there is no standard manual for using generative AI tools—with each school and every teacher applying their own rules and ethical standards for acceptable use of the technology—machine-generated plagiarism is on the rise. In fact, there are now generative AI tools designed to combat plagiarism by identifying “student” essays that are actually written by ChatGPT. If technology providers offered guidelines for how their tools should be used in the classroom and at home for school assignments, at least there would be some common understanding—well, maybe just an inkling—and perhaps those guidelines would form the foundation of a set of best practices for educators and students to follow.
Generative AI doesn’t work in every context
Among all the sessions about types of AI tools and what to expect in the future, the most interesting talk was by New York Times columnist Kevin Roose, whose book “Futureproof: 9 Rules for Humans in the Age of Automation” identifies three kinds of work, and accompanying skillsets, that white collar workers can cultivate to futureproof their livelihoods and avoid being replaced.
- Surprising jobs – For the most part, AI helps with day-to-day mundane work: tasks like helping you write “thank you” emails, creating short-form content like blogs or even coming up with the first draft of a research paper. Where AI cannot help is with work that is irregular or chaotic, reliant on reacting to human contact or hard to codify. Think about a barista or a crisis communications manager whose work and schedules are often unpredictable. I particularly enjoyed Roose’s example of the machine-based baristas in the San Francisco airport. Certainly there will be machines that can efficiently pump out the standard drinks, but when you want a double-caffeinated matcha with oat milk, that’s when AI is probably not the best tool to use.
- Social jobs – The concept of “social” revolves around work that fulfills a social and emotional need where only an individual or a company can provide an experience unique to you. That might be a financial advisor who understands your lifestyle as well as your financial goals, and maybe takes you out to lunch as well. In accountancy, sure, AI can help with standard tax returns, but most consumers are still going to lean toward hiring a real person with whom they have a social or trusting relationship. Or think about artists and artisans: Machines can generate images, but the individual artist’s creativity cannot be replicated, whether they are composing an aria, drawing a portrait or producing amazing small-batch bread with figs and beer. Generative AI cannot replace the individuals who create these unique, one-of-a-kind products.
- Scarce jobs – In instances where the stakes are high, the tolerance for error is low and excellence is observable, only a human will do. Think about lawyers. Generative AI can certainly help lawyers become more efficient by doing a lot of the “busy work” (case citations, form comparisons, research, etc.) that they don’t want to do and clients don’t want to pay steep hourly fees for. But when it comes to legal strategy and representation in court, the risks are too high to allow a machine to represent you.
My biggest takeaway from the conference, regardless of the type of AI you are using or how it is involved in your work, is that individuals, brands and companies need to protect and refine their unique selling propositions and think of generative AI as a powerful tool that can help them innovate and improve. But don’t think of AI as a substitute for human creativity. It’s not, and it’s unlikely to be anytime soon.
NOTE: While I wrote about generative AI, no part of this blog was created by Jasper.AI or any other AI tool.