Generative artificial intelligence is technology’s hottest talking point of 2023, having rapidly gained traction amongst businesses, professionals and consumers. But what is generative AI, how does it work, and what is all the buzz about? Read on to find out.
What is generative AI in simple terms?
Generative AI is a type of artificial intelligence technology that broadly describes machine learning systems capable of generating text, images, code or other types of content, often in response to a prompt entered by a user.
Generative AI models are increasingly being incorporated into online tools and chatbots that allow users to type questions or instructions into an input field, upon which the AI model will generate a human-like response.
SEE: Microsoft’s First Generative AI Certificate Is Available for Free (TechRepublic)
How does generative AI work?
Generative AI models use a complex computing process known as deep learning to analyze common patterns and arrangements in large sets of data and then use this information to create new, convincing outputs. The models do this by incorporating machine learning techniques known as neural networks, which are loosely inspired by the way the human brain processes and interprets information and then learns from it over time.
To give an example, by feeding a generative AI model vast amounts of fiction writing, over time the model would be capable of identifying and reproducing the elements of a story, such as plot structure, characters, themes, narrative devices and so on.
Generative AI models become more sophisticated as they receive and generate more data, again thanks to the underlying deep learning and neural network techniques. As a result, the more content a generative AI model generates, the more convincing and human-like its outputs become.
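The "learns from data over time" idea above can be illustrated with the simplest possible neural network: a single-neuron perceptron. This toy sketch is purely illustrative, far removed from the deep networks behind real generative AI, and every name in it is invented for the example. It repeatedly adjusts its weights whenever it gets a training example wrong, until it reproduces a simple pattern (logical AND):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """A single neuron: nudge weights toward correct answers, example by example."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = label - pred          # how wrong was the neuron?
            w[0] += lr * err * x1       # shift weights to reduce the error
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

# Toy pattern: output 1 only when both inputs are 1 (logical AND)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Deep learning stacks millions of such units in many layers, but the core loop is the same: compare the output to the data, then adjust the weights slightly and repeat.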
SEE: Gartner: ChatGPT interest boosts generative AI investments (TechRepublic)
Examples of generative AI
The popularity of generative AI has exploded in 2023, largely thanks to the likes of OpenAI’s ChatGPT and DALL-E programs. In addition, rapid advancement in AI technologies such as natural language processing has made generative AI accessible to consumers and content creators at scale.
Big tech companies have been quick to jump on the bandwagon, with Google, Microsoft, Amazon, Meta and others all lining up their own generative AI tools in the space of a few short months.
There are a variety of generative AI tools out there, though text and image generation models are arguably the most well-known. Generative AI models typically rely on a user feeding them a prompt that guides them towards producing a desired output, be it text, an image, a video or a piece of music, though this isn't always the case.
SEE: Cisco is bringing a Chat-GPT experience to WebEx (TechRepublic)
Examples of generative AI models include:
- ChatGPT: An AI language model developed by OpenAI that can answer questions and generate human-like responses from text prompts.
- DALL-E 3: Another AI model by OpenAI that can create images and artwork from text prompts.
- Google Bard: Google’s generative AI chatbot and rival to ChatGPT. It’s trained on the PaLM large language model and can answer questions and generate text from prompts.
- Claude 2: An AI chatbot from San Francisco-based Anthropic, which was founded in 2021 by ex-OpenAI researchers. Like ChatGPT, it can answer questions and generate text from prompts; Anthropic announced the latest version of the model in November.
- Midjourney: Developed by San Francisco-based research lab Midjourney Inc., this generative AI model interprets text prompts to produce images and artwork, similar to DALL-E.
- GitHub Copilot: An AI-powered coding tool that suggests code completions within the Visual Studio, Neovim and JetBrains development environments.
- Llama 2: Meta’s open-source large language model can be used to create conversational AI models for chatbots and virtual assistants, similar to GPT-4.
- xAI: Elon Musk, an early funder of OpenAI who later left the project, announced this new generative AI venture in July 2023. Its first model, the irreverent Grok, came out in November.
Types of generative AI models
There are various types of generative AI models, each designed for specific challenges and tasks. These can broadly be categorized into the following types.
Transformer-based models
Transformer-based models are trained on large sets of data to understand the relationships between sequential information, such as words and sentences. Underpinned by deep learning, these AI models tend to be adept at NLP and understanding the structure and context of language, making them well suited for text-generation tasks. OpenAI's GPT models, which underpin ChatGPT, and Google Bard are examples of transformer-based generative AI models.
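The key mechanism that lets transformers relate words in a sequence is called attention. The sketch below is a deliberately tiny, assumption-laden toy, not real transformer code: it scores how closely a "query" vector matches each "key" vector, converts the scores to weights with a softmax, and blends the corresponding "value" vectors accordingly:

```python
import math

def softmax(xs):
    """Turn raw scores into positive weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Weight each value by how closely its key matches the query."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Three toy word vectors; the query most resembles the first key,
# so the output leans toward the first value
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = attention([1.0, 0.0], keys, values)
```

Real transformers run many such attention heads in parallel over learned vectors with billions of parameters, but this is the operation that lets a model weigh the relevance of every word to every other word.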
Generative adversarial networks
GANs are made up of two neural networks known as a generator and a discriminator, which essentially work against each other to create authentic-looking data. As the name implies, the generator's role is to generate convincing output such as an image based on a prompt, while the discriminator works to evaluate the authenticity of said image. Over time, each component gets better at its respective role, resulting in more convincing outputs. GANs underpinned many early image generators, though newer tools such as DALL-E and Midjourney have largely moved to diffusion-based architectures.
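The adversarial back-and-forth between generator and discriminator can be caricatured in a few lines. This is a heavily simplified toy, assuming a one-parameter "generator" that tries to fake numbers drawn from around a real mean, and a "discriminator" reduced to a midpoint test; real GANs use full neural networks and gradient-based training for both sides:

```python
import random

random.seed(0)
REAL_MEAN = 5.0  # the "real data" distribution the generator tries to imitate

def real_sample():
    return REAL_MEAN + random.gauss(0, 0.1)

theta = 0.0  # generator's single parameter: the centre of its fakes
for step in range(200):
    real = real_sample()
    fake = theta + random.gauss(0, 0.1)   # generator produces a candidate
    boundary = 0.5 * (real + fake)        # "discriminator": midpoint separating real/fake
    theta += 0.1 * (boundary - fake)      # generator shifts toward the "real" side
```

After a couple of hundred rounds of this game, `theta` drifts to roughly 5.0: the generator's fakes become statistically indistinguishable from the real samples, which is exactly the equilibrium GAN training aims for.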
Variational autoencoders
VAEs leverage two networks to interpret and generate data — in this case, it's an encoder and a decoder. The encoder takes the input data and compresses it into a simplified format. The decoder then takes this compressed information and reconstructs it into something new that resembles the original data, but isn't entirely the same.
One example might be teaching a computer program to generate human faces using photos as training data. Over time, the program learns how to simplify the photos of people’s faces into a few important characteristics — such as size and shape of the eyes, nose, mouth, ears and so on — and then use these to create new faces.
Multimodal models
Multimodal models can understand and process multiple types of data simultaneously, such as text, images and audio, allowing them to create more sophisticated outputs. An example might be an AI model capable of generating an image based on a text prompt, as well as a text description of an image prompt. DALL-E 2 and OpenAI's GPT-4 are examples of multimodal models.
What is ChatGPT?
ChatGPT is an AI chatbot developed by OpenAI. It’s a large language model that uses transformer architecture — specifically, the generative pretrained transformer, hence GPT — to understand and generate human-like text.
SEE: You can learn everything you need to know about ChatGPT right here. (TechRepublic)
What is Google Bard?
Google Bard is another example of an LLM based on transformer architecture. Similar to ChatGPT, Bard is a generative AI chatbot that generates responses to user prompts.
Google launched Bard in the U.S. in March in response to OpenAI’s ChatGPT and Microsoft’s Copilot AI tool. In July, Google Bard was launched in Europe and Brazil.
Learn more about Bard by reading TechRepublic’s comprehensive Google Bard cheat sheet.
SEE: ChatGPT vs Google Bard (2023): An in-depth comparison (TechRepublic)
Benefits of generative AI
For businesses, efficiency is arguably the most compelling benefit of generative AI because it can enable enterprises to automate specific tasks and focus their time, energy and resources on more important strategic objectives. This can result in lower labor costs, greater operational efficiency and new insights into which business processes are performing well and which are not.
For professionals and content creators, generative AI tools can help with idea creation, content planning and scheduling, search engine optimization, marketing, audience engagement, research and editing and potentially more. Again, the key proposed advantage is efficiency because generative AI tools can help users reduce the time they spend on certain tasks so they can invest their energy elsewhere. That said, manual oversight and scrutiny of generative AI models remains highly important.
SEE: Why recruiters are excited about generative AI (TechRepublic)
Use cases of generative AI
Generative AI has found a foothold in a number of industry sectors and is rapidly expanding throughout commercial and consumer markets. McKinsey estimates that, by 2030, activities that currently account for around 30% of U.S. work hours could be automated, a shift accelerated by generative AI.
In customer support, AI-driven chatbots and virtual assistants help businesses reduce response times and quickly deal with common customer queries, reducing the burden on staff. In software development, generative AI tools help developers code more cleanly and efficiently by reviewing code, highlighting bugs and suggesting potential fixes before they become bigger issues. Meanwhile, writers can use generative AI tools to plan, draft and review essays, articles and other written work — though often with mixed results.
SEE: How Grammarly is drawing on generative AI to improve hybrid work (TechRepublic)
The use of generative AI varies from industry to industry and is more established in some than in others. Current and proposed use cases include the following:
- Healthcare: Generative AI is being explored as a tool for accelerating drug discovery, while tools such as AWS HealthScribe allow clinicians to transcribe patient consultations and upload important information into their electronic health record.
- Digital marketing: Advertisers, salespeople and commerce teams can use generative AI to craft personalized campaigns and adapt content to consumers’ preferences, especially when combined with customer relationship management data.
- Education: Some educational tools are beginning to incorporate generative AI to develop customized learning materials that cater to students’ individual learning styles.
- Finance: Generative AI is one of the many tools within complex financial systems to analyze market patterns and anticipate stock market trends, and it’s used alongside other forecasting methods to assist financial analysts.
- Environment: In environmental science, researchers use generative AI models to predict weather patterns and simulate the effects of climate change.
Dangers and limitations of generative AI
A major concern around the use of generative AI tools — and particularly those accessible to the public — is their potential for spreading misinformation and harmful content. The impact of doing so can be wide-ranging and severe, from perpetuating stereotypes, hate speech and harmful ideologies to damaging personal and professional reputations and the threat of legal and financial repercussions. It has even been suggested that the misuse or mismanagement of generative AI could put national security at risk.
These risks haven’t escaped policymakers. In April 2023, the European Union proposed new copyright rules for generative AI that would require companies to disclose any copyrighted material used to develop generative AI tools. These rules were approved in draft legislation voted in by the European Parliament in June, which also included strict curbs on the use of AI in EU member countries including a proposed ban on real-time facial recognition technology in public spaces.
The automation of tasks by generative AI also raises concerns around workforce and job displacement, as highlighted by McKinsey. According to the consulting group, automation could prompt 12 million occupational transitions between now and 2030, with job losses concentrated in office support, customer service and food service. The report estimates that demand for clerks could "decrease by 1.6 million jobs, in addition to losses of 830,000 for retail salespersons, 710,000 for administrative assistants and 630,000 for cashiers."
SEE: OpenAI, Google and More Agree to White House List of Eight AI Safety Assurances (TechRepublic)
Generative AI vs. general AI
Generative AI and general AI are two distinct concepts under the broader umbrella of artificial intelligence: the former describes a category of AI systems that exist today, while the latter refers to a hypothetical form of human-level machine intelligence.
Generative AI uses various machine learning techniques, such as GANs, VAEs or LLMs, to generate new content from patterns learned from training data. These outputs can be text, images, music or anything else that can be represented digitally.
General AI, also known as artificial general intelligence, broadly refers to the concept of computer systems and robotics that possess human-like intelligence and autonomy. This is still the stuff of science fiction — think Disney Pixar’s WALL-E, Sonny from 2004’s I, Robot, or HAL 9000, the malevolent AI from Stanley Kubrick’s 2001: A Space Odyssey. Most current AI systems are examples of “narrow AI,” in that they’re designed for very specific tasks.
To learn more about what artificial intelligence is and isn’t, check out our comprehensive AI cheat sheet.
Generative AI vs. machine learning
As described earlier, generative AI is a subfield of artificial intelligence. Generative AI models use machine learning techniques to process and generate data. Broadly, AI refers to the concept of computers capable of performing tasks that would otherwise require human intelligence, such as decision making and NLP.
Machine learning is a foundational component of AI and refers to the application of computer algorithms to data in order to teach a computer to perform a specific task. Machine learning is the process that enables AI systems to make informed decisions or predictions based on the patterns they have learned.
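At its most basic, "learning from data" can be as small as fitting one number. This toy sketch, with made-up data, fits the slope of a line through the origin by least squares — a machine learning task stripped to a single parameter:

```python
def fit_slope(xs, ys):
    """Least-squares fit of y = m * x through the origin."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1, 2, 3, 4]
ys = [2.1, 3.9, 6.2, 7.8]  # noisy measurements of roughly y = 2x
m = fit_slope(xs, ys)      # the learned "model" is just the slope m
```

Once `m` is learned, the model can predict `y` for an `x` it has never seen. Generative AI applies the same learn-then-predict principle, only with billions of parameters and far richer outputs than a single number.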
SEE: TechRepublic Premium’s prompt engineer hiring kit
Is generative AI the future?
The explosive growth of generative AI shows no sign of abating, and as more businesses embrace digitization and automation, generative AI looks set to play a central role in the future of industry. The capabilities of generative AI have already proven valuable in areas such as content creation, software development and medicine, and as the technology continues to evolve, its applications and use cases expand.
SEE: Firm study predicts big spends on generative AI (TechRepublic)
That said, the impact of generative AI on businesses, individuals and society as a whole hinges on how we address the risks it presents. Ensuring AI is used ethically by minimizing biases, enhancing transparency and accountability and upholding data governance will be critical, and ensuring that regulation maintains pace with the rapid evolution of technology is already proving a challenge. Likewise, striking a balance between automation and human involvement will be important if we hope to leverage the full potential of generative AI while mitigating any potential negative consequences.