Artificial intelligence is an interesting technology for use in government and elsewhere. For years, AI was talked about theoretically, with a few examples surfacing here and there which were either extremely limited in their capabilities or optimized for very specific tasks, like business process automation or gaming. And then ChatGPT came along, and suddenly AI was in the mainstream, in the news, and on everyone’s mind.
A few other interesting things happened as well. Suddenly, all of those warnings from scientists and tech luminaries about the dangers of AI didn’t seem quite so theoretical once we could see a fairly advanced AI in action. In the months since ChatGPT’s release, I have interviewed quite a few AI scientists about their concerns, and they made a good case that a technology this powerful needs to be designed and deployed ethically, especially when it works for governments or is otherwise placed in positions of authority or power.
The other interesting thing that happened was that so-called visual AIs which generate photographs or art on demand also started popping up all over the place, driven by the same type of casual, chat-like interface as their text-only cousins. There are fewer concerns about visual AIs doing something unethical, although artists have complained that those AIs steal their work, mix in some new elements, and then present it as original art.
It’s clear that AI technology, after years of more or less stagnation, is now moving ahead at warp speed. As such, there are now three new AIs of note that are slowly being deployed to the mainstream, or which are finally making a public debut after a long period of beta testing. Two of them are being paired with search engine technology, while the third is a purely visual engine, but one that users can download and use on their own computers without even needing to connect to the internet.
I jumped in and tested out as much of these three new AIs as I could.
The New Bing
It’s no surprise that the biggest news in terms of new AIs came from Microsoft, which just launched “The New Bing” search engine into beta; it is slated to eventually replace the old one. The New Bing adds an AI chatbot based on ChatGPT, which is a really smart move because it gives that AI access to the live internet, unlike the frozen training data that the core ChatGPT app relies on.
If you remember from my review of ChatGPT, I explained how the AI was trained by human users over time to help it tailor its responses for both accuracy and conversational flow. As such, it almost always sounds good, even if its answers are not always completely correct. But the biggest limitation with ChatGPT is that the scientists developing it cut off the data they were feeding it around the end of 2021 into the early part of 2022. If you ask it about current events, like how the Ukraine War is going, it won’t know. And if you ask it about Joe Biden visiting Kiev, it will rightly tell you that he did so on November 22, 2014, when he was vice president of the United States, but won’t know anything about his more recent visit as president. That means that while ChatGPT currently has one of the biggest user bases of any app in history, its usefulness will decline over time as we get farther from the point where it stopped receiving new data.
Enter The New Bing, which plugs ChatGPT into the internet, and pairs it with a search engine to boot. Theoretically, anyone will be able to try out the new AI and search engine combination one day, but for now, you have to join a waiting list, as Microsoft is only letting a few people in at a time. A bunch of my friends and I signed up as soon as it was announced, and only one of us has gotten in so far, so I don’t know how long you may have to wait. While you are waiting, Bing offers some sample questions on the signup page so that you can see how it works.
The first thing that you will notice about The New Bing is that you have 1,000 characters to create your search query. And instead of having to use specific search terms to get good results, like “2023 Hyundai Elantra prices,” you can instead explain to Bing, in natural language, that you are looking for an affordable but comfortable car with better than 40 miles per gallon gas mileage, an automatic transmission and high safety ratings. Bing will then analyze all of that using the ChatGPT component and show you the most relevant results.
In that example, results for the car search will come up in two panels. The one on the left is a traditional search engine result with links to car dealers, company websites and maybe car magazines. But on the right, you will see what looks like a ChatGPT interface. In this case it will have lists of different car models that match your criteria with descriptions. However, unlike the baseline ChatGPT interface, almost every time that The New Bing AI makes a statement in the right panel about a vehicle, like “it can accelerate from 0 to 60 mph in 7.2 seconds” it will also provide a hyperlink so you can check out the source and make sure that the AI is telling you the truth.
Beneath those results will be more suggested search questions and a “Let’s Chat” option, which opens up a dialog like you would find with ChatGPT. From what I have seen, the Bing approach seems to be one of the best ways to go, because you get the full power of ChatGPT’s AI without the restrictions of a closed system that no longer collects new data.
Google Bard
Google uses a slightly different AI technology for Google Bard, the search engine and AI combination that it plans to roll out worldwide soon. According to Google, Bard will use the company’s own next-generation language and conversation capabilities, powered by the Language Model for Dialogue Applications, which Google calls LaMDA for short. The company unveiled LaMDA about two years ago, but only recently began spotlighting it, likely because of pressure from Bing, and to a lesser extent ChatGPT, making inroads against its search engine empire.
It also seems like Bard is still very much in development. Recent news stories have shared leaked memos from Google officials asking employees to quickly help test Bard, and the AI made an embarrassing blunder during its very first introduction to the world.
I was not able to get into a live test of Google Bard or even find where I could sign up to help test it out. I suspect that it may be restricted to employees right now. Eventually, Google says it plans to integrate Bard with its search engine, as well as use it to help businesses that need support teams but don’t have the budget to hire lots of humans. The company is also looking to deploy a lightweight version of LaMDA that is supposed to be both highly accurate and quick, something that will be key to a search engine’s success.
Stable Diffusion
We had a lot of fun over the Christmas holiday playing with DALL-E, the image generation AI made by OpenAI, the company behind ChatGPT. Using everyday language to create great works of art is a really cool use for AI.
The DALL-E image generation AI is good, but it is limited in a few key ways. First off, due to the popularity of the generator, users are only given a set amount of credits every month to spend on making new images. And secondly, there seems to be quite a bit of censorship imposed on what DALL-E is allowed to generate.
Both of those issues are overcome with the Stable Diffusion engine. It works very much like DALL-E, but without many of the same restrictions. In fact, with a little effort, you can even install Stable Diffusion on your own PC or Mac and use it as much as you like without ever connecting to the internet.
You can also use Stable Diffusion online, although you may have to wait a bit during peak times. In my testing, I never had to wait more than about 15 seconds before my images started to generate. There also seems to be far less censorship of what images you can create compared with DALL-E, and the quality of the art is fairly comparable—better in some cases and worse in others.
Once you master how to carefully explain what you want the AI to generate, your art will improve with either Stable Diffusion or DALL-E. So practice makes perfect, or at least better, when learning how to paint a digital picture with nothing but your words and a highly advanced AI hanging on every one of them.
John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys