
Covariant’s CEO on building AI that helps robots learn

Covariant was founded in 2017 with a simple goal: helping robots learn how to better pick up objects. It’s a pressing need for anyone looking to automate warehouses, and one that is far more complex than it might appear. Most of the goods we encounter have traveled through a warehouse at some point, and together they represent an impossibly broad range of sizes, shapes, textures and colors.

The Bay Area firm has built an AI-based system that trains networked robots to improve their picks as they go. A demo on the floor at this year’s ProMat shows how quickly a connected arm can identify, pick and place a broad range of objects.

Co-founder and CEO Peter Chen sat down with TechCrunch at the show last week to discuss robotic learning, building foundational models and, naturally, ChatGPT.

TechCrunch: When you’re a startup, it makes sense to use as much off-the-shelf hardware as possible.

PC: Yeah. Covariant started from a very different place. We started with pure software and pure AI. The first hires for the company were all AI researchers. We had no mechanical engineers, no one in robotics. That allowed us to go much deeper into AI than anyone else. If you look at other robotics companies [at ProMat], they’re probably using some off-the-shelf or open source model — things that have been used in academia.

Like ROS.

Yeah. ROS or open source computer vision libraries, which are great. But what we are doing is fundamentally different. We look at what academic AI models provide, and it’s not quite sufficient. Academic AI is built in a lab environment. It’s not built to withstand the tests of the real world — especially the tests of many customers, millions of SKUs, millions of different types of items that need to be processed by the same AI.

A lot of researchers are taking a lot of different approaches to learning. What’s different about yours?

A lot of the founding team was from OpenAI — three of the four co-founders. If you look at what OpenAI has done in the language space over the last three to four years, it’s basically taking a foundation model approach to language. Before ChatGPT, there were a lot of natural language processing AIs out there. Search, translation, sentiment detection, spam detection — there were loads of natural language AIs out there. The approach before GPT was, for each use case, you train a specific AI for it, using a smaller, task-specific dataset. Look at the results now: GPT basically abolishes the field of translation, and it wasn’t even trained for translation. The foundation model approach is basically, instead of using a small amount of data specific to one situation to train a model for one circumstance, let’s train a large, generalized foundation model on a lot more data, so the AI is more general.

You’re focused on picking and placing, but are you also laying the foundation for future applications?

Definitely. Grasping, or pick-and-place, is the first general capability we’re giving the robots. But if you look behind the scenes, there’s a lot of 3D understanding and object understanding. There are a lot of cognitive primitives that will generalize to future robotic applications. That being said, grasping or picking is such a vast space that we can work on it for a while.

You go after picking and placing first because there’s a clear need for it.

There’s a clear need, and there’s also a clear lack of technology for it. The interesting thing is, if you had come by this show 10 years ago, you would have been able to find picking robots. They just wouldn’t work. The industry has struggled with this for a very long time. People said it couldn’t work without AI, so they tried niche AI and off-the-shelf AI, and neither worked.

Your systems feed into a central database, and every pick informs how the machines pick in the future.

Yeah. The funny thing is that almost every item we touch passes through a warehouse at some point. It’s almost a central clearinghouse for everything in the physical world. When you start by building AI for warehouses, it’s a great foundation for AI that goes beyond warehouses. Say you take an apple out of the field and bring it to an agricultural plant — the AI has seen an apple before. It’s seen strawberries before.

That’s a one-to-one. I pick an apple in a fulfillment center, so I can pick an apple in a field. More abstractly, how can these learnings be applied to other facets of life?

If we want to take a step back from Covariant specifically and think about where the technology trend is going, we’re seeing an interesting convergence of AI, software and mechatronics. Traditionally, these three fields have been somewhat separate from each other. Mechatronics is what you’ll find when you come to this show. It’s about repeatable movement. If you talk to the salespeople, they tell you about reliability, how this machine can do the same thing over and over again.

The really amazing evolution we have seen from Silicon Valley in the last 15 to 20 years is in software. People have cracked the code on how to build really complex, highly intelligent-looking software. All of the apps we use are really people harnessing the capabilities of software. Now we are in the front seat of AI, with all of its amazing advances. When you ask me what’s beyond warehouses, where I see this going is the convergence of these three trends to build highly autonomous physical machines in the world. You need the convergence of all of these technologies.

You mentioned ChatGPT coming in and blindsiding people making translation software. That’s something that happens in technology. Are you afraid of a GPT coming in and effectively blindsiding the work that Covariant is doing?

That’s a good question for a lot of people, but I think we had an unfair advantage in that we started with pretty much the same belief that OpenAI had about building foundation models. General AI is a better approach than building niche AI. That’s what we have been doing for the last five years. I would say that we are in a very good position, and we are very glad OpenAI demonstrated that this philosophy works really well. We’re very excited to do the same in the world of robotics.
