Rebooting AI: Deep learning, meet knowledge graphs


“This is what we need to do. It’s not popular right now, but this is why the stuff that is popular isn’t working.” That’s a gross oversimplification of what scientist, best-selling author, and entrepreneur Gary Marcus has been saying for a number of years now, but at least it’s one he has made himself.

The “popular stuff which is not working” part refers to deep learning, and the “what we need to do” part refers to a more holistic approach to AI. Marcus is not short of ambition; he is set on nothing less than rebooting AI. He is not short of qualifications either. He has been working on figuring out the nature of intelligence, artificial or otherwise, more or less since his childhood.

Questioning deep learning may sound controversial, considering deep learning is seen as the most successful sub-domain of AI at the moment. Marcus, for his part, has been consistent in his critique. He has published work highlighting how deep learning fails, exemplified by language models such as GPT-2, Meena, and GPT-3.

Marcus recently published a 60-page paper titled “The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence.” In this work, Marcus goes beyond critique, putting forward concrete proposals for moving AI forward.

As a precursor to Marcus’ upcoming keynote on the future of AI in Knowledge Connexions, ZDNet engaged with him on a wide array of topics. Picking up from where we left off in the first part, today we expand on specific approaches and technologies.

Robust AI: 4 blocks versus 4 lines of code

Recently, Geoff Hinton, one of the forefathers of deep learning, claimed that deep learning is going to be able to do everything. Marcus thinks the only way to make progress is to put together building blocks that already exist, but that no current AI system combines.

Building block No. 1: A connection to the world of classical AI. Marcus is not suggesting getting rid of deep learning, but using it in conjunction with some of the tools of classical AI. Classical AI is good at representing abstract knowledge, such as sentences or abstractions. The goal is to have hybrid systems that can use perceptual information.

No. 2: We need rich ways of specifying knowledge, and we need large-scale knowledge. Our world is filled with lots of little pieces of knowledge; deep learning systems mostly aren’t. They’re mostly just filled with correlations between particular things. So we need a lot of knowledge.

No. 3: We need to be able to reason about these things. Let’s say we know about physical objects and their positions in the world — a cup, for example. The cup contains pencils. AI systems then need to be able to realize that if we cut a hole in the bottom of the cup, the pencils might fall out. Humans do this kind of reasoning all the time, but current AI systems don’t.

No. 4: We need cognitive models — things inside our brain, or inside of computers, that tell us about the relations between the entities we see around us in the world. Marcus points to some systems that can do this some of the time, and to why the inferences they can make are far more sophisticated than what deep learning alone is doing.
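To make blocks No. 3 and No. 4 a little more concrete, here is a toy sketch in Python. It is entirely our own illustration, not anything Marcus has built: a minimal cognitive model of a container and its contents, plus one hand-written physical rule of the kind deep learning systems never explicitly represent.

```python
# A toy sketch (our illustration, not Marcus' system) of reasoning
# over an explicit cognitive model: entities (a cup and its pencils)
# plus one hand-written physical rule.

from dataclasses import dataclass, field

@dataclass
class Container:
    name: str
    contents: set = field(default_factory=set)
    has_hole_in_bottom: bool = False

def cut_hole_in_bottom(container: Container) -> None:
    container.has_hole_in_bottom = True

def likely_to_fall_out(container: Container) -> set:
    # Explicit rule: the contents of a container with a hole in its
    # bottom are liable to fall out.
    return set(container.contents) if container.has_hole_in_bottom else set()

cup = Container("cup", contents={"pencil_1", "pencil_2"})
print(likely_to_fall_out(cup))  # set() -- nothing falls out yet
cut_hole_in_bottom(cup)
print(likely_to_fall_out(cup))  # {'pencil_1', 'pencil_2'} (order may vary)
```

The point of the toy is that the inference (hole in the bottom, so the pencils might fall out) follows from an explicit rule applied to explicit entities, not from correlations mined out of training data.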

To us, this looks like a well-rounded proposal. But there has been some pushback, by the likes of Yoshua Bengio no less. Bengio, Geoff Hinton, and Yann LeCun are considered the forefathers of deep learning and recently won the Turing Award for their work.

There is more to AI than Machine Learning, and there is more to Machine Learning than deep learning. Gary Marcus is arguing for a hybrid approach to AI, reconnecting it with its roots. Image: Nvidia

Bengio and Marcus have engaged in a debate, in which Bengio acknowledged some of Marcus’ arguments, while also choosing to draw a metaphorical line in the sand. Marcus mentioned he finds Bengio’s early work on deep learning to be “more on the hype side of the spectrum”:

“I think Bengio took the view that if we had enough data we would solve all the problems. And he now sees that’s not true. In fact, he softened his rhetoric quite a bit. He’s acknowledged that there was too much hype, and he acknowledged the limits of generalization that I’ve been pointing out for a long time — although he didn’t attribute this to me. So he’s recognized some of the limits.

However, on this one point, I think he and I are still pretty different. We were talking about which things you need to build innately into a system. So there’s going to be a lot of knowledge. Not all of it’s going to be innate. A lot of it’s going to be learned, but there might be some core that is innate. And he was willing to acknowledge one particular thing because he said, well, that’s only four lines of computer code.

He didn’t quite draw a line and say nothing more than five lines. But he said it’s hard to encode all of this stuff. I think that’s silly. We have gigabytes of memory now which cost nothing. So you could easily accommodate the physical storage. It’s really a matter of building and debugging and getting the right amount of code.”

Innate knowledge, and the 20-year-old hype

Marcus went on to offer a metaphor. He said the genome is a kind of code that’s evolved over a billion years to build brains autonomously without a blueprint, adding it’s a very sophisticated system which he wrote about in a book called The Birth of the Mind. There’s plenty of room in that genome to have some basic knowledge of the world.

That’s obvious, Marcus argues, from observing what we call a precocial animal: a horse that just gets up and starts walking, or an ibex that climbs down the side of a mountain when it’s a few hours old. There has to be some innate knowledge there about what the visual world looks like and how to interpret it, how forces apply to your own limbs, and how that relates to balance, and so forth.

There’s a lot more than four lines of code in the human genome, the reasoning goes. Marcus believes most of our genome is expressed in our brain as the brain develops. So a lot of our DNA is actually about building strong starting points in our brains that allow us to then accumulate more knowledge:

“It’s not nature versus nurture. Like the more nature you have, the less nurture you have. And it’s not like there’s one winner there. It’s actually nature and nurture work together. The more that you have built in, the easier it is to learn about the world.”


Exploring intelligence, artificial and otherwise, almost inevitably gets philosophical. The innateness hypothesis refers to whether certain primitives, such as language, are built-in elements of intelligence.


Marcus’ point about storage being cheap and plentiful resonated with us, and so did the part about adding knowledge to the mix. After all, more and more AI experts are acknowledging this. We would argue that the hard part is not so much how to store this knowledge, but how to encode it, connect it, and make it usable.

Which brings us to a very interesting, and also hyped point/technology: Knowledge graphs. The term “knowledge graph” is essentially a rebranding of an older approach — the semantic web. Knowledge graphs may be hyped right now, but if anything, it’s a 20-year-old hype.

The semantic web was created by Sir Tim Berners-Lee to bring symbolic AI approaches to the web: distributed, decentralized, and at scale. Parts of it worked well, others less so. It went through its own trough of disillusionment, and now it’s seeing its vindication, in the form of schema.org taking over the web and knowledge graphs being hyped. Most importantly, however, knowledge graphs are seeing real-world adoption. Marcus did reference knowledge graphs in his “Next Decade in AI” paper, which was a trigger for us.
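As a toy illustration of what that encoded knowledge looks like, here is a short sketch (our own, with placeholder example.org URIs) that builds a few schema.org facts as RDF triples using the Python rdflib library:

```python
# A minimal sketch of a knowledge graph in RDF, using the Python
# rdflib library and the schema.org vocabulary. The URIs are
# illustrative placeholders.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")

g = Graph()
g.bind("schema", SCHEMA)

author = URIRef("https://example.org/person/gary-marcus")
book = URIRef("https://example.org/book/rebooting-ai")

# Knowledge is stored as explicit subject-predicate-object triples,
# exactly the kind of symbolic representation deep learning lacks.
g.add((author, RDF.type, SCHEMA.Person))
g.add((author, SCHEMA.name, Literal("Gary Marcus")))
g.add((book, RDF.type, SCHEMA.Book))
g.add((book, SCHEMA.name, Literal("Rebooting AI")))
g.add((book, SCHEMA.author, author))

print(g.serialize(format="turtle"))
```

Each triple is a small, explicit piece of knowledge; connect enough of them and you get a graph a machine can traverse and reason over.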

Marcus acknowledges that there are real problems to be solved to pursue his approach, and a great deal of effort must go into constraining symbolic search well enough to work in real time for complex problems. But he sees Google’s knowledge graph as at least a partial counter-example to this objection.
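To get a feel for constrained symbolic search over a large, live knowledge graph, here is a small sketch querying Wikidata’s public SPARQL endpoint with the Python SPARQLWrapper library. Wikidata stands in here because Google’s knowledge graph is not openly queryable this way, and Q42 (Douglas Adams) is the endpoint’s canonical example entity:

```python
# A sketch of constrained symbolic search over a public knowledge
# graph: querying Wikidata via SPARQL. The LIMIT clause is one simple
# way of keeping the search bounded.

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper(
    "https://query.wikidata.org/sparql",
    agent="kg-sketch/0.1 (placeholder; use your own contact info)",
)
sparql.setQuery("""
SELECT ?bookLabel WHERE {
  ?book wdt:P31 wd:Q571 .   # instance of: book
  ?book wdt:P50 wd:Q42 .    # author: Douglas Adams
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["bookLabel"]["value"])
```

The query walks explicit typed edges (instance-of, author) rather than scoring statistical similarity, which is what makes this search symbolic.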

Deep learning, meet knowledge graphs

When asked if he thinks knowledge graphs can have a role in the hybrid approach he advocates for, Marcus was positive. One way to think about it, he said, is that there is an enormous amount of knowledge that’s represented on the Internet that’s available essentially for free, and is not being leveraged by current AI systems. However, much of that knowledge is problematic:

“Most of the world’s knowledge is imperfect in some way or another. But there’s an enormous amount of knowledge that, say, a bright 10-year-old can just pick up for free, and we should have AI be able to do that.

Some examples are, first of all, Wikipedia, which says so much about how the world works. And if you have the kind of brain that a human does, you can read it and learn a lot from it. If you’re a deep learning system, you can’t get anything out of that at all, or hardly anything.

Wikipedia is the stuff that’s on the front of the house. On the back of the house are things like the semantic web that label web pages for other machines to use. There’s all kinds of knowledge there, too. It’s also being left on the floor by current approaches.

The kinds of computers that we are dreaming of that can help us to, for example, put together medical literature or develop new technologies are going to have to be able to read that stuff.

We’re going to have to get to AI systems that can use the collective human knowledge that’s expressed in language form and not just as a spreadsheet in order to really advance, in order to make the most sophisticated systems.”
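To ground that last point: a lot of the “back of the house” knowledge Marcus mentions is already there to harvest. Here is a minimal sketch, assuming the Python requests and extruct libraries and a placeholder URL, of pulling schema.org JSON-LD annotations out of a published page:

```python
# A sketch of harvesting structured annotations already embedded in
# web pages: extracting schema.org JSON-LD with the requests and
# extruct libraries. The URL is a placeholder.

import requests
import extruct

url = "https://example.org/some-annotated-article"
html = requests.get(url, timeout=10).text

# extruct parses the page and returns annotations grouped by syntax.
data = extruct.extract(html, base_url=url, syntaxes=["json-ld"])
for item in data["json-ld"]:
    print(item.get("@type"), "-", item.get("headline", ""))
```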


A hybrid approach to AI, mixing and matching deep learning and knowledge representation as exemplified by knowledge graphs, may be the best way forward.

Marcus went on to add that, for the semantic web, it turned out to be harder than anticipated to get people to play along and be consistent about it. But that doesn’t mean there’s no value in the approach, or in making knowledge explicit. It just means we need better tools to make use of it. This is something we can subscribe to, and something many people are onto as well.

It’s become evident that we can’t really expect people to manually annotate each piece of content they publish with RDF vocabularies. So a lot of that is now happening automatically, or semi-automatically, via content management systems. WordPress, the popular blogging platform, is a good example. Many plugins exist that annotate content with RDF (in its developer-friendly JSON-LD form) as it is published, with minimal or no effort required, ensuring better SEO in the process.
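For the curious, here is roughly what such an annotation looks like. This is our own generic sketch of schema.org JSON-LD, with placeholder names, date, and URL, not the output of any particular WordPress plugin:

```python
# A sketch of the kind of schema.org JSON-LD annotation a CMS plugin
# might embed in a published article. All values are placeholders.

import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Rebooting AI: Deep learning, meet knowledge graphs",
    "author": {"@type": "Person", "name": "Example Author"},
    "datePublished": "2020-01-01",
    "url": "https://example.org/rebooting-ai-article",
}

# A plugin would embed this in the page head as:
#   <script type="application/ld+json"> ... </script>
print(json.dumps(article, indent=2))
```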

Marcus thinks that machine annotations will get better as machines get more sophisticated, and there will be a kind of upward ratcheting effect as we get to AI that is more and more sophisticated. Right now, the AI is so unsophisticated that it’s not really helping that much, but that will change over time.

The value of hybrids

More generally, Marcus thinks people are recognizing the value of hybrids, especially in the last year or two, in a way that they did not previously:

“People fell in love with this notion of ‘I just pour in all of the data in this one magic algorithm and it’s going to get me there’. And they thought that was going to solve driverless cars and chat bots and so forth.

But there’s been a wake up — ‘Hey, that’s not really working, we need other techniques’. So I think there’s been much more hunger to try different things and try to find the best of both worlds in the last couple of years, as opposed to maybe the five years before that.”

Amen to that, and as previously noted — it seems like the state of the art of AI in the real world is close to what Marcus describes too. We’ll revisit, and wrap up, next week with more techniques for knowledge infusion and semantics at scale, and a look into the future.

