One of the major limitations of AI is labelled "catastrophic forgetting": -

"That’s a big problem. See, cutting-edge algorithms learn, so to speak, after analyzing countless examples of what they’re expected to do. A facial recognition AI system, for instance, will analyze thousands of photos of people’s faces, likely photos that have been manually annotated, so that it will be able to detect a face when it pops up in a video feed. But because these AI systems don’t actually comprehend the underlying logic of what they do, teaching them to do anything else, even if it’s pretty similar — like, say, recognizing specific emotions — means training them all over again from scratch. Once an algorithm is trained, it’s done, we can’t update it anymore."

by Dan Robitzski in Futurism, Aug 31st, 2018
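To make the quoted problem concrete, here is a minimal, hypothetical sketch (in Python, not from the article): a tiny logistic-regression classifier is trained on one task, then further trained on a second, and its accuracy on the first task collapses to roughly chance because the same weights are simply overwritten. The tasks, data and numbers are illustrative assumptions, not anything Robitzski or his sources describe.

# Minimal illustration of catastrophic forgetting (hypothetical example):
# train a tiny classifier on task A, continue training on task B,
# and watch performance on task A collapse.
import numpy as np

rng = np.random.default_rng(0)

def make_task(boundary):
    # 2-D points labelled by which side of a chosen linear boundary they fall on
    X = rng.normal(size=(500, 2))
    y = (X @ boundary > 0).astype(float)
    return X, y

def train(w, X, y, lr=0.5, epochs=300):
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)     # gradient step on the log-loss
    return w

def accuracy(w, X, y):
    return float(((X @ w > 0).astype(float) == y).mean())

task_a = make_task(np.array([1.0, 1.0]))    # task A: boundary x + y = 0
task_b = make_task(np.array([1.0, -1.0]))   # task B: boundary x - y = 0

w = np.zeros(2)
w = train(w, *task_a)
print("task A accuracy after training on A:", accuracy(w, *task_a))

w = train(w, *task_b)                        # keep training, now on task B only
print("task A accuracy after training on B:", accuracy(w, *task_a))
print("task B accuracy after training on B:", accuracy(w, *task_b))

Run as written, this typically prints near-perfect accuracy on task A after the first phase and roughly 50% (chance level) after the second, which is the signature of the forgetting problem the article describes.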

He goes on to report: -

"In fact, a number of AI experts who attended The Joint Multi-Conference on Human-Level Artificial Intelligence last week in Prague said, in private interviews with Futurism or during panels and presentations, that the problem of catastrophic forgetting is one of the top reasons they don’t expect to see AGI or human-level AI anytime soon."

But a first step towards solving the issue has been taken by Irina Higgins, a senior research scientist at Google DeepMind.

Of that work, Dan Robitzski reports: -

"Higgins’ team’s work is a pretty big step towards getting AI to imagine more like a human and less like an algorithm."

If AI is a strategic priority, the article is worth reading in full, in parallel with "AI - Analogue Fools rush in where Digital Angels fear to tread".