Using the widely popular SpongeBob SquarePants as an example, Microsoft’s blog post showcases the service’s ability to pick out the titular character alongside Patrick, Gary the Snail, and lesser-known characters. Customers can then tag them with names and other data, receiving statistics such as the percentage of the video in which each character appears.

The improvement was achieved through the Azure Custom Vision Service, which lets customers train AI with custom models in a single pipeline. As a result, Microsoft says you won’t need machine learning skills to use these tools.

“The addition of reliable AI-based animated detection will enable us to discover and catalogue character metadata from our content library quickly and efficiently. Most importantly, it will give our creative teams the power to find the content they want instantly, minimize time spent on media management and allow them to focus on the creative,” said Andy Gutteridge, senior director at Viacom International Media Networks.

Meanwhile, Microsoft is highlighting improvements to its identification of multilingual content. It’s moving away from techniques that require a language to be specified in advance for the detection models, instead discerning languages automatically. All of the separate languages are then merged into a single transcription or caption file.

Finally, Video Indexer is also gaining the ability to review and search across all people and locations, surfacing timeframes, descriptions, and Bing links. This is joined by shot detection, which lets you search by characteristics such as wide, close-up, outdoor, indoor, and two-shot.
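For readers who want to try the search features described above, Video Indexer exposes them through a REST API. Below is a minimal sketch of building a search request URL; the endpoint shape follows the service's documented `Videos/Search` pattern, but the location, account ID, token, and query values here are placeholder assumptions, not real credentials.

```python
from urllib.parse import urlencode

def build_search_url(location: str, account_id: str, access_token: str, query: str) -> str:
    """Construct a Video Indexer search URL (no network call is made here).

    All arguments are caller-supplied placeholders; a real request would need
    a valid account and access token obtained from the Video Indexer portal.
    """
    base = f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos/Search"
    params = urlencode({"accessToken": access_token, "query": query})
    return f"{base}?{params}"

# Example: searching an (assumed) account for a character surfaced by the
# new animated-character detection.
url = build_search_url("trial", "my-account-id", "ACCESS_TOKEN", "Patrick")
print(url)
```

Issuing a GET request against such a URL would return matching videos with the timeframes in which the query appears, which is the mechanism behind the people, location, and character search described in the announcement.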