Newsletter: Microsoft Acquires Semantic Machines & NVIDIA Teaches Robots To Observe Humans




“Technology does more than delight, entertain and make our lives more convenient; it’s also an agent for social good. That is why it’s important for tech startups to stay informed about, and make a mark on, policies that impact them.” – Ron Conway



Semantic Machines is an AI startup focused on building conversational AI that enables machines to communicate, collaborate and execute tasks. This acquisition should help Microsoft fortify its conversational AI technology, Cortana.


Tech giant Microsoft has recently announced the acquisition of one of the most renowned companies working in conversational artificial intelligence, Semantic Machines. The company was founded in August 2014, attracting $8.5 million in funding that year and a further $12.3 million in December 2015. Semantic Machines works specifically on conversational AI that helps establish better synchronization between humans and machines, resulting in higher accuracy and convenience. It also works in the fields of speech synthesis, natural language processing and deep learning.


With this acquisition, Microsoft takes the first step towards bolstering its AI offerings: Cortana, Microsoft Cognitive Services and the Azure Bot Service. Semantic Machines employs some of the best talent in the conversational AI industry, including Larry Gillick, who previously served as chief scientist for Siri at Apple. Cortana may not be the sharpest tool in the shed today, but with Semantic Machines on board, Microsoft should be better equipped to compete with conversational AI services like Amazon Alexa, Samsung Bixby, Apple’s Siri and Google Assistant.

David Ku, CVP and chief technology officer of Microsoft AI & Research, said, “With the acquisition of Semantic Machines, we will establish a conversational AI center of excellence in Berkeley to push forward the boundaries of what is possible in language interfaces.” He added, “Combining Semantic Machines’ technology with Microsoft’s own AI advances, we aim to deliver powerful, natural and more productive user experiences that will take conversational computing to a new level.”

Another benefit to Microsoft is that Semantic Machines brings a team of experienced scientists in the area of conversational AI. These are people who worked on the core systems behind Google Now, and the company also draws talent from speech recognition specialist Nuance Communications. With Semantic Machines’ workforce and technology, Microsoft should be able to make Cortana better at conversing with humans, executing tasks and remembering things you’ve shared. Microsoft may also work towards making Cortana follow the example Google just set with its impressive Google Duplex demonstration.



The researchers at NVIDIA’s new robotics lab in Seattle have been working on an unconventional technology: teaching robots to execute tasks by observing humans at close range.


When it comes to robots, there has been little significant progress in learning. Robots do what they are intended to do, but they perform only those tasks they have been programmed for from the beginning, and these giant machines cannot work in close proximity to vulnerable humans. Researchers at NVIDIA, however, now believe that robots and humans can work side by side, and that robots can even learn from humans by observing them. To that end, a research team at NVIDIA’s new robotics lab in Seattle presented a method for teaching a robot by simply observing humans at ICRA (the International Conference on Robotics and Automation) in Brisbane, Australia.


According to Dieter Fox, senior director of robotics research at NVIDIA (and a professor at the University of Washington), the researchers want to enable a next generation of robots that can work safely in close proximity to humans. This is not easy: the robots need to detect people, track their actions and learn how they can help. Such robots could be beneficial for small-scale industries or home users. And while it is possible to train an algorithm to play a game and learn from its mistakes, a physical robot cannot afford that kind of trial and error in the real world.

A team of NVIDIA researchers led by Stan Birchfield and Jonathan Tremblay developed a system that helps teach a robot to perform tasks by observing a human. The researchers first trained a series of neural networks to detect objects, infer the relationships between them, and then generate a program to execute the actions the system just witnessed the human perform. The system also generates a human-readable description of the tasks it performs, which helps in diagnosing the problem if anything goes wrong.
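The stages described above (detect objects, infer relations, generate a program, describe it) can be sketched roughly as a chain of functions. Everything below is an illustrative assumption for this newsletter, not NVIDIA’s actual code or API; the perception stage is mocked with the kind of output an object-detection network might produce after watching a human stack two blocks.

```python
# Hypothetical sketch of the observe -> infer -> program -> describe pipeline.
# All names and data structures here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    name: str
    position: tuple  # (x, y) grid cell in the workspace

def infer_relationships(objects):
    """Stage 2: infer spatial relations between detected objects.
    Toy rule: object a is 'on' object b if a sits directly above b."""
    relations = []
    for a in objects:
        for b in objects:
            if a is not b and a.position[0] == b.position[0] \
                    and a.position[1] == b.position[1] + 1:
                relations.append((a.name, "on", b.name))
    return relations

def generate_program(relations):
    """Stage 3: turn the inferred relations into an executable plan."""
    return [f"place {top} on {bottom}" for top, _, bottom in relations]

def describe(program):
    """Stage 4: human-readable description, useful for diagnosing failures."""
    return "Plan: " + "; ".join(program)

# Mocked Stage 1 output: a red block stacked on a blue block.
observed = [
    DetectedObject("red_block", (0, 1)),
    DetectedObject("blue_block", (0, 0)),
]
plan = generate_program(infer_relationships(observed))
print(describe(plan))  # prints "Plan: place red_block on blue_block"
```

The human-readable plan string mirrors what the article highlights: because the intermediate program is legible, a failed execution can be traced back to a wrong perception or a wrong inferred relation.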

The team focused on using synthetic data from a simulated environment to train the core models. As both Fox and Birchfield explained, the simulated environment is what allows them to train the robots quickly; running these experiments in the real world would take far longer and could be dangerous too. “We think using simulation is a powerful paradigm going forward to train robots do things that weren’t possible before,” Birchfield noted.
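The appeal of synthetic data is that a simulator can churn out labeled training examples instantly and safely, because the ground truth is known from the simulated state itself. Here is a toy illustration of that idea, again a hypothetical sketch rather than NVIDIA’s actual tooling:

```python
# Toy illustration of synthetic training data: generate randomized
# two-block scenes in "simulation", with labels derived directly from
# the simulated state. Purely illustrative; not NVIDIA's pipeline.
import random

COLORS = ["red", "green", "blue", "yellow"]

def synthetic_scene(rng):
    """One randomized scene: two blocks, labeled 'stacked' or not.
    Returns (scene, label) where scene maps block -> (x, y) cell."""
    top, bottom = rng.sample(COLORS, 2)
    x = rng.randint(0, 4)
    stacked = rng.random() < 0.5
    scene = {
        top: (x, 1 if stacked else 0),
        bottom: (x + (0 if stacked else 1), 0),
    }
    label = (top, "on", bottom) if stacked else None
    return scene, label

# Thousands of labeled examples, generated in milliseconds with no
# physical robot, no human demonstrator, and no safety risk.
rng = random.Random(0)
dataset = [synthetic_scene(rng) for _ in range(1000)]
```

A real pipeline would render images of such scenes and randomize lighting, textures and camera pose so that perception networks trained purely in simulation transfer to real camera footage, which is what makes the simulation-first approach fast compared with real-world data collection.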
