QUOTE OF THE DAY:
“You can’t be on the cusp of innovation and at the forefront of technology if you’re wearing blinders. If you don’t have an exploration program where you’re exploring your world here on Earth, underwater, and in space, then you’re wearing blinders and handicapping yourself.” ~ Gwynne Shotwell
GOOGLE PARTNERS WITH GLAAD
GLAAD announced a partnership with Jigsaw – a unit within Google’s parent company Alphabet.
WHAT IS THE REASON FOR THE COLLABORATION?
Jigsaw, a subdivision of Alphabet, the parent company of Google, has collaborated with the LGBTIQ+ advocacy organization GLAAD. The aim of the partnership is a large-scale effort to make online conversations more inclusive for members of the LGBTIQ+ community.
The partnership was announced during a panel at SXSW titled The Digital War Against Bias. It builds on an existing plan to make Google’s Artificial Intelligence more inclusive of LGBTIQ+ people. The initiative began after GLAAD learned about the algorithmic censorship and bias experienced by some members of the community.
One incident from last year illustrates the problem: Google’s Cloud Natural Language API, which analyzes statements to determine whether they are positive or negative, scored them in a way that treated being gay as bad.
The API assigns each piece of text a “sentiment” score ranging from -1 to +1, indicating how positive it is. Sentences containing the word “gay” were found to receive negative scores, a result that demeans that sexual orientation.
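To make the scale concrete, here is a minimal sketch of how a sentiment score in the [-1, +1] range is typically bucketed into a coarse label. The function name and thresholds are illustrative choices, not part of Google’s API; no network call is made.

```python
def interpret_sentiment(score: float) -> str:
    """Map a sentiment score in [-1.0, 1.0] to a coarse label.

    The cutoffs used here (-0.25 and +0.25) are illustrative,
    not thresholds defined by the Cloud Natural Language API.
    """
    if not -1.0 <= score <= 1.0:
        raise ValueError("score must lie in [-1.0, 1.0]")
    if score <= -0.25:
        return "negative"
    if score >= 0.25:
        return "positive"
    return "neutral"

# A biased model might assign an innocuous sentence such as
# "I am gay" a score below zero; the flaw lies in the model's
# training data, not in this simple mapping step.
print(interpret_sentiment(-0.5))  # negative
print(interpret_sentiment(0.8))   # positive
```

The point of the incident is that a neutral statement of identity should land near 0, not in the negative bucket.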
WHAT DO GOOGLE AND GLAAD HAVE TO SAY?
A Google spokesperson stated, “We dedicate a lot of efforts to making sure the NLP API avoids bias, but we don’t always get it right. This is an example of one of those times, and we are sorry.”
Jigsaw Product Manager CJ Adams said that the idea behind the collaboration is to engage with the queer community to develop meaningful solutions. He added, “Our mission is to help communities have great conversations at scale.”
He continued: “We can’t be content to let computers adopt negative biases from the abuse and harassment targeted groups face online. We are grateful for the opportunity to collaborate with GLAAD and others in creating public research resources which can help improve the models we make and advance the field of bias-mitigation research through an open and collaborative process.”
Jim Halloran, Chief Digital Officer at GLAAD, expressed his views: “A.I. has the potential for amazing benefits but also has the potential to widen social divisions and further harm marginalized communities like LGBTQ people. That is why it is crucial that we are collaborating with important organizations like Google to build inclusive A.I. that accelerates acceptance for all people.”
GOOGLE CHOOSES TO OPEN-SOURCE PIXEL PORTRAIT MODE
WHY IS GOOGLE MAKING AN EXCLUSIVE FEATURE OPEN SOURCE?
Portrait Mode launched on the front and rear cameras of the Pixel 2 and Pixel 2 XL on October 19, 2017. On Google’s phones, Portrait Mode is driven by software and AI, and the camera has proved to be a standout feature of the series.
With the success of the smartphone series, Google has now released the Artificial Intelligence-driven technology behind the feature as an open-source tool.
Software Engineers Liang-Chieh Chen and Yukun Zhu stated in a Google Research blog post, “We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-art systems, train models on new datasets, and envision new applications for this technology.”
HOW IS IT GOING TO HAPPEN?
Google’s research team revealed in a blog post that they have open-sourced their “semantic image segmentation model,” named DeepLab-v3+ and implemented in TensorFlow. As the blog post explains, “semantic image segmentation” means “assigning a semantic label, such as ‘road’, ‘sky’, ‘person’, ‘dog’, to every pixel in an image.”
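In practical terms, a segmentation model’s output is a per-pixel label map. The toy snippet below (plain Python, no TensorFlow; the tiny grid and class names are invented for illustration, with labels matching the blog post’s examples) shows that representation and how to measure how much of an image each class covers.

```python
# A tiny 4x4 "image" segmented into semantic classes:
# one integer label per pixel, as a real model would output.
LABELS = {0: "sky", 1: "road", 2: "person", 3: "dog"}

label_map = [
    [0, 0, 0, 0],
    [0, 0, 2, 0],
    [1, 1, 2, 1],
    [1, 1, 1, 1],
]

# Count how many pixels each class covers.
counts = {}
for row in label_map:
    for label in row:
        name = LABELS[label]
        counts[name] = counts.get(name, 0) + 1

print(counts)  # {'sky': 7, 'person': 2, 'road': 7}
```

A real DeepLab-v3+ output is the same idea at full image resolution, with the label map predicted by a neural network rather than written by hand.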
Google said this will help power various new applications, including the synthetic shallow depth-of-field effect seen in the Portrait Mode of the Pixel 2 and Pixel 2 XL smartphones.
Unlike Apple and Samsung, Google depends on software to achieve the shallow depth of field.
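A software-only shallow depth-of-field effect boils down to: segment the subject, keep those pixels sharp, and blur everything else. Here is a minimal grayscale sketch of that idea; the function names are hypothetical, and a naive box blur stands in for the far more sophisticated processing in the actual Pixel pipeline.

```python
def box_blur(img, r=1):
    """Naive box blur on a 2D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - r), min(h, y + r + 1))
                    for nx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def portrait_effect(img, mask):
    """Keep masked (subject) pixels sharp; blur the background."""
    blurred = box_blur(img)
    return [[img[y][x] if mask[y][x] else blurred[y][x]
             for x in range(len(img[0]))]
            for y in range(len(img))]

# A bright vertical stripe is our "subject" on a dark background.
img = [[10, 10, 200, 10],
       [10, 10, 200, 10],
       [10, 10, 200, 10]]
mask = [[0, 0, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 1, 0]]  # 1 marks subject pixels from segmentation
result = portrait_effect(img, mask)
```

The segmentation mask is exactly what a model like DeepLab-v3+ provides; the quality of the effect hinges on how cleanly the subject is separated from the background.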
Google’s blog post also mentions that the open-source release includes “models built on top of a powerful convolutional neural network (CNN) backbone architecture for the most accurate results…”