
Google explains the science behind the Pixel 4’s Portrait Mode

Source: engadget.com

Google’s latest flagship phone, the Pixel 4, is renowned for its exceptional camera software. If you’re curious about how that was achieved, Google has now revealed more about how Portrait Mode works, including how its dual-pixel autofocus system contributes to depth estimation.

Portrait Mode takes images with a shallow depth of field, focusing on the primary subject and blurring out the background for a professional look. It does this by using machine learning to estimate how far away objects are from the camera, so the primary subject can be kept sharp and everything else can be blurred.

To estimate depth, the Pixel 4 captures the scene with two cameras, the wide and telephoto lenses, which sit about 13 mm apart. This produces two slightly different views of the same scene which, much like the views from a pair of human eyes, can be used to estimate depth. The cameras also use a dual-pixel technique in which each pixel is split in half, with each half viewing the scene through a different side of the lens aperture, yielding even more depth information.

Using both dual cameras and dual pixels allows a more accurate estimate of each object’s distance from the camera, which leads to a crisper result. Machine learning determines how to weight the two depth signals for the best photo.
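The depth-from-parallax idea above can be sketched in a few lines. This is an illustrative toy, not Google’s code: with two cameras a baseline B apart and a focal length f, a point whose image shifts by disparity d pixels between the two views lies at depth Z = f × B / d. The focal length below is a made-up example value; only the 13 mm baseline comes from the article.

```python
# Illustrative sketch (not Google's pipeline): triangulating depth
# from the parallax between two camera views.

def depth_from_disparity(focal_length_px, baseline_mm, disparity_px):
    """Depth (mm) from stereo disparity (pixels): Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_mm / disparity_px

# The Pixel 4's wide and telephoto cameras sit roughly 13 mm apart;
# the focal length here is a hypothetical example value.
depth_mm = depth_from_disparity(focal_length_px=2800,
                                baseline_mm=13,
                                disparity_px=20)
```

Note the inverse relationship: nearby objects shift a lot between the two views (large disparity, small Z), while distant background barely moves, which is exactly the signal used to decide what to blur.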

Google has also improved the bokeh, or blurred-background, effect. Previously, the blur was applied after tone mapping, a step that brightens shadows relative to highlights but lowers the overall contrast of the image. For the Pixel 4’s Portrait Mode, the software instead blurs the raw image first and then applies tone mapping, which leaves the background nicely blurred yet as saturated and rich as the foreground.
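Because tone mapping is non-linear, the order of the two steps matters: blurring the linear raw data and then tone mapping gives a different (and brighter, more saturated) result than tone mapping first. A minimal sketch, assuming a simple gamma curve for the tone map and a box filter for the blur (both stand-ins, not Google’s actual operators):

```python
import numpy as np

# Illustrative sketch (not Google's pipeline): blur the linear "raw"
# signal before tone mapping, versus the older tone-map-then-blur order.

def box_blur(img, radius=1):
    """Average each sample with its neighbors (toy 1-D box blur)."""
    padded = np.pad(img, radius, mode="edge")
    return np.mean([padded[i:i + img.shape[0]]
                    for i in range(2 * radius + 1)], axis=0)

def tone_map(linear, gamma=2.2):
    """Brighten shadows relative to highlights with a gamma curve."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

linear = np.array([0.04, 0.5, 0.9, 0.1])    # toy 1-D "raw" scanline
blur_then_map = tone_map(box_blur(linear))  # Pixel 4 order
map_then_blur = box_blur(tone_map(linear))  # previous order
```

Since the gamma curve is concave, averaging first and tone mapping second yields values at least as bright as the reverse order wherever neighboring samples differ, which is why blurring the raw image keeps the background as rich as the foreground.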
