The Google Pixel 7 debuts Photo Unblur, a new Google Photos feature that can fix blurry or shaky photos using the power of AI.
The new generation of the Pixel family of phones, consisting of the Pixel 7 and the Pixel 7 Pro, once again puts the focus on photography. The renewed camera system present in both models brings with it a series of new features aimed at improving the photography experience by combining new hardware with advances in AI and computational photography.
One of the most interesting novelties in this new series is a mode called "Photo Unblur". This new feature, integrated into the Google Photos app, lets you "repair" images that appear blurry or shaky, including the oldest ones that have been sitting in your gallery for years but which you refuse to delete.
But how does this really work? Last year Google introduced Face Unblur on the Pixel 6 and Pixel 6 Pro to improve the sharpness of faces at capture time by combining information captured simultaneously by the ultra-wide camera and the main camera. This year the company goes even further, harnessing the power of artificial intelligence to breathe new life into photos that would otherwise end up in the trash.
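Google hasn't published the code behind Face Unblur, but the general idea of merging two simultaneous captures can be illustrated with a very rough sketch: score each frame's local sharpness and keep whichever frame looks sharper at each spot. Everything below (the file names, the sharpness measure, the fusion rule) is a hypothetical simplification for illustration, not Google's actual pipeline.

```python
# Illustrative sketch only: NOT Google's Face Unblur pipeline.
# Fuses two pre-aligned frames by preferring the locally sharper one.
import cv2
import numpy as np

def local_sharpness(gray: np.ndarray, ksize: int = 15) -> np.ndarray:
    """Approximate local sharpness as a smoothed map of squared Laplacian response."""
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    return cv2.GaussianBlur(lap ** 2, (ksize, ksize), 0)

def fuse(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Pick pixels from whichever frame looks sharper at each location."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    mask = local_sharpness(gray_a) >= local_sharpness(gray_b)
    return np.where(mask[..., None], frame_a, frame_b)

# Hypothetical, already-aligned captures from the two cameras
main_frame = cv2.imread("main_frame.jpg")
ultrawide_frame = cv2.imread("ultrawide_frame.jpg")
if main_frame is None or ultrawide_frame is None:
    raise FileNotFoundError("expected main_frame.jpg and ultrawide_frame.jpg")
cv2.imwrite("fused.jpg", fuse(main_frame, ultrawide_frame))
```

In practice the real system also has to align the two lenses' very different fields of view and handle faces specifically, which this toy example ignores entirely.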
Here’s how Google uses the power of AI to fix your photos
With this new generation of Pixels, Google didn’t just improve the Face Unblur technology built into the camera, which now kicks in up to three times as often and can fix even faces that aren’t in focus in the scene. The company has also given users the power to restore blurry images saved in their Google Photos library with Photo Unblur.
Isaac Reynolds, product manager for the Pixel camera at Google, explains in the first episode of the new “Made by Google” podcast that the development of this feature was inspired by the artificial intelligence models that are so fashionable today, the ones that can generate images from text.
He points out that within the machine learning models applied to photography, there is a category called generative networks. These models are able to “generate” content that did not previously exist, either through user guidance or by analyzing the context. From there, what the model has learned comes into play, and it can “fill in” or improve what is missing.
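To make the “generative” idea a bit more concrete, here is a minimal, purely illustrative sketch of a learned image-restoration model in PyTorch. The tiny network, its layers, and the random input are placeholders invented for this example; Google has not disclosed the actual architecture behind Photo Unblur.

```python
# Toy sketch of a learned restoration model (hypothetical, not Google's model).
import torch
import torch.nn as nn

class TinyRestorationNet(nn.Module):
    """A toy encoder-decoder that maps a degraded image to a restored one."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, blurry: torch.Tensor) -> torch.Tensor:
        # After training on many blurry/sharp pairs, a model like this learns
        # to "generate" the sharp detail that is missing from its input.
        return self.decoder(self.encoder(blurry))

model = TinyRestorationNet().eval()
blurry_photo = torch.rand(1, 3, 256, 256)   # stand-in for a blurry face crop
with torch.no_grad():
    restored = model(blurry_photo)
print(restored.shape)  # torch.Size([1, 3, 256, 256])
```

The point of the sketch is only the workflow: a degraded image goes in, and a network trained on thousands of examples fills in what the pixels alone no longer contain.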
With the new Pixel feature, the model determines what a human face should look like before and after blur is introduced. It then estimates how much blur is present in the picture and produces a result by “guessing” the user’s facial detail, drawing on thousands of previously analyzed samples.
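As a rough illustration of that first step, estimating how blurry a picture is, here is one classical technique, the variance of the Laplacian, using OpenCV. This is a simple stand-in rather than the estimator Google actually uses, and the file name is hypothetical.

```python
# Classical blur scoring via variance of the Laplacian (illustrative only).
import cv2

def blur_score(image_path: str) -> float:
    """Lower variance of the Laplacian suggests a blurrier image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

score = blur_score("family_photo.jpg")  # hypothetical file
print(f"blur score: {score:.1f}")       # lower -> likely blurrier
```

A score like this only says how blurry an image is; turning that into a convincing restoration is where the generative model described above comes in.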
Reynolds explains that while it is based on a model trained on thousands of photos, the feature never changes the subject’s facial features and produces a very faithful representation of them.
It’s worth mentioning that Photo Unblur is not only able to remove motion blur from faces: it can also correct blurry hands, subjects that are slightly out of focus, or slight distortions.
The feature will be available via Google Photos first on the Pixel 7 and Pixel 7 Pro, and it’s unclear whether the company will choose to roll it out to previous models in the Pixel family over time.