Google is already thinking about how it can compete with one of the most popular recent advances in artificial intelligence.
ChatGPT, OpenAI's tool and currently one of the most popular AI-based services, has apparently set off alarms at Google: the company has reportedly decided to reassign several of its departments so that they begin working toward the goal of "developing and launching new AI products and prototypes."
At least that is what The New York Times reports. According to the newspaper, OpenAI's progress with its powerful conversational question-answering tool has made Google feel threatened, and Sundar Pichai along with members of the company's management have reportedly classified the situation as a "Code Red."
Google will focus its efforts on developing AI tools like ChatGPT
The report says that a good number of departments and employees have begun focusing their efforts on mitigating the threat that ChatGPT could pose to Google's search engine. Specifically, research, trust, safety and other departments have been reassigned to help develop and launch new AI projects and prototypes.
It is not surprising that Google considers ChatGPT a threat. The tool developed by OpenAI has proven very effective at answering user questions in natural, easy-to-understand language, an important advantage over Google's search engine.
Google is likely to have something new in store in this regard for its next big conference, scheduled for May: Google I/O 2023, the company's main event of the year, focused on showcasing the latest innovations in its services and platforms.
At a previous I/O, Google already announced LaMDA, an AI-based technology similar to ChatGPT, capable of answering user questions through natural, human-like conversation.
However, that technology was still at an early stage of development, and there were significant challenges the company had to overcome before a LaMDA-based service could be made available to users: for example, ensuring the AI behaves consistently with the company's principles and avoiding trained models that could spread abusive content.