FuturologyBot t1_j56ihbd wrote
The following submission statement was provided by /u/Surur:
Google executives hope to reassert their company’s status as a pioneer of A.I. The company has worked aggressively on A.I. over the last decade and has already offered LaMDA, or Language Model for Dialogue Applications, a chatbot that could rival ChatGPT, to a small number of people.
Google’s Advanced Technology Review Council, a panel of executives that includes Jeff Dean, the company’s senior vice president of research and artificial intelligence, and Kent Walker, Google’s president of global affairs and chief legal officer, met less than two weeks after ChatGPT debuted to discuss their company’s initiatives, according to the slide presentation.
They reviewed plans for products that were expected to debut at Google’s company conference in May, including Image Generation Studio, which creates and edits images, and a third version of A.I. Test Kitchen, an experimental app for testing product prototypes.
Other image and video projects in the works included a feature called Shopping Try-on; a YouTube green screen feature to create backgrounds; a wallpaper maker for the Pixel smartphone; an application called Maya that visualizes three-dimensional shoes; and a tool that could summarize videos by generating a new one, according to the slides.
Google has a list of A.I. programs it plans to offer software developers and other companies, including image-creation technology, which could bolster revenue for Google’s Cloud division. There is also MakerSuite, a tool to help other businesses create their own A.I. prototypes in internet browsers, which will have two “Pro” versions, according to the presentation.
In May, Google also expects to announce Colab + Android Studio, a tool that will generate, complete and fix code, making it easier to build apps for Android smartphones, according to the presentation. Another code generation and completion tool, called PaLM-Coder 2, has also been in the works.
Google, OpenAI and others develop their A.I. with so-called large language models, which are trained on online information, so they can sometimes produce false statements and exhibit racist, sexist and other biases.
That had been enough to make companies cautious about offering the technology to the public. But several new companies, including You.com and Perplexity.ai, already offer search engines that let users ask questions through a chatbot, much like ChatGPT. Microsoft is also working on a new version of its Bing search engine that would include similar technology, according to a report from The Information.
Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times. The company established a fast-track review process called the “Green Lane” initiative, which pushes the groups of employees charged with ensuring that technology is fair and ethical to approve its upcoming A.I. products more quickly.
The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.
Google listed copyright, privacy and antitrust as the primary risks of the technology in the slide presentation. It said that actions such as filtering answers to weed out copyrighted material and stopping A.I. from sharing personally identifiable information are needed to reduce those risks.
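A minimal sketch of that second kind of filter, a post-generation pass that redacts personally identifiable information, might look like the snippet below. The regex patterns and the redact_pii function are illustrative assumptions, not Google’s actual implementation; real PII detection requires far broader coverage than a few patterns.

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(answer: str) -> str:
    """Replace anything matching a PII pattern before the answer is shown."""
    for label, pattern in PII_PATTERNS.items():
        answer = pattern.sub(f"[REDACTED {label}]", answer)
    return answer

print(redact_pii("Reach Jane at jane.doe@example.com or 555-867-5309."))
# -> Reach Jane at [REDACTED EMAIL] or [REDACTED PHONE].
```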
For the chatbot search demonstration that Google plans for this year, getting facts right, ensuring safety and getting rid of misinformation are priorities. For other upcoming services and products, the company has a lower bar and will try to curb issues relating to hate and toxicity, danger and misinformation rather than preventing them, according to the presentation.
The company intends, for example, to block certain words to avoid hate speech and will try to minimize other potential issues.
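That kind of term-level blocking is the simplest possible safeguard. Here is a hedged sketch, assuming a hypothetical BLOCKED_TERMS set with placeholder entries; production systems rely on curated, multilingual lists and context-aware matching rather than bare word lookups.

```python
# Hypothetical blocklist; the placeholder entries stand in for real flagged terms.
BLOCKED_TERMS = {"badword1", "badword2"}

def contains_blocked_term(text: str) -> bool:
    """Case-insensitive whole-word check against the blocklist."""
    words = (w.strip(".,!?;:'\"").lower() for w in text.split())
    return any(w in BLOCKED_TERMS for w in words)

def filter_response(response: str) -> str:
    """Refuse outright rather than echoing a flagged term back to the user."""
    if contains_blocked_term(response):
        return "Sorry, I can't help with that."
    return response

print(filter_response("A perfectly benign answer."))  # passes through unchanged
```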
The consequences of Google’s more streamlined approach are not yet clear. Its technology has lagged behind OpenAI’s self-reported metrics when it comes to identifying content that is hateful, toxic, sexual or violent, according to an analysis that Google compiled. In each category, OpenAI bested Google’s tools, which also fell short of human accuracy in assessing content.
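For a concrete sense of the classification task being compared, deciding whether text contains hate, sexual content or violence, the sketch below calls OpenAI’s public moderation endpoint via the openai Python package (v1+). Google’s internal classifiers are not public, so this is a stand-in for the kind of tool being measured, not what either company ran in the analysis.

```python
# Requires `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def classify(text: str) -> dict:
    """Score text for hateful, sexual and violent content via the moderation API."""
    result = client.moderations.create(input=text).results[0]
    return {
        "flagged": result.flagged,
        "hate": result.category_scores.hate,
        "sexual": result.category_scores.sexual,
        "violence": result.category_scores.violence,
    }

print(classify("Example user-submitted text to screen."))
```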
“We continue to test our A.I. technology internally to make sure it’s helpful and safe, and we look forward to sharing more experiences externally soon,” Lily Lin, a spokeswoman for Google, said in a statement. She added that A.I. would benefit individuals, businesses and communities and that Google is considering the broader societal effects of the technology.
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/10h4h7s/google_to_relax_ai_safety_rules_to_compete_with/j56dolb/