Google Adds Bard AI to YouTube, Drive, Docs, Maps, Gmail


Google announced a supercharged update to its Bard chatbot on Tuesday: the tech giant will integrate generative AI into the company's most popular services, including Gmail, Docs, Drive, Maps, YouTube, and more. The new version of the AI, which includes a feature that flags when Bard may have given a wrong answer, is neck-and-neck with ChatGPT for the title of most useful and accessible large language model on the market.

Google is calling the new generative features "Bard Extensions," a name similar to the one used for user add-ons in Chrome. With Extensions, you'll be able to send Bard on missions that pull data from the different parts of your Google account for the first time. If you're planning a vacation, for example, you can ask Bard to find the dates a friend sent you in Gmail, search for flight and hotel options on Google Flights, and build a daily itinerary of activities based on information from YouTube. Google promises that it won't use your private data to train its AI, and the new features are strictly opt-in.

Bard can now handle complex tasks from your Gmail account and other Google services.
Gif: Google

Perhaps just as important is a new accuracy tool Google calls "double-check responses." After you ask Bard a question, you can press the "G" button and the AI will check whether its answers are backed up by information on the web, highlighting any statements that may be questionable. The feature makes Bard the first major AI tool that fact-checks itself on the fly.

This new, souped-up version of Bard is a tool in its infancy, and it can be buggy and annoying. But it offers a glimpse of the kind of technology we've been promised since the early days of science fiction. Today, you have to train yourself to phrase questions in the limited vocabulary a computer can understand. That's nothing like the devices you see on shows like Star Trek, where you can say "computer" and give the machine instructions in the same language you'd use to ask a human for help with any task. With these Bard updates, we take a small but meaningful step closer to that dream.

Gizmodo sat down with Jack Krawczyk, head of product at Google Bard, to talk about the new features, the chatbot's problems, and what the near future of AI holds for you.

(This interview has been edited for clarity and consistency.)

Jack Krawczyk: Two of the things we consistently hear about language models in general are, first, "It sounds really cool, but it's not really useful in my everyday life." And second, you hear that they produce a lot of what people in the know call "hallucinations." We have answers to both of those things starting tomorrow.

We are the first language model that integrates directly into your personal life. With the announcement of Bard Extensions, you finally have the ability to opt in and have Bard retrieve and collaborate with information from your Gmail, Google Docs, or elsewhere. And with double-check responses, we're the only language-model product out there that's willing to admit when it's wrong.

Thomas Germain: You pretty well summed up my reaction to the last year of AI news. These tools are amazing, but in my experience, fundamentally useless for most people. With all the other Google apps involved, Bard is starting to feel less like a party trick and more like a tool that could make my life easier.

JK: At its core, interacting with language models lets us change the mindset we have toward technology. We are so used to thinking of technology as a tool that does things for you: tell me how to get from point A to point B. We find that people naturally gravitate in that direction. But it's really inspiring to see it as a technology that does things with you, which is not intuitive at first.

I've seen people use it for things I never would have expected. We had someone take a photo of their living room and ask, "How can I rearrange my furniture to improve the feng shui?" It's that collaborative aspect that I'm excited about. We call it "augmented imagination," because imagination and curiosity live in your head. We are trying to help you in the moments when ideas are really fragile.

TG: We've seen many instances where Bard or some other chatbot says something racist or makes dangerous suggestions. It's been almost a year since ChatGPT launched. Why is this problem so difficult to solve?

JK: This is where I think the double-check feature is really useful to understand at a deeper level. So the other day I cooked swordfish, and one of the challenging things about cooking swordfish is that it can make your whole house smell for days. I asked Bard what to do. One of its suggestions was to "wash your pet frequently." It's a surprising solution, but it makes some sense. But if I use the double-check feature, it tells me that's questionable, and results on the web say washing your pet too often can strip the natural oils it needs for healthy skin.

We've built the app so that it goes sentence by sentence and searches Google to see whether it can find things that validate its answers. In the pet-washing case, there isn't necessarily a right or wrong answer; it's a question that requires nuance and context.

Bard's double-check feature lets you know when it may be wrong and provides links to references on the web.
Gif: Google

TG: Bard has a small disclaimer saying it may provide inaccurate or offensive information that doesn't represent the company's views. More context is good, but the obvious criticism is, "Why is Google releasing a tool that gives offensive or incorrect answers in the first place?" Isn't that irresponsible?

JK: What these tools are really useful for is exploring possibilities. Sometimes when you're collaborating, you guess, right? We think that's the value of the technology, and no other tool offers it. We can give people a tool for fragile situations. We heard feedback from a person with autism who said, "I can tell when the person writing me an email is angry, but I don't know if the response I'm going to send will make them angrier."

For that, you need interpretation rather than analysis. You have this tool with the ability to solve problems that no other technology can solve today. That's why we have to strike this balance. We are six months into Bard. It's still an experiment, and the problem is not solved. But we believe the technology can do profound good in people's lives, and that's why we think it's important to get it into people's hands and gather feedback.

The question you're asking is, "Why deploy technology that makes mistakes?" Well, it's collaborative, and part of collaboration is making mistakes. You want to be bold here, but you also have to balance that with responsibility.

TG: I imagine that someday there will be no difference between Bard and Google Search; it will just be Google, serving whatever you find most useful in the moment. How far off is that?

JK: Well, an interesting analogy is tool belts versus tools. You have the hammer and the screwdriver, but then there's the belt itself. Is it even a tool? That's probably a meaningful debate. But right now, most of our technology works something like this: I go to this site to get one job done; I go to that site to get another job done. We have all these personal tools, and I think they will be supercharged by generative AI. You're still using different tools, but now they work together. That's a different kind of generative experience, and I think we're taking the first step in that direction today.

TG: This is probably not what you want to talk about today, but I want to ask you about sentience. What do you think about it? Is it an important question for us to be asking people like you right now?

JK: I think the fact that people are asking the question means it's an important one. Is what we are creating today sentient? Clearly, I would say the answer is no. But whether it will ever have the chance to become sentient is up for debate. In spirit, I think it centers on compassion in its many forms. I haven't seen any indication that computers can have compassion. And pulling from Buddhist principles here: to have compassion, you need to suffer.

TG: So you haven’t given Bard any pain sensors yet?

JK: (Laughs) No.

TG: Can you share anything about Google’s plans to integrate Bard with Android?

JK: Currently, Bard is a standalone web app, and the reason we put it there is that it's still an experiment. For an experiment to be useful, you want to minimize the variables you put into it. At this point, our first hypothesis is that a language model connected to your personal life is going to be very useful. The second hypothesis is that a language model that is willing to admit when it's wrong, and to show how confident it is in its responses, is going to create a deeper trust in how people engage with the technology. Those are the two hypotheses we are testing. There are many more we want to test, but for now, we're trying to minimize the variables.
