Featured Presenter
Yaniv Markovski
Former Head of AI Specialist at OpenAI

Boost ops efficiency, drive revenue, & save big with omnichannel messaging
Building an autonomous customer experience function with ChatGPT

About the webinar
How did the OpenAI team develop a robust customer support platform using their own groundbreaking product? Join Yaniv Markovski, the driving force behind scaling OpenAI's customer experience (CX) evolution across people, technology, and processes.
In this engaging fireside chat, Yaniv shares invaluable insights from his journey leading up to the launch of Dall-E 2 and ChatGPT: the principles and thinking behind the systems his team built, how they navigated the challenges and capitalized on the opportunities amid the surge in adoption (and the subsequent growth in support case volume) post-launch, and what it takes to integrate generative AI capabilities into the customer experience (CX) function.
Emmanuel [00:00:03] All right. Thank you very much, all, for joining us for this webinar on building an autonomous customer experience function with generative AI. Before we start, I just want to cover a few housekeeping items. If you have any questions, please submit them via the Q&A on BrightTalk. You will receive a link with the recording after the webinar is complete, so just be aware that we are recording this. And if you have any technical difficulties, please email Daniel@sendbird.com. Today I'm very excited to have Yaniv Markovski with us. Yaniv is an ex-OpenAI employee. Yaniv, can you please introduce yourself?
Yaniv [00:00:50] Hey, thank you, Emmanuel. Yeah, I'm based here in San Francisco. Really glad to be here on this webinar. For most of my career, I worked on and built some pretty large support organizations. I worked at Zendesk and at some other companies in cybersecurity and in the mapping area. And my goal was always to work directly with customers and make sure that they're satisfied and happy with what they're getting. Most recently I worked at OpenAI, building some autonomous systems with, you could say, one of the craziest technologies that we've seen in the last decade or so. And again, my goal was always to work directly with customers, bringing the technology on one hand but also the high touch on the other, and making sure that our customers are happy and satisfied.
Emmanuel [00:02:00] Great, great. Great to have you here. So, I am Emmanuel Delorme. I'm a product marketing manager at Sendbird. I'll go over a little bit of what we do, in case you're not familiar. We believe that every business should have reliable customer communications. We have technologies for in-app communication: we are a chat leader, and we provide voice and video. And one recent thing we've done this year is we've created automated conversations. We had our first integration of GPT from OpenAI last April. We already had a chat interface, so we were able to plug that in and kind of revolutionize this experience for all our customers. It's evolved. We've then created a smart assistant. So it had the LLM, but it also had more features around it so that you could control it and feed it information. You could also bypass it by having preferred answers if you wanted. We also used the latest functions from OpenAI that allow us to retrieve structured data from a database and use that for the interaction with the user. And we're about to release more, with better dashboards. Since generative AI chatbots are able to work as supporting people for knowledge, for sales, for commerce, or for marketing, we will have dashboards so that people can evaluate how all these different types of agents are performing and how they're helping their business. And, I'm telling you this ahead of time, it's coming soon: we're also going to be supporting some additional LLMs to help with privacy and performance enhancements. So just to give you a little overview, we are a cloud-based communication platform. We use APIs to integrate all the technologies I've mentioned, across multiple regions. Currently we have about 7 billion interactions per month running on our platform, with 300 million users. And we work with people from all kinds of different verticals.
The top line, for example, is a lot of the on-demand economy. Then the social economy, health care, fintech, obviously, but also gaming and social, education, etc. For gen AI, we're already touching on all those different verticals, but we can go into details of use cases maybe further in the discussion. So, after this quick introduction, let me stop sharing these slides and go straight to some questions for Yaniv. Can you walk us through your journey a little bit, and how, as a function leader at OpenAI, you decided to get to an autonomous experience? What was the journey? One thing I want to say to everybody here is that we're dealing with someone who got the tool first. What may seem familiar to us today, for him was pioneering this new world. So, yeah, if you could just describe a little bit of what the thinking was, the mind shift that you went through, and what you tried to create?
Yaniv [00:05:38] Yeah, absolutely. So even though I kind of started with my introduction and gave a little bit of information, I think there is way more to it. I started my career as a developer, and I built a quote-to-cash tool. But even when I built this as a developer, I always liked talking to customers, understanding my internal sales engineers who were selling very large and complex solutions. I really wanted to understand what features they needed, work directly with them, and write the code. But honestly, I enjoyed working with these customers much more. Throughout my career, I really latched onto this, and I moved between roles as someone who goes into a company, usually a startup, and builds this function. At Zendesk, I joined pre-IPO in San Francisco, and a few months in I found myself traveling the world, building support organizations around the globe: 24/7, follow-the-sun, etc. We're talking about more than ten years ago, and back then, of course, the technology was very different. We built technology around agents, right? Human agents were in the center, and we always built more and more features to make the day-to-day much easier and to make sure that customers could actually get their own answers and solutions. Fast-forwarding through a few other roles like that, where I'm building an organization from scratch, bringing the values, the culture, etc.: we're talking about probably five, six years ago, when chatbots already existed, and I worked at a startup called Mapbox. We had many, many customers. Some of the customers were hobbyists, people who built location services on top of the Mapbox platform, when they just wanted to put up a map: hey, here's the place where I ate a burger, and it was amazing.
And, like, just building points of interest on top of a map. On the other side, there were very, very large enterprises that used our products. And it was fairly hard to have one solution, one support culture, one type of support operation for such different personas, for such different users. On one hand, you have developers and hobbyists; on the other end, you have enterprises that are looking for a lot more. We started building playbooks for each of them. But we also knew that as our platform became more valuable and went viral, if you will, more customers and more hobbyists and more developers were actually coming and using the platform. We knew we needed to automate something, because we couldn't just scale with more and more people on the team. We hired many people. We trained them. We grew them; these people grew to be product managers and software engineers and QA engineers and whatnot. And that's actually one of my favorite things as a support leader: to bring in very, very strong talent and help them get to the next career level. We started implementing some chatbots, and we did pretty massive research, and it was okay. It wasn't great. I can give you one example. This platform supported APIs and SDKs. And sometimes it was a little bit embarrassing, because somebody would ask a question about iOS, like developing on iOS, but the answer would be for Android. If a human read this question, they would probably understand that it's an iOS question, or they would be able to ask a follow-up question. But these chatbots weren't sophisticated back then. They were proprietary models that these companies built. And we implemented it, but we knew it was just not what we needed. We played a lot with GPT-2 and GPT-3, and we started understanding that there was something there.
Now, I can tell you, as somebody who used GPT quite a lot, that we didn't know what we were doing back then. But at the same time, we knew that this was a little bit different. We knew we could control and understand the narrative in a way that was very, very different. Fast-forwarding a little bit more, I found myself at OpenAI, building an internal platform that was supposed to help users and customers. I'll stop here for one second, Emmanuel. I don't know if you have some follow-up questions or if you want me to continue.
Emmanuel [00:11:05] Yeah, I just wanted to go there, because that's the point where things shift, right? You're getting chatbots, you're trying to get them to assist and support you. And then, from what I understand, when you were at OpenAI, you flipped the board: now you put the AI forward, and now you are in the position of supporting it. Can you explain a little bit better what that meant for the organization and how you went through that?
Yaniv [00:11:34] Yeah, absolutely. So, I kind of said it before, but in all the systems we were used to (and by the way, it's not only in the support world, it's actually with everything else), you have the human agent in the middle, and then you have the applications that are supporting the human agent. We flipped it. We wanted to create an autonomous system, AI-based, LLM-based, with a large language model in the center, and then human agents who are actually supporting the system. So that was the first principle, the first objective that we had. OpenAI was building LLMs and very sophisticated technology when I started, around two years ago. Agents and all of these other frameworks that some people are aware of did not exist yet. We created some of it, or people who used the OpenAI platform created these frameworks and methods, but this wasn't something we could use back then. We already knew that for information-based questions, if somebody just has a how-to question or needs some sort of information, we could, at that point in time, two years ago, give very, very good answers. It's half generated, half retrieval. We understood the intent of the question, and we knew how to answer it because it was saved in embeddings, which is a vector database; we can talk about that in a second. What's important is that we knew that, with very high precision, we could answer information-based questions. The challenge was also: how do we take actions? How can we understand the question and take an action for it? As we all know, in the support world, the majority of the questions are actually not sophisticated. They are not hard on the human brain. If I, as a human agent, need to understand a question, go to another tool.
Check the status of something, check a log, maybe press a button somewhere. It's actually not sophisticated if I have the policy, if I know what needs to happen. It's actually not hard, especially for the human brain, right? We can do much more than that. So how do we actually get the autonomous system to understand it? And that was a little bit, or a lot, of what we built there. Again, that was before agent networks and AI agents, but we had a pretty cool way of understanding and going and taking some actions, and then, with the help of generative AI, going back and telling the customer that it happened. Yeah, I'll stop here for one second.
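The loop Yaniv describes, understand the intent of a ticket and then dispatch a simple action like a status check or a key reset, can be sketched roughly as below. Everything here is hypothetical: the action names, the keyword rule standing in for an LLM intent classifier, and the stubbed handlers only illustrate the shape of such a system, not OpenAI's actual implementation.

```python
from typing import Callable

# Hypothetical registry of actions the autonomous system may take.
ACTIONS: dict[str, Callable[[str], str]] = {}

def action(intent: str):
    """Register a handler for one intent label."""
    def register(fn):
        ACTIONS[intent] = fn
        return fn
    return register

@action("check_status")
def check_status(user_id: str) -> str:
    # Stub: a real handler would query an internal status service or log store.
    return f"Account {user_id} is active."

@action("reset_key")
def reset_key(user_id: str) -> str:
    # Stub: a real handler would press the "rotate key" button via an API.
    return f"A new API key was issued for account {user_id}."

def classify(ticket: str) -> str:
    # Keyword stand-in for the LLM intent-understanding step described above.
    return "reset_key" if "key" in ticket.lower() else "check_status"

def handle(ticket: str, user_id: str) -> str:
    # Understand the question, take the action, report back to the customer.
    return ACTIONS[classify(ticket)](user_id)
```

In a real system, the final "report back" string would itself be rephrased by a generative model, which is the "go back and tell the customer" step Yaniv mentions.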
Emmanuel [00:15:00] Yeah. So just to help people understand: you said there was a combination of two things happening with the LLM, the large language model, GPT. One was that it could generate some of the answer itself, understanding the context, understanding the intent, and building on that. And data retrieval, just to make sure everybody understands, means the AI itself will go look for some of the information or responses in a database and then include that in the ticket answer, for example. Is that correct? Yeah, okay, very good. And then, in terms of embeddings: sometimes people think of embeddings as something that happens at the pre-training level. When we use ChatGPT, we don't do embeddings; we may give it a large prompt with a lot of information and maybe it makes its own embedding. Whereas you were actually providing a lot of data to the engine, which kept training itself in order to keep improving the responses and make them more and more autonomous. Is that how it worked?
Yaniv [00:16:06] Yeah. So we specifically did not train the model for the support questions. We used prompts and other methods. But yeah, just one word about embeddings. An embedding is just a mathematical manipulation: you take text, you apply a mathematical manipulation on top of it, and then you store it. It has value once you understand the intent of, let's call it a customer ticket. We can translate that to a mathematical representation as well, and then we can understand how it relates to the other stored mathematical vectors. Once there is a good correlation between the question and the answer (and it can be other things as well, but in this case it's a question and an answer), then you can retrieve, and you can manipulate the text again with generative AI or with other methods.
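A toy sketch of what Yaniv describes: turn text into a vector, store it, and match an incoming ticket to the closest stored question by a similarity measure. The bag-of-words "embedding" here is a deliberately crude stand-in for a learned embedding model, and the knowledge-base entries are invented for illustration.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: word counts. A real system would call an embedding model.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity: the "good correlation" between two stored vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Tiny stand-in for a vector database: stored (question, answer) pairs.
KB = [
    ("How do I reset my API key?", "Rotate the key from the dashboard."),
    ("When will I get off the waitlist?", "Invites go out in signup order."),
]
INDEX = [(embed(q), a) for q, a in KB]

def retrieve(ticket: str) -> str:
    # Embed the incoming ticket and return the best-matching stored answer.
    vec = embed(ticket)
    return max(INDEX, key=lambda pair: cosine(vec, pair[0]))[1]
```

The retrieved text would then be handed to the generative model for rephrasing, which is the "half generated, half retrieval" split mentioned earlier.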
Emmanuel [00:17:05] Got it. So, can you give us some insights on the process, the learning process? I guess the first time you got GPT and you started to answer ticket questions with it, did it hallucinate in the beginning? How did you manage risk there? That's the technical question. I'm just interested in sharing with others how you went about that process: did you go 0 to 1? What happened? And then can you tell us a little bit about the context? Because once ChatGPT launched, I think you guys reached a million users in five days, which I think had never been done before. So can you explain that learning process, that methodology, along with the mayhem that you guys caused by making this publicly available?
Yaniv [00:17:58] Yeah. Okay, so, actually, a few things. Let's put a timeline in front of everybody. When we started building this application, or this platform, OpenAI was completely B2B. There was no B2C, there was no Dall-E, there was no ChatGPT. There was an API platform with many, many customers, kind of what I described, very similar to what I described about Mapbox, right? There were the hobbyists who just wanted to interact with a machine and, you know, feel the magic firsthand. But there were also companies of all sizes, enterprises, startups, everything in the middle, that tried to build their own products, features, and solutions with AI. When we started building this, I also had the luxury of hiring the best people out there, people who usually would not fall under a support engineer job description. I had machine learning engineers and software engineers and people with advanced degrees in AI. So I was very, very lucky to have such a strong team. A very small team, but a very strong team that took on this mission: we don't want to answer the same question more than once, we want to make sure that it's autonomous, and we want to solve questions before they even come to us. We packaged all of that into "this is the platform that we're building." So this talent, these great people, worked very hard on building it. Now, you asked me about the technology. When we started, not only was it B2B, but GPT-3.5 had just come out, which was a very, very large jump between 3 and 3.5. But we still didn't know how to use it that well. We started building the platform not only with GPT; we built some classifiers that would help us when tickets and signals come in. And when I say signals, I mean we didn't only look at tickets that were coming in.
We also looked at other signals that customers were leaving when they were using our system. We wanted to analyze them, we wanted to understand what was going on, and on top of that, we wanted to act on them. We started building these classifiers, and these classifiers were standard, traditional machine learning classifiers, the kind of model that most people who study statistics know how to implement. And we built them and actually got really good results with non-GPT methods. The only problem we had was that every time there was a new issue, these classifiers needed to be retrained, etc. And we were like, okay, we can train it every day, we can train it every few hours, fine. But it was just so cumbersome. Then, in this timeline, around six months before GPT-4 came out, we had access to it internally. It went through a lot of red teaming, a lot of teams inside or outside of OpenAI trying to break it and understand the capabilities of the model, and we started implementing it. Internally, we built the platform in a way that it doesn't matter what the model is, whether it's 3.5 or 4 or, you know, something else in the future; we can just replace it. Of course, we would need to edit and change some of the parameters and some of the implementation, but it wasn't that hard for us to do. We saw that GPT-4 was so capable of understanding the intent. We also put a lot of logic into it, right? If a question is a spam question, we tag it as spam and we don't deal with it. If the question is too vague, we tell the model: hey, if it's too vague, please exit. If the question is something else, maybe there are other instructions. And all of that was in the prompt. That was kind of the step-by-step approach we took. Now, you asked me specifically about hallucinations.
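The two ideas in that answer, routing rules written directly into the prompt (tag spam, exit on vague tickets) and a platform that doesn't care which model sits behind it, can be sketched like this. The prompt text, the labels, and the fake model are all invented for illustration; a real deployment would pass in a callable that hits an actual LLM API.

```python
ROUTING_PROMPT = """You are a support triage assistant.
- If the ticket is spam, reply exactly: SPAM
- If the ticket is too vague to act on, reply exactly: NEEDS_INFO
- Otherwise reply with a one-line intent label.

Ticket: {ticket}"""

class TriagePlatform:
    """Model-agnostic wrapper: swapping GPT-3.5 for GPT-4 (or anything else)
    only means passing in a different `complete` callable."""

    def __init__(self, complete):
        self.complete = complete

    def triage(self, ticket: str) -> str:
        return self.complete(ROUTING_PROMPT.format(ticket=ticket)).strip()

def fake_model(prompt: str) -> str:
    # Deterministic stand-in for an LLM, so the sketch runs offline.
    ticket = prompt.rsplit("Ticket:", 1)[1].strip()
    if "buy followers" in ticket.lower():
        return "SPAM"
    if len(ticket.split()) < 4:
        return "NEEDS_INFO"
    return "billing_question"

platform = TriagePlatform(fake_model)
```

Keeping the model behind a single callable is what makes the "just replace 3.5 with 4" upgrade a parameter change rather than a rewrite.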
And obviously hallucinations are a big topic, and we can probably talk about that for hours. But if I fast-forward to the most important thing about hallucination: it will become less and less of an issue as time goes on, because the models will be stronger, etc. But it will also be less of an issue because the way that LLMs and the APIs from OpenAI and others are being consumed is now, I think, in a different phase than it was a year ago or six months ago. Most companies already understand that it's not plug-and-play; it's not "I'll just have one prompt and I will get one answer and that's it." There are a lot of validations, a lot of frameworks, a lot of methods to make sure that these hallucinations are not a thing anymore, and that you can actually input something, output something else, and validate some of these things. You can have validation scores. You can have references to where the answer came from. We talked about embeddings and the vector database for a second; you can also bring that in once you build the magic prompt, right? A lot of the process is actually to understand what you need, put it in one place, and then give the instructions to the LLM to operate. Now, maybe there were multiple steps along the way that used LLMs, and that's important and good. But at the end of the day, the generative AI that is actually outputting something for the customer can be validated and can have the right context from all of these channels and all of these methods and frameworks that I just talked about.
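One cheap version of the "validation scores and references" idea: before a generated answer goes out, score how well it is grounded in the retrieved source passages, attach those sources as references, or withhold the answer entirely. The token-overlap score below is a crude illustration, not a production validator, and the 0.6 threshold is an arbitrary example value.

```python
import re

def validation_score(answer: str, sources: list[str]) -> float:
    # Fraction of answer tokens that also appear in the retrieved sources:
    # a rough proxy for "is this answer grounded in known material?"
    ans = set(re.findall(r"\w+", answer.lower()))
    src: set[str] = set()
    for s in sources:
        src |= set(re.findall(r"\w+", s.lower()))
    return len(ans & src) / len(ans) if ans else 0.0

def guarded_reply(answer: str, sources: list[str], threshold: float = 0.6):
    # Below the threshold, return None so the ticket escalates to a human
    # instead of risking a hallucinated reply.
    if validation_score(answer, sources) < threshold:
        return None
    return answer + "\nSources: " + "; ".join(sources)
```

Production systems typically layer several such checks (entailment models, citation verification, a second LLM pass); the point is only that the final output is gated, not trusted blindly.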
Emmanuel [00:24:36] Got it. So, is a process where humans give feedback therefore needed? Or is it just a mass of data, having more and more data, that helps the engine itself get better?
Yaniv [00:24:53] Yeah. I think there are multiple methods. You can do it with humans and RLHF, which is reinforcement learning from human feedback. You can do it with simple sampling. What we did, a lot of the time: we knew that some of the questions we were answering, we were answering very, very well, and we knew that some questions we didn't answer really well. But every business can have their own policy on their tolerance for mistakes. What is your tolerance for mistakes for a specific topic? Maybe if it's a legal issue the team is answering, your tolerance is different than for a very simple how-to question where you're fine with being mistaken. If you're in health care, maybe you have a different tolerance, right? And even within health care, you probably have different verticals and different assessments for the questions that you're getting.
Emmanuel [00:25:54] And so is it as simple as getting a feel for how good those questions and answers are, monitoring that, and then putting it in the prompt? Like saying: if that type of question comes up, because we know there would be a terrible answer and it exceeds our level of risk tolerance, just answer that you don't know, and you'll be passed to a human agent or something like this. Is it as simple as building the prompt around that?
Yaniv [00:26:22] So you used the word "simple" twice, and I don't think anything is simple. So let's take a step back from "simple." But yes, I think that in essence, if you experiment with it enough and you understand where your data is strong, where your data is not, where questions are completely new, where your questions are repeating, then yes, the answer is that you can actually start monitoring these. You can actually change the prompts. You can write more documentation for your knowledge base that you are then going to embed or inject into the prompt, etc. Yeah.
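The per-topic tolerance Yaniv describes (legal vs. a simple how-to vs. health care) amounts to a routing table: a minimum confidence the system must have before it may answer on its own. The topics, thresholds, and confidence scale below are invented placeholders; real values would come from sampling your own tickets and answers, as described above.

```python
# Hypothetical per-topic risk tolerances: the minimum model confidence
# required before the autonomous system may answer without a human.
TOLERANCE = {
    "how_to": 0.5,   # being occasionally wrong is acceptable here
    "billing": 0.8,
    "legal": 0.99,   # almost always escalate to a human
}

def route(topic: str, confidence: float) -> str:
    # Unknown topics default to a conservative threshold.
    threshold = TOLERANCE.get(topic, 0.9)
    return "auto_answer" if confidence >= threshold else "human_agent"
```

The table itself is then the thing you tune as you monitor: topics where the data is strong get lower thresholds, new or sensitive topics stay routed to people.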
Emmanuel [00:27:00] Okay. Very cool. So, just for the story: you said there was a first phase that was B2B, and obviously you put a lot together. Did anything change on the day ChatGPT launched? Did your whole support system collapse and you had to rebuild it, or did it actually do pretty well dealing with the volumes, the scale, and probably the millions of different questions that started to come up?
Yaniv [00:27:25] Yeah, good question. So we started building this platform when we were B2B, before Dall-E and before ChatGPT. At some point Dall-E was introduced, and I think that was the first milestone of people understanding what gen AI means: you input a prompt and you get these beautiful images. I'm not an artistic person, but the things I was able to create with Dall-E were pretty amazing. Dall-E was also B2C, right? All of a sudden we had many, many users who were not building with our APIs, and their understanding of the situation was very different. They also have a lot of feedback, good or bad, that is coming in. The company is also starting to understand the power of gen AI, and we're starting to monitor how people are using it. Some people are using it in a really cute way, but some people are also using it in a very nasty way. How can we learn? How can we improve our moderation? Some of that is automatic, but it's not only automatic; it's different from just supporting the people. You actually have to monitor and understand how people are using the system. So we understood all of these things. But then, yeah, the number of questions just skyrocketed like nothing we saw before. There was good news and bad news at the same time. The good news was that the system was there, the platform was there; we didn't need to start from scratch. The bad news was that, okay, we thought we were building one thing, and it's actually something very, very different. Thankfully, the questions that we received from the customers or users of Dall-E weren't sophisticated. You know, it could be: hey, I've been on the waitlist for a month now, when am I getting access? Again, this is not a hard question.
If I have the policy written, then just as a human can understand this question, compare it to the policy, and generate an answer, generative AI can do the same. Now, it's not as simple as using a macro. It's not that everybody who writes in gets an answer, because I know the policy on one end, but on the other end there is fraud. There are people who registered with ten different email addresses. There are people who actually got access but didn't know it, right? So you do need to take some actions and understand the situation. But again, for all of that, you can either write code to validate these things, or you can use a little bit of gen AI to understand the full picture and then answer. That was Dall-E, and it was amazing. But then ChatGPT came. What we thought was crazy in terms of demand and questions and interactions was basically nothing. You couldn't see the graph of new questions. You talked about a million users after a few days; think about the hundreds of millions that were there as well. Some of them paying, some of them not paying. Many, many more questions. Many more problems and issues. As an anecdote, and this is publicly available: OpenAI didn't think that ChatGPT was going to be what it became. It was supposed to be: hey, world, here's a way to interact with gen AI in a very easy-to-understand UI. But all of a sudden, you know, it became what it became. And then we started to drown in customer signal, quite a bit.
Emmanuel [00:31:52] And so, any learnings from this, in case someone were to launch their service and, for whatever reason, it started to scale up like crazy? Any learnings on whether you changed the way you were working with ChatGPT or GPT, or your methods? I also heard that you had to create policies and then have those available for retrieval. So it looks like there's a lot of prep work for anyone who wants to do a good job, right? Making that available to the LLM so it can retrieve the data. So in that case, did it mean twice as much work and more policies? You said that, thankfully, a lot of them were easy questions that went well. But yeah, any learnings toward that, or what people should be ready for in case they reach a high scale?
Yaniv [00:32:46] Okay. So, for one, I would say that we're in a different world right now. The amount of data and expertise around LLMs that people have now when they're building is very different than, you know, a year and a half, two years ago, when we started building this application. The platform that we built kind of used the agent framework, but the agent framework wasn't even a thing back then, right? So what I'm trying to say is that now there are many more resources and many more guides to follow and to understand. We learned a lot of things. I talked about the traditional machine learning classifiers that we built, and then all of a sudden we moved to an LLM. We sampled a lot of tickets and a lot of answers, and almost on an hourly basis we changed the prompts, and we had to understand how fixing one thing might break a different thing. And that happened quite a lot. We had different versions of the LLM, and, you know, you have a better version, but it doesn't mean that it's always better. Sometimes it actually breaks some other things. So we learned a lot just from operating the thing. And I think this is one of the most important tips I can give anybody: all the tools around you are going to add, or probably already have added, AI as a feature or a product. It's something new, and if you don't really play with it, you don't really know how it's going to behave. And it is very important to understand, because it's almost impossible to take a traditional workflow (and by workflow I mean even the job description of the people you're hiring to do the work) and basically put this AI feature or product on top of it. It's not always going to work as you expected, and there are some gaps and leaps between the two methods.
Understanding how to leverage AI can be very beneficial for organizations. One other thing that I wanted to add: obviously, OpenAI had really big interest from enterprise customers even before ChatGPT. But, you know, something really clicked when Dall-E and ChatGPT became such successful products. Obviously the demand went up, and larger deals and larger enterprises came and wanted to start to build. For us, it also taught the lesson that this autonomous system is great for maybe B2C, or for the simple B2B questions, but when you're signing very, very large deals, very important deals, you also need to bring the human touch, right? It's kind of like autonomous cars: you don't start from the end, with the car without the wheel that drives you everywhere. You add features, right? You start with driver assist, and then maybe some safety features, etc., and then, all of a sudden, it can actually drive in your lane without drifting into a different lane, and maybe it can even brake for you. As I said at the beginning, there are different personas, different types of users. You probably want to have a policy, or at least some sort of stance, on how your company and your product are helping this segment or that segment. Sometimes it's all segments and sometimes it's a very narrow segment. But I think it's also important to understand how to juggle these different priorities and different personas.
Emmanuel [00:37:23] I think that's very interesting. We see that with our customers. You hear the best-practice advice, right? Start small. Do something that's effective and that works. We've done that with our chatbot. Some people just started with a summarize feature: they had a human agent, and they wanted that agent to be enhanced by generative AI while they handled the person. In some ways, for a lot of our customers, the AI chatbot closed the gaps in the customer interaction. But as soon as it gets sensitive, or goes beyond what the company is comfortable doing, you transfer it. By that point the person has been taken care of. There's been an interaction, it was conversational and human-like, so people enjoy that. And then the summarized content comes with a lot of information for the agent to be more effective. So yeah, I agree, some people start very small with what we have and then they start building. What I got from you, which I think is exactly true, is that it's like a tool: unless you work with it, you won't know how to use it as well as someone who does. So for every company it's an investment. I agree that people can fine-tune further, specifically for their business, and always monitor those responses. We offer features like that, where people can flag when a response is very good so it can be reused in the future. And I think with enterprise companies, like you said, it can't just be something off the shelf that you use as is; you need to work with the people, bring that expertise, work on building prompts, etc. We started with an AI bot on our docs, and there was a learning process initially: it would invent plans we didn't have and didn't offer. Obviously we had to control that, but the team very quickly managed to build better prompts.
So I have really seen that learning experience. And obviously, since we're trying our own technology, we can then use that learning and knowledge with our customers. So I completely relate to that, and I can see how that's a good way to go: you start small with something that's effective and works, and then you expand as you get more knowledgeable about how to use the tool.
Yaniv [00:39:53] Yeah. Experimenting is the most important thing here, because I can give you many small examples of how you make the operation much, much better. For example, when a customer writes in, they write a story; maybe English is not even their first language. How can you make the summary of their question, the essence of the intent of what they wrote, as accurate as possible? So that later, when you move to retrieving the answer, you're actually getting back to the essence of what they wrote. You can basically cut and paste what they wrote, or you can distill what they said into something that works in the system. At the same time, how do you write knowledge articles? The way people consume information is very different from the way an AI consumes information. If I want a recipe right now, when I'm cooking, most of the words on the page are actually not about the recipe; they're about SEO, because the person who wrote that blog post wants everybody to see it. But the LLM doesn't care about the SEO at this point. It needs to understand the question, the intent, the answer, and so on. So experimenting with this technology within your products is actually the most important thing.
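The "summarize to the essence of the intent, then retrieve" idea can be sketched very roughly as below. Everything here is a toy stand-in: the intent summarizer would really be an LLM call, and the word-overlap retriever stands in for embedding search; the knowledge-base entries are invented for illustration.

```python
# Distill a long, possibly non-native-English message into its essential
# intent first, then retrieve against that summary instead of the raw text.

STOPWORDS = {"i", "the", "a", "my", "is", "and", "to", "it", "so", "very", "but"}

def summarize_intent(message: str) -> set:
    # Toy "summary": keep only content-bearing words (an LLM would do this better).
    return {w.strip(".,!?").lower() for w in message.split()} - STOPWORDS - {""}

KNOWLEDGE_BASE = {
    "reset-password": "how to reset your account password",
    "billing-cycle": "when your subscription billing cycle renews",
}

def retrieve(message: str) -> str:
    """Return the knowledge-base article id with the most word overlap."""
    intent = summarize_intent(message)
    return max(KNOWLEDGE_BASE, key=lambda k: len(intent & set(KNOWLEDGE_BASE[k].split())))

best = retrieve("Hello, so sorry, my english not good, I forget password and cannot reset it")
```

The point survives the toy implementation: retrieval quality depends on how well the customer's story is reduced to its intent before matching.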
Emmanuel [00:41:30] Very cool, very cool. We're at about 40 minutes, and I'd like to leave some time for questions. But before we do that, can you maybe give us an insight into how you see the future, where you see things going, maybe specifically for CX and in general for generative AI? What can we expect? I hear a lot of things about artificial general intelligence, for example. I just wanted to get your thoughts on this, since you've been very close to those who are leading the world in that direction.
Yaniv [00:42:05] Yeah. I'm not going to be super crazy with my analysis at this point. In the next couple of years, very short term, I think these teams will get superpowers. It will come through autonomous systems and through agent assist. I'm working with companies now, helping them with goals like: I have tier-one agents, who usually work on product support and how-to questions, and I want to make them tier-three support, the ones who work on troubleshooting very deep problems. A lot of the time, tier one doesn't even have the access control to grab some of the logs or the more sophisticated information they need. This type of agent assist is already happening, and it will get much, much better. If we're talking about the medium term, teams will just look very different. You'll be able to do much more with much less, and you'll be able to contribute really good data, because it's all about the data you have. A general LLM is probably not going to answer your company's questions. It can answer generally, it can understand, it can summarize, it can do all of these things. But at the end of the day, if you need instructions about something very specific in your product, it probably won't know, and that's when it starts to hallucinate, etc. So the structure and the skill set required for these teams will be very different. And one last sentence: I think CEOs and some other C-level executives will start building almost an AI transformation function, organizations that help with AI all across the board. There are many reasons for that. One is to have everybody aligned on the policies, on what you can and cannot do.
You also open up a lot of risk when everybody in your team becomes a data scientist, even though they're not data scientists. Sure, you can type into ChatGPT, "Hey, analyze this with this method," and it will give you an amazing answer. But if you don't know how to bring the right data into the analysis, you might actually cause a problem, because your conclusions will be off. So I think there will be this transformation within organizations as they figure out how the whole organization works with AI. And maybe lastly, and this sounds very crazy, but I actually think it's closer than we think: we will start onboarding AI agents for certain tasks. It can be in support, it can be in legal, in sales, in other places. AI agents that are trained, maybe even fine-tuned, that have a lot of knowledge about a specific area and can solve problems, make the operation more efficient, and so on.
Emmanuel [00:45:44] Cool. Thank you very much. We have a question about the best way, or the best place, to learn about building good prompts. Is it just practice, or are there people who provide help there that others can leverage?
Yaniv [00:46:01] Yeah, there are a lot of resources out there, and I'm still looking at them. I think Google actually recently published something like that.
Emmanuel [00:46:17] Right. There are a few plugins, for Google Chrome or even for OpenAI, that you can install too.
Yaniv [00:46:24] Yeah, there are plugins. But actually, I think Google just posted something about a strange sentence you can put in the prompt, "take a deep breath" or something like that, and it actually helps the LLM. It's not necessarily those specific words; it's the intention. It makes the model go, okay, do it slowly, don't rush into things, break it down into smaller tasks. But you asked about resources: YouTube and Google are full of things. Google posted something, and I'm pretty sure McKinsey, and maybe even Deloitte, had some pretty cool research recently.
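The tip Yaniv mentions, prefixing a task with calming, step-by-step framing, amounts to wrapping the prompt in a standard template. A minimal sketch of such a wrapper follows; the helper name, the exact phrasing, and its effect on any given model are all assumptions for illustration.

```python
# Wrap a task with a "take a deep breath" preamble and an explicit step
# breakdown, the prompt-framing trick discussed above.

def with_steps(task: str, steps: list) -> str:
    """Build a prompt that asks the model to work slowly, step by step."""
    lines = [
        "Take a deep breath and work through this step by step.",
        "",
        f"Task: {task}",
        "",
    ]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    return "\n".join(lines)

prompt = with_steps(
    "Summarize this support ticket and suggest a category.",
    [
        "Identify the customer's core question.",
        "Strip greetings and unrelated details.",
        "Pick one category: billing, technical, or general.",
    ],
)
```

As Yaniv says, the specific words matter less than the intention: the breakdown into numbered sub-tasks is what nudges the model away from rushing.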
Emmanuel [00:47:13] Okay, great. One person is asking about use cases: customer support seems to be the most popular generic use case for businesses. What's the next appropriate use case? I don't know if you want to take that. I have a point of view as well, because we're dealing with a lot of different customers who are using our chatbots, so I'll be happy to shed some light on it too.
Yaniv [00:47:36] I think everything that has a lot of text in it will be a very successful use case. Legal, for example: if you think about companies that are just exchanging contracts and redlines, there's so much to read, so much to summarize, so much to actually work on. These are probably going to be very good use cases. I mean, it's everywhere.
Emmanuel [00:48:12] Yeah, very true. We see it's not just for support. There are definitely the knowledge assistants: people who want to learn about something in your product. Say you have an e-commerce website and people don't want to browse through the menus; it's like search in docs, where rather than going through the navigation you just type a keyword in and it tells you which pages say something about it. Well, now that can be done in a conversational way. We used to be the leader in chat, right? With companies like DoorDash, Reddit, etc., and the big thing for us was the human-to-human interaction. The issue with that is we had three types of customers, some with communities, some with a two-sided network: an e-commerce site with a buyer and a seller, or healthcare, where you have a patient and a doctor. The beauty of generative AI is that you don't need that anymore. As long as you have an e-commerce website, you put in the widget and you have a chatbot that can welcome someone and make a recommendation on a product based on the information they've given. Summaries of all the conversations can be put in the CRM to enrich the data about the person and their preferences, so then you'll be able to push new products, maybe through notifications or something like that. That's the beauty of it, and I mentioned this earlier, we're seeing it more and more: you have a customer journey, there are gaps and frictions along it, and frustration builds up. The AI chatbot helps with that a lot. From welcoming, to marketing, all the way to sales, and then to saying "I can't help you, let me transfer you," it really helps with that customer journey, and people find a lot of efficiency and effectiveness there.
For example, say you're a platform selling, I don't know, houses, real estate. A person is browsing. The problem from the business side is that the salesperson doesn't want to engage with someone who's not qualified. Now the AI can qualify that person: it can have a discussion and understand what they're really looking for and their level of interest in that house. And then, boom, at the point where we're getting close to a sale, you bring in a human agent who can provide that human touch and carry it the rest of the way. Some people don't even care for that and will just do the whole journey themselves, but it helps a lot. So we're seeing it for marketing, and for sales, definitely; those are big ones. We have people implementing it for education. A Page, for example, is a startup that helps low-wage workers find gigs, and the whole hiring process and screening now happens with AI. So there's a lot of value. And in on-demand delivery, this is maybe a bit futuristic, but you can imagine a delivery driver connecting with the AI to find the best path to the house to avoid traffic, or, being on his bike, saying "I'm going this direction because the traffic is blocked there," and the AI just lets the customer know, if they're worried about their order, "I've made this change in my itinerary and I should be there in five minutes." It can be endless. And then you can connect text-to-voice, and you could have even more human-like interactions with voice, for example. So I see a lot of possibilities, and a lot of people being very creative in trying to streamline the process, improve the customer experience, and increase operational efficiency.
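The qualify-then-hand-off flow Emmanuel describes for the real-estate example can be sketched as a simple routing rule: score the visitor's interest from the conversation, keep the bot engaged below a threshold, and route to a human once the lead looks qualified. The signals, weights, and threshold below are all invented for illustration; a real system would score interest with a model rather than keywords.

```python
# Route a conversation: the bot qualifies the visitor, and only leads that
# look close to a sale get handed to a human agent for the "human touch".

INTEREST_SIGNALS = {"budget": 2, "viewing": 3, "mortgage": 2, "offer": 3}
HANDOFF_THRESHOLD = 4

def interest_score(messages: list) -> int:
    """Sum up keyword-based interest signals across the conversation."""
    score = 0
    for msg in messages:
        for signal, weight in INTEREST_SIGNALS.items():
            if signal in msg.lower():
                score += weight
    return score

def route(messages: list) -> str:
    """Keep the bot engaged until the visitor looks qualified, then hand off."""
    if interest_score(messages) >= HANDOFF_THRESHOLD:
        return "human_agent"
    return "ai_chatbot"

decision = route([
    "What's the budget range for this house?",
    "Can I book a viewing this weekend?",
])
```

The design point is the threshold: the bot absorbs unqualified traffic, and the salesperson only sees conversations that are already warm.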
Yaniv [00:51:58] And I'm really glad you brought up education, because that's probably another huge topic that's getting so much out of gen AI. Imagine private tutoring with unique data, with a personalized AI tutor. If both of us are learning how to code, and I like soccer and you really like cooking, and all of our coding challenges are personalized to what we care about, then you can imagine what the engagement will look like. It can also understand where the gaps are: maybe we both started at the same time, learning the same thing, but fundamentally I'm missing some basic knowledge about something, and it can go back and help me, just like a private tutor would. So yeah, education is a huge industry for sure.
Emmanuel [00:53:03] I think the empathy is a big thing too, and it can actually do that. It doesn't get hungry, right? If you ask a question late, when a person is sleepy or hungry and getting frustrated, you may not get the same attention. There's a lot there that people can take advantage of. Well, Yaniv, thank you so much for the insight and for spending some time with us. For everybody who's interested in generative AI chatbots, check us out; it's very easy to implement for your website or mobile apps, and we'll be happy to help. Check out sendbird.com. Yaniv, thanks again, I wish you a great day, and I hope we talk again soon.
Yaniv [00:53:50] Yeah. Thank you. Man, it was fun.
Emmanuel [00:53:52] Bye bye.
Yaniv [00:53:54] Bye bye.