Gupshup CTO Krishna Tammana Awarded Spryte AI Spotlight Award
Krishna Tammana was awarded the Spryte Spotlight Company to Watch award for his outstanding contribution and the groundbreaking product Gupshup, which builds AI-powered chatbots that truly engage customers and help organisations level up their customer service game.
Highlights
Everyone has good ideas, you just have to find them.
About
Get to know Spotlight leaders. In our interviews we delve deep into how they think and what they’re building.
Gupshup's CTO Krishna Tammana on why chatbots will never fully replace live agents
Gupshup's CTO Krishna Tammana explains how conversations are the new browser
"Everyone has good ideas, you just have to find them," says Gupshup's CTO Krishna Tammana
Spryte AI Spotlight: Gupshup
Maybe you can offer some insight for the people out there who are confused, scared, excited. There's all sorts of emotions around AI today, right? Tell us a little bit about your own views and what you think. Let's start with the biggest question of all, right? Do you think AI is the end of humanity or is it a boon for humanity? Is it going to change everything? Is it just completely overhyped?
I don't think it's all doom and gloom. I'm more of an optimist. I think there's a lot to gain out of this, but not without a pinch of caution. Just like any new technology, any new invention has the wonderful ability to contribute to humanity, but also when used in incorrect ways can be trouble. So I'm more in the optimistic camp, not the doom and gloom.
Got it. Are you scared? Are you dubious? Are you excited? A little bit of all three? What's the prevalent emotion that you're feeling?
I think it's mostly excited, but I'm also a little bit scared because I'm not confident that the rest of humanity, particularly our governments, move as fast as they need to move with the technology to do the things that are necessary to keep it in check and used for the right purposes. So yes, I am excited, but I do have a sense of caution around it, just like any other technology that's come before us that is powerful and has the potential to revolutionize.
Where would you put AI in terms of technology breakthrough? Are we talking about fire? Are we talking about the car? Are we talking about the washing machine? Where would you put AI in terms of machine learning or what we're doing right now in terms of innovation?
I think it's a pretty big deal. Somewhere in the car to fire range, for sure. This is not incremental progress. It is actually a pretty phenomenal change that has the potential to fundamentally alter many, many things. So yeah, I don't think it is just an incremental step.
Got it. Okay. What's your stance overall on the call for AI regulation from either the public or government?
I think like any new technology that is revolutionary, regulation is important. But we all have a responsibility. We cannot just depend on regulators to solve all problems. Though I fully agree that regulators have to move faster than they have in the past, because the pace at which innovation is happening in this field does not have room for governments to move at their pace. We have to move faster. Having said that, I think the responsibility lies beyond just a government or a regulator. We all, as enterprises, individuals, have a responsibility to use this technology in a manner that doesn't hurt anybody. Regulation is important, necessary, but not sufficient because we have to be responsible as well.
Yeah. Individual responsibility for all of humanity. That's going to be tough.
It is tough. But if you believe in doing the right thing for society, then you have to take some of the responsibility, yes. Am I betting everything on that? No. But that is something I would encourage people to think about.
Right. You've been in machine learning, deep into it for probably the last, what, three, four years at least? Maybe more, maybe 10, right? Is there a moment over the last five years since you've been in this where interaction with machine learning or AI has surprised you? And if so, you know, let's hear about it.
Yes and no. I mean, we've been talking about AI since I was studying computer science in college. And the promise was as big at that time as it has ever been. The difference is that it never came to fruition the way we were all expecting and thinking. And I'm talking about 30, 40 years. The textbooks written on AI go that far back. There was enormous anticipation that didn't come true. In fact, if you look at the overall research world, it slowed down quite a bit in the 90s, and then in the late 90s it picked up again in a different form, in a corporate setting. AI was part of that kind of evolution at the time. If you think of PageRank, which Google introduced, technology like that has made a big difference. And since then, AI has been slowly picking up, mainly in the form of machine learning, and the algorithms that were helping us do things better, whether it's machine learning, deep learning, or neural networks. This combination has been propelling us forward, but slowly, because the expertise required to put these technologies to use was tremendous. Now, when generative AI came on the scene, that was a big shift. I would say in the generative world we are probably in the very early innings, but even the early innings are so vastly promising, just from what they're able to do for the technology space and society in general. So yeah, we'd heard rumors of certain things being researched in different companies, and what capabilities and possibilities were there. That was all very exciting.
This is getting good. Give us the rumors. What's going on?
I'm talking about old stuff, right? But when you heard the claims that Google's work had caused a system to become sentient, for example, who knows? There's no actual data on that one, but when you see those possibilities, it's exciting and scary at the same time. But when ChatGPT came on the scene, it was real. We could experience it, and then you could see the potential. So that, I think, is a big pivotal moment that all of us came to recognize.
Got it. I mean for me the surprise came with the advent of generative AI in art. DALL-E and some of the image generation things a little bit later. Actually the first I would say would be the deep fake algorithms a couple years ago. That was already quite surprising even though it seems like we could have all thought about that 10 years ago, 15 years ago, 30 years ago, but to actually see it live was quite shocking, I think, right?
I think in the generative AI field, as you said, DALL-E was the first really exciting thing we all saw. In some ways, with a sufficient amount of work, you could do some of these things even before, but it was just significantly harder. The ease with which you can do all of these things now, that's the big, big change.
Yeah, the combined mass of brains that can attack this problem, if you just free them, is a lot bigger. And I think that's what we're seeing right now. We're seeing that anybody who understands the technology and the tools now has access, as long as they can get the data somehow, to create these wonderful things. Give me a little background on how you came to Gupshup and the problem that you're attacking right now, your process and your progress in getting interested in this field and getting into the problem you're currently tackling.
Right. Sitting in California, our exposure here is primarily around what's happening in the US and the West in general. But when the call came from Gupshup, I started doing some research and I was blown away by how much was happening in the rest of the world, especially in how the web was evolving, how technology was evolving, and how enterprises were leveraging channels that we never used in the West. Primarily messaging-based, conversation-based interactions were very common with people, but that started taking root in enterprises, and the ubiquity of such conversations, whether it's China, India, Southeast Asia, or Latin America, was just eye-opening for me. The scale at which these things operate and move, and the speed with which the transformation is happening, was quite fascinating. It's an area I was not as exposed to, but when I took a closer look, I was blown away. That's how I ended up at Gupshup.
Even though the environment in California is very conducive to startups and high-tech startups, I do think there is a bit of a bubble in the types of startups that get funded, given the interests concentrated there. I'm in Brazil right now, and Brazil basically functions with WhatsApp, right? There's no life outside WhatsApp in Brazil. In India, it's the same. It was Orkut when the US wasn't even using Orkut, and then it moved on to SMS. They've got their own platforms. It's interesting that when you got that call you actually answered it, because most people would be thinking, "Wow, SMS messaging, who cares?"
But it's not the SMS part that was the most interesting; it's the conversational nature of interactions with the richer messaging platforms like WhatsApp, WeChat, and even Instagram. And now RCS is a rich messaging application protocol, whereas SMS is very basic one-way messaging. Of course you can reply back, but these next-generation apps are so much richer and more powerful, with so many possibilities. We do business in Brazil as well, so I know very well how much WhatsApp is used there, as in India and Southeast Asia.
For people who are not familiar with Gupshup, give me a basic rundown of the problem that you're tackling, the solutions that you've found, and sort of what you guys are getting at.
So in areas where things like WhatsApp are very popular, businesses are leveraging that channel quite a bit. I'm using WhatsApp as an example, but there are many channels like this. Conversational engagement with customers is the way forward for everything happening in that part of the world, and I'm sure it's just going to spread everywhere else. What we do at Gupshup is power conversational engagement with customers. We work with brands and businesses who are trying to engage their customers on a conversational platform. We help them make rich conversations possible, whether it's a pre-purchase scenario, where as a brand I'm engaging customers who are potential clients and helping them through the purchase process, or after purchase, where we help them maintain, manage, interact, and upsell and cross-sell other products. We have a general purpose platform that anybody can use to engage their customers, and we have specialized offerings for BFSI (banking, financial services, and insurance) and commerce. In a nutshell, that's what we do. A lot of brands engage with that. We operate at a very large scale, with customers in India, Southeast Asia, the UAE, Brazil, Colombia, and a lot of Latin America as well. So it's a pretty busy system. Very, very chatty, as conversations are constantly going on.
I think what's difficult with customer interaction, maybe I'm wrong, but in my experience as a customer, it's hard to change user behavior, right? So you've got to kind of adapt and find the way that the user wants to interact with you.
That's correct.
And that's the difficult part, right? I guess the integration with all the channels, to me, that would be the key, right?
That's true. I mean, we're still not at a point where we can replace humans. That's not the idea. What we do is help brands engage with their customers, wherever they are, on whichever channel and at whatever time. We have the ability to help customers build intelligent bots, and this is where AI comes in as well: if you feed all the knowledge into our system, we can help customers interact with an automated, smart bot that can answer the questions the customer is looking to get answered. And it behaves very naturally, because it generates answers in a language that you understand. We support multiple languages, and it's a conversational interface, not a tabular or systemic interface. Now, the advantage is that you can interact with a brand or a business 24/7. There's no restriction on when you can interact with them.
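To make the "feed all the knowledge into our system" idea concrete, here is a minimal, hypothetical sketch of a knowledge-grounded bot: retrieve the most relevant snippet from a brand's knowledge base and hand it to a generative model as context. The sample knowledge base, the naive keyword retriever, and the generate() callable are illustrative placeholders, not Gupshup's actual pipeline.

```python
# A toy, hypothetical illustration of a knowledge-grounded bot: retrieve the most
# relevant snippet from a brand's knowledge base and pass it to a generative model
# as context. Everything here is a placeholder, not Gupshup's actual pipeline.
from typing import Callable

KNOWLEDGE_BASE = [
    "Orders can be returned within 30 days with the original receipt.",
    "Our support chat is available 24/7 on WhatsApp.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the question (toy retriever)."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def bot_answer(question: str, generate: Callable[[str], str]) -> str:
    """Ground the model's reply in the retrieved snippet instead of free generation."""
    context = retrieve(question, KNOWLEDGE_BASE)
    prompt = f"Answer using only this context:\n{context}\n\nCustomer: {question}"
    return generate(prompt)

# Usage with a stub model (replace with a real provider call):
print(bot_answer("When is support available?", lambda p: f"[model reply to]\n{p}"))
```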
Are you telling me that gone are the days of press 1 to speak to customer service?
I am telling you that. You don't need to do that. You just have to say, "I have this problem," and here comes the response. It might ask you for more information or just give you the answer. But the advantage here is that while we may front the conversation with a bot, it's easy to say "I want to talk to a live agent," and we offer those capabilities and services as well. If an agent is interacting with the customer, we do things to help the agent respond better: help them formulate answers, help them retrieve data, help them form well-structured sentences in English or other languages, so we can assist humans as well. So this is not 100% bots only; there are many situations where you and I would still like to speak with a human, and that is still possible.
Yeah, I think that's never gonna go away. And I think the fear for most people on the AI side is to think, okay, this is the problem that this solution solves using AI. Gone are all of the phone centers, right? And phone attendant jobs. And that's going to be a problem. But I think in my mind, and looking back at the data throughout history, it's more that these jobs become much more productive, much easier, much more enjoyable in many senses, right?
Everybody becomes that much more efficient and the interaction becomes richer. That's the whole point of this.
Which parts of the experience of personal human interaction become available through Gupshup or through AI? What's the future there?
We fully anticipate that the bots that handle your queries, 24/7, at any volume, are going to get smarter and smarter and smarter. As we give them more data and more input and train them, what I anticipate is that more and more of these questions can be handled by the bot. Of course, there's complex stuff for which you'll need to talk to an agent, but your interaction with the agents will also become that much smoother, with higher quality interactions and much higher customer satisfaction. I think that's what we are looking at. It's the same thing whether it's before you buy something, during the process of buying, or support after making a purchase when you need some help with the product. So I think it's just going to be a much richer environment in which interactions happen with businesses, at scale, 24/7.
Can you give us a little deeper insight into the actual machine learning that you guys use? Which types of models do you use? What technology do you use?
We have a research team that works on all things AI, and we used to do this with traditional AI, with the algorithms that made sense at that time. But with generative AI, there are multiple foundational models, ChatGPT being one, Llama being another, and there are a number of open source ones, including the Google models as well. We work with all of them. We try to use the best model for the problem we are solving. But what we do specifically is take all these models and apply them to the use cases and industries that we serve. On top of the foundation models, we do work to train specifically for our use cases and our industries. That's what we make available for our customers. Some customers want to build their own using the technology that we provide, and many customers say, "just please build it for us," and we do whatever the customer needs. For example, in banking, out of the box we have the ability to create all the types of interactions you would want to have with your bank as a customer. So we work with that bank and make sure that everything they normally do, whether it's on the web or when a customer calls, can be done through WhatsApp or the other channels that we provide. So that's kind of the special thing. We don't build foundation models. The large players like Google, OpenAI, and Facebook are building the foundation models. We take those, pick the model that makes the most sense, and apply the domain-specific knowledge that we have. We also have our own LLM, again from open source; we've taken that and trained it to solve our use cases. And we make that available as well, especially in scenarios where customers don't want their data moving out of a particular geography, because a lot of these models are hosted in the US. Not everybody likes data moving across continents, and there are regulations as well, like GDPR. So we try to make sure that all of these scenarios are taken care of.
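As a rough illustration of the "pick the best model for the problem, then layer domain knowledge on top" approach described above, here is a minimal sketch. The model registry, the domain prompts, and the generate() callable are hypothetical placeholders, not Gupshup's implementation or any specific vendor API; it only assumes each candidate model exposes a text-in, text-out call and is hosted in a known region.

```python
# A minimal, hypothetical sketch of "pick the best foundation model per use case and
# layer domain-specific instructions on top." All names here are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelConfig:
    name: str                          # e.g. an open-source LLM hosted in-region
    region: str                        # where it is hosted (data-residency constraints)
    generate: Callable[[str], str]     # provider-specific text-in, text-out call

# Domain-specific instructions layered on top of a general foundation model.
DOMAIN_PROMPTS = {
    "banking": "You are a banking assistant. Answer only from the bank's approved knowledge.",
    "commerce": "You are a shopping assistant. Help with product discovery, orders, and returns.",
}

def route_request(user_region: str, models: list[ModelConfig]) -> ModelConfig:
    """Prefer a model hosted in the customer's region; otherwise fall back to the first."""
    in_region = [m for m in models if m.region == user_region]
    return (in_region or models)[0]

def answer(question: str, use_case: str, user_region: str, models: list[ModelConfig]) -> str:
    model = route_request(user_region, models)
    prompt = f"{DOMAIN_PROMPTS[use_case]}\n\nCustomer: {question}\nAssistant:"
    return model.generate(prompt)

# Usage with a stub model (replace with a real provider call):
stub = ModelConfig("open-llm-in", "IN", lambda p: "stubbed answer")
print(answer("What is my card's credit limit?", "banking", "IN", [stub]))
```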
What are some of the new challenges with that workflow? From a technology point of view and a data logistics point of view, a cloud infrastructure point of view, what are some of the things that are cropping up that are new and challenging there to manage that scale?
I think the number one thing is that training models right now requires enormous amounts of compute resources and time. That's a pretty big challenge. Even after you throw a fairly large amount of resources at it, it still takes five to eight hours to train a model. This cycle has to shrink, and there's a lot of work being done to make the models smaller and to speed up the computation. We're working with our partners to figure out how to accelerate this: some hardware-related work, and some figuring out which models are a little bit easier and faster to train, et cetera. So that's one of the challenges. As much as it has become easier to leverage AI, my desire would be to make it even easier. How long does it take for one of our customers or brands to get a bot online and have excellent responses from the system? It still takes too long for my taste. I want to see it happen faster and faster. There is progress, but I'm eager to see it get even better than it is.
Yeah, so AI ops is sort of cropping up as a major field, trying to resolve those problems. But in your mind, is the problem really in the data and infrastructure side? Or is there still a manual element there that's slow in terms of people being able to tweak, train, take the decisions, understand the models? Is there a human element that's still too slow?
There is certainly a human element. However, the sophistication required to leverage AI and machine learning in the previous world compared to the current world is quite different. It has become a lot simpler. You don't need a computer science engineer with coding skills to do anything basic, like it used to be. Now, with techniques like prompt engineering, you can tune the models to better answer the questions and generate better answers. That part still takes skill, so it's a new field and a new skill set that is being developed. Granted, it doesn't require computer scientists, but it is a different skill, and it is something we all have to learn as users. So there is an aspect of learning and timing for that. But I think the bigger thing is: how long does it take to test that your system is working as expected? Because the scenarios are infinite. These large language models have so much in them that by asking a question a certain way, you don't know which aspect of the model's generation you will trigger, and whether that's relevant to your business or not. So verifying that the answers are precise is a non-trivial task. And it's not like we as humans are tolerant of answers that are approximate or irrelevant. It's not like a business or a customer can say, "this is new technology, I'll be okay with a wrong answer or an irrelevant answer." So for us to launch something, we have to make sure it's very precise and accurate.
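The verification problem described above can be pictured as a regression suite for answers. Below is a hedged sketch, assuming a hypothetical bot callable and a hand-curated golden set; it only checks that required facts appear in each reply, which is far simpler than a production evaluation, but it shows the shape of the task.

```python
# A hedged sketch of the verification step: replay a curated "golden set" of questions
# against the bot and flag answers that miss facts they must contain. The bot callable
# and the golden set are hypothetical stand-ins; real evaluation would be richer.
from typing import Callable

GOLDEN_SET = [
    # (question, substrings the answer must contain to count as precise)
    ("What are your support hours?", ["24/7"]),
    ("How do I reset my PIN?", ["app", "settings"]),
]

def verify(bot: Callable[[str], str]) -> list[str]:
    """Return a list of failures; an empty list means the golden set passed."""
    failures = []
    for question, required in GOLDEN_SET:
        reply = bot(question).lower()
        missing = [term for term in required if term.lower() not in reply]
        if missing:
            failures.append(f"{question!r} is missing {missing}")
    return failures

# Stub bot for illustration; in practice this would call the deployed assistant.
stub_bot = lambda q: "We are available 24/7; reset your PIN in the app under settings."
print(verify(stub_bot) or "all checks passed")
```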
With the social media culture that we're in where it's instantaneous gratification, any slight mistake that gets out to the real world, you can be pretty sure it's going to turn into a screenshot and a meme within 20 minutes and go around the world. That's a major challenge. Is it solvable in your mind?
It could be a reputation issue.
Is that something that's solvable?
It's very much solvable. I mean, we can certainly solve it. Even today, we can tune the models and change sensitivities and do things that can restrict the model from giving wrong answers. But if you tune it too tight, then it's not smart anymore, it's just trying to give you canned answers. Figuring out where the right point is to stop is more art than science at this point.
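One way to picture the "tune it too tight and it's not smart anymore" trade-off is a single confidence threshold that decides between the model's generated answer and a safe hand-off. This is an illustrative sketch, not Gupshup's code; the score_fn is a hypothetical relevance or confidence scorer, and the threshold is the knob that is more art than science.

```python
# Illustrative sketch of the tuning trade-off: one confidence threshold decides between
# the generated answer and a safe hand-off. Set it too high and the bot gives only
# canned replies; too low and wrong answers slip through. score_fn is hypothetical.
from typing import Callable

def guarded_answer(
    question: str,
    generate: Callable[[str], str],
    score_fn: Callable[[str, str], float],
    threshold: float = 0.7,
) -> str:
    draft = generate(question)
    confidence = score_fn(question, draft)  # e.g. relevance to the brand's knowledge base
    if confidence < threshold:
        return "I'm not fully sure about that. Let me connect you with a live agent."
    return draft

# Usage with stub callables (replace with a real model and scorer):
print(guarded_answer("Is my loan approved?", lambda q: "draft answer", lambda q, a: 0.4))
```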
You know, from a business point of view, that's where you really have the opportunity to create value and lasting relationships, right? By being able to do that well. So that might be actually a good side.
Right. That's why training the models more specifically for the industries and the use cases we serve helps constrain the output to what's relevant for our brands and for our customers.
You mentioned prompt engineering and you're seeing the role of AI practitioners changing, right? And maybe new jobs cropping up and new positions sort of being defined. There's one in prompt engineering. Anything else that you see cropping up that's sort of a new and exciting sort of field that hasn't existed before?
Yeah, I mean, the entire notion of putting LLMs to work as a field is kind of new. Which LLM should you be using? How do you tune it? How do you prompt engineer? That whole field is different now. And what I tell you today is going to be obsolete in two weeks, I'm sure, because there's a lot of open source work going on. The speed with which the changes are happening is insane. But you're absolutely right. This is a new arena, a new area. I read somewhere that the highest paying job out of school is now prompt engineering. I don't know if it's true, and I'm sure it won't last, but it sounds like it is right now.
At the same time, I see personally, and I don't know how you feel about this, that to work with these models efficiently you need some quite advanced math, right? Not to do the basic things, but to be able to choose the models, implement new models, or create models from scratch, you do need a high level of math and understanding that is really tough to source, right?
It's a different skill to figure out which model to use. You have to understand the models a little better, and then pick and choose. There's a lot of experimentation that has to go on. We, for example, take all of our use cases and all our data sets and run them against the different models constantly, as new versions come out, to see what's changing and whether the quality of the responses is improving or not. And then we pick and choose. So yes, absolutely. It's a different way of looking at things.
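That experimentation loop, rerunning the same evaluation set against every candidate model whenever a new version ships and comparing aggregate quality, might look roughly like the sketch below. The candidate models, the evaluation set, and the scoring function are all hypothetical placeholders.

```python
# A minimal sketch of the model-comparison loop: run the same evaluation set against
# every candidate model and compare aggregate quality. All inputs are placeholders.
from typing import Callable

def compare_models(
    candidates: dict[str, Callable[[str], str]],  # model name -> generate()
    eval_set: list[tuple[str, str]],              # (question, reference answer)
    score: Callable[[str, str], float],           # answer-vs-reference similarity, 0..1
) -> dict[str, float]:
    results = {}
    for name, generate in candidates.items():
        scores = [score(generate(q), ref) for q, ref in eval_set]
        results[name] = sum(scores) / len(scores) if scores else 0.0
    return results

# Rerun whenever a new model version ships, and pick the best scorer per use case.
toy_score = lambda answer, ref: 1.0 if ref.lower() in answer.lower() else 0.0
print(compare_models(
    {"model-a": lambda q: "Support is 24/7.", "model-b": lambda q: "I don't know."},
    [("When is support available?", "24/7")],
    toy_score,
))
```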
We've had content moderators in social media, and on websites that have been marketplaces and open platforms, for 20 years now. All sorts of people doing the dirty work of checking that content is okay. You know, this might be a new field of really checking which responses are okay.
Right. That's right.
In terms of the team, what do you look for in the team that you're building, your technical team? Has something changed then about the types of people that you're trying to hire technically or it's business as usual?
No, no, it's definitely changed. So much of this is technology driven. It's not like somebody is going to tell you and define what to build and how to build it. Now you have to look at the technology, see the possibilities, and see how we can improve the products, for example. When we hire engineers, we not only look for people who can execute to a plan and a product definition, but also people who can look at the technology and think about possibilities that others cannot, because they are not looking at what the technology can do. So that's one. The second thing is that when you're dealing with LLMs, the way you think about AI is different now. The skill set needed to decide which models to use and why, how to tune and train them, and how to make them easy for somebody not dealing with AI day in and day out, is something we look for. Because even within our company, many people are customer facing, whether it's our delivery teams or support teams, and they have to be able to use, leverage, and implement for customers, or explain why something is behaving a certain way. So it's a different skill set you have to train people on. We look for people who can think differently, use LLMs, and also help the front lines understand how to explain it to customers and bring it to life for them.
That's actually really sort of uplifting in my mind to think that creativity is actually being rewarded here. It sounds like you're looking for out-of-the-box thinkers and people that can think differently, which is always good.
Very much so. I believe that everybody has ideas, good ideas. You just have to find them. You have to source them. We make it a point to work with all of our engineers who are thinking and working on a daily basis on their localized problems. And they usually come up with something interesting we never thought of.
That's fun. So what's the future for Gupshup? Let's say, what's the 10-year roadmap? Where do you think this is going?
I think… if you look at conversations from a purely messaging standpoint, that's kind of the basic layer. Elevating that and creating abstractions for businesses to interact with systems is the future. And the degrees of abstraction are going to keep getting more and more sophisticated. So what you have to do as a customer of Gupshup to have this rich interaction with your customers has to become simpler and more sophisticated. That's what I anticipate happening. Our CEO calls this "the conversational internet." This is as big as the internet itself. It's a different kind of internet, where conversations are the new browser. That's kind of what we see happening. I think it's a very fundamental shift. The number of people who don't even go to a laptop or a desktop is amazing. If you look at Brazil, for example, not everybody's at a desk, but everybody has a phone, a smartphone. So I think that's the future. Something like a messaging application becoming your primary interface for everything is what we see, and we're right in the thick of it, powering that entire revolution. That's what I see happening.
Right now, we're kind of having to choose the platform that we interact on, and then that platform owns the data, and then that platform can monetize and use that data, whereas maybe it's just kind of people and then conversations, and then you're sort of powering that conversational layer across all the different channels, right?
Right. I think that's the future. We have to be where the customers are, not the other way around. I think gone are the days where technology pulls people to technology. These are the days where technology goes to people, where people are and where they want to be.
Well, that's exciting. So that's a bright future. It's been very interesting to peek into your mind a little bit and get some ideas as to what's going on and what's coming. I really appreciate it. Any planned rollouts, anything coming out in 2024 that we should be looking out for?
Oh, we're constantly rolling. Every five weeks, we have something new going into the marketplace. So we're on a roll all the time. But yes, we are in Brazil, and we're in your part of the world right now. So you should expect to hear something from Gupshup there; some new announcements will be coming in the next quarter.
Awesome. All right. Well, great to meet you, Krishna. Appreciate it.
Very nice talking to you, Stephane. Thank you.