Spryte Spotlight: AI Trailblazer Amin Ahmad

Amin Ahmad is the CTO and co-founder of Vectara, a company that enables firms worldwide to train and deploy AI models at scale. Amin was awarded a Spotlight Trailblazer award in AI.

About Spryte Spotlight

Spryte Spotlight awards and highlights the very best CTOs, tech leads, and technical executives in various industries. Spotlight leaders are at the forefront of digital transformation for their organizations and society at large.

sprytelabs.com/spotlight

Highlights

I am by turns scared and excited. The one thing I definitely don't believe about AI is that it's overrated. I'm quite confident that it's going to change our destiny as humans. We're living in a very special time right now. But whether it becomes something very good for humanity or something very bad, that's the question I ask myself.

Amin Ahmad

About

Get to know Spotlight leaders. In our interviews we delve deep into how they think and what they’re building.

Combatting deepfakes with digital signatures

The end of big-name Hollywood actors?

AI's impact on jobs

View the full interview

Watch Amin's Full Interview

A lot of people are learning about AI now, and a lot of lay people are getting confused or scared or excited. Which camp are you in? Do you think AI is the end of humanity, or is it a boon, a revolution that will take human well-being to another level? Or do you fall somewhere on the spectrum in between?

That's a great question to start off with. I think I would say that I am by turns scared and excited. The one thing I definitely don't believe about AI is that it's overrated. I'm quite confident that it's going to change our destiny as humans. We're living in a very special time right now. But whether it becomes something very good for humanity or something very bad, that's the question I actually ask myself. And I think that it has the potential to do either, and which one it becomes has a lot to do with how effectively we can govern ourselves. Not to be too much of a downer, but if you look at the world today, it feels a bit more chaotic to me than it has at any other time in my life. And so that's what worries me.

Got it. So a bit of excitement and a bit of worry as well. Are those the emotions that you feel?

Absolutely. I mean, AI's impact is going to be huge, but the worry is that it'll be misused instead of used for benefit. That's the concern. There are both camps. There are people who are intent on trying to use it for good, but then there are also people whose incentives are misaligned. And I think the question is, what's going to win out? Which camp will become dominant? To give you a very concrete example of what I'm talking about, one of the big applications of AI is warfare. If you've been following the conflict between Russia and Ukraine, drones are already changing the way that modern warfare is conducted. And it's scary to see the kind of things that these drones can do. And so we're not very far off. In fact, it's already happening where drones are getting, you know, crude forms of AI attached to them and are becoming autonomous. And that's the kind of thing that really scares me a lot.

Yeah, we're seeing it in actual warfare, and we're seeing information warfare if you think about deepfakes and social media. So yeah, I think it's becoming real, right? It's been around for a while, and people who are in this field have known it's been part of computing for 50 years, maybe even more. But I think today the big difference may be that we're seeing impacts in our real lives, right?

Right. AI has been around as a field for a long time, but these discussions have been theoretical because it wasn't ever good enough. But now it's good enough. Like you said, the deepfakes are basically convincing. They actually do fake out real people. And I think where that particular threat is going to head, the deepfakes, is that very rapidly, none of us are going to believe what we see on TV, on YouTube, or anything. We're going to learn that it's most likely faked and it's manipulating us. I think that's what's going to happen.

Yeah, I think part of us are already there and the rest are soon to follow.

Right, right. Now, the interesting thing is that I do believe there are solutions, and those solutions also come from computer science. If you think about digital signatures in cryptography, they're a way to prove that a certain individual or organization originated this content, or even that a certain device, a certain camera, actually physically took this footage. Because they're backed by the theory of cryptography, digital signatures can't be faked by a neural network. And so, I think we've never had a reason to apply digital signatures extensively to content because content couldn't be faked convincingly, but now that it can, I think that's where the solution lies.
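
To make the digital-signature idea concrete, here is a minimal sketch of signing and verifying a piece of footage with an Ed25519 key pair, using Python's cryptography package; the key provisioning, the file contents, and the function names are illustrative assumptions, not a description of any particular provenance standard.

```python
# Minimal sketch: proving that content came from a specific device or publisher.
# Assumes the 'cryptography' package; key distribution and metadata are out of scope here.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# The camera vendor or publisher provisions a key pair; the public key is published.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

def sign_content(key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Sign the raw media bytes at capture or publish time."""
    return key.sign(content)

def verify_content(pub: Ed25519PublicKey, content: bytes, signature: bytes) -> bool:
    """Anyone holding the public key can check the bytes are unmodified and from this signer."""
    try:
        pub.verify(signature, content)
        return True
    except InvalidSignature:
        return False

footage = b"...raw video bytes..."
signature = sign_content(device_key, footage)
print(verify_content(public_key, footage, signature))                 # True: authentic
print(verify_content(public_key, footage + b" tampered", signature))  # False: edited content fails
```

A generative model can fabricate convincing pixels, but it cannot forge a valid signature without the private key, which is the property being pointed at here.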

So there's a lot of new problems coming up, but also a lot of new solutions using AI.

Right. Using AI, or using cryptography and other branches of computer science.

Got it. So you're saying that it is a big deal and it's not overhyped. On a scale between, let's say, fire and the automobile in terms of importance and revolutionary status, where would you say AI is? Is it a drastic jump or is it an incremental improvement?

It's a drastic jump for sure. You know, fire, electricity, automobile, I obviously wasn’t around for those and it's hard for me to estimate, but I would put it on the scale of the internet, or greater than the internet, in terms of what its impact will be.

Wow, okay. Is there a personality out there, somebody that you look up to, who really understands the future of this and whom you could recommend the audience look into?

You know, I think there are certainly some people who really know what they're talking about and who are speaking out quite a bit: Andrew Ng, Yann LeCun, Geoff Hinton. Of the people who I have heard speaking on the topic, I think Geoff Hinton is a person who I respect a lot. I know maybe the most about him, and I find his views very convincing. At least they resonate with me.

Got it. I think we're at a stage where things are moving so fast that it's really hard for a lot of people who are not in the field to separate fact from fiction, and people who are just voicing opinions from people who really should be respected. So that's the reason for that question.

Yeah, exactly. And I think here's the thing, it's easy for any personality to weigh in with some kind of opinion. But those people who I mentioned, they've been so deep in the field that they can kind of see the arc and where this is headed. They can look a little bit over the horizon. So I think I give their opinions a lot more weight.

What's your stance on AI regulation overall?

Very important, but I'm not optimistic about it. Again, if you look at the process of governance in the US, we're running into real issues. Congress is often deadlocked on much more minor issues. I think that we need to regulate AI for several reasons. First of all, we need to regulate its use in warfare. But I'm not optimistic at all; I mean, I'm giving that almost a zero percent chance of happening. And I hate to be so negative because I'm actually an optimistic person, but you know, the U.S. is going to be developing it [AI for warfare], and I almost feel like regardless of what conventions or treaties are signed, if we could ever get anything signed, the assumption will be that secretly the other side is still working on this, and so we have to keep working on it because it will give such a large advantage to the country that develops it first and gets out ahead. But even domestically, let's say, we need to worry about things like what I believe will be a massive displacement in jobs, massive unemployment. And I don't believe it'll be the case that everyone is just going to go and find a job somewhere else. Because of how general and broad this technology is, the number of jobs it can automate is just huge. I like what Bill Gates said – I'm not giving an argument for why we need to throw the technology away, this is inevitable, it's going to come – but we need to tax the companies that are going to reap huge profits and windfalls from this. We need to tax them heavily enough that the people whose jobs are taken are taken care of in society. But again, are we going to have the political will to do this in a timely fashion before things get too desperate? It remains to be seen, but I'm not super optimistic.

Wow, okay. So what do you see as the timeline? Are we looking at two years, five years, ten years for this impact to come into our lives from a jobs perspective?

I think it's gonna happen fairly quickly. It's a hard question. I think some people have expressed surprise that it didn't already happen, given that, you know, ChatGPT came out in late 2022. I think, yeah, maybe one year is not enough time. I don't think it's going to take ten years. I think it's going to happen much sooner than that. I'll just give you a real concrete example. My previous manager at Google was a gentleman named Ray Kurzweil, who is an inventor and quite a visionary, and he always said that the pace of technology and of artificial intelligence is exponential. We understand what that word means, but we're not wired to do those kinds of future forecasts on something that's moving exponentially. We always think in linear terms. A car's moving 70 MPH, so in two minutes, I can predict where it's going to be. His point was the change always happens faster than you realize. When I started the company in 2020, we were focusing on using transformer-based neural networks for advanced information retrieval. We can find information across languages even. If you put a query in English, we can find results in, let's say, Korean and Japanese. It's not a problem for our systems. I knew about generative AI because I had been working on some of those systems when I was in Google Research. But around 2020, it wasn't really ready for prime time. It hallucinated way too much, and my estimate was that it was maybe four or five years away from being ready for prime time. Despite understanding how exponential progress works and all of that, and having been inside research, I was surprised when ChatGPT came out in late 2022. Being a researcher in the field, I was surprised. So that's just a concrete example. I think the change is going to come quicker than maybe many people realize.
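
As a toy illustration of that forecasting gap, a hypothetical quantity that doubles every year quickly leaves a linear extrapolation behind; the numbers below are made up purely to show the shape of the curve.

```python
# Toy comparison of linear vs. exponential forecasts (all numbers invented for illustration).
start = 1.0
yearly_gain = 1.0      # what a linear mental model adds each year
growth_factor = 2.0    # what an exponential process multiplies by each year

for year in range(0, 11):
    linear = start + yearly_gain * year
    exponential = start * growth_factor ** year
    print(f"year {year:2d}: linear forecast = {linear:5.1f}, exponential reality = {exponential:7.1f}")
# After 10 years the linear forecast says 11.0 while the exponential process is at 1024.0.
```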

Yeah, I mean, we tend to, you know, sort of average out most events in our lives, which are linear. And then when something is exponential, we don't correct to anticipate it earlier, right? So I was going to ask you, and I think you led into it, ChatGPT might be one example. Is there maybe another concrete example in your life where AI has somehow surprised you and shocked you?

Oh, well, yes. So it was around 2017 or 2018 in Google Research, and I was working with a small group. And we were trying to design really the first neural networks that could search for information in a very generalized sense, what they call zero-shot neural networks today. They don't need to be retrained as you move them from one domain to another domain. And we had good success with that. We could see that the model was working very well even if there was no keyword match; it could really match the concept. So it was exciting to see all of that. But then my colleague, Noah Constant, who's still actually in Google Research, had the idea that if we trained the neural network a little differently, we could get it to work across languages. And I hadn't thought of that idea myself, but what he was saying was making sense. And frankly, the idea was very exciting, because that's not something that had been shown or demonstrated before. So we wired it up, and we hit train and it trained for a few days. And I remember – Noah is a linguist by profession, so he speaks four or five different languages, and he's fluent in Mandarin Chinese – the model finished training, and he called me over to his desk. We sat down and opened it up for the first time, and we had this corpus of material in English indexed. He started typing a question in Chinese, and as soon as he hit enter, the system returned its results entirely in English. That was a moment that blew my mind. Because it's one thing in theory – we understood that this would probably work – but it was still hard to believe that it could actually work, that you can search across languages. And now, although I don't know how widespread the knowledge is, for the last few years information retrieval systems based on neural networks have worked across more than just two or three languages. They can work across dozens or even hundreds in some cases with the recent models. So anyway, that's a concrete example.
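
For readers who want to try the effect themselves, here is a small sketch of cross-lingual semantic search using a public multilingual embedding model from the sentence-transformers library; the model choice and the corpus are assumptions for illustration and are not the internal Google system described above.

```python
# Sketch: query in Chinese, corpus in English, matched on meaning rather than keywords.
# The model name is an assumed public multilingual embedder, not the system from the interview.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# English-only corpus, embedded and indexed once.
corpus = [
    "Employees receive twelve weeks of paid parental leave.",
    "The data center uses liquid cooling to reduce energy consumption.",
    "Quarterly revenue grew eight percent year over year.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Chinese query: "How long is the parental leave?"
query = "育儿假有多长时间？"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity ranks the parental-leave passage first even though the query
# and the corpus share no keywords and are not even in the same language.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)[0]
print(corpus[hits[0]["corpus_id"]], hits[0]["score"])
```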

Wow. So let's see. I mean, I guess it's the technological equivalent of what a bilingual person would do in their own brain, which is sort of the whole idea behind neural networks to begin with. So it's a concrete validation of that idea, which is exciting.

Exactly. Just to be able to see a computer system actually doing that was really exciting and it did blow my mind.

So you co-founded Vectara in 2020… the transformer paper was 2019, am I correct?

No, no, 2017. I think that was a key moment for language research. Also, the year after, BERT was released, which introduced the idea of pre-training; that was 2018. So those were really two of the foundations of everything we're seeing today.

Okay, so then give us some insight into the creation of Vectara and what brought you to start the project.

Well, I would say that the model I created, which I've told you about, with Noah and another gentleman, became very, very successful within Google. And my background is not as a researcher. My background is, first and foremost, as an engineer. So rather than directly publishing papers – I did eventually publish a paper about the system in 2019 – from 2017 through 2020 I was actually focused on trying to integrate this into different products at the company. I wasn't successful in every case, but what I saw from that hands-on work with different teams was that this was very broadly useful technology. I also got a chance to see where this technology didn't fit very well into a product. But I saw enough that convinced me that really this capability ought to be packaged up as a platform and made available to product teams. And then product teams should just be able to essentially integrate this on their own. If the APIs were simple enough, it would be possible without needing to involve machine learning engineers and researchers, which is how we were doing it in those years at Google. But obviously, that kind of project is outside the scope of what a research team is trying to do. So that's not the kind of thing I could really pursue within a research group. Research groups are pushing the envelope of research, publishing papers, that kind of thing. So that's what led me to head out in 2020 with Vectara.

Got it. For people who don't know much about Vectara and what you guys specialize in, what's the secret sauce? You're trying to make that technology available to basically anybody. Give us some examples of industries or case studies or situations where Vectara's solution changes the paradigm for people?

OK, that's a great question. I would say that most people's journey into AI, especially for, let's say, the average CTO, really began after the release of ChatGPT. Everyone could play around with that system, and everyone who did clearly understood that this thing has a lot of potential. And in businesses, they started to think about how to make their business run more efficiently with this technology. Well, the insight is to get ChatGPT on your data. For it to be useful in a business setting, it needs to be aware of your data. The most efficient way to make a large language model like ChatGPT aware of your data is through a pattern which these days is called retrieval augmented generation. It means that if I'm sitting down as a user of this system and interacting with, let's just say, an LLM like ChatGPT, I put in a request. Step one is I'm going to go out and retrieve everything that's relevant to that request, whether it's from a corporate knowledge base or something else – it could be HR manuals, whatever you can imagine. I'm going to look over some corpus of text and retrieve what's relevant, then present the LLM with the initial request from the user, the relevant information from that knowledge base, and then ask it to formulate a response. When you do that, hallucinations drastically reduce, and you've essentially got ChatGPT on your data, which is what everyone wants. That pattern, retrieval augmented generation, is what Vectara implements. And it implements it as a serverless cloud solution, so you're not maintaining infrastructure. You don't need SREs to run that infrastructure. You don't need machine learning specialists. It's provided behind a set of APIs that's basically targeted at a software engineer to use, not assuming any knowledge of machine learning at all. It also auto-scales. As your load goes up or down, and you put more or less data in the system and send more or fewer queries, the system scales to accommodate all of that. That's basically what Vectara is for.
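
As a minimal sketch of that loop, the code below retrieves a few relevant passages and then asks an LLM to answer only from them. The retriever is a deliberately naive keyword-overlap stand-in (a real system would use a vector index or a managed platform such as Vectara), and the OpenAI client and model name are assumptions for illustration.

```python
# Minimal retrieval augmented generation loop (illustrative only).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def retrieve(query: str, knowledge_base: list[str], top_k: int = 3) -> list[str]:
    """Stand-in retriever: rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    ranked = sorted(
        knowledge_base,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def answer_with_rag(query: str, knowledge_base: list[str]) -> str:
    # Step 1: retrieve everything relevant to the request.
    context = "\n".join(retrieve(query, knowledge_base))
    # Step 2: present the LLM with the user's request plus the retrieved facts,
    # and ask it to answer only from that grounding material.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat-completion model works
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context. "
                        "If the context does not contain the answer, say you don't know."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

docs = [
    "The parental leave policy grants 12 weeks of paid leave to full-time employees.",
    "Expense reports are due by the 5th of each month.",
]
print(answer_with_rag("How long is parental leave?", docs))
```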

I can see a lot of applications in that. Can you share some examples from personal experience with clients of yours or anything where people could get a distinct concrete example of things that have been done with Vectara?

Yeah, so first I can speak broadly, and I'll speak specifically about Vectara as well, because what's true broadly has also been true for us. Really, the first applications of a lot of generative AI have been in contact center and customer service automation, and sales automation, because every second of efficiency that you give to a rep directly translates into the bottom line and dollars saved. So that's where generative AI was used initially. That's actually where many of our initial customers also came from. For example, Conversica, which helps with sales automation and customer support automation – they're taking advantage of our platform. There are also other companies in the same area using Vectara. Now, as time has gone on, other industries have gotten interested. For example, there's interest from the biomedical field, where they're using it for research. Recently, the legal field has also been showing a lot of interest in this because there's a lot of reading and interpretation in the legal field. The interesting thing is that I've described a series of applications where the stakes are getting higher and higher. If I'm dealing with customer service, and let's say I'm dealing with items that are $100 or less in value, if the AI messes up and says the wrong thing, it's not great, but it's not the end of the world. On the other hand, if we're talking about a legal case, we could be talking about millions of dollars at stake from saying or doing the wrong thing or drawing the wrong conclusion. It's exciting to see on the one hand, because truly AI does have potential in all of these areas – but that's why you really need to get control of hallucinations. A 3% hallucination rate might be completely acceptable for a production solution for customer service automation, but it wouldn't fly at all in the legal or the medical fields.

Yeah, and when it comes to robotics and human interaction, there are also safety problems with that. So, you guys sort of spearheaded, you know, the hallucination quantification, or – I don't know what the right term is or what you guys call it. But the idea that you could actually quantify or put some metrics down onto how large language models hallucinate, or generative AI hallucination in general. Can you give us some insight into how you got started with that and where it is currently?

Again, one of the main thrusts of Vectara is that we're going to give you advanced AI – you can call it generative AI – in a very safe and managed way without a lot of the complexity. One of the important aspects of safety and manageability is taming hallucinations. The most important thing you can do is to switch from directly interacting with an LLM that has no grounding in your organizational data to grounding the LLM in your organizational data: retrieval augmented generation. Okay, Vectara handles that part of it already. So what's left? Where else can the hallucinations creep in? They can creep in if the retrieval system malfunctions, which is still a possibility. But the other place they can creep in is, let's say the retrieval system retrieves all the correct and relevant information, but in the process of drawing conclusions from that data, the LLM itself makes a mistake. Just to make it very concrete, let's say that I'm asking simply for a summary: “summarize our parental leave policy from the HR handbook.” So what we wanted to study was the rate of making errors in that summarization process. In other words, if I present you with the correct facts, can you summarize them accurately? What we found is that there's actually quite a bit of variance between the top LLMs out there in how well they can summarize accurately. We had models – I think an earlier version of Google PaLM – as high as 27%. Twenty-seven percent of the time it was producing inaccurate summaries. Whereas at the other extreme, GPT-4 was only producing inaccurate summaries 3% of the time. I'd say that that's actually good enough for many real-world use cases. Not all, but many. We released the models that actually quantify the hallucinations as open source. We tried to be as transparent as we could be about our methodology for doing the evaluation, and we basically stuck to very rigorous academic standards for how we did it. The intent was not just to create a leaderboard, but to provide the researchers who are developing these LLMs the tools to actually improve on this particular metric of hallucination in the process of summarization. What we hope we'll see, and what I think we're already seeing, is that over time that rate, while it's never going to reach zero, is going to reach a point where it's at or below the rate at which an average person would hallucinate while summarizing. Because we're not perfect either.
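
One simple way to approximate that kind of measurement is to treat the source passage and the generated summary as a premise-hypothesis pair and let an NLI-style cross-encoder judge whether the summary is entailed or contradicted. The sketch below uses a publicly available cross-encoder via sentence-transformers; the model choice is an assumption and is not necessarily the open-source evaluation model Vectara released.

```python
# Sketch: flagging summaries that are not supported by their source text.
# The judge model is an assumed off-the-shelf NLI cross-encoder, used for illustration.
from sentence_transformers import CrossEncoder

judge = CrossEncoder("cross-encoder/nli-deberta-v3-base")
labels = ["contradiction", "entailment", "neutral"]  # label order documented for this model family

source = ("Our parental leave policy provides 12 weeks of paid leave "
          "for all full-time employees after one year of service.")
summaries = {
    "faithful": "Full-time employees with a year of service get 12 weeks of paid parental leave.",
    "hallucinated": "All employees receive 6 months of paid parental leave from their first day.",
}

for name, summary in summaries.items():
    scores = judge.predict([(source, summary)])[0]  # logits over the three labels
    verdict = labels[scores.argmax()]
    print(f"{name}: {verdict}")
# Counting how often summaries come back as 'contradiction' (or fail entailment) over a large
# benchmark is one way to arrive at a hallucination rate like the percentages quoted above.
```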

Yeah, I mean, 4% doesn't sound great if you're thinking of a medical diagnosis, but then again, you'd have to check what the human medical error rate is in diagnosis, and it's probably higher than 4%, I would guess.

That's a key point. If you look at the number of fatalities in the U.S. due to human error in the medical system, it's actually shocking. Likewise, if you look at, let's say, the effort to develop self-driving cars, we're not trying to get to a 0% accident rate. That would be ideal. But if we can improve the accident rate to human levels or better than human levels, then that's a net win for society.

Yeah. That starts to feel like AI and humans encroaching on the same space at that point, when it starts to become real, when we can hit these human levels.

Yeah, and this is a very interesting topic, but you can look at the evolution of computer chess as maybe an indicator of how this might roll out in society. In 1997, Deep Blue beat Garry Kasparov, and that was the end of an era where human intellect was dominant in this space. But then there was a period of time, as these chess computers got better and better, where people discovered that if you paired a human grandmaster with a chess computer assistant, the combination would actually be better: it could beat a computer playing on its own, or a grandmaster playing on their own. And in a way, I think that was certainly very hopeful; I thought, “this could be the future of humans and machines collaborating.” Ultimately, though, as the computers got much, much better, they started to play at such a superhuman level that they could play better on their own than with a grandmaster. When the grandmaster would provide a suggestion and override what the computer thought, it always made things worse. And, you know, there was that transitory period of four or five years in between where the combination was the best. So I don't know. That's what I think is roughly going to happen in a lot of fields.

Okay. So you're prognosticating… human-AI collaboration preceding a human-less world, maybe (laughs)?

And just to be very concrete, we can look at a field like radiology, where AI is already making a lot of strides and advances. Depending on the task, there are already areas where machine learning works as well as the average radiologist in diagnosing certain conditions. But it's still true that the combination of the two of them is what we're really going for. Let the AI make the recommendations. Let the radiologist make the final decision. It's AI-assisted. But we may reach a point not too far in the future where… we realize that the AI is so good that anytime the radiologist chooses to override the AI, statistically, it results in a worse outcome. I think we will get to that point. And I realize that's disconcerting, but I think that's where it's headed.

I think that's a strong analogy – a sobering analogy – because there are arguments both ways, but for the people who are arguing for that sort of long-term vision of where this could go wrong, I think that's an interesting analogy that you're bringing forward, where we've actually got historical data that show how that worked, so it's definitely disconcerting. Going back to your earlier comments, it's really all about how we choose to govern ourselves, right? And it's kind of a litmus test that we're being put through at this point through AI.

That's right. It will require a lot of maturity and wisdom and desire for the collective good versus individual gain, I think, to get a good outcome for all of us, for humanity, from AI. And I'm just worried, I guess, by what I see around me.

Well, you guys open sourced your hallucination quantification model. I think that's the right way, leading by example and sort of trying to make it better for humanity. I think we've got examples of that going well.

Yes. And, you know, I'll just say that in general, Google's approach to the development of AI was always very open and idealistic. I always appreciate them for that. Many people have left Google Research in the past five or six years, but I think most of the people I've spoken to all feel that way: that Google was not only very open in letting us pursue research that we thought made sense to pursue, but was also very open in allowing those results to be published and shared with the rest of the world. So, for example, key papers like the Transformer paper in 2017 or the BERT paper in 2018 were shared with the world. And oftentimes those models were also released for the world to benefit from. So we have examples in front of us.

Yeah, it's a very interesting insight that you have, because I think it's generally very difficult for large enterprises to recognize when there's a step up in technology; most enterprises just choose to ignore it until it's too late, right? It's very rare that enterprises adapt and learn and change their business model, especially Google, which is in search. For them to take this technology seriously, to investigate it, to open source it, is radically different, I think, from every other enterprise we've seen, right? I mean, GM isn't the leading electric car maker. It's just the general case that it's hard for those enterprises to pivot and cannibalize their own business model. Can you give us a little more insight into how that's dealt with at Google?

I mean, I would say again, Google was very open with the research. I admired them for that and applaud them for that. They were actually very quick to start taking advantage of that research within their own product lines. I mean, I was working in that area. I know that BERT was also being applied throughout the company. But this technology is so powerful, and once the papers were published explaining how it was done – interestingly, it's not actually super difficult to understand; I can understand it, and there are a lot of people who understand now how this stuff is done – the knowledge became widespread. Eventually, I guess, it reached a point as hardware scaled up. One of the keys to all of this is the scaling up of hardware over time; that's what's actually enabled a lot of this AI. Because the theory for neural networks has existed for decades and decades. But until the hardware got strong enough that you could run a neural network with a billion parameters, you couldn't see the kind of effects that we're seeing today with an AI system that can reason and chat with you and all of that. So I think ultimately things reached a point where a lot of, you could say, aspects of Google's core business model came under threat, because the way we consume the internet is actually changing now. I mean, I'll just speak for myself. I don't use Google nearly as much as I did. I often find myself turning to GPT-4 – I have a paid subscription. I use GPT-4 to digest content from the internet because I don't want to go and click 10 blue links and try to scan and get distracted by ads on the side as well. It's a much cleaner interface, and it's doing a lot of the analysis and synthesis for me. But what does it mean? It means I don't see any of Google's ads anymore. There's a big shift underway. I even use GPT-4 to help me with coding now, which I never imagined. I've almost forgotten how to code some pretty simple stuff because I use it as such a crutch. Everything is changing, and I think that one of the consequences is that maybe in Google and other companies, there's less openness now, because there's a recognition that this kind of research can give a huge competitive edge. Maybe people are nervous about just giving it away so openly, and that's understandable. That's obviously not ideal for humanity as a whole, but that's what's happening.

Well, even if that's changed slightly, or on an individual basis, I think you're right in saying that their overall attitude and impact on the AI field have been extremely beneficial. Is there anybody else in the field that you think is really helping? Is OpenAI really open? Who else in the field is really making a big difference in a positive way?

Well, I'll just give you a couple of recent examples. I think one of the big concerns for LLM makers, like OpenAI, is the rapid commoditization. With the extremely powerful hardware that's out there, smaller and smaller organizations can create very competitive LLMs. It's not just a game that OpenAI plays anymore. First of all, what Facebook is doing with the release of the Llama models is great; I think if there was any hesitation, it was that the license was non-standard. Now, Mistral has released their models under an Apache license, and that really paves the way for a lot of people to use them. Even more significantly, or just as significantly, I think this week or maybe last week, Microsoft Research, which is, again, one of the leading labs in this area, released their Phi-2 model, which is not only much smaller, at about 3 billion parameters, but as capable as 7 and 13 billion parameter models from other companies. And they released it under an MIT license, which is even more generous than the Apache license. So, you know, those are really great contributions to the community.

Wow. OK, so good things to think about for the future. What's your take on AI in art? I know there's a lot of ethical issues with the use of content. What's your overall vision there?

I think it's a boon for art. I mean, at Google, we were pairing up with artists in experimental projects, even in the years that I was there. It's an area that's being explored in poetry and writing lyrics, increasingly in actually generating music, and obviously image generation. Now they're even going towards, I guess you could call it, video clip generation, and eventually it might turn into movie generation. So all of that is good. I'll tell you, I use GPT-4 now for generating images for my slideshows, which I would never have done before. I'm not a good artist, but I have a picture in my head and I can describe it to someone. So people's creativity is more easily expressed now, even if they don't have the technical skills to express it themselves. I think these are great tools. Now, where it might be problematic is for the commercialization of creative endeavors – the music industry, Hollywood – where they're trying to make money off of this; now things become a lot trickier for them. I think that's a whole other subject, but for art itself, I think it's a huge win.

I think a lot of artists are concerned, but then again, we've got historical evidence to show that the better the tools, the better the artists. So I think we're already starting to see some pretty impressive artists who are using AI the same way you're starting to use it to code, and getting either much more productive or much better overall and more creative.

Yeah, and here's the issue, I think. Let's say professional artists who are trying to make a living from the art that they produce – let's just say an actor or something. The concern for them is the Hollywood industry, the production studios, who are trying to turn the biggest profit on the videos or the movies they produce. You can often just create synthetic characters now that are quite believable and avoid paying actors at all. I know it's already happening for extras; in the Hollywood strike, one of the big concerns was the number of digital extras being created. Oftentimes these digital extras aren't created completely from scratch on a computer. They've done body scans of a real extra, and in the fine print of the contract they say you're giving us perpetual usage rights. And so those scans get recycled in movie after movie after movie. So I think it's very worrying. If I were an actor in Hollywood or something like that, it is something to be worried about, to the extent that I wonder if we're basically almost seeing the end of the era of big names like Tom Cruise. Is that even going to be how it is in the future? I don't actually think so. If you look at East Asia, which always seems to be a few years ahead of the curve on some of this stuff, for a few years now their influencer scene has been a mix of real people and completely digital influencers who have profile pics and videos, but it's all computer-generated. There isn't actually a real person behind the scenes. But they end up with hundreds of thousands of followers, and they're commercially valuable. I actually saw an analysis that corporate brands prefer those kinds of influencers because there's never going to be a scandal about them.

Okay, great. Well I appreciate your insight. This is hugely interesting for a lot of people, to be able to sort of peer into your brain. This field is moving so fast, right? I think your experience with this and, and your insight is really beneficial for people to hear. I really appreciate your time. Any closing remarks on anything that you're coming up with or excited about at Vectara for 2024?

First, I'd like to say thank you for the time and the opportunity. It's been my pleasure talking to you. We have a team of ML experts here at Vectara. We've worked in places like Google Research. I think for CTOs out there, one of the big concerns they have is that there's a high cost associated with implementing this technology. How do they know they're not going to implement something and then a month from now it's going to be outdated and obsolete? I think that strategically, what you really want to do is find a company you can partner with that's going to deliver this kind of advanced technology, this AI capability, to your organization, and leave it to them and be able to trust them as AI advances. They're going to keep upgrading your systems and making sure that you stay on the cutting edge. That's essentially the value proposition of Vectara. That's what we try to do for the organizations we work with. So I guess I'll leave it at that. The innovations in our platform are more or less continuous because the field is moving so quickly, and it's not slowing down in 2024. I think that proposition is very strong for the upcoming year.
