Spryte AI Spotlight: modl.ai CEO Christoffer Holmgård

We spoke with modl.ai's CEO, Christoffer Holmgård, about his background, the history of modl.ai, and the future of AI in game testing.

About Spryte Spotlight

Spryte Spotlight awards and highlights the very best CTOs, tech leads, and technical executives in various industries. Spotlight leaders are at the forefront of digital transformation for their organizations and society at large.

sprytelabs.com/spotlight

Highlights

How do you take an application that's so complex and provoke it into the right state where it breaks so that you discover bugs? I think the difficulty of that problem is the reason that most QA is still done manually.

Christoffer Holmgård

About

Get to know Spotlight leaders. In our interviews we delve deep into how they think and what they’re building.

2024 is the year AI will go mainstream

Will we be able to deliver on all the promises that have been put out around AI?

The mind-blowing moments in AI for Christoffer Holmgård

Watch Christoffer's full interview

I'm here with Christoffer Holmgård, who's CEO and co-founder of modl.ai. Thanks for taking the time, Christoffer.

Yeah, thanks so much for having me. Excited to be here.

I was looking at your resume. You've got a PhD in AI, and I think it's in procedural content generation.

Yeah, that's right.

Very specific. I'm very excited to discuss that, because I think generative AI has been on everybody's mind this year. And so I'm very excited to learn more about the content generation side and also everything you're doing at modl.ai. Let me start very broad. You know, people are getting a little anxious with AI. Some people are getting super excited. Some people are getting downright scared by what's going on right now and some of the capabilities we're seeing out of these systems. Which camp are you in? Do you think AI is sort of the end of humanity here, or is it a boon for our next evolution? You know, where do you stand?

I'm mostly in the excited camp, but I guess that's what you would expect from somebody building an AI company, right? But, you know, I think 2024 is in many ways the year where AI goes mainstream in many application areas. We've had a lot of talk about how it's going to move into work life, into social life, into other parts of society, and I think we're going to see some of the practical effects of it now. And I think we're going to see people who are not in the AI space gaining even more experience with what AI can do. So my guess is that it's going to have an impact on productivity in many different aspects of life. And net-net, I would expect that most people are going to be better off for it. We're going to get more stuff done. We're going to be able to move forward faster on certain tasks and things like that. So that's really where I, very pragmatically, think about the impact of AI, where it's going to be. It's another enhancing technology which will help us do things that we would be trying to achieve otherwise anyway.

Cool. Well, I think it makes sense considering your background. The emotion you're feeling is excitement. Is that the main emotion? Are there any other emotions involved?

Excitement, fascination. A little bit of trepidation: will we be able to deliver on all the promises that have been put out around AI? That's something that often happens with AI, right? There's a lot of promise to the technology, and then once it goes out and reaches the real world, you see what the actual practical applications are. But I definitely think, with everything we've been seeing with generative AI at the moment, there's already been a lot of delivering on some of that promise. And then I guess another reason for trepidation, of course, is that artificial intelligence, and especially gen AI, like any technology that's having a big impact right now, is also a social technology. And I think that's maybe one of the things that we'll have to take some care with, in terms of how it impacts our communication, how it impacts our concept and notion of truth, and things like that. We're seeing that with deepfakes; they've been around for a number of years. But as these capabilities get better and better, that's one of the places where, even at the largest societal scale, we'll have to figure out what that means for us as a society and how we manage this technology in a responsible way.

Yeah, yeah, definitely a lot of challenges when it comes to figuring out what truth is, right?

That's it, yeah.

Yeah, and you deal a little bit with that. Well, look, before we get into that – is there a moment? You mentioned fascination. I was fascinated when I first saw the image generation capabilities of AI, right? There are a lot of systems out there right now – DALL-E and Ideogram and some of those – and that's really what caught my eye, more than the text-based versions; I guess I'm more of a visual person than a text person. What's the moment in your mind where AI, any AI system, really surprised you and you felt that fascination?

Well, in recent years – thinking about things that just popped up, where you were looking into a system or a technology and you thought, “this, I hadn't really seen before. This was really surprising to me” – I would think it would be some of the early GPT models. Obviously we were playing around with some of this stuff internally at modl.ai before GPT-3 really hit the mainstream. But some of the text generation that we saw coming out of that was really impressive to me. It was a moment of, “okay, we've sort of gone to a new place with this.” Then, you know, that quickly moved on to image generation, like you talked about. I think for me, a lot of it was around style transfer and image manipulation before we got to wholesale image generation. In the AI community, there's a lot of work happening on transferring styles across different images, doing that offline, then eventually doing it in real time, even in games. A few years ago, you were seeing some demos of that in games, where suddenly you hit a button and the game looked completely different. So those early technological advances were a couple of things that really opened my eyes. And then I would say some of the work that has been happening with things like decision transformers – things that aren't maybe as visceral in a media sense, but some of the progress that has been made in robotics and associated fields, where suddenly we have much more powerful methods for training robots by demonstration. That's the kind of AI that hasn't really hit the mainstream yet. People probably don't really see it, but there's actually a lot of development happening there as well. But I predict it'll take some more years before we see the impact of that on a broader basis.

Got it. Google has definitely been pushing out a lot of work there – I don't know if it's really agent-based or more sort of evolutionary AI; I think they're doing both – where they're letting the circumstances teach the system how to evolve or how to move or how to interact with its environment. Some of the visual representations that they do are quite amazing, where you see the system learn to walk or learn to fly, right? And create new types of movements that perhaps we've not really even seen in nature. I think some of those were quite striking to begin with. And I think all of the different applications that we're seeing, whether it's text or image or movement or systems, are breaking new ground in all these different fields. Of course, you come from the gaming field. Tell us a little bit more about how you came to found modl.ai, and what brought you to this task that you're on right now.

Yes, super happy to. Well, I come from the gaming field, and I also come from the field of, I guess, behavioral, statistical and cognitive psychology. And those are kind of like the two things that we're putting together in modl.ai. So when I got started, I was interested in game design, games and things like that, but did a lot of my early formal training in psychology and psychometrics. And that got me really interested in human behavior. And obviously, you know, the way that you understand human behavior in psychology and psychometrics is often through statistics – observing behavior, trying to model that, trying to understand how people respond to different situations. And once you start working with that from a statistical perspective, it very quickly moves into the territory that we call AI today. So think of it as kind of like a pre-step for that, I guess. Then at the same time, because I was very interested in game development, I became a game developer. I ran a couple of game studios, shipped games across multiple different platforms, and then around 2011, having worked on these two streams separately, I got the opportunity to combine the two in the research community. So I went back to school – I was still making games on the side – and ended up doing the PhD that we talked about, doing a bunch of studies on how we can understand, model and copy behavior inside of video games. So how can we basically model and replicate what people do when they're playing video games? That became the topic of a lot of my research and my work. So I was pursuing that, still building video games. And then around 2018 – I'm sure a lot of the people in the audience for this show will remember – there was a lot of development with things like AlphaGo and AlphaZero. A lot of technical progress was made around that time with things like deep learning, and it just became evident to a lot of people that suddenly we had made another jump in terms of how we can make computers interact with things like video games and virtual environments. Some of it was based on new deep learning techniques. Some of it was based on new hardware. And some of it was based on figuring out how to leverage human demonstrations – basically looking at how people play these games, then porting that in and finding efficient methods on new hardware, new ways of leveraging that. So suddenly, we had a computer that could beat the world champion in Go – completely unprecedented, and in many ways a much more open-ended problem than, for instance, being really good at chess. So I was looking at that together with some of my research colleagues and some of my game development colleagues. We looked at this progress, and we also talked to a bunch of game developers. Historically, the game development industry hadn't really been leveraging machine learning and what we call modern AI today to a large extent. But suddenly, there was this openness. It was kind of like the technological logjam was breaking.
And so we all got together and said, “listen, if there ever were a time to start a company that tries to leverage some of this and bring it to game developers for them to use in a general fashion, it's probably now.” So that's why we went ahead and started modl.ai, really with the notion of taking some of this new technology – things we had been working on for years in the labs – and carrying it out to the games industry, figuring out how to deliver it to as many game developers as possible.

Fascinating. There's a lot to unpack there, I think. Gaming, to me, doesn't get its due merit. As humans, play is inherently how we learn, and how we begin to learn from a sociological perspective. Gaming is often trivialized, in the sense that we don't realize how much of an impact it has on the human psyche. And I see it in industry all the time, where the gaming field is always a couple of years, if not a couple of decades, ahead of where we are in other industries, right? I mean, you could think of first-person shooters translating over into defense training scenarios. It took maybe 15 or 20 years until the defense field started taking that seriously and really incorporating it into their training programs, and you can see that in HR, and in every industry there's something to be learned from gaming, right? And it often takes us a lot of time to realize what that is and then put it into practice. So it's actually really interesting that you started with the human psychology element while also having the fascination for gaming, and now you're applying the one to the other, right? I think that's brilliant. Tell me a little bit more about this early idea you had – that if there were ever a time to apply these things to gaming, it was now. What was the original application that you thought up? What was the birth of modl.ai? What were you trying to do then?

I think where we started out – and it's back to what we do today, so it's still the same core vision – the fundamental notion was that if we can understand what people do inside of a video game, if we can observe it, if we have the right models for understanding it, we'd probably be able to replicate it. And if we can replicate what people do inside of the game, that must be useful somehow, right? So really the notion that got us going in the beginning was this idea of having a virtual player that's able to interact with the game in different ways, at different levels of intelligence. But rather than thinking of that as an experience – because it can be an experience in a multiplayer game, you can play against the AI, and that's interesting, but people also like to play against actual other humans as part of the social experience, so that's one thing we do – if we de-emphasize that a little bit and think about how this could be of use to the whole games industry as such, that's where we came into it. And so we went out with that thesis. The very first project that we ended up doing was for a multiplayer mobile title that a studio in San Francisco was working on. We were talking to them, and we ended up building a bot that would play their game like a human player. They were building this multiplayer game with a small team, as you often are – especially in the early days of mobile game development, you have a small team, it's fast-paced, you're iterating as fast as you can to find a good game experience – so we built these bots for them, and we just saw that it was super enabling, because when they were playing the game they suddenly had somebody to play against. They didn't have to organize or coordinate. They could try out and understand the dynamics of what they were building. They could start having those experiences, and they would get a different perspective on the artifact they were building as they were building it. And then we took it to the next level, which was interesting, because the bots were relatively fast and it was a smaller game. So suddenly, they were also able to run simulations. They would pit the AIs against one another and start using them to generate data about the dynamics of the game – “how does this play out? How does that play out?” – all of these hypothetical scenarios. And I think for us, that first case was what made us really double down on modl.ai as an idea, because we could see the impact it was having on that team. That led us to thinking, “okay, this idea of having simulated behavior can be a really powerful toolkit across a bunch of different use cases in the games industry.” So if we can build a piece of software, and the pipeline and all the things that go with it – kind of like a toolkit that game developers can put in there and build out from – there are going to be a lot of use cases that we start enabling by giving that technology to folks.
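
To make the bot-vs-bot simulation idea concrete, here is a minimal sketch in Python. The game model and the two policies are entirely toy stand-ins (the interview doesn't describe modl.ai's actual system); what it illustrates is the loop of simulating many matches and aggregating outcomes to generate data about a game's dynamics:

```python
import random
from collections import Counter

# Toy game model: two bots repeatedly pick moves; the outcome depends on
# their choices plus some randomness. Purely illustrative.
MOVES = ["aggressive", "defensive", "economic"]
# Rock-paper-scissors-style dynamics: each move beats exactly one other.
WINS = {("aggressive", "economic"), ("economic", "defensive"),
        ("defensive", "aggressive")}

def play_match(policy_a, policy_b, turns=20):
    """Simulate one match between two move-selection policies."""
    score_a = score_b = 0
    for _ in range(turns):
        a, b = policy_a(), policy_b()
        if (a, b) in WINS:
            score_a += 1
        elif (b, a) in WINS:
            score_b += 1
    return "A" if score_a > score_b else "B" if score_b > score_a else "draw"

random_policy = lambda: random.choice(MOVES)
aggro_policy = lambda: "aggressive" if random.random() < 0.7 else random.choice(MOVES)

# Pit the bots against one another many times and aggregate the outcomes,
# generating data about the game's dynamics ("how does this play out?").
results = Counter(play_match(aggro_policy, random_policy) for _ in range(1000))
print(results)
```

A real balance pass would swap the toy dynamics for actual game clients, but the shape is the same: simulate many matches, aggregate outcomes, inspect the distribution.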

Cool. So I think one of the places where AI – as you mentioned, the idea has been around for a long time – it was really the convergence of the data, plus the modeling and the theory, plus the infrastructure at scale, all three of those components, that launched us into a new era where the results are breathtaking. In your case, in gaming, it doesn't seem like the data would be an issue. There's no scarcity of data with games. You've got millions of users. So that's a great application, obviously. I remember when video game studios used to have rooms of kids getting paid to play the game and find bugs. Is that still going on, or is that pretty much over now?

The rough idea is still there. It's been professionalized quite a bit, but the fundamental notions are kind of the same. And it's interesting what you're saying about data, by the way, because yes, games generate a lot of data, but the data is different from game to game, typically. And that's one of the things that's really interesting about the games industry: every game has its own data format. One of the things we're interested in long term is being an agent that can harmonize that, so that you start having reusable data across different games. But putting that aside, and thinking about how game testing works at the moment: it is still a largely manual process in the games industry. And I think that, to a large extent, that comes down to the fact that games are obviously super multimodal. They can have text. They can have movement. They can have higher-level reasoning. But also, if it's an action game, there's moment-to-moment twitch behavior where you have to act on something. It's such a wide range of experiences and interactions that you need to have. And on the other side, what you're looking for really benefits from having human creativity in there. When you're building a game, you're integrating the output of the developers, the engineers, the programmers who are building the code, but you also have the 3D artists, 2D artists, people working on the user interface. Maybe you have writers. You have the audio people. Maybe you have some mocap in there. Who knows what? There's a lot of technology coming together for every game production, right? And so you have all of these disciplines participating in building it, but the complexity is super high, and not everyone is necessarily at the level where they'd be able to debug or contribute to the code side of things. The complexity also just means that there are so many different places where things could fail. So if you're building a video game, it could be that a writer made a mistake in some of the quest design and that throws a bug or an error. It could be that you forgot to put in an asset that you needed inside of the game, so when you hit a point in the game where that asset needs to load, there's nothing at the reference. Nothing shows up. Or add to that just the sheer complexity of the software itself, or of the development part of the project. All of that comes together. So how do you take an application that's so complex and provoke it into the right state where it breaks, so that you discover these bugs? I think the difficulty of that problem is the reason that most QA is still done manually, and automation is still very much a project in progress in the games industry.

How do you see that? Is this one of these jobs that's going to disappear? Is that realistic, or do you think there's still going to need to be a human element at some point?

It's one of those jobs that will change, I think. And the reason I say that is that the general wisdom in the games industry at the moment is that around 20% of your development budget goes into testing and QA writ large. So it's a rather large amount that's being invested in this already. People know that it's important. In spite of that, the last statistic that I saw from an industry survey was that around 61% of games ship with well-known, problematic defects. So in spite of this massive investment, we're not solving the problem – not as well as we want to, anyway. I think that's the way to think about it, right? So when AI and automation come in – which is obviously one of the use cases we're working on at modl.ai with these virtual players – it becomes kind of like a scaling factor over your existing QA team. It starts enabling the human developers who are contributing to testing the game to work at a greater scale once they have some of these technological components in place. So imagine you have a QA testing team who are spending a lot of their time just testing the basic functionality of the game. Now take that functional level of testing away from them – imagine it gets automated and runs once or twice a day in your CI pipeline – and then imagine what those people could do for your application, right? You unleash their creativity, their learning, and you start applying their efforts to the things that can move the needle to an even greater extent. So I see it as a case of AI working in conjunction with humans rather than a straight-up replacement.

Right, so improving human efficiency and getting larger coverage, right? I've talked to a CTO who is focusing right now especially on code testing – unit testing for Java and C and C#. Is it the same in game development testing, where you're able to run unit-level tests, then integration tests, then end-to-end testing? Maybe walk through how your agents run through the game – what's the logic there? How does that work?

Oh, sure. Yeah, so what we're doing as a company is actually delivering different kinds of AI bots, or different kinds of AI agents, for testing – for different purposes and for the different specific problems you have when you're developing a game. So we're not down at the code level. We're not looking at the source code. This is all about replicating behavior inside of a game application. And for that, we have three major use cases that we're addressing at the moment, or three different kinds of behaviors that we deliver to game developers. One we call exploratory or destructive testing. That's basically making sure that you've touched every wall in your game, pushed every button, tried a bunch of combinations of different things. It's about stress testing the game logic to find all those corner cases where you have issues that you didn't know about. And again, that's really time consuming, and it's not necessarily a very interesting job for a human tester to, every day, touch every wall in the game to make sure that the geometry didn't break, and those kinds of things. So that's one use case. For that, we have certain bots that leverage behavior you wouldn't even recognize as a human player. It's much more like a Roomba, a robotic vacuum cleaner, having a go at your game. It's really trying to fuzz and stress test everything that's in there. But that can be helpful, because if you're running it very frequently, it'll provoke your application, your game, into states where it breaks and keels over, and then give you a ton of telemetry and analytical data for you to consume and understand why that happened. So that's number one. The second thing that we're looking at at the moment is allowing humans to put demonstrations into the game that the AI then pretty accurately replicates when you rerun the application. So imagine that you have a quest – maybe it's an MMO, and you have to spawn into the main city, walk to a couple of different waypoints, talk to a quest giver, then chop down a tree, get some resources, and combine them through crafting into some other resource. That's a pretty standard case. If you're building an MMO, that's just one of the things you might want to run through continuously whenever you're testing the game, to make sure it still works. And then you add a ton of these over time to make sure that all the functionality is there. That's a place where we also think, pragmatically, AI can have a ton of impact. We simply record what the human is doing, imitate it inside of the game, and then replicate it whenever we have a build. And then a good question is, “hey, why do I need AI for that? I can just record what the human did and straight up replay it. Why do we even need intelligence for this at all?” The answer is that most games have some kind of variation or randomness, some sort of unpredictable events – that's kind of what makes a lot of games interesting, as it were – so you can't just spin up your game and expect a straight-up, action-by-action replay inside of the game to produce the same behaviors. You need something that's capable of navigating the game situation a little bit, to try and stay on course but also deal with uncertainty. So that's a big one.
And then the third one is that for certain games and titles, we'll go to the other end of the spectrum – kind of like what I described at the beginning of this session – where we record somebody just playing the game free form. We collect a ton of data around that and then train an agent that imitates general play behavior inside of the game. So say you had a first-person shooter: we'd collect 50 or 100 hours of people playing that game and then train that back into an AI bot that you can put in the game to play it. You won't give it specific instructions, but it'll play the game reasonably, kind of like the way a human player would.
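
As a rough illustration of the first, Roomba-style use case, here is a minimal exploratory-testing loop. Everything in it (GameBuild, its step method, the simulated crash) is a hypothetical stand-in, since the interview doesn't expose modl.ai's actual interfaces:

```python
import random

class GameBuild:
    """Hypothetical stand-in for a built game exposing reset/step."""
    ACTIONS = ["forward", "back", "left", "right", "jump", "interact"]

    def reset(self):
        self.state = {"x": 0, "y": 0}
        self.crashed = False

    def step(self, action):
        # A real build would advance the engine one tick; here we nudge a
        # toy state and, rarely, simulate hitting a defect.
        self.state["x"] += {"left": -1, "right": 1}.get(action, 0)
        self.state["y"] += {"back": -1, "forward": 1}.get(action, 0)
        self.crashed = random.random() < 0.001

def fuzz_episode(game, max_steps=10_000):
    """Hammer random actions at the game and keep a trace for reproduction."""
    game.reset()
    trace = []
    for step in range(max_steps):
        action = random.choice(game.ACTIONS)
        game.step(action)
        trace.append((step, action, dict(game.state)))
        if game.crashed:
            # Hand back the last 50 steps as a reproduction trace.
            return {"crashed": True, "repro_trace": trace[-50:]}
    return {"crashed": False, "repro_trace": []}

report = fuzz_episode(GameBuild())
print("crash found" if report["crashed"] else "clean run")
```

The replay and imitation bots described above would sit on the same observe/act loop, but drive the action choice from a recorded demonstration or a trained policy instead of a random generator.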

Got it. So for all three of these scenarios, you're then able to run them in your CI system for every new release, every new build, and just make sure that every build is at least being tested post-build. What's the scenario in terms of the CI/CD pipeline – where does your testing fall in?

Yeah, that's right. Once you've integrated this and it's set up, whenever you have an artifact, a build, come out – depending on the game, that could happen multiple times a day or maybe as infrequently as once a week; it really depends on how you work – that artifact is the built game, so it's representative of the artifact that you'd eventually want to ship. You then put it either in your own infrastructure, in a copy of our service, or you upload it to us, and we spin up your game and the bots play it. That's the fundamental notion. And then depending on what you're trying to address, you could use any of the methods that I described, any of these behaviors or a combination of them, to get relevant data for your game. Or you can use it creatively in other ways. One thing that we've seen people request is: well, if I can spin up my game – a game client, the artifact that I would ship – and it has a bot in it, and my game's a multiplayer game, then I'd like to do a bunch of these and point them at my multiplayer server, and it suddenly becomes naturalistic load testing. So there are many ways that you can combine these capabilities.
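
The naturalistic load-testing idea is essentially "many full game clients at once." A sketch of that pattern with asyncio – the server address and the bot behavior here are made up for illustration:

```python
import asyncio
import random

SERVER = "game.example.com:7777"  # hypothetical multiplayer server

async def bot_session(bot_id: int, duration_s: float):
    """One bot client behaving roughly like a player for the duration."""
    await asyncio.sleep(random.uniform(0, 2))  # staggered joins, like real players
    print(f"bot {bot_id} connected to {SERVER}")
    loop = asyncio.get_running_loop()
    end = loop.time() + duration_s
    while loop.time() < end:
        # A real client would send game actions here over its normal netcode.
        await asyncio.sleep(random.uniform(0.05, 0.3))  # "think time" between actions
    print(f"bot {bot_id} disconnected")

async def load_test(n_bots: int, duration_s: float = 10.0):
    # Because each bot is a full game client, the server sees realistic
    # connection patterns and traffic rather than synthetic packets.
    await asyncio.gather(*(bot_session(i, duration_s) for i in range(n_bots)))

asyncio.run(load_test(n_bots=20))
```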

Got it. And are you limited – forgive my complete ignorance on this – but because time is involved in these games, are the bots acting in real time? Or are you able to speed through the game?

We typically start in real time. And then it actually comes down to the game itself – how it's architected and how it runs – whether you can speed up time or not. So the base level is real time, and then sometimes you can speed it up to get faster results.

Yeah, I guess you can always scale up the number of agents. Can you have multiple players in one single game that are all sort of… I guess it also depends on the game. Some of them you might be able to do that, and some of them you might not.

Yeah, it comes down to the game again. Now we're back to games being super complex, and it comes down to the specific case. That's the way it is. But yeah, sure. If you had a multiplayer game, you could connect multiple bots to the same game, and they'd all be doing things in a shared environment.

Yeah, I mean, it's super interesting, because as you were talking, I was thinking of three or four different applications of this in other industries. For example, you were talking about running through different scenarios in a certain type of game, and I was thinking you could apply that to finite element analysis in engineering and run through all sorts of different scenarios for physical modeling. And I'm sure an economist would love to have these multi-agent systems, because the basic problem they have in economic modeling is figuring out how people behave, right? I can think of a lot of applications toward that, toward creating better models. So I could see all of the things you're doing here having applications maybe 10 years from now, right? In a lot of different industries. That's kind of fascinating.

Yeah, and that goes right back to some of the motivation that I talked about in the beginning, with modeling human behavior, and play being super fundamental to human behavior, right? I could see some application areas spinning out of this in time, going in that direction. It's just that games are a super good target for starting this kind of work. I mean, there's an industry out there that needs it. You can see everything that somebody's doing inside of the game, and there's an immediate way of delivering it back into the same environment where you collected the data. So it doesn't get better than games for that, I guess. But I'll give you an example. I was meeting with an architect the other day, and it turns out they were building environments in virtual reality to show people, but now they've also started running different kinds of spatial analysis on an architectural plan before they build it in real life. And so we were talking, and he was saying, “okay, actually I have this model of a wheelchair user that I give to people so that they can see what it feels like moving around the space.” And he was like, “well, can you attach your bot to this and find all the places where the wheelchair user can go, and how?” I was like, “yeah, of course we can, because we'll just simulate it.” I was talking about the exploration bot, where we cover everything. Well, we could use it for that purpose as well, because now we have a representation of where that person can go and how, and we can drive that automatically. And so when you have a new architectural drawing, you can use this to understand where that person can go inside of the space.

Yeah, that's fascinating. I think the gaming aspect is great because you're bypassing all of the issues that these problems would traditionally have – the amount of data and how to simulate it, right? And the infrastructure – you can simulate all of that in the game, right? You're just trying to solve the actual problem, and then maybe you can take that out of the game and into the real world at some point. What technology do you use internally to build all of this?

So are you thinking sort of like algorithmically or in terms of our stack and how we're put together and everything?

Let's start with your stack, just so we get an idea of what you're building this on. And then, if you want, you can delve a little bit more into, I guess, topologies or algorithms.

Yeah, yeah, for sure. So internally, at the moment, we're all cloud-based. We're working to be cloud agnostic, but the majority of our workload is on AWS at the moment. And then we have our own platform and system for provisioning resources out of that and spinning up instances when we need to run a game. The kinds of machines that we use are a little atypical, I guess, because we're looking for cloud machines that have a GPU, because we video record everything that happens when we're testing the game. And obviously, a machine needs a certain CPU capacity and a certain memory capacity in order to be able to receive an average-sized game build and run it. And then we orchestrate everything using the HashiCorp stack, which is what we're using internally here. So Nomad is a big part of our setup. Basically, what happens is that when you upload a build to us, we provision that machine, we move the build over, we spin it up, we execute it, and then we send the results back to our own telemetry system to record all the things that were happening in the game. And then we serve it back up through our own web application that we've also made. So a lot of what we're using is in-house, and then based on pretty standard cloud services.
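
Pieced together from that description, the per-build flow might look roughly like the sketch below. Every function body is a toy stand-in for the cloud and Nomad calls a real system would make; none of this is modl.ai's actual API:

```python
def provision_gpu_machine() -> dict:
    # Stand-in for scheduling a GPU-backed instance (e.g. via Nomad on AWS).
    return {"id": "i-0123", "gpu": True}

def upload_build(machine: dict, build_path: str) -> None:
    print(f"moving {build_path} to {machine['id']}")

def run_game_with_bots(machine: dict) -> dict:
    # The GPU is used to video record the whole session.
    print(f"executing build on {machine['id']}, recording video + telemetry")
    return {"events": 1234, "crashes": 0, "video": "session.mp4"}

def release(machine: dict) -> None:
    print(f"releasing {machine['id']}")

def run_test_job(build_path: str) -> dict:
    """One uploaded build: provision, run, collect telemetry, tear down."""
    machine = provision_gpu_machine()
    try:
        upload_build(machine, build_path)
        return run_game_with_bots(machine)  # results surface in the web app
    finally:
        release(machine)

print(run_test_job("MyGame-nightly-build.zip"))
```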

Got it. Are you focusing on a particular game stack, things like Unity? Or are you pretty much game agnostic at this point?

Oh right, yeah, so we are game engine agnostic, in the sense that we have an SDK that you have to integrate into your game. What that basically does is allow us to collect observations from the perspective of the player in your game – like what's going on here, what's the state of different variables in the game – and it allows us to send actions back out. And so we have a process that sits outside of the game, next to the game build, and connects in through this SDK. Most of that is shared – that's, again, our own library that we've put together for that purpose. And then on top of that, we have a layer for each of the game engines that we support. So we have one for Unity, where we also have a GUI – you can find us in the Unity Asset Store and find a version of the plugin up there. And then we have the same for Unreal Engine. In many ways, once that integration is done, we just piggyback on the game engine's build process. Once you put the SDK in there, when you're building your game as a game developer, what comes out just includes our libraries and integrations. And then that's what we connect up against.
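
The SDK's contract, as described, boils down to "observations out, actions in." A minimal sketch of what such an engine-agnostic interface could look like – every name here is hypothetical, and the real SDK lives engine-side in Unity or Unreal rather than in Python:

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """A snapshot from the player's perspective inside the running game."""
    position: tuple = (0.0, 0.0, 0.0)
    game_vars: dict = field(default_factory=dict)  # arbitrary game state

class EngineAdapter:
    """Per-engine layer (Unity, Unreal, ...) behind one shared interface."""

    def read_observation(self) -> Observation:
        # A real adapter would query the engine through the integrated SDK.
        return Observation(game_vars={"hp": 100, "quest_stage": 2})

    def send_action(self, action: str) -> None:
        # A real adapter would inject input back into the game.
        print(f"action -> engine: {action}")

def bot_tick(adapter: EngineAdapter) -> None:
    """The external bot process: observe through the SDK, decide, act."""
    obs = adapter.read_observation()
    if obs.game_vars.get("hp", 0) > 0:
        adapter.send_action("move_forward")

bot_tick(EngineAdapter())
```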

Got it. I'm really appreciative of your time, and you're probably super busy getting all of this shipped out, so I don't want to take too much more. Maybe let's close it out with: what are you excited about in 2024? Anything you can share that you're working on that we should be looking forward to?

Oh yeah, absolutely. So last year, we launched the exploratory bot, the one that I described in a general fashion. And this year, we're bringing out the replay bot that I described to you. I think that's going to make a really big difference, both to the people who are currently using the product and to future customers. So that's a really big ticket item for us. Another thing we're working on that we're excited about is starting to support hardware targets other than Windows in a more general way. We've typically been focused on testing Windows builds – it's often the common denominator for a lot of game projects – but now we're really pushing to support many kinds of mobile builds and also console builds. For us, it's about being available on all the different targets that game developers want to use. And at the same time, we're putting out new features that we've verified with the devs – features they'd really like to see in the product to enable them to work better. So that's what we have on the roadmap for 2024.

Nice. So rounding out availability across the whole gaming ecosystem. Well, it's been a great pleasure picking your brain a little bit and talking to you. Thank you so much for your time.

Yeah, thank you too. Thanks for your time and interest.
