

Getting Started with H2O Driverless AI

H2O Driverless AI is a fully customizable award-winning AutoML platform that empowers data scientists to work on projects faster and more efficiently. It automates data preparation, feature engineering, model validation, model tuning, model selection and model ensembling, and also provides scoring pipelines for rapid standalone deployment out of the box, as well as model interpretability. In this webinar we'll demonstrate a new graphical wizard that makes it easy to create highly accurate models for any tabular dataset, including time series, image and text use cases.

3 Main Learning Points

  • Learn about the capabilities of H2O Driverless AI as the AutoML platform of choice for tabular datasets
  • Learn how to build sophisticated models in minutes using the new graphical Wizard
  • See examples of use cases that are ideal for H2O Driverless AI

Transcript

Today we're going to talk about Driverless AI. This is one of our flagship products at H2O.ai. So the company is called H2O.ai, and the products are often called H2O or something similar. We started a long time ago; my personal story with machine learning started in 2012, after several years in physics at Stanford, which is how I got to Silicon Valley. And I've been at H2O since 2014, so I've seen all the products, from open source H2O-3 all the way to Driverless AI, Hydrogen Torch, and so on. But I'm only really fully hands-on in those two machine learning platforms, H2O-3 and Driverless AI, and today I will talk about the overall setting of Driverless AI in the landscape. Then I'll also show you a couple of new developments that I'm very excited about.

They will make a big difference to your productivity using these tools, and I'll make sure that you see what you can do with these platforms that we're building, particularly the AutoML tools that are part of the platform. It's really a team effort, so thanks again to all the makers. Now let me introduce the overall picture. Some companies are starting out with "what can we do with AI?", and others are saying "we need it every day, that's our entire business model"; that's the business transformation everybody will eventually get to. But in the middle there's a lot of room: you can try out certain things, see if they work, see if they deliver value, and then multiply the things that work, scaling across the organization. And once you've figured out a system where everybody in the company can benefit from AI, then you really try to weave it in. It doesn't matter where you are on that spectrum, we have a product for you everywhere here. And the more you go to the right, the more people touch the products, the more they have to be user friendly, the more they have to be available in the cloud, and the more they have to have enterprise features like authentication, security, and so on.

And that's where we are: we're building this AI Cloud as a whole, as an end-to-end platform, from "what do I want to do?" all the way to mission-critical applications that actually transform your business by making smarter decisions. Along the way there's the data and feature store: how to deal with data, how to deal with models, how to build models, how to explain the models, how to look at the results in a flexible way so you can quickly make a dashboard, for example, then how to deploy the models, and how to build apps that serve these predictions. And then how do you do all of that in a flexible environment where people can collaborate? That's called the AI Cloud, and it's built for multiple personas, not just data scientists. But in this talk we'll present mostly the AutoML platform called H2O Driverless AI, and a low-code app development platform called Nitro. Together, those will make you even more effective at solving your problems.

If you go back a step and look at AI and machine learning: machine learning is really just statistics, and AI is really just machine learning, you could say. But in a way it's more than just statistics now. It's really able to read documents and tell you what's in them, tell you what's in a video, who's running around, where the cars are going. AI has become commonplace. But in the 50s, 60s and 70s of the last century, there were a bunch of small datasets with maybe a few columns, and people looked at those more or less by hand. Then they came up with clustering and nearest neighbors, and so on. Eventually the concept of neural networks was invented, and then it took about 50 more years before the hardware became powerful enough for them to actually do something useful. That's when deep learning came in. But at the same time, random forests and gradient boosting came in as well.

Those were basically statistical methods tweaked to the extreme, able to handle pretty much any dataset, any type of input. And together they are amazing, especially if you blend them together, or ensemble them together with stacking, which is an advanced method of ensembling, a kind of meta-learning. You make a model, and then make another model on the predictions. It's almost like a committee deciding which of the models is best, or how they should act together to make an even better prediction. And once you throw all these things into your toolkit, then suddenly you can pull out whatever you need for any given problem. That's exactly what we've done over the last almost decade now. We took the best algorithms and implemented them pretty much from scratch in Java, every single line built in house, and that became an open source distributed platform for training models. And at the end you get a Java deployment package out: you push a button, and you get source code generated automatically. So that was very easy.
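As an aside, the stacking idea described above can be sketched in a few lines of plain Python. This is an illustrative toy, not how Driverless AI implements it: the data and the two base models' out-of-fold predictions are made up, and the "meta-learner" is just a grid search for the blend weight that minimizes squared error on those out-of-fold predictions.

```python
# Toy stacking sketch: blend two base models with a simple meta-learner.
# The targets and base-model predictions below are made up for illustration.

y_true = [0.0, 1.0, 1.0, 0.0, 1.0]      # out-of-fold targets
pred_a = [0.2, 0.7, 0.9, 0.3, 0.6]      # base model A's OOF predictions
pred_b = [0.1, 0.9, 0.6, 0.1, 0.8]      # base model B's OOF predictions

def blend_error(w):
    """Mean squared error of the blend w*A + (1-w)*B on OOF data."""
    return sum((w * a + (1 - w) * b - y) ** 2
               for a, b, y in zip(pred_a, pred_b, y_true)) / len(y_true)

# The "meta-learner": search the single blend weight on the OOF predictions.
best_w = min((w / 100 for w in range(101)), key=blend_error)
print(best_w, blend_error(best_w))
```

Because the weight is fitted on out-of-fold predictions, the blend is never tuned on rows the base models trained on, which is the point of stacking.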

You could put models in production; the only problem was, back in 2013, you didn't know what settings to provide to those algorithms. That's called hyperparameter tuning, and it was kind of missing: you had to do it by hand. You had to guess how many trees you needed, or what depth, or how many neural net layers, and so on. Then in 2015 we built H2O AutoML, also open source, also built on top of the distributed Java platform, so it can handle terabytes of data and build automatically good models. Not just a model, but a good model, because it was tuning all the hyperparameters, and it was able to do ensembles, which made it quite powerful. And then we said, well, all the data scientists today are learning Python. We were mostly in R and had a little bit of Python, but the Java building blocks aren't as agile as Python building blocks. Once we went to Python, we saw that everybody was not just using clients in Python, but actually developing algorithms in Python, just like scikit-learn. But in the end they always called C++ under the hood, because they had to have really fast implementations.
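To make the hyperparameter tuning point concrete, here is a minimal random-search sketch. The scoring function is entirely made up (in practice it would be a cross-validated metric), and the search space is hypothetical; this is the general technique, not H2O AutoML's actual strategy.

```python
import random

# Pretend model quality peaks around 300 trees of depth 8; in reality
# cv_score would train a model and return a cross-validated metric.
def cv_score(n_trees, max_depth):
    return -((n_trees - 300) ** 2) / 1e4 - (max_depth - 8) ** 2

random.seed(0)
space = {"n_trees": range(50, 1001, 50), "max_depth": range(2, 17)}

best = None
for _ in range(30):                       # 30 random trials
    params = {k: random.choice(list(v)) for k, v in space.items()}
    score = cv_score(**params)
    if best is None or score > best[0]:
        best = (score, params)

print(best)
```

Random search like this was the manual work that AutoML automated, along with smarter strategies and ensembling of the tuned models.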

So Python became the glue. And that's exactly what Driverless AI is: it is the glue between all the building blocks, whether that's XGBoost, or TensorFlow, or Hydrogen Torch, or PyTorch, or NumPy, or datatable. All these pieces are built in C++, but the glue is Python. That's Driverless AI: calling all these library building blocks in the best possible way, at the right time, in parallel, in a robust way so that it doesn't ever break on any dataset. And it can handle time series, images, text and tabular data, basically any dataset you throw at it, and make predictions. Then in 2022, revolutionary products started: first came Wave, and then Nitro came this year as a consequence of Wave. That's a low-code app development platform. In pure Python you can write applications that are like an interactive website. So it's a web application, like Driverless itself, or H2O-3 Flow if you're used to those; you can make these applications, but the only language you need to write them is Python.

Also at the same time, Hydrogen Torch came along, which is the Kaggle Grandmaster effort, where the best of the best in data science build their secrets into code for pure deep learning use cases. That's very exciting, and you can do really crazy stuff; that's pure AI. It uses a lot of GPUs, and it's pretty much the most accurate thing you can find out of the box for deep learning problems. And then Document AI takes it another level further: it takes a bunch of documents and tells you what's in them. They don't have to look the same, they don't have to follow a certain template or anything; it's purely predicting from the structure of the document. So, very exciting evolutions. In this webinar, again, we'll talk about how Driverless plus the low-code app development platform can work together to your benefit.

Driverless itself is as accurate as you can imagine; we've spent pretty much more commits on Driverless than all of Apache Spark has. And that was after all of open-source H2O-3 taught us how to do it. So this is the second or third rewrite of a full algorithmic modeling kit in my experience in the last decade. It's obviously fast and scalable, it runs on all kinds of datasets, and you can completely customize it: literally every little transformation of the data, all the way to the models, the scorers, the metrics; everything can be customized in Python if you want.

So you know what's in it. You also get a Word document out at the end that explains all the steps, and you have a full suite of explainers in our MLI suite, the machine learning interpretability suite that's built into Driverless AI. So explainability and interpretability are important to us. You also get multimodal capabilities: as I mentioned earlier, you can have a time series that has text and images in it, and you want to see if they somehow predict the future. Normally it's either time series or image or text, but Driverless can mix and match all of these. It's easy to install: a single file, in Docker or a self-extracting tarball. And of course it's part of the cloud, whether that's the managed cloud where you just need a login, or the AI cloud installed on premise for you on any of the three cloud platforms, Google, Microsoft and Amazon. So there's no excuse not to try it.

And again, all the models that are built in Python automatically convert themselves into Java and C++ code, which is then easily taken to production. So you don't need to worry about pickles and all that stuff, unless you want to; you can of course also deploy in Python. This is an example of a pipeline: on the left you see there are six different models, threefold cross-validation, one XGBoost and one LightGBM. Together they predict some outcome for a credit card problem. On the right side you see one of those six models: it's a pipeline where the original features come in, the original columns of the dataset, and then they get transformed. Transformations here are things like one-hot encoding, binning of numerics into categoricals and then target encoding of those categoricals, dimensionality reduction, clustering, binning, weight of evidence; there are a lot of things we do to squeeze out the signal. And each of those has a tremendous amount of logic built in to be robust. In the end you get a pipeline that does all of this in Java, as you can see in the code example at the bottom, and that can be embedded in Spark or in any real-time system. The same works for C++ as well, and of course Python, which is the way we create the models in the first place; but they're all the same.
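Target encoding, mentioned above, is easy to sketch in plain Python. This is a deliberately simplified illustration (one global pass, simple smoothing toward the global mean, no cross-fold scheme), not the Driverless AI implementation, which adds much more leak-avoidance logic.

```python
from collections import defaultdict

def target_encode(categories, targets, smoothing=10.0):
    """Replace each category with a smoothed mean of the target.

    Rare categories are pulled toward the global mean so a handful
    of rows cannot dominate the encoding.
    """
    global_mean = sum(targets) / len(targets)
    sums, counts = defaultdict(float), defaultdict(int)
    for c, y in zip(categories, targets):
        sums[c] += y
        counts[c] += 1
    encoding = {
        c: (sums[c] + smoothing * global_mean) / (counts[c] + smoothing)
        for c in counts
    }
    return [encoding[c] for c in categories]

cats = ["a", "a", "b", "b", "b", "c"]
ys   = [1, 1, 0, 0, 1, 1]
print(target_encode(cats, ys))
```

Without the smoothing term (and, in real pipelines, out-of-fold encoding), a high-cardinality column would memorize the target, which is exactly the kind of subtle leak a robust transformer has to guard against.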

And all of them can give you reason codes, Shapley reason codes, for the original features and for the transformed features from our automatic feature engineering. So you can see whether your original feature matters, or whether the transformed version of it matters more, to some extent; there can be two different insights obtained from these two different views. So there are regular predictions, and then the Shapley predictions in two different forms, and you get all of that for all the different deployment scenarios. It's quite a lot of work to make it all exactly right, but we pride ourselves on doing that. The next slide shows you that people love it. And there's a reason we care so much about the feedback: it's the only reason we do it. We're really makers at heart, and any feedback you provide will greatly influence the product. Sometimes, within weeks or months of you saying something, there's a new release with that feature in it; it doesn't have to go through a long committee cycle where you have to vote and all that.

We of course have product management, but if you're an important customer to us and you want something to work, we'll definitely make it work sooner than you can imagine. So what happens normally? Normally you go into Driverless, and it looks like this. You go to the right side here, you say add dataset, you click on it, and your imported dataset can come from Snowflake or another database somewhere, or it can be just a local file; it can be in S3, for example. Wherever that file is, it comes in, it gets parsed, and it gets stored in this database of files inside of Driverless. Then you can browse these files, look at them, and so on. One of the actions you can take is to make predictions for a file, so you click Predict. The next thing you do is say what you want to predict: this dialog shows up, and you choose the target column; you can ignore most of the rest, that's for advanced users. Once you select the target column, that's enough. You can then say launch experiment, and it will run for pretty much every problem you throw at it; it will just run and do something. Now, is that AutoML? Yes. But is it useful or not? Well, in the worst case you get a warning that says, oh, there's a leak, for example, or there are a lot of duplicate rows in the dataset; maybe the cross-validation will be cheated that way, because the same rows are in the training and validation splits. And then you think you're doing well, because you trained on the same row that you're scoring on, and you say, yep, I got it right.

But actually it's not fair, because in production you will not get the same rows again; stuff like that. Those warnings sometimes are important, right? In most cases we think they're important, but most customers actually ignore them, because it's just a warning. That's the potential issue of AutoML with a single button: you run it and then you're done. But in reality you should not think it's over yet; you should look at the warnings and investigate what comes out of the model: the predictions, the partial dependence plots, the Shapley plots, the variable importance charts. All kinds of information comes out, especially simple metrics like: how does a constant model do in comparison to mine? Even a constant model can be 99% accurate. If one in 100 is a yes and the rest are no, just predict no for everybody and you have 99% accuracy. So you have to be careful interpreting the results of machine learning.
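The constant-model baseline above is worth making concrete. A tiny sketch, with made-up labels:

```python
# If positives are rare, a model that always says "no" looks very accurate.
labels = [1] * 1 + [0] * 99          # 1% fraud, 99% legitimate

always_no = [0] * len(labels)        # the "constant model"
accuracy = sum(p == y for p, y in zip(always_no, labels)) / len(labels)
print(accuracy)                      # 0.99, yet it never catches fraud

# Recall on the positive class exposes the problem immediately.
recall = sum(p == y == 1 for p, y in zip(always_no, labels)) / sum(labels)
print(recall)                        # 0.0
```

This is why a model's accuracy only means something relative to the constant baseline, and why class-sensitive metrics like recall or AUC matter on imbalanced problems.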

And if you don't take these warnings seriously, you can get into trouble. Most people are pretty good at it: they look at it, they iterate, they run another experiment, change something a little, run another one, and so on, until they feel like, okay, this is good. But it takes a lot of steps. You have to run an experiment, see what's wrong with it, and then run another experiment, and sometimes that's not optimal. So these are the pitfalls that can happen. For example, data leakage: your answer is in the data already, in a form you wouldn't have in production. Your training dataset had an extra column in it, like the target, or let's say a duplicate version of the target, but maybe not exactly the same, just close. For example, the target could be yes or no for fraud, and the leakage column could be the credit score after the fraud was detected. If some event happened, your credit score after that event is probably not quite right; it wouldn't be available in production when you ask, is this person fraudulent or not, is this transaction fraudulent or not. So you have to be careful about little things like that. Time series sometimes matters a lot, sometimes doesn't.
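A crude version of the leak check described here can be sketched as fitting a trivial one-column "model" per feature and flagging any column that alone predicts the target almost perfectly. Driverless AI's actual check is more sophisticated; the column names and data below are hypothetical, echoing the airline-sentiment example used later in the talk.

```python
from collections import defaultdict

def single_column_accuracy(column, target):
    """Accuracy of predicting the target from one column's per-value majority class."""
    by_value = defaultdict(list)
    for x, y in zip(column, target):
        by_value[x].append(y)
    majority = {x: max(set(ys), key=ys.count) for x, ys in by_value.items()}
    hits = sum(majority[x] == y for x, y in zip(column, target))
    return hits / len(target)

# Hypothetical data: negative_reason is filled in exactly when the label is "neg".
target = ["neg", "neg", "other", "other", "neg", "other"]
columns = {
    "negative_reason": ["late", "rude", "", "", "lost", ""],  # leaky
    "airline":         ["AA", "UA", "AA", "UA", "AA", "UA"],  # merely informative
}

for name, col in columns.items():
    acc = single_column_accuracy(col, target)
    if acc > 0.95:
        print(f"possible leak: {name} (accuracy {acc:.2f})")
```

A single column that reaches near-perfect accuracy by itself is almost always leakage rather than a legitimately strong feature, which is why it deserves a loud warning rather than a silent drop.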

Sometimes you can shuffle the data and it doesn't matter; sometimes you have to order it by time, in a strictly causal way. For certain groups, for example each store and department where you want to predict sales, you can't just shuffle them all and globally make a single model. So you have to be careful, and of course Driverless AI supports all that, but you have to set it up the right way, because we cannot know a priori which of these cases it is. For example, text and image features: if you do it right, the Grandmaster way, you could have GPU-enabled BERT models with neural nets that are super accurate. But if you want to deploy the model in Java, and we don't have BERT in Java yet, then there's no point training the whole model if in the end you can't deploy it, right? Or maybe the pipeline is too large: you wanted it to run in a 10-megabyte memory window, but it takes 50 megabytes. Or maybe it takes too many nanoseconds or microseconds or milliseconds; how do you even know what is fast or slow? How do you know whether it's even possible to be faster, and things like that? Sometimes you have a requirement and you just don't know what we're going to do if you say go. So what we want is a wizard that educates you and avoids the pitfalls, in the spirit that machine plus human together are better than either alone. That makes sense; it's obvious. And this is what we've done, and this is what's coming in the next version of Driverless this fall: there will be a Predict wizard, not just a Predict button.

And Nitro is what we used for this. Nitro is open source, Apache v2, so fully open; you can take it and run with it. It's an amazing piece of software that makes a full web application, and all you need to write is Python. You can see from the snippet in the background: that snippet is literally what makes this GUI today. The GUI in the bottom right is generated from this piece of code, and most of the code is just Python logic. That's the stuff you want to write: the message I want to show to the end user, the check I want to make when I need to decide whether this is the right choice for a given dataset. And the beauty of Nitro is that it's not only pure Python; it also lives inside the compute engine and at the same time inside the web server, which means I can look at the data, compute models, even build models, look at the variable importance, and literally convert that knowledge into a GUI that I present in real time.

Not going through any clients, not sending messages across networks and so on; it's all real time, built into the core. So for example, in this case here, you can have interactive exploration. You can look at the dataset and see there is a leak. If you want to predict the sentiment of a Twitter dataset for an airline, then the negative reason column is a leak, because if there is something in there, it probably was a negative tweet. And if the negative reason had a high confidence of being negative, then that number, the confidence level, is also leaky. But the airline column matters without being likely to be a leak. So we figure all that out for you in real time, before we even start the experiment. That's all possible because you can do certain math operations inside this wizard. Of course we could have done this before as well, but you'd need a lot of if-elses everywhere, and the GUI would have to change all the time, right?

This is not the normal GUI; this is a quickly developed, iterative environment where every screen that pops up is just a bunch of text and a bunch of buttons to choose from. And I think that's better for these kinds of questions than a fully featured, static GUI where every pixel is placed perfectly, because you want to be flexible in these kinds of things. Maybe the list will become a pull-down list, or a drop-down list, or maybe it can be just three bullets, like here; it can be adaptive, it doesn't have to be what it is. Also maybe the text will change, the placement will change, maybe the fonts and colors will change; it doesn't really matter. The only thing that matters is the content. And once you select this and say, okay, fine, that is a leak, you're right, then you can still look at it more: you can do real-time variable importance charts, see what's important in the data, and maybe find another leaky column that shouldn't have been included.

So there's a lot of potential in this interactive computation with access to the end user. The main goal really is to set the expert settings up right: the expert settings have almost a thousand choices now. You can go and change something just in case you have one particular corner case that you want to address, but most people don't need more than maybe 10 settings. Even just setting those 10 up is a lot of work, and that's why we wrote about 6,000 lines of wizard code for the first iteration. That pretty much takes care of 95% of all commonly used expert settings and makes sure they are correct for you: when you say I need to go to production in Java, or in Python, or in C++, or I don't need to go to production at all; or when you say this is a time series problem and I will only be able to retrain every month; things like that. It has help buttons, and we can really make sure that you get the right model for your needs. And it's not something that needs a human expert.

It just needs your answer, and your answer doesn't require knowing data science. You don't need to say, oh, I need this metric for this reason, unless you actually want to tell us the metric. You don't have to be a statistician; it just has to be common knowledge for the domain expert. If you have a dataset, you should probably know the meaning of the columns, and you should probably know whether a text column of a certain nature is important or not. We will guide you and say: this text column looks important; now you tell me if it actually is. And if you say yes, this actually is important, then we can enable BERT-based GPU models. But if at the same time you say you want Java production, then we'll say, well, actually, you can't have both; you have to choose. If you say you need Java production more, then we'll fall back to a statistical method that's slightly less accurate, but at least it's fast and productionizable in Java. And of course we are working on the Torch implementation for Java.
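The wizard's job of translating plain answers into expert settings can be pictured as simple decision logic. The setting names and answer keys below are hypothetical illustrations, not real Driverless AI config keys:

```python
# Hypothetical mapping from wizard answers to expert-setting overrides.
# Names here are illustrative only, not actual Driverless AI settings.
def wizard_settings(answers):
    settings = {}
    if answers.get("deploy_target") == "java":
        # BERT-style models can't ship as Java yet, so fall back to
        # statistical text transformers (as described in the talk).
        settings["enable_bert"] = False
        settings["text_transformers"] = ["tfidf", "linear"]
    elif answers.get("text_is_important"):
        settings["enable_bert"] = True
    if answers.get("is_time_series"):
        settings["validation_scheme"] = "time_based_split"
    else:
        settings["validation_scheme"] = "cross_validation"
    return settings

print(wizard_settings({"deploy_target": "java", "is_time_series": False}))
```

Each question the wizard asks prunes the space of valid configurations, which is how a handful of plain-language answers can stand in for dozens of expert settings.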

So that will come at some point. But maybe you need to run it on a Windows laptop, and then even the x86 Torch library will not run, and you need the Windows version of Torch, and so on; it becomes more complicated. So we can help you get what you want out of the product. And I think now it's time to go to a live demo. Let me see if I can do that here really quick. So this is Driverless AI, a preview version running locally. If I take this dataset here, the airline sentiment one that I looked at earlier with you, you see there's a bunch of columns, and one of them is text. You can look at them. Some of the text columns are just strings: tweets. There are tweets that say certain things, and now the question is, is that sentiment good or bad? If I predict that normally, I would just say here, train, predict the column sentiment, and I can say go. And here it shows you the whole preview: it will use GLM, LightGBM, XGBoost, different trees and linear models, with threefold cross-validation, and it will try these different transformations like target encoding. These are all complicated things that make you want to read the documentation, or not, right? It's not clear to most people in the beginning what this all means.

So you might say this is cognitive overload; we don't really want to study all this, we just want it to run. But if you say go now, as I mentioned earlier, there might be a leak: the outcome of the negative sentiment is somehow baked into a column in this dataset. And if you have this column available, it represents something that's not normal for production, because you wouldn't have it when you actually ask the model later: is this tweet positive, negative or neutral? We get the same here: the negative reason confidence and the negative reason, those two are leaky columns, and we find them all. But you might say, ah, it's just a warning, who cares? Now you see that these two are at the top; they're pretty much the only signal in the dataset, the rest is noise. I mean, there might be a column where some word says "bad", or "I hate you", and that tokenized word still has a little bit of importance, right?

But if you know the reason was negative, then you're kind of done; you don't really need to know that the word "bad" shows up. So this model is totally useless, and it will still take time. Unlike the cloud vendors that don't mind if you run something for an hour or two or a day for nothing, we actually mind. We don't want you to make bad models; our only goal is for you to have good models. So we will make sure that you can abort this as soon as possible; that's why we gave you a warning immediately. But still, because this was a single-button experiment start, there's only so much we can do. We can't abort for you. Maybe you like it like this; maybe this wasn't a leak but a very strong predictor. Maybe there is a column that really gives the answer, and 99% precision and recall is what you expect; it's just a matter of another digit at the end. So we can't just abort it for you, but we would like you to abort this now and say, okay, this is crap, abort. And then you would say, okay, fine, I'll run it again: new, same settings.

And now I will drop these columns, negative reason and negative reason confidence, and run it again. This is how people have been doing it for years now. And while this works, and it's fine, it's not the most efficient way, because some people will ignore the warning, as I mentioned, and some will not know what to do; even if they've seen a warning, they don't really know what it means to drop a column, maybe. Or maybe there are more complicated warnings that say there's a shift in your distribution, take care of it somehow, and then you don't know what to do. So now you can see there is still something in there: there's still a confidence about the sentiment itself, which probably should also be dropped. So this is not done, and you're not done here, but people might ignore it again. And this is exactly why we made the wizard. So now you go to the same dataset, and you say predict with the wizard. First it will show you something about the problem type: it knows that it's multi-class, because the airline sentiment has three different levels, negative, neutral and positive. So I can see that already, and this makes sense; I like this distribution.

Otherwise you would have had to guess it, or look at the dataset summary stats. So this is just a way of helping a little bit more. Once I say this is fine, it'll ask me how I want to drop things. Obviously the ID column has to be dropped; an ID is useless, it's just a unique value per row, and the model is never going to learn anything useful from it. So drop it. Now, dropping other columns: that's the one we mentioned earlier, the tricky one, and that's something you have to do properly. So I can say, check for me, because I'm not an expert; I want the tool to figure out what seems right. And now we compute a model for every column: is it predictive in a way that's maybe too much? And yes, it found those two leaks. So I will say yes, I'll check them. It only auto-checks if it's over 95% or so. So this could have been a good feature, and like I said, you don't want to be too aggressive at dropping things. But I decide, yes, as a user: does negative reason give away the answer? Yes, I think so. And then I say I'm done.

Or maybe there's more to look at, so I'll look at the rest. Now I see all the remaining features, and what you have to understand here is that this is a single model fitted on the remaining dataset without those three columns, and these are the importances. So if this dataset is given to the algorithm, it will mostly look at the airline and the text. The text is already somehow vectorized into a reasonable format so we can extract some signal; it's not just a string. Now the question is whether the airline sentiment confidence should also be dropped. It doesn't seem to matter that much, it's very low importance, but you can still look at it and say, yes, I want that to be gone, and maybe I want the underscore variables to be gone, and so on. This is the time when you can use your common sense about your data and take certain actions that will greatly improve the model. And so now we'll have some more good features here. Some people might say, well, I don't want to click one by one. Yes, you can go back, just with the browser, and start over at the column selection step. If I know which columns to drop, I can just choose them here; I just take these, that's easier. Okay, and so on. So I hope this makes sense.

This is something we're quite proud of. And I can just continue. Let's say I want to go into production with C++, and I want the same results on any hardware, not just on the same hardware; I need to be super picky, this is going to a bank. This is not a time series problem: the tweet-created column here is not important, right? When the tweet was created cannot possibly be so important that the behavior of the sentiment depends strictly on time. So time order doesn't matter. And now it says, oh, wow, this actually is a text-dominated problem; these two text columns are quite important. But I don't have GPUs, so I just can't do BERT. So now this warning helps me go back and run on a system that does have GPUs. Or I can say, well, it's okay, I can just drop them if they don't matter at all; but in this case I really wouldn't want to drop them, so I take what seems to be the default choice. And now I can provide a test set, or I can make a fresh one. I will just use this one; that's the only choice we have.

Even though we had multiple datasets in Driverless, it knows that this is the only one that matches. It says everything is right: no duplicate rows, and so on. Now we can choose the type: how long do you want to wait, eight minutes, 30 minutes, two hours, or a day for a whole leaderboard, or just a single model? I think this is fine. You can also say I care about the run time, or I need a small production model. But yeah, these are basic things you can choose, right? And here again, the exact score: sometimes you have a small little competition inside your company, or, you know, maybe even on Kaggle, and you want to see if you can get exactly the right score for that. But usually it doesn't really matter. And so now we've checked all these various things, and we're good to go. And we can even create code that will run the exact same experiment with all the settings that we just set up. Right, we made it reproducible. And we set these couple of TOML expert settings; this is the string that was auto-generated, which you would otherwise have needed to write yourself if you had used the client.
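As a rough illustration, here is what such auto-generated code might look like with the official `driverlessai` Python client. The server address, credentials, column names, and knob values below are assumptions made for this sketch, not the wizard's actual output; check the client documentation for your Driverless AI version.

```python
# Hypothetical sketch: re-running a wizard-built experiment via the
# `driverlessai` Python client. Column names and knob values are assumed.
settings = {
    "task": "classification",
    "target_column": "airline_sentiment",           # assumed target column
    "drop_columns": ["tweet_id", "tweet_created"],  # columns dropped in the wizard
    "accuracy": 7,
    "time": 2,
    "interpretability": 8,
}

def run_experiment(address, username, password, train_path):
    """Connect to a running Driverless AI server and launch the experiment."""
    import driverlessai  # requires a reachable Driverless AI instance
    client = driverlessai.Client(address=address,
                                 username=username,
                                 password=password)
    dataset = client.datasets.create(data=train_path, data_source="upload")
    return client.experiments.create(train_dataset=dataset, **settings)
```

The point is simply that every choice made in the wizard becomes an explicit, copyable parameter, so a colleague can rerun the same experiment without clicking through the GUI.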

Most people use the GUI, but you know, it's never bad to have a copyable version of the code. And if you start the training, it's running; you can go to the experiment and look at it. So yes, and now it says, oh, I found some duplicate rows. Only 1%, it's not too bad. Maybe that was shown earlier as well. What we saw earlier is that the datasets I'm using for these previews are sampled to 100,000 rows, right? So if you have 100 million rows, for example, a massive dataset, then the wizard will still be fast, but it might not be exactly right in terms of, you know, how many duplicates there are between the datasets and so on. So that's the only caveat. But of course that can also be configured; if you'd rather wait a little bit longer, then that is fine. But yeah, I hope I gave you a bit of an idea of how this looks in the wizard mode. And so what's the future? Well, more wizards, right? That's one future. Obviously, there are many more things that Driverless is doing, and also the other platforms, like Hydrogen Torch, Document AI, the cloud as a whole, the App Store, and how we do the end-to-end story with a feature store, all the way from the feature store to the machine learning platforms and then back out to MLOps to deploy the models.
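The sampling caveat can be made concrete with a toy sketch (synthetic data, not Driverless AI code): a duplicate is only detected in a sample if more than one copy of the row lands in it, so a duplicate rate estimated on a sample is biased low.

```python
import random

def duplicate_rate(rows):
    """Fraction of rows that exactly repeat an earlier row."""
    seen, dupes = set(), 0
    for row in rows:
        if row in seen:
            dupes += 1
        else:
            seen.add(row)
    return dupes / len(rows)

random.seed(0)
# 100,000 rows where roughly 1% are copies of earlier rows
rows = [(random.randrange(10**9),) for _ in range(99_000)]
rows += random.choices(rows, k=1_000)
random.shuffle(rows)

full_rate = duplicate_rate(rows)                           # close to 1%
sampled_rate = duplicate_rate(random.sample(rows, 10_000))
# sampled_rate underestimates full_rate: both copies of a
# duplicate pair must land in the 10% sample to be counted
```

That is exactly the trade-off the wizard makes: scanning the full 100-million-row table would give the exact count, but sampling keeps the preview interactive.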

But from a Driverless point of view, I would say more wizards are coming. There is already a join wizard, we have a prototype: if you have a dataset where you would like to join multiple datasets together, you can join them graphically. There's a business value calculator to consider: you know, the two-by-two confusion matrix, what's the value of each of those cells? Like, if you get it right, you make $10; if you get it wrong, you lose $5, or something. And then it'll tell you the profit per model on a test set. So you can see which model is actually the one that would have made you the most, just to get an idea of how much better random forest is, for example, in this case (I know it's a bit small) compared to the constant model: you would have made $600 more, right?

If you optimize the threshold. But that just shows you that if you do nothing, you still get 303,400 in profit. So in this case, maybe these numbers aren't quite right, to say you get one if you get it right and minus one if you get it wrong. Because, you know, sometimes you can just move the threshold around and then say yes for everybody, and that's pretty much good enough. So that's why it's important to look at these charts, to make sure that you understand the actual data problem for your business. It's not enough to only know business, and it's not enough to only know data; you need to know both together. And then there are other ideas like: okay, I have 10 experiments, I have a whole leaderboard, now what? What should I do next? More target encoding, or more XGBoost, or more TensorFlow? Or, you know, maybe the tool here, Driverless, should just do it, right? So maybe it should show you a bunch of options and say, how much time do you have left?
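The business value idea above can be sketched in a few lines. The $10/$5 payoffs and the toy scores here are made up for illustration; the actual calculator in Driverless AI works on real test-set predictions.

```python
def profit(y_true, y_score, threshold, gain=10.0, loss=-5.0):
    """Total value on a test set: `gain` per correct call, `loss` per wrong one."""
    total = 0.0
    for t, s in zip(y_true, y_score):
        total += gain if (s >= threshold) == bool(t) else loss
    return total

def best_threshold(y_true, y_score, **kw):
    """Pick the score cutoff that maximizes profit."""
    return max(sorted(set(y_score)),
               key=lambda th: profit(y_true, y_score, th, **kw))

y_true  = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.6, 0.9]
th = best_threshold(y_true, y_score)   # -> 0.6
print(profit(y_true, y_score, th))     # -> 40.0
```

Sweeping the threshold like this is also how you notice degenerate cases, for example when "predict yes for everybody" is already near the profit optimum.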

This is what we would like to try, okay? Something like that. Or, you know, this is the subgroup of your population that is mispredicted the most, and here are the drivers for those people, for that population: they all have three cars. So maybe there's something wrong with your segment there in terms of modeling. So we can make these kinds of things more of an interactive evolution, where we go through the dataset together, instead of just showing you a dashboard of what's done; we could ask you what you would like, and you can step through it. So in general, I would say Driverless is definitely the right tool for machine learning, supervised machine learning. We also have unsupervised machine learning, but unsupervised AutoML is a bit tricky, right? Like, do you want to let it run for 10 hours to make clusters until you get the best clustering? That's not always obvious to understand. And we do have it, it's just that you need to be an expert to use it. And I would say most people are just happy with supervised modeling, if you can have labels, right? If you don't have labels, tabular data is tricky. Unsupervised for images and text is easier in a way, because there are already pre-trained models you can bootstrap yourself with. And in our Hydrogen Torch product, there is, you know, a Label Genie where you actually get shown the suggested labels, and then you can say yes, that's correct.

But for a 100-column text or tabular dataset with numbers, it's tough to guess if it's fraud or not fraud, right? So there you really need to have labels. So fraud, churn, risk, you know, all this yes-or-no stuff: absolutely a killer product. Multiclass as well: it can handle 1,000 classes or more; if you set the setting, it will use TensorFlow and other neural nets for that. But if you go to like 10 classes, it can still do a mix of trees and neural nets. Very strong tool. Regression: it doesn't matter how the numbers are distributed, right? They can be all zeros and a couple of non-zeros, like an insurance claim. Or it can be pricing of houses in the millions, or it can be angstroms for chemical structures. The numbers don't matter; you don't have to standardize them. It just works. And if you have missing values, wide data, tall data: 100 million rows, no problem; 100 million columns, no problem. Both we can handle, right? Of course, if it's 100 million by 100 million, then the dataset is so big you won't even get it into the platform. But if it's really big, then open source H2O can have an edge, because it can handle arbitrarily large data.
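One way to see why "you don't have to standardize" holds for tree-based models: a tree's splits depend only on the ordering of feature values, so any monotonic rescaling (dollars, millions, angstroms) gives identical predictions. Here is a toy one-split regression stump demonstrating that; it is an illustration of the general principle, not Driverless AI internals.

```python
def fit_stump(x, y):
    """Fit a one-split regression stump by minimizing squared error."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    best = None
    for k in range(1, len(x)):
        left = [y[i] for i in order[:k]]
        right = [y[i] for i in order[k:]]
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((v - lmean) ** 2 for v in left)
               + sum((v - rmean) ** 2 for v in right))
        threshold = (x[order[k - 1]] + x[order[k]]) / 2  # midpoint split
        if best is None or err < best[0]:
            best = (err, threshold, lmean, rmean)
    return best[1:]  # (threshold, left_mean, right_mean)

def predict(stump, x):
    threshold, lmean, rmean = stump
    return [lmean if v <= threshold else rmean for v in x]

# the same feature at two different scales, same target
x = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
y = [0.0, 0.0, 0.0, 100.0, 100.0, 100.0]
raw = predict(fit_stump(x, y), x)
scaled = predict(fit_stump([v * 1000 for v in x], y),
                 [v * 1000 for v in x])
assert raw == scaled  # identical predictions at any scale
```

The gradient-boosted trees Driverless AI favors (XGBoost, LightGBM) inherit this ordering-based behavior, which is why skewed targets and unscaled features work out of the box.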

Driverless is still modeling it on a single node, but it is much faster if the data does fit. If it's less than 10 or 100 gigabytes or so and it fits on a single node, there's nothing faster. It can handle multimodal data and time series, as I mentioned earlier; you can mix and match, and it also tells you what it's doing very transparently. So please let us know if you need anything. And please feel free to try it out: you can go to h2o.ai/free, and that will give you a free trial of the cloud, in which you can also find Driverless AI as one of the AI engines, just like open source H2O and Hydrogen Torch. So thanks again, and thanks to all the makers, thanks for all the customer feedback and all the partnerships we've built over the last many years. It's been a great journey and we look forward to much more.

Thank you.