[Editor's note: This podcast originally aired on Dec. 13, 2017. Find more episodes on the STACK That podcast page.]
From a cucumber farmer in Japan to Stanford students developing a cancer-detection app, users are increasingly tapping Google's open source TensorFlow machine learning library to turn datasets into actionable insights. In a recent STACK That podcast, Rajat Monga, Google's TensorFlow engineering director, explains machine learning and how TensorFlow is enabling innovative applications. He also discusses the growing open source ecosystem around the popular tool, good places to get started with machine learning, and what Google has planned for TensorFlow's future. What's really exciting, Monga says, is new development focused on bringing machine learning to the edge—on devices themselves. Find out more about TensorFlow projects and machine learning's push into the mainstream.
Byron Reese: Hi, everyone. Welcome back to STACK That, brought to you by Hewlett Packard Enterprise. I am your host, Byron Reese of Gigaom, and I'm here today with my co-host, Florian Leibert. He is the co-founder and CEO of Mesosphere. They make DC/OS, which is the most flexible platform for containerized data-intensive applications. Today, we are going to talk about machine learning going mainstream, with TensorFlow.
Byron Reese, Gigaom
In fall of 2015, Google released an alpha version of TensorFlow. Reaction was swift and favorable, and it immediately caught on. On February 11th of 2017, Version 1.0.0 was released, and interest continues to climb week after week. In fact, GitHub just announced that it is the most-forked project, with 24,000 forks.
Why is it so popular? Well, I mean, there's a great community around it. It's incredibly well-documented. It's well-supported. It's scalable. It's reliable. And those are all great, but above all of that, it is, as one reviewer called it, quote, "absurdly readable."
We now know how to do many more things with our present level of machine learning than we are actually doing—there are more things we know how to do than we have resources to do. The main reason given for this is often a talent gap, and for the present, that is somewhat true. But equally inhibiting is how difficult the tool sets are for non-data-scientists to master. TensorFlow is a huge step toward more accessible tools: it makes building and training neural nets just that much easier.
Today, we are delighted to have with us Rajat Monga. He is the director of engineering of TensorFlow there at Google. Welcome to the show, Rajat.
Rajat Monga: Thank you, Byron.
Rajat Monga, Google
Florian Leibert: Great to have you here, Rajat. So to start out, can you explain TensorFlow and what it does, like I'm 5 years old?
Florian Leibert, Mesosphere
Monga: So to explain TensorFlow, I would first have to explain machine learning itself. From a kid's perspective, machine learning is sort of like how a kid learns, right? We point kids at examples of different kinds of things, and kids are really good at learning—they learn from a very small number of examples. It turns out machines can learn as well, but today, the algorithms that we use require a lot more data to learn from. One of the ways you often collect that data—let's take the example of pictures. In an app like Google Photos, we have essentially taken a large dataset of images, and somebody has gone through and manually labeled all of them: OK, this is a picture of a dog, this is a picture of a cat, and so on. Now, a machine can take that dataset and learn from it to recognize dogs, cats, and many, many other things. And a product like Google Photos actually incorporates technology like that to label all your pictures for you.
So now, where does TensorFlow come in? To teach machines to learn, you need programs that can do this training, and TensorFlow is one of those tools. Number one, it helps with the training aspect—the machine actually learning from all of these pictures. But then, once you have what we call a model, you want to be able to run it: show it a new picture that it's never seen before and have it identify it automatically when you integrate this into an app that you're using. And that's the other place TensorFlow is used. It goes all the way from training new models to running them—identifying examples, really understanding what data is coming in.
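To make that train-then-predict loop concrete, here is a toy sketch in plain Python—not TensorFlow itself, and nowhere near a real image model, just the shape of the idea: learn from labeled examples, then label an example the model has never seen. The two-number "features" and the labels are entirely made up for illustration.

```python
# Toy "train then predict" sketch: learn per-label average features
# from labeled examples, then label a new, unseen example by finding
# the closest learned average (a nearest-centroid classifier).

def train(examples):
    """examples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Return the label whose centroid is closest to `features`."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], features))

# Hypothetical 2-D features (say, ear pointiness and snout length):
labeled = [([0.9, 0.2], "cat"), ([0.8, 0.3], "cat"),
           ([0.2, 0.9], "dog"), ([0.3, 0.8], "dog")]
model = train(labeled)
print(predict(model, [0.85, 0.25]))  # a picture it has never seen -> "cat"
```

Real systems like the ones Monga describes learn far richer features from raw pixels, but the workflow—training on a labeled dataset, then running the resulting model on new inputs—is the same.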
Leibert: So this is a really great explanation. Thank you, Rajat. When did the project get started internally at Google? Because I assume you worked on it before you actually open-sourced it for quite a while, correct?
Monga: That's right. I would say the early stages go all the way back to 2011, when the Google Brain group got started and we were doing research in this area. That's when we built the first version, which we called DistBelief, and that was internal only.
Over the next few years, we used it extensively. We scaled it. We added all kinds of things to it. And sometime in 2014, we realized it had certain limitations. We could continue to fix it, or we could say, "You know what? This is an opportunity to really rethink this from the beginning, based on what we've learned from our tool and some of the other tools out there." And so that's how TensorFlow began, in 2014.
Fairly early in the process, we decided to open source, so that was part of the target, part of the reason to say, "OK, this is how we're going to bring it, because it has to work for everyone, not just for us."
Reese: And in my introduction, I mentioned the dynamic ecosystem that has sprung up around it. Is that something you deliberately cultivated, or is that kind of a movement that just took you by surprise as well?
Monga: So I would say it's a mix. It's not something that you can really control—these kinds of ecosystems grow up on their own. But a lot of effort has gone into helping cultivate it as well. In terms of those efforts, a few things I would point out: one, we've been very supportive. We answer lots of questions and help people with their issues on Stack Overflow and GitHub. We've also tried to keep track of and hear what users want. To give you an idea, when we released TensorFlow externally, it was a single-machine version. Internally, we did have a multi-machine version, but it was too tied to the software stack that we had at Google. So over the next quarter or so, we worked hard to get a multi-machine distributed version out to users. Similarly, there have been lots of asks that external users care about—which Google may or may not care about as much—and helping and collaborating on those builds the community.
Another direction has been, as these people get familiar with the tool itself, they've started contributing. In fact, we have over a thousand contributors now. And we need to support them well. We need to guide them in terms of what makes sense for the project as a whole, but also, we have been very inviting in terms of accepting those contributions back where it makes sense for lots of people. So I think all of this has really come together. And in general, I'm amazed at the machine learning community and the community working with open source in how resourceful they are, how amazing they are in what they do with it…and how much they are willing to give back as well.
Reese: So I'm curious about that last part. What are some uses of it that you didn't necessarily foresee or favorite applications or anything like that that stick out in your mind?
Monga: Yeah, there are a few I could talk about. There's one we've discussed a fair bit: this cucumber farmer in Japan, whose son, who is basically a systems engineer, was able to pick this tool up—just download it from the Internet—and solve a problem that's very real for them. It turns out his mom sorts all the cucumbers that come in from the farm every day, manually. So what he did was take TensorFlow, use it to identify the shapes of cucumbers automatically, and tie it to Arduinos—these small controllers—to make a real small assembly line that does the sorting. That was amazing: somebody just picking it up out of the blue and making it work.
Another one, which I found out about more recently: the California Academy of Sciences actually has an app, built in collaboration with some other folks, that identifies all kinds of species of plants and animals on the phone—again, just taking the software, putting together datasets they had been collecting for a while for other reasons, and really making something work for the end user. So it's very interesting to see that, a) people are interested in making things work, and b) they're able to pick things off the shelf and put them together in such interesting ways.
Leibert: Wow, machine learning for cucumber farming. Never thought that that was going to be a real thing.
Reese: No, I thought it was going to cap out at carrots, but cucumbers is surprising.
Leibert: Well, Rajat, if you think about mainstream IT and traditional enterprises adopting machine learning, where do you think the opportunities lie for TensorFlow if you think about verticals such as finance, automotive, industry, general industrial engineering? What's exciting in those areas?
Monga: Machine learning is very exciting today because it's really changing pretty much every sector you can think of. Think of any place where you had to make predictions or do some planning—now there's the potential to automate that or make it better. Machine learning, especially deep learning, has really changed the game there. TensorFlow is a great tool for that, and we see it being used in every single sector you mention. One example is healthcare. There have been efforts at Google and externally to take radiology images, say, and determine whether they are cancerous or not. In one case at Stanford, some students spent a couple of months collecting pictures of skin lesions—and now you can literally take a picture of a skin lesion with your phone—then downloaded TensorFlow, spent a couple of days training a model, and interestingly, it performs better than a lot of the dermatologists out there.
So think of how that changes the world in places that don't have access to the kind of doctors and care we are used to. It's not just the standard story—you can improve finance, you can improve other sectors, construction, and so on—it's also changing the real world in many, many ways.
Leibert: That's awesome. If I want to get started this weekend and really learn about TensorFlow and start to play with it, what's a good resource to get started with? Where do I find information about it?
Monga: The best place to go is TensorFlow.org. There are tutorials there that can help you get started. If you're familiar with Python, you can download it pretty easily using the standard Python install process. Other languages are supported too, although more so for deployment—if you're starting out, I would recommend going with Python.
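For reference, the standard Python install process Monga mentions is a one-liner with pip (the package name on PyPI is `tensorflow`; using a virtual environment first is a common choice):

```shell
# Install TensorFlow into the current Python environment via pip.
pip install tensorflow

# Quick smoke test: import the library and print its version.
python -c "import tensorflow as tf; print(tf.__version__)"
```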
Reese: Flo, when are you going to have it on DC/OS?
Leibert: That's a great question, Byron. We're actually going to support having it deployable with a single button on DC/OS in a distributed way later this year, so stay tuned.
Reese: All right, I'm curious. To pick up on what Flo just said, machine learning is kind of an intimidating topic to a lot of people. If you're an enterprise and you get excited when you hear how easy this is, how do you start? Once you pick a technology and say, "OK, we're going to explore what we can do with this," how do you spot something in the organization that would be a great candidate for it?
Monga: I think there are many ways to start. One way that I recommend, and that I think works for a lot of people, is to start from a problem. The kinds of problems that work best today are ones where you can collect enough labeled data and train a model on it. To give you an example: people often think of images and speech and maybe text, but the problems in a business are often more like, OK, I want to predict this particular number—where is it going to be? That number might be how many bottles of soap will sell next week, or how many customers are likely to convert based on some data I have, and so on—all kinds of problems. As long as you have reasonable amounts of data, that might be a good problem to start with—pick something simple just to get your hands dirty. And rather than starting from scratch and building a model yourself, you'll find there are tons of resources on the web today about what people are doing with TensorFlow. Where I would personally start is looking for somebody who's solving a similar problem, taking that as a starting point, and going from there.
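To make the "predict this number" idea concrete, here is a minimal sketch in plain Python—hypothetical weekly sales figures and a plain least-squares trend line rather than a neural net, but the shape of the problem is the same: labeled history in, predicted number out.

```python
# Toy numeric prediction: fit a straight line to past weekly sales
# and extrapolate one week ahead. A real model (e.g. in TensorFlow)
# would use more features and far more data.

def fit_line(ys):
    """Least-squares fit of y = a*x + b to points (0, ys[0]), (1, ys[1]), ..."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Hypothetical bottles of soap sold in each of the last six weeks:
history = [120, 132, 128, 141, 150, 155]
a, b = fit_line(history)
next_week = a * len(history) + b
print(round(next_week))  # forecast for week 7
```

Swapping the trend line for a trained model changes the quality of the forecast, not the workflow—which is why "do I have labeled historical data for this number?" is such a useful first question.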
Reese: All right. Well, before we continue, I do want to do a big shout-out to Hewlett Packard Enterprise. They are the people that bring you STACK That. They are, of course, the leading provider of the next-generation services and solutions that help enterprises and small businesses navigate the rapidly changing technology landscape, just like we're discussing today. With the industry's most comprehensive portfolio, spanning the cloud to the data center to the Intelligent Edge, HPE helps customers around the world make their operations more efficient, more productive, and more secure. So stay up to date on the latest in hybrid IT, Intelligent Edge, memory-driven computing, and more at HPE.com.
Leibert: Rajat, what are the current limitations of TensorFlow, and what are some of the problems that it's not solving that you hope it can solve in the future?
Monga: You know, TensorFlow is a very versatile tool that people are using in all sorts of ways, and part of the reason is the way we built it: to solve a large class of problems rather than just one specific problem, which we learned over time from our experience with the previous tool. At the bottom, it's really a numerical computation library. On top of that, it supports many different kinds of machine learning algorithms—deep learning especially was a key focus for us.
We have a number of machine learning algorithms in there, and that toolkit is growing over time, so I think we'll see things improving there. The hardware is also changing very rapidly, and scaling on different kinds of hardware is something we are always working on and improving.
You mentioned—Byron mentioned—how do you get started with machine learning. In some ways it is hard, and we continue to try to make it easier to use, from the tooling perspective to the examples and everything the user sees. There are lots of efforts happening in this area that continue to evolve, and across all of these different axes, I hope to see lots and lots of improvements coming over the next months and years.
Reese: Well, let's go with that. You guys aren't known for sitting still. What can people look forward to in the TensorFlow project?
Monga: One thing that we are working on right now—actually, I can talk about a couple of projects—is what we call Eager Execution. To give you an idea, today the way you work with TensorFlow is you build graphs and then you execute them. The reason we did it this way is that once you have a graph representing your model, we can apply all kinds of compiler technology to optimize that graph, and because you're going to run the same graph again and again, many, many times, those optimizations are very useful.
It turns out that's great for optimization, but when you're getting started, you'd rather just write a program as if you were writing ordinary Python. So that's an additional mode we are introducing very soon. We're calling it Eager because, as you write the program, each step just executes as it runs. Behind the scenes we still provide the same functionality, and over time we'll bring in a lot of the optimizations that we had with graphs as well. So you get the best of both worlds: very easy, rapid cycles of development, and high performance as you move toward production. That's one area I'm very, very excited about, because it's really going to change how users interact with this.
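The contrast Monga describes can be illustrated in plain Python—this is not TensorFlow's actual API, just the idea: in graph style you describe the computation first and run it later (which gives the system a chance to optimize the whole thing), while in eager style each line executes the moment it runs.

```python
# Toy contrast between "build a graph, then run it" and eager execution.
# (Plain-Python illustration only -- TensorFlow's real graph machinery
# also optimizes the recorded graph before executing it.)

# Graph style: operations are recorded as a description, not executed.
def make_graph():
    ops = []
    ops.append(("square", lambda x: x * x))
    ops.append(("plus_one", lambda x: x + 1))
    return ops

def run_graph(ops, value):
    """Execute the recorded operations in order; only now does work happen."""
    for _name, fn in ops:
        value = fn(value)
    return value

graph = make_graph()        # nothing has been computed yet
print(run_graph(graph, 3))  # executes (3*3) + 1 -> prints 10

# Eager style: each step runs immediately, which is much easier
# to debug line by line.
x = 3
x = x * x   # executes right away
x = x + 1   # executes right away
print(x)    # prints 10
```

Both styles compute the same thing; the trade-off is debuggability and quick iteration (eager) versus whole-program optimization and repeated reuse (graph), which is exactly the "best of both worlds" goal Monga mentions.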
The other area I'm excited about is the edge. Machine learning started in the data centers—training often happens there, and the final predictions will continue to be made there too—but there's a lot of need to run things at the edge as well, on the devices themselves. Phones are a great example. The phones you probably have in your pockets are really powerful these days and can run all sorts of things, and we have a number of applications at Google that already deploy these models to devices with TensorFlow. So we have something called TensorFlow Lite that we're working on, and will be releasing very soon, that takes the same TensorFlow models and runs them, sort of like you do today, but it's been built from the ground up for running on device. It's very, very lightweight, gives you the same performance or better, and is really going to allow you to go to all sorts of devices—not just phones, but IoT and beyond.
Leibert: So there's a lot up and coming, and it seems like the development is pretty active around it. How many contributions come from the community versus from within Google?
Monga: So I would have to check the exact numbers. I think in terms of the number of contributors, I think the community is bigger than Google itself now. In terms of the actual code itself, today Google's still a fair bit ahead, but we do see that changing very, very rapidly.
Reese: OK, well I think we are about out of time. I want to thank you so much for taking time out of your day, because I'm sure you're busy making TensorFlow even better.
Monga: Thank you so much. It was a pleasure being here.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.