Thursday, December 7 • 11:10am - 11:45am
All You Need to Know to Build Your GPU Machine Learning Cloud [B] - Ye Lu, Qunar


GPUs are becoming the new norm, but at the moment GPU resources are still hard to come by for people who want a taste. So how do you build your own GPU machine learning cloud?

Resource management & App templating
Even if your company or organization has purchased some GPU devices, environment and resource isolation remain a problem. At the beginning the cloud is used mostly as a playground, so another consideration is improving resource utilization. We'll show how we use Kubernetes to solve these problems.
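One way Kubernetes supports this kind of isolation is a per-team ResourceQuota capping GPU requests in a namespace. The sketch below builds such a manifest in Python; it assumes the NVIDIA device plugin, which exposes GPUs as the extended resource `nvidia.com/gpu`, and the namespace name is hypothetical (this is not necessarily the speaker's setup).

```python
def gpu_quota(namespace: str, max_gpus: int) -> dict:
    """Build a ResourceQuota manifest capping total GPU requests in a namespace."""
    return {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "gpu-quota", "namespace": namespace},
        "spec": {
            "hard": {
                # All pods in this namespace may request at most max_gpus GPUs.
                "requests.nvidia.com/gpu": str(max_gpus),
            }
        },
    }

# Example: cap the shared playground namespace at 4 GPUs.
quota = gpu_quota("ml-playground", 4)
```

Applying one quota per team namespace keeps any single group from monopolizing scarce GPUs while the cluster is still a playground.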

How to use a wizard to generate a machine learning app: you can choose TensorFlow or Theano, specify how many GPUs you need, and so on.
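The wizard idea can be sketched as a function that turns the user's choices (framework, GPU count) into a Kubernetes Pod manifest. The image names and the `nvidia.com/gpu` resource below are assumptions for illustration, not the speaker's actual templates.

```python
# Hypothetical default images per framework.
FRAMEWORK_IMAGES = {
    "tensorflow": "tensorflow/tensorflow:latest-gpu",
    "theano": "theano/theano:latest",
}

def ml_app_wizard(name: str, framework: str, gpus: int) -> dict:
    """Generate a Pod manifest from the wizard's answers."""
    if framework not in FRAMEWORK_IMAGES:
        raise ValueError(f"unsupported framework: {framework}")
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "labels": {"framework": framework}},
        "spec": {
            "containers": [{
                "name": framework,
                "image": FRAMEWORK_IMAGES[framework],
                # GPUs are requested as an extended resource limit.
                "resources": {"limits": {"nvidia.com/gpu": str(gpus)}},
            }]
        },
    }

pod = ml_app_wizard("mnist-train", "tensorflow", 2)
```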

Playing back the “customized changes” in immutable containers
Container images are immutable, which is a double-edged sword: it keeps the environment reproducible and portable, but any changes made inside a running container are lost when it is recreated. How can a customized environment be saved and reused?
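One possible playback scheme (an assumption for illustration, not necessarily the speaker's implementation) is to record the commands a user runs inside the container and replay them as a Dockerfile on top of the base image, so the customized environment survives recreation:

```python
def record(journal: list, command: str) -> None:
    """Append a user command to the customization journal."""
    journal.append(command)

def to_dockerfile(base_image: str, journal: list) -> str:
    """Render the journal as a reproducible Dockerfile."""
    lines = [f"FROM {base_image}"]
    lines += [f"RUN {cmd}" for cmd in journal]
    return "\n".join(lines)

# Example session: the user installs extra packages in a running container.
journal = []
record(journal, "pip install keras==2.0.8")
record(journal, "apt-get update && apt-get install -y vim")
dockerfile = to_dockerfile("tensorflow/tensorflow:latest-gpu", journal)
```

Rebuilding from the generated Dockerfile "plays back" the customizations deterministically, unlike committing an opaque snapshot of the running container.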

Managing persistent storage in Kubernetes
How we turn our RBD into a hosted S3 service to store models, training data, and more, so data scientists can access their data both as a volume and through the standard S3 API.
Supporting online resize for running machine learning apps, like TensorFlow.
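The dual view described above means the same object is reachable at a mounted volume path and at an S3 URL. A minimal sketch of that mapping, with hypothetical mount point and endpoint names:

```python
VOLUME_ROOT = "/data"                 # where buckets are mounted inside the pod
S3_ENDPOINT = "s3.example.internal"   # hypothetical internal S3 endpoint

def volume_path(bucket: str, key: str) -> str:
    """Path to the object when the bucket is mounted as a volume."""
    return f"{VOLUME_ROOT}/{bucket}/{key}"

def s3_url(bucket: str, key: str) -> str:
    """URL of the same object through the S3 API."""
    return f"http://{S3_ENDPOINT}/{bucket}/{key}"

# The same trained model, two access paths.
p = volume_path("models", "mnist/weights.h5")
u = s3_url("models", "mnist/weights.h5")
```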

App model & permission control
We'll talk about the app center, the design of appcode, and permission control.
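Appcode-based permission control might look like the following sketch (the role model and sample data are assumptions): each app has an appcode, and a user may act on an app only if granted a sufficient role for that appcode.

```python
# Roles ordered by privilege level.
ROLE_LEVELS = {"viewer": 1, "developer": 2, "owner": 3}

# appcode -> {user: role}; sample grants are hypothetical.
GRANTS = {
    "ml-train-001": {"alice": "owner", "bob": "viewer"},
}

def can(user: str, appcode: str, required_role: str) -> bool:
    """Check whether the user's role on the appcode meets the required level."""
    role = GRANTS.get(appcode, {}).get(user)
    if role is None:
        return False
    return ROLE_LEVELS[role] >= ROLE_LEVELS[required_role]
```

With this shape, an owner can do everything a developer can, and users with no grant on an appcode are denied by default.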


Ye Lu

DevOps, Bytedance
DevOps Engineer @Qunar. Experienced in operating and managing OpenStack clouds, including the Qunar OpenStack Cloud across 7 regions. Has been building and using Kubernetes clouds since 2015. | OpenStack Ambassador.

Thursday December 7, 2017 11:10am - 11:45am
Meeting Room 9C, Level 3