I joined the Systems Research Group at Microsoft Research Asia after obtaining my Ph.D. from Tsinghua University in June 2011. My research interests include large-scale learning systems, data-parallel computing, big data, machine learning, mobile computing, program analysis, compiler optimization, computer architecture, and tool development. Over the last three years, I have mainly worked on optimizing performance and reducing failure rates in distributed data-parallel computing, thereby improving the user experience. This research is interdisciplinary, spanning distributed data-parallel computing, database processing, and program analysis. Currently, I am most interested in building a scalable, efficient, fault-tolerant, and easy-to-use distributed learning system, based on the belief that a learning system can be built on top of an existing data-parallel execution engine and thus treated as a learning library. In this way, the entire machine learning pipeline, including feature preparation, training, online learning, and model serving, can be supported by a single platform such as Apache Spark.

Before joining Microsoft Research, my interests included compiler optimization, computer architecture, and tool development. My PLDI paper on partial redundancy elimination can be regarded as the final episode of my work in that area. I also have two years of internship experience at Google (Mountain View), where I developed a series of compiler optimizations for both data-center applications and the Android system.