I’m excited to be selected as a scholar in the 2019 OpenAI Scholars program and to have the opportunity to work with my mentor, Azalia Mirhoseini!
About Me
I am a fourth-year Ph.D. student in Resource Economics and a master's student in Statistics at UC Davis. My advisors are Prof. Aaron Smith and Prof. Kevin Novan.
I was a Data Science for Social Good research fellow at the University of Chicago in 2017. I was also a Twitter Grace Hopper fellow in 2018 and a Women in Quantitative Finance fellow.
My research interests focus on machine learning methods (both classical statistical learning and deep learning) and their applications to economics, especially energy efficiency and heterogeneous causal inference.
Before coming to Davis in September 2014, I received my B.S. degree in Economics from Zhejiang University, China, under the supervision of Prof. Boming Zhu and Prof. Hongsheng Fang.
Transitioning From Economics to Artificial Intelligence
My research mainly focuses on causal inference methodology and its applications, especially to residential transportation and electricity consumption. Machine learning techniques are being actively pursued in the private sector and have been widely adopted in fields such as computational biology and computer vision. However, the role of machine learning in economics has so far been limited. For decades, economists have built their assumptions about prices, wages, and inflation on data sets only as large as they or their research assistants could process. Machine learning has the potential to dramatically enlarge those data sets and allow economists to test their models faster than ever. It would therefore be interesting to bring together computer science, statistics, econometrics, and applied economics to foster interaction and discuss different perspectives on statistical learning and its potential impact on economics.
Potential Opportunities and Interests
OpenAI provides me with a great opportunity to learn and practice real AI techniques. I am excited to work with and learn from the research scientists, mentors, and fellow scholars at OpenAI.
I have some thoughts on potential directions that interest me, and I am also open to other opportunities.
Toxic comment detection:
In many situations, a sentence like "the weather is fucking good" will be mistakenly classified as toxic. Google's language model BERT provides contextual analysis to better understand the whole sentence. In addition, how can NLP better handle misspellings? Perhaps a character-based structure or embedding can help. A small sketch of the contextual idea follows.
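To make the contrast concrete, here is a minimal sketch comparing a naive keyword filter with a contextual BERT-based classifier. The model name `unitary/toxic-bert` is my assumption (a community BERT variant fine-tuned for toxicity on the Hugging Face Hub); any similarly fine-tuned model would serve.

```python
# Minimal sketch: a keyword filter flags any sentence containing a blocked
# word, while a contextual classifier scores the sentence as a whole.
# NOTE: "unitary/toxic-bert" is an assumed example model, not one named
# in this post.
from transformers import pipeline

SWEAR_WORDS = {"fucking"}

def keyword_filter(sentence: str) -> bool:
    # Context-blind: flags the word regardless of how it is used.
    return any(word in sentence.lower().split() for word in SWEAR_WORDS)

classifier = pipeline("text-classification", model="unitary/toxic-bert")

sentence = "the weather is fucking good"
print(keyword_filter(sentence))  # True: mistakenly flagged as toxic
print(classifier(sentence))      # contextual toxicity score, likely low
```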
Fairness in machine learning:
I used to be a research fellow at the University of Chicago, where the project was about building a recommendation system for healthcare. After examining the data, we found that patients tend to see male doctors even when female doctors have the same educational background and even more work experience. So I think there are many gender stereotypes of this kind: when we talk about a doctor, we tend to picture a man, and when we talk about a housekeeper, we may picture a woman. When we do natural language processing, how to remove or diminish such bias in the model seems to be an important topic; the sketch below shows how easily it can be measured.
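As a rough illustration, here is a sketch of probing occupation-gender associations in pretrained word embeddings, in the spirit of Bolukbasi et al. (2016). The GloVe model name is an assumption; any pretrained word-vector set in gensim exposes the same interface.

```python
# Minimal sketch: compare how strongly occupation words associate with
# "he" versus "she" in pretrained GloVe vectors.
import gensim.downloader as api

# Assumed example model; downloads once (~130 MB).
vectors = api.load("glove-wiki-gigaword-100")

for occupation in ["doctor", "nurse", "housekeeper", "engineer"]:
    to_he = vectors.similarity(occupation, "he")
    to_she = vectors.similarity(occupation, "she")
    lean = "he" if to_he > to_she else "she"
    print(f"{occupation}: he={to_he:.3f} she={to_she:.3f} -> leans '{lean}'")
```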
Automated machine learning (AutoML):
According to a recent paper by Mu Li from Amazon's AI lab, we cannot fairly compare two models using a single fixed set of hyperparameters. Hence, it would be helpful to provide open-source infrastructure that compares models only after each has been well tuned, that is, an apples-to-apples comparison.
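A minimal sketch of what such a comparison might look like: tune each candidate's hyperparameters before comparing scores, rather than fixing one arbitrary setting. The dataset and search grids here are illustrative assumptions, not anything prescribed by the paper.

```python
# Minimal sketch: apples-to-apples comparison via per-model tuning.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Illustrative grids; real infrastructure would search far more broadly.
candidates = {
    "logistic": (LogisticRegression(max_iter=5000),
                 {"C": [0.01, 0.1, 1, 10]}),
    "forest": (RandomForestClassifier(random_state=0),
               {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}),
}

for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=5)  # tune each model separately
    search.fit(X, y)
    print(f"{name}: best CV accuracy {search.best_score_:.3f} "
          f"with {search.best_params_}")
```

A more careful version would use nested cross-validation so the tuning itself does not leak into the comparison, but the principle is the same.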
Combining social science and economics with deep learning
The positive attribute of deep neural networks is that they produce highly non-linear approximations between the input and output layers, which can be useful for highly complex tasks; the drawback is that we have little insight into what the connections between nodes and hidden layers mean. Simple linear regression, by contrast, has very clear interpretability but terrible accuracy in such settings. There are already promising applications: satellite imagery has been used for local poverty estimation (a Stanford study), night-lights data to assess power outages, and Twitter data to estimate traditional economic variables such as the unemployment rate. There is also a multi-agent reinforcement learning model of common-pool resource appropriation, and recent research at Google's DeepMind has explored the problem of resource allocation using deep learning methods.
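The interpretability/accuracy tradeoff above can be seen in a few lines. This is a minimal sketch on a synthetic non-linear outcome of my own choosing: the linear model stays interpretable but underfits, while a small neural network fits well yet offers no readable coefficients.

```python
# Minimal sketch: interpretability vs. accuracy on a non-linear outcome.
# The data-generating process is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 2))
y = np.sin(X[:, 0]) * X[:, 1] ** 2 + rng.normal(0, 0.1, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_train, y_train)
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X_train, y_train)

print("linear R^2:", round(linear.score(X_test, y_test), 3))  # underfits
print("mlp    R^2:", round(mlp.score(X_test, y_test), 3))     # much better
print("linear coefficients:", linear.coef_)  # readable, but misleading here
```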