New Statistical Methods for the Big Data Regime
Date: 2014-09-11
Time: 11:00 am POSTPONED, DATE TBD
Location: Holmes 287
Speaker: Narayana Prasad Santhanam, Assistant Professor
We consider new statistical frameworks to tackle problems in the big data regime that overturn the conventional dichotomy between uniform and pointwise consistency. In particular, we see how the new approaches are in the process of transforming problems in a variety of applications, with specific examples in risk management (insurance and reinsurance) and aggregation of information from the Internet. In both the insurance business and the regulatory framework around it, ceilings on the amount of loss compensated are assumed to be necessary if premiums are to be realistic. However, in our new work, we show that one can insure against arbitrarily large risks with finite premiums, without making any artificial assumptions on models. Similarly, in the second problem of aggregating information on the Internet, we tackle the problem of polarization of views on controversial topics. Unlike notions of PageRank, quantifying this polarization necessitates new statistical results---some seemingly counterintuitive---in the slow mixing regime of Markov processes.
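As a minimal illustration of the slow mixing phenomenon mentioned above (not taken from the talk itself): in a two-state Markov chain where each state switches to the other with small probability eps, the chain's second eigenvalue is 1 - 2*eps, so the distance to the stationary distribution decays like (1 - 2*eps)^t. A small eps, loosely analogous to two polarized communities that rarely exchange views, makes the chain take very long to mix. The function name and parameters here are illustrative choices, not anything defined in the abstract.

```python
def tv_distance_after(t, eps):
    """Total variation distance to the uniform stationary distribution
    after t steps of the two-state chain P = [[1-eps, eps], [eps, 1-eps]],
    starting deterministically in one state.

    For this chain the stationary distribution is (1/2, 1/2) and the
    exact TV distance is 0.5 * (1 - 2*eps)**t.
    """
    p_stay = 0.5 * (1 + (1 - 2 * eps) ** t)  # P(chain is still in its start state at time t)
    return abs(p_stay - 0.5)

# A fast-mixing chain (eps = 0.4) is essentially stationary after 50 steps,
# while a slow-mixing one (eps = 0.01) is still far from stationarity.
fast = tv_distance_after(50, 0.4)
slow = tv_distance_after(50, 0.01)
```

The point of the sketch: statistical quantities estimated from a trajectory of the slow chain can look badly biased at any feasible sample size, which is the regime where standard consistency arguments break down.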
Dr. Narayana Santhanam's research interests lie in the intersection of information theory and statistics, with a focus on the undersampled/high dimensional regime, including applications to finance, biology, communication and estimation theory. He is the recipient of the 2006 Information Theory Best Paper award from the IEEE Information Theory Society along with A. Orlitsky and J. Zhang, and has organized several workshops on high dimensional statistics and "big data" problems over the last five years. He is currently a member of the NSF Center for Science of Information (CSoI), a Science and Technology Center established by the National Science Foundation to further research at the frontiers of information sciences.