My work spans theoretical and practical topics at the intersection of statistical learning and information theory. The focus is on high-dimensional and complex problems that are not amenable to traditional statistical methods and guarantees. At the same time, we try to understand the fundamental limits on when learning is possible at all, and how to characterize non-uniform learning. We also explore complexity in sampling, for example from slow-mixing Markov processes (such as gathering news on highly polarized topics on the Internet), and study how to interpret data from such sources.
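To give a feel for what slow mixing means here, the following is a minimal toy sketch of my own (not drawn from any of the publications below, and with all parameters chosen purely for illustration): a two-state chain that strongly prefers to stay in its current state produces long runs of samples from one "viewpoint", so its empirical average converges very slowly to the stationary distribution.

```python
import numpy as np

def sample_chain(p_stay=0.99, n_steps=10_000, seed=0):
    """Simulate a two-state Markov chain that flips with prob. 1 - p_stay."""
    rng = np.random.default_rng(seed)
    state = 0  # start in state 0 (one "viewpoint")
    states = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        if rng.random() > p_stay:  # rare flip to the other state
            state = 1 - state
        states[t] = state
    return states

# The stationary distribution puts mass 0.5 on each state, yet with
# p_stay = 0.99 the running mean can sit far from 0.5 even after
# thousands of steps: consecutive samples are highly correlated.
print(sample_chain().mean())
```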
My research program is currently funded by grants from the National Science Foundation, and my latest work tries to reshape the way we think about some statistical problems. At a high level, consider the nature of scientific discovery: we keep refining our theories as we see more data. The natural question is whether these refinements will ever end. Will there come a point at which we can say with confidence that our inference is good enough and that no additional data will change it substantially?
There are a number of nuances to this broad perspective, and our recent publications listed below summarize some of our results. A more complete list of publications is available via my CV here.
| Semester | Number | Title | Times | Location |
|---|---|---|---|---|
| Fall 2021 | ee342 | Probability and Statistics | MWF 10:30-11:20 | Virtual |
| Fall 2021 | ee345/lab | Linear Algebra and Machine Learning | MWF 8:30-9:30, Thu 9-11:45 | Virtual |
| Every semester | eex96 | Design Project | Mon 10:30am, Tue 4:30pm | POST 205F |
| Spring 2022 | ee646 | Information Theory | MW 9-10:30 | Sakamaki B301 |