Having studied and worked in the field of machine learning and artificial intelligence for over 25 years, Professor Bilmes brings a different perspective than many of the views listeners have heard so far. Recently, he has been excited by the science of information management as it relates to machine learning, for example, how to make large datasets smaller and more efficient.

This matters for AI and machine learning because the field is, at its core, about teaching computers to solve complex tasks. Large and inefficient datasets make this harder and significantly add to the cost of training machines to act intelligently. That the field has come so far in such a short time is due to three factors: (1) big data and big information, (2) large amounts of commodity vector supercomputing such as GPUs, and (3) mathematically expressive models (such as deep models) that are trainable using gradient-based methods in a way that matches the type of computing this commodity hardware provides. Even so, there is still much work to be done.

Though he says fears about machines or robots taking over the world are pure science fiction, society does need to discuss several issues: liability for machines that malfunction or otherwise cause critical accidents, transparency in how machine learning algorithms are created, ensuring that machines do not unintentionally become biased against different groups of people, and more, as can be heard by listening in.



Listen & Subscribe to Future Tech Podcast on Your Favorite Platform
