Fluid intelligence
The field of artificial intelligence (AI) is broad and evolving
Many consider the summer of 1956 to be the birthdate of artificial intelligence (AI) as a research field. A group of researchers, including Marvin Minsky, John McCarthy, Nathaniel Rochester, and Claude Shannon, attended a workshop that, according to their proposal in 1955, aimed “to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
The field has since evolved rapidly, though the definition of AI remains fluid.
“The meaning of AI is still very much under debate, and I am not sure there is consensus on what does and doesn't constitute AI. If I were to attempt to describe it, I would say that a system is artificially intelligent if it can learn to react to new situations from experience and acts in a way which is consistent with how humans judge intelligence,” explained Zelda Mariet, a PhD candidate in Suvrit Sra’s group at the Laboratory for Information and Decision Systems (LIDS) and the Computer Science and Artificial Intelligence Laboratory (CSAIL).
“AI is a very broad area, as humans demonstrate intelligence in many aspects,” Yen-Ling Kuo, a PhD candidate in Boris Katz’s group at CSAIL, pointed out. Quoting Marvin Minsky’s book The Emotion Machine, Kuo called it a “suitcase” word: “They contain many smaller concepts that can be unpacked and analyzed.” These many aspects and concepts are reflected in the field’s vast and rapid expansion into multiple disciplines.
“To me, the term AI comprises philosophical aspects and the discussion about (human) intelligence as a whole, which is why the term ‘machine learning’ is a more precise description of my research,” Mariet said. She elaborated further, “Machine learning, in a sense, is closer to the engineering that people think of when they talk about AI: creating, improving upon, and understanding the statistical properties of the models used to leverage patterns from data.”
Kuo mentioned the range of forms an AI can take: “An agent can be a robot in the physical world to assist people or a virtual email agent that reads your emails to generate auto-replies,” she said.
Kuo’s research focuses on machines’ planning of motions and actions to achieve specific tasks and goals, like reaching to pick up an object. “I am working on algorithms that enable agents to plan faster using their visual perceptions and past experience, and enable robots to follow commands like ‘pick up the box at the corner.’” For her studies, she uses supervised learning algorithms with data collected from simulations where she can define criteria for success. “Since plans and actions are sequences of movements, I use hidden Markov models … to learn and decode an action sequence, which can be formulated as neural network models,” Kuo explained.
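The kind of model Kuo describes can be sketched in a few lines. The example below is not her code: it simply decodes the most likely sequence of hidden actions from noisy observations using the Viterbi algorithm for a hidden Markov model, with made-up actions, observation symbols, and probabilities.

```python
import numpy as np

# Hypothetical HMM for illustration: 2 hidden actions ("reach", "grasp")
# and 3 discrete observation symbols from a perception module.
states = ["reach", "grasp"]
start_p = np.array([0.8, 0.2])                 # initial action probabilities
trans_p = np.array([[0.6, 0.4],                # P(next action | current action)
                    [0.3, 0.7]])
emit_p = np.array([[0.7, 0.2, 0.1],            # P(observation | action)
                   [0.1, 0.3, 0.6]])

def viterbi(obs):
    """Return the most likely hidden action sequence for an observation sequence."""
    n_states, T = len(states), len(obs)
    log_delta = np.zeros((T, n_states))
    backptr = np.zeros((T, n_states), dtype=int)
    log_delta[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        scores = log_delta[t - 1][:, None] + np.log(trans_p)   # (from, to)
        backptr[t] = scores.argmax(axis=0)
        log_delta[t] = scores.max(axis=0) + np.log(emit_p[:, obs[t]])
    # Trace back the best path from the most likely final action.
    path = [int(log_delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 2, 2]))   # e.g. ['reach', 'reach', 'grasp', 'grasp']
```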
There are numerous machine learning algorithms, and they can be divided into three groups: supervised, unsupervised, and reinforcement learning. Each group has many subgroups, and the algorithms can be optimized and applied in different contexts.
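For readers unfamiliar with the distinction, the short sketch below (generic scikit-learn usage on toy data, not any of the researchers’ code) contrasts supervised learning, where the model is given the correct label for each example, with unsupervised learning, where it must find structure on its own; reinforcement learning, the third group, instead learns from rewards received while acting.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression   # supervised: learns from labels
from sklearn.cluster import KMeans                     # unsupervised: no labels given

rng = np.random.default_rng(0)
# Toy data: two blobs of 2-D points with labels 0 and 1.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Supervised learning: the model is told the correct label for each point.
clf = LogisticRegression().fit(X, y)
print("supervised prediction:", clf.predict([[4.0, 4.0]]))

# Unsupervised learning: the model only sees X and groups it into clusters.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster assignment:", km.predict([[4.0, 4.0]]))
```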
Kuo noted the need for comprehensive data collection to train the models: “To scale up to more actions and more environments, we will need datasets that cover examples in those scenarios.” Accounting for such a vast variety of scenarios would require tens of thousands of examples.
To do that, Kuo mostly uses probabilistic machine learning models which, she says, are better at dealing with uncertainty. The goal is to compensate for unobservable variables and future events, as well as to handle sensor or perception errors.
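One rough illustration of why probabilistic models help with uncertainty (a generic Bayes filter on invented numbers, not Kuo’s model): the system keeps a probability distribution over a hidden state and updates it as noisy sensor readings arrive, so a single erroneous reading shifts the belief rather than breaking it.

```python
import numpy as np

# Hypothetical example: a robot is in one of 3 positions but its sensor is noisy.
# belief[i] = probability the robot is at position i.
belief = np.array([1 / 3, 1 / 3, 1 / 3])      # start fully uncertain

# P(sensor reads position j | robot is actually at position i): mostly right, sometimes wrong.
sensor_model = np.array([[0.8, 0.1, 0.1],
                         [0.1, 0.8, 0.1],
                         [0.1, 0.1, 0.8]])

def update(belief, reading):
    """Bayes update: weight each hypothesis by how well it explains the reading."""
    posterior = belief * sensor_model[:, reading]
    return posterior / posterior.sum()

for reading in [1, 1, 0, 1]:                  # noisy readings; one of them disagrees
    belief = update(belief, reading)
print(belief)  # probability mass concentrates on position 1 despite the bad reading
```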
Mariet studies machine learning model design, a more theoretical approach to machine learning. “I work on creating and improving models that, after observing a finite number of solutions to a specific problem, ‘learn’ to solve the problem on new data,” she explained. Neural networks are a very popular and widely used model class for machine learning, but the number of parameters within a neural network can be too large to handle or store. The GoogLeNet network has roughly 11 million parameters, according to Mariet.
“One option is to set almost all parameters to zero, and keep only a very small fraction at their initial values,” Mariet explained. This method is called subset selection. How does one select these parameters to reflect the quality and diversity of the original features and still perform well on new data? “There are a lot of problems in machine learning that can be seen as particular cases of subset selection.”
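A crude sketch of that idea, assuming simple magnitude-based selection (Mariet’s work also cares about the diversity of the kept features, which this ignores):

```python
import numpy as np

def keep_top_fraction(weights, fraction=0.1):
    """Zero out all but the largest-magnitude `fraction` of the parameters."""
    w = weights.ravel()
    k = max(1, int(len(w) * fraction))
    threshold = np.sort(np.abs(w))[-k]          # magnitude of the k-th largest weight
    mask = np.abs(weights) >= threshold
    return weights * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 5))                     # a small weight matrix for illustration
W_sparse = keep_top_fraction(W, fraction=0.2)
print(np.count_nonzero(W_sparse), "of", W.size, "weights kept")
```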
“For example, a model is provided with 100,000 images of cats and dogs, and from those images learns to distinguish any cat from any dog,” Mariet explained. These images can come in many complex variants: “What if the model sees many more cats than dogs, or if the images are blurry, or if some adversary gets to choose 50 images and lie, saying it’s a picture of a cat instead of a dog?” Mariet said.
In her work, Mariet uses probability measures over subsets of a ground set (e.g., determinantal point processes) to analyze machine learning model design. Her work mostly involves mathematics and optimization. “Since I work on the theoretical side, I am mostly interested in obtaining provable results about the models I play with, although I do run experiments to make sure the practice aligns with the theory,” Mariet said.
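The appeal of determinantal point processes for subset selection is that the probability of a subset is proportional to the determinant of the corresponding block of a similarity kernel, so sets of near-duplicate items score low and diverse sets score high. A minimal, purely illustrative computation of that score on an invented kernel:

```python
import numpy as np

# Similarity kernel L over 3 items; items 0 and 1 are nearly identical, item 2 is different.
L = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])

def dpp_score(subset):
    """Unnormalized DPP probability of a subset: det of the kernel restricted to it."""
    sub = L[np.ix_(subset, subset)]
    return np.linalg.det(sub)

print(dpp_score([0, 1]))   # ~0.19: two redundant items, low score
print(dpp_score([0, 2]))   # ~0.99: two diverse items, high score
```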
Another example of applied machine learning is the work of Hsin-Yu Lai, an Electrical Engineering and Computer Science (EECS) PhD student working in the Energy-Efficient Multimedia Systems (EEMS) Group and the Integrative Neuromonitoring and Critical Care Informatics (INCCI) Group with Thomas Heldt and Vivienne Sze. Lai uses machine learning to track changes in eye movement patterns that may correlate with neurological diseases, such as Alzheimer’s disease. “We use computer vision algorithms to acquire the eye movement patterns,” Lai said, listing face detection and neural-network-based eye tracking among the algorithms she works with.
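As a generic illustration of the first stage of such a pipeline (not the group’s actual algorithms, and the video filename is hypothetical), OpenCV’s bundled Haar cascade can locate a face in a frame before a neural-network eye tracker takes over:

```python
import cv2

# Load OpenCV's pretrained frontal-face detector (ships with the library).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("recording.mp4")   # hypothetical video of a subject's face
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        eye_region = gray[y:y + h // 2, x:x + w]   # upper half of the face crop
        # ...a neural-network eye tracker would estimate gaze from this region...
    print(f"detected {len(faces)} face(s)")
cap.release()
```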
“Current methods to track disease progression are variable and invasive. By using mobile platforms to quantify the changes in the eye movement patterns, we can develop a tool that can be personalized and less invasive,” Lai explained. Her research is aimed at applying machine learning for diagnostic purposes in the medical field.
Finally, Mariet pointed out, “One thing I think is important to consider no matter what you are doing in machine learning...is that the data we have access to is not necessarily representative of the data that the model will have to work with ‘in the wild.’” Once the system is deployed, it will encounter a much greater and more diverse range of data, according to Mariet. “So, it is vital to make sure that the model works just as well on this data.”
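A toy demonstration of the gap Mariet describes, on made-up data: a classifier that scores well on held-out data drawn from the training distribution can degrade noticeably when the test distribution shifts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(shift, n=500):
    """Two classes of 2-D points; `shift` moves the distribution away from training."""
    X0 = rng.normal([0 + shift, 0], 1.0, (n, 2))
    X1 = rng.normal([3 + shift, 3], 1.0, (n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(shift=0.0)
clf = LogisticRegression().fit(X_train, y_train)

X_test_iid, y_test_iid = make_data(shift=0.0)      # same distribution as training
X_test_wild, y_test_wild = make_data(shift=3.0)    # shifted distribution "in the wild"

print("accuracy, same distribution:", clf.score(X_test_iid, y_test_iid))
print("accuracy, shifted distribution:", clf.score(X_test_wild, y_test_wild))
```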