The principal idea of automated adaptation is that a machine anticipates the future commands of its operator and adapts itself accordingly, such that the operator’s underlying intention can be realized faster and more accurately. As an example, a web browser could open the webpage I planned to open without my needing to type in the URL.
Such methods have seen some success, mainly based on statistical inference over large amounts of data. Here, the machine has access to a large number of previous interactions from a similar context, and bases its predictions on patterns it has found in this data. This type of automated adaptation is often seen in web shops, in the form of suggestions. Let’s say Jamie is buying one specific book, “Book A”, and the web shop suggests a second book, “Book B”. Jamie indeed likes this suggestion, but is surprised, as there do not appear to be any obvious similarities between the two books. The computer system, however, has observed that in the past, a relatively large percentage of customers looking at Book A also looked at Book B. This correlation might be backed by additional information, such as Jamie’s age and other personal information available to the web shop, including the books Jamie previously bought there.
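To make the mechanism behind such suggestions concrete, the following is a minimal sketch, in Python, of a co-occurrence-based recommendation: it counts how often items appeared together in previous customers’ histories and suggests the item most frequently seen alongside the current one. The data and the function names are invented for illustration and do not describe any particular web shop’s actual system.

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(histories):
    """Count how often each pair of items appears together in one customer's history."""
    pair_counts = Counter()
    for items in histories:
        for a, b in combinations(set(items), 2):
            pair_counts[frozenset((a, b))] += 1
    return pair_counts

def suggest(item, pair_counts, top_n=1):
    """Return the items most often seen together with `item` by past customers."""
    scores = Counter()
    for pair, count in pair_counts.items():
        if item in pair:
            (other,) = pair - {item}
            scores[other] = count
    return [other for other, _ in scores.most_common(top_n)]

# Hypothetical browsing histories of previous customers.
histories = [
    ["Book A", "Book B"],
    ["Book A", "Book B", "Book C"],
    ["Book A", "Book D"],
    ["Book B", "Book C"],
]

pair_counts = build_cooccurrence(histories)
print(suggest("Book A", pair_counts))  # ['Book B']: most often co-viewed with Book A
```

Real recommendation systems are of course far more elaborate, weighing purchase data, demographics, and much else, but the underlying logic of exploiting patterns found in other people’s behavior is the same.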
But there are also examples where automated adaptation famously failed: Take Microsoft’s Office Assistant, better known by its appearance as Clippy, released in 1997. It was a virtual assistant that was supposed to support people in their work with the Office software, but its many false predictions and irrelevant suggestions made it highly disliked and even disruptive. Another example comes from around the same time, and was also introduced by Microsoft. The intention was to simplify interaction with menus by reducing the number of menu items. Items that had not been used for a long time were hidden, such that the menu only contained items that were, by that logic, relevant to the current operator. The problem, however, is that behavior can change. Also, some functions simply aren’t used that often, but can be important when suddenly needed. If Jessie doesn’t use a printer often, then the menu item “Print” will disappear, making it easier to find other items such as “Save” and “Open”. However, when a last-minute train ticket needs to be printed, the necessary item is nowhere to be found! There is, of course, an “Unhide” button, but the whole idea was to never need it; thus, Jessie doesn’t know about this button. Such confrontations can be very frustrating, especially when one is in a hurry.
What is the difference between the examples where automated adaptation worked and where it did not? The first example, a web shop’s recommendation, is effective because the underlying data is generally valid: It describes the general behavior of human beings in a general context. It might inspire us to think about our behavior and our decisions, and about the patterns that lie underneath both. It also makes me think about the freedom of our choices. But besides these considerations, it clearly shows that a computer system can learn about us by looking at large amounts of data, even if this data does not come from us personally.
The second and third examples, however, show the limitations of statistical inference as a basis for automated adaptation. Decisions made by individuals in a very specific context, dependent on momentary information, short-term goals, and fluctuating preferences, are hard to foresee. The computer does not have the required information about that individual user at that point in time, in that specific context.
A good book suggestion coming from a friend, or a helping hand at the right time that we didn’t specifically ask for, may give us the sense that this person really knows or understands us. We use our computers every day, and the potential benefits of having our computers really “understand” us and put their information-processing power to work to support us when we need it are immense.
A conclusion from the above thoughts is that the full power of automated adaptation can only be set free if we find a way to provide the machine with context-sensitive, momentary information about the user and their context, and a way to interpret this information automatically.
When we desperately need to print something, “Print” shouldn’t just be a clearly visible option in the menu—the ink should already be dry.
Thorsten O. Zander
Laurens R. Krol