GATOR

GATOR is a 2.5-year, IBM-funded research project to prototype a new type of task-oriented human-machine dialogue system that uses deep learning technologies (such as transformers) to learn how to conduct a dialogue that achieves maximum success in the agent’s given objective while also maintaining the highest possible level of user satisfaction. For example, an agent whose objective is to obtain a donation for a cause may attempt to maximize the donation amount but must be mindful of the customer’s willingness to pay; similarly, a customer service agent attempts to find an optimal solution to the customer’s request (e.g., booking a flight to a destination) that matches the customer’s stated requirements as closely as possible while ensuring that the customer’s satisfaction level remains high.

To achieve this capability, the agent is made aware of the dialogue’s progression despite the parties’ sometimes conflicting objectives. The goal is for the parties to reach an optimally acceptable outcome even when their main objectives are at odds and cannot be fully satisfied (e.g., the client declines to donate, no flight is available, etc.). In the first phase of the project, we designed and implemented the Progression Function, which computes a dialogue trajectory through a series of Global Dialogue States (GDS). The GDSs represent classes of likely outcomes that consider both the agent’s objective (e.g., extracting a donation) and the socio-behavioral equilibrium between the parties. We call the resulting metric “acceptability” because it ranks outcomes by the best compromise between the agent’s objective and the user’s “satisfaction.” In the second phase of the project, we are prototyping a novel dialogue control approach that allows the agent to project into the future and select the optimal continuation path to maximize dialogue acceptability.
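The sketch below is a minimal toy illustration of these two ideas: a progression function that maps per-turn signals onto a trajectory of discrete GDS labels, and a one-step lookahead that picks the continuation with the highest projected acceptability. All names, labels, weights, and thresholds here are hypothetical assumptions for illustration; in GATOR the GDS classes and the acceptability metric are learned by the models, not hand-coded like this.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical GDS labels; the real classes are learned from data.
GDS_LABELS = ["likely_failure", "at_risk", "neutral", "promising", "likely_success"]


@dataclass
class Turn:
    objective_progress: float  # 0..1, how far the agent's objective has advanced
    satisfaction: float        # 0..1, estimated user satisfaction at this turn


def acceptability(turn: Turn, weight: float = 0.5) -> float:
    """Toy acceptability score: a weighted compromise between the agent's
    objective and user satisfaction (the actual metric is learned)."""
    return weight * turn.objective_progress + (1.0 - weight) * turn.satisfaction


def classify_gds(score: float) -> str:
    """Map an acceptability score onto one of the discrete GDS labels."""
    index = min(int(score * len(GDS_LABELS)), len(GDS_LABELS) - 1)
    return GDS_LABELS[index]


def progression(dialogue: List[Turn]) -> List[str]:
    """Progression function: the trajectory of GDS labels over the dialogue so far."""
    return [classify_gds(acceptability(turn)) for turn in dialogue]


def choose_continuation(candidates: List[Turn]) -> Turn:
    """One-step lookahead: pick the candidate continuation with the highest
    projected acceptability (a stand-in for the phase-two dialogue control)."""
    return max(candidates, key=acceptability)


if __name__ == "__main__":
    history = [Turn(0.1, 0.9), Turn(0.3, 0.7), Turn(0.4, 0.5)]
    print(progression(history))  # trajectory of GDS labels so far

    # Two candidate next moves: a hard ask vs. a softer ask.
    candidates = [Turn(0.8, 0.2), Turn(0.5, 0.6)]
    best = choose_continuation(candidates)
    print(classify_gds(acceptability(best)))
```

In this toy version the compromise is a fixed linear weighting and the lookahead is only one turn deep; the project's dialogue control instead projects multiple turns into the future before committing to a continuation path.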

I worked on this project from September 2022 to February 2023. Because I joined toward the tail end of the project, much of my focus was on conducting human evaluations; specifically, we compared self-play and bot-bot evaluations to human evaluations. A paper I co-authored, “Towards a Proper Evaluation of Automated Conversational Systems”, was published in the proceedings of the AHFE International 2023 conference in July 2023.

Links