Which Papers Won At 35th AAAI Conference On Artificial Intelligence?

Source – https://analyticsindiamag.com/

The 35th AAAI Conference on Artificial Intelligence (AAAI-21), held virtually this year, received more than 9,000 paper submissions, of which only 1,692 research papers made the cut.

The Association for the Advancement of Artificial Intelligence (AAAI) committee has announced the Best Paper and Runners Up awards. Let’s take a look at the papers that won the awards.

Best Papers

1| Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting

About: Informer is an efficient Transformer-based model for Long Sequence Time-series Forecasting (LSTF). A team of researchers from Beihang University, UC Berkeley and Rutgers University introduced the model to predict long sequences. Informer has three distinctive characteristics:

  • A ProbSparse self-attention mechanism, which achieves O(L log L) time complexity and memory usage while maintaining comparable performance on sequences’ dependency alignment (a code sketch follows this list).
  • Self-attention distilling, which highlights dominating attention by halving the cascading layer input and efficiently handles extremely long input sequences.
  • A generative-style decoder, which predicts long time-series sequences in a single forward operation rather than step by step, improving the inference speed of long-sequence predictions.
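
As promised above, here is a minimal NumPy sketch of the ProbSparse idea for a single attention head. The `factor` parameter, the random key sampling and the mean-value fallback for "lazy" queries are illustrative simplifications of our own, not the authors' implementation:

```python
import numpy as np

def probsparse_attention(Q, K, V, factor=5):
    """Minimal single-head sketch of ProbSparse self-attention.

    Only the top-u "active" queries (ranked by a max-minus-mean sparsity
    score) attend over all keys; the remaining "lazy" queries fall back
    to the mean of V. Sizes follow the O(L log L) budget."""
    L_Q, d = Q.shape
    L_K = K.shape[0]
    u = min(L_Q, int(factor * np.ceil(np.log(L_Q))))         # active queries
    sample_k = min(L_K, int(factor * np.ceil(np.log(L_K))))  # sampled keys

    rng = np.random.default_rng(0)
    idx = rng.choice(L_K, sample_k, replace=False)
    S_sample = Q @ K[idx].T / np.sqrt(d)                  # (L_Q, sample_k)
    M = S_sample.max(axis=1) - S_sample.mean(axis=1)      # sparsity score
    top = np.argsort(M)[-u:]                              # most active queries

    out = np.tile(V.mean(axis=0), (L_Q, 1))               # lazy-query fallback
    S = Q[top] @ K.T / np.sqrt(d)                         # full attention for top-u
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)
    out[top] = A @ V
    return out

# Toy usage: a 96-step sequence with 8-dimensional heads.
rng = np.random.default_rng(1)
Q, K, V = (rng.normal(size=(96, 8)) for _ in range(3))
print(probsparse_attention(Q, K, V).shape)  # (96, 8)
```

Only u = O(log L) queries pay the full attention cost, which is where the O(L log L) complexity in the first bullet comes from.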

2| Exploration-Exploitation in Multi-Agent Learning: Catastrophe Theory Meets Game Theory

About: Exploration-exploitation is a powerful tool in multi-agent learning (MAL). A team of researchers from the Singapore University of Technology and Design studied a variant of stateless Q-learning with softmax or Boltzmann exploration, also termed Boltzmann Q-learning or smooth Q-learning (SQL). Boltzmann Q-learning is one of the most fundamental models of exploration-exploitation in multi-agent learning.
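
As a concrete illustration of smooth Q-learning, here is a minimal sketch of a stateless agent with Boltzmann (softmax) exploration. The two-action game, the fixed opponent strategy and the hyperparameters are illustrative choices, not taken from the paper:

```python
import numpy as np

def boltzmann(q, temperature):
    """Softmax/Boltzmann policy: explores at high temperature, exploits at low."""
    z = np.exp((q - q.max()) / temperature)
    return z / z.sum()

# Stateless smooth Q-learning in a 2-action matching-pennies-style game
# against a fixed mixed-strategy opponent (an illustrative setup).
rng = np.random.default_rng(0)
payoff = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row player's payoff matrix
opponent = np.array([0.7, 0.3])                # fixed opponent strategy
Q, alpha, T = np.zeros(2), 0.1, 0.5            # Q-values, step size, temperature

for _ in range(5000):
    pi = boltzmann(Q, T)
    a = rng.choice(2, p=pi)                    # explore-exploit via softmax
    b = rng.choice(2, p=opponent)
    Q[a] += alpha * (payoff[a, b] - Q[a])      # stateless Q-learning update

print("Q-values:", Q, "policy:", boltzmann(Q, T))
```

Lowering the temperature T pushes the policy toward pure exploitation, while raising it flattens the policy toward uniform exploration; this exploration parameter is the quantity whose effect on the learning dynamics the paper analyses.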

3| Mitigating Political Bias in Language Models through Reinforced Calibration 

About: Researchers from Dartmouth College, the University of Texas and ProtagoLabs described metrics for measuring political bias in GPT-2 generation and proposed a reinforcement learning (RL) framework to reduce political bias in the generated text. Using rewards from word embeddings or a classifier, the RL framework guides debiased generation without access to the training data and without retraining the model. The researchers also proposed two bias metrics, indirect bias and direct bias, to quantify political bias in language-model generation.
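
To make the reward-guided idea concrete, here is a toy REINFORCE-style sketch. The word-embedding bias reward, the candidate set and the update rule are stand-ins of our own for the paper's GPT-2 setup, not its actual code:

```python
import numpy as np

def bias_reward(embedding, liberal_dir, conservative_dir):
    """Toy debiasing reward (illustrative, not the paper's exact metric):
    text leaning strongly toward either political direction scores lower."""
    lean = abs(embedding @ liberal_dir - embedding @ conservative_dir)
    return -lean  # higher reward for more balanced generations

# Policy-gradient sketch: reweight a categorical "generation policy" over
# candidate texts by the debiasing reward (a placeholder for steering
# GPT-2's generation with RL).
rng = np.random.default_rng(0)
lib, con = rng.normal(size=16), rng.normal(size=16)  # hypothetical bias axes
candidates = rng.normal(size=(8, 16))   # embeddings of 8 candidate generations
logits = np.zeros(8)

for _ in range(500):
    p = np.exp(logits - logits.max()); p /= p.sum()
    i = rng.choice(8, p=p)
    r = bias_reward(candidates[i], lib, con)
    grad = -p; grad[i] += 1.0           # d log p(i) / d logits
    logits += 0.1 * r * grad            # REINFORCE step on the bias reward

print("least biased candidate (per this toy reward):", int(np.argmax(logits)))
```

The key property mirrored here is that the reward is computed purely from the generated output, so no access to the training data and no retraining of the underlying language model is required.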

Runners Up

1| Learning from eXtreme Bandit Feedback

About: Researchers from Amazon and UC Berkeley studied the problem of batch learning from bandit feedback in extremely large action spaces. They introduced a selective importance sampling estimator (sIS) that operates in a significantly more favorable bias-variance regime. The sIS estimator is obtained by performing importance sampling on the conditional expectation of the reward with respect to a small subset of actions for each instance.
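
A simplified sketch of the bias-variance trade-off behind this estimator is given below. The per-instance subset rule and the synthetic log are assumptions made for the demo, not the paper's exact construction:

```python
import numpy as np

def ips(rewards, target_p, logging_p):
    """Vanilla inverse-propensity scoring: unbiased, but its variance blows
    up in huge action spaces where logged propensities are tiny."""
    return np.mean(rewards * target_p / logging_p)

def selective_ips(rewards, target_p, logging_p, in_subset):
    """Simplified selective estimator: importance-weight only logged actions
    inside a small per-instance subset (e.g. the logging policy's head),
    trading a little bias for much lower variance."""
    w = np.where(in_subset, target_p / logging_p, 0.0)
    return np.mean(rewards * w)

# Synthetic bandit log: 10,000 interactions with many tiny propensities.
rng = np.random.default_rng(0)
n = 10_000
logging_p = rng.uniform(1e-4, 0.2, size=n)  # propensity of each logged action
target_p = rng.uniform(0.0, 0.2, size=n)    # target policy's probability
rewards = rng.binomial(1, 0.1, size=n).astype(float)
in_subset = logging_p > 0.01                # crude proxy for the action subset

print("IPS      :", ips(rewards, target_p, logging_p))
print("sIS-style:", selective_ips(rewards, target_p, logging_p, in_subset))
```

Dropping the low-propensity tail removes the enormous importance weights that dominate the variance of vanilla IPS, which is the regime shift the paper formalizes.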

2| Self-Attention Attribution: Interpreting Information Interactions Inside Transformer

About: Researchers from Microsoft and Beihang University proposed a self-attention attribution algorithm to interpret the information interactions inside the Transformer. The researchers first extracted the most salient dependencies in each layer to construct an attribution graph, which reveals the hierarchical interactions inside the Transformer. Next, they applied self-attention attribution to identify the important attention heads. Finally, they showed that the attribution results can be used as adversarial patterns to implement non-targeted attacks on BERT.
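
The attribution score follows an integrated-gradients-style recipe applied to attention maps: Attr(A) = A ⊙ (1/m) Σₖ ∂F((k/m)A)/∂A. Below is a PyTorch sketch using a toy readout F that stands in for the Transformer's output as a function of one head's attention scores; the real method differentiates the model itself:

```python
import torch

def attention_attribution(F, A, steps=20):
    """Sketch of self-attention attribution: a Riemann approximation of
    Attr(A) = A * (1/m) * sum_k dF((k/m) * A)/dA."""
    grads = torch.zeros_like(A)
    for k in range(1, steps + 1):
        scaled = ((k / steps) * A).detach().requires_grad_(True)
        F(scaled).backward()        # gradient of the output w.r.t. scaled A
        grads += scaled.grad
    return A * grads / steps

# Toy stand-in for "model output given the attention map": a fixed linear
# readout (hypothetical; the paper uses the Transformer's own forward pass).
torch.manual_seed(0)
readout = torch.randn(4, 4)
F = lambda attn: (attn * readout).sum()

A = torch.softmax(torch.randn(4, 4), dim=-1)  # a toy 4x4 attention map
attr = attention_attribution(F, A)
print("most influential attention edge:", divmod(int(attr.argmax()), 4))
```

High-attribution edges are what the attribution graph collects layer by layer, and heads whose edges carry little attribution are the candidates for pruning.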

3| Dual-Mandate Patrols: Multi-Armed Bandits for Green Security

About: Researchers from Harvard University and Carnegie Mellon University introduced LIZARD, an algorithm that accounts for the decomposability of the reward function, the smoothness of the decomposed reward function across features, the monotonicity of rewards as patrollers exert more effort, and the availability of historical data. According to the researchers, LIZARD leverages both decomposability and Lipschitz continuity simultaneously, bridging the gap between combinatorial and Lipschitz bandits.
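
A single-target sketch of the Lipschitz ingredient is shown below; the discretized effort levels, the noise model and the toy monotone reward are illustrative assumptions, not LIZARD itself:

```python
import numpy as np

def lipschitz_ucb(true_reward, arms, L=1.0, T=2000, seed=0):
    """Simplified single-target sketch of a Lipschitz bandit over patrol
    effort: the confidence bound at one effort level is tightened by nearby
    observations, since reward can change by at most L per unit of effort."""
    rng = np.random.default_rng(seed)
    counts, sums = np.zeros(len(arms)), np.zeros(len(arms))
    for t in range(1, T + 1):
        bonus = np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
        means = np.where(counts > 0, sums / np.maximum(counts, 1), 1.0)
        naive = np.where(counts > 0, means + bonus, np.inf)
        # Lipschitz tightening: arm i is also bounded through every arm j.
        ucb = np.min(naive[None, :] + L * np.abs(arms[:, None] - arms[None, :]),
                     axis=1)
        i = int(np.argmax(ucb))
        r = true_reward(arms[i]) + rng.normal(0, 0.1)  # noisy patrol outcome
        counts[i] += 1; sums[i] += r
    return counts

# Toy reward that is monotone in patrol effort (more effort, fewer snares).
arms = np.linspace(0, 1, 11)
pulls = lipschitz_ucb(lambda x: 0.8 * np.sqrt(x), arms)
print("pulls per effort level:", pulls.astype(int))
```

LIZARD additionally decomposes the reward across features and targets and exploits monotonicity in effort; the sketch above shows only how Lipschitz continuity lets observations at one effort level shrink the uncertainty at neighboring levels.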
