
Which Papers Won At 35th AAAI Conference On Artificial Intelligence?

Source – https://analyticsindiamag.com/

The 35th AAAI Conference on Artificial Intelligence (AAAI-21), held virtually this year, saw more than 9,000 paper submissions, of which only 1,692 research papers made the cut.

The Association for the Advancement of Artificial Intelligence (AAAI) committee has announced the Best Paper and Runners Up awards. Let’s take a look at the papers that won the awards.

Best Papers

1| Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting

About: Informer is an efficient transformer-based model for Long Sequence Time-series Forecasting (LSTF). A team of researchers from UC Berkeley introduced this Transformer model to predict long sequences. Informer has three distinctive characteristics:

  • A ProbSparse self-attention mechanism, which achieves O(L log L) time complexity and memory usage while matching the dependency-alignment performance of canonical self-attention.
  • Self-attention distilling, which highlights dominating attention by halving the cascading layer input and efficiently handles extremely long input sequences.
  • A generative-style decoder that predicts long time-series sequences in one forward operation rather than step by step, improving the inference speed of long-sequence predictions.
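The query-selection idea behind ProbSparse self-attention can be sketched as follows. This is a minimal NumPy illustration: it scores each query by how far its attention distribution is from uniform (the max-mean gap of its logits) and keeps only the top-u queries; the paper's key-sampling approximation, which is what actually yields the O(L log L) cost, is omitted, and the function name is ours.

```python
import numpy as np

def probsparse_select(Q, K, u):
    """Pick the u 'active' queries whose attention distribution is
    furthest from uniform, measured by the max-mean gap of their
    attention logits (a simplified ProbSparse sparsity score)."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)                 # (L_q, L_k) scaled dot products
    sparsity = logits.max(axis=1) - logits.mean(axis=1)
    return np.argsort(sparsity)[-u:]              # indices of the selected queries
```

In the full model, the queries that are not selected receive a trivial (mean-of-values) output, which is what keeps the cost sub-quadratic.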

2| Exploration-Exploitation in Multi-Agent Learning: Catastrophe Theory Meets Game Theory

About: Exploration-exploitation is a powerful tool in multi-agent learning (MAL). A team of researchers from the Singapore University of Technology and Design studied a variant of stateless Q-learning with softmax (Boltzmann) exploration, also termed Boltzmann Q-learning or smooth Q-learning (SQL). Boltzmann Q-learning is one of the most fundamental models of exploration-exploitation in MAL.
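A minimal single-agent sketch of stateless Boltzmann (softmax) Q-learning may help make the exploration-exploitation coupling concrete; the arm rewards, temperature, and learning rate below are illustrative choices, not values from the paper.

```python
import numpy as np

def boltzmann_policy(q_values, temperature=1.0):
    """Softmax (Boltzmann) action probabilities from Q-values."""
    z = q_values / temperature
    z -= z.max()                         # subtract max for numerical stability
    probs = np.exp(z)
    return probs / probs.sum()

def smooth_q_update(q, action, reward, alpha=0.1):
    """Stateless Q-learning update: move Q(a) toward the observed reward."""
    q = q.copy()
    q[action] += alpha * (reward - q[action])
    return q

rng = np.random.default_rng(0)
q = np.zeros(3)                          # three actions, no state
for _ in range(500):
    probs = boltzmann_policy(q, temperature=0.5)
    a = rng.choice(3, p=probs)
    r = [0.2, 0.5, 0.8][a] + rng.normal(0, 0.1)   # toy stochastic rewards
    q = smooth_q_update(q, a, r)
```

The temperature controls the trade-off the paper analyses: high temperature keeps the policy near uniform (exploration), while low temperature concentrates probability on the highest-valued action (exploitation).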

3| Mitigating Political Bias in Language Models through Reinforced Calibration 

About: Researchers from Dartmouth College, University of Texas and ProtagoLabs described metrics for measuring political bias in GPT-2 generation and proposed a reinforcement learning (RL) framework to reduce political biases in the generated text. Using rewards from word embeddings or a classifier, the RL framework guided the debiased generation without having access to the training data or requiring the model to be retrained. The researchers also proposed two bias metrics (indirect bias and direct bias) to quantify the political bias in language model generation.

Runners Up

1| Learning from eXtreme Bandit Feedback

About: Researchers from Amazon and UC Berkeley studied the problem of batch learning from bandit feedback in extremely large action spaces. They introduced a selective importance sampling estimator (sIS) that operates in a significantly more favorable bias-variance regime. The sIS estimator is obtained by performing importance sampling on the conditional expectation of the reward with respect to a small subset of actions for each instance.
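The bias-variance trade made by restricting importance sampling to a small per-instance action subset can be sketched as follows. This is a simplification rather than the paper's exact sIS estimator: choosing the subset as the target policy's top-k actions and dropping out-of-subset instances are our illustrative choices.

```python
import numpy as np

def sis_value_estimate(logged_actions, logged_rewards, pi0, pi, top_k=10):
    """Selective importance sampling (sketch): apply the IS correction
    pi(a|x)/pi0(a|x) only when the logged action lies in a small
    per-instance subset of actions, trading a little bias for a large
    variance reduction in huge action spaces.
    pi0, pi: (n, A) arrays of propensities per instance."""
    n, _ = pi.shape
    est = 0.0
    for i, (a, r) in enumerate(zip(logged_actions, logged_rewards)):
        subset = np.argsort(pi[i])[-top_k:]   # top-k actions under target policy
        if a in subset:
            est += r * pi[i, a] / pi0[i, a]
        # else: contribution dropped (bias) instead of a huge IS weight (variance)
    return est / n
```

When the subset covers all actions, this reduces to the standard unbiased importance-sampling estimator.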

2| Self-Attention Attribution: Interpreting Information Interactions Inside Transformer

About: Researchers from Microsoft and Beihang University proposed a self-attention attribution algorithm to interpret the information interactions inside the Transformer. As part of the research, the scientists first extracted the most salient dependencies in each layer to construct an attribution graph, which reveals the hierarchical interactions inside the Transformer. Next, they applied self-attention attribution to identify the important attention heads. Finally, they showed that the attribution results can be used as adversarial patterns to implement non-targeted attacks against BERT.
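Given per-edge attribution scores (which the paper computes via integrated gradients on the attention weights), the attribution-graph construction step can be sketched as below; the relative-threshold rule is an illustrative choice, not the paper's exact criterion.

```python
import numpy as np

def salient_edges(attr, threshold=0.5):
    """Build an attribution graph from precomputed scores
    attr[layer, i, j] (contribution of attention from token i to
    token j): keep the edges whose score exceeds a fraction of
    that layer's maximum score."""
    edges = []
    for layer, scores in enumerate(attr):
        if scores.max() <= 0:             # skip layers with no positive attribution
            continue
        cutoff = threshold * scores.max()
        for i, j in zip(*np.where(scores >= cutoff)):
            edges.append((layer, int(i), int(j)))
    return edges
```

Stacking the surviving edges layer by layer is what exposes the hierarchical information flow the paper visualises.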

3| Dual-Mandate Patrols: Multi-Armed Bandits for Green Security

About: Researchers from Harvard University and Carnegie Mellon University introduced LIZARD, an algorithm that accounts for the decomposability of the reward function, the smoothness of the decomposed reward function across features, the monotonicity of rewards as patrollers exert more effort, and the availability of historical data. According to them, LIZARD leverages both decomposability and Lipschitz continuity simultaneously, bridging the gap between combinatorial and Lipschitz bandits.
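The Lipschitz ingredient can be illustrated with the classic tightening of upper confidence bounds: an arm's optimistic estimate can never exceed a neighbour's estimate plus L times their feature distance. This is a generic Lipschitz-bandit sketch, not LIZARD itself, and the variable names are ours.

```python
import numpy as np

def lipschitz_ucb(means, counts, xs, L, t):
    """Upper confidence bounds tightened by Lipschitz continuity:
    arm i's bound is capped by every arm j's bound plus L * |x_i - x_j|,
    so information about one arm constrains its neighbours."""
    conf = np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1))
    base = means + conf                    # standard UCB indices
    n = len(xs)
    return np.array([min(base[j] + L * abs(xs[i] - xs[j]) for j in range(n))
                     for i in range(n)])
```

With L = 0 every arm inherits the globally tightest bound; with large L the bounds decouple and this reduces to ordinary per-arm UCB.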
