Resources
The following are some of the top publicly available AI research papers:
“Denoising Diffusion Probabilistic Models” (2020) by Ho et al. (a minimal sketch of the paper's forward noising step is included after this list)
“Scaling Laws for Neural Language Models” (2020) by Kaplan et al.
“GPipe: Easy Scaling with Micro-Batch Pipeline Parallelism” (2019) by Huang et al.
“The Annotated Transformer” (2018) by Alexander Rush (Harvard NLP).
“Relational recurrent neural networks” (2018) by Santoro et al.
“BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” (2018) by Devlin et al.
“Large Scale GAN Training for High Fidelity Natural Image Synthesis” (BigGAN, 2018) by Brock et al.
“A Style-Based Generator Architecture for Generative Adversarial Networks” (StyleGAN, 2018) by Karras et al.
“Glow: Generative Flow with Invertible 1×1 Convolutions” (2018) by Kingma and Dhariwal.
“Attention Is All You Need” (2017) by Vaswani et al. (a minimal sketch of its scaled dot-product attention is included after this list)
“A Simple Neural Network Module for Relational Reasoning” (2017) by Santoro et al.
“Neural Discrete Representation Learning” (2017) by van den Oord et al.
“Variational Lossy Autoencoder” (2017) by Chen et al.
“ImageNet Classification with Deep Convolutional Neural Networks” (2012) by Krizhevsky et al.
“Pointer Networks” (2015) by Vinyals et al.
“Improved Techniques for Training GANs” (2016) by Salimans et al.
“Pixel Recurrent Neural Networks” (2016) by van den Oord et al.
“Identity Mappings in Deep Residual Networks” (2016) by He et al.
“Neural Machine Translation by Jointly Learning to Align and Translate” (2015) by Bahdanau et al.
“Order Matters: Sequence To Sequence For Sets” (2016) by Vinyals et al.
“Multi-Scale Context Aggregation by Dilated Convolutions” (2016) by Yu and Koltun.
“Recurrent Neural Network Regularization” (2014) by Zaremba et al.
“The Unreasonable Effectiveness of Recurrent Neural Networks” (2015) by Andrej Karpathy.
“Understanding LSTM Networks” (2015) by Christopher Olah.
“Deep Residual Learning for Image Recognition” (2015) by He et al.
“Generative Adversarial Networks” (2014) by Goodfellow et al.
“Neural Turing Machines” (2014) by Graves et al.
“Quantifying the Rise and Fall of Complexity in Closed Systems: The Coffee Automaton” (2014) by Aaronson et al.
“Auto-Encoding Variational Bayes” (2013) by Kingma and Welling.
“Keeping Neural Networks Simple by Minimizing the Description Length of the Weights” (1993) by Geoffrey E. Hinton and Drew van Camp.
“A Tutorial Introduction to the Minimum Description Length Principle” (2004) by Peter Grünwald.
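
As a companion to the Ho et al. (2020) entry above, here is a minimal NumPy sketch (our illustration, not the paper's released code) of the closed-form forward noising step the paper derives: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps with eps ~ N(0, I), where alpha_bar_t is the cumulative product of (1 - beta_s). The linear 1e-4 to 0.02 variance schedule matches the paper; the toy 4x4 input is an assumption for demonstration.

    import numpy as np

    def ddpm_forward_sample(x0, t, betas, rng=None):
        # Sample x_t ~ q(x_t | x_0) in closed form:
        # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I)
        if rng is None:
            rng = np.random.default_rng()
        alpha_bar = np.cumprod(1.0 - np.asarray(betas))  # alpha_bar_t = prod_s (1 - beta_s)
        eps = rng.standard_normal(np.shape(x0))
        return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

    # Toy usage: noise a 4x4 "image" to step t = 500 of a 1000-step linear schedule.
    betas = np.linspace(1e-4, 0.02, 1000)
    x_t = ddpm_forward_sample(np.ones((4, 4)), t=500, betas=betas)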
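
Similarly, for the Vaswani et al. (2017) entry: a minimal NumPy sketch of scaled dot-product attention, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, the core operation the paper builds the Transformer from. The shapes in the usage lines are illustrative assumptions.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # softmax(Q K^T / sqrt(d_k)) V, with the softmax taken over the key axis.
        d_k = Q.shape[-1]
        scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))  # numerically stable softmax
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    # Toy usage: 3 queries attending over 5 key/value pairs, d_k = d_v = 8.
    rng = np.random.default_rng(0)
    Q, K, V = rng.standard_normal((3, 8)), rng.standard_normal((5, 8)), rng.standard_normal((5, 8))
    out = scaled_dot_product_attention(Q, K, V)  # shape (3, 8)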