Many thanks to the people who helped make it better:

- the [DALLE-Pytorch](https://discord.gg/xBPBXfcFHd) and [EleutherAI](https://www.eleuther.ai/) communities for testing and exchanging cool ideas
- [Rohan Anil](https://github.com/rohan-anil) for adding the Distributed Shampoo optimizer
- [Phil Wang](https://github.com/lucidrains), who has provided a lot of cool implementations of transformer variants and interesting insights with [x-transformers](https://github.com/lucidrains/x-transformers)
- [Katherine Crowson](https://github.com/crowsonkb) for [super conditioning](https://twitter.com/RiversHaveWings/status/1478093658716966912)

## Citing DALL·E mini

If you find DALL·E mini useful in your research or wish to refer to it, please use the following BibTeX entry.

```text
@misc{Dayma_DALL·E_Mini_2021,
author = {Dayma, Boris and Patil, Suraj and Cuenca, Pedro and Saifullah, Khalid and Abraham, Tanishq and Lê Khắc, Phúc and Melas, Luke and Ghosh, Ritobrata},
title = {DALL·E Mini},
year = {2021}
}
```

Original DALL·E from "[Zero-Shot Text-to-Image Generation](https://arxiv.org/abs/2102.12092)" with image quantization from "[Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020)".

Image encoder from "[Taming Transformers for High-Resolution Image Synthesis](https://arxiv.org/abs/2012.09841v2)".

Sequence-to-sequence model based on "[BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461v1)" with implementation of a few variants:

- "[DeepNet: Scaling Transformers to 1,000 Layers](https://arxiv.org/abs/2203.00555)"
- "[NormFormer: Improved Transformer Pretraining with Extra Normalization](https://arxiv.org/abs/2110.09456)"
- "[Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030)"
- "[Root Mean Square Layer Normalization](https://arxiv.org/abs/1910.07467)"

Main optimizer (Distributed Shampoo) from "[Scalable Second Order Optimization for Deep Learning](https://arxiv.org/abs/2002.09018)".

### Citations

```text
@misc{ramesh2021zeroshot,
title={Zero-Shot Text-to-Image Generation},
author={Aditya Ramesh and Mikhail Pavlov and Gabriel Goh and Scott Gray and Chelsea Voss and Alec Radford and Mark Chen and Ilya Sutskever},
year={2021},
eprint={2102.12092},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```

```text
@misc{radford2021learning,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and Aditya Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
year={2021},
eprint={2103.00020},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```

```text
@misc{esser2021taming,
title={Taming Transformers for High-Resolution Image Synthesis},
author={Patrick Esser and Robin Rombach and Björn Ommer},
year={2021},
eprint={2012.09841},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```

```text
@misc{lewis2019bart,
title={BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension},
author={Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Ves Stoyanov and Luke Zettlemoyer},
year={2019},
eprint={1910.13461},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```

```text
@misc{anil2021scalable,
title={Scalable Second Order Optimization for Deep Learning},
author={Rohan Anil and Vineet Gupta and Tomer Koren and Kevin Regan and Yoram Singer},
year={2021},
eprint={2002.09018},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```

```text
@misc{shazeer2020glu,
title={GLU Variants Improve Transformer},
author={Noam Shazeer},
year={2020},
url={https://arxiv.org/abs/2002.05202}
}
```

```text
@misc{wang_ma_dong_huang_zhang_wei_2022,
title={DeepNet: Scaling Transformers to 1,000 Layers},
author={Wang, Hongyu and Ma, Shuming and Dong, Li and Huang, Shaohan and Zhang, Dongdong and Wei, Furu},
year={2022},
eprint={2203.00555},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```

```text
@misc{shleifer2021normformer,
title={NormFormer: Improved Transformer Pretraining with Extra Normalization},
author={Sam Shleifer and Jason Weston and Myle Ott},
year={2021},
eprint={2110.09456},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```

```text
@inproceedings{liu2021swinv2,
title={Swin Transformer V2: Scaling Up Capacity and Resolution},
author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},
booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2022}
}
```