An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. (2020) cite arxiv:2010.11929
Comment: Fine-tuning code and pre-trained models are available at https://github.com/google-research/vision_transformer. ICLR camera-ready version with two small modifications: 1) added a discussion of the CLS vs. GAP classifier in the appendix; 2) fixed an error in the exaFLOPs computation in Figure 5 and Table 6 (the relative performance of the models is essentially unaffected).

Other publications of authors with the same name

Alecsa: Attentive Learning for Email Categorization using Structural Aspects. Knowl. Based Syst., (2016)
Building a multi-domain comparable corpus using a learning to rank method. Nat. Lang. Eng., 22 (4): 627-653 (2016)
Expert finding by the Dempster-Shafer theory for evidence combination. Expert Syst. J. Knowl. Eng., (2018)
Efficient Transformers: A Survey. (2020) cite arxiv:2009.06732
Long Range Arena: A Benchmark for Efficient Transformers. (2020) cite arxiv:2011.04006
Significant Words Representations of Entities. SIGIR, page 1183. ACM, (2016)
Share your Model instead of your Data: Privacy Preserving Mimic Learning for Ranking. CoRR, (2017)
Scaling Vision Transformers to 22 Billion Parameters (with 32 other authors). CoRR, (2023)
Group Membership Bias. CoRR, (2023)
Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers. CoRR, (2021)