11 points

Well, this is simply incorrect. And confidently incorrect at that.

Vision transformers (ViTs) are an important branch of computer vision models that apply transformers to image analysis and detection tasks, and they perform very well. The main idea is the same as in NLP: by tokenizing the input image into smaller patches, you can apply the same attention mechanism as in NLP transformer models.
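To make the tokenization step concrete, here is a minimal numpy sketch (illustrative only; the patch size and embedding dimension of 768 follow the ViT-Base configuration, and the random projection stands in for a learned linear layer):

```python
import numpy as np

def patchify(image, patch_size=16):
    """Split an (H, W, C) image into flattened (N, patch_size*patch_size*C) patches."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    return (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
             .transpose(0, 2, 1, 3, 4)          # group the two patch-grid axes together
             .reshape(-1, patch_size * patch_size * c)
    )

rng = np.random.default_rng(0)
image = rng.standard_normal((224, 224, 3))       # a typical ViT input resolution
tokens = patchify(image)                         # 14 x 14 = 196 patches of dim 768
embed = tokens @ rng.standard_normal((16 * 16 * 3, 768))  # stand-in for the learned projection
print(tokens.shape, embed.shape)                 # (196, 768) (196, 768)
```

Each row of `embed` is then treated exactly like a word embedding in a text transformer, so the attention layers need no modification.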

ViT models were introduced in 2020 by Dosovitskiy et al. in the landmark paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale" (https://arxiv.org/abs/2010.11929), a work that has received almost 30,000 academic citations since its publication.

So claiming transformers only improve natural language and vision output is straight up wrong. They are also widely used in visual analysis tasks, including classification and detection.

1 point

Thank you for the correction. So hypothetically, with millions of hours of GoPro footage from the scuttle crew, and a futuristic supercomputer that could crunch live data from a standard-definition camera and output decisions, we could hook that up to a Boston Dynamics-style robot and have it replace one member of the crew?

1 point

And such is the march of progress.

