-3 points

Could you give me an example that uses live feeds of video data, or that feeds the output to another system? As far as I'm aware (I could be very wrong! Not an expert), the only things that come close are OCR and similar character-recognition systems. Describing in machine-readable, actionable terms what's happening in an image isn't a thing, as far as I know.

8 points

No live video, no; that didn't seem to be the topic.

But if you had the horsepower, I don't think it's impossible, based on what I've worked with. From a bottleneck standpoint, it's just about snipping out frames and distributing the images.
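
A rough sketch of what I mean by snipping and distributing (assuming OpenCV; the source name, sampling rate, and hand-off queue are all made up for illustration):

```python
# Rough sketch: grab frames from a video source and hand them off
# to downstream workers. The source, sampling rate, and queue size
# are assumptions, not anything specific from this thread.
import cv2
from queue import Queue

frames: Queue = Queue(maxsize=64)   # downstream workers consume from here
cap = cv2.VideoCapture("feed.mp4")  # could also be a camera index or RTSP URL

EVERY_N = 30  # sample roughly one frame per second of 30 fps video
i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if i % EVERY_N == 0:
        frames.put(frame)  # distribute: a worker pool would pull these
    i += 1
cap.release()
```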

-2 points

> No live video, no

Well, that'd be a prerequisite for a transformer model making decisions for a ship-scuttling robot, which is why I brought it up.

3 points

> Describing in machine-readable, actionable terms what's happening in an image isn't a thing, as far as I know.

It is. That's actually the basis of multimodal transformers: they have a shared embedding space for multiple modalities (e.g. text and images). If you encode an input and take its embedding, you suddenly have a vector describing the contents of that input.
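
As a concrete sketch, here's what that looks like with CLIP through the Hugging Face `transformers` library (my choice of checkpoint, image file, and candidate labels; none of these come from the thread):

```python
# Sketch of a shared text/image embedding space using CLIP via the
# Hugging Face `transformers` library. The checkpoint, image file,
# and labels below are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("frame.jpg")  # e.g. a snipped video frame
labels = ["a ship being scuttled", "an empty harbor", "a cargo crane"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# image_embeds and text_embeds live in the same vector space, so cosine
# similarity scores how well each description matches the image.
img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
for label, score in zip(labels, (img @ txt.T).squeeze(0).tolist()):
    print(f"{score:.3f}  {label}")
```

The ranked scores are a machine-readable, actionable description of what's in the image.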
