I mean, you can still write and make art? AI isn't taking that away from you? If you're upset that it's replacing you career-wise, maybe you're just upset that you need a job to live and that livelihood is at the whims of capitalists?
It can be both. Why is the first thing we're seeking to automate with this current generation of AI the creative careers that humans enjoy?
It's another tool. For me, one that has allowed me to access my creativity FAR MORE than any artistic tool before it. Did Photoshop destroy photography? Did photography destroy realistic painting? If you don't like the tool personally, all the previous artistic tools humanity has created are still available to you.
edit: for those downvoting me, Pandora's box is open. You can either adapt with the times, or align yourselves with the Luddites who smashed textile machinery when lace could be mass-produced, and whose name is now synonymous with being out of date with current technology.
Photography actually did destroy photorealistic painting as a career. All those painted portraits you see in museums were basically the old-timey version of selfies. Photography replaced that job.
We're not "seeking" to do anything. AI art is a pretty logical and inevitable step in this recent breakthrough in machine learning. We're making AI out of everything for which there is a large amount of data on the internet. The same tech that is creating AI art by stealing assets off of the internet is also combing through sequenced DNA to find patterns and analyzing telescope data to find anomalies.
And none of this new AI tech has anything to do with robotics or ship dismantling, so it makes sense that that isn't the field being advanced by it. Although I bet you could fiddle with AI to analyze data around ship dismantling and make it more efficient.
Because they happened to be the fields that got there first. It's not like these are recent trends; ELIZA and AARON date back to the 1960s. But it really is the perfect example of "they were so preoccupied with whether they could, they never stopped to think if they should," spread over 60 years of technological advancement.
You use the word "upset" as if there were no rational reason to care about this, as though emotions are invalid or lesser.
Anyway, cameras and paintings…
According to this guy, only one thing is allowed to happen at a time. Sorry all, LLMs are the only option. Nothing else.
I don't see Google, Twitter, Facebook, NVIDIA, and Alibaba working on AI beyond the models designed to replace humans for content generation, and I don't see money from anyone else of that size going into other kinds of projects either.
Then you should take a better look, because most of those companies are researching AI for tasks far beyond content generation - Google and NVIDIA, for example, have done a lot of research on AI for robotics.
https://www.nvidia.com/en-us/research/ai-playground/
This is the most public place where NVIDIA discusses their AI projects, and none of them are robotics. Admittedly we also have models that are replacing engineers as well as artists, but I still don't see where they're advertising their robotics work.
This is the most public place where Google discusses their projects. Again, no discussion of robotics.
They very well could still be doing robotics work, but I don't care if they are, because they haven't advertised it to the public or tried to get us excited about it anywhere near the level at which they've all advertised their generative AIs.
I honestly don't care about the extent to which they're investing in one application of AI or the other. I care about the culture war these companies are waging against us, trying to make us all okay with AI-generated content that displaces humans from doing the work they enjoy, so that they can make money. If they're making robots with AI too, why aren't they talking about it nearly as much?
The robot dystopia will not be caused by evil AI enslaving humanity.
No matter how advanced or how self aware, AI will lack the ambition that is part of humanity, part of us due to our evolutionary history.
An AI will never have an opinion, only logical conclusions and directives that it is required to fulfil as efficiently as possible. The directives, however, are programmed by the humans who control these robots.
Humans DO have ambitions and opinions, and they have the ability to use AI to enslave other humans. Human history is filled with powerful, ambitious humans enslaving everyone else.
The robot dystopia is therefore a corporate dystopia.
I always roll my eyes when people invoke Skynet and Terminator whenever something uncanny is shown off. No, it’s not the machines I’m worried about.
No matter how advanced or how self aware, AI will lack the ambition that is part of humanity, part of us due to our evolutionary history.
The ambition isn't the issue. It's a question of power imbalance.
The Paperclip Maximizing Algorithm doesn’t have an innate desire to destroy the world, merely a mandate to turn everything into paperclips. And if the algorithm has enough resources at its disposal, it will pursue this quixotic campaign without regard for any kind of long term sensible result.
The robot dystopia is therefore a corporate dystopia.
There is some argument that one is a consequence of the other. It is, in some sense, the humans who are being programmed to maximize paperclips. The real Roko's Basilisk isn't some sinister robot brain, but a social mythology that leads us to work in the factories that make the paperclips, because we've convinced ourselves this will allow us to climb the Paperclip Company Corporate Ladder until we don't have to make these damned things anymore.
Someone screwed up if a paperclip maximiser is given the equipment to take apart worlds rather than a supply of spring steel.
This is kind of a dumb argument, isn’t it?
I have to imagine someone centuries ago probably complained about inventors wasting their time on some dumb printing press so smart people could write books and newspapers better, when they could have been building better farm tools. But could we have developed the tractor when we did if we were still handwriting everything?
Progress supports progress. Teaching computers to recognize and reproduce pictures might seem like a waste to some people, but how do you suppose a computer will someday disassemble a ship if it is not capable of recognizing what the ship is and what holds it together? Modern AI is primitive, but it will eventually lead to autonomous machines that can actually do that work intelligently, without blindly following an instruction set, oblivious to whatever might actually be happening around them.
I get the sentiment, but it's a bad example. Transformer models don't recognize images in any useful way that could be fed to other systems. They also don't have any capability of actual understanding or context. Heavily simplifying here: tokenisation of inputs lets a model group clusters of letters together into tokens, so when it receives tokens it can spit out whatever the training data says it should.
The only actual things that are improving greatly here which could be used in different systems are natural language processing, natural language output and visual output.
EDIT: Crossed out stuff that is wrong.
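For what it's worth, the letter-grouping step described a couple of comments up can be sketched in a few lines. This is a hypothetical toy tokenizer (the vocabulary and the greedy longest-match rule are invented for illustration; real subword tokenizers like BPE learn their vocabulary from data):

```python
# Toy greedy longest-match tokenizer: mimics how subword tokenizers
# group clusters of letters into tokens. Vocab is made up for the demo.
vocab = {"un", "break", "able", "ing", "the"}

def tokenize(text):
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible match first.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Unknown character: emit it as its own token.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("unbreakable"))  # ['un', 'break', 'able']
```

The model downstream never sees letters or words as such, only these token IDs, which is part of why "understanding" claims get contested in threads like this.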
Could you give me an example that uses live feeds of video data, or feeds the output to another system? As far as I’m aware (I could be very wrong! Not an expert), the only things that come close to that are things like OCR systems and character recognition. Describing in machine-readable actionable terms what’s happening in an image isn’t a thing, as far as I know.
Well, this is simply incorrect. And confidently incorrect at that.
Vision transformers (ViT) are an important branch of computer vision models that apply transformers to image analysis and detection tasks. They perform very well. The main idea is the same: by tokenizing the input image into smaller patches, you can apply the same attention mechanism as in NLP transformer models.
ViT models were introduced in 2020 by Dosovitskiy et al. in the landmark paper "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale" (https://arxiv.org/abs/2010.11929), a work that has received almost 30,000 academic citations since its publication.
So claiming transformers only improve natural language and visual output is straight up wrong. They are also widely used in visual analysis, including classification and detection.
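To make the patch-tokenization idea concrete, here is a minimal NumPy sketch. The 16x16 patch size comes from the paper's title; the image size and everything else are illustrative assumptions, and a real ViT would then project each patch vector through a learned embedding before attention:

```python
import numpy as np

H = W = 224  # assumed input image size
P = 16       # patch size, per "An Image is Worth 16x16 Words"
C = 3        # RGB channels

image = np.zeros((H, W, C))  # stand-in for a real image

# Cut the image into a (14 x 14) grid of 16x16 patches,
# then flatten each patch into one vector: these are the "visual tokens".
patches = image.reshape(H // P, P, W // P, P, C)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, P * P * C)

print(patches.shape)  # (196, 768): 196 tokens, each 16*16*3 = 768 values
```

From the attention mechanism's point of view, those 196 patch vectors play exactly the role that word tokens play in an NLP transformer.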
Thank you for the correction. So hypothetically, with millions of hours of GoPro footage from the scuttle crew, and some futuristic supercomputer that could crunch live data from a standard-definition camera and output decisions, we could hook that up to a Boston Dynamics-style robot and replace one member of the crew?
The argument isn’t against the technology, it’s against the application of that technology.
Path of least resistance. It is harder to build a robot that can disassemble ships with its hands than it is to pattern-match pictures together.
This XKCD comes to mind: https://xkcd.com/1425/
This isn’t even close to what they’re saying. It’s closer to complaining about how the Yankees replaced their star pitcher with a modified howitzer.
It’s not about people “wasting their time on some dumb invention,” it’s about how that useful invention is being used to replace jobs that people actually like doing because it’ll save their bosses money. It’s not even like when photography was invented or Photoshop came out and people freaked out about artists being put out of work, because those require different skill sets and opened up entirely new fields of art while also helping optimize other fields. This stuff could improve the fields that they’re created for by helping people optimize their workflow to make the act of creating things easier. But that’s not what they’re doing. It’s being used to mimic the skills of the people who enjoy doing these things so that they don’t have to pay people to do it.
Even ignoring the ethical/moral aspect of this stuff being trained without permission on the work of the people it’s designed to replace, the end goal isn’t to increase the quality of life of people, allowing us more time to do the things we love - things like, you know, art and writing - it’s to make the rich even richer and push people out of well-paying jobs.
The closest example I can think of is when Disney fired all their 2D animators and switched to 3D. They didn't do it because 3D was better. In many ways, the quality was much worse at the time. But 2D animators are unionized and 3D animators aren't, so they could get away with paying them much less. The exact same thing happened with the practical-effects vs. digital-effects guys in Hollywood right around the same time.
Society has always been losing jobs; the population just pivots to other specialisations. The only reason we fear it is because of our economic system that preys on it and turns it into profit, but that's another conversation entirely.
On the subject of losing creative venues, both your examples (photography and Photoshop) show how technology didn't detract from the arts but added to them, letting the average person do much more. The same will be true for AI: I can see an inevitable boom happening in the filmmaking and animation industries, not to mention comic books and, most of all, indie gaming. In the long run it's empowering for the individual, imo.
The economic system is what he’s talking about here. That was my point. The entire conversation from the side against this stuff has always been about the economic situation of it. Without that factor, I think the only thing people would care about is whether or not their work is being used without their permission/maliciously.
As for Photoshop and photography, that's actually why I brought those up specifically: because they were feared as things that would destroy artists' jobs and actually brought about entirely new fields of art - and also because they're the two examples people bring up when arguing against LLMs replacing people's jobs, acting like the critics are just some Luddites afraid of science.
Right now, the way I see it, there are two distinct groups benefiting from AI: those whose workflow has been improved by it, and those who think AI can get them the results of work without having to do the work themselves or pay somebody else to do it. Thanks to the economic issues at the heart of this whole thing, that second group is set to shrink the number of people who can spend time creating things, simply because they now have to work a job that isn't creating things and no longer have the time to put towards it. So I can see AI creating a whole new art boom or a bust in equal measure.

That second group is of concern to the art communities as well, because they only see the destination and don't see that the journey is just as important to the act of creation, and that is already causing schisms between artists and "prompters" who think they're just as skilled because they used a generator to make some cool stuff. People are already submitting unedited, prompted work to art and writing competitions.
Oh no. You can’t do it for fun now because the computers are doing it.
It's a stupid thing to be angry about, because AI isn't about making art; art just happens to be a good benchmark because it's very visual and you can easily see at a glance how much more advanced one AI is than another.
You really think that mega corporations are interested in art? If that was all AI could be used for, no one would be researching it.
It’s not necessarily for fine arts, but for cheap content generation.
For example, it can generate fairly accurate 3D models for environments and secondary characters without paying hundreds of people to do this manually. It can generate videos from text prompts without hours of human labor for filming, editing, post-producing, etc.