I think I forgot this one
I’ve already seen people go absolutely fucking crazy with this - from people posting trans-supportive Muskrat pictures to people making fucked-up images with Nintendo/Disney characters, the utter lack of guardrails has led to predictable chaos.
Between the cost of running an LLM and the potential lawsuits this can unleash, part of me suspects this might end up being what ultimately does in Twitter.
It hasn’t been hashed out in court yet, but I suspect AI Mickey will be considered copyright infringement, rather than public domain.
I get why this is bad, but GIMP has been free for years and you can make whatever you want with it. Is it just that these AI programs need no skill at all? Where do you draw the line here?
Is it just that these AI programs need no skill at all?
That’s a major reason. That Grok’s complete lack of guardrails is openly touted as a feature is another.
It’s also an accountability issue. If you create something in GIMP or whatever, everyone agrees that you made it and are responsible for any copyright issues or defamation or whatever else arises from that work. That becomes fuzzier when people start saying “Grok made this!” Especially because Grok operates according to a model that can and does go beyond whatever it’s been instructed to do, so you might be able to plausibly make that argument if you craft the prompt right.
And I can guarantee that the cesspool formerly known as Twitter will try to play whichever side of that is more advantageous to them. Copyright infringement? That’s on the user. Unique IP? Well, Grok had a profound and independent creative role and so we deserve a piece.
I really wish we would stop calling shitty tech products (such as this) the inventions of billionaires like Elon Musk. He probably did jack shit during the development of this.
Musk is probably unique among Big Tech owners in that he’s using his product daily (most people think to the detriment of both Xshitter and his other ventures). He is definitely the person who both directed company resources to be devoted to a GenAI product and ensured that it doesn’t have the “guardrails” he and his fans decry as “woke”.
In other words, no other Big Tech CEO is dumb enough to give the OK to a product that trashes its reputation.
It’s using Flux, which was developed by Black Forest Labs and is open source. Neither Elon nor Twitter had any hand in its creation; they simply use it on their site.
I feel like generative AI is an indicator of a broader pattern of stagnation in innovation (shower thoughts here, I’m not bringing sources to this game).
Just a little while ago I was wondering whether there’s an argument to be made that the innovations of the post-war period were far more radically and beneficially transformative for most people. Stuff like accessible dishwashers, home tools, better home refrigeration, etc. I feel like now tech is just here to make things worse. I can’t think of any upcoming or recent home tech product that I’m remotely excited about.
A lot of the tech “innovation” is actually VC “innovation” and is meant to dismantle the safety nets of the working class. Literally half of their disruption is “we’ll finance you to lose money until you’ve ruined all competition, and then you can price gouge everyone while your ‘contractors’ don’t get a decent salary, a retirement fund, or any kind of insurance.”
Most of the stuff these days is behind the scenes, like clean energy, innovative water reclamation, etc. It’s life-changing, but we don’t really see it every day.
In my opinion we should get cars out of urban areas and switch to e-bikes/rickshaws. That would be both transformative and an improvement.
Also I think it’s more of a constant stream of small incremental changes. Things like GPS, the internet, lithium batteries, etc. have all enabled a lot of other innovation, but have been rolled out more continuously.
Just looking at battery tech: things like smartphones, drones, and EVs wouldn’t be possible without it, and each of those has gone through massive innovation cycles of its own.
I think there’s definitely something to be said for the exhaustion of low-hanging fruit. Most of those big consumer innovations were the application of novel physics or chemistry (refrigerants, synthetics, plastics, microwaves, etc) combined with automating very labor-intensive but relatively simple tasks (dish washing, laundry, manual screwdriving, etc). The digital age added some very powerful logic to that toolset, but it still remains primarily limited to the kinds of activities and processes that can be defined algorithmically. The ingenuity of software developers, along with the introduction of new tools and peripheral capabilities (printers, networks, sensors), has shown that the set of problems that can be defined algorithmically is much larger than you would first think, but it’s still limited.
Adding on to this, it’s worth noting the degree to which defining problems algorithmically requires altering the parameters of that problem. For example, compare shopping at a store with using a vending machine. The vending machine dramatically changes the scope of the activity by limiting the variety of items you can get, only allowing one item per transaction, preventing you from examining the goods before purchasing, and so on. The high-level process is the same; I move from having no soda and some dollars to one soda and fewer dollars. But the changes that are made to ensure the procedure can be mechanized have some significant social tradeoffs. Each transaction has less friction, but also less potential.

These consequences are even more pronounced if your point of comparison is an old-school soda fountain, where “hanging out waiting for the soda jerk and drinking together” is largely the whole point. While that activity requires more from you, it also gives more opportunities to interact with and meet people and to see friends outside of work or school. Even if you don’t want to spend the time or be social (or, like me, sometimes get severe social anxiety!), this still leads to a world where there are more and larger blocks of time that you can’t be expected to trade away to your job or other obligations. Your boss is likely to fire you for being late to work, unless that tardiness comes from the ferry you and your coworkers rely on being late. Because it’s inevitable friction in a necessary part of working (can’t work if you can’t get to work) and because it can’t be put entirely on the individual (even if you do want to blame the employee for taking the “wrong” boat, do you really want to fire the whole team?), the system is basically forced to give you more grace than it otherwise would want to.
This is another way to frame the problems with more recent “innovations”: while social media and the gig economy both arguably empower individual consumers and producers, whether of cultural output or of services like taxis, they do so in ways that fundamentally change the relationship and individualize the connections between consumers, producers, and the system they interact through. And because nobody has as direct a connection to the owners and operators of that system, those owners have more power to increase their profits at the expense of everyone who actually has to use the system to function.
There’s definitely something to this narrowing-of-opportunities idea. To frame it in a really bare-bones way: it’s people who frame the world in simplistic terms and then assume that their framing is the complete picture (because they’re super clever, of course). Then if they try to address the problem with a “solution”, they simply address their abstraction of it, and if they’re successful in the market, they actually make the abstraction the dominant form of it. However, all the things they disregarded are either lost, or still there and undermining their solution.
It’s like taking a 3D problem, only seeing in 2D, implementing a 2D solution and then being surprised that it doesn’t seem to do what it should, or being confused by all these unexpected effects that are coming from the 3rd dimension.
Your comment about giving more grace also reminds me of work out there from legal scholars who argued that algorithmically implemented law doesn’t work because the law itself is designed to have a degree of interpretation and slack to it that rarely translates well to an “if x then y” model.
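To make the “if x then y” point concrete, here’s a toy sketch (my own illustration, not taken from any of those scholars’ work): imagine a statute that waives a late-filing penalty when the person had a “reasonable excuse”. The moment you encode it, the discretion has to be replaced with something crisp, and that replacement quietly rewrites the rule.

```python
# Hypothetical illustration: a "reasonable excuse" clause forced into code.
def late_filing_penalty(days_late: int, excuse: str) -> int:
    """Rigid 'if x then y' rendering of a rule that was written to be judged case by case."""
    # The statute's open-ended "reasonable excuse" becomes a hard-coded whitelist.
    recognized_excuses = {"hospitalized", "natural_disaster"}
    if excuse in recognized_excuses:
        return 0
    # Anything the whitelist didn't anticipate (the ferry broke down, a sick kid,
    # a postal failure...) now gets zero grace, even though a human adjudicator
    # probably would have waived it.
    return 50 + 10 * days_late

print(late_filing_penalty(3, "ferry_cancelled"))  # 80: the slack is gone
```

Whatever proxies you pick, the encoded version ends up being a different, stricter law than the one on the books.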
I’ve thought about a similar idea before in the more minor context of stuff like note-taking apps – when you’re taking notes in a paper notebook, you can take notes in whatever format you want, you can add little pictures or diagrams or whatever, arranged however you want. Heck, you can write sheet music notation. When you’re taking notes in an app, you can basically just write paragraphs of text, or bullet points, and maybe add pictures in some limited predefined locations if you’re lucky.
Obviously you get some advantages in exchange for the restrictive format (you can sync/back up things to the internet! you can search through your notes! etc) but it’s by no means a strict upgrade, it’s more of a tradeoff with advantages and disadvantages. I think we tend to frame technological solutions like this as though they were strict upgrades, and often we aren’t so willing to look at what is being lost in the tradeoff.
In reading about this I’ve seen some interesting concepts from scraping the edges of management cybernetics, which treats organizations kind of like analogue information-processing systems. The one that really stuck in my mind is the accountability sink: an organizational function that takes the responsibility for some action or decision away from the people in the organization who actually do it and places it somewhere more abstract, like a process or a policy. This ties in to a lot of what we talk about here, since a lot of the tech industry these days seems to be about centralizing things around a few major platforms and giving the people who run those platforms as many accountability sinks as they can come up with, with AI being the newest.
… Nope. In fact one of my in-laws said that they’d buy us an air fryer for Christmas once the sales came. Everyone forgot about it shortly after, and I don’t care one bit.
(annoying air fryer owner voice) we have a 25 litre air fryer and it’s awesome, just a nice little countertop oven, and [METEOR FALLS ON LONDON]
Look what I didn’t make
I’m still waiting for even one argument for the usefulness of AI image generation that isn’t fucked up. Just one.
Grok seems to support nudity and deepfakes too, according to some news articles I’ve seen, because of course nothing screams free speech more than plastering the face of your favorite actor or political opponent onto a porn scene. So now let’s see how long it takes the first bluecheck fucker to try and create CSAM with it, because I suppose that’ll be the point when it gets too hot even for Elon.
It’s pretty great for DnD. A lot of people have trouble imagining things in full detail from a text or spoken description, so being able to generate images of the scene, characters, objects etc is super fun and adds a lot of richness to the experience.
This is the best use I’ve found for it as well. Especially if I want to quickly create a unique token for an NPC.
Generally speaking I’ll commission actual artists for pictures of PCs, but for a named NPC sorcerer who’s just going to be in a handful of scenes? AI has been great.
I haven’t played DnD in decades, so I’m unfamiliar with the scene nowadays. How are these visuals presented to the players? Does everyone have a screen? Or is this more for an online scenario?
Yeah absolutely. Of course the work of an actual artist will be better in almost every case. AI lacks consistency, it doesn’t always follow the prompt properly, it’s easily confused, and geometry and anatomy are sometimes fucked up. But for a group of dirt-poor students who just want to have a fun game to play on the weekends, AI is good enough.
It’s also good for concepting an idea before commissioning a real artist.
I’m banking on the primary use case being “getting Elon sued into oblivion by Disney” .
…I mean yeah that’s a pretty obvious use case - if Elon’s given you a checkmark against your will, might as well use the benefits to cause him as much grief as possible.
(Also, loved your series on Devs - any idea when the final part’s gonna release? Seems it’s gotten hit with some major delays.)
Oh no, the dangers of having people read your work!
It is coming, potentially in the next week. I was on leave for a couple of weeks, and since getting back I’ve been finishing up a paper with my colleague on Neoreaction and ideological alignment between disparate groups. We should be submitting to the journal very soon, so then I can get back to finishing off this series.