So, I’m self-hosting Immich. The issue is that we tend to take a lot of pictures of the same scene or thing so we can later pick the best one, which means we often end up with 5~10 photos that are basically duplicates, but not quite.
Some duplicate finding programs put those images at 95% or more similarity.

I’m wondering if there’s any way, probably at the file-system level, for those near-identical images to be compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?

4 points

Well, how would you know which ones you’d be okay with a program deleting? You’re the one taking the pictures.

Deduplication checking is about files that have exactly the same data payload contents. Filesystems don’t have a concept of images versus other files. They just store data objects.

2 points
Deleted by creator
4 points

Note that Git doesn’t store deltas. It will reuse unchanged files, but it stores a (compressed) copy of every file that has ever existed in the whole history, under its SHA-1 hash.
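For anyone curious what that looks like, here is a minimal sketch of Git’s content addressing (the file names are hypothetical): every blob is stored under the SHA-1 of a small header plus its full contents, so two byte-identical photos collapse into one object, while a near-duplicate shot becomes a second full object.

```python
import hashlib

def git_blob_hash(data: bytes) -> str:
    # Git hashes a blob as sha1(b"blob <size>\0" + contents).
    header = b"blob %d\x00" % len(data)
    return hashlib.sha1(header + data).hexdigest()

# Hypothetical files: two shots from the same burst.
photo_a = open("IMG_0001.jpg", "rb").read()
photo_b = open("IMG_0002.jpg", "rb").read()

print(git_blob_hash(photo_a) == git_blob_hash(photo_a))  # True: identical bytes, stored once
print(git_blob_hash(photo_a) == git_blob_hash(photo_b))  # almost certainly False: a second full blob
```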

2 points
Deleted by creator
4 points

I’m not saying to delete anything; I’m saying the file system could save space with something similar to deduping.
If I understand correctly, deduping works by sharing the same data blocks between files, so there’s no actual data loss.

6 points

I believe this is what some compression algorithms can do if you compress the similar photos into a single archive. It sounds like that’s what you want: archive each group (e.g. per day), have Immich cache the thumbnails, and only decompress the archive when you view the full resolution. Maybe test some algorithms like zstd on a group of similar photos archived together vs. compressed individually?

FYI, filesystem deduplication works on content hashes (per file or per block, depending on the filesystem). Only exact 1:1 binary duplicates share the same hash.

Also, modern image and video codecs are already heavily optimized, which is why re-compressing a JPG or MP4 offers negligible savings and sometimes even increases the file size.
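A quick way to run that comparison, as a sketch: assuming the zstandard Python bindings are installed and the folder path (hypothetical here) holds one burst of similar shots, compress each photo on its own and then the whole group as a single stream.

```python
import glob
import zstandard  # pip install zstandard (assumed available)

photos = sorted(glob.glob("burst_2024-06-01/*.jpg"))  # hypothetical burst folder
blobs = [open(p, "rb").read() for p in photos]

cctx = zstandard.ZstdCompressor(level=19)

individual = sum(len(cctx.compress(b)) for b in blobs)  # each photo compressed on its own
archived = len(cctx.compress(b"".join(blobs)))          # whole group as one stream

print(f"original:   {sum(len(b) for b in blobs):,} bytes")
print(f"individual: {individual:,} bytes")
print(f"archived:   {archived:,} bytes")
```

Since JPEGs are already entropy-coded, don’t be surprised if the two numbers come out nearly identical; the test is cheap either way.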

1 point

I don’t think there’s anything commercially available that can do it.

However, as an experiment, you could:

  • Get a group of photos from a burst shot
  • Encode them as individual frames with a modern video codec, using e.g. VLC.
  • See what kind of file size you get with the resulting video output.
  • See what artifacts are introduced when you play with encoder settings.

You could probably script this kind of operation eventually, if you have software that can automatically identify and group images; a rough sketch of the encoding step follows below.
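As a sketch of that scripted version: this assumes ffmpeg is on the PATH (the list above mentions VLC, but ffmpeg is easier to drive from a script) and a hypothetical folder holding one burst. Note this is lossy re-encoding, so it only makes sense for shots you would otherwise keep at lower fidelity.

```python
import glob
import os
import subprocess

burst_glob = "burst_2024-06-01/*.jpg"  # hypothetical burst folder
original = sum(os.path.getsize(p) for p in glob.glob(burst_glob))

# Encode the burst as a 1 fps H.265 video; lower -crf = higher quality, bigger file.
subprocess.run(
    ["ffmpeg", "-y", "-framerate", "1", "-pattern_type", "glob",
     "-i", burst_glob, "-c:v", "libx265", "-crf", "20",
     "-pix_fmt", "yuv420p", "burst.mp4"],
    check=True,
)

print(f"JPEGs: {original:,} bytes  video: {os.path.getsize('burst.mp4'):,} bytes")
```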

13 points
Deleted by creator
2 points

The first thing I would do when writing such a paper would be to test current compression algorithms by creating a collage of the similar images and seeing how its compressed size compares to the total size of the individual images.
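A sketch of that probe, assuming Pillow is installed and the folder path is hypothetical: paste the similar shots into one wide collage, save it with the same codec and a comparable quality setting, and compare sizes.

```python
import glob
import os
from PIL import Image  # pip install Pillow (assumed available)

photos = sorted(glob.glob("burst_2024-06-01/*.jpg"))  # hypothetical group of similar shots
images = [Image.open(p) for p in photos]

# Paste the shots side by side into one wide collage.
width, height = sum(im.width for im in images), max(im.height for im in images)
collage = Image.new("RGB", (width, height))
x = 0
for im in images:
    collage.paste(im, (x, 0))
    x += im.width

collage.save("collage.jpg", quality=90)

print("individual:", sum(os.path.getsize(p) for p in photos), "bytes")
print("collage:   ", os.path.getsize("collage.jpg"), "bytes")
```

One caveat: JPEG encodes small blocks more or less independently, so it can’t exploit redundancy between the pasted images; a codec with larger spatial context might show more of a difference.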

1 point

Compressed length is already known to be a powerful metric for classification tasks, but it requires polynomial time to do the classification. As much as I hate to admit it, you’re better off using a neural network, because they run in linear time, or figuring out how to apply the kernel trick to the metric outlined in this paper.

A formal paper on using compression length as a measure of similarity: https://arxiv.org/pdf/cs/0111054

A blog post on this topic, applied to image classification: https://jakobs.dev/solving-mnist-with-gzip/
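For reference, the metric from that paper (normalized compression distance) is only a few lines with the standard library; the file names below are hypothetical. Keep in mind gzip’s 32 KB window is tiny next to a multi-megabyte JPEG, so on raw camera files the estimate degrades; the paper assumes a close-to-ideal compressor.

```python
import gzip

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: smaller means more similar."""
    cx, cy, cxy = len(gzip.compress(x)), len(gzip.compress(y)), len(gzip.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = open("IMG_0001.jpg", "rb").read()  # hypothetical near-duplicate pair
b = open("IMG_0002.jpg", "rb").read()
c = open("IMG_0999.jpg", "rb").read()  # hypothetical unrelated photo

print(ncd(a, b), ncd(a, c))  # expect the first distance to be the smaller one
```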

1 point

I was not talking about classification. What I was talking about was a simple probe of how well a collage of similar images compares in compressed size to the images compressed individually. The hypothesis is that a compression codec would compress images with a similar color distribution better in a spritesheet than if it encoded each image individually. I don’t know, the savings might be negligible, but I’d assume there is something to gain, at least for some compression codecs. I doubt doing deduplication after compression has much to gain.

I think you’re overthinking the classification task. These images are very similar and I think comparing the color distribution would be adequate. It would of course be interesting to compare the different methods :)
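A sketch of that color-distribution check, assuming Pillow and hypothetical file names: build a coarse normalized RGB histogram per photo and compare with histogram intersection (1.0 = identical distributions).

```python
from PIL import Image  # pip install Pillow (assumed available)

def color_signature(path: str, bins_per_channel: int = 8) -> list[float]:
    """Coarse, normalized RGB histogram as a cheap 'color distribution' fingerprint."""
    im = Image.open(path).convert("RGB").resize((256, 256))
    hist = im.histogram()  # 768 values: 256 bins each for R, G, B
    step = 256 // bins_per_channel
    coarse = []
    for channel in range(3):
        chan = hist[channel * 256:(channel + 1) * 256]
        coarse += [sum(chan[i:i + step]) for i in range(0, 256, step)]
    total = sum(coarse)
    return [c / total for c in coarse]

def intersection(p: list[float], q: list[float]) -> float:
    return sum(min(a, b) for a, b in zip(p, q))

# Hypothetical files: two burst shots and one unrelated photo.
print(intersection(color_signature("IMG_0001.jpg"), color_signature("IMG_0002.jpg")))
print(intersection(color_signature("IMG_0001.jpg"), color_signature("IMG_0999.jpg")))
```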

-5 points

The problem is that OP is asking for something to automatically make decisions for him. Computers don’t make decisions; they follow instructions.

If you have 10 similar images and want a script to delete the 9 you don’t want, how would it know what to delete and what to keep?

If it doesn’t matter, or if you’ve already chosen the one out of the set you want, just go delete the rest. Easy.

As far as identifying similar images goes, this is high-school-level programming at best with a CV model. You just run a pass through something like YOLO and have it output similarity confidences for a set of images. The problem is that you need a source image to compare against. If you’re running through thousands of files comprising dozens or hundreds of sets of similar images, you need a source for each comparison.
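For what it’s worth, the "similarity confidence against a source image" part doesn’t even need a full CV model; a perceptual hash is enough for near-duplicates (a different technique than YOLO, and the imagehash library, the file paths, and the threshold below are all assumptions).

```python
import glob
import imagehash  # pip install imagehash (assumed available)
from PIL import Image

# Hypothetical: compare one chosen "source" shot against the rest of a folder.
source = imagehash.phash(Image.open("IMG_0001.jpg"))

for path in sorted(glob.glob("burst_2024-06-01/*.jpg")):
    distance = imagehash.phash(Image.open(path)) - source  # Hamming distance, 0 = same hash
    verdict = "similar" if distance <= 8 else "different"  # threshold 8 is an arbitrary guess
    print(f"{path}: distance {distance} ({verdict})")
```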

5 points
Deleted by creator
-4 points

Using that as an example. Same premise.

1 point

Computers make decisions all the time. For example, how to route my packets from my instance to your instance. Classification functions are well understood in computer science in general and, while stochastic, can be constructed to be arbitrarily precise.

https://en.wikipedia.org/wiki/Probably_approximately_correct_learning?wprov=sfla1

Human facial detection has been at 99% accuracy since the 90s, and OP’s task is likely a lot easier, since we can exploit time and location proximity data and know in advance that 10 pictures taken of Alice or Bob at one single party are probably a lot less variant than 10 pictures taken in different contexts over many years.

What OP is asking to do isn’t at all impossible; I’m just not sure you’ll save any money on power and GPU time compared to buying another HDD.
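A sketch of the time-proximity part of that idea (the library path and 10-second window are assumptions, and file mtime stands in for the EXIF DateTimeOriginal/GPS tags a real pass would read): shots taken within a few seconds of each other get grouped as candidate near-duplicates.

```python
import glob
import os

WINDOW_SECONDS = 10  # assumption: burst shots land within 10 s of each other

# Hypothetical library path; sorted by modification time as a stand-in for EXIF capture time.
photos = sorted(glob.glob("library/**/*.jpg", recursive=True), key=os.path.getmtime)

groups, current, last = [], [], None
for path in photos:
    t = os.path.getmtime(path)
    if last is not None and t - last > WINDOW_SECONDS:
        groups.append(current)
        current = []
    current.append(path)
    last = t
if current:
    groups.append(current)

for group in groups:
    if len(group) > 1:
        print(f"candidate near-duplicate group ({len(group)} shots): {group}")
```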

-2 points

Everything you just described is instruction. Everything from an input path and desired result can be tracked and followed to a conclusory instruction. That is not decision making.

Again. Computers do not make decisions.

0 points

Definitely PhD.

It’s very much an ongoing and underexplored area of the field.

One of the biggest machine learning conferences is actually hosting a workshop on the relationship between compression and machine learning (because it’s very deep). https://neurips.cc/virtual/2024/workshop/84753

5 points

Not sure if a de-duplicating filesystem would help with that or not. It depends, I guess, on whether there are similarities between the similar images at the block level.

Maybe try setting up a small test ZFS pool, enabling dedup, adding some similar images, and then checking the dedup ratio? If that works, you can plan a more permanent ZFS (or other dedup-capable filesystem) setup to hold your images.
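Before building a real pool, you can get a rough preview of what block-level dedup would see with a few lines of Python: split each file into fixed-size records (128 KiB, matching ZFS’s default recordsize) and count byte-identical blocks across the set. The folder path is hypothetical.

```python
import glob
import hashlib

BLOCK = 128 * 1024  # ZFS default recordsize
seen, total = set(), 0

for path in glob.glob("burst_2024-06-01/*.jpg"):  # hypothetical group of similar shots
    data = open(path, "rb").read()
    for i in range(0, len(data), BLOCK):
        total += 1
        seen.add(hashlib.sha256(data[i:i + BLOCK]).hexdigest())

print(f"{total} blocks, {len(seen)} unique -> estimated dedup ratio {total / len(seen):.2f}x")
```

Because two separately encoded JPEGs differ in almost every byte, the ratio for near-duplicates will likely sit at 1.00x, which answers the block-level question cheaply.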

8 points
Deleted by creator
3 points

That’s what I was thinking, but wasn’t sure enough to say beyond “give it a shot and see”.

There might be some savings to be had by enabling compression, though it would depend on what format the images are in to start with. If they’re already in a compressed format, it would probably just be a waste of CPU to try compressing them further at the filesystem level.

9 points

> we can have 5~10 photos which are basically duplicates

> Have any of you guys handled a similar situation?

I decide which one is the best and then delete the others. Sometimes I keep 2, but that’s an exception. I do that as early as possible.

I don’t care about storage space at all (still many TB free), but keeping (near-)duplicates costs valuable time of my life, so I avoid it.

1 point
Deleted by creator
