cross-posted from: https://lemm.ee/post/44155947
The project is cool, but I am even more annoyed with articles that tell me what I think or want than I am with articles that use words like SLAMMED to make a mountain out of a molehill.
There are so many better ways to write that headline with the same sentiment. For example: “An open source mirrorless camera is going to be a big hit.”
My immediate, visceral reaction to that headline was, “no I wouldn’t” before I even opened it. I opened it anyway because it sounded cool, but don’t tell me what I would want to use.
Watch @FlyingSquid@lemmy.world destroy hackaday!
I actually like mirrors.
This is super interesting, and a project I’m gonna keep an eye on. Not least of all because I’ve got a good selection of E-mount lenses.
One thing that’s gonna be a struggle is all the specific lens corrections in photo software obviously will not be present for this. I wonder if the body behaves optically similarly enough to an existing Sony camera to be able to reuse those profiles.
If the sensor is the same size, the lens corrections should be identical. Now, whether it communicates focal length info into the metadata (on a zoom lens), or any data for that matter, that's a different issue.
I believe focal length & aperture EXIF metadata do factor into modern lens correction profiles.
It’s worth highlighting that the profiles are typically based on the combination of a lens and a body: one lens used on two different camera bodies would result in two different profiles being used.
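To make that concrete, here's a hedged sketch of how a raw processor *might* key correction profiles: the (body, lens) pair selects a profile, and the focal length from EXIF picks or interpolates the distortion coefficient within it. All names and coefficient values below are invented for illustration; real profiles (Adobe LCP, lensfun, etc.) store much richer data.

```python
# Hypothetical profile store: the same lens on two bodies gets two
# different profiles, as noted above. Values are made up.
profiles = {
    ("body-A", "28-70mm"): {28: -0.030, 50: -0.010, 70: 0.005},
    ("body-B", "28-70mm"): {28: -0.034, 50: -0.012, 70: 0.004},
}

def pick_distortion_coeff(profiles, body, lens, focal_length_mm):
    """Linearly interpolate a radial distortion coefficient (k1)
    between the focal lengths calibrated in the profile."""
    profile = profiles[(body, lens)]
    points = sorted(profile.items())  # [(focal_mm, k1), ...]
    # Clamp outside the calibrated range.
    if focal_length_mm <= points[0][0]:
        return points[0][1]
    if focal_length_mm >= points[-1][0]:
        return points[-1][1]
    for (f0, k0), (f1, k1) in zip(points, points[1:]):
        if f0 <= focal_length_mm <= f1:
            t = (focal_length_mm - f0) / (f1 - f0)
            return k0 + t * (k1 - k0)

# 39mm sits halfway between the 28mm and 50mm calibration points.
print(pick_distortion_coeff(profiles, "body-A", "28-70mm", 39))
```

Without focal length in the metadata (the zoom-lens case above), the software has no idea which row of the profile to use, which is why that data matters.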
I’d think they’d handle this with calibration. It doesn’t need to be as sexy as a commercial process; it just needs to be a reasonably easy way to fix it.
Something like: when you get a new lens, you aim it at a laser diffraction pattern on a clean wall.
Now you don’t worry about minor differences in bodies or lenses.
It would be super cheap to make a laser diffraction grid. You could map the lens deformation because you know the lines on the grid are straight. This would be solely for mapping the properties of the lens / mount and how to handle deformation profiles. Once you dial in the lens you probably wouldn’t need to run it again, assuming it can ID the lens when you mount it.
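The core math of that DIY calibration is simple enough to sketch. Assuming a common one-parameter radial model, r_d = r_u · (1 + k1 · r_u²), and that the true grid positions are known (because the projected lines are straight), recovering k1 is a one-unknown least-squares fit. This is a toy illustration, not a full calibration pipeline; real tools (e.g. OpenCV's calibrateCamera) fit several coefficients plus tangential terms.

```python
import numpy as np

def estimate_k1(true_pts, observed_pts):
    """Least-squares fit of the radial coefficient k1 from matched
    true/observed points (coordinates centered on the optical axis).
    From r_d - r_u = k1 * r_u**3, minimize the squared residual."""
    r_u = np.linalg.norm(true_pts, axis=1)
    r_d = np.linalg.norm(observed_pts, axis=1)
    return float(np.sum((r_d - r_u) * r_u**3) / np.sum(r_u**6))

# Simulate a 5x5 grid on the wall and barrel-distort it with a known k1.
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
grid = np.column_stack([xs.ravel(), ys.ravel()])
k1_true = -0.08                               # barrel distortion
r2 = np.sum(grid**2, axis=1, keepdims=True)
distorted = grid * (1 + k1_true * r2)         # what the sensor "sees"

print(round(estimate_k1(grid, distorted), 4))
```

With noiseless synthetic points the fit recovers k1 exactly; with real detected grid corners you'd get a least-squares estimate instead, but the principle is the same: straight lines in, distortion coefficients out.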
I would say you could use red, green, and blue lasers and look at convergence, but I’m not sure that would actually be off in any decent hardware.
Edit: you should note that the iPhone already does something like this for Face ID. It’s not really that much of a stretch to make it go the other way.
With camera sensors being so good, the major differences will be autofocus capabilities.
Imagine an open source autofocus algorithm that people can train locally on their own photos so that it adapts to your shooting style.
Does this sensor have AF pixels? Otherwise it’ll be hard to get good AF unless you put a traditional AF module in. Contrast based AF is always going to be terrible.
Contrast based AF can be kind of okay if it works. I have an old Sony camera with contrast AF and it’s fast enough depending on the lens. Of course in dark or low contrast scenarios it sucks, and it can’t detect which way it has to focus, so it likes to hunt for focus if it can’t find any.
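That hunting behavior falls straight out of how contrast-detect AF works: the camera can only measure a sharpness score at its current focus position, so it steps, compares, and reverses when the score drops. Here's a toy hill-climbing sketch with a synthetic sharpness curve; a real implementation would compute something like gradient energy or variance of the Laplacian over the focus region, and all parameters here are made up.

```python
import numpy as np

def sharpness(pos, peak=42.0, width=15.0, floor=0.02):
    """Synthetic contrast metric: highest at the in-focus position.
    In low light or on a flat subject this curve goes nearly flat,
    which is exactly when contrast AF starts to hunt."""
    return floor + np.exp(-((pos - peak) / width) ** 2)

def contrast_af(start, step=4.0, min_step=0.5):
    """Hill-climb the focus position using only local measurements."""
    pos, score = start, sharpness(start)
    direction = 1.0
    while step >= min_step:
        nxt = pos + direction * step
        nxt_score = sharpness(nxt)
        if nxt_score > score:
            pos, score = nxt, nxt_score   # keep climbing
        else:
            direction = -direction        # overshot the peak: reverse...
            step /= 2                     # ...and refine (the "hunt")
    return pos

print(round(contrast_af(start=10.0), 1))
```

Note the algorithm has no way to know which direction is correct until it has taken a wrong step, which matches the commenter's point: phase-detect AF pixels give you direction and magnitude in one measurement, contrast AF has to discover them by trial and error.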
I get why ppl would use something other than GitHub, but why do they have to torture me with GitLab?
It has light mode by default and a UI that I find really unintuitive, but what really bothers me is that ppl go from one for-profit git host to another for-profit git host when things like Codeberg exist. With GitHub you could at least argue that you can turn your hobby project into a job, since it has a huge userbase and stuff like GitHub Sponsors, but what does GitLab offer you?
TL;DR: It’s not Codeberg
GitLab is a security nightmare. They have no conception of how to write secure code and they don’t care to learn.
I was looking for a link to the previous CVEs I was aware of and there is yet another one that is new to me: https://thehackernews.com/2024/09/urgent-gitlab-patches-critical-flaw.html
This is not a serious service to be hosting source code on.