Hugo Hacker News

Using a laser to blast away a Bayer filter array from a CCD

barbazoo 2021-08-18 17:42:28 +0000 UTC [ - ]

Wow, that's interesting. I didn't know that's how CCDs worked. If I understand correctly, 1/3 of the "pixels" captured red, 1/3 green, and 1/3 blue. Does that mean the sensor now has 3x the resolution it had before?

_Microft 2021-08-18 17:43:37 +0000 UTC [ - ]

Often it is even 50% green, 25% red and 25% blue pixels. There are different patterns, though. The "megapixel" number quoted for cameras counts subpixels individually; that is, a camera labelled "10MP" does not have 10 million pixels of each color but 10 million subpixels in total.

https://en.wikipedia.org/wiki/Bayer_filter
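
To make the counting concrete, here's a tiny Python sketch (my own illustration, not from the article) tiling the classic 2x2 RGGB cell:

    import numpy as np

    # The 2x2 RGGB Bayer tile, repeated across the sensor.
    tile = np.array([["R", "G"],
                     ["G", "B"]])
    mosaic = np.tile(tile, (4, 4))  # an 8x8 corner of the sensor

    for color in "RGB":
        print(color, f"{(mosaic == color).mean():.0%}")  # R 25%, G 50%, B 25%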

If you have a device that can output RAWs, you can look at a RAW image using the FOSS photo development program "Darktable". Choose "photosite color" as the "demosaic" filter to show the individual color channel values (and thereby the Bayer pattern of your camera).

But yes, after removing the filter, you have three times the number of pixels, but you lose the color information.

dr_zoidberg 2021-08-18 18:38:35 +0000 UTC [ - ]

Demosaicing algorithms are very good at restoring the resolution "lost" to the BFA. They can introduce some artifacts (zipper effect, "labyrinth", fringe color, to name a few) but in general, sharpness isn't lost as much as people imagine.
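
For a baseline, the simplest classical method is plain bilinear interpolation. A minimal numpy/scipy sketch (my own toy version, assuming an RGGB layout; real implementations add edge-aware tricks precisely to avoid the zipper and fringe artifacts mentioned above):

    import numpy as np
    from scipy.ndimage import convolve

    # Bilinear demosaic of an RGGB mosaic: fill in each color plane by
    # averaging the nearest available samples of that color.
    def demosaic_bilinear(raw):
        h, w = raw.shape
        yy, xx = np.mgrid[0:h, 0:w]
        masks = {
            "R": (yy % 2 == 0) & (xx % 2 == 0),
            "G": (yy % 2) != (xx % 2),
            "B": (yy % 2 == 1) & (xx % 2 == 1),
        }
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # 1/4-density planes
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0  # 1/2-density plane
        out = np.zeros((h, w, 3))
        for i, (name, k) in enumerate([("R", k_rb), ("G", k_g), ("B", k_rb)]):
            plane = np.where(masks[name], raw, 0.0)  # zero out the unsampled sites
            out[..., i] = convolve(plane, k, mode="mirror")
        return out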

Nowadays the classical algorithms are being replaced by convnets that are trained on BFA/image pairs and can get very good results -- at the cost of placing a convnet in the middle (so much higher computational cost, which can be offloaded to a GPU/AI accelerator if available).

If you want to see what a "pixel perfect" camera gives you, there are the Sigma cameras with Foveon sensors[0], or you can check out cameras with a sensor-shift superresolution approach (some pro Olympus and Hasselblad models have this feature). Sensor-shift SR works best on static scenes, because it takes several images which are later combined into a single picture, and if there's movement between the images it may introduce artifacts.

[0] which capture full color data for every pixel, as they use silicon depth to filter wavelength

kragen 2021-08-19 05:28:01 +0000 UTC [ - ]

> at the cost of placing a convnet in the middle (so much higher computational cost, which can be offloaded to a GPU/AI accelerator if available)

It also makes it harder to undo the effects of the demosaicking algorithm, which may be important if you're doing things like subpixel superresolution.

passivate 2021-08-18 20:59:50 +0000 UTC [ - ]

Foveon sounds great in theory, but it doesn't deliver IMHO. It can achieve parity in terms of pixel level sharpness and color at the lower ISOs, but picture quality breaks down very quickly even at moderate ISOs.

https://tinyurl.com/yyrndzkk

https://tinyurl.com/t9nnadc

gsich 2021-08-18 22:00:28 +0000 UTC [ - ]

OT: why a URL shortener? This is not Twitter with a length restriction.

passivate 2021-08-18 22:05:08 +0000 UTC [ - ]

I've run into issues with forum s/w URL sanitizers mangling URLs from DPReview. Maybe I should have just tested it. Here we go!

https://www.dpreview.com/reviews/image-comparison/fullscreen...

gsich 2021-08-18 22:27:54 +0000 UTC [ - ]

Thanks!

karmakaze 2021-08-18 19:47:45 +0000 UTC [ - ]

I always wondered if this was a good ratio. I get that green usually has the strongest signal and thus better low-light performance. For bright shots, I find that preserving higher resolution in blue results in higher perceptual resolution of the final image. You can simulate something like it by using an extreme 'night mode' more-red/no-blue display mode and watching a 4k video.

dr_zoidberg 2021-08-18 20:38:09 +0000 UTC [ - ]

Green was chosen because it's what the human eye is most sensitive to. Look at Fuji's X-Trans[0], and there are also RGBW arrays that prioritize dynamic range.

All in all, the BFA is "good enough" most of the time. For the use cases where it isn't, you're either:

* Budget constrained, and can't really afford not using a BFA

* Able to (pay for and) use either a color wheel in front of your sensor, or a prism + triple sensor

* Willing to bite the bullet and go with a "strange" color array. You'll probably need to work on the software side of demosaicing to get proper support and fix any artifacts.

[0] even more green! 20/36 photosites are green, 8 red, 8 blue

[1] with W being white, meaning no color filter (a "panchromatic" cell). In theory this helps in dim light conditions.

NathanielK 2021-08-19 09:15:04 +0000 UTC [ - ]

Good info, but your citations are a bit jumbled up Doctor.

NathanielK 2021-08-19 09:13:09 +0000 UTC [ - ]

> I always wondered if this was a good ratio. I get that green usually has the strongest signal and thus better low-light performance.

More importantly, green is close to what your eyes perceive as luminance. That matters because you can perceive a lot more luma detail than chroma detail, which is why things like 4:2:2 sampling work.

If you read Bayer's original patent, he proposed using Y Cr Cb (luminance, colour part red, colour part blue) instead of GBR filters[0]. This would be optimal from a computer science perspective. Sadly it doesn't work physically: sensing negative-blue and negative-red can't really be done with a simple filter.

[0] https://patents.google.com/patent/US3971065
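
To put a number on "close to luminance": the BT.601 luma weights used in video (not the patent's exact coefficients, just the standard definition) assign nearly 60% of Y to green:

    import numpy as np

    # BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B,
    # so green alone carries ~59% of the luminance signal.
    w = np.array([0.299, 0.587, 0.114])

    def luma(rgb):
        # rgb: (..., 3) array in R, G, B order
        return rgb @ w

    print(luma(np.array([0.0, 1.0, 0.0])))  # pure green -> 0.587
    print(luma(np.array([1.0, 0.0, 0.0])))  # pure red   -> 0.299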

falcrist 2021-08-18 18:18:05 +0000 UTC [ - ]

Since the "pixels" are either formed at every intersection of 4 photosites (overlapping each other) or by interpolating data for each color to include the "missing" photosites (which is effectively the same), the megapixel count should fairly accurately represent both the number of photosites and the number of pixels in the output image.

I'm not exactly sure how the edge pixels are treated, but the difference in number between pixels and photosites should be on the order of a few thousand at most.

ansible 2021-08-18 18:34:39 +0000 UTC [ - ]

> I'm not exactly sure how the edge pixels are treated ...

It is quite common to have more than the "nominal" number of pixels in a sensor array. So there are extra pixels for the edges.

falcrist 2021-08-18 18:43:26 +0000 UTC [ - ]

Ah yes. I had forgotten about this. I believe there are also extra pixels at the edge of some sensors that are unexposed, and just used for calibration purposes.

barbazoo 2021-08-18 18:25:10 +0000 UTC [ - ]

Are you saying that one photosite might be included in more than one pixel and therefore the overall pixel count is roughly equal to the number of photosites?

falcrist 2021-08-18 18:33:09 +0000 UTC [ - ]

I'm saying that each photosite definitely is included in more than one output pixel, and I'm also saying that the number of output pixels should be about the same as the number of photosites.

This is obviously capturing less information than if you had a completely separate set of photosites for each pixel, but the megapixel count of cameras is nevertheless accurate.

Modern cameras sometimes come with a "pixel shift" function, which uses the image stabilization system to take 4 images, each shifted one photosite from the others, to construct an image where each pixel contains the information of 4 independent photosites with no sharing between pixels.

The resolution of the final image is the same as a normal image, but the result is much clearer, and far less likely to suffer from blue/red moire.
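
A toy numpy sketch of how those four exposures combine (my own model, not any vendor's pipeline: assume an RGGB mosaic, and that frames[(dy, dx)][y, x] sees the scene through the filter at ((y + dy) % 2, (x + dx) % 2)):

    import numpy as np

    def combine_pixel_shift(frames):
        # frames: dict mapping each shift (dy, dx) in {0,1}x{0,1}
        # to an HxW raw mosaic taken at that sensor position.
        h, w = frames[(0, 0)].shape
        yy, xx = np.mgrid[0:h, 0:w]
        out = np.zeros((h, w, 3))
        for (dy, dx), frame in frames.items():
            fy, fx = (yy + dy) % 2, (xx + dx) % 2
            r = (fy == 0) & (fx == 0)  # this shift put the red filter here
            b = (fy == 1) & (fx == 1)  # ... the blue filter here
            g = fy != fx               # ... one of the two green filters here
            out[..., 0][r] = frame[r]
            out[..., 2][b] = frame[b]
            out[..., 1][g] += frame[g] / 2  # each pixel gets two green samples
        return out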

AceJohnny2 2021-08-18 21:01:09 +0000 UTC [ - ]

Bayer filters are 50% green, 25% red, 25% blue for consumer devices.

The reason is that green actually captures much more of the luminance information, and our eyes have a much better luminance resolution than color resolution.

Tangentially, it's why so-called YUV 420 (chroma subsampling) is so effective: it effectively encodes Y (luminance) data for every pixel in a block of 4, but U/V (chrominance) only once for every pair (or quad, someone correct me) of pixels.

There are examples online of pictures[1] with their luminance resolution decreased (you can immediately see the pixelation) and with their chrominance resolution decreased (you can barely tell the difference).

[1] https://en.wikipedia.org/wiki/Chroma_subsampling#/media/File...

AceJohnny2 2021-08-18 21:05:54 +0000 UTC [ - ]

Extra fun fact: Bayer filters were developed by Bryce Bayer at Eastman Kodak, one of the first companies to research digital sensors.

Despite such a head start, Kodak went on to completely fail the analog-to-digital camera transition. A prime example of the Disruption Dilemma.

labcomputer 2021-08-18 23:41:12 +0000 UTC [ - ]

Because their patents ran out before all the complementary technologies were economically feasible (and CMOS sensors were invented by someone else by that point anyway). It's actually a terrible example of the disruption dilemma.

Edit: And there was really no feasible transition path for them anyway. The business depended on skimming a little bit off every photo taken. The main selling point of digital cameras was that you could take unlimited photos at no extra cost.

Customers aren't stupid. Even with patents, if you make the camera more expensive to account for the lost revenue on film and processing chemicals, people aren't going to buy it.

colonwqbang 2021-08-18 22:15:08 +0000 UTC [ - ]

YUV/YCbCr 420 means that there is one set of chroma samples (Cb+Cr) for each 2x2 block of luma samples (pixels).

Often, the chroma samples fall on pixels in even rows and even columns, so pixels in odd rows or columns have to borrow (interpolate) their chroma values from neighbouring pixels.
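
A toy sketch of that layout (my own illustration, assuming even image dimensions and nearest-neighbour chroma; real codecs use better resampling filters):

    import numpy as np

    def subsample_420(y, cb, cr):
        # Full-resolution luma; one chroma sample per 2x2 block,
        # taken from the even-row/even-column pixel.
        return y, cb[::2, ::2], cr[::2, ::2]

    def upsample_420(y, cb_s, cr_s):
        # Odd-row/odd-column pixels borrow their block's chroma sample.
        rep = lambda c: c.repeat(2, axis=0).repeat(2, axis=1)
        return y, rep(cb_s), rep(cr_s)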

barbegal 2021-08-18 19:11:46 +0000 UTC [ - ]

The reality is that a lot of sensors provide greater resolution than the lens can resolve, so the actual spatial resolution barely changes.

pjc50 2021-08-18 17:49:46 +0000 UTC [ - ]

It always had the same resolution; it's just that beforehand you had to process it down by 3x to get a colour image. What it has now is more range, especially in the non-visible spectrum.

ipsum2 2021-08-18 19:44:38 +0000 UTC [ - ]

This isn't specific to CCDs; CMOS sensors also have Bayer filters. Actually, fancy cameras with CCDs skip Bayer filters altogether by using prisms to split light: https://en.wikipedia.org/wiki/Three-CCD_camera

ulfw 2021-08-19 05:39:14 +0000 UTC [ - ]

There isn't a single one on the market, and the only one worth mentioning is from 1995, with 0.38MP per sensor.

https://en.wikipedia.org/wiki/Minolta_RD-175

ruined 2021-08-18 17:14:49 +0000 UTC [ - ]

here's a direct link to the original youtube video explaining and demonstrating the process and results

https://youtu.be/y39UKU7niRE

this is very exciting and i wonder if a similar process could be applied to consumer DSLR/MILC cameras. would love to shoot some high quality video in uv/ir

opencl 2021-08-18 17:57:55 +0000 UTC [ - ]

It's certainly possible; there's already a company[1] that sells cameras with this modification performed. There are also a few cameras that come from the factory with no Bayer filter, like the Leica Monochrom, but all the ones I know of are very expensive.

[1] https://maxmax.com/shopper/category/9241-monochrome-cameras

showerst 2021-08-18 19:06:50 +0000 UTC [ - ]

If you're into lasers at all, Les' lab is a great channel.

It's really an example of the best part of youtube, just a dude who knows some stuff explaining how things work and showing off shop-made projects.

userbinator 2021-08-19 03:58:04 +0000 UTC [ - ]

This reminds me of how the Chinese figured out that a laser could burn through the glass and ablate the glue holding the backs on the newer iPhones, allowing them to be replaced far more easily.

zokier 2021-08-18 17:49:53 +0000 UTC [ - ]

While it's a neat technique, you could also just buy a monochrome camera. The astrophotography community in particular seems to like them, so that might be a good keyword to search for.

HeavenFox 2021-08-18 18:19:13 +0000 UTC [ - ]

True, astrophotographers like monochrome cameras because you can prioritize gathering brightness signal over color signal, so you get a more detailed photo; you can also use narrowband filters and image under a full moon or in inner cities.

However, astrophotographers also complain about the price premium of monochrome cameras. Given the same sensor, the monochrome version is typically 20% - 30% more expensive than the color version, which is counterintuitive - you don't need to put the Bayer filter on! So if we can perfect the technique to debayer a color sensor, the astrophotography community would be elated.

ansible 2021-08-18 18:40:31 +0000 UTC [ - ]

> the monochrome version is typically 20% - 30% more expensive than the color version, which is counterintuitive...

The market for monochrome sensors is very tiny compared to the rest of the commercial products. Every phone now has 2 or more cameras on it, and there are billions of those.

Any changes to the manufacturing steps means more setup and effort. Different test procedures, quality control, documentation, etc.. That is all overhead, to be absorbed by a relatively small production volume.

I'm surprised it is only a 30% premium, I'd have expected higher actually.

spiantino 2021-08-18 20:00:54 +0000 UTC [ - ]

I'm an avid astrophotographer, and the prices for cooled mono and cooled color cameras are the same. If you compare a dedicated, cooled astro camera to a consumer DSLR then yes, they are more expensive. But apples to apples they are exactly the same price. Actually, looking right now, the mono version is a bit cheaper:

https://optcorp.com/products/zwo-asi6200mc-p - color $3999

https://optcorp.com/products/zwo-asi6200mm-p - mono $3799

2bitencryption 2021-08-18 21:07:45 +0000 UTC [ - ]

> because you can prioritize gathering brightness signal over color signal, so you get more detailed photo

I wonder how long until phone cameras are purely monochrome, and apply ML to add the "correct" color in post-processing.

Actually, wasn't there some phone a few years ago with one high-res black-and-white sensor and one low-res color sensor, which combined them through some trickery to produce a sharp color image?