An optimal imaging sensor for astrophotography? Thinking about what might make AP better and easier... · Generic equipment discussions · Jon Rista

jrista 8.59
Hello ABin! Still unable to image (weather and clouds, or just poor timing on the nights it does clear up), I've been pondering something:

What would the optimal astrophotography image sensor look like?

Thus far, most astrophotography cameras are built around sensors that were originally designed for other purposes. CCDs are often designed for entirely different applications and repurposed for astro (although there are some exceptions). sCMOS sensors are usually designed primarily for scientific or medical use, and often have characteristics that are quite detrimental in astro (e.g. high dark current). Most of the CMOS sensors in modern consumer-grade astro cameras were designed for terrestrial photography with DSLRs or mirrorless cameras, for security cameras, for medical imaging, etc.

So while some sensors are very good, such as the IMX455 or IMX533, they are not necessarily optimal for our use case. What WOULD be optimal? I've had a few thoughts, but I'm curious what other ideas might be combined to produce a sensor that was truly optimal for astrophotography purposes. Here are some of my initial thoughts:

A. Non-destructive reads!
    - Consider a sensor that allowed non-destructive reads, maybe even of just a portion of the sensor (i.e. the center, a corner, or part of one side), so that the primary IMAGING sensor could ALSO be used to GUIDE. The non-destructive reads could potentially be used for guiding purposes (see the sketch after this list).
    - Also consider a sensor that allowed full-field non-destructive reads...say, just to preview the image and see if all is going well!
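
For what it's worth, here is a minimal sketch of what a guide loop built on non-destructive region reads might look like. Everything in it is hypothetical: the camera object, the read_roi_nondestructive() and wait() calls, and the ROI handling are stand-ins for whatever a real driver would expose. The point is just that differencing successive non-destructive reads gives a clean per-interval guide frame.

import numpy as np

def centroid(img):
    # Intensity-weighted centroid of a background-subtracted guide patch.
    img = np.clip(img - np.median(img), 0, None)
    total = img.sum()
    if total == 0:
        return None
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

def guide_loop(camera, roi, interval_s, n_iters):
    # Difference successive non-destructive reads so each guide frame contains
    # only the charge accumulated since the previous read.
    previous = camera.read_roi_nondestructive(roi).astype(float)
    for _ in range(n_iters):
        camera.wait(interval_s)
        current = camera.read_roi_nondestructive(roi).astype(float)
        guide_frame = current - previous
        previous = current
        c = centroid(guide_frame)
        if c is not None:
            yield c  # feed to the mount as a guide correction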

B. Split pixels in clustered groups.
    - Successive non-destructive reads for guiding might not be optimal, as the signal would differ from read to read. That could shift the centroid unnecessarily. It might work, and there might be ways to use a dynamic gain setting or dynamically adjust the exposures to normalize them and minimize such effects, but it could still be less than optimal.
    - Split pixels (like the dual-pixel autofocus used in many mirrorless cameras that cannot use a separate dedicated AF sensor), but much more densely concentrated (and perhaps in several regions of the sensor: center, left, right, top, bottom and the four corners) so that every pixel in a region was split, might allow for more consistent partial reads for guiding purposes. With each "guide" read being a complete read of those sub-pixels, you wouldn't have the problem of the signal changing from one non-destructive read to the next. Those reads could then be recorded (or perhaps successively combined in some kind of pixel-backing memory) so that the full pixel charge could still be applied during the final readout. This would allow guiding without affecting the total signal for the final image (a toy model of the idea follows this list).
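
Here is a toy numerical model of that split-pixel idea, just to make the bookkeeping concrete. The 50/50 split, the array size, and the flux values are all made-up assumptions; the point is that banking each destructive guide-tap read into a per-pixel accumulator preserves the full signal for the final readout.

import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)                      # guide region, in pixels (assumed)
guide_fraction = 0.5                  # share of each pixel feeding the guide tap (assumed)
flux = rng.uniform(0.5, 5.0, shape)   # mean photons/pixel/second (fake star field)

accumulated_guide = np.zeros(shape)   # "backing memory" for banked guide charge
imaging_half = np.zeros(shape)        # charge left untouched in the imaging tap

for _ in range(60):                   # sixty one-second guide intervals
    photons = rng.poisson(flux)
    guide_read = photons * guide_fraction        # destructive read of the guide tap
    imaging_half += photons * (1 - guide_fraction)
    accumulated_guide += guide_read              # bank the charge so nothing is lost
    # guide_read alone is the consistent per-interval frame used for centroiding

final_frame = imaging_half + accumulated_guide   # full signal recovered at readout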

C. Voltage Binning.
    - While CMOS sensors don't support the kind of binning that CCDs do, where charge shifting lets many CCDs bin arbitrarily large groups of rows (or columns) of pixels (granted, there are limits, as binning enough pixels can eventually saturate the output register or output buffer), there are other forms of binning. You can in fact do small amounts of charge binning in CMOS sensors that support it: most CMOS sensors use a 4-shared pixel architecture, and adding a backing memory in the form of, you guessed it, a CCD (!) for each group can allow those four pixels (2x2 groups) to be binned. But this requires a much more complex readout architecture.
    - A simpler approach is voltage binning, where the output voltages of the binned pixels are combined. There are different ways of doing this...IMO combining the voltages after conversion but before amplification is better, but from what I've read it's more complex (doable with BSI sensors, where there is more room for the necessary transistors and wiring). A rough comparison of hardware binning versus binning in software is sketched below.
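
Here is the back-of-envelope arithmetic behind that, treating hardware charge/voltage binning as incurring read noise once per 2x2 group (the idealized case) and software binning as incurring it once per pixel. The signal and read-noise numbers are assumptions, not any particular sensor.

import math

signal_per_pixel = 10.0   # electrons per pixel (assumed faint signal)
read_noise = 1.5          # electrons RMS per read (assumed)
n = 4                     # pixels in a 2x2 bin

binned_signal = n * signal_per_pixel

# Hardware binning: charge (or voltage) combined before a single conversion.
noise_hw = math.sqrt(binned_signal + read_noise**2)

# Software binning: four independent reads, read noise adds in quadrature.
noise_sw = math.sqrt(binned_signal + n * read_noise**2)

print(f"hardware bin SNR: {binned_signal / noise_hw:.2f}")
print(f"software bin SNR: {binned_signal / noise_sw:.2f}")

The difference only matters when the data are read-noise limited (faint narrowband, short subs); once sky background dominates, the two converge.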

D. Continuous read? (Just theory-mongering)
    - What if we didn't need to do a single read at the end of an exposure? What if, instead, we could do "continuous" reads and accumulate the result? There is a concept for a quantum-film type sensor where "jots", dynamic groups of insanely small pixels (1 micron or less), can be "activated" more like silver halide grains in classic film, but with the ability for each jot to subdivide and become finer-grained as more photons arrive at the sensor, allowing for, in a sense, a dynamic signal and dynamic resolution. Quantum-film sensors don't really do a normal readout; they "read out" progressively and are in essence photon-counting, so there is effectively no read noise. You count the active pixels in each active jot and sum the actives over time. Very interesting concept, but still very theoretical. However, perhaps some kind of periodic read, say every second, could be used, with the resulting signal combined with previous reads, progressively accumulating the signal over time?
    - This would require sensors with effectively zero read noise, otherwise the accumulation of read noise from each progressive read would overwhelm the signal. There are potentially ways to achieve this. The quantum-film sensor effectively does this, as it's not really reading but counting. That said, there are sensors with fractional read noise (a small fraction of an electron per read), and if you are able to acquire many photons per read per pixel (background sky level), then that might work (see the rough numbers after this list).
    - Photon counting is also an interesting concept, as it effectively allows for no read noise. This might not matter in light-polluted zones, but under truly dark skies, say at a permanent dark-site observatory, such a sensor could allow for better images. Especially narrowband images, or any imaging of exceptionally faint detail (IFN?)
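
To put rough numbers on the read-noise concern with per-second accumulation, here is a quick calculation. The sensor and sky figures (0.15 e- read noise, 0.5 e-/pixel/second background) are assumptions chosen to represent a "fractional read noise" sensor under dark skies.

import math

exposure_s = 300
read_noise = 0.15          # e- RMS per read (assumed fractional-read-noise sensor)
sky_rate = 0.5             # e-/pixel/second background (assumed)

signal = sky_rate * exposure_s

noise_single = math.sqrt(signal + read_noise**2)               # one read at the end
noise_accum = math.sqrt(signal + exposure_s * read_noise**2)   # one read per second

print(f"single-read SNR:      {signal / noise_single:.1f}")
print(f"per-second-reads SNR: {signal / noise_accum:.1f}")

With ~0.15 e- per read the penalty is only a few percent; with ~1.5 e- per read the accumulated read noise would dominate, which is why this really does need near-zero-noise reads.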


So, any other ideas? What have you thought of that might make for the IDEAL, optimal imaging sensor for astrophotography?
jhayes_tucson 22.61
Jon,
A number of years ago, I reported that, at SPIE, Spectral Instruments was showing a sensor designed for astronomical imaging that could do non-destructive reads.  As I recall, the problem with that scheme was the amount of read noise, but I don't recall the numbers.  You might be able to find it on their website.  They make sensors specifically for astronomical imaging.  If you can give them a large enough order, they'll work with you to make whatever you want.

The last time that I was in Chile, I got a nice tour of the ATLAS telescope (also known as the KYAGB telescope *) from Brian Stadler (Associate Scientist with the Vera C. Rubin Observatory).  He mentioned that SI made the custom sensors for that program and that they had to submit an order for around 100 sensors to go along with the custom cameras.  I personally thought this was crazy.  They are using CCD sensors cooled to -100C and they are locked in.  It doesn't matter how the world of camera technology advances.  They are now frozen in time by that short-sighted purchase decision, but that's often how research programs work.  They write a proposal and get the money to move the technology forward somewhat, without regard to how fast the rest of the world evolves.


John


*    Kiss Your Ass Goodbye
astroswell 0.00
I'd vote for a noiseless sensor. Stacking is such BS in my view.
jrista 8.59
John Hayes:
Jon,
A number of years ago, I reported that, at SPIE, Spectral Instruments was showing a sensor designed for astronomical imaging that could do non-destructive reads.  As I recall, the problem with that scheme was the amount of read noise, but I don't recall the numbers.  You might be able to find it on their website.  They make sensors specifically for astronomical imaging.  If you can give them a large enough order, they'll work with you to make whatever you want.

The last time that I was in Chile, I got a nice tour of the ATLAS telescope (also known as the KYAGB telescope *) from Brian Stadler (Associate Scientist with the Vera C. Rubin Observatory).  He mentioned that SI made the custom sensors for that program and that they had to submit an order for around 100 sensors to go along with the custom cameras.  I personally thought this was crazy.  They are using CCD sensors cooled to -100C and they are locked in.  It doesn't matter how the world of camera technology advances.  They are now frozen in time by that short-sighted purchase decision, but that's often how research programs work.  They write a proposal and get the money to move the technology forward somewhat, without regard to how fast the rest of the world evolves.


John


*    Kiss Your Ass Goodbye

Yeah, for regular imaging frames, read noise would be a problem with non-destructive reads. My thought, though, was more for guiding with the same sensor. Imagine the benefits of being able to guide with on-axis, center-field stars that were optimal, without needing to waste image train space for an OAG or anything like that. ;) I don't know if some kind of non-destructive read approach, or some kind of split pixel approach, or some combination of both would be best for that, but that is the general idea: Guiding and imaging with the same camera. 

There is that quantum film sensor concept. It is an interesting concept, and the ability to count photons and effectively have no read noise is intriguing. The dynamic pixel size, though, and the inconsistent shape in pixels, makes me wonder how viable it might be for astrophotography. If it really is noiseless and photon counting, though, and if the image data produced could still be registered and stacked, that could be really interesting in the long run. The technology has been in development for over 15 years now, though, and I'm honestly starting to wonder if it will ever see the light of day.

It is pretty sad about the ATLAS project... Being locked into old technology is really a bummer. I really wonder how the professional astronomy community could help spur and even fund advancement of sensor technology to better suit the needs of astronomy and astrophotography. There are other things that you might be able to integrate into a modern camera design that are well out of the normal parameters of a "camera" that could be really helpful. For example, satellite trails...with all that Elon Musk and some other organizations are doing to pollute the crap out of our skies with thousands (and eventually tens of thousands!!!) of additional satellites in low earth orbits, trailing (including a horrible form of GRID TRAILS) could become extremely problematic. It would be interesting if a built in hardware form of AI could be trained to detect this kind of issue during exposure, and reject unwanted photons (say in a photon counting continuous read type sensor). I dunno, there may be other things that could be integrated into modern cameras as well... Not sure what's best there, though...let the camera deal with unwanted patterned signals, or deal with them in post? Classic rejection algorithms can get overwhelmed with repeated criss-crossing satellite trails... Maybe ML/AI algorithms could better detect and reject outlier pixels when they are part of an artificial trail during the integration process.
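
For reference, this is roughly what the classic per-pixel kappa-sigma rejection being referred to looks like (a minimal NumPy sketch, not any particular stacking tool). With many subs, trail pixels are clear outliers in the per-pixel stack and get masked; with only a handful of subs and criss-crossing trails, too many values at a given pixel are contaminated and the rejection breaks down.

import numpy as np

def sigma_clip_stack(subs, kappa=3.0, iters=3):
    # subs: registered frames, shape (n_subs, height, width)
    data = np.asarray(subs, dtype=float)
    keep = np.ones(data.shape, dtype=bool)
    for _ in range(iters):
        masked = np.where(keep, data, np.nan)
        med = np.nanmedian(masked, axis=0)
        std = np.nanstd(masked, axis=0)
        keep &= np.abs(data - med) <= kappa * std   # reject trail/outlier pixels
    return np.nanmean(np.where(keep, data, np.nan), axis=0)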
jrista 8.59
Maxim:
I'd vote for a noiseless sensor. Stacking is such BS in my view.

You would need not just a noiseless sensor, but infinite dynamic range as well, in order to do away with stacking. You would also need perfect tracking in the mount, to avoid trailing, and optimal conditions (i.e. no wind, no cable tugs, no other mishaps that might ruin your one and only frame!)

I think that stacking will always be a process we use. The longer the exposure, the greater the risk that something will destroy it, so a noiseless sensor capable of representing infinite dynamic range, which would be awesome no matter what, would probably still need to be used much the way we image now, just to avoid losing your one and only frame to, say, a gust of wind, an airplane, or a grid of satellite trails from Starlink. A quick sanity check of why stacking costs nothing with such a sensor is below.
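
A tiny check of that point, shot noise only, with assumed numbers: with a truly noiseless sensor, summing N short subs gives exactly the same SNR as one long exposure, so stacking costs nothing and you keep the insurance of being able to throw away a ruined sub.

import math

rate = 2.0                 # e-/pixel/second from the target (assumed)
total_s = 3600
n_subs = 60

signal_long = rate * total_s
snr_long = signal_long / math.sqrt(signal_long)

signal_sub = rate * (total_s / n_subs)
snr_stack = (n_subs * signal_sub) / math.sqrt(n_subs * signal_sub)

print(snr_long, snr_stack)   # identical; read noise is what makes short subs "cost" anything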
Marcelof 4.52
Reading about the ATLAS telescope reminded me of the recent news that the camera for the Vera Rubin Observatory, the largest camera in the world, is ready. I was struck by the fact that they used CCD sensors and not modern CMOS. I don't know if there is a technical or scientific reason for this (Google does not provide much information) or if, in a project of this magnitude that has been planned for decades, they were trapped by decisions made years ago.

I understand the use of "old" technology in rovers, probes and space telescopes; these cannot be serviced and need to use ultra-proven technology. But a ground-based observatory could afford to take a bit more risk.
jhayes_tucson 22.61
Jon Rista:

Yeah, for regular imaging frames, read noise would be a problem with non-destructive reads. My thought, though, was more for guiding with the same sensor. Imagine the benefits of being able to guide with on-axis, center-field stars that were optimal, without needing to waste image train space for an OAG or anything like that. ;) I don't know if some kind of non-destructive read approach, or some kind of split pixel approach, or some combination of both would be best for that, but that is the general idea: Guiding and imaging with the same camera. 

There is that quantum film sensor concept. It is an interesting concept, and the ability to count photons and effectively have no read noise is intriguing. The dynamic pixel size, though, and the inconsistent shape in pixels, makes me wonder how viable it might be for astrophotography. If it really is noiseless and photon counting, though, and if the image data produced could still be registered and stacked, that could be really interesting in the long run. The technology has been in development for over 15 years now, though, and I'm honestly starting to wonder if it will ever see the light of day.

It is pretty sad about the ATLAS project... Being locked into old technology is really a bummer. I really wonder how the professional astronomy community could help spur and even fund advancement of sensor technology to better suit the needs of astronomy and astrophotography. There are other things that you might be able to integrate into a modern camera design that are well out of the normal parameters of a "camera" that could be really helpful. For example, satellite trails...with all that Elon Musk and some other organizations are doing to pollute the crap out of our skies with thousands (and eventually tens of thousands!!!) of additional satellites in low earth orbits, trailing (including a horrible form of GRID TRAILS) could become extremely problematic. It would be interesting if a built in hardware form of AI could be trained to detect this kind of issue during exposure, and reject unwanted photons (say in a photon counting continuous read type sensor). I dunno, there may be other things that could be integrated into modern cameras as well... Not sure what's best there, though...let the camera deal with unwanted patterned signals, or deal with them in post? Classic rejection algorithms can get overwhelmed with repeated criss-crossing satellite trails... Maybe ML/AI algorithms could better detect and reject outlier pixels when they are part of an artificial trail during the integration process.

I really muddled up the point I was trying to make about custom sensors.  So just to be clear:  There’s nothing sad about the ATLAS project.  It doesn’t need the latest state-of-the-art camera for its science mission.  Heck, it has already discovered an insane number of supernovae and identified a huge number of NEOs.  They custom designed that scope and the entire processing system around the mission, and that sensor fits the mission.  There are 4 ATLAS telescopes scattered around the globe to increase the odds of full-time monitoring, and a key part of the design is reliability and component stability.  When it comes online, LSST will augment that mission with a CCD sensor that is beyond state of the art.  As I recall it is about 0.75 m in diameter with over 3 gigapixels at 18 bits/pixel.  In March I heard that they are trying to determine whether they will go with single 180 s exposures for each field or break each into two 90 s exposures—mainly to deal with satellite trails.   I’m not sure of the actual exposure values, so they could be a lot shorter—more like 15 seconds.   I was told that they can reach magnitude 25 with a 15 second exposure.  As I recall, the plan is to map the entire sky twice each night.

John
MaksPower 0.00
If nothing else, stacking is invaluable for burying satellite trails (thanks, Mr Musk), so hoping to eliminate it is IMHO a negative.

A big issue with one long exposure (and I did that in the film days) is that if something messes up during the shot - e.g. the guiding goes off a bit - you may never know until the end, and the whole thing is wasted. Shooting short subs means only a fraction are discarded and you stack just the best ones.

As for poor-quality guide stars off axis, that means you need a better optic. Admittedly I’m spoiled - I specifically went looking for such a beast, and found one. They do exist, but not in typical consumer-grade gear, or at a typical consumer price.
rogerg 0.90
Jon Rista: "Imagine the benefits of being able to guide with on-axis, center-field stars that were optimal, without needing to waste image train space for an OAG or anything like that. ;) I don't know if some kind of non-destructive read approach, or some kind of split pixel approach, or some combination of both would be best for that, but that is the general idea: Guiding and imaging with the same camera".

This can be achieved without inventing a new sensor, by using the On-Axis Guider (ONAG) from Innovations Foresight. It gives you the whole field of view of the imaging camera for selecting a guide star.  Works brilliantly.  It does take up some back focus, but it is well worth the effort.
CS Roger
HegAstro 11.99
In a different thread, I posted about the Foveon sensor that Sigma has been developing for decades. There are significant challenges, obviously, or else it would have been launched by now. If the issues with cross talk between colors and noise are solvable, it would be a big benefit for RGB imaging. Essentially, there would be no more debates  about how much (and whether) you need an L Filter versus RGB. You'd be able to get color data  at full fidelity 100% of the time. 

jhayes_tucson 22.61
Reading about the ATLAS telescope reminded me of the recent news that the camera for the Vera Rubin Observatory, the largest camera in the world, is ready. I was struck by the fact that they used CCD sensors and not modern CMOS. I don't know if there is a technical or scientific reason for this (Google does not provide much information) or if, in a project of this magnitude that has been planned for decades, they were trapped by decisions made years ago.

I understand the use of "old" technology in rovers, probes and space telescopes; these cannot be serviced and need to use ultra-proven technology. But a ground-based observatory could afford to take a bit more risk.

The LSST camera design was laid out in the mid-1990s.  I recall a call back then from one of the engineers to discuss methods to ensure that the assembled sensor was flat to within the depth of focus—and it was quite a difficult problem.  I think that a lot of folks forget that CMOS technology ultimately replaced CCD sensors for one key reason—video.   CMOS sensors can be read MUCH faster than CCDs, which enables recording high-resolution images at video rates.  CMOS has evolved to have additional advantages in terms of lower read noise, improved fill factor, and better responsivity (QE) due to back thinning (which can also be done with CCDs).  But the other reason that CMOS ultimately kicked CCD out of the market was the cost advantage that came from the volume manufacturing enabled by the consumer camera market.  Back when ATLAS and the LSST sensor were developed, CMOS sensors were very small, they had a terrible fill factor, and they lagged far behind CCD in terms of sensitivity and linearity.  Yes, these telescopes use an older chip architecture, but don’t assume that these are poorly performing sensors.  Even though I am completely sold on modern CMOS sensors, I happen to have two FLI-16803 cameras and they will still produce images every bit as good as what I can get from my IMX455-based cameras.  Unfortunately, those cameras are not a great match for either of my two telescopes in Chile.

John
MaksPower 0.00
The Foveon sensor is not really a new idea - many years ago a version was released and used by Sigma in their DP1 camera - I had one. It was the only camera I have ever literally dumped in the trash, it was so bad I didn’t have the heart to sell it to some other sucker.

What makes more sense to me is a modified Bayer filter - instead of 4 pixels arranged as RGGB, make it RGLB where L is full-spectrum luminance, and use the RGB pixels for colour.

That way an OSC should, in one shot, give a result similar to LRGB from a mono camera with filters.
HegAstro 11.99
Nick Loveday:
The Foveon sensor is not really a new idea - many years ago a version was released and used by Sigma in their DP1 camera - I had one. It was the only camera I have ever literally dumped in the trash, it was so bad I didn’t have the heart to sell it to some other sucker.

What makes more sense to me is a modified Bayer filter - instead of 4 pixels arranged as RGGB, make it RGLB where L is full-spectrum luminance, and use the RGB pixels for colour.

That way an OSC should, in one shot, give a result similar to LRGB from a mono camera with filters.

Indeed, the troubles with the Foveon sensor are why Sigma has not launched it yet. And I am not holding my breath that they will, because a huge investment would be needed to produce it at scale even if its problems are solved. And while it would have benefits for the astro community, we are simply not a large enough market for a company to manufacture a radically different sensor suited specifically to our needs. So it would have to have some significant benefit for things like the sensors in smartphones.

In regards to an LRGB sensor: yes, such a sensor would capture more photons (roughly 50% more than an RGGB array), but you'd have to be careful about assuming that would translate directly into the corresponding SNR improvement of ~20%. In LRGB combination, you are basically replacing the L of your color image with the luminance; my understanding is that you convert from RGB to L*, a*, b*, then replace the L* with the L* from the grayscale image. I don't know what fraction of the theoretical SNR increase from pure photon capture is preserved in this combination, but I would anticipate that number to be much less than the theoretical maximum of ~20%.
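
For illustration, here is a sketch of that combination using scikit-image's Lab conversion. The simple [0, 1] normalization and the direct replacement of L* are assumptions; real LRGB workflows do this on registered, stretched data with more care about scaling.

import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lrgb_combine(rgb, lum):
    # rgb: HxWx3 float in [0, 1]; lum: HxW float in [0, 1], registered to rgb
    lab = rgb2lab(rgb)
    lab[..., 0] = lum * 100.0          # L* channel runs 0..100
    return np.clip(lab2rgb(lab), 0.0, 1.0)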
MaksPower 0.00
Arun H:
why Sigma has not launched it yet


Er, they did, in 2008. And it flopped. They tried to pretend it was 14MP when it was not even 3MP. 

https://www.dpreview.com/reviews/sigmadp1
HegAstro 11.99
Nick Loveday:
Arun H:
why Sigma has not launched it yet


Er, they did, in 2008. And it flopped. They tried to pretend it was 14MP when it was not even 3MP. 

https://www.dpreview.com/reviews/sigmadp1

Yes, they did. I was referring to their more recent efforts:

Sigma's Full-Frame Foveon Camera is Still at Least 'a Few Years Away' | PetaPixel

"In an interview with PetaPixel, Sigma’s CEO Kazuto Yamaki explains that it hasn’t moved past stage two of three in the sensor’s development — a stage it has remained in since 2022.“We’ve found the potential manufacturing partner, but we have not reached an agreement with them. So to be honest, the product stage is still the same. We are still in stage two,” Yamaki says."

"Further, even in the best-case scenario where the current prototype is perfect, the Foveon camera is still a ways off. When asked about a timeline, Yamaki responded: “at least a few years, minimum.”

 Incidentally, I never claimed the Foveon sensor was a new idea. I believe I mentioned in my original post that Sigma has been working on it for a couple of decades. And I believe I also mentioned that the sensor has significant technical issues which would need to be resolved to make it useful. This topic is about what an ideal sensor for astro would be. IF (and it is a very big IF) a sensor such as a Foveon sensor could be made with similar noise characteristics and color fidelity as our current sensors AND (also very uncertain) it was found to be commercially valuable, THEN it  could find use in astro work.
MaksPower 0.00
Arun H:
When asked about a timeline, Yamaki responded: “at least a few years, minimum.”


Déjà vu. In the early 2000s, they took so long to get it out that Bayer sensors advanced to the point where the Foveon was utterly irrelevant by the time any cameras with it actually shipped.
jrista 8.59
Roger Gifkins:
Jon Rista: "Imagine the benefits of being able to guide with on-axis, center-field stars that were optimal, without needing to waste image train space for an OAG or anything like that. ;) I don't know if some kind of non-destructive read approach, or some kind of split pixel approach, or some combination of both would be best for that, but that is the general idea: Guiding and imaging with the same camera".

This can be achieved without inventing a new sensor, by using the On Axis guider from Inovations Foresight. This gives you the whole field of view from the imaging camera for selecting a guide star.  Works brilliantly.  It does take up some back focus but well worth the effort. 
CS Roger

This still requires two cameras, though, and the ONAG is rather large and bulky. The thought is that with a single camera, you could eliminate the need to dedicate a lot of back-focal distance to something like the ONAG.
jrista 8.59
Arun H:
In a different thread, I posted about the Foveon sensor that Sigma has been developing for decades. There are significant challenges, obviously, or else it would have been launched by now. If the issues with cross talk between colors and noise are solvable, it would be a big benefit for RGB imaging. Essentially, there would be no more debates  about how much (and whether) you need an L Filter versus RGB. You'd be able to get color data  at full fidelity 100% of the time. 


I'm quite familiar with Foveon. I always loved the concept. At one point in time, Canon also had a layered sensor design in the works...although, now that I think about it, that was the better part of a decade ago.

I know that Foveon had SNR problems and a couple of other issues. I wonder if Canon ran into similar issues, as I haven't heard about their layered sensor in quite some time. There must be some inherent issue with the approach; otherwise, given Canon's long experience in sensor design and manufacturing, I'd think they would have worked it out over the last 10 or 12 years.
jrista 8.59
Nick Loveday:
The Foveon sensor is not really a new idea - many years ago a version was released and used by Sigma in their DP1 camera - I had one. It was the only camera I have ever literally dumped in the trash, it was so bad I didn’t have the heart to sell it to some other sucker.

What makes more sense to me is a modified Bayer filter - instead of 4 pixels arranged as RGGB, make it RGLB where L is full-spectrum luminance, and use the RGB pixels for colour.

That way aOSC should in one shot give a result similar to LRGB from a mono camera with filters.

This is because Sigma bought Foveon. Foveon was around for a while before Sigma bought them, but...it was a long time ago...20 years? That is how Sigma acquired the tech and why they used it in their cameras for a while. I think they are still working on the Foveon technology. A while back I read something about a full-frame Foveon X3 sensor. I don't know if it was ever released, but it would be the first full-frame (36x24mm) layered sensor to market.
jrista 8.59
Arun H:
Nick Loveday:
Arun H:
why Sigma has not launched it yet


Er, they did, in 2008. And it flopped. They tried to pretend it was 14MP when it was not even 3MP. 

https://www.dpreview.com/reviews/sigmadp1

Yes, they did. I was referring to their more recent efforts:

Sigma's Full-Frame Foveon Camera is Still at Least 'a Few Years Away' | PetaPixel

"In an interview with PetaPixel, Sigma’s CEO Kazuto Yamaki explains that it hasn’t moved past stage two of three in the sensor’s development — a stage it has remained in since 2022.“We’ve found the potential manufacturing partner, but we have not reached an agreement with them. So to be honest, the product stage is still the same. We are still in stage two,” Yamaki says."

"Further, even in the best-case scenario where the current prototype is perfect, the Foveon camera is still a ways off. When asked about a timeline, Yamaki responded: “at least a few years, minimum.”

 Incidentally, I never claimed the Foveon sensor was a new idea. I believe I mentioned in my original post that Sigma has been working on it for a couple of decades. And I believe I also mentioned that the sensor has significant technical issues which would need to be resolved to make it useful. This topic is about what an ideal sensor for astro would be. IF (and it is a very big IF) a sensor such as a Foveon sensor could be made with similar noise characteristics and color fidelity as our current sensors AND (also very uncertain) it was found to be commercially valuable, THEN it  could find use in astro work.

Aye, there have been long-term issues with layered sensors. Canon also has some patents on them, and I think the first time I read about one was around 2013 or thereabouts. Canon has never released one either...and they have extensive skill and knowledge in designing and manufacturing image sensors. I think some of the problems stem from noise in the deep-layer photodiodes, as well as ensuring the spectral purity of the green and blue layers so they don't absorb red light. In any case, there are definitely technical issues with layered sensors.

There was another very intriguing sensor I found some time ago...a TiN (titanium nitride) based sensor that was able to do realtime continuous read, with not only precise photon location detection but also exact wavelength detection (it didn't "bucketize" photons with filters; instead it measured the actual energy of each photon and recorded that, so you could reproduce the EXACT color of that wavelength). I'll see if I can dig up an article on that sensor. This particular sensor, I think, relied on both superconductivity and superinsulation (a newer phenomenon, discovered around a decade ago, that allowed this kind of sensor to be developed), so it requires deep cooling beyond the capabilities of the average amateur imager. Still, the concept and technology are really interesting.
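
The "exact color" part is just the photon energy-to-wavelength relation, nothing specific to that TiN device: if the detector measures each photon's energy E, the wavelength follows from lambda = h*c / E. Quick numeric check (constants hard-coded):

H = 6.62607015e-34     # Planck constant, J*s
C = 2.99792458e8       # speed of light, m/s
EV = 1.602176634e-19   # joules per electronvolt

def wavelength_nm(energy_ev):
    return H * C / (energy_ev * EV) * 1e9

print(wavelength_nm(1.89))   # ~656 nm, i.e. H-alpha
print(wavelength_nm(2.48))   # ~500 nm, close to the OIII line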
jrista 8.59
Boy, this was a lot longer ago than I thought. There seems to have been a LOT of research on TiN and similar materials for superconducting junctions and image sensors since, so I don't know where this concept has gone since 2013. It sounds pretty amazing, though, given that it has no read noise, no dark current, absolute true color, effectively infinite dynamic range, etc.

https://spectrum.ieee.org/superconducting-video-camera-sees-the-universe-in-living-color
HegAstro 11.99
Nick Loveday:
Arun H:
When asked about a timeline, Yamaki responded: “at least a few years, minimum.”


Deja vu. In the early 2000's they took so long to get it out that Bayer sensors advanced to the point the Foveon was utterly irrelevant by the time any cameras with it actually shipped.

I agree. Anything they put out now will be going up against CMOS development and manufacturing backed by billions of dollars of R&D across two decades. I don’t think it has a chance of gaining traction. Nonetheless, we are talking theoretically ideal sensors here.
 