A Bayer sensor has a Bayer limit and a Nyquist limit. As you might know, a Bayer CMOS sensor has a repetitive color filter array built up of blocks of four. In each block of four pixels there are two green photosites (our eyes are more sensitive to green, hence the extra green spatial resolution), one red and one blue.

When debayering these 4 pixels we are guessing the RGB values of each pixel by interpolating them with the info of the neighboring pixels. This has of course evolved over the years with better algorithms, but it's still a bit of a guessing game. A sensor is basically a black and white sensor that registers luminance as voltage; by overlaying such a sensor with a color filter array of sufficient density, we can filter each photosite according to its position in that array. The quality of the CFA (color filter array), the micro lenses that guide the light to each photosite, the OLPF (which filters both IR and UV light and softens the image on a pixel level), the debayer algorithm, and of course the quality of the sensor itself (noise levels, etc.) and its ADC (analogue to digital converter) jointly determine the quality of the output.
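To make the "guessing game" a bit more concrete, here is a minimal sketch in Python/NumPy, assuming a simple RGGB layout and plain bilinear interpolation; real camera debayer algorithms are a lot smarter than this:

```python
import numpy as np

def bilinear_debayer(mosaic):
    """Naive bilinear demosaic of an RGGB Bayer mosaic (float array, HxW).
    Every output pixel guesses its two missing colors from its neighbors."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))

    # Masks marking which color each photosite actually measured (RGGB tiling).
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Green sites are filled from the 4-neighbor cross; red/blue also need the diagonals.
    kernel_g  = np.array([[0.00, 0.25, 0.00], [0.25, 1.00, 0.25], [0.00, 0.25, 0.00]])
    kernel_rb = np.array([[0.25, 0.50, 0.25], [0.50, 1.00, 0.50], [0.25, 0.50, 0.25]])

    def interp(channel_mask, kernel):
        # Keep the measured values, zero out the rest, then spread them to neighbors.
        sparse = np.where(channel_mask, mosaic, 0.0)
        padded = np.pad(sparse, 1, mode='reflect')
        out = np.zeros_like(sparse)
        for dy in range(3):
            for dx in range(3):
                out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
        return out

    rgb[..., 0] = interp(r_mask, kernel_rb)
    rgb[..., 1] = interp(g_mask, kernel_g)
    rgb[..., 2] = interp(b_mask, kernel_rb)
    return rgb
```

Every value that a photosite did not measure itself is borrowed from surrounding photosites, which is exactly where the guessing comes in.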

As you might be able to understand, an 8K sensor for cinematography is not really needed for mastering, but for capturing there are a few benefits. As you might know, the Arri Alexa as standard oversamples from 2.8K to 2K when shooting ProRes; if you do the math you will find that 2K is about 70% of the resolution that is initially captured, and that is roughly the minimum of oversampling you need to get past the guessing game of the Bayer concept. But let's not get too technical in my first reaction (still trying to find the right tone to answer questions on this forum); we can get more technical along the way. Also, if you're reading this and you find any mistakes, please point them out. I'm no god and I'm not a camera designer; I truly do my best to study these concepts, but I learn from my mistakes.
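Just as a back-of-the-envelope check of that ~70% figure (the exact photosite counts are an assumption on my part, real Alexa numbers may differ slightly):

```python
# Rough check of the Alexa-style oversample mentioned above: 2K out of 2.8K per axis.
capture_width, delivery_width = 2880, 2048   # assumed photosite / master widths
ratio = delivery_width / capture_width
print(f"{ratio:.0%} of the captured width ends up in the 2K master")  # ~71%
```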

Some sensors have a 2X2 mode; this is sometimes referred to as quad Bayer. But quad Bayer proper can officially only do oversampling, because there's a lens on top of each cluster of 4 pixels that spreads the light evenly, so it basically feeds the 4 pixels with the same light beam, and the 4 pixels each filter a different color in order to jointly determine the accurate color. In this case no debayering has to be done: the 4 pixels are quantized instead of interpolated. Such a camera can only do oversampled output, because pixel-to-pixel readout wouldn't make sense; luminance-wise (black and white) the cluster of 4 are all the same, they only differ in color.
In the case of the Edge, the sensor has a 2X2 mode on sensor: it can still do a full readout at 8K, but when it oversamples from 8K to 4K (4 pixels become one), the 4 pixels are quantized into one super pixel. By doing so you get closer to the sensor's initial 12-bit ADC in terms of signal-to-noise ratio. The 12 stops of dynamic range will yield a cleaner signal, because noise values are evened out, but your DR won't be higher, just cleaner. Of course there's the IMA test, which will point to a different value, but the IMA test is troublesome; in a latitude test you will be able to retrieve about the same number of stops as in 1X1, only with slightly more noise and colors that are not as good, especially in the shadows, where noise causes more variation. To make a long story short: the noise floor is where the ADC is basically creating its own noise. That noise is not part of the captured scene, it is often called dark current, and it can be diminished by the use of a dedicated black balance.
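A quick toy example of why averaging 4 photosites into one super pixel cleans up the signal without adding dynamic range (the noise value below is just an assumed number, not a measurement of the Edge):

```python
import numpy as np

rng = np.random.default_rng(0)

# Small flat grey patch standing in for the sensor: true signal plus random read noise.
true_signal = 0.18                       # mid-grey, normalised sensor units
read_noise = 0.01                        # per-photosite noise, an assumed value
patch = true_signal + rng.normal(0.0, read_noise, size=(512, 512))

# 2X2 binning: average each cluster of four photosites into one super pixel.
binned = patch.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print("noise before binning:", patch.std())    # ~0.010
print("noise after binning :", binned.std())   # ~0.005, roughly halved (sqrt of 4)
```

The signal itself doesn't change, only the random wobble around it shrinks, which is exactly "the same stops, just cleaner".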
What you benefit from the most when oversampling is: 1. 4X less data (the jump from 8K to 4K is 4X), and 2. a cleaner signal with less corrupted data, because you skip the guessing game and combine real values instead. There are a lot of different terms floating around, like downsampling, oversampling and supersampling; I mostly use the term oversampling: you sample at a higher rate than you plan to master at. And as long as the finest detail the lens and OLPF together pass to the sensor stays below half the sensor's sampling rate (the Nyquist limit), you won't suffer from aliasing.
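A tiny sketch of what that Nyquist limit means in practice: detail finer than half the sampling rate doesn't vanish, it folds back as a false, coarser pattern, which is exactly the aliasing the OLPF is there to prevent (units and numbers are made up for illustration):

```python
import numpy as np

# Sample a fine sine-wave "detail" with too few samples and watch it alias.
samples_per_mm = 10                       # sampling rate of our imaginary sensor
nyquist = samples_per_mm / 2              # 5 cycles/mm is the most it can resolve

x = np.arange(0, 10, 1 / samples_per_mm)
fine_detail = np.sin(2 * np.pi * 7 * x)   # 7 cycles/mm: beyond the Nyquist limit

# The sampled values are indistinguishable from a 3 cycles/mm pattern (7 - 10 = -3).
alias = np.sin(2 * np.pi * -3 * x)
print("max difference:", np.max(np.abs(fine_detail - alias)))  # ~0, within rounding
```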
Now the most important reason to use the 2X2 oversampled mode: color reproduction. As I said before, debayering is interpolating, taking info from neighboring pixels to retrieve RGB data. One pixel is green, but it has to get a red and a blue value as well, which it takes from its neighbors. This basically lowers spatial resolution, and if the neighboring pixel was filming something else (say the edge of a line, where the initial pixel was registering a black line), it's incorrect to take the data of that neighbor, and it will lead to pollution in the signal. Oversampling, however, just treats the 4 pixels as one pixel, with a lower resolution as the result. If your aim is to master for cinema, you need at least 2.5K pixels for the middle row (the best seating place), and maybe 4K at most for the front row (where you sit and pan and scan with your head and eyeballs because you can't overlook the screen from one fixed position). So with Netflix and producers etc. wanting 4K, one cannot oversample a 4K sensor to 2K; even though it would probably look amazing, it's politically risky. The Edge has this amazing sensor with 8K resolution, relatively low noise values, no fixed pattern noise, and the ability to oversample to 4K. Once you combine the color info (quantization) of the 4 color arrays, you enter a whole new universe of colors, where you not only capture more correct colors but also a wider gamut (spectrum).
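For those who like to see it spelled out, this is roughly what "quantizing instead of interpolating" means: each 2X2 RGGB cluster already contains one red, two green and one blue measurement of almost the same point, so it can be collapsed into one full-color pixel without borrowing anything from its neighbors. A simplified sketch, not the camera's actual pipeline:

```python
import numpy as np

def rggb_to_superpixels(mosaic):
    """Collapse an RGGB mosaic (HxW, even dimensions) into an H/2 x W/2 RGB image.
    No interpolation: each output pixel is built only from its own 2x2 cluster."""
    h, w = mosaic.shape
    clusters = mosaic.reshape(h // 2, 2, w // 2, 2)
    r  = clusters[:, 0, :, 0]                 # top-left of each cluster
    g1 = clusters[:, 0, :, 1]                 # top-right
    g2 = clusters[:, 1, :, 0]                 # bottom-left
    b  = clusters[:, 1, :, 1]                 # bottom-right
    return np.stack([r, (g1 + g2) / 2, b], axis=-1)
```

Feed it a float mosaic and you get back a half-resolution image in which every pixel has a measured red, green and blue instead of a guessed one.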

The fun part about RAW is that people who work with RAW can do this in post as well, if their program supports that type of RAW and offers oversampling. The reality is that most religious RAW people have no clue about the troublesome nature of RAW, which is basically a black and white image with a fixed pattern of color arrays. With RAW you potentially profit data-wise from recording a black and white image, and thus have a lot less data, and then by debayering in post you get the RGB values of each pixel. But at the same time, with RAW you can't capture (record) an oversampled signal, so you have to store each pixel. With oversampled ProRes 444 you let the camera do all the necessary math to get the most honest and best looking data (thus getting past the Nyquist and Bayer issues) and record that at a lower resolution with better data, which most probably has the same kind of 'high resolution feel', because the pixel-to-pixel data is somewhat damaged by interpolation anyway. Oversampled ProRes 444 will be smaller than RAW, and will have the same (if not better) flexibility in post: white balance, digital gain, tint, everything can be edited and graded non-destructively in a visually lossless manner.

Because with RAW the first pass inside your NLE (including Resolve, Scratch, Baselight) is the debayer, grading is always the 2nd/3rd pass. All the settings in your RAW UI are not based on the pre-debayer data, but are just numeric values based on metadata. So if you change your WB in the RAW UI you can set it to exactly 5600 Kelvin, whereas in the grading panel you can only offset it; the results are the same, but the difference is that the RAW tab knows the initially captured metadata. For a colorist this doesn't matter, but for the religious people it is a reason to think that with RAW you can access some sort of god mode, in which you can change the ISO (the voltage on the sensor while shooting), etc. This is all not true. Both can do exactly the same; the RAW UI just displays the offset data a bit more accurately, if that's what you're looking for (knowing it's exactly 1 stop of extra digital gain, etc.), as in the sketch at the end of this post. Hope this helps. More info about the Kinefinity MAVO Edge 8K, like spec sheets, footage etc., can be found here:
https://gafpagear.com/shop/kinefinity-mavo-edge-8k/
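P.S. A small sketch of the RAW tab vs. grade point above: on linear data, white balance is just a per-channel gain, so whether those gains come from an absolute Kelvin value in the RAW panel or from nudging offsets in the grade, the pixels end up in the same place; only the bookkeeping differs. All gain values below are made up for illustration:

```python
import numpy as np

linear_rgb = np.array([0.42, 0.36, 0.30])     # one linear pixel straight off the debayer

# "RAW tab" route: metadata says the clip was shot at 4300 K, you dial in 5600 K,
# and the software looks up the corresponding per-channel gains (values assumed).
raw_tab_gains = np.array([1.18, 1.00, 0.85])
via_raw_tab = linear_rgb * raw_tab_gains

# "Grade" route: you don't know the Kelvin numbers, you just offset the channels
# by the same amount until it looks right.
grade_offset_gains = np.array([1.18, 1.00, 0.85])
via_grade = linear_rgb * grade_offset_gains

print(np.allclose(via_raw_tab, via_grade))    # True: same pixels, different bookkeeping
```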