Resolution
Ideally, when filming, the scene and the lens jointly determine the resolution of the capture: the scene presents a certain contrast and detail, and the lens tries to resolve that as best it can (note that a lens is always something of a compromise, and in that technical compromise lies the artistry, or the flaw).
Camera
But there is a camera as well! And basically the camera cannot add magic, it can only take magic away. All it has to do is correctly sample the projection of the lens onto its sensor and then store that data (color information, latitude and resolution) in a visually lossless manner. This can be done by quantization / debayering in-camera, recording into a codec that's robust enough not to distort that data, or it can be done in post production, with the potential benefit of slightly smaller file sizes; some call that RAW, but let's call it undebayered. However, in order to display the sampled data in a truthful manner, it has to be oversampled by at least 0.7X, or even better 0.5X, and in that case the ProRes file will be smaller than the RAW equivalent with the least amount of compression (we've seen a lot of so-called RAW formats whose compression produces far more distorted data than the ProRes codec).
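To make that oversampling idea concrete, here is a minimal numpy sketch (toy numbers, not any real camera pipeline): averaging 2×2 photosite blocks into one delivery pixel, the 0.5X case, halves the noise standard deviation of each pixel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4k-wide mono "sensor" frame: flat grey signal plus read noise.
sensor = 0.5 + rng.normal(0.0, 0.02, size=(2160, 4096))

def downsample_2x(img: np.ndarray) -> np.ndarray:
    """Area-average 2x2 blocks: a crude stand-in for 0.5X oversampling."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

master = downsample_2x(sensor)  # a 4k capture delivered as a 2k master

# Averaging 4 photosites per output pixel halves the noise standard deviation.
print(f"sensor noise std: {sensor.std():.4f}")   # ~0.0200
print(f"master noise std: {master.std():.4f}")   # ~0.0100
```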
‘4k is already sharp enough for me’
The weirdest argument for low-resolution sampling is also one of the most common ones. Resolution is not about sharpness, but about smoothness: the higher the sample rate, the fewer digital artefacts, and the smoother the image, the more analogue, the less digital. Digital meaning artefacts like colored noise, aliasing and, for instance, smear. And the higher the pixel density, the thinner the OLPF can potentially be. Most brands skimp on the OLPF, or don't use one at all, presenting their customers with faulty pixels that drift all over the place and are at war with their neighboring pixels.
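A tiny 1-D sketch of why sample rate is about smoothness rather than sharpness (toy, assumed frequencies): detail above the Nyquist limit doesn't disappear, it folds back as a fake lower frequency, which in 2-D is exactly the moiré and stair-stepping you see.

```python
import numpy as np

f_detail = 7.0   # fine detail projected by the lens (cycles per unit)

for fs in (10, 40):   # 10 samples/unit undersamples it; 40 oversamples it
    # Below Nyquist (fs < 2 * f_detail) the detail folds back to a fake
    # low frequency: the 1-D version of moire, colored noise and jaggies.
    apparent = abs(f_detail - fs * round(f_detail / fs))
    print(f"fs={fs:>2}: apparent frequency ~ {apparent:.0f} cycles/unit")
    # fs=10 reports ~3 (a lie), fs=40 reports ~7 (the truth)
```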
But our eyes can only see around 2.5k pixels horizontally?
Yes, that's right. And this is the fun part: when looking back at telecine footage (for those not into film: a simple way of digitizing analogue film) of projects I shot in the past on S16 and S35, I see a very good image, while the DigiBeta transfer is only 720×576 with non-square pixels (a pixel aspect ratio). So how come these images didn't look pixelated? Yes, of course, when coming really close and comparing against an ARRI wet-gate 4k scan one could see the differences, which were mostly in smoothness; the DigiBeta kind of destroyed the grain, because the resolution was much smaller than the size of the grain (let alone that grain is not square but randomized in form). But still, it looked pretty amazing for such a low pixel count.
So, simply put: good footage is sampled either in a smart way, or with an extremely high sample rate. And a small pixel pitch (in µm) eventually also leads to issues (let's not talk about that, because this small article is already quite intense). All we have when shooting is an end goal! Everything that lies in between is only a tool to reach that end goal, and yet spec sheets completely ignore that goal. How can I use 12k pixels if all these pixels are not only noisy, but also jumping around?
In simple words: there is an output, let's say the master, and the master can be close to what our retina resolves (2.5k, or let's go to 4k so we don't see pixelation when coming closer to the screen).
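As a back-of-envelope check of that 2.5k figure, with loudly assumed round numbers (20/20 vision resolves roughly one arcminute, and a screen typically fills roughly 40 degrees of your horizontal view):

```python
# Back-of-envelope: how many horizontal pixels can the eye actually use?
acuity_arcmin = 1.0        # assumption: 20/20 vision resolves ~1 arcminute
field_of_view_deg = 40.0   # assumption: typical horizontal viewing angle

pixels_needed = field_of_view_deg * 60.0 / acuity_arcmin
print(f"~{pixels_needed:.0f} horizontal pixels")   # ~2400, i.e. roughly 2.5k
```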
Sampling is a whole different world, and analogue sampling is king, not only due to the random forms and spatiality of grain, but also because a grain never repeats itself: in every new frame the deck is shuffled all over again. By doing so you build the best possible filter.
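Here is a toy numpy sketch of that shuffle (assumed jitter amounts, nothing to do with real film stock): sub-Nyquist detail sampled on a fixed grid lands as a coherent fake pattern, while randomly jittered sample positions, like grain, break it up into harmless noise.

```python
import numpy as np

rng = np.random.default_rng(1)
f = 7.0                          # fine detail, above Nyquist for 10 samples/unit
n = np.arange(0.0, 10.0, 0.1)    # 100 uniform positions (a fixed pixel grid)

fixed = np.sin(2 * np.pi * f * n)
# Jitter each position, like grain that never lands in the same place twice.
jittered = np.sin(2 * np.pi * f * (n + rng.uniform(-0.05, 0.05, n.size)))

alias = np.sin(2 * np.pi * 3.0 * n)  # the fake 3-cycle pattern a fixed grid sees

def corr(a: np.ndarray, b: np.ndarray) -> float:
    return abs(float(np.corrcoef(a, b)[0, 1]))

print(f"fixed grid vs alias pattern:    |r| = {corr(fixed, alias):.2f}")    # ~1.0
print(f"jittered grid vs alias pattern: |r| = {corr(jittered, alias):.2f}") # much lower
```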
CMOS sensors not only need OLPF filters for their square pixels, applying a horizontal and vertical blur to the fed (projected) image; they are also black-and-white sensors, only sensitive to luminance, and by means of a fixed pattern of color filters (a CFA) they can somewhat see color (by extrapolating, or more ideally quantizing, that data into RGB). But the issue is that the native spatial resolution on an 8k sensor is only 2k for red and 4k for green, because per 2×2 block there are two green filters for every red and blue, our eyes being more sensitive to green. We can extrapolate this data with a profile maximizing the resolution, but we can also optimize it for the best possible pixel quality, which we call oversampling. In that case the pixels have low self-noise (standard deviation) and better color accuracy. Since most brands skimp on their OLPFs, or even leave the OLPF out to sell their cams cheaper, the pixels have even lower quality: they are square, presented with organic forms they can't display, getting at war with their neighboring pixels, and they drift like hell, creating weird patterns that can be fun if you're into 90's video art. Another issue with the color filters on top of image sensors is that many use lighter green filters, potentially giving room to calculations that expand dynamic range while trading in color accuracy, etc.
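A minimal sketch of that CFA layout (a standard RGGB Bayer mosaic; real sensors differ in pattern and filter dyes): only a quarter of the photosites see red, half see green, and everything in between has to be invented by the demosaic.

```python
import numpy as np

# A toy RGGB Bayer mosaic; assumed sizes, just to show the sampling pattern.
h, w = 4, 8
cfa = np.empty((h, w), dtype="<U1")
cfa[0::2, 0::2] = "R"
cfa[0::2, 1::2] = "G"
cfa[1::2, 0::2] = "G"   # two greens per 2x2 block: our eyes favor green
cfa[1::2, 1::2] = "B"

print("\n".join(" ".join(row) for row in cfa))
for c in "RGB":
    print(f"{c}: {(cfa == c).mean():.0%} of photosites")  # R 25%, G 50%, B 25%

# Red is sampled only on every other row and every other column, so a w-wide
# sensor has just w/2 native red samples per line; the rest is interpolation.
```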
Since we know by now that our eyes cannot see more than 2.5k at a normal viewing angle, all we need is a camera that's designed to accurately resolve that resolution with low noise, high color accuracy, and high MTF (color and detail separation).
Job
To me, camera manufacturers should focus on making the best possible camera and respect the more ignorant position of their clients towards the science. If I see a camera, no matter if it's 6k, 8k or 12k, and I see FPN (fixed pattern noise), aliasing, weird color artefacts and color edge issues, I simply dismiss it. It's the task of a camera brand to make a camera that samples nicely. But the world is not utopia, and clients and companies like Netflix have a lot of power, and the power of lies is much stronger than the power of truth.
My hope is that in a few years we can have cameras without OLPFs and without CMOS patterns, that simply capture exactly what the lens sees. Then in post we can do whatever we want with that data, and we are no longer limited by the so-called 'esthetics' of digital capturing.
The irony of digital
The irony: if you want aliasing, or noise… just add it digitally! That, to me, is the digital dream: to become analogue, so that the lens and the world it sees become dominant again, and the vision of the filmmaker, not the other way around.
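In that spirit, a minimal sketch of grain added in post (a toy: real grain emulation shapes the noise per exposure level and grain size, but the principle holds): texture is trivial to add to a clean oversampled master, and nearly impossible to remove from a dirty one.

```python
import numpy as np

rng = np.random.default_rng(2)

def add_grain(img: np.ndarray, strength: float = 0.03) -> np.ndarray:
    """Add film-grain-flavored noise, re-rolled every frame so it never repeats.

    A toy stand-in for real grain emulation, just to show the direction of
    the workflow: clean capture first, 'analogue' character added digitally.
    """
    grain = rng.normal(0.0, strength, size=img.shape)
    return np.clip(img + grain, 0.0, 1.0)

clean_master = np.full((1080, 1920), 0.5)   # a clean, artefact-free master
textured = add_grain(clean_master)          # call again per frame: fresh grain
print(f"added texture std: {textured.std():.3f}")
```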
For now the Edge makes me really happy! Let's keep pushing for the truth and educate each other! If you have any questions, or you found some horrible mistakes in my theory (I make bloody mistakes all the time), please correct me so we can improve each other!
Video sample
The quick sample is a Cooke-triplet design more than 100 years old, re-calculated and redesigned/reground by the amazing Valdas Misevicius. Not the sharpest lens around, but the most uncorrected one. Digital zooms were added in post, up to 180%, to show you the artefact-free image (the irony of FB video).