11-29-2019, 12:21 AM
(11-28-2019, 05:13 PM)Ofnuts Wrote: A JPG is 8-bit per channel. A camera that does RAW does 12 or 14 bits. So a JPEG is mostly truncating the lower 4 or 6 bits where most of the noise is.
That is very welcome information
Applying that to this instance...
In an image that has no detail (other than a more or less 'even reflection' of light), the JPG may actually be the better format, since the bits it discards are mostly noise.
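To see what that bit truncation looks like numerically, here is a minimal Python sketch (the 12-bit sample value is a made-up example; a 14-bit sensor would shift by 6):

Code:
# Minimal sketch: reducing a hypothetical 12-bit RAW sample to 8 bits
# by discarding the 4 least-significant bits, where the noise mostly lives.
raw_12bit = 0b101101100111     # 2919, a made-up 12-bit sensor reading
jpeg_8bit = raw_12bit >> 4     # keep only the top 8 bits
print(raw_12bit, jpeg_8bit)    # 2919 182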
This then just leaves the questions concerning ISO and exposure time.
I read that the ISO setting adjusts the light sensitivity of the sensor (high ISO = high sensitivity)
... but high sensitivity also amplifies noise.
In a controlled setting, such as the black box ... I presume that 'low ISO + long exposure time' will produce the best results.
However, this raises the question of accumulating light.
I.e. presumably, if the exposure time were long enough, the image would just keep getting lighter, until the sensor saturates.
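A rough sketch of that accumulation, with made-up numbers (the photon rate and full-well capacity are assumptions, not real sensor specs):

Code:
# Rough sketch of light accumulating over exposure time.
# All numbers are hypothetical, for illustration only.
photon_rate = 500      # photons reaching one pixel per second (assumed)
full_well   = 20000    # photoelectrons a pixel holds before saturating (assumed)

for seconds in (1, 10, 40, 120):
    collected = min(photon_rate * seconds, full_well)
    note = "  <- saturated: the pixel just reads white" if collected == full_well else ""
    print(f"{seconds:4d} s -> {collected} e-{note}")

Past the saturation point, a longer exposure tells us nothing new.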
Does this raise the issue of test validity?
Perhaps a rethink is required?
Defining The Question
We are informed that Black 3.0 absorbs up to 99% of visible light.
- some people suggest that this is an exaggerated claim.
The problem lies with 'light intensity'.
Point a really bright light at it, and it becomes grey.
... but is this a reasonable test?
It seems ideal, if the tester wants to produce a visual failure.
Perhaps we should consider the realistic settings, in order to gain guidance on light intensity.
This led me to think: what exactly is 'light intensity'?
I read that a 20W lamp emits twice as many photons as a 10W lamp.
... presumably, if the 20W filament is heated to the same temperature as the smaller 10W filament (which means it needs roughly twice the emitting surface), it emits the same spectrum but twice as many photons, so the energy hitting the object will be doubled.
Note the difference between photons being doubled, and energy being doubled.
The number of photons emitted could be the same ... but if the temperatures of the emitters were different, the same number of photons would carry a different energy load.
Quote:The lamp filaments must be manufactured to achieve the same temperature, regardless of the wattage.
If this is the case, the photons, emitted by the 10W and 20W lamps, will each carry the same energy, but for 20W, there will be double the number of photons.
... ergo, there will be double the energy hitting the object.
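That doubling can be sanity-checked with the standard relation E = hc/wavelength; a minimal sketch, assuming (unrealistically) that each lamp's entire output is light at a single representative wavelength of 600 nm:

Code:
# Minimal sketch: photons per second from a 10W vs a 20W lamp,
# assuming all electrical power becomes light at one wavelength (600 nm).
h = 6.626e-34          # Planck constant, J*s
c = 3.0e8              # speed of light, m/s
wavelength = 600e-9    # metres (assumed representative value)

energy_per_photon = h * c / wavelength     # ~3.3e-19 J

for watts in (10, 20):
    print(f"{watts} W -> {watts / energy_per_photon:.2e} photons/s")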
If the filament temperature is lowered ... each emitted photon will, on average, carry less energy
... and the light will therefore be a different colour (shifted towards red).
(Strictly speaking, a cooler filament also emits fewer photons overall, so the supply power would need adjusting to keep the photon count comparable.)
Richard Of York Gave Battle In Vain - where Richard (red) has the lowest energy (by happy chance, the aide-mémoire runs from low energy to high, LOL)
Quote:The higher the photon's frequency, the higher its energy (Violet).
Equivalently, the longer the photon's wavelength, the lower its energy (Red).
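Putting numbers on that quote, using the same E = hc/wavelength relation (the wavelengths are the usual textbook values):

Code:
# Photon energy across the visible spectrum: E = h*c / wavelength.
h = 6.626e-34   # Planck constant, J*s
c = 3.0e8       # speed of light, m/s

for name, nm in (("red", 700), ("green", 550), ("violet", 400)):
    energy_j = h * c / (nm * 1e-9)
    print(f"{name:6s} {nm} nm -> {energy_j:.2e} J per photon")

So a violet photon carries roughly 1.75 times the energy of a red one.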
From this...
A beam of photons is required.
In the beam, there can be more, or fewer, photons (in a known ratio - say 10W vs 20W).
In each case, the photons can be given more energy, changing the photon colour range, by increasing the power to the lamp (which raises the filament temperature).
Given that lamps (manufactured to tight tolerances) can be acquired (with some tech support)...
Simply changing the lamps will provide a known increase in the number of photons hitting the object.
If the power adjustment needed to make each wattage of filament produce the same colour output is known ... colour absorption can be tested.
Both tests would relate to total energy absorbed, either by changing the number of photons, or by changing the energy of each photon.
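In other words, the total energy per second is just (number of photons) x (energy per photon), so both tests are two routes to the same quantity; a tiny illustrative check, with made-up beam numbers:

Code:
# Two routes to the same doubling of delivered energy (illustrative numbers).
E = 3.3e-19    # energy of one photon, J (roughly a 600 nm photon)
N = 3.0e19     # photons per second (made-up beam)

print(2 * N * E)   # double the photon count  -> ~19.8 J/s
print(N * 2 * E)   # double the photon energy -> ~19.8 J/s (same total,
                   # though doubling photon energy halves the wavelength)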
Another method (for colour absorption) might be to use filters ... perhaps worth further consideration.
All this informs...
The Test Enclosures
It seems that there must be:
Two enclosures...
The light-receiving enclosure, and the light-emitting enclosure.
This would standardise the photon beam.
A number of different wattage lamps.
A variable power supply.
Possibly, filters (perhaps a partial green filter to balance the sensor sensitivity to green light).
The test software (Gimp) probably needs upgrading to 2.10, to gain linear light readings.
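On the linear-light point: for reference, the standard sRGB-to-linear conversion (IEC 61966-2-1) can also be applied by hand to an 8-bit reading; a small sketch:

Code:
# Standard sRGB -> linear-light conversion (IEC 61966-2-1).
def srgb_to_linear(v8):
    """v8: 0-255 channel value; returns linear light in 0.0-1.0."""
    s = v8 / 255.0
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

for v in (10, 50, 128, 255):
    print(v, round(srgb_to_linear(v), 5))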
Test objectives
Given that Black 3.0 absorbs photon energy, and emits almost nothing of what it has received...
The test will determine whether it better absorbs large numbers of photons at low energy, or fewer photons at a higher energy (total energy remaining equal).
Ideally, it would determine the point at which the paint begins to visibly reflect (or re-emit) photons.
(I'm guessing that, up to a certain energy level, all the energy can be absorbed)
Theoretically, it will be possible to judge light absorption percentage (via as yet unconfirmed methods).
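One plausible (as yet unconfirmed) method: photograph the Black 3.0 patch next to a white reference card under the same light, read both as linear values, and take the ratio. A hedged sketch, where every number is a made-up placeholder:

Code:
# Hedged sketch: estimating absorption from linear pixel readings.
# All three numbers below are hypothetical placeholders.
sample_linear    = 0.004   # mean linear reading off the Black 3.0 patch
reference_linear = 0.60    # mean linear reading off the white card
reference_albedo = 0.95    # assumed reflectance of the reference card

reflectance = (sample_linear / reference_linear) * reference_albedo
print(f"estimated reflectance: {reflectance:.4f}")
print(f"estimated absorption:  {(1 - reflectance) * 100:.1f} %")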
Camera Settings
These can be determined by first working with the lowest-wattage lamp to produce a visible instance of reflected light.
Initially this would be a subjective judgement, but thereafter, with the camera settings fixed, any changes would be evaluated against that initial quantity of reflected light.
Validity?
I believe so ... though the question of validity remains open.