I have a np.array that represents a grayscale image. I need to use cv2.SimpleBlobDetector(), which unfortunately only accepts 8-bit images, so I need to convert the image, obviously with some quality loss.
Here is what I've tried:

```python
import numpy as np
import cv2

[...]

data = data / data.max()  # normalizes data to the range 0 - 1
data = 255 * data         # scale to 0 - 255
img = data.astype(np.uint8)
cv2.imshow("Window", img)
But cv2.imshow does not display the image as expected; it shows strange distortion.
In the end, I only need to convert a np.float64 array to np.uint8, scaling all the values down and truncating the rest; e.g. 65535 becomes 255, 65534 becomes 254, and so on. Any help?
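In other words, something like this (a sketch assuming the source data is np.uint16, so the hard-coded 65535 is just that type's maximum):

```python
import numpy as np

# Desired mapping (assuming np.uint16 input): scale 0-65535 down to
# 0-255 and truncate the fractional part.
data = np.array([0, 256, 65534, 65535], dtype=np.uint16)
img = (data.astype(np.float64) / 65535 * 255).astype(np.uint8)
print(img)  # [  0   0 254 255]
```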
A better way to normalize your image is to take each value and divide it by the largest value representable by the image's data type. This ensures that images with a small dynamic range remain small, and that they are not inadvertently normalized so they become gray. For example, if your image had a dynamic range of [0-2], the code right now would scale that to intensities of [0, 128, 255]. You want these to remain small after converting to np.uint8.

Therefore, divide every value by the largest value possible for the image type, not the largest value in the actual image itself. You would then scale this by 255 to produce the normalized result. Use numpy.iinfo and provide it the type (dtype) of the image, and you will obtain a structure of information for that type. You would then access the max field from this structure to determine the maximum value.
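For example, querying np.iinfo for a couple of common image types:

```python
import numpy as np

print(np.iinfo(np.uint8).max)   # 255
print(np.iinfo(np.uint16).max)  # 65535
```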
So, make the following changes to your code:
```python
import numpy as np
import cv2

[...]

info = np.iinfo(data.dtype)                  # Get the information of the incoming image type
data = data.astype(np.float64) / info.max    # normalize the data to 0 - 1
data = 255 * data                            # Now scale by 255
img = data.astype(np.uint8)
cv2.imshow("Window", img)
```
Note that I've additionally converted the image to np.float64, in case the incoming data type is not already floating point, and to maintain precision when doing the division.
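As a quick sanity check on synthetic uint16 data (cv2.imshow omitted, since the conversion itself is pure NumPy):

```python
import numpy as np

data = np.array([[0, 2], [65534, 65535]], dtype=np.uint16)
info = np.iinfo(data.dtype)                  # info.max == 65535 for uint16
scaled = data.astype(np.float64) / info.max  # normalize to 0 - 1
img = (255 * scaled).astype(np.uint8)
print(img)
# [[  0   0]
#  [254 255]]
```

Note that the small values [0, 2] stay at 0 instead of being stretched to [0, 255], which is exactly the behavior you want.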