
Wednesday, June 28, 2023

Chapter 12: Pixel Data

Frame 0001
I guess that one can't escape talking about pixels when dealing with DICOM. After all, imaging is what DICOM is all about, and digital images are built from pixels. So today, to celebrate release 2.0.2.6 (and the x64 version) of the DICOM Toolkit, I'm finally going to touch the heart of every DICOM image: the Pixel Data.

For today's post I've prepared a little C++ test application that really does nothing much other than putting pixels into the pixel data of a DICOM file and saving it. Well, not exactly nothing much, because it creates a huge DICOM file, more than 0.7 GB, compresses it, and never uses more than 20 MB of memory. If you want to know how, read on.


If you haven't done so yet, download the example source code and the latest version of RZDCX and regsvr32 it. For this example, it's important to use version 2.0.2.6 or later. You can also download the 200-frame JPEG-compressed DICOM file that the test application creates. Just make sure you have enough RAM before double-clicking it, because most viewers will take ~770 MB to display it.

The first part of the application, up to line 87 (look for the comment "Setting the image pixel group elements"), is rather standard. We set some elements that are mandatory in every DICOM object, like patient name and ID and the UIDs for the study, series and instance, and set the object class to secondary capture.

Now comes the image pixel module with the tags starting with 0028. This group is responsible for describing how to read the pixels. I'm going to go over each one and explain its use and meaning. Here's a dump of this group from the uncompressed file created by this example:

(0028,0002) US 3                                        #   2, 1 SamplesPerPixel
(0028,0004) CS [RGB]                                    #   4, 1 PhotometricInterpretation
(0028,0006) US 0                                        #   2, 1 PlanarConfiguration
(0028,0008) IS [200]                                    #   4, 1 NumberOfFrames
(0028,0010) US 960                                      #   2, 1 Rows
(0028,0011) US 1280                                     #   2, 1 Columns
(0028,0100) US 8                                        #   2, 1 BitsAllocated
(0028,0101) US 8                                        #   2, 1 BitsStored
(0028,0102) US 7                                        #   2, 1 HighBit
(0028,0103) US 0                                        #   2, 1 PixelRepresentation
(0028,1050) DS [128]                                    #   4, 1 WindowCenter
(0028,1051) DS [256]                                    #   4, 1 WindowWidth
(0028,1052) DS [0]                                      #   2, 1 RescaleIntercept
(0028,1053) DS [1]                                      #   2, 1 RescaleSlope
(7fe0,0010) OB 00\00\00\00\00\00\00\00\00\00\00\00\ ... # 500616000, 1 PixelData


And here's what the JPEG compressed dump looks like:

(0028,0002) US 3                                        #   2, 1 SamplesPerPixel
(0028,0004) CS [YBR_FULL_422]                           #  12, 1 PhotometricInterpretation
(0028,0006) US 0                                        #   2, 1 PlanarConfiguration
(0028,0008) IS [200]                                    #   4, 1 NumberOfFrames
(0028,0010) US 960                                      #   2, 1 Rows
(0028,0011) US 1280                                     #   2, 1 Columns
(0028,0100) US 8                                        #   2, 1 BitsAllocated
(0028,0101) US 8                                        #   2, 1 BitsStored
(0028,0102) US 7                                        #   2, 1 HighBit
(0028,0103) US 0                                        #   2, 1 PixelRepresentation
(0028,2110) CS [01]                                     #   2, 1 LossyImageCompression
(0028,2112) DS [18.0721]                                #   8, 1 LossyImageCompressionRatio
(0028,2114) CS [ISO_10918_1]                            #  12, 1 LossyImageCompressionMethod
(7fe0,0010) OB (PixelSequence #=201)                    # u/l, 1 PixelData


You can see that there are differences, because the data in these elements should describe the pixels as they are in the pixel data element. Notice the difference in Photometric Interpretation. In the JPEG compressed file it's YBR_FULL_422, meaning the pixels are in the YCbCr color space. Also notice that the uncompressed file has a simple array of bytes in the pixel data element, while the JPEG compressed file has a sequence of 201 items, each holding a frame, plus one more (the first one) with an offset table pointing to the offset of each frame in the sequence. Reads like Chinese? Let's go over each element and explain them all.
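Before going over the individual tags, here's a minimal sketch of how that encapsulated Pixel Data value is laid out and how one could walk it. This is plain C++ and not RZDCX; the function and buffer names are mine, and real code would also need bounds checking. The layout itself (a Basic Offset Table item first, then fragment items, closed by a sequence delimitation item) is the one DICOM defines.

    // Sketch only: walking the items of an encapsulated (compressed) Pixel Data value.
    // The first item is the Basic Offset Table (one uint32 offset per frame, may be empty),
    // then one or more fragment items per frame, then a Sequence Delimitation Item.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    static uint32_t readU32LE(const uint8_t* p) {
        return p[0] | (p[1] << 8) | (p[2] << 16) | (uint32_t(p[3]) << 24);
    }

    // 'data' holds the value of (7FE0,0010) when the transfer syntax is encapsulated.
    void walkEncapsulatedPixelData(const std::vector<uint8_t>& data) {
        size_t pos = 0;
        bool firstItem = true;
        while (pos + 8 <= data.size()) {
            uint16_t group   = static_cast<uint16_t>(data[pos]     | (data[pos + 1] << 8));
            uint16_t element = static_cast<uint16_t>(data[pos + 2] | (data[pos + 3] << 8));
            uint32_t length  = readU32LE(&data[pos + 4]);
            pos += 8;
            if (group == 0xFFFE && element == 0xE0DD) break;   // sequence delimiter
            if (firstItem) {                                   // Basic Offset Table
                for (uint32_t i = 0; i < length / 4; ++i)
                    std::cout << "frame " << (i + 1) << " starts at offset "
                              << readU32LE(&data[pos + 4 * i]) << "\n";
                firstItem = false;
            } else {
                std::cout << "fragment of " << length << " bytes (e.g. one JPEG stream)\n";
            }
            pos += length;                                     // jump to the next item
        }
    }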

Rows and Columns

Rows (0028,0010) and Columns (0028,0011) define the size of the image. Rows is the height (i.e. the Y) and Columns is the width (i.e. the X). In our example every frame is 1280 x 960 pixels. We'll see what a frame is in a minute.

Samples Per Pixel

Samples per Pixel (0028,0002) defines the number of color channels. In grayscale images like CT and MR it is set to 1 for the single grayscale channel, and for color images, like in our case, it is set to 3 for the three color channels: Red, Green and Blue.

Photometric Interpretation

The Photometric Interpretation (0028,0004) element is rather unique to DICOM. It defines what every color channel holds. You can think of it as the color space used to encode the image. In our example it is "RGB", meaning the first channel is Red, the second is Green and the third is Blue. In grayscale images (like CT or MR) it is usually "MONOCHROME2", meaning it's grayscale and 0 should be interpreted as black. In some objects, like some fluoroscopic images, it may be "MONOCHROME1", meaning it's grayscale and 0 should be interpreted as white. Other values may be "YBR_FULL" or "YBR_FULL_422", meaning the color channels are in the YCbCr color space that is used in JPEG.
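If you ever handle these values yourself, the decisions they drive boil down to something like the sketch below. It is illustrative only and not part of RZDCX; the struct, its fields and the handled values are my own selection (PALETTE COLOR and the other photometric interpretations would need their own branches).

    // Illustrative only: what a renderer typically derives from Photometric Interpretation.
    #include <string>

    struct PhotometricInfo {
        int  expectedSamplesPerPixel; // should match (0028,0002)
        bool zeroIsWhite;             // MONOCHROME1 displays 0 as white
        bool isYCbCr;                 // needs a color-space conversion before display
    };

    PhotometricInfo interpret(const std::string& pi) {
        if (pi == "MONOCHROME2") return {1, false, false};
        if (pi == "MONOCHROME1") return {1, true,  false};
        if (pi == "RGB")         return {3, false, false};
        if (pi == "YBR_FULL" || pi == "YBR_FULL_422")
                                 return {3, false, true};
        return {1, false, false}; // PALETTE COLOR etc. need dedicated handling
    }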

Planar Configuration

Planar Configuration (0028,0006) defines how the color channels are arranged in the pixel data buffer. It is relevant only when Samples per Pixel > 1 (i.e. for color images). It can be either 0, meaning the channels are interlaced, which is the common way of serializing color pixels, or 1, meaning they are separated, i.e. first all the reds, then all the greens and then all the blues, like in print. The separated arrangement is rather rare, and when it is used it's usually with RLE compression. The following image shows the two ways. BTW, if this element is missing, the default is interlaced.
Interlaced vs separated Planar Configuration
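In code, the difference between the two arrangements for a single RGB frame looks roughly like the following sketch. The function and buffer names are mine, and it assumes 8 bits per sample, as in this example.

    // Sketch: converting one interlaced RGB frame (planar configuration 0,
    // bytes R1 G1 B1 R2 G2 B2 ...) into the separated arrangement
    // (planar configuration 1, bytes R1 R2 ... G1 G2 ... B1 B2 ...).
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> interlacedToSeparated(const std::vector<uint8_t>& interlaced,
                                               size_t rows, size_t columns)
    {
        const size_t pixels = rows * columns;            // samples per pixel is 3 (RGB)
        std::vector<uint8_t> separated(pixels * 3);
        for (size_t i = 0; i < pixels; ++i) {
            separated[i]              = interlaced[3 * i];     // all the reds first
            separated[pixels + i]     = interlaced[3 * i + 1]; // then all the greens
            separated[2 * pixels + i] = interlaced[3 * i + 2]; // then all the blues
        }
        return separated;
    }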

Bits Allocated, Bits Stored and High Bit

Luckily, most toolkits, RZDCX among them, take care of extracting and manipulating the pixels for you, but if you ever need to do it yourself, you'll have to do it according to these attributes and also take little/big endian into consideration.

Bits Allocated (0028,0100) defines how much space is allocated in the buffer for every sample, in bits. In our case we encode a 24-bit RGB image, which is the most standard image on earth, so every channel is encoded in 8 bits, i.e. a complete byte, so samples are always aligned with bytes. All DICOM objects (at least the ones I have looked into so far) always use complete bytes for bits allocated, so it is either 8, or 16 for grayscale images with more than 256 levels of gray.

Bits Stored (0028,0101) defines how many of the allocated bits are actually used. In our case, as every sample value is between 0 and 255, all 8 bits are used, so bits stored is 8. Returning to CT images, where each sample value is between 0 and 4095, bits stored is 12 (2 to the power of 12 is 4096). The remaining four bits are not part of the pixel value and should be masked out when reading the pixels. Sometimes these bits are used to store overlay plane data.

High Bit (0028,0102) defines how the stored bits are aligned inside the allocated bits. It is the bit number (the first bit is bit 0) of the last bit used. In the standard it is always set to one less than the bits stored, but hypothetically it doesn't have to be that way. In our case the high bit is 7. In CT it is 11. Here's an image from the DICOM standard that shows how pixels are arranged bit-wise.
CT Pixel Data in Memory
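Here is a small sketch of what that masking looks like for the CT case just described (Bits Allocated = 16, Bits Stored = 12, High Bit = 11). The function name is mine, and it assumes the 16-bit words have already been byte-swapped into host order.

    // Sketch: extract the stored sample value the way Bits Stored / High Bit describe it,
    // dropping any overlay or unused bits that share the allocated word.
    #include <cstdint>

    uint16_t extractStoredValue(uint16_t rawWord,
                                unsigned bitsStored,  // e.g. 12 for CT
                                unsigned highBit)     // e.g. 11 for CT
    {
        const unsigned shift = highBit + 1 - bitsStored;         // usually 0
        const uint16_t mask  = static_cast<uint16_t>((1u << bitsStored) - 1);
        return static_cast<uint16_t>((rawWord >> shift) & mask);
    }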

Pixel Representation

Pixel Representation (0028,0103) is either unsigned (0) or signed (1). The default is unsigned. There's an anecdotal issue here with the US and SS VR codes and this attribute: when it is set to signed, the group 0028 attributes defined with the ambiguous 'US or SS' VR (e.g. Smallest/Largest Image Pixel Value) should be encoded as Signed Shorts (SS), and when it's unsigned they should be unsigned (US) too.
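Once the stored value has been extracted, Pixel Representation together with the rescale pair turns it into a real-world value (Hounsfield units for CT). A sketch only, with my own function name, assuming two's complement for signed data as DICOM specifies:

    // Sketch: stored sample -> real-world value.
    // Pixel Representation (0028,0103) says whether the stored bits are signed;
    // Rescale Slope (0028,1053) and Rescale Intercept (0028,1052) do the mapping.
    #include <cstdint>

    double storedToRealWorld(uint16_t stored,           // e.g. output of extractStoredValue()
                             unsigned bitsStored,       // e.g. 12
                             bool     isSigned,         // Pixel Representation == 1
                             double   rescaleSlope,     // e.g. 1
                             double   rescaleIntercept) // e.g. -1024 for CT
    {
        int32_t value = stored;
        if (isSigned && (stored & (1u << (bitsStored - 1))))
            value -= (1 << bitsStored);                 // sign-extend two's complement
        return value * rescaleSlope + rescaleIntercept;
    }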

Number of Frames

Number of Frames (0028,0008) defines how many frames are in the image. Usually there's only one and this element is omitted but in DICOM you can create multi-frame image objects and then you have to set this element. In our case we create a multi-frame image with 200 frames.
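Since the frames of an uncompressed multi-frame image are simply laid out back to back in the pixel data, locating a frame is pure arithmetic. A sketch with my own function name; the numbers in the comments are this example's values.

    // Sketch: byte offset of a frame inside uncompressed (native) Pixel Data.
    #include <cstddef>

    size_t frameOffset(size_t frameIndex,       // 0-based, 0..NumberOfFrames-1
                       size_t rows,             // 960
                       size_t columns,          // 1280
                       size_t samplesPerPixel,  // 3
                       size_t bitsAllocated)    // 8
    {
        const size_t frameSize = rows * columns * samplesPerPixel * (bitsAllocated / 8);
        return frameIndex * frameSize;          // frame 200 ends at byte 737,280,000
    }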

This concludes all the mandatory elements of the image pixel module but one, the pixel data.

Pixel Data

It's time to set the pixels into the pixel data element (7FE0,0010). You may ask why all the other image pixel module elements are in group 0028 and only the pixel data is not. Though we don't ask DICOM 'why' questions, this time I would like to ask, because I think there's a good answer. Think about it until we get to the end of this post.

Let's calculate the pixel data length. We have 1280 x 960 pixels in each frame, 200 frames, 3 samples per pixel, each sample is one byte, and we get:
ROWS * COLUMNS * NUMBER_OF_FRAMES * 
SAMPLES_PER_PIXEL * (BITS_ALLOCATED/8) 
bytes, that's
1280 * 960 * 200 * 3 * (8/8) = 737280000 bytes!

Over 700 MB! That's a lot! You don't expect a piece of software to allocate such a memory space in one chunk and get away with it, do you? I don't. That's why I've added SetValueFromFile to DCXELM and SetJPEGFrames to DCXOBJ.

Let's have a look at the last part of the test application:

            ////////////////////////////////
            // Create dummy pixels frames //
            ////////////////////////////////

            el->Init(rzdcxLib::PixelData);
            el->ValueRepresentation = VR_CODE_OB;

            int frameSize = ROWS*COLUMNS*SAMPLES_PER_PIXEL;
            char *pixels = new char[frameSize];

            // Write 200 frames to a file
            ofstream s("pixel.data", ios_base::binary);
            for (int i = 0; i < NUMBER_OF_FRAMES; i++)
            {
                  number2image(i+1); // Let's add some salt to it
                  scaleImageTo(COLUMNS, ROWS, pixels);
                  s.write(pixels, frameSize);
            }
            s.close();

            delete[] pixels;

            int pixelsLength = frameSize*NUMBER_OF_FRAMES;
            el->SetValueFromFile("pixel.data", 0, pixelsLength);
            obj->insertElement(el);

            // Save it as is
            obj->saveFile("color.uncompressed.dcm");

            // Compress it as JPEG Lossless
            obj->SaveAs("Color.jpegLossless.dcm",
                  TS_LOSSLESS_JPEG_DEFAULT, 100, "c:\\tmp");

            // Compress it as JPEG
            obj->SaveAs("Color.jpeg.dcm",
                  TS_JPEG, 100, "c:\\tmp");


First we create the pixel data element and set the VR to OB. OB (other byte) means that every value in the data element is a byte and that's what we should do in this case. For CT images where every sample is stored in two bytes we should use the OW (other word) VR. Because we deal with binary data, this hint is required for the toolkit to store the data properly.

In the for loop, we write all the frames one after the other into the pixel data file. This is where you should copy your image bytes. To add some spice to this example I've burned in the frame number on each frame. This is done using the code in DigiTools.h. It's a little something I've written for this post too. I really work on these posts.

The last step is setting the pixel data file as the pixel data element value. This call doesn't load the data into memory. You are responsible for keeping the pixel data file around for as long as the DCXOBJ instance (obj) lives.

Now we can call saveFile. What's nice is that even here the toolkit doesn't load all the data into memory. Instead it reads small chunks of the pixel data file (16K each, if I remember correctly) and copies them into the DICOM file.
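Just to illustrate the bounded-memory idea (this is not the toolkit's actual code, and the function name is mine), copying a pixel file in fixed-size chunks looks roughly like this:

    // Sketch of a bounded-memory copy: the working buffer stays 16 KB no matter
    // how large the pixel data file is.
    #include <fstream>
    #include <vector>

    void appendFileInChunks(std::ofstream& out, const char* pixelFile)
    {
        std::ifstream in(pixelFile, std::ios::binary);
        std::vector<char> buffer(16 * 1024);          // 16 KB working buffer
        while (in) {
            in.read(buffer.data(), buffer.size());
            out.write(buffer.data(), in.gcount());    // write exactly what was read
        }
    }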

If you run this application and open the Windows task manager, you will notice that throughout the run of this test application it never takes more than 20 MB of RAM.

The SaveAs calls at the end keep the same standard and utilize a temp folder that you provide to do the compression. Every compressed frame is temporarily stored in a file in this folder and then the frames are copied one by one to the DICOM file.

In this example we first compress into JPEG Lossless and then into JPEG. This may take some time to run. On my workstation (Intel Core 2 Duo E6550 @ 2.33GHz with 8GB of RAM) it takes about two minutes, and most of this time is spent on the two SaveAs calls. The toolkit is responsible for keeping the memory resources available, and you are responsible for disposing of the temp folder and its contents after use. I think that's nice. Try doing this with another toolkit.

If you are not concerned about memory resources you can call EncodeJpeg or EncodeLosslessJpeg, or simply set the TransferSyntax property of DCXOBJ. This does not require a temp folder but uses a lot of memory. You can also use SetJpegFrames to set the pixel data from JPEG files and SetBMPFrames to set the pixel data from bitmap files.

So why is pixel data (7FE0,0010) and not, for example, (0028,9998)? I think it's because it is a very long data element, so we want it to be the last element in the file. As you remember, elements are written in order from small tag numbers to big tag numbers. With the pixel data as the last element in the file, we can read the whole 'DICOM header' and skip the heavy lifting of the pixel data. For example, let's say we want to scan a large data set and sort the images according to their 3D volume location and only then load the pixels into a volume buffer. This way we can stop reading every file after group 0028, for example, and save a lot of disk time and memory. That's a good reason and good software engineering, don't you think?
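To make that concrete, here is a minimal sketch of such a 'header skimmer'. It is my own code, not RZDCX; it assumes explicit VR little endian throughout (a real reader must honor the transfer syntax from the file meta group) and it does not handle undefined-length sequences, but it shows the idea of walking the elements in tag order and bailing out the moment Pixel Data shows up.

    // Sketch: skim a DICOM file's header and stop at (7FE0,0010).
    // Assumptions: explicit VR little endian, no undefined-length sequences.
    #include <cstdint>
    #include <cstring>
    #include <fstream>
    #include <iostream>

    static uint16_t readU16(std::ifstream& f) {
        unsigned char b[2]; f.read(reinterpret_cast<char*>(b), 2);
        return static_cast<uint16_t>(b[0] | (b[1] << 8));
    }
    static uint32_t readU32(std::ifstream& f) {
        unsigned char b[4]; f.read(reinterpret_cast<char*>(b), 4);
        return b[0] | (b[1] << 8) | (b[2] << 16) | (uint32_t(b[3]) << 24);
    }

    int main(int argc, char** argv) {
        if (argc < 2) { std::cerr << "usage: skim <file.dcm>\n"; return 1; }
        std::ifstream f(argv[1], std::ios::binary);
        f.seekg(132);                               // skip 128-byte preamble + "DICM"
        while (f.good()) {
            const uint16_t group = readU16(f), element = readU16(f);
            if (!f.good()) break;
            if (group == 0x7FE0 && element == 0x0010) {
                std::cout << "Reached Pixel Data - header read, pixels skipped\n";
                break;
            }
            char vr[3] = {0, 0, 0};
            f.read(vr, 2);
            uint32_t length;
            if (!std::strcmp(vr, "OB") || !std::strcmp(vr, "OW") || !std::strcmp(vr, "OF") ||
                !std::strcmp(vr, "SQ") || !std::strcmp(vr, "UT") || !std::strcmp(vr, "UN")) {
                readU16(f);                         // two reserved bytes
                length = readU32(f);                // long (4-byte) length form
            } else {
                length = readU16(f);                // short (2-byte) length form
            }
            std::cout << std::hex << "(" << group << "," << element << ") " << vr
                      << std::dec << " length " << length << "\n";
            f.seekg(length, std::ios::cur);         // skip the value itself
        }
        return 0;
    }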

66 comments:

  1. Footnote: a comment worth noting from Mathieu Malaterre on DICOM's Google group (comp.protocol.dicom):
    General purpose color multi-frame objects should use the MF true color SC Image SOP class 1.2.840.10008.5.1.4.1.1.7.4 and not the one used in this example. However, the support for the regular SC image object is wider, and using the new MF color object may result in an association negotiation error. So practically, I would use it as it is in the example.

  2. Hi,
    Please clarify the below query related to Pixel Spacing (0028,0030).
    I would like to know whether different pixel spacing is possible in one DICOM image or not.
    For example: [0028,0030] ==> 0.175\0.275

    Replies
    1. Hi Thiru Kumaran,
      If you use the MULTI-FRAME TRUE COLOR SC IMAGE (1.2.840.10008.5.1.4.1.1.7.4) then you may be able use the functional groups to set pixel spacing in the Pixel Measures Macro (see the DICOM standard C.7.6.16.2.1 Pixel Measures Macro) but I didn't find instructions in the standard if this macro is allowed in a per frame functional group or it can only be in the common group for this SOP Class.
      Having said that, I wouldn't recommend using this feature of the standard at all because I don't think most applications existing today will support it.
      Additionally, I couldn't imagine a good design reason to create such images. How come pixel spacing changes over a sequence of images group together as a single instance?
      Maybe you can share the reason for doing so?
      Regards,
      Roni

  3. Hello Roni,
    With my .dcm file I can read positive and negative pixel values in "osiris" or "xmedcon", but once I use dicomread it changes all the values. I have already tried the following solution:

    X = dicomread('the file name');
    meta = dicominfo('the file name');
    Y = X * meta.RescaleSlope + meta.RescaleIntercept;

    But the Y matrix still has the wrong values for the positive pixel values and now zero for the negative ones.
    Ex.:
    OSIRIS MATLAB [X] MATLAB [Y]
    -727 301 0
    -724 302 0
    -722 303 0
    -723 301 0
    89 1117 93
    98 1121 97
    101 1125 101
    101 1127 103
    101 1128 104

    For the 'file name', I have used the file extracted directly from the CD recorded from the CT scan or .dcm file saved on "OSIRIS" or "XMEDCON" and in both cases it generates the same matrix [X].
    Please find those files at the links below:

    77815542: file exported directly from CT scan software
    https://sites.google.com/site/dicomvsmatlab/home/arquivos/77815542?attredirects=0&d=1

    77815542.dcm: file exported from OSIRIS
    https://sites.google.com/site/dicomvsmatlab/home/arquivos/77815542.dcm?attredirects=0&d=1

    77815542.mat: workspace from matlab matrix dicomread
    https://sites.google.com/site/dicomvsmatlab/home/arquivos/77815542.mat?attredirects=0&d=1

    And below is the OSIRIS program link:
    http://www.softpedia.com/get/Science-CAD/Osiris-Viewer.shtml

    I would really appreciate any help!
    Regards,
    Susana

    Replies
    1. Hi Susana,
      OSIRIX is right. It shows the values after Modality LUT transformation.
      If you look at the dump of the file, at group 0028 you will see the following:

      (0028,0002) US 1 # 2, 1 SamplesPerPixel
      (0028,0004) CS [MONOCHROME2] # 12, 1 PhotometricInterpretation
      (0028,0010) US 512 # 2, 1 Rows
      (0028,0011) US 512 # 2, 1 Columns
      (0028,0030) DS [0.1171875\0.1171875] # 20, 2 PixelSpacing
      (0028,0100) US 16 # 2, 1 BitsAllocated
      (0028,0101) US 12 # 2, 1 BitsStored
      (0028,0102) US 11 # 2, 1 HighBit
      (0028,0103) US 0 # 2, 1 PixelRepresentation
      (0028,0106) US 241 # 2, 1 SmallestImagePixelValue
      (0028,0107) US 2745 # 2, 1 LargestImagePixelValue
      (0028,1050) DS [143\300] # 8, 2 WindowCenter
      (0028,1051) DS [892\1500] # 8, 2 WindowWidth
      (0028,1052) DS [-1024] # 6, 1 RescaleIntercept
      (0028,1053) DS [1] # 2, 1 RescaleSlope
      (0028,1055) LO [WINDOW1\WINDOW2] # 16, 2 WindowCenterWidthExplanation

      Rescale Intercept and Rescale Slope are -1024 and 1 resp.
      The data is unsigned as you can see from Pixel Representation.

      When you read it into MATLAB you get a matrix X of type uint16,
      so when you transform it you still have a uint16 matrix.
      You need to cast it, or to create Y as int16 or better int32 of the same size as X (512 x 512), and then make the transformation. Done this way, you will get values identical to Osirix.

      Roni

  4. Hi Roni,

    I am new to DICOM; I work more on the modality side, specifically CT. My question is how PACS handles the differences between vendors. For example:
    - the old CT scanner is sending these values:
    0028 0100 2 | bits_allocated | US | 1 | 0x0010 16
    0028 0101 2 | bits_stored | US | 1 | 0x000c 12
    0028 0102 2 | high_bit | US | 1 | 0x000b 11
    0028 0103 2 | pixel_representation | US | 1 | 0x0000 0

    and the new scanner is sending these values:

    0028 0100 2 | bits_allocated | US | 1 | 0x0010 16
    0028 0101 2 | bits_stored | US | 1 | 0x0010 16
    0028 0102 2 | high_bit | US | 1 | 0x000f 15
    0028 0103 2 | pixel_representation | US | 1 | 0x0001 1

    The images of the new scanner don't look good on the PACS even though they look great on the CT console. Does the PACS admin need to change some configuration on the PACS side to handle the difference in the data?
    Appreciate your help.

    AM

    Replies
    1. Hi AM,
      16 bits stored, signed pixels is unusual, but the PACS should be able to display it. My guess is that you also use the JPEG Lossless compression transfer syntax.
      Many implementations of JPEG Lossless, specifically the ones that use the Cornell University code base and not the jig code, have a bug and fail on 16-bit signed pixel data.
      If this is not the case then it's probably something with reading 16-bit signed data, because to cast it to int some applications will make it 17 bits (e.g. dcmtk).
      I recommend changing your implementation to more common pixels, like 16-bit unsigned, if you have to use all 16 bits. It's probably not your bug, but it will spare you from explaining that to customers.
      Roni

    2. Thank you Roni, very much appreciated.

    3. P.S. Let me know if it was a JPEG Lossless issue. These bugs were fixed and I might be able to send you a patch for that. Send me an email or fill in the contact form on www.roniza.com

  5. I will be working on this issue tomorrow with the Agfa rep on their Impax 6.3.1. They claim that the problem is from our CT, but I was able to have our images displayed perfectly using another application on their PACS monitor.
    I will let you know the outcome.
    Thanks for your help.

  6. I have a question

    I need the pixel data in a DICOM image to implement volume rendering.
    Is there no binary file or sample code which extracts only the pixel data as a 2D or 3D array?
    Or, how can I handle pixel data in DICOM using DCMTK?

  7. OFCondition cond = _item->loadAllDataIntoMemory();
    if (cond.bad())
        return DCMTKError(cond);

    DcmElement* elem;
    cond = _item->findAndGetElement(DCM_PixelData, elem);

    if (cond.good())
    {
        if (elem)
        {
            DcmPixelData* pixelData = (DcmPixelData *)elem;

            *pVal = 0;
            Uint16 *data = NULL;

            OFCondition cond = pixelData->getUint16Array(data);
            if (cond.good() && (data))
            {
                *pVal = (int)data;
                return NOERROR;
            }
        }
    }

    Replies
    1. This comment has been removed by the author.

    2. Hi,
      I went through the above code which you updated. But how do I display the DICOM image in VTK? I decompressed the DICOM image using DCMTK and I need to load the image, using the pixel data, into VTK. Can you please tell me how you have done the rendering in VTK?

    3. Your code sample seems to be off, most likely from copying the < in the for loop.

      // Write 200 frames to a file
      ofstream s("pixel.data", ios_base::binary);
      for (int i=0; i
      {
      number2image(i+1); // Let's add some salt to it
      scaleImageTo(COLUMNS, ROWS, pixels);
      s.write(pixels, frameSize);
      }

      That section. Enjoying your blog though.

  8. Hi;

    What about 32-bit real (float) images?

  9. Yes, that's possible. The only example I could find though is RT Dose, and the type is integer, not float. Why would you want real pixels?

  10. Good night, all right? Could you help me to draw a new picture, coloring the pixels according to the pixel intensity of a DICOM file read using vtkDICOMImageReader? I can read the matrix using two "for" loops and read the intensity of each pixel, mapping e.g. 1 to 50 = blue, and so on. How can I draw a new picture doing this?

    Replies
    1. Hi Pedro,
      After creating the new image, follow the steps in this post's code examples to insert the data into a DICOM object.
      Roni

  11. Hi Roni,

    I am a student who is currently doing a project that involves measuring lengths on x-ray images that were exported from the PACS system. I am using the Osirix software to perform the measurements. I found it to be a problem when the measurements were in pix. I need the units to be in centimetres and not pix. The furthest I managed to get was a recalibrating option that required me to set a value in cm for the length that was being measured (in pix). I do not know how to recalibrate it. Alternatively, I thought of just simply using an online conversion calculator to convert from pix to cm. I am only measuring vertical distance. I was hoping that you could help me out with this issue. I am quite illiterate in this to be honest, so if you can, please reply in layman's terms. Can I just convert the length measurements by using an online conversion calculator? Alternatively, if you can teach me how to recalibrate the units from pix to cm in Osirix so that all measurements will be in cm in Osirix, that would be an even better solution.

  12. This comment by Norbert Neubauer was deleted by mistake so I repost it:

    Hi Roni,
    Great site. I am new to DICOM so please excuse me if I mix some obvious things. I am about to implement a piece of code to transfer DICOM files via internet.

    When I studied the DICOM format specs I was sure that it is possible that in one dcm file I can have many studies, many series and many images.

    Having that in mind, I thought that for a huge dcm file it would be wise to extract the images out of the dcm consisting of many images, send the pure dicom without images, and send the images sequentially. Then on the other side, upon request, I would compose the dcm file dynamically with all the images that were completely sent.

    However, until now I have only managed to find dcm files with a framed image, or one image in the file at most.

    My questions are (I kindly ask for an answer):

    1. Do you think that what I described seems feasible (stripping pixel data, sending pure small dcm without pixel data, and uploading pixel separately)?

    2. Do dcm files really exist that consist of many pixel data sets composed of many images?

    3. Or maybe there are dcm files consisting of, in fact, other dcm files (or some kind of subset data). So in fact I could have one big dcm and "extract" it into many dcms, split e.g. by study ID. And in that case, when considering transmission of a big dcm, I should split it into a number of smaller dcm files and do something with them. I am a bit lost here. If you could clarify how it works I would appreciate that very much.

    Replies
    1. Hi Norbert,
      'Streaming' Medical Imaging Info including images has always been a challenge. There are so many solutions out there and some of them are quite good.
      First of all, I would like to note that you may not want to transfer the entire DICOM header, especially not over an insecure connection. Instead I recommend transferring only the information you need.
      To your questions:
      1) Yes, very reasonable. As I said, there are many ways to do this and I recommend that you make a survey of existing solutions. With the evolution of HTML5, JavaScript and node.js there are many open source projects, and very good commercial ones for a very reasonable price, that can save you a lot of work.

      2) Yes. There are DICOM files with many 'frames', as we call them. It is very common in ultrasound, where there's a multi-frame ultrasound SOP class, and there are the new CT and MR multi-frame objects. Note one very important thing: the concept of STUDY is not directly related to a DICOM file. Every DICOM Instance, probably stored as a DICOM file, belongs to exactly one STUDY and is linked to it by the Study Instance UID. The complete content of a Study can span multiple DICOM Instances and, more important, can change over time as new Instances are added to the study, for example reports, presentation states, post-processed images and so on.

      3) In your implementation, the system you are developing, you can do whatever you like and you are not bound to DICOM. Remember that DICOM is relevant only when you integrate with other systems. Within your system boundaries you are free as a bird. It's when you want to communicate with other systems that DICOM is your tool, your language to speak to other systems. In a way, DICOM is to Medical Imaging what Written English is to International Business.

  13. Hi Roni,
    Thank you for your answers.

    I will transfer data over secure connections only, so although I take your arguments into account I am still thinking about transferring the full dicom header. This is because I am considering only stripping the "heavy" part and not going too deep into the dicom data, if possible. I just wanted to confirm that "nested" dicom files exist (i.e. one dcm file can contain many sets of data, especially pixel data, which is heavy).

    I actually need to do 2 things.

    a). Transfer dicom data between some nodes
    b). Let my app be visible as a dicom node, i.e. let the PACS send data to me and let the operator of my app search for some data to be sent to another dicom node. It means that I need to implement a dicom listener and a dicom sender able to issue dicom queries. Currently, as I do it in Java, I tried some basics with the dcm4chee toolkit, but I have not decided yet which libraries I will be using by the end of the day.


    Re: 1. I have to write it myself, however with the help of libraries, I don't want to reinvent the wheel

    Re: 2. Do you know of a place, or can you offer an example DICOM file to download, that has some nested data? All I have found until now is single dcm files or zipped trees of dcm files. If I understand correctly, one dcm file can have some nested data and each set of that nested data can contain pixel data space. The idea is to recursively parse that kind of nested dicom, exclude the pixel data, and then place the pixel data back dynamically when the image parts are transferred. I have some doubts, however, because whatever I saw was rather small dicom files (but many). So maybe I should rather focus on transferring each of the dcm files from the set and not split a particular dcm file into parts (data + pixels).

    Re: 3. I know I can communicate between my nodes in whatever way fits the transmission requirements, not just DICOM. However, as my system must also act as a dicom node, I finally have to implement some kind of listener offering a buffer to store dicom data, and be able to query for the data in other systems acting as dicom nodes.


    Once again thank you for your answers and if you could answer / comment on current ones I would appreciate that very much.

  14. Hi Roni,

    Like Norbert above, I've also been looking for example Dicom files containing multiple images. I'm looking into adding Dicom support to my online and offline volumetric rendering tool (which currently supports Analyze/Nif-TI1), but I'm having trouble locating example files that cover a good range of the different potential configurations/layouts/data types/compressions/etc.

    It would certainly be useful to have some sort of definitive "standards test" data set. Do you know where I could find, or do you have, something like this?

    Thanks,
    Rich

  15. Hi Rich
    There's test data on IHE's web site and on the NEMA DICOM download FTP site.
    There are also many samples on the OsiriX web site.
    I try to delete any data that I get and not keep copies of files after I finish the task related to them.
    Roni

  16. Hi Roni,
    Thanks for the prompt reply! Yeah, I've found some of my current test data from the NEMA DICOM FTP as well. However, the data sets I've found thus far have been pretty poorly labeled/categorized and scattered in respect to establishing a good data set that ensures I've got all of the core bases covered for the binary format. I was hoping such a data set might already exist for the purpose of covering all real in-use data types and layouts, compression types, etc. But if not, I guess I'll have to continue scavenging. :)

    Thanks again,
    Rich

  17. Hi Roni,
    Thank you for the excellent post. I have a question about MR spectroscopy DICOM files. There are a few attributes specific to these files (e.g. Data Point Rows (0028,9001), Data Point Columns (0028,9002), Data Representation (0028,9108), Spectroscopy Data (5600,0020) and so on). I have a file whose SOP Class UID is MR Storage (1.2.840.10008.5.1.4.1.1.4) and not MR Spectroscopy Storage (1.2.840.10008.5.1.4.1.1.4.2). That's why I can't see the specific spectroscopy tags (I think so). When I open this file with a DICOM viewer I see a set of grayscale squares.

    I want to extract the spectroscopy data (array of intensities) from this file using Pixel Data. It is necessary to create a spectrum for each voxel. Can I do it, or do I only need the data from the Spectroscopy Data (5600,0020) attribute? The standard says that the Spectroscopy Data attribute points to a data stream of intensities that comprise the spectroscopy data.

    I look forward to hearing from you.

    Replies
    1. Dear A.
      MR Image (1.2.840.10008.5.1.4.1.1.4) and MR Spectroscopy (1.2.840.10008.5.1.4.1.1.4.2) are two different IOD's (or classes).
      An instance of MR Image does not carry MR Spectroscopy data.
      You have to get an MR Spectroscopy Instance.

    2. I have the software Sivic. Using this software I can create a spectrum from a file with the MR Image SOP Class, but I have no idea how Sivic does it. It means that it's possible to store MR Spectroscopy data in files with the MR Image SOP Class. The file that I have is produced by a GE MR machine (maybe this fact is important).

  18. Hi Roni,

    From the above article we can calculate the pixel size for uncompressed images

    But is it possible to calculate the pixel data size of compressed images without reading the actual pixel data?

    Thanks,
    Sreeraj

    Replies
    1. The information about the pixel size in mm is not affected by the image format in which the pixels are stored. Where in this article do you find such linkage?

  19. I have a serious concern about MR Spectroscopy SOP Class UID 1.2.840.10008.5.1.4.1.1.4.2. When I see the rows and columns in the dcm, the values are 1 and 1. Since this is a multi-frame image, it checks for Bits Allocated, which is absent, and fails. How do we fetch such images? Please assist me in resolving this concern.

    Thank You,
    Kishor.

  20. This comment has been removed by a blog administrator.

  21. While loading a DICOM file in my software I get the error "pixel data not found in DICOM file". Can you explain this?

  22. Hi all, this article is interesting and I have a question. I work with DICOM and PACS; I would like to know whether with these tags I can create a DICOM image?

  24. The links to the example image and C++ code are currently broken. Do you think you can make these available again (perhaps through a github repo?).

    In particular I'm interested in whether all elements need to be of even length. If that is the case, then how is a string like "RGB" supposed to be padded? I'd like to see how you solved this in your code.

    Replies
    1. The application is available in our examples package of RZDCX.
      The post link was fixed as well.
      RGB is encoded with an extra space, like any other odd-length string in DICOM.

  25. I represented Pixel data and pixel array in Python but I am not able to distinguish between them

  26. Hi,
    I need to know the points per inch of the image, because I need to transform the pixels to mm in order to create fiducial points in 3D Slicer, and I can't select the X, Y, Z values because of that.
    How can I know the DICOM pixel size?
    Thanks

    Replies
    1. The tag is pixel spacing:
      (0028,0030) DS [0.1171875\0.1171875] # 20, 2 PixelSpacing
      It's the pixel size in mm (row spacing\column spacing).

    2. THANK YOU SO MUCH!!!
      I owe you one! Thanks

  27. Hello - I have been trying to create a volume render by plotting every pixel and overlaying the slices. The approach I have used works when I am working with MRI. Say for example X = 512, Y = 512 and Z = 1.25 mm slice thickness, 138 images: the MRI volume looks perfect. However, when I apply the same logic to a CT image I have to halve the slice thickness for the volume to look perfect (as it looks in 3D Slicer). I am not sure why this is happening. Any help is appreciated. Cheers, Aravind

    Replies
    1. Hi Aravind. That’s because slice thickness is indeed the thickness of the slice and not its position. In your CT the slices are thick and overlap. Use the difference between the Z’s in the last component of Image Position Patient instead.

  28. Roni - Thanks for your reply. On the MRI that I am rendering, the Image Position Patient (0020,0032) z component is 98.4437. This Z is the same on all the 138 slices and the slice thickness (0020,0050) is 1.2. The number of slices is 138. So do you mean that the volume render object must have a total z of 98.4437?

    If I go with a slice thickness of 1.2 and 138 images then I get a total z of 165 for the volume, which is incorrect?

    For the CT I have an Image Position Patient (0020,0032) z component of 12.75. This Z is the same on all the slices and the slice thickness (0020,0050) is 1.25. The number of slices however is 305. So what should be the Z of my total volume?

    Sorry if I sound too confusing.

    Thanks Aravind

    Replies
    1. Either the X or Y or Z component of Image Position Patient has to change. Otherwise all the planes are on top of one another and it's not a volume. Are your DICOM files originals or were they processed? Check ImageType. It should start with "ORIGINAL\PRIMARY".

    2. The CT imageType is Original \Secondary\ AXIAL.
      Image Position Patient ( x: -160, y: -160, z: 12.75 ) and the Image Position patient is the same for all the slices as above.

      The MR Image Type is Original \ Primary \ Other
      Image Position Patient ( x: -119.76, y: -141.253, z: 98.4437 ) and the Image Position patient is the same for all the slices as above.

  30. We're having issues with US study measurements showing in pixels instead of cm. We do not have an accurate reference on the study to calibrate our measuring tool, so our readers are lost. Can you tell me which DICOM tags are missing from the study? Any tips on how to resolve this?

    Replies
    1. The DICOM Ultrasound SOP Class uses Supplement 84 (ftp://medical.nema.org/medical/dicom/final/sup84_ft.pdf) - US Regions Calibration. It's a bit more complicated than pixel size, because multiple regions can be defined on a single image with different axis units; for example, the Y axis on Doppler images may be velocity. See here: http://dicom.nema.org/medical/dicom/2014c/output/chtml/part03/sect_C.8.5.5.html
      This module is implemented using RZDCX in our DICOMIZER. You can download and check. http://www.roniza.com/products/dicomizer/

  31. We are having issues with US studies only allowing our readers to take measurements in pixels instead of cm. The studies do not have an accurate reference line, so we are lost. Any ideas which DICOM tags are missing? Thoughts on how to resolve?

  32. I am working with de-identified CT DICOM files. I am receiving this error when I try to dicomread some of them:

    Error using dicomread>processRawPixels (line 765). Not enough pixel data.

    Error in dicomread>newDicomread (line 236) X = processRawPixels(metadata,frames);

    Error in dicomread (line 86) [X, map, alpha, overlays] =
    newDicomread(msgname, frames, useVRHeuristic);

    I was looking through all the pixel data tags and there is pixel information in the file, but after I read this blog I think it may have been compressed, because it shows a 'LossyImageCompression': '00' tag, which you show in your JPEG compression dump. Is there any way to view these files? I don't have the originals.
    Very informative post! Thank you!

    Replies
    1. As far as I know MATLAB's dicomread doesn't support compressed DICOM files. It calculates the pixel data length from the 0028 group and then tries to read this length from the pixel data tag. When the file is compressed it fails, of course. Check the transfer syntax of the file and check that it is one of those supported by dicomread. It may be that your anonymization dropped the transfer syntax info from the header. I don't know which one you're using.

    2. I compared the TransferSyntaxUID of one that works to one that did not and they were both the same... '1.2.840.10008.1.2.1'. I believe that Matlab was used to deidentify the files. Unfortunately, I did not deidentify them, but I've been tasked to correct it.
      I am looking into decompressing the files.
      Thank you for your posts. I am teaching myself DICOM and this has been invaluable.
      Danielle

  33. I have a DICOM file that has the following attributes:
    (0028,0004) CS [MONOCHROME1] # 12, 1 PhotometricInterpretation.
    (0028,0100) US 16 # 2, 1 BitsAllocated
    (0028,0101) US 12 # 2, 1 BitsStored
    (0028,0102) US 11 # 2, 1 HighBit
    (0028,3010) SQ (Sequence with explicit length #=1) # 5634, 1 VOILUTSequence
    (fffe,e000) na (Item with explicit length #=4) # 5626, 1 Item
    (0028,0000) UL 5614 # 4, 1 GenericGroupLength
    (0028,3002) US 2789\1307\16 # 6, 3 LUTDescriptor
    (0028,3003) LO [RP1KT] # 6, 1 LUTExplanation
    (0028,3006) US 0\0\1\1\1\1\1\2\2\2\2\3\3\3\3\4\4\4\4\5\5\5\5\6\6\6\6\6\7\7\7\7\8\8... # 5578,2789 LUTData
    (fffe,e00d) na (ItemDelimitationItem for re-encoding) # 0, 0 ItemDelimitationItem

    When this is opened in a particular viewer, it shows the image all in white and its window center and width are displayed as 0.

    When opened in other viewers, it displays in the proper gray scale. Do you know why this one viewer is showing it all in white?

    Replies
    1. I can guess that it ignores the VOI LUT Sequence, but you had better ask them (the manufacturer).

  34. Hi Roni,

    Thanks for your information. This blog is really helpful.

    Please answer my followings queries:

    1) I have one MR image, which has dark pixels and a white area. I opened it in the Radiant viewer (3rd party DICOM viewer), and when I create an ROI over the dark area the mean value is low (around 50), and over the white/grey area it is higher (around 1060). So is this OK behavior?
    Because the measurement calculation should be based on the Modality LUT (Rescale Intercept & Rescale Slope), so it should be about the same.

    2) Also, I want to know how to identify dark pixels and white pixels while calculating the mean value of an ROI (an ellipse in my case) or any measurement calculation.

    -Akhilesh

  35. Hi Roni (and everyone),

    Hoping this blog is still active. Needing some help.

    Working with a new application where I don't have source code (yet), but running in circles.

    DVTK dcdump (and other tools), show an error at 7FE0,0010 Pixel data:

    @0x0000036c,0x000002e8 of 0xffffffff: (0x7fe0,0x0010) OX Pixel Data VR= VL=<0x474747>

    (0x7fe0,0x0010) OX Pixel Data - Error - Bad Value Length - not a multiple of 2 - VL is 0x474747 should be 0x474748
    Error - Bad attribute value - Bits Stored = 24 exceeds High Bit 7 + 1 using 8
    Error - Dicom dataset read failed

    So far I've been running in circles trying to locate the tag and better understand it, but I am finding inconsistent information:
    - In the DVTk DICOM Editor Tool the tag shows as OW, VM as 1, and the Value column shows: C:\Users\davex\AppData\Roaming\DVTk - The Healthcare Validation Toolkit (www.dvtk.org)\DICOM Editor\5.0.3.0\Results\W08L0010.pix

    - Radiant shows VR=OB/OW, VM=1, Value=00 00 00 00 00 00 00 00 ... ... ...

    - Weasis can't read this media message, Pixel No Value - Outside Range

    After reading several articles, thought I'd reach out for some insight (all much appreciated).

    Thanks,

    David

    Replies
    1. Hi David,
      The log you posted states the errors.
      1) (0x7fe0,0x0010) OX Pixel Data - Error - Bad Value Length - not a multiple of 2 - VL is 0x474747 should be 0x474748
      And I guess most SDKs will overcome this.

      2) Error - Bad attribute value - Bits Stored = 24 exceeds High Bit 7 + 1 using 8

      The bits stored should probably be set to 8 (not 24) and samples per pixel should be, I guess, 3.

  36. Hi Roni,

    I have a question regarding the length of the PixelData value. I have a few examples in which the real value is much longer than the value specified in the length fields. For example, given an image of 0x200 * 0x20a pixels, with 16 bits allocated per pixel, I would expect the length field of the PixelData to be 544968, and that's the value I see! However, after bypassing these bytes, there are still 0x0c (zero) bytes until I reach the next image metadata.

    Can you please give me an idea why this can happen (I see this throughout the data, not only for a single image)? BTW, this also occurs for overlays.

    Thanks,

    Replies
    1. Hi,
      Something is fishy in the description. What next image metadata? Is this a multi-frame image? Because then the length would reflect that (rows * columns * samples per pixel * bits allocated/8 * number of frames); so it's not multi-frame, and thus there's no 'next image metadata'. Also it's not compressed, because you say the length of the pixel data element matches rows * columns * 2 - but it doesn't, because 512 * 522 * 2 is not 544968.
      So it looks like you should verify your reading and things will fall into place.

  37. Hi Roni,
    Hope you are great!

    I'm working on a project (a web application) to anonymize the tags (name, sex, etc.) of DICOM images so we can send them to doctors for research purposes.
    So how can we hide the pixels specific to these tags?
    Which language is the most compatible with DICOM? And are there some references concerning this subject?

    Thanks for helping!

    Joseph FARAH, PhD

  38. This is great information, thanks for putting it together!

    One of the CT scans I am looking at is missing this (7fe0,0010) PixelData attribute. Is there something that can be done to fix this at the CT scanning machine? This particular scan is from an Aquilion PRIME scanner.

    Thanks for the help!

    Replies
    1. A CT image without a pixel data tag is something I don't know how to help you with. Pixel data is mandatory for a CT image.
