Fracture detection in hand using image processing
What I have done:

  1. Took the input image and resized it to a standard size,

    as I have to compare it with a template.

  2. Converted it to binary using a threshold value.

  3. Detected connected components and displayed the largest component,

    as it is the whole hand, as shown here:

    img

  4. Placed the image at the same coordinates to check the placement of the fingers for comparison with the template image,

    but their positioning is different; the purple one is the template image:

    img

  5. I am comparing the images with the image-subtraction method.

This approach will not be able to detect a hairline fracture, as many small lines are detected in the image.

Is there any other method to do this? Let me know if there is one.

Here are the original unprocessed images:

img img

Villar answered 27/4, 2016 at 5:56 Comment(11)
Can you also share some processed images that do contain a hairline fracture, along with the original images? And I don't think comparison with a template would yield any solution, since every image will be different.Ray
Do you also have the source unprocessed images? I think you should segment individual bones and compare them, not the whole hand... Also, you can try to ignore the template and compute the inside surface smoothness; if a fracture is present, the surface will not be homogeneous.Towandatoward
Thank you for replying @Ray; below I have attached an image of the hairline fractureVillar
Thank you for replying @Towandatoward; below I have attached the unprocessed imageVillar
The unprocessed original fracture image: dropbox.com/s/qkxexbdh7kezuiv/Fracture%20in%20hand.JPEG?dl=0 Original image of the hairline fracture: dropbox.com/s/ytaqi338360bvt1/… Image with a hairline fracture segmented using Connected Component Labeling: dropbox.com/s/drps1e3d4019n3x/hairline.jpg?dl=0Villar
I tried to segment individual bones through Connected Component Analysis; as all bones are connected, it detects the whole hand as a single component. Is there any other way to segment individual bones? @TowandatowardVillar
Flood-fill the black space (with a distinct incrementing color per object)... The biggest one is the background; all the others are bones. Then sort them by x,y position so you can compare the corresponding bones between images.Towandatoward
But some skin parts are also detected; how do I remove them? @TowandatowardVillar
@JYOTIRAJAI Filtering... You can try casting scan lines and ignoring edges that are too smooth (bones have bigger spikes in intensity); you can also use a geometry-based filter to acquire the bones. You can also try to find the brightest pixels in the image and grow/scan from there (inside out)...Towandatoward
@Towandatoward Hello, I have filled the image with different colours. Here are the images: 1)dropbox.com/s/9theipvicem2m7a/thresholding1.jpg?dl=0 2)dropbox.com/s/rpusg9s557fdm6l/thresholding.jpg?dl=0Villar
@JYOTIRAJAI Added an answer with an example of what exactly I meant... You need to play with the threshold and blur strengths to make it work as intended. BTW you can also adapt/use this https://mcmap.net/q/122689/-how-to-find-horizon-line-efficiently-in-a-high-altitude-photo to remove the flesh before applying this.Towandatoward

You almost got it, but what I had in mind (from the comments) was more like this:

  1. prepare the image

    switch to gray-scale, remove noise (with some blurring), enhance the dynamic range, etc.

  2. derive the image along both the x and y axes and create a 2D gradient field

    So recolor the image and create a 2D vector field: each pixel has RGB, so use R for one axis and B for the other. I do this like so:

    Blue(x,y)=abs(Intensity(x,y)-Intensity(x-1,y))
    Red (x,y)=abs(Intensity(x,y)-Intensity(x,y-1))
    

    The sub result looks like this: edges

  3. threshold the image to emphasize the edges

    So take each pixel and test Blue(x,y)+Red(x,y)<threshold; if true, recolor it to the unknown color, else recolor it to the edge color. For your sample image I used threshold 24. After that, smooth the result to fill the small gaps with blurred color. The sub-result looks like this: img

    The greenish stuff is my unknown color and white are the edges. As you can see, I blurred quite a lot (I was too lazy to implement connected components).

  4. detect the background

    Now, to distinguish the background from the bones inside, I use a special filling method (but a simple flood fill will do) that I developed for DIP stuff and have found useful many times beyond my original expectations:

    void growfill(DWORD c0,DWORD c1,DWORD c2); // grow/flood fill c0 neighbouring c1 with c2
    

    It simply checks all pixels in the image; if it finds color c0 next to c1, it recolors it to c2, and it loops until no recolor occurs. For bigger resolutions it is usually much faster than flood fill, since it needs no recursion or stack/heap/lists. It can also be used for many cool effects like thinning/thickening etc. with just a few simple calls.

    OK, back to the topic. I chose 3 base colors:

                                 //RRGGBB
    const DWORD col_unknown   =0x00408020;  // yet undetermined pixels
    const DWORD col_background=0x00000000;
    const DWORD col_edge      =0x00FFFFFF;
    

    Now the background is around the edges for sure, so I draw a rectangle with col_background around the image and grow-fill all col_unknown pixels near col_background with col_background, which basically flood fills the image from the outside in.

    After this I recolor all pixels that are not any of the 3 defined colors to their closest match. This removes the blur, as it is no longer desirable. The sub-result looks like this: background

  5. segmentation/labeling

    Now just scan the whole image; whenever a col_unknown pixel is found, grow-fill it with a distinct per-object color/index, then change (increment) the current object color/index and continue until the end of the image. Beware: avoid using the 3 predetermined colors for labels, otherwise you will merge areas you do not want merged.

    The final result looks like this: bones

  6. now you can apply any form of analysis/comparison

    You have a pixel mask of each object region, so you can count the pixels (area) and remove/ignore areas that are too small, compute the average pixel position (center) of each object and use it to detect which bone it actually is, compute the homogeneity of the area, rescale to the template bones, etc.

Here is some C++ code I did this with:

color c,d;
int x,y,i,i0,i1;
int tr0=Form1->sb_treshold0->Position;  // =24 threshold from scrollbar
                             //RRGGBB
const DWORD col_unknown   =0x00408020;  // yet undetermined pixels
const DWORD col_background=0x00000000;
const DWORD col_edge      =0x00FFFFFF;
// [prepare image]
pic1=pic0;                  // copy input image pic0 to output pic1
pic1.pixel_format(_pf_u);   // convert to grayscale intensity <0,765>
pic1.enhance_range();       // recompute colors so they cover full dynamic range
pic1.smooth(1);             // blur a bit to remove noise
// extract edges
pic1.deriveaxy();           // compute derivations (change in intensity in x and y axis as 2D gradient vector)
pic1.save("out0.png");
pic1.pf=_pf_rgba;           // from now on the recolored image will be RGBA (no need for conversion)
for (y=0;y<pic1.ys;y++)     // threshold recolor
 for (x=0;x<pic1.xs;x++)
    {
    c=pic1.p[y][x];
    i=c.dw[picture::_x]+c.dw[picture::_y];              // i=|dcolor/dx| + |dcolor/dy|
    if (i<tr0) c.dd=col_unknown; else c.dd=col_edge;    // threshold test & recolor
    pic1.p[y][x]=c;
    }
pic1.smooth(5);             // blur a bit to fill the small gaps
pic1.save("out1.png");

// [background]
// render background color rectangle around the image
pic1.bmp->Canvas->Pen->Color=rgb2bgr(col_background);
pic1.bmp->Canvas->Brush->Style=bsClear;
pic1.bmp->Canvas->Rectangle(0,0,pic1.xs,pic1.ys);
pic1.bmp->Canvas->Brush->Style=bsSolid;
// grow-fill all col_unknown pixels near col_background pixels with col_background; similar to floodfill but without recursion and more usable.
pic1.growfill(col_unknown,col_background,col_background);
// recolor blurred colors back to their closest match
for (y=0;y<pic1.ys;y++)
 for (x=0;x<pic1.xs;x++)
    {
    c=pic1.p[y][x];
    d.dd=col_edge      ; i=abs(c.db[0]-d.db[0])+abs(c.db[1]-d.db[1])+abs(c.db[2]-d.db[2]);             i0=i; i1=col_edge;
    d.dd=col_unknown   ; i=abs(c.db[0]-d.db[0])+abs(c.db[1]-d.db[1])+abs(c.db[2]-d.db[2]); if (i0>i) { i0=i; i1=d.dd; }
    d.dd=col_background; i=abs(c.db[0]-d.db[0])+abs(c.db[1]-d.db[1])+abs(c.db[2]-d.db[2]); if (i0>i) { i0=i; i1=d.dd; }
    pic1.p[y][x].dd=i1;
    }
pic1.save("out2.png");

// [segmentation/labeling]
i=0x00202020; // labeling color/idx
for (y=0;y<pic1.ys;y++)
 for (x=0;x<pic1.xs;x++)
  if (pic1.p[y][x].dd==col_unknown)
    {
    pic1.p[y][x].dd=i;
    pic1.growfill(col_unknown,i,i);
    i+=0x00050340;
    }
pic1.save("out3.png");

I use my own picture class for images, so some of its members are:

  • xs,ys size of image in pixels
  • p[y][x].dd is pixel at (x,y) position as 32 bit integer type
  • p[y][x].dw[2] is pixel at (x,y) position as 2x16 bit integer type for 2D fields
  • p[y][x].db[4] is pixel at (x,y) position as 4x8 bit integer type for easy channel access
  • clear(color) - clears entire image
  • resize(xs,ys) - resizes image to new resolution
  • bmp - VCL encapsulated GDI Bitmap with Canvas access
  • smooth(n) - fast blur the image n times
  • growfill(DWORD c0,DWORD c1,DWORD c2) - grow/flood fill c0 neighbouring c1 with c2

[Edit1] scan line based bone detection

As in the linked find-horizon Q&A, you have to cast scan lines and search for a distinct feature recognizing a bone. I would start with the partial derivative of the image (along the x axis) like this one:

scan line

On the left is the color intensity derivative by x (gray means zero) and on the right the original image. The side graphs are the derivative graphs as functions of x and y, taken for the line and row at the actual mouse position. As you can see, each bone has a distinct shape in the derivative which can be detected. I used a very simple detector like this:

  1. for the processed image line, compute the partial derivative by x
  2. find all peaks (the circles)
  3. remove peaks that are too small and merge same-sign peaks together
  4. detect a bone by its 4 consecutive peaks:
    1. big negative
    2. small positive
    3. small negative
    4. big positive

For each found bone edge I render a red and a blue pixel in the original image (at the places of the big peaks) to visually check correctness. You can do the same along the y axis and merge the results. To improve this you should use better detection, for example by use of correlation...

Instead of the edge rendering, you can easily create a mask of the bones, then segment it into separate bones and handle them as in the text above. You can also use morphological operations to fill any gaps.

The last thing I can think of is to also add some detection for the joint sides of the bones (the shape is different there). It needs a lot of experimenting, but at least you know which way to go.

Towandatoward answered 5/5, 2016 at 9:13 Comment(4)
Thank you for your efforts. But there are too many parts detected in the thumb, and the rest of the fingers and the lower part of the hand are detected as almost heterogeneous.Villar
Hello Spetre, is there another way to segment the bones individually? Right now I am trying to remove the flesh using Your solutionVillar
I have detected horizontal lines: Horizontal line in handVillar
@JYOTIRAJAI Sorry, I had no time/mood for this until now. I added [Edit1] with a description of such an approach and an example of my attempt. I think the result is pretty good.Towandatoward

© 2022 - 2024 — McMap. All rights reserved.