By now I've learned how to use the *Trans options to find a pattern on different backgrounds, but I am presented with a new problem:
I'm writing a script for a card game that reads card values from the screen; I've captured tiny snapshots for each card value and suit (hearts/spades, etc.).
While ImageSearch can find all the cards if I loop through all 52 (13 values × 4 suits) card images, it can take a significant amount of time because of the number of snapshots to compare.
Even after applying a little logic trick to reduce the set of images to 36 (card values 1...9 × 4 suits) and deducing that if no image could be found, the card value on screen must therefore be a 10, jack, queen or king (which all count as 10 in this game anyway), it still takes some noticeable time if, for instance, a 9 is to be matched.
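For reference, the reduced 36-image loop I described could be sketched like this (file names such as hearts1.bmp are placeholders for my snapshots):

```autohotkey
; Try the 36 snapshots (values 1..9 in each of the 4 suits); if none
; matches, the card must be a 10, jack, queen or king, which all
; count as 10 in this game.
CardValue := 10                 ; default if no snapshot matches
Found := false
Suits := "hearts,diamonds,spades,clubs"
Loop, Parse, Suits, `,
{
    Suit := A_LoopField
    Loop, 9
    {
        ImageSearch, FoundX, FoundY, 0, 0, A_ScreenWidth, A_ScreenHeight, %Suit%%A_Index%.bmp
        if (ErrorLevel = 0)     ; 0 = image was found
        {
            CardValue := A_Index
            Found := true
            break
        }
    }
    if (Found)
        break
}
```
In the worst case (a 9 of clubs, or a 10/J/Q/K) this runs all 36 searches, which is where the time goes.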
The funny thing is that a snapshot of the A in, say, the ace of hearts looks pretty much like the A in the ace of spades: the same raw pixel pattern, only the first is in different shades of red and the other in different shades of gray/black.
It would be nice if I could improve image-matching speed by reducing the number of snapshots required even further, i.e. by having ImageSearch treat the search area as if it consisted of monochrome pixels (every non-white pixel, both on screen and in the image file, would be seen as a black pixel regardless of its actual color/shade). This would allow me to reduce the set of snapshots to loop through to 9, which is significantly faster than having to match against 36 or even 52 images.
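With such a hypothetical *Mono option (to be clear: this option does not exist, the syntax below is just a suggestion), matching a card value would only need nine searches, one snapshot per value covering all four suits:

```autohotkey
; HYPOTHETICAL SYNTAX: *Mono would treat every non-white pixel on the
; screen and in the image file as black, so one snapshot per card value
; (value1.bmp .. value9.bmp, placeholder names) covers all four suits.
CardValue := 10                 ; default: 10/J/Q/K all count as 10 here
Loop, 9
{
    ImageSearch, FoundX, FoundY, 0, 0, A_ScreenWidth, A_ScreenHeight, *Mono value%A_Index%.bmp
    if (ErrorLevel = 0)
    {
        CardValue := A_Index
        break
    }
}
```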
I tried to accomplish this by toying with *TransWhite *255, but that doesn't work: it matches either anything or nothing at all.
So maybe I am overlooking something here, or it would be nice if ImageSearch had this implemented as a *Mono option.
(A workaround would be to indirectly find the pattern by ignoring the red/black pixels and matching the surrounding background pattern instead; however, those are more pixels than the actual embedded pattern, so I suspect a *Mono implementation would make for a faster ImageSearch than *NonTrans.)
Besides getting faster results from searching through smaller sets of graphics, perhaps(?) *Mono could also make the ImageSearch algorithm perform slightly faster for each picture than the standard mode, which has to take colors/shades/bit depth etc. into account. Naturally, this only works for pictures where you search for shapes rather than their coloring.
Furthermore, it would be nice to be able to do an explicit exhaustive ImageSearch (*Multi, *More, *Verbose, something like that) that doesn't stop at the first occurrence of an image, but continues searching the remainder of the specified search area, counts the number of instances of the image, and stores this value in a variable like A_ImageCount.
One could use *Multi to find all instances of an image, or *Multi3 to limit the search to the first 3 instances, to find out whether AT LEAST 3 instances are on screen while saving search time if, say, 10 instances were on screen.
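As a sketch of how this might be used (again, *Multi3 and A_ImageCount are hypothetical, not existing features):

```autohotkey
; HYPOTHETICAL SYNTAX: stop after at most three matches and store the
; number of instances actually found in A_ImageCount.
ImageSearch, FoundX, FoundY, 0, 0, A_ScreenWidth, A_ScreenHeight, *Multi3 MyPicture.bmp
if (A_ImageCount >= 3)
    MsgBox, At least three instances are on screen.
else
    MsgBox, Only %A_ImageCount% instance(s) found.
```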
Additionally, it might then be useful to be able to tell ImageSearch which instance to report the X/Y coordinates of (currently, by default, only the first and only instance). It could, for example, be implemented as: *Multi3 *Report2 MyPicture.bmp, telling ImageSearch to search for up to 3 instances and report the X/Y of the second instance.
If, however, only one instance is found, ImageSearch would still report back the X/Y of that first image, but the script could detect and further handle this by inspecting the A_ImageCount variable (= 1 in this case), and/or ErrorLevel could be set to a value that indicates that only a partial search could be completed.
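Put together, the partial-result handling could look like this (hypothetical syntax throughout):

```autohotkey
; HYPOTHETICAL SYNTAX: find up to three instances and report the X/Y
; of the second one; fall back gracefully when fewer instances exist.
ImageSearch, FoundX, FoundY, 0, 0, A_ScreenWidth, A_ScreenHeight, *Multi3 *Report2 MyPicture.bmp
if (A_ImageCount = 1)
{
    ; Only one instance: FoundX/FoundY refer to that first instance,
    ; and ErrorLevel could additionally signal the partial search.
    MsgBox, Found only one instance at %FoundX%`,%FoundY%.
}
```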
ImageSearch: options *Mono, *Multi and variable A_ImageCount