SUMMARY
The digiKam online user manual says: "In case of unsatisfying results it might be helpful to use Clear and rebuild all training data. One reason can be that there are too many face tags assigned to a person which shows this person in a way that doesn't really help the search algorithm, e.g. [...], baby/kid/adult photographs mixed... Another reason to use that option can be false face recognition due to a wrong accuracy setting in the Parameters tab."

EXPECTED RESULT
It could be interesting to suggest to the DNN community that the picture date (EXIF: Date Taken, i.e. the original date of the photo) be used as an additional parameter to distinguish pictures of a baby/kid/adult of the SAME person.

SOFTWARE/OS VERSIONS
All
No, a date is not a parameter for facial recognition. What if old children's photos are scanned? Then we have a current date with an old photo. Maik
For scanned pictures, I would say that the "Taken Date" should be corrected anyway, to guarantee a possible "search by date" afterwards. It could also be useful for the DNN. (This is the way a human works: when we know that a picture is old, we tend to consider only our ancestors, and reject the possibility that our child could be in a picture from past decades!) Taken Date is one of the numerous metadata fields that an advanced recognition algorithm could use; GPS positioning could also help: if the position corresponds to your parents' home, there is a higher chance of finding your parents in the pictures! This is to prepare the roadmap. I just recommend sending this suggestion to the DNN guys you are working with... maybe in two years we will see such a possibility added to the DNN module! And then feel free to close this ticket. Thank you Maik.
(In reply to Maik Qualmann from comment #1)
> No, a date is not a parameter for facial recognition. What if old children's
> photos are scanned? Then we have a current date with an old photo.
>
> Maik

Not Before / Not After Date

I think a simplified method that could dramatically reduce the rate of false face recognition is to filter the suggestions by the date the photo was taken. A face would not be suggested if the photo was taken before or after a pair of dates entered into the face's profile. This would avoid faces being suggested for a person who is no longer alive, or before they were born. The filter dates need not be the dates of birth or death; they could simply be the dates before or after which that face would not appear in the pool of faces.

I agree that breaking up the training data by age category sounds helpful, but other than when a person is a young baby this shouldn't be necessary for the AI, as the basic parameters of a face remain fairly similar throughout life.

I understand the issue with photos that have been scanned at a later date; however, scanned images should have their "date taken" changed to the date when the original photo was taken. The information on when the photo was scanned can be tagged or put into the metadata in some other way.
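The not-before/not-after filter described above can be sketched in a few lines. This is only an illustration of the idea, not digiKam code: the profile names, date windows, and the `eligible_people` helper are all hypothetical.

```python
from datetime import date

# Hypothetical per-person date window: a face is only suggested if the
# photo's "date taken" falls inside [not_before, not_after].
PROFILES = {
    "Grandpa Joe": (date(1920, 1, 1), date(1995, 6, 30)),  # died mid-1995
    "Baby Alice":  (date(2018, 3, 12), date.max),          # born 2018
    "Craig":       (date(1970, 1, 1), date.max),
}

def eligible_people(taken, profiles=PROFILES):
    """Return the names whose date window contains the photo's taken date."""
    return [name for name, (start, end) in profiles.items()
            if start <= taken <= end]

# A 2005 photo cannot show Baby Alice (not yet born) or Grandpa Joe
# (already deceased), so only Craig remains in the candidate pool.
print(eligible_people(date(2005, 7, 1)))  # → ['Craig']
```

The filter runs before any face matching, so the recognition engine never even compares the detected face against excluded people, which is where the reduction in false suggestions would come from.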
Hi guys, I'm working at a company that is now using AI, and many parameters can potentially be added to the "big data" on which the (customized) AI engine works to determine statistical results. More than ever, I believe that adding *additional metadata* such as "date taken" to an updated DNN module (or any other picture-oriented AI) should (must?) help improve the quality of the estimation model, and thus the face recognition. In other words, as a face is not static and will, for sure, evolve over the years, how does the AI engine manage this evolution? The above comments deserve to be sent to the DNN team, I think.
(In reply to Alexandre Belz from comment #4)
> Hi guys,
>
> i'm working in a company that is now using A.I. and many parameters seem to
> be potentially added to the "big data" in which the -customized- AI engine
> works and determine statistical results.
> More than ever, i do believe that adding *additionnal metadata* such as
> "date taken" to au updated DNN module (or any other picture oriented AI)
> should (must?) help to improve the quality of the estimation model, and
> thus, the face recognition.
> In other words, as a face is not static, and will, fore sure, evolve with
> years, how does the AI enginee manages this evolution ?
> The above comments would deserve to be sent to the DNN team, i think, and
> recommend.

With the exception of very young children, most face recognition software, and indeed people who are good at recognising faces, use parameters that don't change significantly over a person's life. These parameters are primarily ratios of distances between the main features, such as the distance between the eyes and the mouth, or between the eyes and the tip of the nose. Less important parameters are also used, such as the distance between the edge of the chin and the sides of the face in line with the eyes. There are many such parameters, and most change little because they are based around the skull, which normally doesn't change significantly over a person's lifetime. With a NN, these specific ratios may not be explicitly programmed, but they form the basis of the identification process.

After reading about this topic, including how people who are considered "super recognisers" perform the task so well, I realised that I personally have flaws in how I recognise people. I found that I unknowingly rely heavily on people's hair or hair style, which of course is easily changed in length, coverage, shape and colour. Clearly not a good parameter to use for identification.
Other parameters that people often rely on, but that are poor choices for identification, include skin texture, skin colour and even the thickness of the skin. These parameters all change over a person's life, and they are also easily altered by lighting, humidity, temperature and makeup.
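The ratio idea described above can be sketched as follows. The landmark coordinates here are made-up numbers purely for illustration; a real system would obtain them from a facial landmark detector, and actual recognition networks learn their own features rather than these hand-picked ratios.

```python
import math

# Made-up 2D landmark coordinates for one face, in pixels (illustrative only).
landmarks = {
    "left_eye":  (120.0, 100.0),
    "right_eye": (180.0, 100.0),
    "nose_tip":  (150.0, 140.0),
    "mouth":     (150.0, 170.0),
}

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def face_ratios(lm):
    """Ratios of inter-feature distances, normalised by the inter-eye
    distance so the values are independent of image scale."""
    eye_span = dist(lm["left_eye"], lm["right_eye"])
    return {
        "eyes_to_nose":  dist(lm["left_eye"], lm["nose_tip"]) / eye_span,
        "eyes_to_mouth": dist(lm["left_eye"], lm["mouth"]) / eye_span,
    }
```

Because the ratios are normalised, the same face photographed at different resolutions or distances yields the same values, which is exactly why skull-based measurements are more stable than hair or skin cues.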
Hi Craig,

I agree with you: a haircut would probably not be a good metadata parameter. Nevertheless, we as humans can use it as a complement to distinguish two people with the same skull. Let's take the example of real twins. If at some date (say between 2005 and 2010) Jane had long hair and Mary had cut hers short, a human can know that. A good NN could progressively learn, converge, and finally make better suggestions based on this. And if from 2010 to 2015 they had the opposite haircuts, the NN could use the "date taken" to correlate better and suggest better. The advantage of a NN is that, provided it has access to more (meta)data, it can progressively learn the differences (even if it is a bit of a magic black box) and build a better estimation model.

Let's take another example: if both sisters have the same haircut, and Mary only plays the piano while Jane plays the guitar, a human will be able to distinguish the two when they play in a band, even if they look the same. An ideal NN should be able to learn that too, although the learning process might be long. Finally, if Jane lives in London and Mary in Paris, the GPS coordinates could help in making a guess. After all, a NN is just that: making statistical guesses. And objective metadata are more reliable and easier to learn for a "young" NN.

So back to my initial topic: I think "date taken" and probably "location" would be great additions to the NN. Of course, it's not on digiKam's shoulders, but more on the DNN research team's.
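One simple way to fold such metadata into recognition (a sketch of the general idea, not how digiKam's DNN actually works) is to treat the date and location evidence as priors that re-weight the pure appearance score. All numbers, weights and names below are illustrative assumptions.

```python
# Sketch: combine an appearance similarity score (0..1, e.g. cosine
# similarity between face embeddings) with metadata priors for date
# and place. Weights are arbitrary illustrative choices.

def combined_score(appearance, date_prior, location_prior,
                   w_app=0.8, w_date=0.1, w_loc=0.1):
    """Weighted mix of appearance evidence and metadata evidence."""
    return w_app * appearance + w_date * date_prior + w_loc * location_prior

# Twins Jane and Mary look identical, so the appearance score is the
# same for both. But the photo's GPS position matches Jane's home in
# London, giving her a higher location prior.
jane = combined_score(appearance=0.92, date_prior=0.9, location_prior=0.9)
mary = combined_score(appearance=0.92, date_prior=0.9, location_prior=0.1)
assert jane > mary  # metadata breaks the tie the face alone cannot
```

The appearance term still dominates, so metadata can only tip the balance between near-equal candidates; it never overrides a clearly different face, which matches the "additional parameter" framing of this report.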