A few months ago, news outlets reported that two different groups, both using artificial intelligence, had sought to attribute a painting, the so-called de Brécy Tondo, to Raphael. Only one of them concluded that the work is his. This is a decidedly surprising result for a technology so often praised as the most objective and reliable of all…
Raphael, Artificial Intelligence and the de Brécy Tondo: One Technology, Two Results
A few months ago, news outlets reported that two different artificial intelligence (AI) groups, one from the University of Bradford and the other a private company called Art Recognition, both sought to authenticate the same painting, known today as the de Brécy Tondo. Surprisingly, they reached opposing results. The Bradford group used facial recognition, while Art Recognition analyzed brushstrokes. The former's computer program determined that the work is “undoubtedly” by the hand of Raphael, while the latter reported 85% certainty that the work is not by the artist.
Given the claims made for AI's absolute certainty, these completely divergent results are unexpected and pose a challenge to its use. This is all the more worrisome because AI companies advertise the technology as bringing “objectivity” to authentication. Art Recognition presents AI on its website as “a better way to authenticate art” than human methods, which it considers too subjective. The company's website claims “unparalleled accuracy with modern technology” and promises that results are purely data-driven, with no human intervention that might create bias.
How Can Two “Objective" Tests Give Opposite Answers?
And yet… how is it possible that two completely “objective” AI tests gave opposite answers? Which of the two AI groups is right? Which is the correct result? The process begins to look like a traditional battle between experts.
When asked to comment on the divergent results, the CEO of Art Recognition rightly expressed concern "that this situation could potentially undermine the progress we have made over the past five years in establishing AI as a mainstream method for authenticating art.... Now, more than ever, it is imperative to stress the significance of adhering to rigorous scientific standards. Otherwise, the entire field of AI could face criticism, and we would all suffer the consequences.”
The question for anyone (human) conducting due diligence on an artwork is to know more about these “rigorous scientific standards.” Unfortunately, when scholars raise doubts about the perfect reliability of AI, they are often dismissed as being against modern technology or as trying to hold onto their power over human authentication.
Can Artificial Intelligence Truly Replace Traditional Authentication Methods?
But can AI truly replace traditional methods? Is it really now the “mainstream method for authenticating art”? At present, the range of AI's capabilities is still very limited. To give some examples: AI cannot be used on sculptures, multiples, photographs, lithographs or prints. Nor can it work for drip paintings such as those of Jackson Pollock, or for other modern or contemporary art that does not use painted brushstrokes, which accounts for a large share of contemporary production. Nor can it be used for artists with a small output, such as Vermeer, or for Old Master artists who employed assistants and collaborators to help paint their works, such as Rubens. Nor does it work for artists with numerous competing catalogues raisonnés, such as Modigliani or De Chirico. This suggests that AI requires a painting to have been previously authenticated by a reliable human in a catalogue raisonné before it can be fed into the program in the first place. AI does not work with artists whose techniques and style changed over time, yet most artists varied their techniques and styles over their lifetimes. AI cannot be used to authenticate damaged or restored paintings, such as the Salvator Mundi. Nor can it assess the drawings beneath the layer of paint (underdrawings), because it relies only on a photograph of the painting's surface. Since AI is trained to recognize style, it can only determine whether a work of art is in the style of a certain artist, not whether it is by that artist, as some AI experts have themselves acknowledged.
AI as a Shortcut for Desperate Collectors
Professor Ahmed Elgammal of Rutgers University, founder of the university's Art and Artificial Intelligence Laboratory, is far more cautious, rightly warning that "some of the claims about AI use in authentication can be fraudulent themselves." Unfortunately, some desperate collectors will spare no expense in trying to have their works authenticated and will eagerly bear the cost of an AI test in the hope that it will resolve definitively what is sometimes an unsolvable problem. But even an AI certificate claiming 100% authenticity does not automatically guarantee the authenticity of a work without acceptance by the human scholarly community.
The Need for a Clear Scientific Method
For me, the most pressing issue is to have a clearer sense of the scientific method being applied to AI. Although those promoting AI talk about having a method that ensures the transparency of the process, scholars have long called for AI companies to be transparent themselves about the datasets they choose to feed the machine (choices that are entirely human). In a recent podcast, Art Recognition claimed it uses all known works by an artist to “teach” the computer about authentic works, but is this truly possible? Gathering images of all works in museums and private collections, not to mention those lost or damaged and restored, and lifetime and posthumous copies made by others (and how would the machine recognize that they are copies?), would be a Herculean and perhaps utopian task.
Therefore, in each case, we would need to know how many and which works were used to train the machine; otherwise specialists cannot evaluate the validity of the tests. As any scholar who conducts due diligence on artworks knows, attribution is a complex task that can take years and relies on many factors, such as detailed provenance research, forensic scientific analysis, and a trained eye. Perhaps AI's claim to be able to "authenticate" should be rethought. Finally, as with all scientific tests, the results produced by AI must be perfectly repeatable by a neutral third party who has no conflict of interest and can independently confirm their veracity. It seems that, to date, AI does not yet operate on its own.
Sharon Hecker is an art historian, curator, author, art researcher, and research consultant
This article originally appeared in We-Wealth on 31st October and is reproduced with the kind permission of the author.