This past Thursday, I had the good fortune to attend a ridiculously engaging discussion at the Graduate Center called “The Art of Seeing: Aesthetics at the Intersection of Art and Science”. Among the audience, I may have been the only person not involved with either Art History or Computer Science, though I suspect I follow the trend of a lot of library students when I admit that my sense of work in the Digital Humanities is largely rooted in literature and other text-centric projects. This presentation proved the perfect remedy to what I’m now considering a serious lack of imagination.
The presentation itself was essentially divided into two parts, with a smooth and sensible transition from the Art History perspective to the Computer Science elements. Art historian Emily L. Spratt, from the Department of Art and Archaeology at Princeton University, spoke first on her Digital Humanities (DH) work. Her presentation laid the groundwork for understanding some of the issues art historians concern themselves with, as well as the hesitation that demographic often expresses when it comes to incorporating computer analysis into their work. Spratt began by calling to mind the often blended notion(s) of sight and wisdom. She reminded us of Odin, who gouged out his own eye for a draught from the wisdom-giving Mimir’s Well. Then she cited a passage from 1 Corinthians: “For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known.” Already, I was completely won over by the waxing-poetic scene setting.
For Spratt, it seems to be that very reverence for Art (with a capital A) that gives the use of vision technology in its analysis some potentially uncomfortable philosophical implications. It’s a discomfort common across the humanities as the field develops digital elements more frequently. If art – no matter the form – is computable, what would that imply about our prized human culture? I think there’s a good reason for not wanting to cede too much of this world to computers. Our identity as humans, and then as humans with a sense of culture, is so permeated with the idea that we have something unique and artistic to offer that can’t be replicated. The threat computers pose seems to make us especially defensive. “Maybe they are smarter than us” is a fear we reserve for subconscious nightmares.
A survey Spratt conducted within the art historian community showed a strong disinclination toward using computers as a central tool in the field. However, when the survey questions were reshaped to be more specific, the response was less aggressive. For instance, when asked if computers should be used to help analyze a work and draw up other pieces that it was structurally similar to, the hesitation in responses lessened significantly. In this sense, it’s almost more of a linguistic issue.
Spratt outlined three possibilities:
- First – computers might be more capable of processing visual information than we give them credit for.
- Second – we, as humans, are more computer-like in our mental processes than we imagine.
- Or third – the processes involved in art interpretation are less subjective than we’ve historically believed.
So the potential for computers to understand and interpret art requires not only that humans have this ability first, but also that we be more critical in our understanding of the act of art interpretation itself. In other words, a more thorough analysis of aesthetic judgement is key to creating computer programs that can perform the processes that would benefit art history. This sort of meta-critical process is another point we see in the wider discussion of DH best practices.
Spratt then talked about her experience at the European Conference on Computer Vision (ECCV), where she found computer scientists working through some of the problems that art historians had an already established discourse for. These issues serve to signal the value in overlapping methodology between the two areas of study. Here, with the stress on necessarily collaborative efforts, we again see a theme common throughout DH.
At various points in this half of the presentation, Spratt was explicit, repeatedly assuring us that this was not something to replace the field, just to facilitate solutions to some of the bigger problems.
When I say bigger, I mean to suggest a completely different scale, not to imply any markers of importance. Dr. Ahmed Elgammal, of the Department of Computer Science at Rutgers University, opened with a similar disclaimer before launching into a closer look at his work, marking the difference between the macro-scale study that computers could be useful at performing and the more micro scale that humans are specifically skilled at. His work focuses on the concrete tasks, experiments, and algorithms involved when we want to “give a machine the capacity to see”.
His first stated goal was developing computer models for analysis, specifically style analysis. In the example experiment Elgammal presented, he was surprised to find that the semantic model was the most successful. Typically, this model of analysis is geared toward recognizing physical components in a painting, i.e. object classification – like a haystack in a Monet. All of his experiments had a testing component designed to challenge the initial product. For this style analysis, his work used an image search across styles, i.e. works of art that share elements with the original but don’t share its style – Baroque, for instance.
Identifying artists proved easier than identifying styles. This wasn’t so surprising to me, since I think of period styles as more fluid than an individual’s artistic style. Renaissance art and Baroque art are fairly neighborly. Elgammal referenced the work of Heinrich Wölfflin, who identified pairs of opposing characteristics that he outlined as key differences between Renaissance and Baroque styles. When these difference-pairs were considered in relation to the program Elgammal had written, he found that his work’s output actually reinforced Wölfflin’s theory. However, he recognizes that identifying styles wouldn’t really be a primary goal for art historians, since they’re already capable of that within their own skill set. Still, this does provide an interesting potential starting point for discovering what drives changes in style.
The second aim of computer vision learning in this project is the potential to discover influences. With the macro-level ability to simultaneously “see” a wide number of paintings, computers can more easily recognize similarities between paintings that might have huge gaps between them in terms of style and time – paintings like the ones from Bazille and Rockwell described here.
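The talk didn’t detail the actual visual features Elgammal’s system extracts, but the core move – comparing many paintings at once and surfacing look-alikes regardless of period – can be sketched with hypothetical feature vectors and cosine similarity. Every painting name and number below is invented purely for illustration:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical visual-feature vectors for three paintings; a real system
# would extract hundreds of learned features per image.
features = {
    "Bazille_1870": np.array([0.9, 0.1, 0.4]),
    "Rockwell_1950": np.array([0.85, 0.15, 0.45]),
    "Unrelated": np.array([0.1, 0.9, 0.2]),
}

# Rank every candidate pair by similarity, ignoring style and date entirely.
names = list(features)
pairs = [
    (a, b, cosine_similarity(features[a], features[b]))
    for i, a in enumerate(names)
    for b in names[i + 1:]
]
pairs.sort(key=lambda p: -p[2])
for a, b, s in pairs:
    print(f"{a} ~ {b}: {s:.3f}")
```

Here the two compositionally similar works rise to the top of the ranking even though eighty years separate them – the kind of pairing a human scholar might never think to put side by side.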
Quantifying creativity was, in my opinion, the most awe-inspiring potential use. Elgammal brought us back to Kant – the night’s second classical reference – who posited that artistic genius is the product of two features: novelty and influence. While it might be easy to think that Kant’s theory is strictly philosophical and that creativity is inherently unquantifiable, using a network analysis of nodes (representing individual works of art) and edges (representing influence over historical time), Elgammal’s study produced a visualization that art historians could actually agree with.
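The presentation didn’t spell out the scoring algorithm itself, but the node-and-edge framing can be sketched as a toy network: draw a time-directed edge from an earlier work to a later, visually similar one, then score a work as creative when it anticipates its successors while owing little to its predecessors. All works and similarity values here are invented for illustration:

```python
# Toy "creativity network": nodes are artworks (dated by name), and each
# directed edge runs from an earlier work to a later, similar one, weighted
# by visual similarity. This scoring rule is a simplification, not
# Elgammal's actual algorithm.
works = ["A_1600", "B_1650", "C_1700"]  # sorted by date
similarity = {
    ("A_1600", "B_1650"): 0.9,  # A strongly anticipates B
    ("A_1600", "C_1700"): 0.2,
    ("B_1650", "C_1700"): 0.8,
}

def creativity(w):
    """Influence exerted on later works minus influence absorbed from earlier ones."""
    influence_out = sum(s for (a, b), s in similarity.items() if a == w)
    influence_in = sum(s for (a, b), s in similarity.items() if b == w)
    return influence_out - influence_in

scores = {w: creativity(w) for w in works}
for w, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{w}: {s:+.2f}")
```

In this toy ranking the earliest work scores highest because later works resemble it while nothing before it does – a crude stand-in for the novelty-plus-influence intuition attributed to Kant above.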
Of course there are limits to the algorithm, and Elgammal was clear in outlining them. First off is a problem common to a lot of data-based analysis in the humanities: a simple lack of usable digitized content to act as input data. There’s also the question of which elements a computer scientist involved in the project should select to make a valid algorithm, which is where art historians should come into play. Then there’s the question of the most useful and accurate parameters.
Still, Elgammal’s “Creativity Implication Network” is certainly capable of one of the presentation’s stated goals, namely to “demonstrate the merit of bridging the fields of art history and computer science”. Questions from the audience at the end were largely hyper-micro, focused on concerns about potential holes. For a project in its infancy, though, its possibilities seem to greatly outweigh any real issues at this point. I was a bit curious to see how or if metadata came into any of the experiments, but in a room of computer scientists teaching machines the power of sight, the question felt a tad out of place. Altogether, it was a brain-candy event with plenty of take-aways to mull over in your daydreams. And, for me personally, it was a very eye-opening event – and I don’t even care for puns.
Erin E. McCabe