💼 Professional

Portrait Data Visualisation (ANU TechLauncher)

A year-long capstone project building an interactive data visualisation app for a PhD art exhibition, and what happens when the gallery backs out halfway through.

July 1, 2024
JavaScript · Svelte.js · PixiJS · Node.js · ANU TechLauncher · Data Visualization · UI/UX

Watch showcase video 1

Watch showcase video 2

The context

ANU TechLauncher is a capstone program that pairs computing students with real clients to build real software over one or two semesters. Most projects are what you would expect: startups, government agencies, research groups with a tool they need built.

Ours was a PhD art student.

The client, Melita Dahl, was completing a doctorate at ANU's School of Art and Design. Her thesis centred on a critique of how AI and facial recognition tools are used to categorise people's images without their knowledge or consent. The National Portrait Gallery had given her access to a portrait dataset, including metadata generated by a facial expression recognition model, and she wanted to turn that dataset into an interactive web exhibition.

That is what our team of seven was asked to build.

What we actually built

Three separate web applications, all built around the same dataset of portrait images and their associated metadata.

The core idea was that the portraits themselves would form the visual elements of the charts. Instead of bars made of rectangles, the bar charts were made of stacked portrait thumbnails. The data being visualised (subject age, pose, predicted emotion) was extracted from the same faces that appear in the charts. That double-layered quality was intentional. It was the artistic point.
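A minimal sketch of that idea, assuming a PixiJS v7-style API. The `portraits` array, the `bucketBy` helper, and the `imageUrl` field are illustrative names, not the project's actual code:

```js
import { Application, Sprite } from 'pixi.js';

const app = new Application({ width: 1280, height: 720, backgroundColor: 0x111111 });
document.body.appendChild(app.view);

const THUMB = 48; // thumbnail size in pixels

// Group portraits into buckets by some metadata key, e.g. predicted emotion.
function bucketBy(portraits, key) {
  const buckets = new Map();
  for (const p of portraits) {
    if (!buckets.has(p[key])) buckets.set(p[key], []);
    buckets.get(p[key]).push(p);
  }
  return buckets;
}

// Each "bar" is a vertical stack of portrait thumbnails, not a rectangle.
function drawBars(portraits, key) {
  let column = 0;
  for (const [, group] of bucketBy(portraits, key)) {
    group.forEach((p, row) => {
      const thumb = Sprite.from(p.imageUrl);
      thumb.width = thumb.height = THUMB;
      thumb.x = column * (THUMB + 8);
      thumb.y = app.screen.height - (row + 1) * THUMB; // stack upward from the baseline
      app.stage.addChild(thumb);
    });
    column += 1;
  }
}
```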

VIS-16 worked with a cropped, face-only dataset. VIS-15 used full-body portraits. VIS-6 was a scatter plot that let you plot any two data dimensions against each other, placing each portrait at the intersection of its two values.
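The scatter placement itself reduces to two linear scales, one per axis. A hypothetical sketch of that mapping (the function names are mine, not the project's):

```js
// Map a value from a data domain onto a pixel range.
function linearScale(domainMin, domainMax, rangeMin, rangeMax) {
  return (v) => rangeMin + ((v - domainMin) / (domainMax - domainMin)) * (rangeMax - rangeMin);
}

// Compute an (x, y) position for each portrait from any two metadata keys.
function plotScatter(portraits, xKey, yKey, width, height) {
  const xs = portraits.map((p) => p[xKey]);
  const ys = portraits.map((p) => p[yKey]);
  const toX = linearScale(Math.min(...xs), Math.max(...xs), 0, width);
  const toY = linearScale(Math.min(...ys), Math.max(...ys), height, 0); // canvas y grows downward
  return portraits.map((p) => ({ portrait: p, x: toX(p[xKey]), y: toY(p[yKey]) }));
}
```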

The tech stack was PixiJS for the HTML5 canvas rendering, Svelte.js on the frontend, and Node.js on the server. We ran on a DigitalOcean VPS for staging, with the production build deployed to the client's own domain for the exhibition.
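Wiring PixiJS into Svelte mostly means creating the Pixi application once the component mounts and tearing it down when it is destroyed. A sketch of that shape, assuming a Svelte component and a PixiJS v7-style API, not the project's actual code:

```svelte
<script>
  import { onMount, onDestroy } from 'svelte';
  import { Application } from 'pixi.js';

  let container; // bound to the host <div> below
  let app;

  onMount(() => {
    app = new Application({ resizeTo: container, backgroundColor: 0x111111 });
    container.appendChild(app.view);
    // chart code adds portrait sprites to app.stage from here
  });

  onDestroy(() => app?.destroy(true));
</script>

<div bind:this={container} />
```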

When the gallery pulled out

About halfway through the project, the National Portrait Gallery withdrew their support.

They had become uncomfortable with the nature of the project. Displaying these portraits in the context of an explicit critique of AI categorisation and consent was a bigger ethical and reputational risk than they were willing to take. So they stepped back.

It did not stop the project, but it did change the atmosphere around it. The critique the whole thing was built to make had just been illustrated, somewhat accidentally, by one of the institutions it was aimed at.

It also made the redaction system more central to the work than it had been originally.

The redaction system

Since the portraits could not be displayed without raising the exact consent issues the project was about, we built a redaction layer. Eight distinct styles: a plain black square over the face, a small rectangle, rectangles that overlaid the predicted emotion score, face landmark lines drawn over the image, and variants that replaced the face region with its average colour or with a colour keyed to the model's predicted gender and age.

The eight styles shared enough behaviour that I structured them with a parent class and inherited subtypes, which kept the frontend code manageable as new variants got added.
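Roughly this shape, with illustrative class and method names (the real code had eight subclasses; two are sketched here, again assuming a PixiJS v7-style API):

```js
import { Graphics } from 'pixi.js';

// Base class: shared state and the contract every redaction style fulfils.
class Redaction {
  constructor(face) {
    this.face = face; // bounding box plus model metadata for one portrait
  }
  apply(stage) {
    throw new Error('apply() must be implemented by a subclass');
  }
}

// Style 1: an opaque black square over the face region.
class BlackSquareRedaction extends Redaction {
  apply(stage) {
    const { x, y, width, height } = this.face.bounds;
    const g = new Graphics();
    g.beginFill(0x000000).drawRect(x, y, width, height).endFill();
    stage.addChild(g);
  }
}

// Style 2: the face region replaced with its average colour.
class AverageColourRedaction extends Redaction {
  apply(stage) {
    const { x, y, width, height } = this.face.bounds;
    const g = new Graphics();
    g.beginFill(this.face.averageColour).drawRect(x, y, width, height).endFill();
    stage.addChild(g);
  }
}
```

Adding a ninth style meant one more subclass overriding `apply`, and nothing else.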

The redaction was not just a technical workaround. Showing a face covered by a black rectangle while the bar chart around it reports that face's "predicted emotion" gets the point across more cleanly than showing the face itself would.

My role

I was the UI/UX designer and a frontend developer. The design work covered the landing page, the introduction flow, the in-app navigation, and the loading screen. On the development side I contributed to the landing page and the redaction system alongside the rest of the frontend team.

I also presented for the team at Audit 3, the final formal review, handling the summary-of-work section.

The spokesperson part of the role was something I had not done before in a group project this size. Managing communication between a seven-person computing team and a non-technical client whose primary concern was artistic intent required a different kind of attention than the code did. Melita had strong and specific opinions about how things should look and what the work was trying to say. Getting those opinions into the development process cleanly, without the usual telephone-game distortion, was its own skill to learn.

The exhibition and after

The finished applications ran on a 4K display for Melita's doctoral examination, with visitors interacting via mouse. The project was also submitted as part of ANU TechLauncher's end-of-year showcase.

In 2024 the project was named a finalist in the Australian AI Awards under the AI Innovator category in the Information Technology track. That was not something any of us had on our radar when we were debugging Svelte components at 11pm.

Looking back, this was the project that most surprised me. A year of work for an exhibition that ran for a few days, built on a dataset from a gallery that eventually walked away from it. The final product existed, for a while, on a 4K screen in a room full of art theorists. That is a stranger career moment than I expected to have at 21.