Introduction
The following article emerged from UK Research and Innovation (UKRI)’s creative R&D project MyWorld, which commissioned a sequence of experimental projects containing active R&D. The ‘Virtual Production at Model Scale’ project was commissioned within Aardman Animations. It asked a sequence of targeted questions regarding the utility and applicability of virtual production toolkits and methodologies to a stop-frame animation production studio. Methodologically, the project consisted, first, of scoping areas for innovation by testing how off-the-shelf virtual production tools interacted with existing production practices; and second, of addressing areas of potential innovation with a sequence of experimental toolkits demonstrated in a production-floor sandbox that was freely accessible to Aardman’s key creatives and which would surface possible trajectories for implementation. The overall aim of the project was to discover ways of producing more minutes of animation, in less time, within a limited studio footprint. That is to say, a means of optimizing a materially exacting process that, at first glance, would seem to be resistant to the headline promises of virtual production.
Aardman Animations' reputation is largely defined by the craft of hand-animating clay puppets, so much so that a report that Aardman might run out of clay recently became national news (Pulver, 2023). This focus on the company’s material roots has intensified, rather than diminished, as CGI and VFX have become increasingly prevalent across the animation industry and within Aardman’s own workflow. Nick Park, director of multiple iconic Aardman productions, recently spoke to popular YouTube channel Corridor Crew. Whilst acknowledging the accelerating use of digital techniques in Aardman’s own production practices, Park reinforced how the essence of animation lies in interacting with the material between every frame – ‘you’re in contact with the puppet every single frame, just tweaking and teasing out the emotions… discovering it as you go’ (Corridor Crew, 2025).
What follows is an account of the development and testing of experimental tools within Aardman Animations, seeking to implement virtual production techniques at model scale. It responds to the question of whether a studio practice and iconic aesthetic as materially grounded and complexly orchestrated as Aardman’s are amenable to virtual production as a ‘disruptive’ force (Willment and Swords, 2023). A key question shaping this research and activity is: what adaptations to existing practice are necessary to enable virtual production adoption? However, in the context of Aardman Animations the converse question is equally relevant: what adaptations can be made to virtual production toolkits themselves, in order to drive adoption? Will the material limits of stop-frame animation reveal the limits of virtual production as an emergent animation pipeline, and even the limits of virtualization within already established production pipelines?
Aardman and technology – The Wrong Trousers (1993)
First things first, it’s necessary to stress that Aardman Animations has never been opposed to innovation or technologization. A look at the studio’s early output is enough to illustrate this. In the climax to The Wrong Trousers (1993), the celebrated train chase, motion blur was generated in-camera: set elements were moved during each frame’s long exposure, streaking the background behind the sharply rendered puppets.
The phrase ‘dance of agency’ is drawn from Andrew Pickering’s book The Mangle of Practice: Time, Agency, and Science (1995).
In a production context such as Aardman’s, the relationship between humans and their tools is of paramount relevance to the overall operation of the studio. At full production capacity, up to 50 individual production units (where the film is being actively animated and captured one frame at a time by an animator with a few assisting crew members) can be in operation simultaneously, with each unit delivering increments of animated performance (seconds, or fractions of a second, a day). The production units are overseen by the director and production team; their work is directed by pre-production materials like storyboards and animatics and delivered (in a typical contemporary context) to a post-production team in charge not just of editing, but of colour grading and any additional VFX work. In this context it is necessary to ‘view creation as a situated process wherein new cultural forms are made, without assuming an a priori distinction between supposedly creative acts and routine activity, as well as creative actors as opposed to assistants, equipment and tools’ (Farias and Wilkie, 2016: 1). Pickering’s concept of a ‘dance of agency’ is a means of deepening our understanding of the relationship between creativity and the routinized, technologized and rationalized activities of the industrial production of animation, without losing sight of the creative act of animation where ‘you’re in contact with the puppet every single frame, just tweaking and teasing out the emotions… discovering it as you go’. Pickering outlines a ‘dialectic of resistance and accommodation’ (Pickering, 1995: ix) between human agency and captured material agency (aka machines/technologies) that results in the ‘reciprocal tuning’ of both machines and disciplined human agency. Technology is understood as the ‘surface of emergence for the intentional structure of human agency’ (Pickering, 1995: 20). The generation of motion blur in The Wrong Trousers is a case in point.
The creation of motion blur behind physically animated stop-frame puppets is a technical innovation that complements the core craft of animating by hand. And whilst it both literally and metaphorically sits in the background of the animation, as an on-set practice the creation of motion blur is accommodated within every aspect of the workflow: from the design and rigging of the set, to the lighting set-up, to the staging of the movement during the capture of each frame. What this demonstrates, by analogy, is that accommodations can be distributed across several layers of the production and across multiple scales, temporal as well as spatial. The challenge that faces virtual production’s ‘emergent orthodoxies’ (Swords and Willment, 2024) in the context of stop-frame animation, is that ‘resistance’ to virtualization is prominent and finding accommodations within stop-frame practice, means adapting virtual production itself to alternative temporal and spatial scales on the production floor.
Traditional style/new scale
Countering the prevalent synonymizing of stop-frame animation with ‘craft’ practice, typified by books such as Susannah Shaw’s Stop Motion: Craft Skills for Model Animation, scholars have increasingly attended to the industrial and technological dimensions of the form.
Drawing closer to the themes exposed by the Virtual Production at Model Scale project, Aylish Wood’s discussion of the ‘entanglement’ of stop-frame and VFX practices is concerned with the way in which the boundaries between hand-made and digital-made are struck at various discursive levels, from on-screen aesthetics to production disclosures (Wood, 2020: 250). Wood shows that discussions – like Nick Park’s conversation with Corridor Crew – of digital interventions within traditional hand-made practices serve multiple purposes. A prominent function is to drum up public interest in the latest Aardman production, signalling that the entanglement of VFX techniques within the production leaves the hand-made essence of the work intact.
As such, several workflows adjacent to virtual production pipelines are already in place within Aardman. Notably, recent productions have employed a simulcam workflow, giving animators on-set visual context for digital assets that will be added in post-production.
Significantly, the use of simulcam was designed to wrap around the material practice of animation and is indicative of the degree of care given, within the studio’s culture, to not having the animator ‘driven’ by unnecessary and/or unwieldy technology. There is a ‘talent-first’ culture within the studio which allows the preferences of the key creatives to shape the workflow. Likewise, there is a ‘floor-first’ culture which enshrines the performance of the animation as of paramount importance, and indicates that, within the studio, the delivery of this performance cannot be easily augmented by other non-tactile, digital methodologies. These are situational orthodoxies, ingrained within the studio, that will prove to be points of resistance to any virtual production pipeline that is not flexible and accommodating of the core material activity of stop-frame; nevertheless, they indicate a general trajectory of seeking digital solutions within pipelines that have an increased reliance on digital pre- and post-production workflows.
Dragonframe – Softwarization prior to virtual production
That said, there is one software platform that has become integral to the slow-motion physical labour of hand-animating puppets in stop-frame: Dragonframe. When stop-motion production shifted to DSLR still cameras in the early 2000s, Dragonframe was designed to provide playback for animators, now that they were working with digital stills as opposed to rolls of film. Built by stop-frame animators specifically for stop-frame production, Dragonframe consolidates – within a single interface and with a custom-built controller with programmable hot keys – the multiple tasks that go into setting up, previewing and recording a frame of animation. Furthermore, given the simultaneous control the software offers over camera settings and lighting states, it can build a great deal of data management into the act of animation. For example, it can be programmed to capture multiple lighting states per frame, control the movement of motion-control camera rigs, and organize the data at source. Dragonframe takes the maximum amount of data off the production floor and into the post-production workflow, whilst simultaneously assisting the tactile process of animating by hand. Incidentally, Dragonframe is also responsible for the popular articulation of the craft of stop-frame, as it has a function automating the capture of in-between images for the purpose of generating behind-the-scenes footage. The now ubiquitous time-lapse footage of popular stop-frame characters moving, seemingly assisted by an infinite sequence of hands, is a result of Dragonframe’s data-maximalist approach to stop-frame capture. Despite this industry dominance and emerging cultural prominence, an analysis of Dragonframe and its impact on animation is yet to be undertaken, and it would make a fascinating case study of the ‘softwarization of cultural production’ (Lesage and Terren, 2024). For the time being it’s worth noting that Dragonframe gives the animator a high degree of control within the production unit and automates repetitive data-capture tasks, at the same time as producing consistency: from frame to frame, but also from unit to unit, across the span of an entire production. Dragonframe is being highlighted here, above other software platforms in use at the studio such as ZBrush, Houdini, Nuke, Maya and so on, because of its relationship to the material task of animation. It is the culturally dominant software in the studio, and its adoption was driven by the animators themselves. Dragonframe, therefore, presents a challenge to any process of R&D within a workflow that is dependent upon it: compatibility.
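To make the shape of this automation concrete, the following sketch in Python models a per-frame capture cycle of the kind described above: one rig position, several programmed lighting passes, files organized at source. It is a hypothetical illustration only; the class and method names are assumptions made for the sake of the example, not Dragonframe’s actual API.

    from dataclasses import dataclass

    @dataclass
    class LightingState:
        name: str          # e.g. 'beauty', 'bluescreen'
        dmx_levels: dict   # DMX channel -> intensity (0-255)

    class DMXController:               # stand-in for real DMX output
        def apply(self, levels):
            print(f"DMX set: {levels}")

    class MotionControlRig:            # stand-in for a programmable camera rig
        def move_to(self, frame):
            print(f"Rig at keyframe {frame}")

    class StillCamera:                 # stand-in for a tethered DSLR
        def capture(self, path):
            print(f"Captured {path}")

    def capture_frame(frame, states, dmx, rig, cam):
        """One animation frame: one rig position, several lighting passes,
        each exposure named and filed at source."""
        rig.move_to(frame)
        for state in states:
            dmx.apply(state.dmx_levels)
            cam.capture(f"frame_{frame:04d}_{state.name}.tif")

    states = [LightingState('beauty', {1: 255, 2: 180}),
              LightingState('bluescreen', {3: 255})]
    capture_frame(1, states, DMXController(), MotionControlRig(), StillCamera())

The point of the sketch is the loop structure: the animator’s single button-press fans out into a deterministic sequence of rig, lighting and camera actions, which is what produces the frame-to-frame and unit-to-unit consistency noted above.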
This issue of compatibility is not just about software interoperability. Because the interface is so integrated into stop-frame that Dragonframe is essentially the primary means by which stop-frame animation is produced, additional toolkits must be compatible with the ‘ideology’ that Dragonframe instantiates. As Wendy Chun has pointed out, software is an analogue of ideology because it offers us ‘an imaginary relationship to our hardware’ (Chun, 2005: 43). When the software mediates a relationship with a tangible object – as is the case with Dragonframe, which is programmed to capture and instrumentalize all the reams of relevant data produced by the interaction between the animator and their physical materials – this ideology is extended beyond the relationship with the computer, to the craft and physical practice itself.
The compatibility of the virtual production toolkit for stop-frame animation, then, is dependent upon the means by which the process of softwarization re-imagines the relationships between user and hardware – and, by extension, between animator and material.
Virtual production at model scale – Overview
This brief overview of the academic, cultural and internal considerations of Aardman Animations’ balancing of a core material practice of animation with processes of technologization and softwarization helps establish the landscape of challenges and opportunities within which the R&D project operated. The descriptions of the experimentally developed tools draw from interviews with the R&D team, as well as observations taken during the sandbox sessions where various stakeholders, ranging from production managers to animators, were able to interact with the experimental toolkits and feed back directly to the R&D team.
The project began by reviewing the studio’s current pipeline, in order to test a range of assumptions about the applicability of off-the-shelf virtual production tools to stop-frame, and also to scope where the building and demonstration of experimental toolkits would be most impactful. Ultimately, in order to maximize the chances of implementation downstream, the team focussed on identifying production processes that, by their nature, contained a high degree of friction/waste/latency, and on seeing whether or not virtual production toolkits could meaningfully impact those processes. The search was for workflows and processes that could be improved (accelerated, de-risked, made more collaborative/efficient etc.) by virtual production techniques, without the adoption of entirely new workflows, which might require different skillsets to navigate. This extensive surveying process identified a few areas of interest that correlate to various scales of work within the studio: working with an individual asset, working with a number of assets in context, and operating within the full workflow to capture final-frame images. These areas of investigation led to the scoping of three experiments: volume testing, puppet sculpt and blockout.
Volume testing
As part of the sponsorship of the Virtual Production at Model Scale project by MyWorld, the team was given extensive access to the Experimental Production studio facilities at The Sheds in Bristol, which features a full suite of virtual production equipment including a large LED wall and two smaller moveable LED panels known as wild walls. This was an opportunity to test a range of assumptions about LED-volume virtual production processes and their implementation within stop-frame animation, and to query whether or not the affordances of VP cinematography – for example, interactive lighting and reflections, as well as a shorter post-production pipeline – were applicable to a process that, by virtue of proceeding frame by frame, is de-coupled from real-time applications, works with still rather than moving images and has a different relationship to post-production management than live-action film-making.
As David Gray, a member of the R&D team, made clear: ‘it would be good to prove out that ICVFX is something that can be done at model scale, […] but there was a lot of validation to be done in terms of using stills equipment, and things like that… A lot of how Aardman uses stills technology at the moment meant that it was quite easy to transfer to ICVFX, like they’re using very slow shutter speeds, not trying to synchronise to […] single frames. So, it was valuable in terms of a hearts and minds showcase of technology […] getting people to think about the future ramifications of the technologies. It was useful to show people, technologically, the state of play’ (Gray, 2025, research interview).
The experiment and its accompanying sandbox consisted of taking a shot from the climactic train-chase sequence of The Wrong Trousers and re-staging it at model scale within the LED volume, with the physical set and puppets animated in front of a digital set extension displayed on the wall.
A camera move was also integrated into the set-up, to demonstrate how virtual production can manage parallax, as well as to show how the LED wall (or a model-scale equivalent such as an OLED TV) can be integrated into the DMX capture cycle automated in Dragonframe. Aesthetically speaking, the completed shot demonstrated that convincing final-frame images could be captured against the LED wall at model scale.
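As a sketch of how that integration might be sequenced – and it is only a sketch, with illustrative stand-ins rather than any real engine, wall or rig API – the per-frame cycle re-renders the backdrop for the new camera position before each exposure, so that parallax stays correct across the motion-control move:

    class Engine:                          # stand-in for an Unreal-style renderer
        def render_background(self, pose):
            return f"backdrop for pose {pose}"

    class LEDWall:                         # stand-in for the wall or an OLED TV
        def display(self, image):
            print(f"Wall shows: {image}")

    class Rig:                             # stand-in for the motion-control rig
        def move_to(self, frame):
            return (frame * 0.5, 0.0, 1.2)  # camera pose advances per frame

    class Camera:                          # stand-in for the stills camera
        def capture(self, path):
            print(f"Captured {path}")

    def capture_frame(frame, rig, engine, wall, cam):
        """Per frame: move the rig, refresh the backdrop, then expose."""
        pose = rig.move_to(frame)
        wall.display(engine.render_background(pose))
        cam.capture(f"frame_{frame:04d}.tif")

    for f in range(3):                     # three frames of the camera move
        capture_frame(f, Rig(), Engine(), LEDWall(), Camera())

Because stop-frame is de-coupled from real time, the wall refresh can happen at whatever pace the cycle dictates; what matters is only that each still is exposed against a backdrop rendered for that frame’s camera position.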
In terms of production density, the translation of a stop-frame workflow to a fully fledged virtual production stage was obviously not going to result in space-saving solutions. Likewise, given the complex assemblage of software and hardware within a virtual production stage, the addition of the Dragonframe workflow and the motion-control camera operation workflow resulted in an increase in the number of operators required on-stage at any one time. Whilst the set-up was experimental, it was clear that, even implemented as a streamlined practice, integrating the process of hand-animating the puppets in a stop-frame workflow whilst using Unreal Engine as a driver for the content on the walls would make the animation process a heavily ‘assisted workflow’, requiring extra operators during production without gaining meaningful post-production savings.
This experiment was most impactful from a demonstration point of view rather than an implementation point of view. In reflecting on the experiment the R&D team were careful to caveat: ‘if planned carefully, it could actually be used […] to do moving shots in one volume’ (Bolotova, 2025, research interview). However, the overall assumption was that the post-production pipeline for set extension is not broken, and therefore does not need fixing. Likewise, the simulcam workflow, which allows animators to gain visual context for digital assets that will be added in post-production, provides adequate visual and informational context without complicating the physical business of animation. From the perspective of a workflow that produces fewer hurdles for the animator (in a smaller space), this demonstration amounted to a complexification of an already labour-intensive and highly pressurized procedure.
Puppet sculpt
The need identified in this experiment concerned the process of developing puppet designs. The traditional workflow consisted of the puppet department working within 3D modelling software – ZBrush – showing designs to directors and production designers in a 2D format before having the puppets 3D printed for another round of notes and sign-off. This process is inherently inefficient, in terms of both time and resources, as 3D printing is expensive and relatively slow. Nevertheless, a tangible asset that can be closely examined is essential for feeding back on and signing off the designs. The Puppet Sculpt experiment sought to optimize this process by investigating which virtual environments might provide sufficient spatial context for the detailed examination of digitally designed puppets, to eliminate any needless cycles of feedback and re-iteration. The question was how far the inspection process could be virtualized. A number of formats were available to experiment with: VR, AR and a spatial reality display that offered 3D images, using eye tracking and a lenticulated display to render stereoscopic images without the need for 3D glasses or a VR headset.
As David Gray summarized: ‘initially we thought the iPad was going to be more useful for them because we thought […] they’re going to care about viewing scale in the real world, but actually the viewport of the iPad turned out to be fairly limited. It still feels like a flat frame, even though you’re getting some 3D contextual cues’ (Gray, 2025). By contrast, when viewing the puppet in VR using Unreal Engine, a key creative within Aardman reported ‘this gives me way more context, lets me manipulate the object, and see it from different angles’ (Bolotova, 2025). VR thus proved the more appropriate means by which to translate the inspection of a sculpt to a virtual context. However, in terms of increasing the utility of the tool, it was determined that interactions with the asset in virtual space were distractingly gamified, with the mechanics of grabbing and manipulating the puppet too reliant on an interface and VR controller that interfered with the intuitive handling – revolving and manipulating – of the object. Having to learn the interface disconnected the creative from the puppet design being examined and from the habits and naturalized practices of the inspection process that was being virtualized.
The team experimented with a range of solutions, which revolved around the idea of having a physical proxy twinned with the digital asset, such that physically moving the proxy – in one iteration a Meta Oculus Quest controller, and in another a small statue with an internal gyroscope – would move the digital asset in the viewing environment. This innovation proved a success, and the learning curves for demonstrating the toolkit to Aardman stakeholders with limited VR experience shrank dramatically. Being able to rotate and translate the digital asset in virtual space, without reliance on a game mechanic, provided sufficient spatial context for the examination of the assets, and narrowed the gap between the embodied process of inspection and its virtual production counterpart.
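A minimal sketch of that twinning, under assumed interfaces (nothing here reflects the team’s actual implementation), makes the mechanism plain: each update, the orientation of the handheld proxy is read and copied, through a calibration offset captured when the two are aligned, onto the digital puppet.

    from dataclasses import dataclass

    Quat = tuple  # quaternion as (w, x, y, z)

    def qmul(a, b):
        """Hamilton product: compose two rotations."""
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    @dataclass
    class DigitalPuppet:
        rotation: Quat = (1.0, 0.0, 0.0, 0.0)   # identity: no rotation

    def sync_proxy(puppet, proxy_q, offset_q):
        """Turning the physical proxy turns the asset 1:1, via an offset
        captured at calibration so the two start out aligned."""
        puppet.rotation = qmul(offset_q, proxy_q)

    puppet = DigitalPuppet()
    # Proxy turned 90 degrees about the vertical (y) axis:
    sync_proxy(puppet, proxy_q=(0.707, 0.0, 0.707, 0.0), offset_q=(1.0, 0.0, 0.0, 0.0))
    print(puppet.rotation)   # puppet now mirrors the proxy's quarter turn

The design point is that no button, grab gesture or game mechanic mediates the rotation: the mapping is continuous and 1:1, which is what restored the naturalized handling of the inspection process.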
The process of spatially contextualized examination gave rise to the next problem space: close inspection. ‘It felt very natural, to see something in detail, to put the puppet as close to your face as possible’, the team pointed out. ‘But we assumed in VR that this was impossible, given the way that interocular distances work’ (Gray, 2025). VR headsets struggle to present digital objects in extreme proximity to the eyes, because the stereoscopic display, calibrated to the viewer’s interocular distance, breaks down when the eyes converge on a point too close to the face. The solution the team developed was based on an insight drawn from established use of ZBrush, where, to gain more detail, the artist doesn’t ‘get closer’ to the object but instead ‘zooms in’. As such, the team integrated the ability to scale the digital puppet into the toolkit. This proved an important step in clarifying an implementation trajectory for the toolkit and should perhaps be noted as a moment at which the ‘ideology’ of a given software proved malleable. As it was being used – to provide stable spatial context to the process of inspecting a design iteration of a puppet – the game engine driving the interaction prioritized scalar consistency over scalability, to the point where bringing an object close to one’s face broke the mechanism. By loosening the rigidity of the interaction and allowing ‘zooming’ to replace close inspection, the usability of the tool greatly increased.
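A short geometric note clarifies why scaling substitutes for proximity (a sketch under simple pinhole-viewing assumptions, not a claim about any particular headset’s optics). A detail of size $s$ viewed from distance $d$ subtends the visual angle

\[
\theta(s, d) = 2\arctan\!\left(\frac{s}{2d}\right),
\]

so scaling the puppet by a factor $k$ while holding the viewing distance fixed reproduces exactly the angular detail of close inspection:

\[
\theta(ks, d) = 2\arctan\!\left(\frac{ks}{2d}\right) = \theta\!\left(s, \frac{d}{k}\right).
\]

The creative sees what they would have seen at a distance $d/k$ from the unscaled puppet, while the asset itself remains at a stereo-comfortable distance from the eyes.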
The ‘malleability’ of the digital puppet and its scale, and the software ideology guiding the interaction between puppet and user, proved valuable to the overall creative process of the studio as well. The tool went from being concerned with accurate pre-visualization to being active in the design process. Eve Bolotova reported that creatives ‘can put the VR goggles on and set the scale [of the puppet] how they like it’. This enables what Dan Efergan refers to as a ‘playful creativity’ less constrained by the cost of physical iteration. Across the team, the language converges on this concept of creative discovery as opposed to pragmatic deliberation. ‘The result of being able to scale the puppet [in the Puppet Sculpt VR interface] was being able to view the puppets in context with other puppets’, in order to judge how they sit together within a scene.
Blockout
Building on the insights gained by virtualizing the interaction between stakeholders and individual assets/puppets in the Puppet Sculpt experiment, the team moved on to looking at processes that involve multiple elements. An initial experiment in the sandbox demonstrated the applicability of an AR photography app (Cyclops) to the shot-planning phase of pre-production, where a storyboard, consisting of still sketches describing individual shots, is developed into an animatic, in which rough sequences are assembled from storyboard materials in order to preview how the shots will look when edited together. Cyclops is a highly modifiable AR platform that can – using an iPad – insert digital assets into physical scenes, even using the iPad’s depth sensors to occlude aspects of the digital assets that are covered by foreground elements. During the sandbox sessions it was also demonstrated that the app could be used to build out set extensions around physical sets. The team were pursuing the use-case wherein a fully stocked digital library of project-specific assets – say, posed Wallace and Gromit digital puppets – could be used to ‘find the shots’ at the planning stage, relaying these AR-generated shots back and forth to editing and pre-visualization teams to add an extra layer to the development of the animatic, which would then inform the animation on set.
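The occlusion logic at work here is, in essence, a per-pixel depth comparison. The sketch below is illustrative only – simple lists stand in for camera frames and depth maps, and nothing here reflects Cyclops’s actual implementation: a virtual pixel is drawn only where the asset is nearer to the camera than the physically sensed surface.

    def composite(camera_rgb, sensed_depth, virtual_rgb, virtual_depth):
        """Per pixel: keep the camera image unless the virtual asset is closer."""
        out = []
        for cam, s_d, virt, v_d in zip(camera_rgb, sensed_depth,
                                       virtual_rgb, virtual_depth):
            out.append(virt if (virt is not None and v_d < s_d) else cam)
        return out

    # One scanline: a digital puppet at depth 1.0 is hidden where a physical
    # prop sits at depth 0.5, and visible where the set is further away (2.0).
    print(composite(camera_rgb=['set', 'prop', 'set'],
                    sensed_depth=[2.0, 0.5, 2.0],
                    virtual_rgb=[None, 'puppet', 'puppet'],
                    virtual_depth=[1.0, 1.0, 1.0]))
    # -> ['set', 'prop', 'puppet']

It is this depth-aware compositing that lets a digital puppet appear to stand behind a physical foreground element, rather than floating in front of the whole scene.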
However, the feedback from those sessions revealed several things. First, the toolkit was very similar to the simulcam process already discussed. Second, whilst the context provided for establishing and storyboarding shots with physically present sets was useful, it didn’t relieve demand on physical spaces or offer greater amounts of data to the creative teams at an early enough stage in the process. Whilst shots that were planned in the storyboards and animatic could essentially be ratified with a minimal team (i.e. without the puppets and/or camera and/or lighting team in place), it was problematic that this extra interface merely complemented a process already subject to a high degree of planning, and that it would take up precious time with the physical sets themselves – elaborating on production materials, rather than getting on with the actual animation.
Nevertheless, given that the sandboxing exercise was designed to introduce a variety of interfaces to a variety of stakeholders and prompt conversations around possible use-cases not immediately scoped by the R&D team, a further lead was developed in the Blockout Tool experiment. The intervention that was scoped occurred substantially earlier in the pre-production process, around the 20% scale white-card models of sets that are built for planning purposes before the final sets and props exist.
The team’s expertise in scanning physical assets within Aardman and translating those scans into VR and other spatialized computational contexts converged in the development of the Blockout Tool. In essence, the tool scaled up the 20% scale white-card models to production size. Within the scaled-up environments, VR and AR interfaces could be used to plot action and plan shots. In scaling up the white-card model to production scale, the team chose to disregard any artefacting or lack of fidelity to the final world/image. Instead, the priority fell on enabling a physical planning process to occur without access to the physical sets and props that were still being made.
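At its core this is a uniform transform applied to the scanned geometry – shown below as a deliberately simple sketch, with an illustrative vertex list standing in for whatever mesh format the scans actually use:

    MODEL_SCALE = 0.20                # the physical white-card model is 1:5
    SCALE_UP = 1.0 / MODEL_SCALE      # so every scanned coordinate is multiplied by 5

    def to_production_scale(vertices):
        """Uniformly scale scanned vertex positions to 1:1 production scale.
        Artefacts and low fidelity are tolerated: only spatial layout matters."""
        return [(x * SCALE_UP, y * SCALE_UP, z * SCALE_UP) for (x, y, z) in vertices]

    print(to_production_scale([(0.25, 0.0, 1.5)]))   # -> [(1.25, 0.0, 7.5)]

The simplicity is the point: because only the ‘directive’ spatial layout needs to survive the transform, no re-texturing or clean-up stands between the scan and its use as a scouting environment.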
The development of the toolkit continued from there. David Gray explained: ‘We thought it would be about finding frames via the iPad but immediately [the stakeholders opted for] the VR because the director would say “I need to understand how far away is that fence, how far away is this puppet from there, I want to walk around the scene and understand that spatial context”. So, when it came to the second demo, we framed it up as they’ll go into the white-card one by one – the DOP will go in, the director will go in – they’ll understand the space and start thinking about how the story board will translate to the set’ (Gray, 2025).
This exercise in spatially experiencing an early reference asset was then extended by the presence of a digital viewfinder, with a library of lens pre-sets and so forth. However, the collaborative process of building the scene meant that an AR interface that the director and DOP could work with physically side-by-side was considered more valuable. The demonstration went as follows: ‘They both go into VR and scouted it virtually. Then they came out of VR and started building the scene. And basically within 90 minutes we were able to take a storyboard and build a very basic pre-viz with full 3D context with just a DOP and a director….’ (ibid)
This was a significant optimization of a slow process. As Gray makes clear, ‘to do that in a traditional previs pipeline would have taken far longer and the revision cycles would be far longer’ (ibid), as the director and DOP would sit at the workstation of the pre-viz artist, verbally relaying feedback on every aspect of the process, from framing to puppet positioning, and waiting for that feedback to be implemented. In the scaled-up white-card, accessed both in full VR and via an AR interface, the users of the new toolkit were able to enact their decision-making in a much more direct fashion. Dan Efergan clarified, once again, that this unlocked a playfulness within the workflow: ‘what is empowering is that because there are less people [present] in that moment they will have a greater freedom to find the things they want to find’ (Efergan, 2025).
By way of illustration, Gray highlighted a point during the demonstration where, ‘having built the five-shot scene to the pre-prepared storyboards, we got to the last shot where the puppet ran up the stairs to the balcony, and the director said, well now I’m up here I actually think the fourth shot could be more interesting if we were looking down the stairs at the puppet as it came up. So he basically created a new shot, and was like “I would not have done that just looking at the animatic, I wouldn’t have thought about switching the context for that shot”, so by using the sandbox and having the director and the DOP just chatting they managed to come up with a new shot… by the nature of being able to access that environment tactile-y’ (Gray, 2025).
It is within this context that Eve Bolotova zeroed in on how virtual production might most usefully operate when detached from the real-time applications that define it in live-action productions. ‘Blockout demonstrates how we actually define Virtual Production: a process or a way to make stop-frame animation where you take some of the critical decisions and you make them in a spatialised digital space’ (Bolotova, 2025, research interview). The output of the creative use of the white-card scan and the Blockout toolkit is a detailed pre-visualization with full 3D context that can be used to ‘find’ the shots sketched in the storyboard, and develop them into an edited animatic without any access to physical sets, or physical assets of any kind, being required. With the Blockout tool the circularity of the virtual production pipeline begins to fall into place, perhaps because it does not interact with any of the material processes that sit at the heart of Aardman’s stop-frame animation. Instead, preceding and creatively pre-empting the fabrication of the material aspects of the animation, the Blockout tool offers several opportunities for implementation. The impact of the tool from the perspective of production density is inarguable. As Bolotova summarizes: ‘we just blocked out the whole sequence with the set that was not there, with puppets which are not there… save money, save time, save space’ (Bolotova, 2025).
Conclusion
From a production standpoint, the sequence of experiments that form the Virtual Production at Model Scale project was a qualified success, resulting not only in a company-wide engagement with the possibilities of virtual production at model scale, but also in concrete implementation trajectories for the Puppet Sculpt and Blockout toolkits.
An irony that guides my reflection on these case studies is that the Blockout tool, developed within a project concerned with scaling down virtual production practices, succeeds because it returns a miniaturized reference asset to production scale. As such, it generates a virtual asset that is functional at model scale and therefore easily integrated into the physical workflows associated with Aardman’s stop-frame animation. This indicates an area of investigation that is worth unpicking in this concluding discussion: specifically, the operational role of images within Aardman’s workflow and, by extension, the impact of virtualizing these ‘operational images’ to amplify their functionality within a virtual production pipeline.
Rather than diverting to a discussion of the instrumentality of image-culture as laid out by several prominent media theorists (e.g. Parikka, 2023), it’s worth considering Aardman’s internal culture of operational images through an animation specific lens. Animation scholarship has usefully, if intermittently, focussed on the image cultures of production materials such as storyboards and animatics (Angelone, 2021; Ichifuji, 2023; Pallant and Price, 2015). Aardman’s workflows are replete with 2D images that serve a communicative as well as creative function. As is evident from the discussion above, 2D images mediate between the design and planning stage of pre-production and the physical production in myriad ways – as blueprints, sketches, animatics, schematics and storyboards.
Animation scholar Janet Blatter discusses the multiple roles that storyboard images play with regard to the final animated image. Blatter argues that storyboards are simultaneously ‘fictive’ and ‘filmic’, expressing the cinematic nature of the story being told (filmic) and the story-world in which it is unfolding (fictive). Additionally, however, storyboards are ‘directive’: they shape future activity, namely the execution of the shot (Blatter, 2007). As such these artefacts, whilst products of creative and artistic labour, nevertheless contribute to the management of other people’s labour within the production pipeline. As Paul Ward puts it: ‘the functional value of storyboards lies in the way they are used to regulate, manage and predict workflow in this most labour intensive of production contexts’ (Ward, 2019).
In developing the Blockout tool the R&D team were initially anxious that the white-card model they were scanning would produce a digital asset that was insufficiently ‘fictive’ and ‘filmic’ – that is, so detached from final-frame aesthetics as to inhibit its utility within the workflow. However, arriving at an early enough stage in the pipeline, the ‘directive’ content of the asset outweighed the ‘filmic’ and ‘fictive’ content in determining its utility.
The Blockout tool mediates between ‘directive’ sets of images – the storyboard and the previz/animatic – by enabling a more intuitive interaction with a spatial asset, the white-card model, at a scale 1:1 with the production. What is more, it does so in a manner that re-introduces the possibility of ‘filmic’ and ‘fictive’ decision-making into the process of working with ‘directive’ material. Here the much-vaunted promise of virtual production at scale – ‘creativity at the speed of thought’ – is fulfilled within a pipeline that is resistant to the real-time applications of the virtual production process. Introducing real-time capabilities into the interactions between creative decision-makers and early-iteration production assets allows for an iteration cycle that transcends the material limits of this pre-production phase, whilst also inflecting the downstream creative processes, insofar as key creative decisions are made in a materially friction-free, digital space.
To round up, I’ll suggest that the success of this toolkit presents overlapping horizons: first, for the future of virtual production as it pertains to production paradigms such as stop-frame animation, that have a material component resistant to VP as a means of producing final-pixel; second, for the critical evaluation of virtual production processes as they integrate into pipelines in a fashion that is inaccessible from the point of view of screen aesthetics.
The Aardman project indicates that a potential area for innovation within materially grounded pipelines is in the generation of ‘directive’ image assets. Using virtual production toolkits to mediate between sets of ‘directive’/operational images not only allows for greater communication between creative stakeholders across the pipeline (a feature shared with the use of game engines and virtual production in architectural visualization) but also for a qualified re-weighting of the filmic/fictive/directive content of early-iteration material (a factor less explored in the critical study of architectural visualization processes). This represents an enfolding of a hitherto distinct aspect of the pipeline within the virtual production ecosystem and its protocols.
The adaptation of this process of generating directive assets such as animatics to real-time applications should not be ignored. After all, the real-time mediation of interactions with reference assets will have ramifications downstream, related to, amongst other things, the automation of labour, the influence of the real-time UI and the wider impact on the practices of situated creativity brought about by the altered workflow. It will significantly alter the ‘dance of agency’ at play between the studio’s creative workforce and the technical systems they work with and amongst. The challenge for screen scholarship is that these impacts are not readily visible as aesthetic artefacts – they do not show up as motion blur – and the ‘directive’ images that they generate are not considered culturally significant. Nevertheless, a critical focus that endeavours to keep the technological underpinning of a given aesthetic available to critique will profitably turn to archives of ‘directive’ images. This turn will be in line with the operational turn already underway in various forms of media and animation scholarship. Moreover, as tools such as Puppet Sculpt and Blockout are adopted in stop-motion contexts, there will be an accelerated generation of archives of this category of images. This will represent an untapped resource for further scholarship.
Therefore, the future of virtual production as it pertains to production paradigms such as stop-frame is the same as the future of virtual production scholarship: a turn to the relations between ‘directive’ images, increasingly produced via virtual production, and the cultural artefacts whose production is regulated, managed and predicted by real-time applications. This focus on the ‘directive’ and/or operational images within visual and material culture pipelines will be aided by the theoretical perspectives introduced earlier in this article. Andrew Pickering’s concept of the ‘reciprocal tuning’ of technological and human agencies, and the insights to be drawn from the emergent and intertwined trajectories of studio studies and media theory’s operational turn, could be mobilized to profitably illuminate the relationship between ‘directive’ image cultures and the technologies that hasten their production and dissemination. This will represent an interesting critical approach to virtual production wherever it is applied: within image-based workflows, visual cultural production, or emergent hybrid physical-digital pipelines such as that in place at Aardman. What is more, this critical perspective should be attentive to the ways in which the archive of unfinished, provisional ‘directive’ images generated by virtual production processes manages the heterogeneous and distributed labour that produces the finished filmic and fictive qualities of a film. The analysis thus produced will be commensurate with the rapidly complexifying landscape of virtual production, even as its emergent orthodoxies become incontestable norms across a variety of production contexts.
