360-degree video is an affordable and easy-to-use technology for social science research. It holds significant potential for capturing spatio-temporal aspects of the social world from a fully omni-directional spatial perspective; however, gaps remain as to how it can be used to support field-based data collection and analysis. In this short piece we offer two contributions to the literature on 360-degree video for qualitative social science research on place. First, we draw on evidence from our multi-city study of ‘urban platform temporalities’ to develop a step-by-step procedure for producing and analyzing 360-degree digital video datasets, demonstrating the potential of the technology for what we term whole scene capture. We provide practical advice on software, hardware, camera usage, video processing, and ethical considerations; and introduce the 360-video qualitative coding technique of spherical simultaneous perspective. Adding new evidence of its use to already established literatures on 360-degree immersive video ethnographies and virtual human-environment exposure research, our method for systematic 360-degree capture of spatio-temporal data is applicable to a range of social science studies with a field-based data collection component. Finally, drawing together technological understandings of immersion from the field of VR with its ethnographic meaning, we then articulate the notion of immersive holism as a quality of 360-degree video that enables deep, meaningful, and comprehensive knowledge of place.
360-degree video is a novel visual format heralded for its sense of ‘immersion’. Immersion is generally understood as a state of feeling like one is present in a non-physical world; a perception of being surrounded by and enveloped within a virtual environment (see Nilsson et al., 2016). The growth of 360-degree video is particularly associated with virtual reality (VR) applications in journalism, marketing, tourism, and education, offering viewers a highly realistic immersive virtual experience that is often more affordable, accessible, and user-friendly than computer-generated forms of VR (Evens et al., 2023). Yet 360-degree video also offers significant opportunities for social science, notably for research on the ‘animate and active’ properties of social environments – from temporalities, to mobilities, stories, and events – all from a ‘frameless’ omni-directional spatial perspective. Research uses for 360-degree video emerge against a backdrop of growing interest in visual (Pink, 2021; Rose, 2023), mobile (Büscher et al., 2011; Merriman, 2019), and immersive (Jones et al., 2022; Mathysen & Glorieux, 2021) methods across the social sciences and humanities. And while video research methods have a long history (see Erickson, 2011), they have accelerated in the age of digital visual research (Cruz et al., 2017; Garrett, 2011). Most recently, following the widespread availability of digital 360-degree cameras (capturing photo, video, and audio), an established literature has formed on the use of 360-degree video as a social research tool, primarily drawing on its immersive affordances to demonstrate its value for experiential ethnographic research (e.g. Baraldo et al., 2025b; McIlvenny, 2020; Westmoreland, 2020), and experimental studies of virtual human-environment exposure (e.g. Browning et al., 2020; Griffin & Muldoon, 2020).
However, in a recent review on the use of 360-degree video in research, Cinnamon and Jahiu (2023, p. 1) note one as-yet under-acknowledged affordance of the technology: the “ability to capture and analyze places in a fully panoramic field of view”. In addition to its qualities of realistic immersion for experiential and experimental research, as the authors contend, its second key affordance of comprehensive spatiotemporal representation offers unique possibilities as a research tool, due to what we call ‘whole scene capture’. In our work, we understand holism as an epistemological ambition to know the world as continuous, fluid, and connected rather than as a collection of discrete and unrelated entities. Methodologically then, whole scene capture via 360-degree video provides for a mode of research on place in which the spatial and temporal aspects of objects and their complete surrounds can be observed, interpreted, and analyzed for both positional and relational meaning, not as atomized entities but as part of a whole. Distinct from conceptions of immersivity that are typically ascribed to 360 imagery – as ‘authenticity’ (Aitamurto, 2019), ‘presence’ (Gold & Windscheid, 2020), and ‘being there’ (Vettehen et al., 2019) – whole scene capture is a methodological articulation of a different way of thinking about immersion: as holism rather than realism. ‘Immersive holism’ thus offers the potential to know places more completely, including both the spatial and temporal components of a scene, rather than through a feeling or perception of being ‘surrounded’ by a virtual environment. This distinction, we offer, provides a basis for using 360-degree video for a wide range of place-based social science research, particularly where temporal elements are important. Some studies have demonstrated the possibilities of 360-video for full panoramic representation in fieldwork (e.g. Davidsen et al., 2023; Kim & Lee, 2022; Vatanen et al., 2022; Vichiensan & Nakamura, 2021). 
However, there remains a dearth of evidence on tested procedures for how to use 360-degree video for whole scene capture in the social sciences.
In this short piece, we first demonstrate a low cost and easy to implement procedure to collect, process, code, and analyze 360-degree digital video datasets for whole scene capture. We draw on field experiences from our study on the spatial and temporal aspects of digital platform operations in cities, focusing on how the ‘urban objects’ of platformization – such as insulated cubes used in on-demand delivery, ridehailing vehicles, and shared e-scooters – are indicators of wider processes of urban change (Leszczynski & Kong, 2022). These field experiences were situated within a larger project on the material and aesthetic expressions of ‘platform urbanism’ (Barns, 2019), which primarily used 360-degree still photography for field-based data collection (Leszczynski et al., 2025). 360-degree photography has overlapping affordances with 360-degree video; however, we noted some limitations that prompted us to develop a video-based approach. In particular, still photography was ineffective for capturing platform object movements, which was a key component of our in-person sensory experience at street-level. In response, we developed the 360-degree video method described here to enable the capture of platform object movements and other forms of temporality observed in our fieldwork in 6 study cities in 3 countries (Toronto and Vancouver in Canada; Warsaw, Kraków, and Gdańsk/Gdynia in Poland; and Istanbul, Turkey). Some key examples of ‘urban platform temporalities’ include: the haphazard movements of a shared e-scooter transitioning between vehicle and pedestrian space; the flow of a series of delivery bags strapped to the backs of on-demand delivery couriers coursing down a street; the contrasting temporal stasis of these delivery cubes placed on park benches and sidewalks by couriers dwelling in space awaiting new orders; and temporal juxtapositions of modern, brightly coloured on-demand delivery advertising billboards placed on old, preserved buildings in a heritage district.
Together, such material temporalities reveal how urban timescapes (Kitchin, 2019) are altered by the advent of digital platform-based mediations of social and economic life (Chen & Sun, 2020; Diz & Casas-Cortés, 2024). Beyond this specific project, the procedures described below could be deployed in a variety of social science field-based studies. Following this, we consider the meaning of immersion in the context of whole scene capture, and develop the notion of immersive holism as a quality of 360-degree video that enables deep, meaningful, and comprehensive insights on place.
Methodological Procedures for Whole Scene Capture with 360-Degree Video
Step 1: Choosing Equipment
The purpose of the study was to examine the connections between place and temporality in the empirical context of digital platform operations in cities. For this purpose, we opted to collect visual video data using a mid-range 360-degree camera optimized for video recording. We chose the Insta360 ONE RS with dual one-inch imaging sensors for outside use in the highly variable light conditions of urban settings. This camera simultaneously captures video through 2 opposite-facing 180° lenses, and ‘spatial’ audio through 3 microphones. Built into the camera are automatic stitching of 180° videos into a fully omni-directional image, and a 6-axis gyroscope and horizon-levelling and stabilization algorithms to improve in-motion video capture (such that video does not appear ‘shaky’). Geographic coordinates can be recorded for the location or route of capture using a small external remote-control device with GPS sensor, or by accessing coordinate readings through the Insta360 mobile app installed on a GPS-enabled smartphone. 360-degree video produces large digital files, so we purchased multiple large capacity memory cards (e.g. 128 GB) for use during field data collection.1
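Because file sizes shape equipment choices, a quick back-of-envelope calculation can help plan memory card and drive capacity before fieldwork. The short Python sketch below is illustrative only; the 120 Mbps bitrate is an assumed figure for high-resolution 360-degree video, and researchers should substitute their own camera's specifications:

```python
def storage_gb(bitrate_mbps: float, minutes: float) -> float:
    """Approximate recording size in gigabytes for a session of the
    given length, at the given video bitrate (megabits per second)."""
    # megabits -> megabytes (/8), then megabytes -> gigabytes (/1000)
    return bitrate_mbps * minutes * 60 / 8 / 1000

# e.g. two hours of footage at an assumed 120 Mbps is roughly 108 GB,
# comfortably filling a 128 GB card:
print(round(storage_gb(120, 120), 1))  # → 108.0
```

On these assumptions, a single 128 GB card covers roughly one long fieldwork session, consistent with our decision to carry multiple cards.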
Step 2: 360 Video Data Collection
After an initial round of field testing, we determined that at least 2 team members are preferable for video data collection: the videographer who handles the equipment and captures video recordings; and at least one ‘spotter’, tasked with identifying and relaying relevant locations for recording to the videographer. We modified procedures for data capture from two studies on related applications of 360-degree imagery for virtual tours and visual storytelling. First, following Cinnamon and Gaffney (2022), we captured video using a monopod (selfie stick) held above the videographer’s head, high enough (approximately 25–50 cm) to not be visible at conventional horizontal viewing angles (though remaining visible at the ‘bottom’ of the video). Although not feasible in the busy urban context of our data collection setting, another option is to mount the camera on a tripod and operate it using an app or remote device, which would allow the videographer and spotters to ‘hide’ behind objects to avoid being captured in the video, as Jones et al. (2022) describe for a 360-video project in a forested area. Videos were captured standing at a distance of between 2 and 5 meters from the object/location of interest to capture both it and the wider surrounds in sufficient detail (Figure 1). Pointing one of the lenses directly at an object of interest is advisable to avoid evidence of video distortion at stitch lines.
Figure 1. 360-Degree Video Data Collection. Videographer Positioned on a Wide Sidewalk Capturing a Stationary 360-Degree Video of Brightly-Colored on-Demand Delivery Cubes Strapped to the Backs of Motorbikes, flowing Past Along a Busy Road in Istanbul
Second, following Cameron et al. (2020), we used both stationary and tracking (movement) shots. Stationary shots were used to capture a specific subject (e.g. an on-demand delivery bicycle courier moving through traffic) or a general area with no focal point (e.g. a large sequence of passing e-scooter riders). Stationary shots were typically captured by the videographer standing still on a street corner, public square, or area of wide sidewalk, and recording for between 8 and 45 seconds (median 20 seconds). Tracking shots were used to capture larger areas with multiple forms of activity occurring. Here the videographer followed a modified ‘walking with video’ approach (Pink, 2007), which required careful camera positioning and smooth bodily movements to minimize evidence of rhythmic striding motions in videos. Tracking shots were recorded while walking on public space for between 20 seconds and 3 minutes (median 35 seconds), using Cameron et al.’s (2020) advice to avoid panning, tilting, zooming, or rotating the camera while in-motion.
Although the freedom to record photos and videos in public space is recognized in many countries, there are ethical considerations that arise through the use of visual field methods, particularly 360-degree video. In our project, our empirical focus was on the material objects of platform urbanism. While inevitably we captured people going about their everyday lives in city spaces, we intentionally aimed for minimal inclusion of people in videos, and exclusion where possible without impacting our research focus – including waiting for as many persons as possible to move out of the camera frame before capturing (e.g., rhythmic breaks in pedestrian flows). As guidance that may apply to other studies, we suggest the following principles for ethical practice with the capture of 360-degree video: avoid capturing people if unnecessary to the empirical context; avoid capturing children, vulnerable individuals, or activities that might be deemed harmful or illegal; delete recorded data when requested by members of the public; and carefully consider any sharing or public display of captured videos or screenshots, including the possibility of blurring faces and personally-identifiable information (as we have done in all images in this paper). Although such considerations are largely a matter of personal ethical principles, the ‘panoptic’ nature of this technology requires careful consideration of local expectations in field site locations and a respect for individual privacy. We also acknowledge that research ethics requirements differ between institutions and jurisdictions, and researchers should adhere to the guidelines that apply to them given the contexts in which they work.
Step 3: 360 Video Processing and Management
After each fieldwork session we transferred video files to an encrypted portable hard drive. Next, using the free cross-platform Insta360 Studio software (https://www.insta360.com/download), we converted videos from Insta360’s proprietary spherical file format to smaller mp4 files in equirectangular format, which can be viewed in any video player – a requirement for the video coding technique we developed (see Step 4). Video files were then synced with a secure cloud-based drive for image data storage and viewing.
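With hundreds of clips accumulating across cities, a simple file inventory helps keep converted videos traceable between the hard drive and the cloud store. The Python sketch below is a minimal, hypothetical example of such housekeeping (the folder layout and column names are our own illustration, not part of the Insta360 workflow):

```python
import csv
from datetime import datetime
from pathlib import Path

def inventory_videos(video_dir: str, out_csv: str) -> int:
    """Write a simple inventory (filename, size in MB, modified date)
    of all .mp4 files in video_dir; returns the number of files listed."""
    rows = []
    for f in sorted(Path(video_dir).glob("*.mp4")):
        stat = f.stat()
        rows.append([
            f.name,
            round(stat.st_size / 1e6, 1),
            datetime.fromtimestamp(stat.st_mtime).strftime("%Y-%m-%d"),
        ])
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["file", "size_mb", "date"])
        writer.writerows(rows)
    return len(rows)
```

Running this after each conversion session produces a spreadsheet-readable manifest that can be checked against the cloud drive contents.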
Step 4: 360 Video Data Analysis
We conducted a two-step qualitative visual analysis (Rose, 2023) of our 360-degree videos (n = 205), coding for content and theme. The content analysis focused on coding the videos for metadata such as video date and location (city, neighbourhood); as well as basic details about the digital platform objects captured (e.g. number, type, brand, positioning) and contents of the surrounding ‘scene’ (e.g. stores, signs, landmarks, greenspace, etc.). Thematically, the videos were coded for different types of urban platform temporalities – e.g. rhythm, movement, flow, stasis, and juxtaposition. These codes were developed a priori through initial observations made by the research team during field testing and confirmed during in situ data collection, which also populated detailed field notes that informed our etic coding schema. In the field-testing phase both the spotter and videographer took notes independently and met to discuss ideas and possible themes both informally at moments between instances of video data capture, and in a formal debrief meeting at the end of a fieldwork session. These meetings focused on developing themes of interest through deliberative discussion based on the observations of team members. The unique field experiences of each role and the different vantage point that each affords enhanced the reliability of our thematic code development, and we suggest that at least two team members in defined roles be involved in the code development process if codes are to be developed through an initial field testing or pilot study approach. Once field testing was complete, note taking and debrief meetings shifted focus from recording and discussing possible themes of interest to re-confirming their relevance and considering their meaning through empirical observation.
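A coding schema of this kind translates directly into a spreadsheet layout with one row per clip. The Python sketch below generates such a blank template; the column and code names mirror the schema described above, but the exact fields and video identifiers are illustrative and should be adapted to a given study:

```python
import csv

# Illustrative columns, modeled on the metadata and a priori theme
# codes described in the text; adapt these to your own study.
METADATA_COLS = ["video_id", "date", "city", "neighbourhood",
                 "object_type", "object_brand", "scene_contents"]
THEME_CODES = ["rhythm", "movement", "flow", "stasis", "juxtaposition"]

def write_coding_template(path: str, video_ids: list[str]) -> None:
    """Create a blank coding spreadsheet (CSV) with one row per clip,
    ready to be filled in during content and thematic coding."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(METADATA_COLS + THEME_CODES)
        blanks = [""] * (len(METADATA_COLS) - 1 + len(THEME_CODES))
        for vid in video_ids:
            writer.writerow([vid] + blanks)
```

The resulting CSV opens in Excel or any spreadsheet program, so coders can work in the manual workflow described in the next step.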
We developed and tested two coding techniques we refer to as (1) spherical partial perspective; and (2) spherical simultaneous perspective. Both coding techniques are based on viewing clips in a video player on a flat screen and manually coding using a separate Excel spreadsheet; however, each offers different possibilities and will have distinct implications for analysis and interpretation. For our initial test of coding in spherical partial perspective, we attempted to code while viewing the videos in their intended spherical format within the Insta360 Studio software (Figure 2), which requires continuous on-screen vertical and horizontal panning over the duration of a video clip. Spherical partial perspective introduces user interactivity into the video observation and coding process, offering an element of agency to the coder to observe content and identify themes through hands-on interaction. However, spherical partial perspective offers only a delimited view of a fully spherical scene, such that the video must be watched numerous times (at different viewing angles at different moments) in order to effectively observe, interpret, and code each object of interest (see also Baraldo et al., 2025a). 360-degree image viewing interfaces in effect “[reintroduce] the rectangular frame” (Westmoreland, 2020, p. 259) to accommodate human vision (approximately 120° for both eyes), meaning the viewed portion of a video excludes a majority of its ‘off-frame space’ (Metz, 1985) in any given viewing moment. The resulting partial view after which this perspective is named makes it difficult to observe and interpret actions that occur contemporaneously outside the viewing software’s ‘movable window’. Aitamurto (2019, p. 9) describes this as a key paradox of the 360-degree view, where despite whole scene capture, viewers “may miss important elements… and thus obtain only a partial picture”.
For our study of urban platform temporalities, we needed an approach that could allow us to observe the simultaneity of objects and actions in full spherical perspective. As a solution, we developed what we term spherical simultaneous perspective.
Figure 2. 360 Video Coding in spherical partial perspective. A Screenshot of a 360-Degree Video Rendered in Spherical Format, and Coded Using a Spreadsheet Template (Above). The Partial View of This Scene From Kraków Captures the Temporal stasis of on-Demand Delivery Bags (e.g., Lime-Green Cube Left of Centre in the Image Frame) Dwelling in Place Against the Continuous movements of Pedestrians, and the Disjointed rhythm of Another Delivery Bag (Marigold-Yellow Cube in the Image Centre) Being Navigated Between other Street Users. Missing From This Perspective are other Relevant Temporalities occurring Simultaneously in the off-Frame Space, including the Temporal juxtaposition of Brightly Coloured e-Scooters Against a Backdrop of Muted Heritage Buildings
We chose to analyze the 360 videos in spherical simultaneous perspective to allow the coder to observe and code multiple instances of platform temporalities occurring simultaneously within an omnidirectional field of view. Spherical simultaneous perspective requires adapting 360-degree video’s key affordance of whole scene capture to the constraints of human vision. To do so, we converted spherical videos to equirectangular format in the Insta360 Studio software (Figure 3). While spherical simultaneous perspective does not offer user interactivity (beyond starting and pausing the video), it does enable the coder to view the entire panoramic scene (360° × 180°) on a single flat screen. For our study we used this technique effectively to observe and code for content and theme, including capturing multiple instances of platform temporalities happening concurrently anywhere within a scene. While this approach was successful for our purposes, researchers should be aware of some potential limitations. As a planar representation of a 360-degree scene, equirectangular images project spherical coordinates onto a rectangular grid, introducing distortion and ‘stretching’ of objects. This distortion may be limiting in some applications where scene depth and distance, and object positional and geometric accuracy, are important to identification and interpretation (Fraser & Wang, 2022).
Figure 3. 360 Video Coding in spherical simultaneous perspective. A Screenshot of a 360-Degree Video Rendered in Equirectangular Format, and Coded (Using a Spreadsheet Template, Above) to Capture the Continual Temporal flow of on-Demand Delivery Cubes on Motorbikes Against the stasis of Idle e-Scooters Leaned up Against Utility Poles in the Moda Area of Kadıköy, Istanbul
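The equirectangular stretching noted above follows directly from the projection's coordinate mapping: every pixel row spans the full 360° of longitude, so a pixel near the poles of the sphere covers far less ground than one at the 'equator' (eye level), and objects there appear stretched. A minimal Python sketch makes this concrete (the 2 m sphere radius is an arbitrary illustrative value, roughly the capture distance used in our fieldwork):

```python
import math

def pixel_to_sphere(x, y, width, height):
    """Map an equirectangular pixel to (longitude, latitude) in degrees:
    x spans -180..180 degrees, y spans 90 (top) to -90 (bottom)."""
    lon = (x / width) * 360.0 - 180.0
    lat = 90.0 - (y / height) * 180.0
    return lon, lat

def metres_per_pixel_x(lat_deg, width, radius_m=2.0):
    """Ground distance covered by one pixel of horizontal movement at a
    given latitude on a sphere of the stated radius. The value shrinks
    toward the poles, i.e. objects there are stretched across pixels."""
    circumference = 2 * math.pi * radius_m * math.cos(math.radians(lat_deg))
    return circumference / width
```

For a 3840-pixel-wide frame, a pixel at eye level (latitude 0°) covers twice the horizontal distance of one at 60° latitude, which is the stretching that can complicate judgments of depth, distance, and object geometry.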
One further option for 360-degree video coding is to use custom software such as AVA360VR (https://bigsoftvideo.github.io/AVA360VR/), which is used for annotating 360-degree videos in a VR environment. This approach, called immersive qualitative analytics (McIlvenny, 2020), offers a potentially more immersive way to code spherical images, which may enhance interpretation. This approach could also be useful if video annotation and object labelling is desired, for instance, to share explanatory video clips with research team members or members of the public. For our purposes we decided against coding in an immersive environment due to several limitations. This approach typically requires a powerful computer (gaming type, with advanced processor and graphics card), as well as additional VR viewing equipment (e.g. head mounted displays) and software (e.g. SteamVR). Evidence of nausea and discomfort in users of VR headsets is now widely documented (Mouratidis & Hassan, 2020), which may impact researchers who are subject to feelings of nausea and motion sickness (Vatanen et al., 2022). In their 360-degree video ethnographic examination of sense of place in a post-disaster city, Baraldo et al. (2025b) also highlight how virtual immersive coding can be time consuming. Moreover, the approach also suffers from the limitations of spherical partial perspective as outlined above.
On Immersivity and 360-Degree Video
As an emerging visual method in the social sciences, the procedures documented above provide a basis for capturing and analyzing everything visible within a scene, within a reasonable radius of the video capture location. This affordance of what we have termed whole scene capture adds a further dimension to the possibilities of 360-degree field-based research, which has been primarily dominated by experiential and experimental applications that focus on leveraging its sense of ‘being there’. For our project, the development of the method for whole scene capture combined with coding in spherical simultaneous perspective fundamentally changed what was possible, offering an accessible yet powerful way to make sense of urban object temporalities in their full spherical spatial contexts. From the divergent temporal rhythms of on-demand delivery couriers’ bicycles and e-scooters competing with pedestrians for public space, to the temporal juxtapositions of historical architecture and modern objects of mobility, our study provides new insights to an existing body of work on the substantial ways in which cities and urban life have changed under digital platformization (Strüver & Bauriedl, 2022). Although we focus on the development of a novel visual method in this piece, our larger aims are to show how digitally-mediated cities are not only altered through the expansion of gig labour, novel economic and capital accumulation models, and new governance and policy arrangements (Gregory & Maldonado, 2020; Vallas & Schor, 2020; Zwick & Spicer, 2021), they are also changed in material and aesthetic ways (Leszczynski et al., 2025). Put simply, cities ‘look different’ under conditions of platformization, a reality that demands novel visual methods to capture urban change at street-level.
Through our empirical experiences with 360-degree video we contend that, rather than thinking about whole scene capture as a separate affordance from immersivity, such uses of the technology instead articulate a different way of thinking about this classic VR concept. ‘Immersion’ is a long-contested idea with two main schools of thought in the VR literature (Berkman & Akan, 2019; Skarbez et al., 2017). On the one hand, the term immersion is used to describe the psychological perception of being in a virtual environment, understood and measured subjectively and experientially. As an example, Agrawal et al. (2019, p. 407) define immersion as “a phenomenon experienced by an individual when they are in a state of deep mental involvement in which their cognitive processes (with or without sensory stimulation) cause a shift in their attentional state such that one may experience disassociation from the awareness of the physical world”. Embodied, experiential, and experimental types of research with VR and 360-degree video leverage this affordance of immersive realism (Blackman, 2024; Osborne & Jones, 2022), which is supported by two factors: (1) a multisensory experience of an environment, underwritten by the synchronicity of sound (where audio was recorded) with the movement of phenomena (objects, persons) within the viewing frame, which allows for a veracious audiencing of data that users can easily connect with a real-world actuality in time and place; and (2) user control in the ability to pan around omnidirectionally and advance through the video stream, proxying being in the real world (e.g., being able to turn around, look up and down, move in a direction of one’s choosing).
However, in the context of research leveraging the separate affordance of whole scene capture, 360-degree video instead aligns with a second understanding of immersion: an objective property of the technology or system mediating a virtual experience, rather than a psychological state as expressed through a virtual reality user’s perception of being physically immersed in a non-physical environment (Nilsson et al., 2016). As leading VR scholar Mel Slater argues, we should “reserve the term ‘immersion’ to stand simply for what the technology delivers from an objective point of view” (2003, p. 1), and instead use the term ‘presence’ to refer to the psychological effects of the technology in producing the illusion of ‘being there’ (see Cummings & Bailenson, 2016; Skarbez et al., 2017; Slater & Sanchez-Vives, 2016). Drawing a conceptual connection to this ‘technological’ understanding of immersion, the affordances of 360-degree video technologies offer researchers a means of capturing and analyzing places in their full multi-sensorial texture; or more simply, it delivers immersion through whole-of-scene knowledge. Immersion therefore must also mean the capability of the technology to capture rich place-based knowledge (including both spatial and temporal elements), an understanding that also draws from the ethnographic meaning of immersion. In ethnographic research, immersion refers to a deep and meaningful engagement with research participants to “capture what [they] do, think, and believe, resulting in granular descriptions and nuanced analyses… that are typically invisible to distant observers” (Dumont, 2023, p. 441). Transposing this definition to the research approach of whole scene capture, immersive holism afforded by 360-degree video is the epistemic possibility of deep, meaningful, and comprehensive knowledge of place.
While we are interested in methods that advance epistemological holism, whole scene capture is not an approach aligned with claims to objectivity and truth. Despite the possibility of detailed spatial and temporal knowledge, the approach, like qualitative methods more generally, represents the partial and situated view of the researcher (Haraway, 1988). The 360-degree perspective does enable researchers to record space and phenomena without the subjectivity of deciding ‘where to point the camera’ – which arguably does afford a degree of objectivity – however numerous decisions underpin 360-degree video data collection and analysis (Vatanen et al., 2022; Westmoreland, 2020). In our study, decisions made regarding video capture locations, frequency, and length, as well as the video coding and analysis choices discussed above, all add up to a comprehensive, if ultimately subjective, understanding of space and temporality in the context of urban digital platform objects. Additionally, it is also important to consider the promise of whole scene capture and the affordance of immersive holism in the wider context of a politics of vision that underpins all visual technologies. Novel visual methods, particularly panoptic and immersive ‘reality media’ (Engberg & Bolter, 2020) such as 360-degree imaging and VR, hold the potential for harm as tools of surveillance and asymmetrical seeing (Cinnamon, 2024; Westmoreland, 2020). However, such technologies additionally have a seductive and performative power (Blackman, 2024; Harley, 2024) and can create in viewers an “illusory sense of place and illusory sense of reality” (Slater & Sanchez-Vives, 2016, p. 5), which can be leveraged by content creators to add credibility to claims, sway opinions, or generate desired emotional responses (Hendriks Vettehen et al., 2019; Nakamura, 2020). 
Thus, although our approach of whole scene capture for field-based data collection purposes largely avoids these issues, any use of 360-degree video demands careful consideration to researcher positionality and reflexive application.
While our aim was to introduce a novel perspective on the affordances of 360-degree video that goes beyond immersive realism, our investigation of urban platform temporalities actually revealed both realistic and holistic conceptualizations of immersion. While the technology was leveraged in our study to enable whole scene capture for thematic qualitative data analysis, it was also a useful tool to ‘virtually return’ to the field where our data were captured, supporting our recollections of specific sites, diurnalities of data collection, and even to assist in interpreting our field notes. Through enabling not only analytical rigor but also embodied recollection, 360-degree video offers the potential to bridge representation and field experience, an affordance that clearly aligns with the realism framing of immersion. This ability to virtually return to the field further informed subsequent in vivo (emic) coding of 360-degree video data for temporal rhythm, inclusive of the speed with which things moved (e.g., slow vs. fast) and the nature and continuity of their movement (advancing/intensifying vs. abating; halting vs. flowing), as well as temporal atmospheres (that is, affective interpretations of specific video files as conjuring feelings of leisure vs. hurry; fleeting chaos vs. enduring calm, etc.). These two thematic categories only emerged during additional engagement with the data post-fieldwork. They are immediately contingent on the two immersive qualities of the 360-degree video we articulated above – comprehensive spatiotemporal capture (immersion as holism) and the realistic viewing experience (immersion as realism) – and would likely not have emerged from an analysis of still imagery, even that captured in 360-degree perspective.
Conclusion
Countering the notion that video is a “stream of temporality where nothing can be kept, nothing stopped” (Metz, 1985, p. 83), the methodological procedures outlined here provide a basis for extracting detailed temporal data from video captured in the field, in a fully spherical spatial perspective. 360-degree field video capture, video data processing and management, and video coding and analysis are all relatively easy to implement with moderate technical expertise and relatively low equipment costs. Practically, we highlight how coding 360-degree video rendered in an equirectangular format offers spherical simultaneity, the ability to capture multiple concurrent spatiotemporal actions occurring anywhere in sight of a 360-degree camera’s panoptic point of view, “enabling researchers… to document everything in sight” (Baraldo et al., 2025b, p. 7). Conceptually, we draw on our experiences of what we call the whole scene capture approach to consider the meaning of immersion in the context of 360-degree video. We add depth to the understanding of immersion in a research context by highlighting how collection and analysis of 360-degree video enables deep and comprehensive knowledge on real places, an affordance we articulate as immersive holism. The whole scene capture method, combined with the ability to ‘re-enter’ the field and re-experience specific sites from our field work, provided our team a means to make sense of digital platform temporalities and their placement in and impact on surrounding urban space, and we offer that it would also be valuable for studies of temporality across the social sciences. Beyond this specific conceptual focus, researchers could adapt the procedures for a wide variety of qualitative empirical social research on place and space.
Footnotes
ORCID iDs
Jonathan Cinnamon
Agnieszka Leszczynski
Suzi Asa
Lindi Jahiu
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was funded by a Canadian Social Sciences and Humanities Research Council (SSHRC) Insight Grant, Award #435-2022-1542.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data Availability Statement
The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
Agrawal, S., Simon, A., Bech, S., Bærentsen, K., & Forchhammer, S. (2019). Defining immersion: Literature review and implications for research on immersive audiovisual experiences. Journal of the Audio Engineering Society, 68(6), 404–417.
Aitamurto, T. (2019). Normative paradoxes in 360° journalism: Contested accuracy and objectivity. New Media & Society, 21(1), 3–19. https://doi.org/10.1177/1461444818785153
Baraldo, M., Dolcetti, F., & Di Franco, P. D. G. (2025a). 360-degree ethnography: Affordances and limitations in place-centric research. Visual Communication.
Baraldo, M., Dolcetti, F., & Di Franco, P. D. G. (2025b). Enriching qualitative inquiry: Exploring immersive technologies in place-based research. International Journal of Qualitative Methods, 24, 16094069251331352. https://doi.org/10.1177/16094069251331352
Berkman, M. I., & Akan, E. (2019). Presence and immersion in virtual reality. In N. Lee (Ed.), Encyclopedia of computer graphics and games (pp. 1–10). Springer International Publishing.
Blackman, T. (2024). Virtual reality and videogames: Immersion, presence, and the performative spatiality of ‘being there’ in virtual worlds. Social & Cultural Geography, 25(3), 404–422. https://doi.org/10.1080/14649365.2022.2157041
Browning, M. H., Mimnaugh, K. J., Van Riper, C. J., Laurent, H. K., & LaValle, S. M. (2020). Can simulated nature support mental health? Comparing short, single-doses of 360-degree nature videos in virtual reality with the outdoors. Frontiers in Psychology, 10, 2667. https://doi.org/10.3389/fpsyg.2019.02667
Büscher, M., Urry, J., & Witchger, K. (Eds.). (2011). Mobile methods. Routledge.
Cameron, J., Gould, G., & Ma, A. (2020). 360 essentials: A beginner's guide to immersive video storytelling. Toronto Metropolitan University Library.
Chen, J. Y., & Sun, P. (2020). Temporal arbitrage, fragmented rush, and opportunistic behaviors: The labor politics of time in the platform economy. New Media & Society, 22(9), 1561–1579. https://doi.org/10.1177/1461444820913567
Cinnamon, J. (2024). Visual imagery and the informal city: Examining 360-degree imaging technologies for informal settlement representation. Information Technology for Development, 30(4), 1–18. https://doi.org/10.1080/02681102.2023.2298876
Cinnamon, J., & Gaffney, A. (2022). Do-it-yourself street views and the urban imaginary of Google Street View. Journal of Urban Technology, 29(3), 95–116. https://doi.org/10.1080/10630732.2021.1910467
Cinnamon, J., & Jahiu, L. (2023). 360-degree video for virtual place-based research: A review and research agenda. Computers, Environment and Urban Systems, 106, 102044. https://doi.org/10.1016/j.compenvurbsys.2023.102044
Cruz, E. G., Sumartojo, S., & Pink, S. (2017). Refiguring techniques in digital visual research. Springer.
Cummings, J. J., & Bailenson, J. N. (2016). How immersive is enough? A meta-analysis of the effect of immersive technology on user presence. Media Psychology, 19(2), 272–309. https://doi.org/10.1080/15213269.2015.1015740
Davidsen, J., McIlvenny, P., & Ryberg, T. (2023). Researching interactional and volumetric scenographies – Immersive qualitative digital research. In P. Jandrić, A. MacKenzie, & J. Knox (Eds.), Constructing postdigital research: Method and emancipation (pp. 119–136). Springer Nature Switzerland.
Diz, C., & Casas-Cortés, M. (2024). On delivery waiting: The entanglement of gig and border temporalities in platform cities. Environment and Planning D: Society and Space, 02637758241290881.
Dumont, G. (2023). Immersion in organizational ethnography: Four methodological requirements to immerse oneself in the field. Organizational Research Methods, 26(3), 441–458. https://doi.org/10.1177/10944281221075365
Erickson, F. (2011). Uses of video in social research: A brief history. International Journal of Social Research Methodology, 14(3), 179–189. https://doi.org/10.1080/13645579.2011.563615
Evens, M., Empsen, M., & Hustinx, W. (2023). A literature review on 360-degree video as an educational tool: Towards design guidelines. Journal of Computers in Education, 10(2), 325–375. https://doi.org/10.1007/s40692-022-00233-z
Fraser, H., & Wang, S. (2022). Monocular depth estimation for equirectangular videos. Paper presented at the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Garrett, B. L. (2011). Videographic geographies: Using digital video for geographic research. Progress in Human Geography, 35(4), 521–541. https://doi.org/10.1177/0309132510388337
Gold, B., & Windscheid, J. (2020). Observing 360-degree classroom videos – Effects of video type on presence, emotions, workload, classroom observations, and ratings of teaching quality. Computers & Education, 156, 103960. https://doi.org/10.1016/j.compedu.2020.103960
Gregory, K., & Maldonado, M. P. (2020). Delivering Edinburgh: Uncovering the digital geography of platform labour in the city. Information, Communication & Society, 23(8), 1187–1202. https://doi.org/10.1080/1369118X.2020.1748087
Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575–599. https://doi.org/10.2307/3178066
Harley, D. (2024). “This would be sweet in VR”: On the discursive newness of virtual reality. New Media & Society, 26(4), 2151–2167. https://doi.org/10.1177/14614448221084655
Hendriks Vettehen, P., Wiltink, D., Huiskamp, M., Schaap, G., & Ketelaar, P. (2019). Taking the full view: How viewers respond to 360-degree video news. Computers in Human Behavior, 91, 24–32. https://doi.org/10.1016/j.chb.2018.09.018
Jones, P., Osborne, T., Sullivan-Drage, C., Keen, N., & Gadsby, E. (2022). Virtual reality methods: A guide for researchers in the social sciences and humanities. Policy Press.
Kim, S.-N., & Lee, H. (2022). Capturing reality: Validation of omnidirectional video-based immersive virtual reality as a streetscape quality auditing method. Landscape and Urban Planning, 218, 104290. https://doi.org/10.1016/j.landurbplan.2021.104290
Leszczynski, A., Cinnamon, J., Asa, S., & Jahiu, L. (2025). Docklessness, aesthetic governance, and the urban ‘micromobility mess’. Urban Studies, 62(12), 2508–2525. https://doi.org/10.1177/00420980251316773
Leszczynski, A., & Kong, V. (2022). Gentrification and the an/aesthetics of digital spatial capital in Canadian “platform cities”. Canadian Geographies/Géographies canadiennes, 66(1), 8–22. https://doi.org/10.1111/cag.12726
Mathysen, D., & Glorieux, I. (2021). Integrating virtual reality in qualitative research methods: Making a case for the VR-assisted interview. Methodological Innovations, 14(2), 20597991211030778. https://doi.org/10.1177/20597991211030778
McIlvenny, P. (2020). The future of ‘video’ in video-based qualitative research is not ‘dumb’ flat pixels! Exploring volumetric performance capture and immersive performative replay. Qualitative Research, 20(6), 800–818. https://doi.org/10.1177/1468794120905460
Merriman, P. (2019). Rethinking mobile methods. In P. Merriman & L. Pearce (Eds.), Mobility and the humanities (pp. 118–138). Routledge.
Metz, C. (1985). Photography and fetish. October, 34, 81–90.
Mouratidis, K., & Hassan, R. (2020). Contemporary versus traditional styles in architecture and public space: A virtual reality study with 360-degree videos. Cities, 97, 102499. https://doi.org/10.1016/j.cities.2019.102499
Nakamura, L. (2020). Feeling good about feeling bad: Virtuous virtual reality and the automation of racial empathy. Journal of Visual Culture, 19(1), 47–64. https://doi.org/10.1177/1470412920906259
Nilsson, N. C., Nordahl, R., & Serafin, S. (2016). Immersion revisited: A review of existing definitions of immersion and their relation to different theories of presence. Human Technology, 12(2), 108–134. https://doi.org/10.17011/ht/urn.201611174652
Osborne, T., & Jones, P. (2022). Embodied virtual geographies: Linkages between bodies, spaces, and digital environments. Geography Compass, 16(6), e12648. https://doi.org/10.1111/gec3.12648
Rose, G. (2023). Visual methodologies: An introduction to researching with visual materials (5th ed.). Sage.
Skarbez, R., Brooks, F. P., Jr., & Whitton, M. C. (2017). A survey of presence and related concepts. ACM Computing Surveys, 50(6), 96. https://doi.org/10.1145/3134301
Slater, M. (2003). A note on presence terminology. Presence Connect, 3(3), 1–5.
Slater, M., & Sanchez-Vives, M. V. (2016). Enhancing our lives with immersive virtual reality. Frontiers in Robotics and AI, 3, 74. https://doi.org/10.3389/frobt.2016.00074
Strüver, A., & Bauriedl, S. (2022). Platformization of urban life: Towards a technocapitalist transformation of European cities. transcript Verlag.
Vatanen, A., Spets, H., Siromaa, M., Rauniomaa, M., & Keisanen, T. (2022). Experiences in collecting 360 video data and collaborating remotely in virtual reality. QuiViRR: Qualitative Video Research Reports, 3, 1–28. https://doi.org/10.54337/ojs.quivirr.v3.2022.a0005
Vichiensan, V., & Nakamura, K. (2021). Walkability perception in Asian cities: A comparative study in Bangkok and Nagoya. Sustainability, 13(12), 6825. https://doi.org/10.3390/su13126825
Westmoreland, M. R. (2020). 360° video. In P. Vannini (Ed.), The Routledge international handbook of ethnographic film and video (pp. 256–266). Routledge.
Zwick, A., & Spicer, Z. (Eds.). (2021). The platform economy and the smart city: Technology and the transformation of urban policy. McGill-Queen's University Press.