Abstract
Introduction
Scope of the research
This research proposes the implementation of a NetLogo framework for spatial agent-based analysis, building on the knowledge of the framework implemented by Turner et al. during the development of University College London's Space Syntax analysis software (DepthMap).
The results and validation of the framework have been tested on the best-known example of Space Syntax analysis of an already built scenario: the Tate Britain Museum in London.
The results presented aim to establish a tested and validated ABM framework that can serve as the basis for further research into cognition-based human spatial navigation analysis within unmapped environments.
The automata agent versus the evolved automata
Instead of having the agents gaze outward into the environment, which would be computationally expensive, Turner opts for a different approach: the agents sample the existing possibilities at specific locations within the environment, precomputing what Gibson would term an “ambient optic array” - a set of potential scenarios accessible to the agent at any given point within the environment. This computational framework is built on a grid of cells overlaid onto the environment, where each cell contains information about the locations that are directly visible from it. 1 For that purpose, a simple agent-based automaton was built. The agent moves in response to the visibility field available from its location, attempting to determine which types of moves are more likely than others in the natural movement of pedestrians. The author then checked the system against the tracking of 19 real people at the Tate Museum, obtaining a correlation of R2 = 0.76 for agents with a 170° field of view.
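The grid-based precomputation described above can be sketched as follows. This is a minimal illustration, not Turner's DepthMap implementation: the function names and the line-of-sight test (Bresenham's line algorithm) are assumptions made for the sake of the example.

```python
# Illustrative sketch: precompute a visibility field over a grid (an "ambient
# optic array"), so each walkable cell stores the set of cells it can see.
# Names and the Bresenham line-of-sight test are assumptions, not DepthMap code.

def line_cells(a, b):
    """Integer grid cells crossed by the segment a-b (Bresenham's algorithm)."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err, cells = dx + dy, []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def visibility_field(walkable):
    """For every walkable cell, the set of walkable cells it can see."""
    def visible_from(cell):
        return {other for other in walkable
                if all(c in walkable for c in line_cells(cell, other))}
    return {cell: visible_from(cell) for cell in walkable}

# Demo: a 5x3 room with a single obstacle cell at (2, 1) blocking sight.
walkable = {(x, y) for x in range(5) for y in range(3)} - {(2, 1)}
field = visibility_field(walkable)
```

Once `field` is built, an automaton standing on any cell can consult its precomputed visible set instead of casting rays at every step, which is the cost saving the approach is built around.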
In 2007 the author published an extension of his agent implementation, bridging line-based topological analysis and the visually directed agents. 2 He analyzed the transition matrix of the first-order Markov chain that the agent system constitutes (in the limit of an infinite number of steps), concluding that it is predictable and hence time-reversible if the agent is left with a 360-degree view. Noting that the vector of the stationary distribution is the eigenvector of the transition matrix, Turner realized that encoding memory in his automata produced patterns of agent movement that did not correlate well with observed patterns of movement and occupancy. The correlation scores were in fact much lower (decreasing to R2 = 0.41 for agents with a 360° field of view) than those of the original simple automaton and his earlier evolved automata.
Although the eigenvector is not typically easy to find, in the case of automata with 360° vision the transition matrix is time-reversible, and the correlation obtained was R2 = 0.99 for the case study of the Tate Britain museum.
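The stationary distribution mentioned above is the left eigenvector of the transition matrix for eigenvalue 1, and it can be recovered numerically by power iteration. The sketch below uses a toy 3-location row-stochastic matrix for illustration only; it is not Turner's actual matrix.

```python
# Sketch: the stationary distribution pi of a first-order Markov chain
# satisfies pi P = pi, i.e. it is the left eigenvector of the transition
# matrix P for eigenvalue 1. Recovered here by power iteration on a toy
# 3-location row-stochastic matrix (illustrative only).

P = [
    [0.0, 0.5, 0.5],
    [0.5, 0.0, 0.5],
    [0.5, 0.5, 0.0],
]

n = len(P)
pi = [1.0] + [0.0] * (n - 1)       # arbitrary starting distribution
for _ in range(100):               # repeated pi <- pi P converges to the eigenvector
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# For this symmetric chain the stationary distribution is uniform (1/3 each).
```

Because the toy chain is symmetric (hence time-reversible, like the 360°-vision case Turner describes), the stationary distribution comes out uniform; an asymmetric matrix would yield an uneven occupancy profile.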
In the same test the author obtained a correlation of R2 = 0.89 between the Through-vision diagram and the model run with agents having a 170° field of view, as implemented in the DepthMap software. 3
This validation is fundamental for this research, as it is the test that will be mimicked to verify the validity of the developed NetLogo framework before implementing additional ABM features.
DepthMap agent-based analysis
DepthMap X allows several types of pedestrian behavior to be analyzed through the implementation of multiple walking agents, with Visibility Graph Analysis (VGA) used as the core of the agent-based analysis. Agents are released onto a graph rather than a layout; consequently, a grid is created and filled even when its measures do not need to be calculated. Each agent can check the visual accessibility of its location and use that check to make its next choice. Several parameters can be customized, such as the timesteps of the analysis, the release location, the agents' field of view, the number of steps before a turn decision, and how many agent trails are recorded.
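The field-of-view parameter above can be sketched as a simple angular filter on the agent's visible destinations. This is an illustrative approximation of a DepthMap-style walker, not DepthMap's actual code; all names and the random-choice rule are assumptions.

```python
import math
import random

# Illustrative sketch of a DepthMap-style reactive walker: every few steps
# the agent picks a visible destination inside its field of view and walks
# toward it. Names and details are assumptions, not DepthMap's actual code.

def within_fov(heading, target_bearing, fov_deg):
    """True if target_bearing lies inside the cone of width fov_deg around heading."""
    diff = (target_bearing - heading + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

def choose_destination(pos, heading, visible, fov_deg, rng):
    """Pick a random visible cell inside the field of view (None if empty)."""
    candidates = []
    for cell in visible:
        bearing = math.degrees(math.atan2(cell[1] - pos[1], cell[0] - pos[0]))
        if cell != pos and within_fov(heading, bearing, fov_deg):
            candidates.append(cell)
    return rng.choice(candidates) if candidates else None

rng = random.Random(0)
# Three visible cells: east, north, and west of the agent at (0, 0).
visible = [(1, 0), (0, 1), (-1, 0)]
# With a 170-degree cone facing east, only the eastern cell qualifies.
dest = choose_destination((0, 0), 0.0, visible, 170.0, rng)
```

Varying `fov_deg` between 170° and 360° reproduces, in spirit, the parameter sweep Turner reports: wider cones admit more candidate destinations, including ones behind the agent.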
To test the methodology, Turner compared the agent paths of his first implementation to the actual behavior of human walkers visiting the Tate Britain. This research will mimic that numerical test with the proposed NetLogo framework.
Existing current approaches
Existing criticism on the methodology
Space Syntax has been criticized, even recently, by mathematicians for a lack of rigor in the way the fewest-line axial map is defined and in how it is implemented mathematically. In fact, Turner et al. published a clarification paper in 2005 on their method for retrieving axial lines algorithmically, 4 precisely to answer that criticism.
It is the purpose of this research to analyze the current implementation of the methodology's agent-based analysis in UCL Depthmap. Depthmap and FIPA were both founded in the 1990s (Depthmap in 1998 and FIPA in 1996), which defined their relationship: by the time Depthmap appeared, FIPA had just published its first set of basic agent standards. That is precisely where this research's focus lies: on investigating how to implement those methodologies and standards together.
Existing implementations on the literature
A comprehensive literature review uncovered only two previously published papers addressing the primary challenge that this research aims to tackle: the expansion of the Space Syntax methodology into a comprehensive Agent-Based Modeling system.
The only known study to introduce gaze vectors and attractors from the Space Syntax methodology was published in 2010. 5 Despite focusing on urban environments rather than indoor spaces, it marked a significant milestone with the introduction of attractors. Jalalaian et al. aligned with the Space Syntax methodology by examining how spatial configurations affect pedestrian behavior in urban environments, focusing on movement patterns influenced by visual attractors and maps. The authors simulated urban spaces to understand how different features impact pedestrian navigation. By using multi-agent systems, the research provides predictive modeling to anticipate changes in pedestrian behavior based on alterations in spatial layouts, much like Space Syntax's objective of predicting movement patterns and understanding spatial dynamics.
Apart from the aforementioned trial, the only known example is one developed at Stanford in 2008, where some Space Syntax ideas were tested. Interestingly, the test was developed by a robotics student 6 who was also interested in architectural design and was trying to reduce the method's computational cost.
Building on this, Ma et al. in 2023 further expanded on these concepts by exploring the notion of a ‘desired path,’ emphasizing the importance of the angle of vision. 7 Their work represents the only known study introducing hotspots as attractors while incorporating a ‘desirability’ value for patches within the environment. Although Ma et al.'s primary focus on emergent paths differs from the objectives of this research, their work has nonetheless served as a source of inspiration. The specific angles of vision (30, 90, and 170°) they tested for agents did not align with Turner’s proposals for Through-vision. As such, the incorporation of Through-vision calculations and the use of automata agents with a 360-degree field of vision provide innovative contributions to the field. Furthermore, the implementation of message passing is a distinctive feature not found in existing literature.
Varoudis 8,9 explores the use of agent-based simulation to analyze complex spatial scenarios. His work bridges the gap between the agent-based analysis techniques commonly used in Space Syntax and the movement-trajectory techniques from video games that create realistic, human-like behavior for non-player characters. Varoudis introduces a methodology that incorporates the movement behavior found in computer games into the simulation tools traditionally used for agent-based analysis in Space Syntax. He tested this hybrid model in two gallery spaces and found that agents displayed greater exploration capacity, enhancing traditional Space Syntax agent-based methods. The study is significant for its innovative approach, merging analytical and simulation tools with advanced techniques from video game design and potentially setting a new standard for agent-based spatial analysis; it is also the work that provoked the interest in further research based on Unity environment mapping.
Only recently was the sole in-depth literature review on the subject published. Mohamed et al. 10 provide a comprehensive analysis of Space Syntax research over nearly five decades. Their analysis reveals that research in this field has increased significantly since the first paper was published in 1976, especially after 1997 with the initiation of the International Space Syntax Symposium.
Developing agent-based models for space syntax analysis: architectures, communication, and tools
Macal and North 11 propose a structured process for developing agent-based systems, involving agent identification, relationship modeling, data collection, behavior model validation, and system analysis. For agent systems, defining agent properties, architectures, and multi-agent systems (MAS) is essential. An agent should exhibit autonomy, rationality, and flexible action in dynamic environments. Developing a Space Syntax-based agent system requires careful consideration of communication languages, agent cooperation, and selecting an appropriate agent architecture.
Proposed communication
An agent-based model fundamentally comprises agents, agent relationships, and a framework for simulating agent behaviors and interactions. Developing a Space Syntax-based agent system raises several critical questions, such as how agents should communicate. That question is the fundamental basis for the criticism of the existing DepthMap agent-based implementation: it cannot be considered a proper ABM (Agent-Based Model) due to the lack of agent communication protocols.
To understand how to approach communication among the agents acting as humans on the platform, blackboard systems and message-passing mechanisms were studied. To avoid giving agents a direct lookup table of the environment (and to remain strictly aligned with the protocol implemented in DepthMap by the UCL CASA Lab), blackboard systems were discarded at this stage of the research. Direct messaging will be implemented instead: 20% of the population will carry a message that is passed to other agents at crossings. The content of each message is the coordinates of a recommended painting.
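The direct message-passing rule above can be sketched minimally: roughly 20% of visitors are seeded with a recommended-painting coordinate, and a carrier hands a copy to any message-free agent it crosses. The class and function names are illustrative assumptions, not the NetLogo procedure names.

```python
import random

# Minimal sketch of the proposed direct message passing: ~20% of visitors are
# seeded with a message holding the coordinates of a recommended painting, and
# a carrier hands a copy to any agent it crosses. Illustrative only.

class Visitor:
    def __init__(self, pos, message=None):
        self.pos = pos
        self.message = message  # (x, y) of a recommended painting, or None

def seed_messages(visitors, painting_xy, fraction=0.2, rng=random):
    """Give the message to a random `fraction` of the population."""
    carriers = rng.sample(visitors, max(1, int(len(visitors) * fraction)))
    for v in carriers:
        v.message = painting_xy

def exchange_on_crossing(a, b):
    """When two agents occupy the same patch, a carrier passes its message on."""
    if a.pos == b.pos:
        if a.message and not b.message:
            b.message = a.message
        elif b.message and not a.message:
            a.message = b.message

rng = random.Random(1)
visitors = [Visitor((0, 0)) for _ in range(10)]
seed_messages(visitors, painting_xy=(5, 7), fraction=0.2, rng=rng)
```

Note that under this rule messages are copied, not handed off, so a recommendation spreads through the population rather than circulating as a single token; whether the final model copies or transfers is a design choice this sketch does not settle.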
Agent architectures: foundations of agent reasoning
In an agent-based framework where agents need to be reactive, proactive, and socially capable, it is important to choose an architecture that supports these features effectively. A reactive architecture has been chosen for this study. In a reactive system, agents act based on a set of predefined rules that directly map environmental inputs to corresponding actions. This approach is considered well suited for the initial implementation of the Space Syntax ABM, as it is typically characterized by:
Simplicity
Reactive agents are designed to perform specific, well-defined tasks. They don't engage in complex reasoning or long-term planning, which helps them remain efficient and straightforward. In all cases, the agents' behaviors are implemented as simple tasks.
Adaptability
Because they respond directly to changes in the environment, reactive agents can adapt quickly to dynamic and unpredictable conditions, making them well-suited for environments where conditions change rapidly.
Low computational cost
Without complex internal modeling or planning, reactive architectures generally require less computational power, enabling the system to handle numerous agents in real-time.
Limited perception and knowledge
Reactive agents often operate with a limited understanding of their environment, relying primarily on immediate sensory inputs rather than building comprehensive internal representations.
Agent tools
For implementing agent-based analysis, NetLogo and Repast are among the most suitable tools, aligning well with this research's intention of setting the groundwork by developing a basic Space Syntax Agent-Based Model framework. NetLogo excels in user-friendliness, supports reactive and BDI architectures, and will easily allow future researchers to build on the framework by adding more ABM features. Repast will be considered as a possible base for future research, as it offers advantages in tracking and updating agent positions relative to spatial environments, making it useful for integrating Space Syntax analysis. Gaming technologies like Unity also provide valuable features, particularly for upgrading agents via direct coding and compensating for GIS limitations, which might be taken forward in the future.
Methodology
This research focuses on the implementation of space syntax analysis using NetLogo, a multi-agent programmable modeling environment, adapting the core functionalities of the UCL DepthMap X software. By integrating these capabilities into NetLogo, this project aims to leverage the interactive, simulation-based features of NetLogo to enhance and expand the analytical potentials of space syntax studies.
Research objectives
The primary objective of this methodology is to develop a robust simulation model that replicates the analytical features of DepthMap X within NetLogo. This involves: 1. Translating the spatial network analysis algorithms from DepthMap X into the agent-based modeling framework of NetLogo. 2. Ensuring that the model effectively captures the dynamic interactions and complexities of spatial configurations. 3. Evaluating the accuracy and efficiency of the NetLogo implementation in comparison to the traditional DepthMap X analyses. 4. Implementing basic agent-based-system features, such as message passing and spatial attractors, as a new proposal for evolving the Space Syntax methodology into a proper ABM (Agent-Based Model).
Significance of the methodology
The significance of this methodology lies in its potential to transform traditional static space syntax analyses into more dynamic, interactive simulations. This transition allows for real-time manipulation and observation of spatial data, facilitating a more nuanced understanding of the impact of spatial configurations on human behavior and urban dynamics.
Moreover, the implementation of this methodology in an open-source and widely accessible platform like NetLogo democratizes the tools necessary for advanced spatial analysis, making them available to a broader range of researchers and practitioners. This continues DepthMap's and Turner's intention that the methodology always remain open access and serve as a basis for further research on the matter.
Phase 1 – replicating Turner's initial implementation. Initially, the research will replicate the seminal experiments conducted by Alasdair Turner at the Tate Britain, as a validation mechanism against the results originally published in the early 2000s, using the current version of the Space Syntax software. This initial phase serves to establish a baseline for comparison and to confirm the fidelity of the DepthMap X environment when reproduced within NetLogo.
Phase 2 - Development of the Simulation Framework. Following the replication phase, a parallel framework will be constructed within NetLogo. This development will involve the implementation of integration diagrams in both software environments to verify functional equivalence. Subsequently, the model will be expanded with key ABM (Agent-Based Modeling) characteristics absent in traditional space syntax approaches, specifically agent communication and attractors in the form of spatial “pins” within the museum environment.
Comparison of agent features present in DepthMap X and the proposed NetLogo framework.
Comparative analysis
The comparative aspect of this study will focus on analyzing the behavior of agents within the established framework both with and without the newly integrated ABM (Agent-Based Modeling) features. This analysis will mimic the procedures employed by Turner in his initial experiments, with a specific focus on achieving a correlation in path diagrams comparable to those observed in Turner and Penn's 2001 research, which reported an R2 correlation of 0.76 against actual human visitor movements.
To do so, three calculations will be developed along the experiment: (i) Mean Squared Error (MSE), (ii) Pearson correlation (R), and (iii) pixel-to-pixel deviation calculation.
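The three comparison measures can be sketched directly over two diagrams flattened to lists of pixel values. The function names and the toy images are illustrative; the actual diagrams come from the framework runs.

```python
import math

# Sketch of the three comparison measures between two diagrams, each
# flattened to a list of pixel values. Toy data for illustration only.

def mse(a, b):
    """Mean squared error between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def pearson(a, b):
    """Pearson correlation coefficient R between two pixel lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def mean_deviation(a, b):
    """Mean absolute pixel-to-pixel deviation."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Two images that differ by a constant brightness shift are perfectly
# linearly related: R = 1 even though MSE and mean deviation are nonzero.
img1 = [0.0, 0.5, 1.0, 0.25]
img2 = [0.1, 0.6, 1.1, 0.35]
```

The toy pair illustrates why Pearson correlation is the right complement to MSE here: a uniform brightness offset between two diagrams leaves R at 1 while inflating MSE, so the two measures answer different questions about similarity.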
The comparison methods of the two existing literature sources that have attempted an ABM implementation of Space Syntax, Ma et al. 12 and Zaloznik, 13 were reviewed before deciding on the methodology for validating results.
Ma et al.'s 2023 research focused on demonstrating the hypothesis that the agents' field-of-view angle is more relevant than their depth of vision. They use a one-factor-at-a-time (OFAT) strategy to compare the emergent paths produced under different angles and depths of vision, concluding that the angle matters more than the depth. The authors use 100 hotspots of heterogeneous attractiveness to evaluate the different paths emerging from the agents' exploration of space. With a similar approach, Zaloznik developed a graphical mapping of the tick counts and the agents' vision on the patches visited, producing a histogram similar to Turner's initial approaches.
This approach will be used to validate the results of the subsequent simulations on the Tate Britain, after the initial approach is mimicked with the current version of the DepthMap software. Nevertheless, the exact mathematical setup of Turner's calculations is not known. While R2 is typically used in the context of regression analysis, when comparing two images the relevant measure is one that assesses how closely two sets of data (in this case, the pixel values of the two images) are linearly related; the Pearson correlation coefficient will therefore be used instead.
Implementation
This research is developed in three phases:
Phase 1. Implementation on DepthMap X, current version of the software, of the Tate Britain analysis developed by Turner for validation of the results obtained.
Phase 2. Development of the NetLogo framework • Creation of the Through-vision diagram • Path tracking diagram for 19 automata agents exploring space • Test of the framework, validating the result of 2 with no attractors.
Phase 3. Implementation of proper ABM features in the NetLogo framework. • Test of the framework with 19 agents and a range of 1-100 attractors (paintings)
For each of the proposed phases, 20 runs of 1,000,000 steps were completed; the runs studied from the NetLogo framework in Phases 2 & 3 are the ones used to validate the results of the subsequent simulations on the Tate Britain.
Graphical user interface, GUI, of the final proposed model
The GUI of the model (Figure 1) was designed based on the usability requirements and the types of variables that needed to be tested for the case study. Nevertheless, all control variables added to the interface were deemed relevant for future experiments. Aside from developing an initial ground-floor plan of the Tate Britain to be analyzed, the design of the interface's environment was grounded in the feasibility of considering each patch in the environment as equivalent to one human step. The following variable control sliders were added to the interface for ease of use: • Number of museum visitors • Number of paintings in the room • Vision angle of the agent-visitors' field of vision • Visual range of the agent-visitors • Time (ticks) spent viewing a painting during a visit. Final interface used for the demo in NetLogo v6.04.

Additionally, a variable counter was implemented to calculate the Through-vision diagram, indicating the maximum value reached while developing the diagram.
Pseudocode for the NetLogo model
The focus of the research is to expand Turner's model 14 by developing a model in NetLogo that turns the Space Syntax process into a proper ABM (Agent-Based Model). Accordingly, special attention has been paid to the implementation of: • TV, through-vision • Two levels of decision-making: a general goal of visiting the maximum number of paintings in a reasonable amount of time, and a local next-step goal • Message passing • Attractors in the form of paintings in the museum.
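The two decision levels can be sketched in Python (standing in for the NetLogo procedures, whose names and internals are assumptions of this sketch): a global check on the visiting goal, and a local choice of the next painting that prefers a received recommendation.

```python
import random

# Illustrative sketch of the two decision levels: a global goal (visit enough
# paintings, then leave) and a local goal (pick the next painting, preferring
# a received recommendation). Names are assumptions, not NetLogo procedures.

def global_goal_met(visited, total_paintings, target_fraction=0.5):
    """Global level: has the agent seen enough paintings to head for the exit?"""
    return len(visited) >= target_fraction * total_paintings

def choose_next_painting(visible_paintings, visited, message, rng):
    """Local level: follow a recommendation first, else pick one in view."""
    if message and message not in visited:
        return message
    unvisited = [p for p in visible_paintings if p not in visited]
    return rng.choice(unvisited) if unvisited else None

def simulate_visit(paintings, sightlines, rng, max_steps=1000):
    visited, message = set(), None
    for _ in range(max_steps):
        if global_goal_met(visited, len(paintings)):
            return visited            # goal reached: head for the exit
        target = choose_next_painting(sightlines(visited), visited, message, rng)
        if target is not None:
            visited.add(target)       # abstracting away the walk to the target
    return visited

rng = random.Random(2)
paintings = [(1, 1), (2, 5), (7, 3), (8, 8)]
# Toy visibility: every unvisited painting is in view (illustrative only).
result = simulate_visit(paintings, lambda visited: paintings, rng)
```

In the real model the `sightlines` callback would be the visibility field of the agent's current patch, and "abstracting away the walk" would be replaced by the per-step movement procedure; the sketch only fixes the control flow between the two decision levels.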
This decision-making process reflects a dynamic interaction between the agent’s goals, environmental stimuli, and the calculated desirability of paintings depending on their distance within its cone of vision. The overall behavior emerges from the agent’s ability to sense and respond to its surroundings, leading to adaptive and context-dependent movements (Figure 2). Implemented flowchart for agents’ decision-making process.
Phase 1: The UCL DepthMap X model of the Tate Britain museum
The first step was to mimic the Tate Britain analysis with DepthMap X. The results obtained are then compared with the results Turner obtained when testing the software in 2010.
Path analysis diagram
The analysis was developed by releasing the same number of agents under the same conditions. It nevertheless used the latest plan of the museum, the one that includes the most recent extension, since the refurbishment authors developed a thorough Space Syntax analysis of the space that was used for testing the results. As can be seen in the following figures, the paths followed and the results obtained were almost identical to those obtained by Turner, the only difference being the paths followed in the refurbished areas.
Integration analysis
As revealed by the integration analysis, the primary alterations in integration are evident in the central element of the building. Opening the building to the public, particularly on the northeast side, has resulted in a reduction in public intensity and accessibility in the main central halls. At the conclusion of the simulation, the results were cross-referenced with the analysis conducted by the Space Syntax team at UCL during the refurbishment works. Consequently, the outcomes were validated against the variables discussed. This validation process ensured that the subsequent step of the research, establishing the ABM (Agent-Based Model) via a NetLogo framework, could be confidently initiated based on well-tested trials.
The NetLogo model for space syntax
Phase 2: Through-vision analysis
Empty-space agent exploration for framework validation. The environment patches were mapped into walkable or non-walkable cells to calculate the Through-vision diagram, which was developed following a color-code mapping similar to the one used when the Tate Britain plan was originally analyzed. As can be seen in the final diagram, the large influence of the new northeast area opened to the public is clear, as the pre-built studies showed.
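One hedged reading of the Through-vision count over a walkable/non-walkable grid is: for every pair of mutually visible walkable cells, increment a counter on every cell their sightline passes through. The sketch below follows that reading; it is not Turner's exact 2007 formulation, and the Bresenham helper and names are assumptions.

```python
from collections import Counter
from itertools import combinations

# Sketch of a Through-vision count: for every pair of mutually visible walkable
# cells, every cell the sightline passes through gets its counter incremented.
# A hedged reading of Turner's measure, not his exact formulation.

def line_cells(a, b):
    """Integer grid cells crossed by the segment a-b (Bresenham's algorithm)."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err, cells = dx + dy, []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def through_vision(walkable):
    """Count, per cell, how many unobstructed sightlines pass through it."""
    counts = Counter()
    for a, b in combinations(sorted(walkable), 2):
        cells = line_cells(a, b)
        if all(c in walkable for c in cells):   # unobstructed sightline
            counts.update(cells)
    return counts

# A straight 1x5 corridor: central cells carry the most sightlines.
corridor = {(x, 0) for x in range(5)}
tv = through_vision(corridor)
```

The corridor demo shows the characteristic pattern of the diagram: cells near the middle of long open runs accumulate the highest counts, which is what the color-coded Through-vision mapping makes visible at building scale.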
Initial runs were conducted without any paintings in the museum's space, aiming to understand how agents moved based on the implemented decision-making process. The purpose was to ensure that the agents followed the protocol correctly before introducing new variables and agent-based-system features in subsequent research. When the automata were calibrated to the environment by adjusting the number of steps to three and the field of view to 170°, a correlation of approximately R2 = 0.77 was observed with actual movement patterns. 15 This behavior aligns well with the results obtained using the NetLogo framework when no attractors were placed in the rooms. As shown in the figure, visitors often tend to visit the same room repeatedly, taking a significantly higher number of steps before moving to the next room. These results are consistent with the poor correlation obtained by Turner for agents with a 360-degree field of view, which were shown to move in circular patterns that do not accurately reflect human movement (R2 = 0.41).
Phase 3: Development of goal and path finding mappings and procedures
After the generation of the Through-vision diagram with NetLogo, the next step is to set the grounds for testing agent behavior on the framework, so that the developed framework can be tested against Turner's results. A color code is implemented to map the number of agents passing over every patch, in a way similar to the original experiments on the Tate Britain. Accordingly, two types of diagrams were generated for comparison: • Agent paths diagram • Number of agents passing over patches and rooms diagram
Correlation is studied between the results obtained from the mimicked Turner implementation and the proposed one including attractors and message passing.
Two levels of decision-making were integrated, alongside the implementation of message passing. The overarching goal for the agents was to visit at least 50% of the displayed paintings within a reasonable timeframe and then exit the museum. On a more localized scale, the agents' goal was to decide which painting to visit next and the subsequent steps to reach it. This local goal could be influenced by messages received from nearby agents suggesting a specific painting as the next one to visit.
A crucial and foundational procedure within the model involved implementing a behavioral rule to prevent agents from becoming stuck in black patches. This rule aimed to ensure that agents could navigate efficiently and avoid getting trapped in areas with limited visibility or accessibility due to black patches.
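One simple way to realize such a rule, sketched below under stated assumptions (the fallback-direction order and function names are illustrative, not the model's actual procedure), is to test the candidate patch before stepping and fall back to alternative directions when it is black.

```python
# Illustrative sketch of the stuck-avoidance rule: before committing to a step,
# the agent checks the candidate patch and re-samples another direction if it
# is non-walkable ("black"). Names and fallback order are assumptions.

def next_patch(pos, heading_dxdy, walkable, fallbacks):
    """Step forward if possible; otherwise try fallback directions in order."""
    for dx, dy in [heading_dxdy] + list(fallbacks):
        candidate = (pos[0] + dx, pos[1] + dy)
        if candidate in walkable:
            return candidate
    return pos  # fully boxed in: stay put rather than enter a black patch

walkable = {(0, 0), (0, 1), (1, 1)}
# Forward (east) is blocked, so the first walkable fallback (north) is taken.
step = next_patch((0, 0), (1, 0), walkable, fallbacks=[(0, 1), (-1, 0), (0, -1)])
```

The final `return pos` branch is the part that matters for the described bug: an agent surrounded by black patches idles in place instead of stepping into one and becoming unrecoverable.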
Tests were conducted with agents having a 170-degree field of view and a three-step process for decision-making regarding destination selection, based on Turner's angular method and steps for updating the decision-making process. Runs were executed with 19 agents, matching the real human data available from observation in 16, and each run was left to reach 1,000,000 steps, following the implementation studies presented. It should be noted, however, that some rounds exhibited interesting behavior, with some agents prematurely leaving certain rooms in the museum before reaching their goal. As shown in the figure below, which corresponds to round three, the primary objective of visiting half of the museum's paintings was achieved even though, at 300,000 steps, three rooms were still left unvisited.
After completing the runs with path tracking, the diagrams generated after 1,000,000 steps were filtered through a grayscale process to facilitate analysis and comparison with the original Through-vision diagram, following Turner's 2007 procedure for validating his framework (Figure 3). As will be explained in the findings section, the correlation between the Through-vision diagram and the results of 19 agents exploring the museum for 1,000,000 steps was evaluated as positive, showing a correlation coefficient of 0.89. This confirmed the reliability of the initial validation framework for future implementations, which will require further research (Figure 4). Path tracking results for Run #1: above, results at 1,000, 5,000 & 20,000 ticks; below, results for 40,000, 50,000 (goal of 50% of paintings) & 100,000. Sample of path tracking diagrams obtained when running the framework for 100,000, 300,000 and 1,000,000 steps.

Findings on UCL DepthMap X versus NetLogo v6.3.4
The results and validation of the framework have been tested on the best-known example of Space Syntax analysis of an already built scenario: the Tate Britain Museum in London. The findings of this study hold significant implications both for the existing literature on the application of agent-based modeling to cognition-based spatial analysis and for the existing knowledge on the areas of application of Space Syntax, as proposed by a more advanced framework built on the one proposed by A. Turner.
Phase 1&2: DepthMap implementation, integration calculation
The final integration diagram shown below is juxtaposed with the current Space Syntax analysis developed by the team responsible for the refurbishment of the museum, as well as with Turner's and the automata's results.
As evident from the diagram, the opening of the large northeast room stands out as the most influential difference, highlighting the validity of the ABM (Agent-Based Modeling) methodology implemented within the Space Syntax framework. Implementing on the current version of the software the exact analysis that Turner carried out in 2001 showed that his results remain valid. Implementing the same process proved highly successful in enhancing the understanding of the data shown by Turner throughout his publications. These validation studies set the ground for the ABM framework in NetLogo proposed in this research.
Through-vision diagram versus path tracking diagram. This analysis will mimic the procedures employed by Turner in his initial experiments, with a specific focus on achieving a correlation in path diagrams comparable to those observed in Turner and Penn’s 2001 research, which reported an R2 correlation of 0.76 against actual human visitor movements.
The first step in setting the initial ground for the comparison of path histograms and results was to develop the Through-vision diagram of the Tate Britain with the developed NetLogo framework. The obtained diagram is shown above. Calculations for Through-vision were implemented as stated in (Turner, 2007) (Figure 5). Left, final integration diagram obtained with DepthMap X. Right, final Through-vision diagram obtained by the author of this research with the NetLogo implementation for the Tate Britain museum. Below, pixel-to-pixel scatter plot, centroid and mean deviation calculation for comparison between the Through-vision diagram and the 1,000,000-step trail mapping diagram.
Simulations with just one agent were run solely to validate the path-tracking mechanism; the behavior observed in the NetLogo framework without attractors shows that visitors frequently linger in the same room, often requiring many steps before transitioning to a different room. This finding echoes Turner's results, where agents with a 360-degree field of view showed low correlation with human movement (R2 = 0.41) due to their tendency to move in circular patterns.
The general comparison and calculation aimed at the R2 = 0.89 used by Turner in (Turner, 2007) for validating the framework's operation before adding the extra ABM features in Phase 3 of the research (Figure 5). The diagrams chosen for the analysis, from the samples obtained across the 20 runs, were merged into one that was considered the mean deviation of all those obtained. Two runs were discarded from the collection as worst-case outliers, as shown in the previous tables of results. The final calculations were as follows:
Mean Squared Error (MSE): 0.1889
Pearson Correlation Coefficient: 0.8978
These results were considered a validation of the implemented framework, opening the door to the implementation of the key agent-based-model features in the next phase of the research.
Findings: Phase 3: Implementation of ABM features
The model was run with the same number of agents (19) for which real human trackings and observations were available, during 1,000,000 steps. The model considered a scale of implementation in which one patch in the environment equals one human step; this value is derived from the average step length of mature humans, approximately 0.77 m (Sutherland et al., 1994).
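The scale assumption above amounts to a one-line conversion between patch counts and walked distance; the constant name is illustrative.

```python
# The model's spatial scale: one NetLogo patch corresponds to one human step,
# using the average mature-human step length of ~0.77 m (Sutherland et al., 1994).

STEP_LENGTH_M = 0.77

def patches_to_meters(n_patches):
    """Convert a patch count (steps walked) into meters."""
    return n_patches * STEP_LENGTH_M

# e.g. an agent that has moved 1,000 patches has walked roughly 770 m.
```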
The initial run with 75 paintings across the museum's various rooms showed an examination threshold between 25,000 and 45,000 steps. Agents were heavily influenced by the presence of paintings in adjacent rooms, showing little incentive to move to rooms without paintings and progressing directly to rooms with paintings, as shown below. The results indicated that the majority of museum visitors spent most of their time in the two main initial rooms of the building. Consequently, the pathfinding behavior was reevaluated and subsequently modified to better accommodate and reflect this observed visitor behavior. Agents move in circular patterns when there are no paintings, as Turner observed. However, they exhibit movement closer to human behavior when paintings are present.
At this stage, agent movement improved, with agents visiting more rooms in the museum within the same number of steps, covering approximately 70% to 80% of the museum. Tracking the goal of visiting at least 50% of the paintings in the museum showed that this was achieved between 50,000 and 75,000 steps at the earliest, with the general trend being around 200,000 steps.
As seen in the figure below, the agent began to explore the space with smoother transitions between rooms. Several initial runs were conducted with just one agent to better understand its behavior before implementing message passing, since agents often stagnated in rooms from which they could not clearly see the paintings in the next room. The full implementation was done with 19 agents over 1,000,000 steps, and one run was extended to 3,000,000 steps to observe whether the percentage of rooms visited by the agents significantly increased.
After all the rounds, the diagram was compared with Turner's trail-mapping diagrams developed in 2002 and 2007. Agent behavior with implemented message passing and attractors in the form of paintings closely resembles the observed human movement patterns from Hillier et al. (1996). Additionally, agent behavior while visiting paintings in various rooms aligns well with the concept of the evolved animat. The paths are similarly linear to those of the evolved animats, which aim to visit as many museum spaces as possible in a reasonable timeframe; Turner implemented that goal as the fitness function and left the animat (Figure 6) to run for 20,000 ticks, using mirrored neural networks to map the visual arrays based on his previous research. Both diagrams are shown below for comparison, highlighting the alignment of the rectilinear path trail maps. Left, sample of histograms obtained. Centre, sample of pseudo-rectilinear paths observed. Right, Turner's 'animat' pseudo-rectilinear paths.
Conclusions & implications of the findings
The findings of this study confirm the validity of the NetLogo-based framework as an alternative agent-based modeling approach for spatial analysis in alignment with Space Syntax principles. By replicating Turner’s experiments at the Tate Britain Museum, the model achieved a strong correlation (R2 = 0.89) with the original DepthMap outcomes, reinforcing the reliability of this simulation environment.
One of the key observations is that the introduction of agent-based system features, specifically message passing and attractor-based navigation, significantly influenced agent movement patterns. Agents exhibited more human-like exploratory behavior when visual attractors were incorporated into the environment, contrasting with the random or circular movement observed in cases where no attractors were present. This behavior aligns with Turner’s observations regarding the limitations of automata with full 360-degree vision and supports the claim that spatial cognition and navigation are enhanced by agent communication.
The study also identified that agent behavior evolved through interactions with the environment, resulting in emergent movement patterns. The integration of message passing, wherein agents influenced each other’s decisions by sharing information about visual stimuli, contributed to a more efficient exploration of the museum space. These results suggest that incorporating social behavior modeling into space syntax simulations can provide new insights into pedestrian movement analysis.
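The message-passing round described above, in which agents share information about visual stimuli, can be sketched in miniature. This is an illustrative Python sketch under assumed data structures (dict-based agents with `pos`, `seen_paintings`, and `known_paintings` fields, and a hypothetical interaction `radius`); it is not the study's NetLogo implementation.

```python
def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def broadcast_sightings(agents, radius=5):
    """One message-passing round: each agent shares the painting
    locations it can currently see with agents within `radius`,
    who merge them into their own knowledge and can steer toward
    them on later steps.
    """
    for sender in agents:
        for receiver in agents:
            if sender is receiver:
                continue
            if distance(sender["pos"], receiver["pos"]) <= radius:
                receiver["known_paintings"] |= sender["seen_paintings"]
```

Each round only merges sets, so repeated rounds are idempotent once all sightings have propagated: knowledge spreads through the group without any agent re-observing the stimulus itself.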
Conclusions and future research
This research successfully established a validated framework for integrating agent-based modeling into space syntax analysis using NetLogo. By aligning the results with Turner’s original studies, the study demonstrated that the developed framework can serve as a robust tool for spatial cognition analysis. The findings underscore the potential of agent-based simulations to enhance our understanding of movement behaviors in architectural environments.
While the study has demonstrated the feasibility of integrating agent communication and attractors into the NetLogo framework, further research is needed to refine these methodologies. Future work should explore:
Expanded agent interactions: introducing additional cognitive capabilities such as memory retention and adaptive learning to allow agents to develop movement patterns over time.
Diverse spatial configurations: testing the model across different architectural environments to assess its applicability beyond the Tate case study.
Real-world data comparison: validating the framework using large-scale real-world pedestrian tracking datasets to further enhance its predictive accuracy.
Computational efficiency enhancements: optimizing the simulation to accommodate larger agent populations and more complex spatial networks without compromising computational performance.
By addressing these directions, future studies can continue advancing the use of agent-based modeling in space syntax research, bridging the gap between computational simulations and real-world spatial cognition phenomena.
