SILC Showcase

Showcase March 2012: Where do you think you are: A virtual environment assessment of navigation ability



Steven Weisberg¹, Russell Epstein², Nora S. Newcombe (PI)¹, Victor Schinazi³, and Thomas F. Shipley¹

¹Temple University, ²University of Pennsylvania, ³Strategic Spatial Solutions

 

See the two published articles below that use the Virtual Silcton (or Ambler) paradigm:

  • Weisberg, S. M., Schinazi, V. R., Newcombe, N. S., Shipley, T. F., & Epstein, R. A. (2014). Variations in cognitive maps: Understanding individual differences in navigation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(3), 669-682. [DOI]
  • Weisberg, S. M., & Newcombe, N. S. (in press). How do (some) people make a cognitive map? Routes, places, and working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition. [DOI]

Imagine travelling to a friend's house in an unfamiliar area. What would help you get there: being told the names of the roads and which way to turn? Looking at a map? After just one trip, would you be able to find your way back? Navigating successfully requires several different pieces of information to be stored in and retrieved from memory, and relies on several different strategies (Wolbers & Hegarty, 2010). To gain a greater understanding of individual differences in navigation ability, we assessed navigation ability objectively in a virtual environment. We provide evidence that this assessment captures information not accessible via self-report data alone, and may be a more insightful measure of navigation ability.

Research on navigation ability has demonstrated large individual differences (e.g., Blajenkova, Motes, & Kozhevnikov, 2005; Wen, Ishikawa, & Sato, 2011), but has suffered from the lack of a rigorous, objective assessment method, relying instead on subjective self-reports (e.g., the Santa Barbara Sense of Direction Scale [SBSOD]). Despite its brevity, the SBSOD has demonstrated a reliable and robust relationship to myriad navigation tasks (Hegarty, Richardson, Montello, Lovelace, & Subbiah, 2002), and has thus been an incredibly useful addition to the literature on navigation. However, self-report data in general, while having the advantage of aggregating recollected experience from across an entire lifespan, may be biased, incomplete, or otherwise inaccurate (Kruger & Dunning, 1999). Moreover, because of its single-factor structure, the SBSOD may not capture variance in navigation performance accounted for by a variety of cognitive factors (Hegarty et al., 2002). Although self-report data correlate strongly with real- and virtual-world navigation tasks, a behavioral measure of navigation ability that is easy to administer and score could provide a complementary tool.

We created a desktop virtual environment, based on a real-world college campus for which we have data on spatial learning (Schinazi, Nardi, Newcombe, Shipley, & Epstein, under review; see the June 2010 Showcase for more details), and administered it to a sample of undergraduate psychology students (N = 49). In the virtual environment, participants learned the names and locations of eight buildings along two separate, non-connecting routes (see the layout in Figure 1). Participants then learned two connecting paths between the two routes. This design required participants to make spatial inferences about parts of the environment that were not directly visible, and was based on the design of previous real-world navigation studies (Ishikawa & Montello, 2006).

[Figure 1]

Figure 1. Graphical depiction of the layout of buildings in the virtual and real world environments. The red lines indicate the routes participants followed to learn the buildings. The blue lines indicate the paths participants followed to learn how the routes were connected. Participants never saw this view of the environment. The model-building task required participants to recreate this layout of buildings.


After virtually travelling along both routes and the connecting paths, participants completed two tasks designed to test the accuracy of their learning of the layout. For the pointing task, participants were shown a first-person view of the virtual environment from a viewpoint alongside each of the eight buildings. Participants used the mouse to rotate the view, and clicked to indicate where they would point to each of the other buildings. The angle of error between a participant's response and the correct direction was measured, with higher errors indicating worse performance. For the model-building task, participants were shown a blank screen on the computer monitor, which they were told represented a blank map of the virtual environment. Participants then dragged and dropped images of the buildings around the screen to indicate where the buildings belonged in the environment.

In addition to the tasks in the virtual environment, participants completed a battery of spatial tasks and self-report measures, including a mental rotation task, a perspective-taking task, and self-report measures of navigation ability, verbal ability, and small-scale spatial ability.
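The Showcase does not spell out how the pointing error was computed. A minimal sketch, assuming building coordinates in the horizontal plane and headings measured in degrees (the function and variable names here are hypothetical, not the study's code):

```python
import math

def pointing_error(stand_xy, target_xy, response_deg):
    """Absolute angular error, in [0, 180] degrees, between a
    participant's response heading and the true bearing from the
    standing location to the target building."""
    dx = target_xy[0] - stand_xy[0]
    dy = target_xy[1] - stand_xy[1]
    true_deg = math.degrees(math.atan2(dy, dx)) % 360.0
    # Wrap the signed difference into [-180, 180) so that, e.g.,
    # a response of 10 deg against a true bearing of 350 deg scores
    # 20 deg of error rather than 340.
    signed = (response_deg - true_deg + 180.0) % 360.0 - 180.0
    return abs(signed)

# True bearing to the target is 0 deg; a response of 10 deg errs by 10 deg.
print(pointing_error((0.0, 0.0), (1.0, 0.0), 10.0))  # -> 10.0
```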

 

[Figure 2]

Figure 2. The view of the virtual environment at the start of one of the two routes. Participants virtually travelled along the path from “Start,” following the arrows, to “Finish,” and back to “Start.” Along the way, participants were required to learn the names and locations of four buildings per route (eight overall). Buildings to be learned were indicated by a sign alongside the route (yellow, center of the image). The pointing task required participants to point to all other buildings using the mouse from a viewpoint like the one above, alongside one of the learned buildings.


The design of the pointing task allowed two components of navigation ability to be analyzed. For 14 trials, the building being pointed to was directly visible from the pointing location (Seen trials). For 42 trials, the building being pointed to could not be seen (Unseen trials). Seen trials tested only participants' ability to remember the names of the buildings. Unseen trials tested memory for the names of the buildings as well as the accuracy of participants' spatial representation of the layout of the buildings.
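As an illustration of how Seen and Unseen trials might be aggregated per participant, here is a sketch with hypothetical trial records (the values are illustrative, not the study's data):

```python
from collections import defaultdict

# Hypothetical trial records: (participant_id, seen_flag, error_degrees).
# Each participant contributes 56 trials (8 buildings x 7 targets):
# 14 Seen and 42 Unseen.
trials = [
    ("p01", True, 8.2), ("p01", False, 41.5),
    ("p02", True, 30.1), ("p02", False, 88.0),
    # ... remaining trials omitted for brevity
]

totals = defaultdict(lambda: [0.0, 0])  # (participant, condition) -> [sum, n]
for pid, seen, error in trials:
    key = (pid, "seen" if seen else "unseen")
    totals[key][0] += error
    totals[key][1] += 1

mean_error = {key: err_sum / n for key, (err_sum, n) in totals.items()}
print(mean_error)  # per-participant mean error for Seen vs. Unseen trials
```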

Figure 3 displays the distribution of participants’ performance on Seen compared to Unseen trials, identified by score on the SBSOD. Immediately apparent in Figure 3 is the vacant upper-left quadrant of the graph, which would contain participants who pointed accurately to Unseen buildings but inaccurately to Seen buildings. This suggests that accuracy on Unseen trials requires meeting an accuracy threshold on Seen trials (but that being accurate on Seen trials is not sufficient for being accurate on Unseen trials). Accuracy on both Unseen and Seen trials was significantly correlated with SBSOD score, but the variability apparent in Figure 3 shows that SBSOD score alone does not entirely predict navigation performance.
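The vacant-quadrant observation can be made concrete with a median split on both error measures. The sketch below (with made-up per-participant mean errors, not the study's data) counts participants in each of the four cells; the claim in the text predicts an empty "poor on Seen / good on Unseen" cell:

```python
import statistics

# Hypothetical per-participant mean errors in degrees: (seen, unseen).
scores = [(8, 40), (10, 35), (25, 80), (30, 85), (9, 78), (28, 82)]

seen_median = statistics.median(s for s, _ in scores)
unseen_median = statistics.median(u for _, u in scores)

cells = {"good/good": 0, "good/poor": 0, "poor/good": 0, "poor/poor": 0}
for seen, unseen in scores:
    seen_label = "good" if seen <= seen_median else "poor"        # seen accuracy
    unseen_label = "good" if unseen <= unseen_median else "poor"  # unseen accuracy
    cells[seen_label + "/" + unseen_label] += 1

# The vacant-quadrant claim predicts a (near-)empty "poor/good" cell:
# poor pointing to Seen buildings but good pointing to Unseen ones.
print(cells)
```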


[Figure 3]

Figure 3. Average error for each participant is plotted for both unseen and seen buildings. Participants who performed well on the unseen pointing trials were uniformly accurate on seen pointing trials. However, participants who performed poorly on unseen pointing trials were split between good and poor performance on seen pointing trials.


The data show a broad range of performance on both the pointing task overall (M = 36.42° error, SD = 11.91) and the model-building task (M = .48, SD = .27). We thus have reason to conclude that a full range of navigation performance can be captured by this assessment. Moreover, while the SBSOD correlated with both navigation tasks, the small-scale spatial ability questionnaire and the verbal ability questionnaire correlated less strongly. This supports our assertion that the virtual environment tasks primarily tap navigation ability and are not confounded by general intelligence factors (even though all three questionnaires correlate significantly with each other).
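The Showcase does not name the model-building metric; a score on a 0-to-1 scale (M = .48) would be consistent with a fit statistic such as the r² from bidimensional regression, a common way to compare a reconstructed configuration against the true layout after factoring out translation, rotation, and uniform scaling. A minimal sketch under that assumption, using the complex-number form of Euclidean bidimensional regression (not necessarily the study's scoring method):

```python
import numpy as np

def bidimensional_r2(true_xy, placed_xy):
    """r^2 of a Euclidean bidimensional regression mapping a
    participant's placements onto the true layout; translation,
    rotation, and uniform scaling are factored out of the score."""
    a = np.array([complex(x, y) for x, y in placed_xy])  # placements
    b = np.array([complex(x, y) for x, y in true_xy])    # true layout
    a_c, b_c = a - a.mean(), b - b.mean()
    # Complex least-squares slope: the best similarity transform a -> b.
    beta1 = np.sum(np.conj(a_c) * b_c) / np.sum(np.abs(a_c) ** 2)
    residual = b_c - beta1 * a_c
    return 1.0 - np.sum(np.abs(residual) ** 2) / np.sum(np.abs(b_c) ** 2)

# A layout placed rotated 90 deg, scaled x2, and shifted still scores 1.0,
# because only the relative configuration matters.
true_xy = [(0, 0), (4, 0), (4, 3), (0, 3)]
placed_xy = [(1, 1), (1, 9), (-5, 9), (-5, 1)]
print(round(bidimensional_r2(true_xy, placed_xy), 3))  # -> 1.0
```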

Future work will further tease apart the individual differences in navigation that make finding a friend's house an easy job for some of us and a daunting chore for others. Tools like this one, which can easily and accurately assess navigation ability, will also pave the way to determining the efficacy of interventions designed to improve spatial skills.

References

  • Blajenkova, O., Motes, M. A., & Kozhevnikov, M. (2005). Individual differences in the representations of novel environments. Journal of Environmental Psychology, 25(1), 97–109. doi:10.1016/j.jenvp.2004.12.003
  • Hegarty, M., Richardson, A. E., Montello, D. R., Lovelace, K., & Subbiah, I. (2002). Development of a self-report measure of environmental spatial ability. Intelligence, 30(5), 425–447.
  • Ishikawa, T., & Montello, D. R. (2006). Spatial knowledge acquisition from direct experience in the environment: Individual differences in the development of metric knowledge and the integration of separately learned places. Cognitive Psychology, 52(2), 93–129. doi:10.1016/j.cogpsych.2005.08.003
  • Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121–1134. doi:10.1037/0022-3514.77.6.1121
  • Wen, W., Ishikawa, T., & Sato, T. (2011). Working memory in spatial knowledge acquisition: Differences in encoding processes and sense of direction. Applied Cognitive Psychology, 25(4), 654–662. doi:10.1002/acp.1737
  • Wolbers, T., & Hegarty, M. (2010). What determines our navigational abilities? Trends in Cognitive Sciences, 14(3), 138–146.