Oct 18, 2017
THEME: Technology + Innovation

Spatial Intelligence: Applying Data and Location to Create Insight

Note: The following post is abridged from a full feature article. Follow this link for the complete version, including the interactive elements.



What do we mean by spatial intelligence? For our purposes, it is data related to location. Spatial intelligence applies at any scale, from a room or a building to a city or a planet, and may range from the occupancy information of a single workstation to one's global web of social-media connections. Human beings have remarkable spatial intuition (we inhabit a range of spaces daily), but today's plethora of data from devices and systems allows for even greater spatial understanding: digital tools and scalable algorithms, including machine learning, let us run ever-more-complex analyses and visualizations on that information. To demonstrate, we applied these resources to an example question: Where might Amazon situate its second North American headquarters?


Let us begin with intuition. We distributed an internal survey to over 100 of our practice area leaders and to members of our Cities + Sites group. Below are the results, with the answers based on each respondent’s personal metrics, experiences, and gut intuition.


Our next step was to bring data to bear on the problem. We started with a series of demographic variables tied to regions in the continental U.S.

Then we narrowed our nationwide dataset to the areas of focus: metropolitan statistical areas (MSAs) and regions within a 45-minute drive of an international airport. For each MSA, we included a broad set of demographic indicators such as age, ethnicity, education, employment, commute time, commuting methods, and migration patterns.
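The filtering step above can be sketched in a few lines of pandas. This is a minimal illustration with made-up records; the column names, values, and the 45-minute threshold column are hypothetical stand-ins, not the actual schema used in the analysis.

```python
import pandas as pd

# Hypothetical sample of MSA records (illustrative values only).
msas = pd.DataFrame({
    "msa": ["Seattle", "Austin", "Sacramento", "Rural County"],
    "airport_drive_min": [25, 20, 30, 90],  # minutes to nearest international airport
    "median_age": [37.1, 34.8, 36.4, 44.2],
    "pct_bachelors": [0.42, 0.45, 0.33, 0.18],
    "mean_commute_min": [27.4, 26.1, 25.9, 31.0],
})

# Keep only regions within a 45-minute drive of an international airport.
focus = msas[msas["airport_drive_min"] <= 45].reset_index(drop=True)
print(focus["msa"].tolist())
```

In practice each row would also carry the full set of demographic indicators (age, education, employment, commuting, migration), ready for the variable intersections described next.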

The advantage of our method is that we can easily intersect multiple variables, turning the sets of data on and off as we need them. This allows us to highlight and understand the underlying differences between the MSAs, viewing the results visually in interactive maps. Amazon is using its impact and success in Seattle to craft the parameters of its RFP for a second headquarters, and we can visually compare its current home against other potential cities.
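Toggling variables on and off amounts to intersecting boolean masks, one per criterion. A minimal sketch, again with hypothetical columns and thresholds chosen purely for illustration:

```python
import pandas as pd

# Illustrative MSA table; values and thresholds are hypothetical.
msas = pd.DataFrame({
    "msa": ["Seattle", "Austin", "Denver", "Columbus"],
    "pct_bachelors": [0.42, 0.45, 0.44, 0.36],
    "mean_commute_min": [27.4, 26.1, 25.8, 23.2],
    "net_migration_rate": [0.012, 0.018, 0.010, 0.004],
})

# Each criterion is an independent boolean "layer" that can be toggled.
layers = {
    "educated": msas["pct_bachelors"] >= 0.40,
    "short_commute": msas["mean_commute_min"] <= 28,
    "growing": msas["net_migration_rate"] >= 0.01,
}
active = ["educated", "growing"]  # turn layers on or off here

# Intersect only the active layers.
mask = pd.Series(True, index=msas.index)
for name in active:
    mask &= layers[name]
print(msas.loc[mask, "msa"].tolist())
```

Swapping entries in and out of `active` is the code equivalent of toggling map layers in the interactive visualization.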





Our process opens up great possibilities. However, selecting and adjusting each variable by hand is time-consuming. Enter machine learning, a branch of data science whose algorithms improve with each pass over the data. These algorithms perform at scale; they let users add further variables and produce increasingly detailed results, all in a time-efficient way.

Our aim was to group like zip codes based on all 220 variables we provided. To do so we selected the data that we wanted the algorithm to operate on, but we did not specify the combinations or weighting that should be used to reduce the data's dimensions to a 2D field of points. This was an example of unsupervised machine learning, and here are the results.
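The post does not name the specific algorithm, but one common unsupervised combination matching this description is dimensionality reduction followed by clustering. The sketch below uses PCA and k-means from scikit-learn on synthetic data as an assumption of how such a pipeline could look; the real analysis may have used different methods.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in: 300 "zip codes" x 220 demographic variables.
X = rng.normal(size=(300, 220))
# Plant shared signal in two groups so there is structure to recover.
X[:100, :10] += 3.0
X[100:200, 10:20] += 3.0

# Standardize, then reduce 220 dimensions to a 2D field of points.
# No combinations or weightings are specified by hand (unsupervised).
X2 = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

# Group like "zip codes" in the reduced 2D space.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)
print(X2.shape, np.unique(labels))
```

The algorithm, not the analyst, decides how the 220 inputs collapse into the 2D layout, which is what makes the grouping unsupervised.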


To summarize this exercise, we began with a survey of our experienced designers and planners to see what their intuition said. We collected and organized publicly available data and then embarked on a mapping expedition to interpret and visually present the parameters of the Amazon RFP. Finally, we utilized machine learning to analyze 220 different variables from each potential city and its zip code sub-regions. Sacramento and San Diego shared the most similarities with the Seattle market, but Austin—our survey’s second most common response—was also well-supported by our analysis. The principles exhibited here can be applied to many potential projects and investigations.

Read our in-depth (and interactive!) analysis.  
