The Map Happenings ‘Must Have’ Mappy Books for 🎅🏼🕎🎄🎁
Ok, it’s that time of year isn’t it? When people in the US celebrate something called “Thanksgiving” while everyone else celebrates… well, let’s not get into that.
And immediately afterwards of course it’s all about commerce. Money has to be spent. Wallets have to be emptied. And credit cards have to be maxed out.
How could Map Happenings possibly not do its part?
So without further ado, and in no particular order, please gorge on this rather delicious list of fourteen ‘mappy’ books that you, your friends or your family might slurp up.
These mapping gems range from amusing, to epic, to colourful, to insightful, to quaint, to just pure ‘must have’.
I didn’t bother with any affiliate links, so don’t worry, no enrichment of yours truly will take place if you happen to buy something.
Terrible Maps
One of my favourites. A very cathartic book of gloriously silly maps that will just make you laugh. Enough said.
Michael Howe (2024)
Maps on Vinyl
Who doesn’t like music and who doesn’t like maps? Maps on Vinyl provides the perfectly delicious combo. Showcasing 415 album covers that feature maps of all kinds, it’s the perfect gift for music lovers who have a soft spot for maps.
Damian Saunder (2025)
This Way Up: When Maps Go Wrong
Written by the “Map Men”, the guys that make the most fun and amusing mapping videos on YouTube. Now they’ve published their first book and it’s already a bestseller. In summary: the world’s very best wrong maps.
Mark Cooper-Jones and Jay Foreman (2025)
Longitude
This book tells the unbelievably epic story of one of the most important technical innovations of the 1700s: how to accurately measure longitude. Told by Dava Sobel in her bestseller, it’s a gripping tale of clocks, rivalry, obsession and precision that quite literally shaped how we navigate the world. A reminder that behind every neat, gridded atlas lies a messy human saga.
Dava Sobel (1995)
The Atlas of Unusual Borders
A treasure chest of peculiar enclaves, exclaves and geopolitical leftovers that somehow still exist. It’s geography’s blooper reel — except it’s all real, and in many cases still actively confusing the people who live there.
Zoran Nikolic (2019)
Our Dumb World by The Onion
Last published in 2007 but still relevant, this 73rd edition contains fewer clouds, curvier latitude lines and better veiled xenophobia. What better distraction for today’s world?
The Onion (2007)
An Atlas of Extinct Countries
Defoe guides us through the neglected graveyard of failed, forgotten, or downright absurd nations. Each entry reads like a eulogy for a geopolitical oddball — humorous, sharp, and quietly insightful. For anyone who loves borders that didn’t survive contact with reality, this atlas is both entertaining and surprisingly poignant.
Gideon Defoe (2022)
A History of the World in 12 Maps
In this New York Times bestseller, Brotton chooses a dozen maps that shaped how humanity sees itself — from ancient Babylon to Google Earth — and digs into the power, politics, and ideology baked into each one. It’s not just cartography, it’s a history of human ambition disguised as geography. A smart reminder that every map tells a story — and none are neutral.
Jerry Brotton (2014)
How to Lie with Maps
First published in 1991 and now in its third edition, this book has become a cult classic for anyone with even the slightest interest in maps. Monmonier pulls back the curtain on how maps exaggerate, distort, manipulate and, intentionally or not, mislead. It’s not a takedown of cartography but a celebration of its power. Read this and you’ll never look at a map the same way again (which is exactly the point).
Mark Monmonier (2018)
Transit Maps of the World
Pure catnip for anyone who geeks out over subways. Ovenden assembles transit maps from cities across the globe, revealing their weirdness, elegance, and design lineage. From tangled spaghetti networks to minimalist masterpieces, this is an atlas of urban identity as expressed through tunnels and tracks.
Mark Ovenden (2015)
Utterly British Maps
Ok, this brand new book is for Anglophiles and it’s McKenzie’s first publication. It feels like rummaging through a century-old Ordnance Survey drawer with a steaming mug of Yorkshire tea. It uncovers the oddities, curiosities, and wonderfully niche cartographic trivia that only Britain could produce.
Helen McKenzie (2025)
Great City Maps
As a publisher, DK always creates the most beautiful books and this one is no exception. Great City Maps provides a lush visual tour of the world’s iconic cities through historical maps, bird’s-eye illustrations and drawn urban fantasies. It’s the closest you can get to time travel using paper, making the book as much about culture as cartography.
DK Publishing (2016)
Brilliant Maps for Curious Minds
Wright calls himself a ‘cartophile’ and it shows. This is the world explained through maps you didn’t know you needed. He curates a collection that ranges from clever to enlightening to delightfully bizarre, each one offering a tiny, addictive hit of geographic insight. It’s the sort of book that eats an afternoon before you realise it.
Ian Wright (2019)
The Rand McNally Road Atlas!
Ok, this one is mainly to confuse or piss off the young ‘uns. And for all you yanks: I’ll bet you didn’t even know it was still published, did you? The atlas is a charming relic and a love letter to the era when you had to use your own noggin to plan your trip.
Rand McNally (2025)
-
Mapping Amazing: Bee Maps
As many of you may know, my career started back in 1985 at a wee company called Etak. This startup, seed funded by Nolan Bushnell, was most famous for pioneering in-vehicle navigation systems. It was about 20 years ahead of its time.
But it didn’t stop there. Navigation systems require digital maps. The problem was that there weren’t any available at that time, so Etak had to invent a system to manufacture them at scale. And yours truly was privileged to be part of the team that designed and built the production line.
It wasn’t straightforward. Using a VAX minicomputer with 2MB RAM the team designed a system that was used to scan photographic images of topographic maps. These were then manually digitised on cobbled together PC-clones that used very expensive state-of-the-art graphics cards.

Topographic map used as source for digital map making. Credit: USGS

Despite the herculean efforts of everybody involved there was a little problem. The accuracy of the maps depended on the source material. In our case we relied on topographic maps published by national mapping agencies. Alas, these maps were often years out of date.
Don’t forget this was 1985. Aerial imagery and satellite imagery just wasn’t readily available back then. We could (and did) contact local agencies for more up-to-date maps of critical intersections, but this was really hard to do at scale. It was just too time consuming and too expensive to track all the material down.
The original Etak Navigator didn’t provide turn-by-turn directions. Instead it guided you to your location by a flashing star on the map. You zoomed the map as you drove and had to use your own noggin to figure out which roads to take to get there. As a result the requirements for the digital map were incredibly light: we didn’t need to collect information on one way systems or turn restrictions. We just collected the streets, the street names and the addresses and we made sure we got the road topology right.
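That lightweight map model (streets, street names, addresses and road topology) can be pictured as a simple graph: street segments as edges carrying a name and address ranges, with intersections emerging wherever segments share a node. Here’s a small Python sketch of the idea — the data model is entirely hypothetical, not Etak’s actual format:

```python
# Illustrative sketch of a minimal 1985-style digital street map:
# segments are graph edges with a name and address ranges; the
# "topology" falls out of segments sharing node IDs.
# (Hypothetical data model, not Etak's actual format.)

from collections import defaultdict

# Each segment: (from_node, to_node, street_name, left_addrs, right_addrs)
segments = [
    (1, 2, "Main St", (100, 198), (101, 199)),
    (2, 3, "Main St", (200, 298), (201, 299)),
    (2, 4, "Oak Ave", (1, 49), (2, 50)),
]

# Index node -> incident segment IDs: this is the road topology
topology = defaultdict(list)
for i, (a, b, *_rest) in enumerate(segments):
    topology[a].append(i)
    topology[b].append(i)

# Node 2 is an intersection shared by all three segments
print(sorted(topology[2]))  # → [0, 1, 2]
```

Note there’s nothing here about one-way systems or turn restrictions — exactly the attributes that only became necessary (and painful) once turn-by-turn guidance arrived.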
It was only later when the enterprise customers like Bosch and GM wanted real-time guidance that we had to collect more information like one ways and turn restrictions. And it was so painful!
Fast forward a decade to 1995 and satellite imagery and low altitude aerial photography became more readily available, so the job of collecting the basic street network got easier. But the road attributes were still hard. Digital mapping companies had to send out crews in cars, with one person driving and another person collecting data using pen, paper and a clipboard. Ouch!
It was natural to start looking at technology to collect this information automatically. Indeed in the 1990s mapping companies started mounting cameras on vehicles. But it wasn’t until 2007 when Google launched Street View that it became common enough that the general public started to notice.
People often got excited to see a Google Maps car on their street — as well as cars from rival fleets such as TomTom, HERE and, later, Apple Maps.

Google Maps Car 
HERE Maps Car 
Apple Maps Car 
TomTom Maps Car

These fleets, each in their hundreds, drive around collecting all manner of data: not just the photographic images, but also a ton of information deduced from them, for example street names, one ways, turn restrictions, lane information, stop signs, traffic lights and speed limits, to name just a few.
But there’s a problem.
Running a fleet of 100+ dedicated vehicles solely for map data collection is expensive.
And then there’s an even bigger problem: that 100+ vehicle fleet simply can’t collect everything, everywhere, all at once.
Even Google Maps can’t do this. It’s evident from a typical Google Street View image: if you look closely you can see the image capture date. Take the example image below: in this case it’s from October 2024, so now it’s over a year old1:

Google Street View Image — Credit: Google.

This issue of trying to keep volatile map data current became abundantly clear to me in 2019 when Apple Maps became the first consumer mapping product to display stop signs and traffic lights on a map.
I remember thinking: “Do you realise what you’ve just done? Do you realise what it’s going to take to maintain all that data? Do you realise just how often this data changes?!”
And it didn’t stop there. Speed limits were added to the map too. And that data is even more volatile…
But it gets worse.
Let’s move on to a product whose mapping needs go way beyond that of a consumer navigation app: autonomous vehicles.
With very few exceptions these vehicles require a map an order of magnitude more detailed. And the mapping industry has given this very detailed map a name: they call it an “HD Map”. It has not only a ton more data, but centimetre level accuracy too.
Here’s what a typical HD map looks like:

As you can see it’s capturing gobs of detail that you don’t typically see in your everyday Google Map or Apple Map.
The companies developing autonomous vehicles rely on HD Maps to varying degrees. Waymo and GM/Cruise are the biggest users. If you’re interested, you can nerd out on the details in this table (click/tap to embiggen):

Credit: ChatGPT

So what on earth do you do?
First, you could collect all the necessary data yourself and try to keep it up to date. This is what Waymo does. It may go a long way to explain the astronomical cost of their service:

Image credit: Leo Y. Liu. Full post here.

Alternatively you could take another approach. And that’s where this week’s “Mapping Amazing” company, Bee Maps, comes in:

Credit: Bee Maps

Now I should make it clear from the start that Bee Maps isn’t trying to do everything, but they are doing a hell of a lot. And it’s not just the technology that’s interesting. It’s their unique business model for accomplishing it.
So let’s dive in…
Founded by Ariel Seidman2, this 45 person company based in San Francisco got its start 10 years ago in 2015. Their first product was a platform called Hivemapper. Rather bravely it required the development of hardware as well as software.3
Why hardware?
Because you can’t just write a map data capture app for a smartphone and expect it to work in the real world. In vehicles phones overheat — quickly and easily. Also, they don’t have accurate enough GPS. Besides which in order to capture map data continuously you need a dedicated device, so it would never be practical to use a personal phone.
Bee’s hardware is now on its third generation4. The device is about the size of a small hardcover book. It takes about five minutes to install and get running. After that you never have to do anything. No app to start up. No button to press. No firmware to update. It’s completely “fire and forget”.
The device has a main camera, stereo depth cameras and, crucially, a high precision GPS. And it has built-in LTE for uploading data anywhere around the globe:

The Bee Maps device — Credit: Bee Maps

The original goal of Bee Maps was to gamify the collection of map data. The incentive? As you drive and collect data you earn currency: in this case a cryptocurrency called HONEY. And the Hivemapper network is built in such a way that it can be used to incentivise users to collect data in particular locations. They call these HONEY Bursts.
The idea was that an individual who was driving all hours for their job (e.g. an Uber driver) would opt to install a Bee Maps device in their vehicle, and use it to earn some extra dosh on top of the money they were earning for their main job.
And the customers for all this data the Bee Maps devices collect? The traditional map makers of course! And it must make them salivate when they see the ever growing list of data being sucked up:
- Signs for one way, speed limit, stop, yield, turn restriction, highway exit
- Signs for parking restriction, height restriction, rail crossing
- Traffic lights
- Road widths and lanes
- Road construction
- Fire hydrants
In the US they’ve even started to collect speed cameras, gas prices and toll prices.
Bee Maps’ business model has pivoted a little since they were founded. It’s now a subscription model: starting at US$19 per month you get access to a Bee device. And yes, you still earn HONEY cryptocurrency by driving around.
With a Bee device you get all the benefits of a traditional dashboard camera: continuous video to be used in case of accident reports and telemetry to be used for car insurance discounts.
Commercial fleets get even more: monitoring of all trips and events, including any exciting driving. Here’s a snapshot of the Bee Maps dashboard proving yours truly is a great supporter of exciting driving himself:

In effect Bee Maps is now competing with the likes of Garmin for consumer dash cams and the likes of Samsara and Lytx for fleet dash cams.
But they have a huge edge.
Unlike their competitors, Bee Maps is earning an ever growing revenue stream from the map data they collect. And they’re already licensing their map data to some very significant customers. For example: TomTom, HERE, Mapbox, Lyft, Trimble, MAXAR, and more recently, VW.
All this has attracted new investors: Bee Maps closed a US$32M Series A last month.
It’s not only the wealth of data that Bee Maps is capturing that’s impressive, it’s their coverage too:
- 22M unique kilometres mapped (over 13M miles), roughly 3X the US road network.
- 36% of all roads around the globe.
- 665M km mapped in total (meaning each road has been captured multiple times).
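Taken at face value, those numbers imply each unique road has been captured many times over. A quick back-of-envelope check in Python (the US road-network figure is my own rough assumption, not a Bee Maps number):

```python
# Back-of-envelope check on the coverage stats quoted above.
unique_km = 22_000_000          # unique kilometres mapped
total_km = 665_000_000          # total kilometres driven
us_road_network_km = 6_900_000  # rough US public-road figure (my assumption)

avg_passes = total_km / unique_km
print(f"average passes per unique km: {avg_passes:.0f}")          # → 30
print(f"multiple of US road network: {unique_km / us_road_network_km:.1f}x")
```

Roughly 30 passes over each unique kilometre on average, which is precisely what you need if you want to keep volatile data like speed limits and road construction fresh.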
One statistic I find particularly cool: there are tens of thousands of active weekly Bee devices. Compare that to a traditional map maker who has a few hundred dedicated mapping vehicles at best (which are certainly not all active all of the time). In other words Bee Maps’ fleet is at least two orders of magnitude bigger than a traditional mapping company. Nice.
You can start to appreciate the coverage by looking at Bee’s Strava-like heat maps. See below (click/tap to embiggen) or go here for the full interactive coverage map.

Bee Maps: Europe 
Bee Maps: Japan, Korea 
Bee Maps: SE Asia 
Bee Maps: US, Canada

So I know what some of you nerdy types might be thinking: surely companies like Tesla are capturing enough data from all the cameras on their cars that they far surpass what Bee Maps is doing?
Well no.
Firstly: if you approach Elon and ask him nicely if you can license his data, he’s simply going to tell you to fuck off. Secondly: Teslas are only sold to well-to-do people, ipso facto they only generally collect data in well-to-do places. Even Toyotas don’t go everywhere!
Yeah, I know, perhaps one day the likes of the NVIDIA Drive Platform might get embedded in enough vehicles. Or perhaps we’ll all end up wearing some data-hoovering douche bag glasses from some dubious social media company. Or maybe Jony Ive will wow us all with a must-have OpenAI facehugger. But that’s all speculation.
In the meantime, get real, do your part, and get Bee Mapping!
Footnotes
- And yes, it’s even worse for Apple Maps. The Apple Maps “Look Around” image of the house where I used to live in Oakland, California is five years out of date. 😱 ↩︎
- Ariel is pronounced “R-e-ull”, not like Ariel in the Little Mermaid. ↩︎
- I say “rather bravely” as so many hardware startups have failed miserably or limp along at best. Juicero, Pebble, Rabbit or Humane AI Pin anybody? ↩︎
- So, Donald Trump, since I know you’re reading, I have four words for you: built in the USA. ↩︎
-
Mapping Amazing: Mily Tech
We’ll start today’s post with a little segue.
My son is a “gear head”. He got his start working for a Mercedes dealership but quickly got bored. He quit and went back to trade school and learned all about machining: the use of mills, lathes, drill presses, CNC machines and industrial size 3D printers. This turned out to be the perfect set of skills — knowing how to work on cars and knowing how to craft and fabricate complex custom parts.
He’s now a founding partner of a small startup in town. They build custom cars for very rich clients who typically spend $200,000 – $500,000 on their vehicles. Many of them are what are called “resto-mods”, short for restoration-modifications. In other words, taking an old car and completely restoring it, but replacing all the old components — suspension, brakes, engine, transmission — with modern equivalents. The finished vehicle often looks like the original but drives like a brand new modern one, right off a production line.

Ford GT 500 Resto-Mod 
Dodge Coronet Custom Chassis 
Dodge Coronet Custom Engine 
Dodge Coronet 
Dodge Coronet Custom Dash

Here’s one of the finished products (click to play the video with the sound turned up. It’s worth it!):
The art of building these custom vehicles requires not only an extensive set of standard tools but also a large number of specialised tools, and often some very custom built tools too. For example, here’s a totally custom tool specially developed by the shop to enable cutting exhaust pipe at precise angles:

Custom Tool for Precise Cuts of Exhaust Pipe

The end result of all this extensive work? A beautiful and unique vehicle that precisely fits the requirements of its customer in every single detail, perfect for its job.
Now, why am I bringing all this up? Well, funnily enough it all relates back to mapping in a very roundabout way.
Over roughly the last 30 years people have increasingly been using maps in their daily lives. It started slowly and then grew exponentially, especially for consumers, once the internet took off. In parallel, the use of maps in business has gradually grown too. Even now it’s not pervasive in the enterprise, but at least maps have slowly made their way into business analytics products like Tableau and Power BI. It’s now even possible to add a basic map to a spreadsheet in Microsoft Excel 365, at least in a rudimentary way.
But there is a fundamental problem: all of these mapping products, including ones from the enterprise mapping technology behemoths like Esri, are general purpose tools. Translating back to the automotive world for a moment, it’s like having a very basic set of spanners and wrenches. To use these generic products for any very specific business problem is actually very difficult, and even if you do use them, they don’t really provide what you need and they really aren’t fit for purpose.
What you need in these cases then is a very specific product. Rather like that custom built resto-mod vehicle — precisely fitting the needs of the customer in every single way. But, unlike that custom vehicle, it can’t break the bank! It has to be affordable and provide a significantly better return on investment.
So that brings us to this week’s topic: the second in the series of “Mapping Amazing” companies. Today we’re going to focus on a small start up based in Munich, called Mily Tech.
Mily Tech got its start just four years ago, in 2021, when its founder, Miloš Zlatković, realised that there was a significant problem in the pervasive and growing industry of last mile delivery.
The problem revolves around something called “Out of Home delivery” or “OOH delivery” whereby a parcel or package is delivered to a smart locker or a pickup and delivery counter. OOH delivery is perhaps slightly less common in the US than in Europe and Asia, but the need for it everywhere remains strong and it’s growing like crazy…
Think about it.
The need for OOH delivery is obvious:
- Parcels get stolen from front door steps.
- Deliveries get missed because the customer isn’t home.
- Deliveries can’t be made because there is a locked or key coded gate.
And there’s another significant incentive for delivery companies: a smart locker is a single delivery point for dozens of parcels, so use of them significantly increases delivery efficiency and reduces carbon footprint.
In the US this OOH delivery is dominated by Amazon Lockers. But in other markets many delivery companies and national mail carriers are providing OOH delivery, mostly in the form of smart lockers.
At first glance the problem would seem to be a classic site selection challenge. Where do you put the lockers? You can’t just carpet bomb a whole city. It costs too much money and you can’t just set up a smart locker anywhere you want. And just like opening up a retail outlet in the incorrect location, it’s a very costly mistake to pick the wrong location.
And this site selection problem is different in every market. Tokyo is very different from Sydney which is very different from NYC.
But it doesn’t stop at putting the lockers in the correct location — it’s not a fire-and-forget problem.
Once you’ve established your locker network you have to make sure its utilisation is continuously monitored, optimised and adjusted. Why? Because the environment around each locker constantly changes. The number of parcels being delivered fluctuates. Sometimes a particular set of smart lockers is under-utilised and sometimes it’s oversubscribed. Competitive locations may pop up or disappear. And the population and demographics are always evolving.
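The placement side of this problem is often modelled as maximum coverage: pick a handful of sites that together serve the most expected demand. Here’s a toy greedy sketch of that idea in Python (purely illustrative; the site names, demand figures and the heuristic itself are my own invention, not Mily Tech’s method):

```python
# Toy greedy maximum-coverage heuristic for locker placement.
# Purely illustrative; real models also weigh rent, installation cost,
# competition and demographics. All data below is made up.

def pick_sites(candidates, demand_points, k):
    """candidates: {site: set of demand-point IDs it can serve}
    demand_points: {point_id: expected parcels per week}
    Greedily pick k sites maximising newly covered demand."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(
            candidates,
            key=lambda s: sum(demand_points[p]
                              for p in candidates[s] - covered),
        )
        chosen.append(best)
        covered |= candidates[best]
    return chosen

candidates = {
    "station": {"a", "b", "c"},
    "mall":    {"c", "d"},
    "corner":  {"d", "e"},
}
demand = {"a": 40, "b": 25, "c": 30, "d": 50, "e": 10}

print(pick_sites(candidates, demand, k=2))  # → ['station', 'corner']
```

Greedy selection like this is a classic heuristic for coverage problems: note how “mall” loses out in the second round because most of its demand is already covered by “station”, which is exactly the kind of interaction a spreadsheet struggles to capture.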
Like all good companies, when Mily Tech initially looked into solving this problem, they started by talking to a ton of potential customers. They wanted to understand the precise issues, the root causes and, just as importantly, the set of tools the companies were currently using in an attempt to manage the challenge.
It was during this discovery process that the light bulbs quickly started to go off. It was like these big delivery companies and national mail carriers were trying to build a custom car using a set of basic wrenches and spanners.
Not only that, the set of tools they were using fell well short of addressing all the problems. Even if they came close, they had to accomplish it using multiple sets of tools across several disciplines. You can probably imagine: spreadsheets in one department, BI tools in another. And if you were lucky — very lucky — maybe a geographic information system (GIS) tool in a third.
Basically it didn’t work. Last mile delivery companies were desperately in need of the equivalent of that perfect, custom resto-mod car!
So that’s what Mily Tech set out to do. Build the perfect custom software platform to solve this very tricky problem. It’s a platform that transcends several disciplines. It’s not a spreadsheet. It’s not a BI tool. And it’s definitely not a general purpose GIS.
Using Mily Tech customers are able to build an optimal network of OOH delivery points, manage them for maximum efficiency and then easily scale them to expanded or entirely new markets.
And they’ve already had tremendous success. They’ve won business from DHL in Europe as well as a number of national mail carriers, for example in Italy and Slovenia. And now potential customers are starting to ask the internal question: “Why aren’t we using Mily Tech?”
You’d think the market for last mile delivery is actually pretty small. There are only so many delivery organisations, after all. But they deliver hundreds of millions of parcels every day and every tiny efficiency gain makes a huge difference to the bottom line.
But delivery isn’t the only problem to solve. There’s actually an even bigger one: optimising returns. Why? Because returns are a pain and expensive. They are the bane not only of every delivery company but of every e-commerce company too. And there are orders of magnitude more e-commerce companies than delivery companies.
Mily Tech contrasts this problem with last mile delivery and they’ve given it a name. They call it “First Mile Returns”. And they’re coming after the problem to solve it. Stay tuned!
In the meantime, if you’d like to learn more check out milytech.com.
Until the next time…
The Mily Tech Product in Action:

Building a delivery point network in Mily Tech — Credit: Mily Tech 
Managing goals in Mily Tech — Credit: Mily Tech 
Analysing network performance in Mily Tech — Credit: Mily Tech 
Evaluating potential locations in Mily Tech — Credit: Mily Tech 
The Mily Tech team — Credit: Mily Tech

-
Music + Maps = Pure Deliciousness
I have to credit President Trump.
He was once interviewed by none other than Ali G. Now, some of you young ‘uns may not know about Ali G, but he was all the rage back in the early 2000s. Played by Sacha Baron Cohen, Ali G is a faux character portrayed as a clueless, wannabe gangster from the suburbs of London, using exaggerated slang and ignorance to expose the prejudices, arrogance or lack of awareness of his interview subjects, particularly politicians, academics and celebrities.
To secure interviews for the Ali G show the production team used clever misdirection: instead of mentioning Ali G, they presented the show as a legitimate documentary program aimed at youth culture or educational outreach. Interviewees were told they would be appearing in a serious program for a prestigious broadcaster (Channel 4 in the UK, HBO in the US), often with vague titles like “youth issues” or “modern culture.” The producers also suggested it would air only locally or have limited reach, downplaying its prominence. This tactic led to many prominent figures unknowingly agreeing to be interviewed by a ridiculous character, making for famously awkward, revealing or hilarious moments.
Here is Ali G’s interview with Trump:
And what did we learn from this interview?
Although Trump thinks humans were trading in rocks “hundreds of millions of years ago” he did, correctly as it turns out, identify the most popular thing in the world:
Music.
And that most popular thing in the world is only more popular than one other thing:
Maps!
So, dear reader, I ask you what would happen if you combined the two most popular things in the world, Music and Maps? Well, as I stated in the title of this post, you get one thing:
Pure Deliciousness.
And now there is a place to find this Pure Deliciousness. And it’s a brand new book called “Maps on Vinyl” self published by Damian Saunder, containing not only the artwork of over 400 album covers that contain thought provoking maps, but also the stories behind them.

Damian Saunder with his new book, “Maps on Vinyl”. Photograph: Steve Womersley / The Guardian

It turns out Damian has quite the mapping pedigree. He started his career at Penguin Books in Australia, recommending new trends in map design. He later went on to become a lead cartographer at the educational publisher Dorling Kindersley. He then established and ran his own cartographic design consultancy for nine years before joining Esri, where he co-won a prestigious Henry Johns Award. Following Esri he was briefly the Director of Cartography at National Geographic, but for the last eight years he has been on the Maps team at Apple 1, 2.
So he knows maps. But Damian has also always had an overwhelming passion for music.
He recounts the story in the introduction to his new book:
AS A TEENAGER, I remember being stopped in my tracks by the riff on ‘Need You Tonight’, from the album Kick, by INXS. That killer hook not only helped rocket the band to the top of the global charts in 1987 but it also launched my almost four-decade love affair with collecting records and obsessively listening to music whenever and wherever possible (Kick was the first record I bought). Yet it has been more than just the music that has captivated me; it is the packaging and artwork of the records — from the typography and photographs to the graphic compositions, and even the locations pictured on the sleeves. From the start, the cover art has provided a way for me to connect visually with my favourite artists and their music, particularly during the time when the only other way of doing so was by reading Smash Hits magazine or watching the ABC’s Countdown music program on Australian television. The sleeve art lives with you as you move through time, making the music feel more personal than some long-discarded magazine article or transient television broadcast.
When I became a professional cartographer I started to explore the relationship between maps and music more deeply. Listening to music has continued to be an important part of my daily life and I have learnt that geography has played an important role in popular music: think of albums such as the Beatles’ Abbey Road, Dylan’s Nashville Skyline and Sonic Youth’s Murray Street. They were all named after the places where they were created. Lyrically, Frank Sinatra (‘New York, New York’), Billy Joel (‘Allentown’) and, more recently, Taylor Swift (‘Cornelia Street’) have all written tracks in honour of their beloved hometowns, favourite destinations or places of interest.
So here we are then. Over a four year period Damian tenaciously collected, painstakingly photographed and lovingly collated the covers of 415 albums that all contain some form of emotionally moving map. And not only that — he had to get permission from every record publisher to include them. But he didn’t stop there: wherever possible he tracked down the designer of each cover to find out how it came to be and the meaning behind it.
There were a few red lines: no paintings of landscapes (not a map) and no satellite or aerial photography (also not a map). Many of the covers selected were based on the band’s origins, but others were based on a band’s journey to new territories, or on imaginary places or simply just a commentary on a world view.
So without further ado — do this: Buy the book. It’s fantastic and you’re simply going to love it.
All proceeds go to Support Act, which is the music industry’s charity, delivering crisis relief services to musicians, managers, crew and music workers across all genres who are unable to work due to ill health, injury, a mental health problem or some other crisis.

Maps on Vinyl. Produced and published by Damian Saunder 
The Equatorial Stars by Fripp & Eno, with sleeve design by Brian Eno and Hugh O’Donnell.
Photograph: Damian Saunder
Absolute Truth by Lawrence Arabia. Sleeve Design by Anns Taylor. Photograph: Damian Saunder 
Eastern Medicine, Western Illness by Preservation. with sleeve design by Mark Shaw. Photograph: Damian Saunder 
Sounds of a City by Roza Parks with sleeve design by Khuan+Ktron. Photograph: Damian Saunder 
Go South by various artists, with sleeve design by Mark Harfield. Photograph: Damian Saunder 
You’re Next by various artists, with sleeve design by Justin Erickson. Photograph: Damian Saunder 
New Routes Out of Innerland by Mark Peters, with sleeve design by Marc Jones. Photograph: Damian Saunder 
Atlantide by The Trip, with sleeve design by Francesco Logoluso, Studio Up & Down. Photograph: Damian Saunder
Fate of Nations by Robert Plant, with sleeve design by Martin Collomon. Photograph: Damian Saunder 
Hit+Run, Road Kill, Vol. 2 by various artists, sleeve designer unknown. Photograph: Damian Saunder
Letter to Brezhnev by various artists, with sleeve design by Martin Huxford, Green Ink. Photograph: Damian Saunder. 
Little Creatures by Talking Heads, with sleeve design by Howard Finster, Tibor Kalman. Photograph: Damian Saunder.

So now you’ve perused just a few of the 415 fantastic mappy album covers, I meant what I said: Buy the book!
Thanks for reading.
Footnotes:
-
Mapping Amazing: POSTX
Welcome to the first in a new series of articles on Map Happenings that I am calling “Mapping Amazing”.
Each post in the series is going to feature a small company or organization that, through use of geospatial technology, is doing innovative / rockin’ / lit / vibing / happening / mapping amazing work (pick your adjective!) to make a small dent in the universe.
And today we’re going to feature a little-known startup in London called POSTX. It’s a little different, but it has geospatial excellence at its core.
I got to know the founder of POSTX, Max Glaisher, as a result of business dealings I had with him during his career at DPD, one of the largest delivery companies in Europe. While at DPD Max was one of the leaders on their innovation team. I was always impressed with his energy, spark and drive to “think different”.
Sure enough, he did. In 2023 Max founded POSTX with the aim of changing the way the world moves goods with zero emissions.
A big hairy audacious idea. But how can a tiny startup with fewer than 10 full time employees possibly begin to go head to head with the likes of FedEx, UPS, DHL, DPD and Amazon Logistics?
Well, the answer is twofold.
First — they don’t. They’re not aiming to replace any of these behemoths. Instead they’re aiming to transform the old sausage-making plants the big guys use and bring them firmly into the AI-powered 21st century. So, for the boomers among you, POSTX’s MO is more akin to the famous BASF ad from the 1980s: “We don’t make a lot of the products you buy. We make a lot of the products you buy better.”
Second — through innovation. To deliver all goods with zero emissions is about SO much more than buying a fleet of electric trucks and using them in your delivery network. And actually, it’s not about buying any trucks at all.
But before we get into the details, let’s start with a little history lesson on the last time the whole world of global delivery got disrupted …
Wind the clock back 54 years to 1971. This was the year that Intel released its first commercially available microprocessor (the Intel 4004). It was also the year that electronic calculators and digital watches started to appear on the market.

The Hamilton Pulsar Watch, launched in 1971

But something else happened in 1971. While at Yale University, a chap called Frederick W. Smith came up with a devastatingly disruptive idea for a new kind of logistics company that would revolutionise time-sensitive shipments. This was around the era when “just-in-time” manufacturing was coming into vogue, and Fred saw a critical need for fast, reliable delivery.
The revolutionary idea behind this new company was to use a hub-and-spoke model, whereby all goods were first shipped to, and then sorted at, a centralized hub in Memphis, Tennessee. This model allowed the company to guarantee next-day delivery, something no other carrier could offer reliably at the time.
But this wasn’t the only secret ingredient. The company also used a system of barcode tracking and real-time package updates that set a whole new standard for logistics transparency. By pioneering overnight delivery and package tracking, this new upstart fundamentally changed customer expectations and forced competitors like UPS and the U.S. Postal Service to adapt. It didn’t just deliver parcels — it redefined speed and reliability in global commerce.
The company? Well, it’s FedEx of course.
So what’s happened in the intervening half century? Well, most logistics companies have copied FedEx — not only adopting the hub-and-spoke model for air shipments, but using the same hub-and-spoke model for ground shipments too. This has all meant the need to build massive warehouses so parcels can be collected, sorted and then shipped back out to their final destination. In other words: a huge capital expense.
So has anything innovative happened since the dawn of FedEx? Well yes, inside the warehouse there’s now a ton of expensive computer-controlled conveyor belts and sorting systems, as well as increased use of robots (although robots are more applicable to organizations that have to pick products as well as sort them — think Amazon or Walmart). And for route planning obviously everyone is using digital maps and mapping software to help drivers find their way.
But some fundamental underpinnings of all logistics companies are still stuck in the mid 20th century. What am I talking about? The answer: postcodes — or to you Yanks — ZIP codes.1
How ZIP Codes Helped Organize the US — Credit: Vox Media

For example, in the US, ZIP codes were invented in 1963 and in the UK, postcodes were invented in 1959. And yes, both were invented to help streamline and automate sorting. But 60+ years on they are all getting rather long in the tooth. Coupled with that, the postcode system is different in every country, and by no means do all countries have one. For example, Ireland only released its “Eircode” system in 2015, and populous places such as the UAE, Saudi Arabia and Hong Kong have yet to create or fully develop a system.
So what’s wrong with postcodes?
Well, first, they don’t tell you how close things really are to each other, nor how far one postcode is from another. And they certainly don’t convey how routes should be grouped to be as efficient as possible based on distance, time and cost.
Take a look at the postcodes for London as an example and you’ll see the issue. The denser the population, the smaller the postcode areas (look at the inset for EC, SW1, W1 and WC1, for example). So for logistics purposes the postcode actually tells you little about geography or geographical distance:

Postcode sectors for London

This creates a range of planning problems when designing service areas for depots and routes.
So what do legacy logistics companies do to try to get around the issue? They build “origin-destination” matrices of pre-computed distances between postcodes. But even then they don’t create these matrices on the full postcode — there are simply far too many of them2. So instead they fall back to creating matrices based just on general postcode areas.
For example, in the UK they only use the first few letters of the postcode, known as the “outcode”:

Credit: IdealPostcodes

Ultimately, these precomputed matrices don’t reflect real travel distances, times or costs — so they’re not actually very useful for route planning.
To work around that, companies do the following:
- Pay a third party to validate postcodes
- Pay a third party to geocode each address into coordinates
- Run a distance matrix to calculate travel time and distance between every delivery point for a particular day
- Use a solver to figure out the best stop sequence based on business rules
It’s expensive, slow and relies on multiple external systems — just to get inefficient routes.
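As a rough sketch of steps 3 and 4 above — note this substitutes straight-line (haversine) distances and a brute-force solver for the paid road-network services and heuristic solvers real operators use, and the coordinates and function names are purely illustrative:

```python
import math
from itertools import permutations

def haversine_miles(a, b):
    """Great-circle distance between two (lat, lon) points in miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 3959 * 2 * math.asin(math.sqrt(h))

def distance_matrix(stops):
    """Step 3: travel distance between every pair of delivery points."""
    return [[haversine_miles(a, b) for b in stops] for a in stops]

def best_sequence(stops, depot=0):
    """Step 4: a brute-force 'solver' — fine for a handful of stops,
    which is exactly why real operations need heuristic solvers instead."""
    m = distance_matrix(stops)
    others = [i for i in range(len(stops)) if i != depot]
    def tour_len(order):
        route = [depot, *order]
        return sum(m[route[i]][route[i + 1]] for i in range(len(route) - 1))
    return min(permutations(others), key=tour_len)

# Hypothetical geocoded addresses (steps 1–2 would normally be paid
# third-party postcode validation and geocoding services).
stops = [(51.5074, -0.1278),   # depot: central London
         (51.4545, -2.5879),   # Bristol
         (51.4816, -3.1791),   # Cardiff
         (52.4862, -1.8904)]   # Birmingham
print(best_sequence(stops))    # index order of the stops after the depot
```

Even this toy version makes the cost structure obvious: the matrix grows quadratically with the number of points, and every step before it depends on an external service.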
Even Amazon, with all their sophistication, still uses these matrices as a fundamental part of their mechanism for sorting out where packages should go.
Your Amazon Shipping Label Explained — Credit: @FridayAdventureClub

Couldn’t there be a better way?
POSTX not only thinks so. They know so. And it became one of the primary pillars of their new venture.
Pillar 1: Nexus Grid + the Atom Sorting Index
One of POSTX’s first tasks was to generate an entirely new sorting index designed specifically for logistics, not just for delivering letters. And it’s not national. It’s global.
They call this new sorting index “Atom”, and Atom is part of an overall platform that they are building called “Nexus Grid” which will also provide predictive routes and sequencing.
And how is Atom different? Well, it replaces postcode matrices with a smart, machine-readable grid — so every location is already sorted by how it connects to others at local, sub-regional, regional and national levels.
This lets them:
- Sort vertically (national → regional → local)
- Sort horizontally (between any two places nearby)
- Automatically group routes based on real travel cost and time, not mailing zones
- Do it all without 3rd party maps or postcode logic
Unlike postcode boundaries that are drawn by people, Atom was built using a machine learning model to map and organise the world into hierarchical 6-unit geospatial zones. This pre-indexed structure embeds real-world distance, time, and cost into every location, allowing routes to be clustered and optimised almost instantly — without relying on postcodes, third-party maps, or heavy sorting infrastructure.
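POSTX hasn’t published how Atom actually encodes its zones, so purely to illustrate the general idea of a hierarchical, prefix-sortable geospatial key, here is a toy quadtree-style code (my own sketch, not POSTX’s scheme):

```python
def quadkey(lat, lon, depth=6):
    """Toy hierarchical cell code: each digit halves the lat/lon
    bounding box, so longer keys mean smaller zones. Illustrative
    only — NOT Atom's real encoding."""
    lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
    key = ""
    for _ in range(depth):
        lat_mid = (lat_lo + lat_hi) / 2
        lon_mid = (lon_lo + lon_hi) / 2
        digit = 0
        if lat >= lat_mid:
            digit += 2
            lat_lo = lat_mid
        else:
            lat_hi = lat_mid
        if lon >= lon_mid:
            digit += 1
            lon_lo = lon_mid
        else:
            lon_hi = lon_mid
        key += str(digit)
    return key

places = {
    "London":     (51.507, -0.128),
    "Croydon":    (51.372, -0.098),
    "Birmingham": (52.486, -1.890),
    "Sydney":     (-33.868, 151.209),
}

# Sorting by key groups nearby places together ("horizontal" sorting)...
for name in sorted(places, key=lambda n: quadkey(*places[n])):
    print(name, quadkey(*places[name]))

# ...and truncating the key coarsens the zone ("vertical" sorting):
assert quadkey(*places["London"], 3) == quadkey(*places["Croydon"], 3)
```

The useful property is that a single index serves every level at once: a plain lexicographic sort clusters neighbours, and chopping digits off the end moves you from local to regional to national — no lookup matrix required.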
This video compares traditional postcodes to the hierarchical nature of Atom:
Another groundbreaking change …
Nexus Grid enables something POSTX calls Virtual Sortation.
Virtual sortation eliminates the traditional “collect → central sort → deliver” model.
Instead of loading everything onto a large truck and driving it to a central sorting center for unloading and belt sorting, goods are pre-sorted directly at the origin warehouse into modular numbered trailer PODs (individually tracked by Bluetooth) — each one already grouped by its delivery destination or route.
Those PODs can be:
- Pre-loaded efficiently at ground level, inside or outside warehouses
- Swapped between vehicles without unloading
- Sent directly to their destination depot or delivery zone
- Used as mobile depots for local delivery drivers, skipping the long commute into industrial estates to collect their parcels
This removes several handling steps, eliminates the need to expand large sorting hubs, cuts loading/unloading times dramatically, and lowers transport costs by reducing unnecessary miles.
Now actually POSTX’s method for using trailer PODs isn’t altogether new. In fact the concept of separating the goods that need to be delivered from the driver all goes back to the horse and carriage days, and this was later replaced by the “mechanical horse”. While separating the goods and the driver has remained with us for long haul routes (e.g. 18-wheelers and 53′ trailers) it’s been lost for middle distance and local deliveries. POSTX is bringing that concept back to life.

The way goods used to be delivered. Credit: Steam: The Museum of the Great Western Railway

The 1948 Scammell Scarab: A “Mechanical Horse”. Credit: OnWheels

Let’s look at one of POSTX’s mailing labels. They are actually bi-directional and work for returns. And they eliminate the need for manual re-labelling for in-network transfer of goods or transfer across networks:

Credit: POSTX. Note Einstein is embedded in the QR code!

As Max from POSTX told me:
Einstein gave us the tools to understand the chaotic dance of atoms. POSTX uses the Atom sorting index to decode the dance of shipments, reordering how we plan routes using space, time, and cost for logistics. I thought it made sense!
“But come on now!” I hear you say, “How could this tiny company with fewer than ten people get any business just by inventing a geospatial index for logistics?”
Well that’s where the other pillars of POSTX come in. Let’s tackle them one by one.
Pillar 2: Zero Emissions
If you visit the POSTX website, you’ll see that zero emissions is front and center in their message. And it’s certainly a key selling point that their customers are keen to leverage. The ability to claim “100% Emission Free Delivery” is a value that resonates strongly with the majority of consumers.
So every single one of the deliveries made within the POSTX network is done with electric vehicles. But they’re not doing it like Amazon and paying Rivian hundreds of millions of dollars to develop custom electric delivery vehicles. And nor are they doing it by buying a fleet of Ford E-Transits or Mercedes eSprinters. And they are not buying any electric trucks.
Nope, in fact they’re not using vans or trucks at all. They’re using electric cars, SUVs to be precise, all sourced from (cue “hiss”) … Tesla. But even then they’re not buying them. They’re renting them. And part of the secret sauce is they are using Teslas not on their own, but with trailers. It’s the modern-day Scammell Scarab:

A POSTX Delivery in a Local Neighbourhood

Here’s a short promo video explaining how it all works. If you pick one video to watch in this article then choose this one:
Phew, well that’s a bit different, eh? But that’s not all. Let’s move on…
Pillar 3: No CapEx
So, no purchasing of any electric vehicles. And no purchasing of any trailers either (they’re rented too). So no CapEx in fleet costs. In fact, POSTX eventually envisages using a network of drivers who happen to own an electric vehicle capable of towing (for example, a Tesla Model Y).
The owners get to use their car as their daily driver but then they also get to make money from it by making deliveries. The folks at Tesla love this because POSTX is short-circuiting Elon’s master plan for people to make money from their Tesla vehicles…
But it doesn’t stop with the fleet.
If you operate a logistics company — remember the story about FedEx? — one of your biggest costs is setting up a sorting warehouse. But due to the magic of the 21st-century geospatial logistics software behind POSTX, guess what?
You don’t need any warehouses with expensive conveyor belts dedicated to sorting goods and packages.
Goods are pre-sorted at the beginning of the fulfilment process, tracked with POSTX’s Atom destination code and return numbered labels, and then loaded directly into modular trailer PODs with the assigned numbers.
Any further sorting happens virtually — it appears in the driver’s route as a scheduled stop. At that stop, drivers either drop-swap & go (exchanging trailers moving in opposite directions) or perform a shipment exchange (transferring goods from one trailer to another).
Think of it like internet data packets being sent from router to router, but one step further — instead of the routers being at fixed locations, they’re actually mobile.
These exchanges remove regional sorting overheads, significantly reduce sorting time, and can be completed while vehicles charge at designated “stop points” — eliminating the delays and costs of traditional hub-based processing.
Bam. There’s a humongous cost eliminated.
Pillar 4: Get Anywhere: Faster and MUCH Cheaper
There are a number of advantages to using a car with a trailer rather than a traditional panel truck or van.
First, unlike a truck, they are small. They can get virtually anywhere — for example, up someone’s narrow gravel driveway.
Second, they are permitted virtually anywhere — generally there are no weight or size restrictions, unlike for trucks.
Third, trailers are much faster to load and unload than trucks with tail lifts. They are also a lot safer to load and unload. POSTX has shown that their trailers can be loaded 14X faster than a truck with a lift gate:
Fourth, unlike vans the trailers can be dropped off anywhere, sans driver. So they can be dropped off at a manufacturer’s facility the night before, loaded, and picked up later. They have the advantages of a full size 18-wheeler rig and trailer, but the size and weight advantages of a van.
But then let’s talk about cost. Not theoretically, but as proven in real world operations:
POSTX is showing their costs are 50% less than using a diesel truck, not including the driver. If you take into account the full cost of the driver, this cost savings rises to 70% or more, not because POSTX pays their drivers less, but because of the efficiency brought to bear by the geospatial excellence of their system. As a result they need significantly fewer driver hours to make the same deliveries.
Pillar 5: Logistics-as-a-Service
So actually POSTX isn’t looking to compete with the big boys at all. You see, they don’t see themselves as a logistics company. They see themselves as a Logistics-as-a-Service platform that helps existing logistics companies significantly reduce their costs. They do this by automating planning, routing and sortation, all powered by a network of driver-owned EVs and rented trailers.
As such POSTX is aiming to provide a service to logistics companies for 20-40% of their routes with a much more flexible and lower cost solution.
By optimising sortation and routes using their modern Nexus Grid platform they can increase speed, efficiency and reliability all while significantly reducing costs.
Pillar 6: Focus!
For any startup, focus is critical. And for POSTX it’s no different. They are focused on just one aspect of the logistics market. It’s called the “middle mile”. As opposed to last mile, the middle mile is involved with bulk shipments that commonly involve moving pallets of goods over longer distances, often with fewer stops.
In the UK, existing middle mile fleets typically spend £300+ ($400+) per route and cost £60/hour ($80) to run. POSTX has proven they can cut these costs in half.
And so POSTX is the end-to-end platform for logistics companies to make their middle mile deliveries.
And there’s another network effect. As more customers use the POSTX platform, it continuously indexes detailed movement patterns — rather like Google indexing the web — enabling increasingly accurate predictions on how to move goods with the optimal balance of time, cost, and service.

So How is POSTX doing?
Incredibly well.
Growing 29% MoM. A solid base of middle mile logistics customers across the UK. Providing a capital light / no CapEx model with simple pricing. And already over 300,000 miles of tailpipe emissions avoided.
One of their key customers is Segen, a top distributor of solar systems, energy storage and EV charging solutions across the UK. Faced with high damage claims, delays, failed deliveries and rising middle mile costs, they turned to POSTX to help with deliveries.
The outcomes?
- 100% on-time, in-full deliveries with 0% damage
- 50% cost savings
- Multi-site expansion after a 12-week pilot
And POSTX’s success is showing up in other areas too:
- In October last year they won a National Sustainability Award together with their customers Segen and E.ON
- This year they have been recognised for building the UK’s safest courier business.
Mapping Amazing: POSTX
So there you have it. The first in a series on “Mapping Amazing” companies as highlighted by Map Happenings.
POSTX: providing a next-generation Logistics-as-a-Service platform with geospatial excellence at its core.
Thanks for reading.
Disclosure:
I am on the advisory board of POSTX. Cynics will say: “That’s why you promoted them.” Non-cynics will say: “If they really are good, why wouldn’t you join their advisory board?”. I’m a cynic. So I get it.
Further Viewing:
- AirTags Expose Dodgy Postal Industry (DHL Responds)
- “The Postcode Connection”, A 1970s documentary about how the Post Office deliver post
Acknowledgements:
- POSTX
- Vox media
- FridayAdventureClub on YouTube for explaining Amazon shipping labels
- IdealPostcodes for their explanation of UK postcodes
Footnotes:
- Gold star if you know what ZIP stands for. ↩︎
- In the UK there are about 1.8 million full postcodes (e.g. “SW1A 2AA”). So an origin-destination (OD) matrix for every postcode would have over 3 trillion entries. And that’s just for one variable, e.g. distance. That’s a massive matrix — definitely not something you’d want to hold in memory. This contrasts with the number of outcodes in UK postcodes (e.g. “SW1A”), of which there are only about 3,000. So an OD matrix based just on outcodes, for one variable, has only about 10 million entries. ↩︎
-
A New Series on Map Happenings: “Mapping Amazing” – Coming soon
Just over three years ago, having been lucky enough to be immersed in mapping technology for many decades, I thought it was time to give back. Thus the creation of what you are reading now: Map Happenings.
In the first post, on 27 July 2022, I wrote:
So after 37 years in the mapping industry I thought it might be fun to inflict my view of happenings in the geospatial technology world on a few of you.
I’d like to throw you the occasional tasty tidbit of various ‘goings on’ and, from time-to-time, do a deeper dive into some relevant topic or company. Oh, and I might try to invoke a reaction from one or two of you through a particular musing or postulation.
While I’m sure much of the focus will be on what’s happening now, it’s always fun to make guesses about the future. And, of course, let’s not forget the past. As Churchill said: “Those that fail to learn from history are doomed to repeat it.”
I will aim for quality, not quantity. Hopefully I’ll meet your expectations.
Included in the many posts was an attempt at a fun and irreverent series on the history of maps and mapping technology called “The 12 Map Happenings That Rocked Our World“.
Despite my poking fun at the French, the series still got a very favourable reception from none other than the New York Times Literary Review.1
Given this history of maps and mapping series is now finally complete (you can let out a big sigh of relief), I thought I might transition to something new, with the emphasis on the word ‘new’ rather than ‘history’. So I have decided to embark on a series that I am calling “Mapping Amazing”.
Each article in the series is going to focus on a small company that, through use of geospatial technology, is doing innovative / rockin’ / lit / vibing / happening work (pick your adjective) in an attempt to make a small dent in the universe.
To get a flavour of what the articles will be like, take a look at the Map Happenings article from May 2024: “PLACE: Fuelling Maps for the Rest of the World“. This featured the work of a small organisation that is doing an awesome job to provide the “data fuel” for the creation of detailed maps for the developing world.
So, stay tuned for the new series, “Mapping Amazing”. I think you’re going to love it.
- Gold star if you can spot the AI. ↩︎
-
12 Map Happenings That Rocked Our World: Part 12
The Whole World in Your Pocket.
Something happened on November 10, 2001.
Suddenly, you could buy a mobile device that would store about 100 albums of music and allow you to listen to it — anywhere. At the time, this capability was considered astounding and was unmatched by any other product.
The device that enabled all this? iPod, of course — the first dedicated music device from Apple in the renaissance years of Jobs 2.0.
The tag line for the product — “1,000 Songs in your pocket.” — immediately resonated and people quickly ‘got’ what the product was for.1
It didn’t take long before the whole iPod product line took off. Within a few years nearly everybody seemed to be carrying one of these handy devices in their pocket, pinned to their shirt or strapped to their arm.
But of course it didn’t stop there. Six years later, by the time Apple had produced its 100 millionth iPod, Apple launched another earth shaking product: iPhone.
Announced in Steve Jobs’ now much-lauded keynote speech at Macworld on January 9, 2007, it generated shock and awe around the world.
As I am sure many of you will remember, iPhone was portrayed as three amazing products in one: a widescreen iPod with touch controls, a revolutionary phone and a breakthrough internet communications device. When Steve demo’d it, the crowd went wild over how it made previously extremely difficult (or impossible) tasks super easy and fun.
Much of the demo time was dedicated to showing how iPhone could be used for the basic stuff — listening to music, watching videos, making calls, sending email and texts and browsing the web.
But there were a few other things that Steve showed off: a weather and stocks ‘widget’, which we would now call an ‘app’, and then something else. Something that he spent several minutes demoing … something he called “the best version of Google Maps on the planet”.
This particular part of the keynote caused roars of applause, especially when he used the maps app to find a nearby Starbucks — which he promptly called, live on stage, to order 4,000 lattes “to go” for the audience.
And that wasn’t the only maps feature to get a roar — the audience was particularly amazed when he zoomed into the Washington Monument, the Eiffel Tower and the Colosseum in Rome using satellite view. Watch the excerpt:
Steve Jobs shows off maps in the first iPhone at 2007 Macworld. Credit: Apple and Macworld

Amazing though it was, there is a whole story behind this part of the demo. Steve Jobs talked about it later that year when he and Bill Gates were interviewed by Walt Mossberg and Kara Swisher at the D5 conference on May 30, 2007 (the video is well worth a watch BTW! Also here is the transcript):
I love Google Maps, use it on my computer, you know, in a browser. But when we were doing the iPhone, we thought, wouldn’t it be great to have maps on the iPhone? And so we called up Google and they’d done a few client apps in Java on some phones and they had an API that we worked with them a little on. And we ended up writing a client app for those APIs. They would provide the back-end service. And the app we were able to write, since we’re pretty reasonable at writing apps, blows away any Google Maps client. Just blows it away. Same set of data coming off the server, but the experience you have using it is unbelievable. It’s way better than the computer. And just in a completely different league than what they’d put on phones before.
But maps weren’t even part of the original plan for iPhone until just a few weeks before the unveiling on January 9, 2007. Nick Wingfield and Brian Chen recount their 2012 story in their article on Apple Maps in the New York Times:
Including a maps app on the first iPhone was not even part of the company’s original plan as the phone’s unveiling approached in January 2007. Just weeks before the event, Mr. Jobs ordered a mapping app to show off the capabilities of the touch-screen device.
Two engineers put together a maps app for the presentation in three weeks, said a former Apple engineer who worked on iPhone software, and who declined to be named because he did not want to speak publicly about his previous employer. The company hastily cut a deal with Google to use its map data.
Not only did maps get featured in Jobs’ launch keynote it also became a mainstay feature of the original iPhone ads:
Credit: Apple and AT&T

And, 18 years later, where are we now?
Well today there are estimated to be 7.2 billion smartphones in use around the planet. They are used by 4.7 billion people.2 That’s about 58% of the world’s population.
And every one of those devices has access to a maps app. And every maps app has a map of the planet.
So, if you boil it all down: at least one out of every two people on the planet now has a detailed map of the whole world in their pocket.
If that isn’t a Map Happening, I don’t know what is!
Epilogue
Well you did it. You suffered through all 12 episodes of “Map Happenings that Rocked Our World”. Hopefully many of you will have enjoyed at least a few of the posts.
If you missed some of the earlier articles and want to subjugate yourself to them, you can bookmark the whole list at maphappenings.com/history or simply browse them here:
12 Map Happenings That Rocked Our World
Part 1 — The First Map
The story of how, many thousands of years ago, the first map came to be.
Part 2 — The Birth of Coordinates
Geographic coordinates haven’t been around for ever. But they have been around for a while. Read the fascinating story of who invented them.
Part 3 — Road Maps!
Think road maps came out when people started driving cars? Wrong! Read on to find out the real story.
Part 4 — The Epic Quest for Longitude
Best selling books have been written and films have been made of this amazing story from the 1700s. It truly is an epic tale.
Part 5 — The Dawn of Tube Maps
The London ‘Tube’ (or subway) was the first of its kind to invent an entirely new kind of map. Now everyone’s copied it. Learn the history of how it came to be.
Part 6 — The Advent of Computer Based Mapping
Computers started to emerge after the Second World War. Learn about the brilliant minds that first started using them for maps and mapping.
Part 7 — Those Views from Above…
Today we’re all accustomed to looking at satellite and aerial imagery of cities and places. This is the story of how it all came about.
Part 8 — Oh Brother, Where Art Thou?
Determining your current location is easy. You just open your phone and look at a map. But it wasn’t always that easy. The full story is in this post.
Part 9 — A Curious Phenomenon Called ‘Etak’
This is the story of the world’s first practical navigation system for cars. And it’s an amazing one that is hard to believe!
Part 10 — A Relentless Quest for Maps
Before Google Maps launched in 2005 there was another website that dominated maps on the web. This is the insider story of that site: MapQuest.
Part 11 — Maps, Earth and Views
The fascinating background story about how Google Maps, Google Earth and Street View came to be.
Part 12 — The Whole World in Your Pocket.
This article!

Thanks for reading.
- The tagline was invented by the advertising agency Chiat Day, not Steve Jobs. ↩︎
- Source: Genius: How Many People Own Smartphones In The World? (2025-2032) ↩︎
-
12 Map Happenings that Rocked our World: Part 11
Maps, Earth and Views
So tomorrow, Friday February 7, 2025 at 7pm Pacific time, marks an anniversary.
That precise time will be the 20th anniversary of the launch of Google Maps. And what a journey it has been. Not only for all the hard working folks at Google, but also for the billions of human beings on this planet that have benefited from this ubiquitous geographic platform.
Only someone at Google could delve into this, but I’d have to guess that the number of journeys that Google Maps has aided must number in the many trillions.

Google Maps in 2005 — Credit: Google

Many of you mapping nerds out there might already know the full history, but perhaps many of you don’t. Or perhaps you need a refresher…
Let’s wind back the clock to 1999. At that time if you wanted online mapping and directions — at least in the USA — there was only one place to go: MapQuest.com. For the history of that groundbreaking company see “12 Map Happenings that Rocked our World: Part 10: A Relentless Quest for Maps“.
But that same year another tiny, tiny seed was being planted. And the people doing the planting were a couple of chaps called John Hanke and Brian McClendon. John and Brian were both tracking the big bang explosion of the internet and the web that was consuming everyone at the time. And they had a new stealthy startup. But it turned out it wasn’t anything to do with the internet — their secret new product all ran on a huge Dell PowerEdge server.
And on one warm spring day in Austin they demo’d the product to Bill Kilday, who had been John’s friend for 15 years.
Before I reveal the meat of the demo, let’s quickly wind the clock forward to the present day: as a result of that demo, John, Brian and Bill started a journey together that lasted for the next 24 years. It begat a geographic product that nearly everyone on the planet knows: Google Maps. How about that for a Map Happening That Rocked Our World?!
In 2018 Bill wrote a book about their travails. It is the definitive, insider story of Google Maps. The book is called “Never Lost Again” and I can’t recommend it enough. Among other places it’s available on Amazon and Apple Books:

Credit: Bill Kilday

In writing this article I’ve referenced the book liberally and, speaking frankly, there is absolutely no way this article would have the detail it does without having had “Never Lost Again” as my guide. So @Bill — thank you, thank you and thank you.
I recently had the privilege of chatting with Bill and I told him about my plans for this post. We both agreed that the timing should coincide with the 20th anniversary of Google Maps.
So, with that in mind, here we go!
Let’s go back to that demo that John and Brian gave Bill in the spring of 1999. Bill recounts the demo:
On the screen floated a perfectly detailed rendition of Planet Earth. Oh, I thought, there’s that photo. You know the one. It was on the cover of every junior high science book. Known simply as the Blue Marble, this classic image was captured on December 7, 1972, by the crew of the Apollo 17 spacecraft five hours into a lunar mission. This one spectacular photo is widely credited with jump-starting the modern-day environmental movement.
The Earth seemed to be rotating. Moving somehow. “What’s your address again?” John asked. “It’s 465 Joe Sayers, Austin, Texas,” I said as he typed.
That’s curious, I thought. The photo on the screen. The Earth. It is moving! Maybe it’s a QuickTime movie animation of some sort? But QuickTime movies aren’t interactive in any way, they are linear. Why would John need my address if this were a movie and . . . Wait, how is this thing zooming in? What the hell is happening here!?
These thoughts raced through my head during the fifteen seconds that we zoomed from outer space down to what was clearly my house. I recognized my standing seam metal roof, the neighbor’s trampoline in their backyard, my red Ford Explorer parked out front, and the Arroyo Seco running behind the house.
“Holy shit!” I exclaimed.
What Bill was seeing was the beginning of something that would take the world by storm. But it wasn’t Google Maps. What he was seeing was a product that would come from a new stealthy startup called Keyhole.1
Keyhole was born out of the work of another Silicon Valley startup, Intrinsic Systems, who had pioneered a super efficient way to stream large image databases to a screen. It was this technology that made the demo possible. Initially the thought of putting it on the web was deemed a risky strategy. To quote John Hanke at the time:
If we put it in a browser, it will be just like MapQuest and they already own that market.
By 2001, the dot-com bubble had burst and Keyhole missed their initial opportunity for a funding round. They scrambled to find a business model that could be built around their incredibly sexy demo. And their attention turned to enterprise mapping — at the time still very much dominated by Esri. Bill recounts the history:
Creating maps using Esri software required sophisticated data analysis and training. You could get an accredited college degree in GIS, spending four years learning how to use it. More realistically, a team of Esri-trained specialists (or Esri’s own consulting arm) was needed to configure a custom solution for a client.
Esri was essentially a blank sheet of paper. In order to begin using it, you had to go out and find the data you needed, and download and import it. In the meantime, you had to hope that the data was the right format and map projection so that the multiple layers of data worked together. Esri specialists were needed to acquire and integrate mapping data in order to make the application work.
Lastly, it was slow. Viewing an Esri-generated map online involved a click followed by a long, painful wait for the map to refresh. If you wanted the base map for your Esri map to be an aerial image, the software bogged down even more. Esri was glacially slow, almost unusable, especially if you didn’t have the aerial image loaded on your local hard drive.
By comparison, Keyhole was simple to use, came bundled with access to terabytes of data, and was lightning fast.
Bill went on to say, half joking:
Maybe Keyhole could be a simpler GIS aimed at non-GIS experts? Maybe we could be the GIS solution for people who can’t spell GIS?
It was with this idea that Keyhole’s first product was born on June 14, 2001: the Keyhole EarthViewer. It was aimed at just one vertical: commercial real estate. But scaling sales proved hard. By 2002 Keyhole was running on fumes. Despite cash injections from Nvidia and a Japanese investor their burn rate was way too high. There were other deals on the horizon, one with CNN and, more intriguingly, an opportunity with the US government in the aftermath of September 11. But by January 2003 the board was at the point of laying people off or shutting down the company. Management was given three months.
On March 20, 2003 the US invaded Iraq. And a week later Keyhole’s deal with CNN was closed. Within a few days CNN was using EarthViewer — CNN called it their Map Viewer — on live TV to zoom interactively to military action on the ground.
If there was one single event that saved Keyhole’s you-know-what, it was a key clause in their contract with CNN: John had got them to agree to display Keyhole’s URL on the screen anytime they used EarthViewer on air.
Within a few days a headline in the USA Today newspaper told the whole story: “Tiny Tech Company Awes Viewers”.
CNN’s use caught the eye of the US government. It wasn’t long before In-Q-Tel, essentially the US federal government’s VC arm, was heavily engaged. And Keyhole still employed only 20 people.
Keyhole started to blossom — and not just in financial ways. They managed to snag John Rohlf, who is still considered among the best 3D graphics programmers in the world. It was John who invented something groundbreaking: KML.
KML stood for Keyhole Markup Language, and it has gone on to become one of the most prolific ways to exchange geographic information. Much like you and I can easily exchange and read PDF documents, KML allows people to exchange and read geographic information, which can then be displayed on top of Keyhole’s interactive digital globe. This changed Keyhole EarthViewer from a single-user app into a hugely collaborative tool.
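To give a flavor of the format, here is a minimal sketch in Python (my own illustration, not Keyhole’s code; the landmark and coordinates are just examples) of generating the kind of KML Placemark file users could pass around:

```python
# A minimal KML document: one Placemark with a name and a point.
# Note that KML lists coordinates as longitude,latitude[,altitude].
def make_placemark_kml(name: str, lon: float, lat: float) -> str:
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Point>
      <coordinates>{lon},{lat},0</coordinates>
    </Point>
  </Placemark>
</kml>"""

print(make_placemark_kml("Buckingham Palace", -0.14207, 51.50104))
```

Anyone receiving a file like this could open it in EarthViewer and see the placemark draped over the globe, which is exactly what made the format so collaborative.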
Now organizations could collaborate and share geographic data, viewing it in Keyhole’s super easy and super fast tool. This caught the attention of public safety and law enforcement departments across the country. Fire and police departments started to use Keyhole in place of their legacy enterprise “Geographic Information Systems” (GIS) software made by the 800 pound gorilla, Esri.
I was working at Esri at the time. I can tell you that it sent their CEO and founder, Jack Dangermond, into an apoplectic panic. In fact the shivers were felt down the spine of the entire organization. How could this tiny company be having such a disruptive effect?
But for Esri at least, the worst was yet to come.
By early 2004 Keyhole reached a classic inflection point. It was time to raise a “Series B”. Menlo Ventures was the lead. On April 21, 2004 they provided a term sheet and it was time to make a decision. But instead of signing, John and his cohorts delayed and holed themselves up in a conference room.
After a few hours the news finally broke to the other “need-to-know” employees.
Google wanted to buy them.
Series “G”
By October 27, 2004 the deal was finally made public. Keyhole was now wholly owned by Google. Google had only just gone public a few months before. They had become super bold and were mushrooming fast.
Strategic meetings with Sergey Brin and Larry Page ensued. Keyhole was almost immediately challenged. Bill Kilday, by then Product Marketing Manager for Keyhole, was anxious to hear about Google’s plans for the company. He wanted to ask Larry and Sergey:
A year from now, thinking about what success looks like, would it be better if Keyhole generated ten million dollars in revenue or had ten million users?
To Bill these were audacious goals.
But Larry and Sergey had other questions, such as “How big would their database need to be to cover the entire planet at one meter resolution?”
Bill finally got the chance to ask his question. Larry and Sergey thought for a moment: “I think you guys should be thinking much bigger than that.”
It took quite a while for their answer to sink in.
Where2Tech
It turned out Google had separately been experimenting with mapping as a way to add location information to search results. But it was about to get another lift, and that lift came from Where2Tech.
Where2Tech was founded by two Danish brothers, Jens and Lars Rasmussen. In the latter part of 2003 they found themselves unemployed after a stint at a company called Digital Fountain in Sydney, Australia.
Jens had been tinkering with the idea of adding mapping capabilities to a CD-ROM based business search app (or “Yellow Pages” app). This was to be an app that you installed — it was not a website. And, like Keyhole, Where2Tech steered clear of browser-based mapping because of the dominance of MapQuest.
With both of them unemployed they pitched their idea to anyone that would listen. One of Lars’ contacts made a fortuitous introduction to Larry Page and a meeting was finally scheduled on June 4, 2004. Legend has it that with holes in his socks and only $13 in his bank account, Jens booked a flight to California on his credit card.
Larry immediately challenged the brothers to make their app work in a browser. The Where2Tech app was certainly better than what MapQuest provided: it was fast — much faster than MapQuest. Unlike MapQuest, who generated each map image on the fly, Where2Tech relied on a vast array of pre-generated map “tiles”.
Where2Tech’s approach allowed not only fast maps, but it also allowed big maps — and not only big maps, but big “slippy” maps — maps that you could simply click and drag to pan rather than having to click on a ‘Pan’ button and wait for a new image to be generated.
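The tile approach is easy to sketch. In the scheme that later became standard for slippy maps, the world is projected to Web Mercator and pre-cut into a 2^z × 2^z grid of tiles at each zoom level z, so the browser just fetches the handful of tiles the current view overlaps. A rough Python illustration of the now-standard tile formula (my own sketch, not Where2Tech’s actual code):

```python
import math

def lonlat_to_tile(lon: float, lat: float, zoom: int) -> tuple[int, int]:
    """Return the (x, y) slippy-map tile containing a lon/lat at a zoom level."""
    n = 2 ** zoom                       # the grid is n x n tiles at this zoom
    x = int((lon + 180.0) / 360.0 * n)  # longitude maps linearly to x
    lat_rad = math.radians(lat)         # Web Mercator stretch for y
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Which tile covers Austin, TX at zoom 12?
print(lonlat_to_tile(-97.74, 30.27, 12))
```

Panning simply swaps in pre-rendered neighboring tiles from the grid, which is what made the map feel instantaneous compared to MapQuest’s regenerate-the-whole-image approach.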
Three weeks later Jens and Lars came back with another demo. It was enough to get Larry to offer them employment contracts to work at Google.
By October 2004 the Where2Tech and Keyhole teams were combined. In time they were collaborating to make each of their fledgling products better. By November 2004 the goal was set: they were to launch Google Maps in three months.
Data, Data, Data
The planned Google Maps product was not without its dependencies. It relied on satellite imagery and digital street map data. Furious negotiations ensued with imagery and street map data providers.
For imagery data Google primarily worked with Digital Globe. A deal was negotiated for imagery of the 200 most populated cities on the planet for $3 million. In Keyhole terms it seemed like a staggering price, but with Larry Page’s challenge to think bigger, the proposed deal was sent up the chain.
When Larry Page reviewed the deal his question was simple: “Is this all their content?” Larry didn’t want imagery for the top 200 cities. He wanted imagery for the whole planet. The maps team was flummoxed. Had he lost his mind?!
The team’s focus wasn’t just on maps and imagery. It was also on business locations.
At the time MapQuest, Google and others relied on business listings from the likes of a company called InfoUSA. InfoUSA in turn relied on teams of telemarketers to call businesses once a year. Their data was therefore always painfully out of date and, as a result, users of MapQuest were often directed to businesses that had moved or had closed.
Google started taking a different approach. They set up a system to crawl the now pervasive web pages that every business had, making the assumption that businesses had the incentive to keep their information much more up-to-date than any electronic “Yellow Page” directory. This effort fit perfectly with Google’s mantra to “organize the world’s information”. Now the mantra was simply being applied to “organize the world’s geographic information”.
Google Maps Launches
With fast, large, “slippy” maps and a vastly superior list of businesses, Google Maps launched — initially just for the US — at 7pm Pacific time on Monday, February 7, 2005.
One hour later traffic started to pick up.
By 7.45am the next morning the site had been “slashdotted” — meaning that someone had started a thread about it on the influential tech website, slashdot.org.
By 9am the site exceeded the forecast (and server allotment) for the entire first 24 hours.
A phenomenon was born. Within days people in countries outside of the US were clamoring for the same UI. It didn’t take too long. In June 2005 Google Maps launched in Japan. Then came the UK, then Ireland and then France.
For licensing reasons satellite imagery was held back from the initial launch. But it arrived on April 4, 2005, causing the servers to once again become overwhelmed.
Then, in June 2005 Google launched the Google Maps API. It was the equivalent of Apple’s App Store moment for iPhone. It unleashed whole new industries: Yelp, Trulia, Zillow and Strava to name just a few.
For a while things went swimmingly, but there were some snafus. For internal political reasons the Google Local team (focused on businesses) and the Google Maps team were later merged. And the name of the newly merged product? Google Local.
Google Local? WTF was that?
It didn’t take too long before the name was changed back!
But Whither Google Earth?
You may be thinking… didn’t Google Earth come out before Google Maps? The answer is no. While Keyhole EarthViewer continued to be nurtured under Google’s ownership, the priority was put on the launch of Google Maps rather than the relaunch of Keyhole’s product.
The relaunch was going to have access to 10 times the satellite imagery of Keyhole EarthViewer. It was also going to be much faster due to the vast infrastructure that was allocated to it.
As work continued on the relaunch an internal debate ensued: should it be “Google Earth” or “Google Globe”? The answer was decided through a simple test: say “Google Earth” ten times fast. Then try the same test by saying “Google Globe”. The decision was made. Google Earth it was!
There was also debate about the price. Keyhole EarthViewer was still $29.95 per year for personal use and up to $400 per year for GIS professionals. The answer from up high came down.
It was going to be free.2
Larry and Sergey preferred to do something great for users rather than make a few extra tens of millions of dollars in revenue. I guess they also weren’t too concerned about cost!
Then there was a debate about registration. Given it was free, would you at least be required to register? Bill Kilday succeeded in pushing for removing even that barrier.
Google Earth was set to launch on 27 June 2005. A last minute internal scuffle with Marissa Mayer, then the final approver of everything that went live on Google, almost tanked the launch at the last moment. But finally everything was approved and the launch went ahead as planned.
At 28 MBytes per install (large for those days) the initial load on Google servers took its toll.
On the first day of its release Google Earth was downloaded 450,000 times. And within 28 hours the team met their download goal for the first two years.
Those Views of Streets
In 2005 Google went from having zero mapping products to having two ultra viral ones within just a few months. By the end of 2006, Google Maps had launched in 47 countries, and Google Earth had been downloaded 120 million times.
But like all good product managers the Google geo team didn’t rest on their laurels.
It just so happened Google had hired a guy called Luc Vincent in 2004. He was given the task of scanning the contents of libraries throughout the world. Ethical and moral questions aside, this was a monumental undertaking.
But soon after joining Luc was summoned to Larry Page’s office. Larry told Luc about another project, one that had started with a computer science professor at Stanford named Marc Levoy.
The project involved capturing videos of cities at the street level, collating them into a strip and then making them searchable by typing in an address. Larry was focused on something entirely new: making the physical world searchable from an entirely different point of view. Thus the idea of Google Street View came to be.
To cut a long story short, John Hanke introduced Google Street View to the world at the Where 2.0 mapping conference in San Jose on May 29, 2007. I was attending the conference at the time and I remember it well. There was a palpable gasp from the room, but I also remember people commenting “Yes, but can they scale?”
And I think you all know the answer to that question.
But Did Google “Win” — and What of the Future?
When pop stars go on a “World Tour” it doesn’t mean they go on a world tour. It generally means they tour the USA, Canada, Japan, Europe and — if you’re lucky — Sydney.
With that in mind does Google dominate the consumer mapping world? Well the answer is no. Not quite. There are still a few key areas of the world where people don’t flock to their product:
- In China it’s Baidu Maps, Amap or Tencent Maps
- In Russia it’s Yandex Maps
- In India it’s sometimes MapMyIndia
- In South Korea it’s Naver Maps or KakaoMap
- In the rest of Southeast Asia Grab Maps is gaining ground
Then there’s Waze — but that’s a Google owned property.
Finally there is my alma mater — Apple Maps, still valiantly trying to usurp its rival. Among iPhone owners in the USA it has had a lot of success in grabbing mindshare. And it’s actually quite fascinating: many users don’t even realize they’re using Apple Maps, and remain adamant they’re using Google!
What of the future? Will some 20 person disruptive team tear up the mapping world with some artificially intelligent geographic agent? It’s going to be fascinating to watch.
But looking back on the last 20 years you have to admire the grit and perseverance of the whole Google Maps team.
It was truly a Map Happening That Rocked Our World.
Final Thanks
My heartfelt thanks and acknowledgment to Bill Kilday, author of “Never Lost Again“, without whose insider view of Google Maps my story would forever be lost!
So, as a final thought: go buy Bill’s book.3
Footnotes
-
Is AI Finally Starting to ‘Get’ Geography?
Avid fans of Map Happenings may recall an article titled “ChatGPT (et al) for Geospatial. You Ain’t Seen Nothing… Yet.” That was 18 months ago, which is essentially an epoch in AI terms.
Given it’s been so long I decided it was high time to do a little digging and provide an update on where things stand. Has any progress been made?
If you read the article mentioned above you might remember that back in that infantile period ChatGPT not only failed miserably at basic directions, it couldn’t even do basic maths: 1

ChatGPT’s rather pathetic skills at basic mathematics in February 2023 — Credit: OpenAI
Today, for mathematics at least, ChatGPT looks like it’s started to pay a little more attention at school:

Credit: OpenAI
But what about geography? Has ChatGPT — or any other so-called ‘AI’ for that matter — had any schooling?
Let’s Start with the Basics
For the following examples I used ChatGPT. I also tried using Claude: it produced similar results.
My first attempt to poke and prod was to determine if ChatGPT could figure out basic information about addresses and coordinates.

Ok, let’s plot that coordinate on a map:

Map Credit: Apple
Oh dear, ChatGPT seems to have placed the location in the middle of the freeway, about 3.3 miles away from the actual address:

Map Credit: Apple
If you read my last missive on the topic you may recall the attempt to use ChatGPT for directions: the result was a hilarious hallucination, resulting in the need for catapults. Has it got any better? Well, you won’t need catapults anymore, but you’ll still get hopelessly lost:

Credit: OpenAI
The Issue with LLMs
Those who have a smidgen of perspicacity will by now have realized that there is a basic flaw in my poking and prodding LLMs. It is like poking Michelin-star chefs and expecting them to produce the winning goal at the World Cup final. Yes, dear reader, we should focus on the meaning of the acronym ‘LLM’ and, in particular, the second ‘L’. As I think most of you will know, it stands for ‘language’. LLMs are power-hungry beasts that are designed principally to spit out words — and they’re barely capable of doing anything else.
In order for any geographic intelligence to become apparent in your favorite chatbot it has to get plugged into something that understands concepts beyond human language. ChatGPT started down this path in April 2023 with the concept of ‘plugins’. Plugins didn’t take off, and instead were quickly replaced with the concept of custom GPTs. One of the first companies to jump on the custom GPT bandwagon was Wolfram. Wolfram is best known for developing Mathematica, a powerful computational software, and perhaps better known for Wolfram|Alpha, a computational knowledge engine. While signed into ChatGPT you can use Wolfram’s GPT to ask natural language questions regarding just about anything that Wolfram|Alpha can answer.
And it does a better job on geographic basics than vanilla ChatGPT. For example, here is Wolfram GPT’s answer to the question, “what is at 51.50104° N, 0.14207° W?” — it correctly deduces that the coordinate is at Buckingham Palace, whereas ChatGPT thinks the coordinate is about a mile away at Wellington Arch:

Credit: OpenAI and Wolfram
Ask Wolfram GPT a question even remotely more complex and it quickly falls over, but at least it doesn’t hallucinate:

Credit: OpenAI and Wolfram
Perhaps this is unfair to the primarily mathematical Wolfram GPT. Perhaps instead I should use another custom GPT that is more capable of understanding geography and geospatial functions?
Good luck finding one.
The problem with the search function in ChatGPT’s “GPT Store” is that the results are, to put it mildly, an unfilterable quagmire of useless slop.
If you search for geospatial GPTs in ChatGPT’s GPT store it will come back with a seemingly never ending list, but you’ll quickly find that the vast majority of them are nothing more than textual guides that try to help you navigate Geographically Insidious Systems (‘GIS’). It’s impossible to find the nuggets. And asking ChatGPT directly isn’t much help either.
But we shouldn’t overlook another key issue with trying to use LLMs for geospatial …
They fail to provide a map-centric UI.
But then conversely, if you use traditional consumer or professional geospatial apps, you will find that they lack the deliciousness of a natural language interface that LLMs provide.
You can see this with a simple query to Google Maps where I ask for “grocery stores within 2 miles of 311 E Congress St Tucson, AZ 85701 United States that are open after 8pm”:

Credit: Google
Google Maps fails to search on “grocery stores within 2 miles” and “that are open after 8pm”. It can only find the address:

Credit: Google
I should also note that if you try the same query on Apple Maps you will find it fares no better.
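What the query actually requires is decomposition into structured filters: a geocoded place database, a radius test and an opening-hours test. Here is a toy Python sketch of that decomposition (the store records and coordinates are entirely made up; the haversine formula gives great-circle distance):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    r = 3958.8  # Earth's mean radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical store records: (name, lat, lon, closing hour in 24h time)
stores = [
    ("Store A", 32.2226, -110.9747, 22),
    ("Store B", 32.2950, -110.9400, 21),  # too far away
    ("Store C", 32.2210, -110.9690, 19),  # closes too early
]

origin = (32.2217, -110.9698)  # roughly downtown Tucson (illustrative)

# "within 2 miles" and "open after 8pm" become explicit predicates:
matches = [
    name for name, lat, lon, closes in stores
    if haversine_miles(origin[0], origin[1], lat, lon) <= 2.0 and closes > 20
]
print(matches)
```

A natural language front end only becomes trustworthy once it translates the question into predicates like these and runs them against fresh, authoritative place data.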
But Are There Signs of Progress?
It turns out there are a number of areas where AI is beginning to seep in to the world of geospatial, and it’s just starting to get interesting.
First let’s go back to the Google query and use it to poke once again at ChatGPT:

Credit: OpenAI
At first glance the results look amazing — especially compared to the non-answers from Google Maps and Apple Maps. But dig a little deeper and you’ll find the results aren’t so rosy:
- The first result is acceptable: it is roughly within two miles.
- The second result just does not exist: it’s an empty lot.
- The third result may well be open 24 hours, but it’s over three miles away.
- More importantly perhaps: the results missed an amazing local store called Time Market, which is just one mile away and is open until 10pm.
So lovely, friendly answers, but entirely untrustworthy. For the query above ChatGPT claimed to consult Bing as well as a bunch of yellow-page and grocery store sites. Yellow-page sites? Are you surprised it didn’t work?!
Now my tinkering is just that. I am not even scratching the surface. However, it turns out there is a highly respected institution that’s been doing some much more serious digging into the question of geospatial AI. That organization is the Alan Turing Institute, which was commissioned to write an independent report on Geospatial AI for Land Use, specifically “to explore how land use decision making can be supported by geospatial data, including earth observation data, and artificial intelligence”.
In the report they include a section on the value of LLMs:

Credit: The Alan Turing Institute
The point therefore is to make the data more accessible, enabling a non-geospatial analyst to get the answers they’re looking for by simply asking natural language questions. The Turing Institute explored how a map-centric UI with a built-in specialized LLM chatbot could help broaden the audience, in their case with an app called ‘Demoland’:

Demoland app — Credit: UK Government and the Alan Turing Institute
Here is a typical conversation with the chatbot in Demoland:
Question: In areas of high deprivation has access to jobs improved?
Answer: In approximately 15.4% of the areas with high levels of deprivation, job accessibility has improved. This suggests that while some progress has been made, there is still a significant proportion of high deprivation areas where job accessibility has not improved. It’s important to continue efforts to improve job accessibility in these areas, as this can play a key role in reducing deprivation.
Question: How has air quality changed around schools?
Answer: Sure, let’s take a look at the changes in air quality around some schools:
- St Joseph’s Roman Catholic Aided Primary School: The air pollution index has increased by 0.1074 units. This school is located in Gateshead.
- Newcastle Bridges School: The air pollution index has increased slightly by 0.0479 units.
- Alternative Education Service – The Beacon Centre: There has been no change in the air pollution index at this location in South Tyneside.
- Yamaha Music School: Similarly, there has been no change in the air pollution index at this location in North Tyneside.
- New Silksworth Academy Junior and Infant: Both these schools, located in Sunderland, have also seen no change in the air pollution index.
Please note that these changes are relative to the current air pollution index at each location.
This certainly seems to be very promising. One of the key recommendations coming out of the report is to develop a set of Geospatial AI Toolkits for LLMs:

Credit: The Alan Turing Institute
Clearly this is precisely what we’re missing from geospatial apps — evidenced not only by my tinkering but by institutions performing much more serious study. And when those geospatial AI toolkits are integrated into LLMs, the results are very promising.
I’m sure there are many organizations, large and small, focused on such integration. One startup I’ve come across that is doing precisely this is Aino. Their natural language interface is used for fairly atomic queries at this point, but it still significantly lowers the barrier to entry relative to a traditional Geographically Insidious System.
Here’s a demo of Aino’s platform being used for site selection (click to play):
Credit: Aino
But what about the larger organizations? What are they doing?
One notable and recent example of employing AI and natural language interfaces to query geospatial data has been the collaboration between Microsoft and NASA for something Microsoft is calling “Earth Copilot”.
If you follow Microsoft then you’ll know that “Copilot” is the brand they use to indicate integration of AI into their products. You can think of it like their version of “Apple Intelligence”. You might think, therefore, that Earth Copilot is Microsoft’s attempt to turn the whole planet into a computer to answer the Ultimate Question of Life, The Universe, and Everything.
Alas, it’s not quite that bold. But it is designed to help non-geospatial analysts quickly answer questions about the planet from the petabytes of earth observation data that NASA has collected. Here’s Microsoft’s CEO, Satya Nadella, announcing Earth Copilot at Microsoft Ignite 2024 just a few days ago:
Now we all know Microsoft has a tendency to pre-announce something as a product well before it’s generally available, and Earth Copilot is no doubt one of those cases. Nonetheless, the announcement paints a clear vision of what’s (hopefully) coming to all of us.
An American might summarize all this by saying: “We want to democratize access to geospatial data and geospatial systems”.
However, a Brit might simply say: “Let’s make stodgy and dodgy mapping technology work for everyone!”2
Beyond Natural Language Interfaces
When contemplating using AI for geography we shouldn’t be myopic and limit our thinking to the use of LLMs and natural language interfaces. Indeed there are two other important use cases for AI that are worth exploring:
- Use of AI for classification of objects on the ground
- Use of AI for positioning — i.e. locating oneself
Let’s dig into this a little further…
Object Classification
The concept of using AI for object classification is not new. Indeed Google Photos started giving users the ability to search for objects inside photos back in 2015. That’s almost ten years ago. So back then you could attempt to search your photo library for “cats” or “cars” and the app would miraculously list the photos with cats or cars. By 2024 this had advanced significantly: with the Ask Photos feature you can ask questions like “show me the best photo from each national park I visited”.
Now this approach also works in the geospatial world for identifying objects on the ground. In fact organizations like NASA and the US Department of Defense were experimenting with rudimentary approaches to automatically identifying things like forests and urban areas back in the 1980s. However, it wasn’t until recently, with the advent of machine learning, neural networks and sheer GPU horsepower, that the capability really took off.
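Those early rule-based approaches are easy to illustrate. A classic example is NDVI, the Normalized Difference Vegetation Index, computed per pixel from a satellite image’s red and near-infrared bands; healthy vegetation reflects far more near-infrared than red light, so thresholding NDVI crudely separates vegetation from water and bare ground. A Python sketch with made-up reflectance values (the thresholds are illustrative; modern ML segmentation replaces such hand-picked rules):

```python
# NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to +1.
def ndvi(nir: float, red: float) -> float:
    return (nir - red) / (nir + red)

def classify(nir: float, red: float) -> str:
    """Crude 1980s-style rule: threshold NDVI per pixel."""
    v = ndvi(nir, red)
    if v > 0.4:
        return "vegetation"   # forests, crops
    if v < 0.0:
        return "water"        # water absorbs near-infrared strongly
    return "bare/urban"

# Made-up band reflectances for three pixels:
print(classify(nir=0.50, red=0.08))  # dense forest-like signature
print(classify(nir=0.02, red=0.05))  # water-like signature
print(classify(nir=0.25, red=0.20))  # urban-like signature
```

Applied across every pixel of a satellite scene, even a rule this crude yields a rough land-cover map, which is essentially what those early NASA and DoD experiments produced.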
Obviously there’s a great deal of work going on in this space that we don’t know about. For example, it has tremendous applicability to military applications. So no doubt the likes of Palantir are focused on this problem full bore.
For the rest of us though there’s another company that enabled some enormous progress. And that company is …
Meta.
Meta’s research labs have developed and released something they call “SAM”, otherwise known as the “Segment Anything Model”. By ‘segmenting’, Meta means identifying and outlining the things or objects present in a photographic image. And Meta’s lofty goal is to live up to its tag line: they want to be able to recognize any object or thing in any image.
You can learn a bit more about SAM by visiting the Segment Anything website. Here’s a demo from the site (you may need to tap/click to play):
Credit: Meta
Why is SAM relevant to geospatial? Well, it’s been used by geospatial organizations to interpret aerial and satellite imagery very effectively. One such organization that is successfully employing SAM is that age-old enterprise mapping technology juggernaut: Esri.
Esri’s R&D arm has published something they call “Text SAM” that combines the power of Meta’s SAM with another open source model called “Grounding DINO”, which can find objects given a text prompt.3 The results are pretty impressive:

Text prompt: “car” — Credit: Esri Text SAM 
Text prompt: “cloud” — Credit: Esri Text SAM 
Text prompt: “umbrella” — Credit: Esri Text SAM
You could argue that Esri has not actually invented anything here. They are merely integrating capabilities developed by other organizations into an Esri workflow. Indeed this may be true, but their work still illustrates great potential.
Imagine if you were to combine this object recognition capability with the power of a natural language interface:
- “Produce a map of all the buildings under construction in the suburbs of Chicago together with an accompanying chart that shows total area under construction by ZIP Code”
- “Compare the tree cover for the years 2016 to 2024 and produce a chart showing changes by county over the years”
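To make that concrete, here’s a toy sketch of how a natural-language request might be reduced to a structured geospatial aggregation. Everything here (the parcel data, the tiny regex “grammar”) is invented for illustration; a real system would use an LLM plus a GIS backend, not a regular expression:

```python
# Toy natural-language geospatial query: "total area under construction
# by ZIP Code". All data and parsing logic are invented for illustration.
import re

PARCELS = [
    {"zip": "60601", "status": "under_construction", "area_m2": 1200},
    {"zip": "60601", "status": "complete",           "area_m2":  800},
    {"zip": "60614", "status": "under_construction", "area_m2":  450},
]

def answer(query: str) -> dict:
    """Reduce a (very constrained) English query to an aggregation."""
    if re.search(r"under construction", query, re.I):
        totals = {}
        for p in PARCELS:
            if p["status"] == "under_construction":
                totals[p["zip"]] = totals.get(p["zip"], 0) + p["area_m2"]
        return totals
    raise ValueError("query not understood")

print(answer("total area under construction by ZIP Code"))
# {'60601': 1200, '60614': 450}
```

The interesting work in a real system is of course the parsing and the spatial joins; the aggregation at the end is the easy part.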
Positioning
The challenge of figuring out your location is not new. It’s also a problem that has by no means been solved. In fact with GPS jamming it’s getting worse. BTW: if you’re interested in the history of positioning technology then I recommend you read “12 Map Happenings that Rocked our World: Part 8 — Oh Brother, Where Art Thou?“
I’m sure you’ve all experienced the frustration of trying to locate yourself.
One of the most confounding issues facing anybody who has used a navigation app is “which way do I go first?”. For example, this is particularly perplexing when exiting a subway station: should I start by going left or by going right?
In an effort to solve this problem organizations like Google and Apple have started to provide augmented reality guided solutions to help people get off on the right foot. Basically you are told to hold up your phone and wave it around the buildings in front of you. It will then magically recognize exactly where you are and promptly encourage you to go in a specific direction.
Apple calls this “Visual Localization”, but more recently the industry has been calling this “VPS” which stands for “Visual Positioning System”.
Apple Visual Localization as used in Apple Maps — Credit: Apple
Now it turns out providing VPS requires collecting a tremendous amount of imagery data, and so this feature is generally only available in a few cities around the world.
But there is an organization that is about to change all that.
Their name is Niantic, and they’re probably most famous for building the incredibly popular game: Pokémon GO.
Now what you might not know is that the super smart guys who formed Niantic also invented Google Earth and later made Google Maps what it is today. Niantic started as a separate organization within the greater Google empire back in 2010 and was later spun off as an independent company in 2015.
In 2016 Niantic launched Pokémon GO. This game quickly became the most popular — and most profitable — augmented reality app of all time.
But just a few weeks ago, on November 12, Niantic announced they were pioneering something well beyond a popular game:
They announced that they were pioneering something they call a “Large Geospatial Model”.
In other words Niantic has set out to replace the ‘L’ (for ‘Language’) in LLMs with a ‘G’ (for ‘Geospatial’) to create something called an ‘LGM’.
But what, exactly, does this all mean?
Well to quote from their blog:
At Niantic, we are pioneering the concept of a Large Geospatial Model that will use large-scale machine learning to understand a scene and connect it to millions of other scenes globally.
When you look at a familiar type of structure – whether it’s a church, a statue, or a town square – it’s fairly easy to imagine what it might look like from other angles, even if you haven’t seen it from all sides. As humans, we have “spatial understanding” that means we can fill in these details based on countless similar scenes we’ve encountered before. But for machines, this task is extraordinarily difficult. Even the most advanced AI models today struggle to visualize and infer missing parts of a scene, or to imagine a place from a new angle. This is about to change: Spatial intelligence is the next frontier of AI models.
Even with all the scans that Niantic collects — they have 10 million scanned locations around the world and are receiving about 1 million fresh scans each week, each containing hundreds of discrete images — it’s a drop in the bucket. There is simply not enough data to make VPS pervasive, even with Niantic’s scale.
But that’s where Niantic’s LGM will come in: it will essentially fill in the blanks:
Imagine yourself standing behind a church. Let us assume the closest local model has seen only the front entrance of that church, and thus, it will not be able to tell you where you are. The model has never seen the back of that building. But on a global scale, we have seen a lot of churches, thousands of them, all captured by their respective local models at other places worldwide. No church is the same, but many share common characteristics. An LGM is a way to access that distributed knowledge.
An LGM distills common information in a global large-scale model that enables communication and data sharing across local models. An LGM would be able to internalize the concept of a church, and, furthermore, how these buildings are commonly structured. Even if, for a specific location, we have only mapped the entrance of a church, an LGM would be able to make an intelligent guess about what the back of the building looks like, based on thousands of churches it has seen before. Therefore, the LGM allows for unprecedented robustness in positioning, even from viewpoints and angles that the VPS has never seen.
The global model implements a centralized understanding of the world, entirely derived from geospatial and visual data.
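Niantic’s local-model/global-model split can be caricatured in a few lines of Python. Everything below (the place names, categories and viewpoints) is invented for illustration; a real LGM is a learned model, not a lookup table:

```python
# Caricature of the local/global split: a "local model" only knows the
# viewpoints it has actually scanned, while the "global model" supplies
# a category-level prior learned from thousands of similar scenes.
LOCAL_SCANS = {
    "st_marys_church": {"front"},      # only the entrance was scanned
}

GLOBAL_PRIORS = {
    # viewpoints the global model can plausibly infer for a category
    "church": {"front", "back", "side"},
}

CATEGORY = {"st_marys_church": "church"}

def can_localize(place: str, viewpoint: str) -> str:
    if viewpoint in LOCAL_SCANS.get(place, set()):
        return "local model"               # this exact view was scanned
    if viewpoint in GLOBAL_PRIORS.get(CATEGORY.get(place, ""), set()):
        return "global model (inferred)"   # filled in from similar scenes
    return "cannot localize"

print(can_localize("st_marys_church", "front"))  # local model
print(can_localize("st_marys_church", "back"))   # global model (inferred)
```

The point of the sketch is the fallback: when the local model draws a blank, the global model makes an intelligent guess from everything it has seen elsewhere.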
With Niantic LGM the goal is to be able to locate you to within a few centimeters — just about anywhere on the planet.
Essentially it will be your ultimate hack for GeoGuessr … 😱
Credit: James Likes Games
So What Does The Future Hold?
As is usual for all things AI, things are moving fast. Here are just a few predictions as to where we’ll end up…
Converse With Any Geospatial App
Gone will be the days of brain-dead search boxes. You will expect any mapping app, consumer or professional, not only to understand what you type into a text box, but also to hold a conversation with you. So questions like the one I tried asking Google Maps above will be answered flawlessly, with the ability to maintain context.
This capability will obviously be of tremendous benefit to the billions of people that use consumer mapping apps every day, but where it’s really going to make a huge difference is in professional apps, in many cases eliminating the need for professionally trained “GIS analysts”.
Now the department or individual that normally has to rely on the geospatial geeks in the basement will be able to get the answers they’re looking for themselves. You only have to look at the queries used in the “Demoland” app that the Alan Turing Institute highlighted to understand where it’s all going.
Gone will be the days of contingent values, complex edges, feature barriers and topological unfathomables. You’ll just ask questions.

ArcGIS Pro Ribbon Menu Customization — Credit: Esri
For example, I can imagine how a map-centric site selection app might work in this world. No buttons. No pull-downs. No ribbons. Just an interactive map, a query text box and an icon to invoke the microphone:
- “show me the areas in west London that have a median income greater than £65,000 and a higher density of Millennials”…
- “now show me all available retail locations less than 200 square meters that are within a five minute walk of these areas”…
- “now eliminate all potential locations that are within a three minute walk of a Lululemon store”…
- “ok. now give me a list of those locations together with the contact details of the leasing agent”
Get ready. It’s going to happen.
Locate Me and Orient Me — Immediately, Anywhere
When they work, satellite positioning systems are great. But they don’t work in concrete canyons. And they don’t work indoors. They don’t work underground. They get spoofed. Unless you invest in very expensive equipment, they’re mostly accurate to only a few meters. And, when you start your journey, they’re not good at telling you which way to turn.
VPS is beginning to change that. But it relies on pre-scanning the area (outside or inside) to make it all work. And doing that globally, everywhere, is plainly not achievable, especially for areas that are not drivable. We need to add some intelligence to fill in the blanks. And it’s organizations like Niantic that are showing the way.
In this new world HD location will be a pervasive entitlement. Being able to locate yourself precisely, accurately, anywhere — particularly indoors — will simply be expected. Just like Wi-Fi, it’ll be a new layer in Maslow’s hierarchy of needs.
A World of Geospatial Agents
Now I haven’t mentioned AI agents thus far. But there’s a lot of buzz about them across many companies and industries.4 As you might expect these creatures will be lurking in the background, monitoring, checking, acting on your behalf. You can imagine their use in consumer apps for keeping an eye on prices and availability, and taking actions when certain specified conditions are met — think reservations for that table at a popular restaurant, or perhaps that amazing Airbnb villa in Tuscany that you’ve always wanted to stay at.
In the professional world it will enable you to be productive in other ways while the geospatial AI agents are doing grunt work on your behalf. In a site selection app, keeping an eye out for properties that meet your requirements. Or in a public safety app, automatically alerting you and others should dangers arise.
Agentic AI for geospatial is going to be huge. And we haven’t even started.
Footnotes
- BTW, with the exception of American spelling this article is written in English, so it’s “maths”, not “math”. Q: did you sadly only study a singular “mathematic” at school? ↩︎
- I can envisage a new marketing campaign for an innovative new mapping startup: “Not stodgy and dodgy — easy peasy, lemon squeezy”. Who says I don’t have a second career as a CMO? ↩︎
- ChatGPT’s comparison of SAM vs. Grounding DINO. ↩︎
- And more only yesterday: Google’s AI enters its ‘agentic era’ ↩︎
-
Navigating the Moon: The Insanely Amazing Apollo Missions
Living in the US in the lead up to a national election is sure to increase your anxiety at the best of times. And experiencing it in 2024 is even more collywobbling. And then there is everything else going on. In these situations I’ve found that escape is always good. So, if you’re like me — and are looking for a welcome distraction — perhaps you’d like to join me on a journey some 380,000 kilometers from Earth: a journey to the Moon…
An Amazing Era
A few of you fellow crinklies might have grown up in the amazing era of NASA’s endeavors in the 1960s and 70s. The 1960s marked the dawn of the space era, but the most exciting period was from 1969 to 1972 when NASA successfully landed six manned spacecraft on the Moon. These were of course the Apollo Missions.
But how did they navigate there? Where exactly did they land? How far did these astronauts venture from their landing craft? And how did their excursions compare to the journeys us Earthlings take every day? To learn more, please join me on this amazing expedition…

Astronaut Buzz Aldrin standing outside the Apollo 11 Lunar Module on 20 July 1969 — Photo credit: Neil Armstrong, NASA
Navigating to the Moon
Yours truly was a wee six years old when Apollo 11 landed, so almost too young to be impressed. But impressed I was.
If you’re a young ‘un, you might be a bit blasé about NASA’s efforts. After all, everyone’s BFF, Mr. Musk, seems to launch about 1,000 satellites every day. So what’s the big deal? Well let’s just say in 1969 things were a little more challenging.
For those of you who read the Map Happenings post on the birth of in-vehicle navigation systems, you might have been gobsmacked by the fact that the 1985 Etak Navigator came out before GPS and CD-ROMs, and ran on an Intel 8088 CPU, which had only 29,000 transistors. And you might be even more impressed when you compare that to Apple’s latest M4 chip: it has not a million more transistors than the Intel 8088, but a million TIMES more1.
But, by Apollo standards the Intel 8088 was like magic.
Imagine if you were fortunate enough to have talked to one of the engineers who developed the guidance and navigation computers for Apollo. I’m sure they would have given their two front teeth for something as powerful as an Intel 8088. In fact I have no doubt they would promptly have quoted Monty Python’s Four Yorkshiremen sketch — “You were lucky!”
By the way, speaking of Yorkshiremen, at the time Apollo 11 landed on the Moon in 1969 my family lived in the sheep farming country of northern England — the Yorkshire Dales to be precise. And our house was in a valley with no line of sight to any TV signals, so we had to make do with the radio and printed newspapers. That didn’t stop yours truly from learning everything I could about Apollo. My most distinctive memory of the time was building a 1:144 scale Airfix model of the Saturn V rocket, complete with separate stages and an extractable Lunar Module. It was fantastic.

I still have this newspaper from the day after the Apollo 11 landing — Credit: The London Daily Telegraph
We finally got a television signal in time for Apollo 13. My grandmother was visiting at the time. I remember her exclaiming: “I’m not going to watch it. If I do, it’s sure to go wrong!”
But let’s get back to the Apollo guidance and navigation computers…
There were four guidance computers on Apollo:
- one for the Saturn V booster to get the astronauts off the launch pad and into Earth’s orbit
- one for the Command Module to get the astronauts from Earth’s orbit to the Moon’s orbit and back to Earth
- two for the Lunar Module: one for landing and subsequent ascent back to the orbiting Command Module and a separate emergency abort computer in case something went horribly wrong.
Designed by the bright lads and lasses at MIT, these computers were super advanced for their time. In the 1960s most computers filled multiple cabinets with electronics and had to be housed in large air-conditioned rooms. The Apollo Guidance Computer, or ‘AGC’, in the Lunar Module weighed just 30kg (66lbs) and was about the size of a small suitcase:

The Apollo Guidance Computer (AGC) — Credit: Wikimedia 
The Apollo Guidance Computer opened up, showing the modules inside — Credit: Ken Shirriff’s blog
To get a sense of some of the challenges the engineers faced in developing these computers, I recommend you watch this nine minute video by ‘Curious Droid‘. It’s seven years old, but still very informative:
The AGC in the Lunar Module, with only 2K of erasable memory (RAM) and 36K of fixed memory (ROM)2, had to do the complicated calculations to help the astronauts land on the Moon and return safely to re-dock with the Command Module. It was one of the first computers to use integrated circuits (ICs) for its CPU, but don’t get too excited: each IC contained just three (3) transistors. A far cry from the super advanced Intel 8088! Also, don’t assume this 30kg suitcase was filled with these diminutive ICs. It wasn’t.
Probably the most complex modules in the AGC were the laboriously hand-woven “core rope memory” modules for the fixed memory (or ROM).
What’s intriguing to me as a former software engineer is that all the software programs in the AGC were actually encoded in the hardware using the core rope memory. So, the software was actually hardware. A mistake made in painstakingly weaving these core ropes by hand could therefore have introduced a software bug. As one module of core rope memory took three months for someone to weave, making mistakes was incredibly expensive.
To give you an idea of just how complex the core rope memory was, here are some visuals. A single core rope module was about 30cm (12″) wide and about 10cm (4″) tall:

Core Rope Module used for Fixed Memory (ROM) in Apollo Guidance Computer — Credit: Ken Shirriff
Zooming in to see the detail. The number of wires and the complexity of the weaving are astounding:

Detail of Core Rope Module used for Fixed Memory (ROM) in Apollo Guidance Computer — Credit: Ken Shirriff
Here is a lady painstakingly weaving the wires in one of the modules:

A lady weaving the Core Rope Module used for Fixed Memory (ROM) in Apollo Guidance Computer — Credit: Ken Shirriff
So imagine the job: taking this big heavy code listing, and literally weaving it into hardware:

Credit: CuriousMarc
For a fascinating deep dive, read Ken Shirriff’s blog. As he says in his article:
At a high level, core rope is simple: sense wires go through cores to indicate 1’s, or bypass cores to indicate 0’s. By selecting a particular core, the sense wires through that core were activated to provide the desired data bits.
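That readout scheme translates almost directly into a few lines of code. Here’s a minimal software model (the address and contents are invented, and a real rope module was far denser, packing many words per core):

```python
# Minimal model of core rope readout: a sense wire threaded *through* a
# core reads as a 1 for that bit; a wire that *bypasses* the core reads
# as a 0. The "ROM contents" below are invented for illustration.
ROM = {
    # core address -> the bit positions whose sense wires thread the core
    0o2000: {0, 2, 3},      # encodes binary 1101 = octal 15
}

def read_word(address: int, word_length: int = 15) -> int:
    threaded = ROM.get(address, set())
    word = 0
    for bit in range(word_length):
        if bit in threaded:     # wire goes through the core -> 1
            word |= 1 << bit
    return word                 # bypassing wires contribute the 0s

print(oct(read_word(0o2000)))   # 0o15
```

Note that "writing" this ROM means physically threading wires, which is exactly why a one-wire mistake was a three-month problem.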
So the software was encoded in the hardware. And there were bugs.
In the case of Apollo 11 a bug was discovered in the landing abort code just a month before the planned launch. If it ever manifested there was a chance it would have sent the lunar lander tumbling head over heels. Needless to say Neil Armstrong was not happy. As a result somebody had to get inside the already fully stacked Saturn V rocket, make their way into the Lunar Module, remove the AGC and extract the offending core rope module, replacing it with one that had been painstakingly modified to correct the bug. The fix involved locating the offending wire among the many, many thousands of wires inside the module, terminating it at both ends and then threading a new replacement wire, this time through the correct cores.
Most of the staff tasked with weaving the core ropes were women. One of the most impressive women on staff was Margaret Hamilton. David Brock recounts the history in his article “Software as Hardware: Apollo’s Rope Memory” in IEEE Spectrum:
The supervisors responsible for overseeing the careful integration of changes and additions to the software were known as “rope mothers,” regardless of their actual gender identity. The rope mother’s boss, though, was a woman: Margaret Hamilton. Before Apollo, Hamilton worked as a programmer at MIT’s Lincoln Laboratory on the Semi-Automatic Ground Environment (SAGE) air-defense system. After MIT won the contract to supply the guidance and navigation system for Apollo, Hamilton got a job working on the systems software, and she eventually led the team that created the onboard flight software.

Margaret Hamilton, mathematician and computer programmer at the MIT Instrumentation Laboratory, sits in the mock-up of the Apollo 12 command module on display at the Cambridge school where she headed the group that programmed Intrepid’s pinpoint landing in the Sea of Storms, on the Moon — Credit: AP Wirephoto
If I’ve succeeded in piquing your interest in the Apollo navigation computers, and you want to nerd out on the really gory details, then watch this superb 1965 interview with the MIT engineers who designed and developed the AGC:
All pretty astounding, and I’d frankly be surprised if today’s young MIT graduates could pull it off given the same constraints.3
The Six Apollo Landings
The Moon travels around Earth once every 27.322 days in an elliptical orbit, or an elongated circle. It is tidally locked with Earth, which means that it spins on its axis exactly once each time it orbits our planet. Because of this, people on Earth only ever see one side of the Moon. All the Apollo landings were on the ‘near side’ — the side we can see from Earth.
[BTW: China is the only country to have successfully landed spacecraft on the far side of the Moon. This first landing was a fairly recent accomplishment, achieved in January 2019. Earlier this year China also became the only country to have successfully collected lunar samples from the far side of the Moon and returned them safely to Earth.]
If you’ve ever done any geospatial work, you may have done something called ‘site selection’. And you don’t have to be a geek to do it. For example, many people use off-the-shelf geospatial software to select the best site for a retail store or the best site for a smart locker. The folks at NASA went through a similar process when selecting potential landing sites for each of the Apollo missions.
Using images from uncrewed lunar orbiters that were launched in 1966 and 1967, NASA looked for sites that met the following criteria:
- Smoothness of the area: the sites should have relatively few craters;
- Approach paths: there should be no large hills, tall cliffs or deep craters which could cause incorrect altitude signals to the landing radar;
- Propellant: the sites were selected to allow for the expenditure of the least amount of propellant;
- Countdown delays: the sites were selected to allow for the ‘recycling time’ of the Saturn V if the countdown were to be delayed;
- Free-return: the sites must be within reach of the Apollo spacecraft in the free-return trajectory, that is: a path that would allow a coast around the Moon and safe return to Earth without any engine firings should a problem arise on the way to the Moon;
- Lighting: for optimum visibility during the landing approach, the Sun angle should be between 7 and 20 degrees behind the LM; for any given site, this results in a one-day launch window per month;
- Slope: the general slope of the landing area must be less than 2 degrees.
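If you’ve done site selection in a GIS you can imagine encoding criteria like these as a filter. Here’s a toy sketch applying just the slope criterion plus the latitude/longitude band that Apollo 11’s candidate sites fell within; the candidate sites themselves are invented, and the real selection of course involved orbiter imagery and far more analysis:

```python
# Toy Apollo-style site selection filter. The candidate sites are
# invented; the thresholds come from the criteria described above.
SITES = [
    {"name": "A", "lat":  0.7, "lon":  23.6, "slope_deg": 1.5},
    {"name": "B", "lat":  9.0, "lon":  15.0, "slope_deg": 1.0},  # too far north
    {"name": "C", "lat": -2.0, "lon": -50.0, "slope_deg": 1.8},  # too far west
    {"name": "D", "lat":  1.0, "lon": -10.0, "slope_deg": 3.0},  # too steep
]

def acceptable(site: dict) -> bool:
    return (
        abs(site["lat"]) <= 5        # within 5 degrees of the lunar equator
        and abs(site["lon"]) <= 45   # within 45 degrees E/W longitude
        and site["slope_deg"] < 2    # general slope under 2 degrees
    )

print([s["name"] for s in SITES if acceptable(s)])  # ['A']
```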
For Apollo 11 this selection process narrowed it down to five sites. All were within an area on the visible side of the Moon between 45 degrees east and west longitude and between 5 degrees north and south of the lunar equator.
A similar site selection approach was taken for the other six planned missions (Apollos 12–17), each one being slightly more ambitious. Of course Apollo 13 never landed. But five subsequent missions did. Here is a marvelous video created by Ernie Wright (USRA) and Noah Petro (NASA/GSFC) of NASA’s Scientific Visualization Studio showing the six landing sites. The timing of the landings and the phase of the Moon are all relatively correct — notice the big gap between Apollo 12 and 14. [note: you may need to tap to play video]
Credit: NASA’s Scientific Visualization Studio. Created by Ernie Wright (USRA) and Noah Petro (NASA/GSFC)
Here’s a summary showing the locations and statistics for each mission:

Credit: NASA’s Scientific Visualization Studio
Apollo 11 was certainly one of the riskiest landings. Not only did the AGC throw error codes and reboot several times during the descent, but Neil Armstrong also encountered a rock field at the intended landing spot. He had to manually maneuver the lander to another, smoother location. In doing so they almost ran out of fuel: the Lunar Module indicated just 17 seconds of fuel left when they touched down4. The final landing spot ended up being about 7 kilometers (4.25 miles) downrange from the intended landing site.
To begin to understand the complications of the landing you need to get to know Peter Adler and Don Eyles. These were two of the ‘young experts’ at the MIT Instrumentation Lab — the Draper Lab — who worked on the software for the Apollo Guidance Computer. Peter recounts the story in his 1998 article on the “Apollo 11 Program Alarms“:
You have to constantly keep in mind the amazing — to anyone using a PC today — constraints we had to work with in programming the [Apollo Guidance Computer]. There were 36,864 15-bit words of what we called “Fixed” memory, which today would be called ROM, and 2048 words of “Erasable” memory or RAM. With only rare exceptions, all of the executable code was in the Fixed memory, along with constants and other similar data. Erasable memory was used for variable data, counters, and the like. With so little Erasable memory available, we were forced to use the same memory address for different purposes at different times. Thus, a location whose contents might be altitude-over-the-lunar-surface during the landing stage might have contained the results of a sextant sighting of a navigational star from the alignment program. I think there were some memory locations that were shared seven ways. You can imagine the testing we had to do to ensure that the same memory location was not being used by more than one program at the same time.
You also have to remember that, long before Bill Gates, we had developed a real-time multi-tasking operating system. There were interrupt-driven, time-dependent tasks — e.g., turn the [Lunar Module] Descent Engine on at the correct time — as well as priority-ordered jobs that dealt with less time-critical things. Each scheduled job has some erasable memory to use while it was executing.
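The seven-ways-shared erasable memory Adler describes is essentially manual memory overlaying, and the hazard he mentions (two programs using the same address at the same time) can be modeled in a toy class. The address and program names below are invented:

```python
# Toy model of a time-shared AGC erasable memory cell: one address,
# many uses, and a runtime check for the conflict the MIT team had to
# rule out by testing. Names and the address are invented.
class ErasableCell:
    def __init__(self, address: int):
        self.address = address
        self.owner = None       # which program currently uses this cell
        self.value = 0

    def claim(self, program: str):
        if self.owner is not None and self.owner != program:
            raise RuntimeError(
                f"address {self.address:o} already in use by {self.owner}")
        self.owner = program

    def release(self):
        self.owner = None

cell = ErasableCell(0o1234)
cell.claim("LANDING")           # altitude-over-the-lunar-surface, say
cell.value = 5000
cell.release()
cell.claim("ALIGNMENT")         # later, a sextant sighting result
cell.value = 42
```

The AGC of course had no such runtime guard; the guarantee had to come entirely from exhaustive testing, which is Adler’s point.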
During the descent, the Apollo Guidance Computer (AGC) repeatedly threw 1202 alarms and later 1201 alarms. This was due to the fact that repeated jobs to process rendezvous radar data were scheduled because of a hardware bug: a misconfiguration of the radar switches. As a consequence the AGC quickly ran out of space to store data in its erasable memory (or RAM). Peter continues to recount the story:
On Apollo 11, each time a 1201 or 1202 alarm appeared, the computer rebooted, restarted the important stuff, like steering the descent engine and running the [Display/Keyboard] (DSKY) to let the crew know what was going on, but did not restart all the erroneously-scheduled rendezvous radar jobs. The NASA guys in the [Mission Operations Control Room] knew — because MIT had extensively tested the restart capability — that the mission could go forward.
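The restart behavior in that quote can be sketched as a toy scheduler that sheds non-essential work when it runs out of room. The job names and the queue limit below are invented; the real AGC managed fixed "core sets" rather than a Python list:

```python
# Toy model of the 1201/1202 behavior: on overflow, "reboot" and
# restart only the essential jobs, shedding the spurious radar work.
MAX_JOBS = 7    # invented stand-in for the AGC's limited core sets

def run_cycle(jobs: list) -> list:
    if len(jobs) <= MAX_JOBS:
        return jobs                          # normal operation
    print("1202 alarm: restarting")          # overflow -> restart
    return [j for j in jobs if j["essential"]]

jobs = [{"name": "descent guidance", "essential": True},
        {"name": "DSKY display",     "essential": True}]
# a misconfigured radar switch keeps scheduling junk jobs...
jobs += [{"name": f"radar job {i}", "essential": False} for i in range(6)]

jobs = run_cycle(jobs)
print([j["name"] for j in jobs])  # ['descent guidance', 'DSKY display']
```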
Apollo 11 wasn’t the only hairy landing. Coming eight months after the near disaster of Apollo 13, Apollo 14 suffered a critical issue: a loose ball of solder was floating around in zero gravity inside the ‘Abort’ switch in the Lunar Module and was randomly shorting it out. It was the young MIT engineer, Don Eyles, that came to the rescue. Stephen Cass and Christine Dabney recount the story in writing for IEEE Spectrum:
In the early hours of 5 February 1971, Don Eyles had a big problem: Apollo 14 astronauts Alan Shepard and Edgar Mitchell were orbiting the moon, preparing to land, but it looked like they were going to have to come home without putting so much as a single footprint on the surface. The only way to save the mission was for Eyles to hack his own software.
Now remember, the software was all encoded in rope memory hardware, so Eyles had to develop an extremely clever work around. It involved the astronauts having to punch in 61 very carefully typed instructions into the computer that would result in the abort switch signal being bypassed. He later won a NASA award for his efforts. Eyles talks about it in this IEEE Spectrum interview from 2018:
None of the other landings had particularly significant issues with the exception of Apollo 15. It apparently landed on a tilt of about 10 degrees. The design limit for Lunar Module ascent stage lift-off is variously stated as 12 or 15 degrees. So it was close to the limit, but no issue ensued.
Mapping the Extra Vehicular Activities (EVAs)
The excursions the astronauts made on the first three landings were pretty limited, as they didn’t bring along any Cybertrucks to cause havoc. As a consequence they were constrained by their own two feet and, in the case of Apollo 11, time. That all changed for the last three missions when the astronauts brought along their 210kg EV buggy — the Lunar Roving Vehicle.
It’s pretty interesting to look at how far they actually ventured and compare that to where you and I might travel on Earth.
Let’s start with Apollo 11.
Neil Armstrong and Buzz Aldrin didn’t spend much time outside the Lunar Module — only about 2.5 hours. Much of it was spent collecting a few rocks, planting a flag and setting up experiments. So they didn’t get too far.
Fortunately we have a way to see exactly where they went with the help of the Lunar Reconnaissance Orbiter or LRO. Launched in 2009, this robotic spacecraft orbits the Moon at an altitude of 50-200 km (30-125 miles). LRO’s primary objective is to make fundamental scientific discoveries about the Moon. It contains a number of instruments, including a system of three cameras that capture high resolution black and white images and moderate resolution multi-spectral images of the lunar surface. The high resolution camera has about 0.5m resolution per pixel, so it’s good enough to spy on the Apollo landing sites. 😁
So here’s what we can see of the Apollo 11 landing site:

Credit: NASA LROC
You can clearly make out the left-behind Lunar Descent Module5 and the path that the astronauts walked to the edge of a nearby crater. But at no time did they venture further than 60 meters (200 feet) from their spacecraft.
Let’s compare where they went on the Moon to a trip across Paris: [note: you may need to tap to play video]
Credit: NASA LRO and Apple Maps
So, for all the hard work of those intrepid explorers, they didn’t even get out from under the Eiffel Tower!
Apollo 12 was a little more interesting. On April 20, 1967, about two and a half years before Apollo 12 landed on the Moon, another spacecraft had landed. It was called Surveyor 3. The Surveyor missions are little remembered but impressive precursors to the Apollo missions. Starting with Surveyor 1 which landed in June 1966 and ending with Surveyor 7 which landed in January 1968 these uncrewed craft laid the groundwork for Apollo. Five of these spacecraft, Surveyor 1, 3, 5, 6 and 7, successfully soft-landed on the lunar surface. In addition to demonstrating the feasibility of lunar surface landings, the Surveyor missions provided photos and the scientific and technological information needed for the Apollo manned landing program.
Why was Surveyor 3 so interesting? Well the Apollo 12 astronauts, Commander Charles “Pete” Conrad and Lunar Module Pilot Alan Bean, landed the Lunar Module “Intrepid” in the Ocean of Storms, a mere 163 meters (535 feet) from the robotic Surveyor 3 lander. A pretty amazing feat. You can get an idea of how precise the landing was by looking at this photo of Alan Bean inspecting the Surveyor with the Lunar Module in the background:

Credit: NASA
The Apollo 12 astronauts were much more energetic than their Apollo 11 counterparts. Over the period of their 31-hour stay on the Moon they made two “extra vehicular activities” (EVAs) for a total of almost eight hours trudging across the Moon. Again comparing their travels to a trip across Paris: this time they made it out from under the Eiffel Tower, across the Seine to the Trocadéro! [note: you may need to tap to play video]
Credit: NASA LRO and Apple Maps
Apart from the scary loose ball of solder floating around inside the ‘Abort’ switch, Apollo 14 went off without a hitch. Astronauts Alan Shepard and Ed Mitchell landed the Apollo 14 Lunar Module in the Fra Mauro formation on 5 February 1971. They stayed on the lunar surface for 33 hours, during which they performed two EVAs totaling a little over nine hours. Just like their predecessors they set out to beat the Guinness Moon Record for distance traveled:

Credit: NASA LRO
On the second EVA, shown in green, the crew made a round-trip traverse of 2.5 kilometers (1.5 miles) toward the rim of Cone Crater, east of the landing site.
On to the last three missions: Apollo 15, 16 and 17. These were effectively “Apollo 2.0” with beefed up Command, Service and Lunar Modules designed to support longer stays. The main upgrade though was the addition of the Lunar Roving Vehicle (LRV) which was strapped to the side of the Lunar Module and subsequently deposited on the lunar surface like a fold down Murphy Bed:
Just like the first road cars, the LRV enabled ambitious exploration, allowing the astronauts to roam far from their humble Airbnb:

Credit: NASA Here’s a map of Apollo 15’s drives across the moon, giving you an idea of the hilly terrain. The Apennine Mountains are 3,000 meters (10,000 feet) high to the east of the landing site, which required an unusually steep landing approach:

Credit: NASA LRO
When you look at the map of their tracks a question comes to mind: just how did they find their way around? Open their iPhone and bring up Google Maps? Well it turns out the Lunar Rover did have some navigation aids. There was no GPS, so it had to rely on dead reckoning. Each of the Rover’s four wheels was driven and also had an odometer. Compasses are useless on the Moon, as it has no magnetic core. Instead the heading had to be determined from a gyro, which was initially calibrated by measuring the pitch and roll of the Rover using an attitude indicator and measuring the Rover’s orientation with respect to the Sun using a “Sun shadow device”. Here’s a picture of the Rover’s dashboard: you can clearly see the bearing and distance indicators, as well as the speedometer:

Credit: NASA I’ll end this post with one final set of maps: the journeys of Apollo 17’s astronauts. Commander Eugene Cernan and Lunar Module Pilot Harrison “Jack” Schmitt landed the lunar module Challenger on December 11, 1972, about 240 meters (800 feet) from the pre-planned landing site. Cernan and Schmitt performed three moonwalks totaling just over 22 hours. At one point they traveled 7.4 kilometers (4.6 miles) from the safety of the Lunar Module, the farthest of the Apollo program. To get an idea of just how far that was, let’s compare their journeys to a trip across Manhattan: you can see that they made it as far north as the Bronx and as far south as the Brooklyn Bridge: [note: you may need to tap to play video]
Credit: NASA LRO and Apple Maps Or, if you’re more familiar with London, you’ll see their first EVA (in green) started from Harrods with a short trip to Battersea Park. Their second EVA (in yellow) went from Harrods, to Regent’s Park, over to the City of London, and back through Trafalgar Square and St James’s Park. But their third and final EVA (in blue) took them all the way from Harrods to Kew Gardens and Richmond upon Thames: [note: you may need to tap to play video]
Credit: NASA LRO and Apple Maps If you’re wondering what it looked like from their perspective, here’s a photo they took on their second EVA. Far, far away in the distance you can just about make out the Lunar Module:
Photo taken by Eugene Cernan at Station 6 of the Second EVA. Credit: NASA So, does it make you feel lonely? Or do you want to return to the mayhem of planet Earth?
Acknowledgments
- NASA
- Peter Adler and Don Eyles of MIT
- Ernie Wright (USRA) and Noah Petro (NASA/GSFC) of NASA’s Scientific Visualization Studio
- David Brock, Stephen Cass, Christine Dabney: writing for IEEE Spectrum
- The Lunar and Planetary Institute (LPI)
- Curious Droid on YouTube
- Monty Python
- The London Telegraph newspaper
Further Reading and Viewing
Moon Trekking and Landing Sites
- Do your own “moon trekking” at https://trek.nasa.gov/moon
- “The Apollo Lunar Surface Journal“: NASA
- Website for the Lunar Reconnaissance Orbiter Camera (LROC): credit LROC
- Downloadable high resolution animations of the six Apollo landing sites: credit NASA’s Scientific Visualization Studio
The Apollo Flight Computers
- The Apollo Flight Journal: “The Apollo On-board Computers” by Phill Parker.
- “Inside the Apollo Guidance Computer’s core memory” by Ken Shirriff
- “Software woven into wire: Core rope and the Apollo Guidance Computer” by Ken Shirriff.
- CuriousMarc: a 50-minute YouTube video delving into the Apollo core rope memory. This team restored an Apollo Guidance Computer and got it running again. Watch the video to find out how.
- “The Apollo 11 Program Alarms“: Peter Adler, MIT.
- “Light Years Ahead | The 1969 Apollo Guidance Computer“: excellent, nerdy presentation into the Apollo 11 program alarms by Robert Wills.
- “Don Eyles: Space Hacker. This programmer saved the Apollo 14 mission with a few dozen keystrokes”: Stephen Cass and Christina Dabney in IEEE Spectrum.
- Sunburst and Luminary: An Apollo Memoir by Don Eyles, recounts his story of developing the onboard software for the Apollo spacecraft.
Other Items of Interest
- The full Apollo 11 Astronaut communications voice transcript: NASA
- The Lunar and Planetary Institute (LPI): Apollo Missions
- List of all missions to the Moon: crewed and uncrewed. Published by Wikimedia
- Hack the Moon: a website about the people, the tech and the missions that made the Apollo program possible
- The Yorkshire Dales
Footnotes
- The Apple M4 chip has approximately 28,000,000,000 transistors. That’s 28 with a ‘B’. ↩︎
- These were 16-bit words, so 2048 x 16 bits of RAM and 36,864 x 16 bits of ROM. Fifteen of the 16 bits were for data and one was for parity. ↩︎
- By the way, just for fun I asked ChatGPT if it was up to the task of creating a program to land on the moon. As is becoming all too common, ChatGPT misunderstood and made mistakes. It didn’t understand the memory constraints so I had to correct it. Here is the full chat log. ↩︎
- Actually it was later found that Apollo 11 had about 45 seconds of fuel left. Sloshing in the fuel tank during Armstrong’s search for a safe landing site caused the fuel gauge to give an inaccurate reading. ↩︎
- For a better understanding of what was left behind watch this video of Apollo 17 lift-off from the Moon. ↩︎