-
ChatGPT (et al) for Geospatial. You Ain’t Seen Nothing… Yet.
So with all the recent froth about ChatGPT and Clippy 2.01, err, I mean the new Bing, I thought it might be fun to do a deeper dive and think about how all this might affect the geospatial industry.
In other words, what does the future hold for ‘Map Happenings’ powered by generative AI?
In order to write this article I started by doing a little research and investigation. I wanted to discover just how much these nascent assistants might be able to help in their current form. Now unfortunately I don’t yet have access to the new Clippy, so I had to resort to performing my tests on ChatGPT. However, while I suspect the new Bing might provide better answers, it might also decide that it loves me or wants to kill me or something2, so for now I’m happy to stay talking to ChatGPT.
I picked a number of different geospatial scenarios — consumer-based as well as enterprise-based.
The first scenario is based on a travel premise.
I imagined I was planning a trip to an unfamiliar city, in this case to Madrid. I was pleasantly surprised with the results — they weren’t too bad:
But if you try using ChatGPT for something a little more taxing than searching all known written words in the universe, like, for example, calculating driving directions, you will quickly be underwhelmed.
Take this example of driving from Apple’s Infinite Loop campus to Apple Park. At first the directions look innocuous enough:
However, digging in, you’ll find the directions are completely and utterly wrong.
It turns out ChatGPT lives in an alternate maps universe.
Diagnosing each step:
- “Head east on Infinite Loop toward Homestead Rd”: Infinite Loop does not connect to Homestead Rd. Get your catapult!
- “Turn right onto Homestead Rd”: so after catapulting from Infinite Loop over the freeway to Homestead you turn right. OK.
- “Use the left 2 lanes to turn left onto N Tantau Ave”: Err, you can’t turn left from Homestead to Tantau … unless the wind blows your balloon east of Tantau.
- “Use the left 2 lanes to turn left onto Pruneridge Ave”: Really? Hmm. Wrong direction!
- “Use the right lane to merge onto CA-280 S via the ramp to San Jose”: It’s actually I-280, but wait … Pruneridge doesn’t connect to the freeway… get out your catapult again!
- “Take the Wolfe Rd exit”: but if you took “CA-280” towards San Jose then you were traveling east, so now you’re suddenly west of Wolfe Rd. The winds must have blown your balloon again!
- “Keep right at the fork and merge onto Wolfe Rd”: Ok, I think.
- “Turn left onto Tantau Ave”: You’ll be stumbling on this one. Wolfe and Tantau don’t connect.
- “Turn right onto Apple Park Way”: wait, what?
Trying to make sense of ChatGPT’s incredibly bad driving directions
But wait, it gets worse:
ChatGPT runs out of energy at step 47 somewhere in New Jersey, presumably completely befuddled and lost.
Now this authoritative nonsense isn’t limited to directions.
Let’s look at some maths3.
First a simple multiplication:
So far, so good. But now let’s make it a little more challenging:
ChatGPT certainly sounds confident. But is the answer correct?
Well, here’s the answer you’ll get from your calculator, or in this example, WolframAlpha:
Credit: Wolfram|Alpha
Huh? It looks like ChatGPT not only lives in an alternate maps universe, it also lives in an alternate maths universe.
Now the founder of Wolfram|Alpha, Stephen Wolfram, recently authored an excellent and fascinating article about this: “Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT”. In it he’s lobbying for ChatGPT to use Wolfram to solve its alternate maths universe woes.
Architectural differences between ChatGPT vs. Wolfram|Alpha
Credit: Wolfram|Alpha
Stephen points out not only ChatGPT’s inability to do simple maths, but also its inability to calculate geographic distances, rank countries by size or determine which planets are above the horizon.
Credit: Stephen Wolfram
Stephen’s big takeaway:
In many ways, one might say that ChatGPT never “truly understands” things
ChatGPT doesn’t understand maths. ChatGPT doesn’t understand geospatial. In fact all it understands is how to pull seemingly convincing answers out of what is essentially a large text database. You can sort of see this in its response to the question about what to do in Madrid — this is likely summarized from the numerous travel guides that have been written about Madrid.
But even that is flawed.
In order to work efficiently, the information store from which ChatGPT pulls its answers has to be compressed. And it’s not a lossless compression. It is therefore vulnerable to the same kinds of side effects as audio, video or images that use lossy compression.
Ted Chiang covers this in his New Yorker article: “ChatGPT is a Blurry JPEG of the Web”
Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
In other words, don’t let ChatGPT’s skills at forming sentences fool you.
What’s missing?
Clearly ChatGPT’s loquacious front-end needs to be able to connect to computational engines. That is what Stephen Wolfram argues for, in his case for a connection to his Wolfram|Alpha computational engine.
I can easily imagine a world where a natural language interface like ChatGPT could be connected to a wide variety of computational engines.
There might even be an internationally adopted standard for such interfaces. Let’s call that interface CENLI (“sen-ly”), short for “Computational Engine Natural Language Interface”.
I challenge folks like Stephen @ Wolfram-Alpha and Nadine @ OGC to push such a CENLI standard. In that way we could build natural language interfaces to all sorts of computational engines (a toy sketch of what such an interface might look like follows the list below). This might include:
- All branches of Mathematics
- Financial Modeling
- Architectural Design
- Aeronautical Design
- Component Design
- … and — of course — all manner of Geospatial
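To make the idea a little more concrete, here is a minimal sketch of what a CENLI-style request might look like. To be clear, no such standard exists today; the schema, the field names and the engine identifier below are all invented purely for illustration:
```python
# Hypothetical sketch only: CENLI is imaginary, and this request schema,
# its field names and the engine identifier are all made up here.
import json

def build_cenli_request(prompt: str, engine: str, parameters: dict) -> str:
    """Package a natural-language prompt plus structured, computable
    parameters for a downstream computational engine."""
    request = {
        "cenli_version": "0.1",     # made-up version tag
        "engine": engine,           # e.g. a geospatial routing engine
        "prompt": prompt,           # the natural-language half of the request
        "parameters": parameters,   # the structured half the engine can compute on
    }
    return json.dumps(request, indent=2)

if __name__ == "__main__":
    print(build_cenli_request(
        prompt="Plan a scenic two-day drive from Tucson to Colorado Springs",
        engine="geospatial.routing",                      # hypothetical identifier
        parameters={"avoid": ["interstates"], "days": 2}, # hypothetical parameters
    ))
```
The interesting part, of course, is everything this little sketch glosses over: how the language model decides which engine to call, and how it turns a rambling human prompt into those structured parameters.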
It turns out making a connection between a generative AI and a computational engine has been done already — by NASA. A chap called Ryan McClelland, a research engineer at NASA’s Goddard Space Flight Center in Maryland, has been using generative AI for a few years now to design components for space hardware. The results look like something from an alien spaceship:
NASA’s AI designed space hardware — Credit: NASA / Fast Company
Jesus Diaz recently wrote a great article for Fast Company about Ryan’s work:
NASA is taking generative AI to space. The organization just unveiled a series of spacecraft and mission hardware designed with the same kind of artificial intelligence that creates images, text, and music out of human prompts. Called Evolved Structures, these specialized parts are being implemented in equipment including astrophysics balloon observatories, Earth-atmosphere scanners, planetary instruments, and space telescopes.
The components look as if they were extracted from an extraterrestrial ship secretly stored in an Area 51 hangar—appropriate given the engineer who started the project says he got the inspiration from watching sci-fi shows. “It happened during the pandemic. I had a lot of extra time and I was watching shows like The Expanse,” says Ryan McClelland, a research engineer at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “They have these huge structures in space, and it got me thinking . . . we are not gonna get there the way we are doing things now.”
As with most generative AI software, NASA’s design process begins with a prompt. “To get a good result you need a detailed prompt,” McClelland explains. “It’s kind of like prompt engineering.” Except that, in this case, he’s not typing a two-paragraph request hoping the AI will come up with something that doesn’t have an extra five more limbs. Rather, he uses geometric information and physical specifications as his inputs.
NASA’s AI designed space hardware — Credit: Henry Dennis / NASA / Fast Company
“So, for instance, I didn’t design any of this,” [McClelland] says, moving his hands over the intricate arms and curves. “I gave it these interfaces, which are just simple blocks [pointing at the little cube-like shapes you can see in the part], and said there’s a mass of five kilograms hanging off here, and it’s going to experience an acceleration of 60G.” After that, the generative AI comes up with the design. McClelland says that “getting the right prompt is sort of the skill set.”
What’s really interesting about McClelland’s work is that it is streamlining the long cycle of design -> engineering -> manufacturing. No longer does he need to pass off the designs to an engineering team who then iterates on them and subsequently passes them on to a manufacturing team who iterates even further. No. Now the generative AI tool compresses that process:
It does all of it internally, on its own, coming up with the design, analyzing it, assessing it for manufacturability, doing 30 or 40 iterations in just an hour. “A human team might get a couple iterations in a week.”
Jesus Diaz sums it up perfectly:
Indeed, to me, it feels like we are the hominids who found the monolith in 2001: A Space Odyssey. Generative AI is our new obsidian block, opening a hyper-speed path to a completely new industrial future.
So, given that a natural language interface to all sorts of computational engines is both possible and inevitable, what might a natural language interface to a geospatial computational engine look like and what might it be capable of doing?
First, let’s start with a consumer example.
I don’t know about you, but I love road trips. But I abhor insanely boring freeways and much prefer two-lane back roads.
Many years ago when I lived in California I discovered the wonderful world of MadMaps4.
MadMaps has developed a series of maps for people of my ilk. Originally they were designed for those strange people who for some reason like motorbikes, but for me, at the time when I had my trusty Subaru WRX, they were also perfect.
You see, MadMaps’ one goal was to tell you about the interesting routes from A to B. So, when I was driving back to Redlands from my annual pilgrimage to the Esri user conference in San Diego, I would be guided by MadMaps to take the winding back roads over the mountains. It would take me about twice as long, but it was hellish fun.
Imagine if the knowledge of MadMaps was integrated into a geographic search engine or your favorite consumer mapping app. And imagine if it also happened to know something about your preferences and interests so that it could incorporate fun places to stop along the way.
It turns out I’m not the first person to think of this.
It was only recently that Porsche announced a revamped version of its ROADS driving app.
Porsche ROADS driving app — Credit: Porsche
ROADS is a valiant attempt to use AI to do what MadMaps does but in an interactive app. Unfortunately the generated routes are, well, pretty simplistic and not particularly enthralling. They lack the reasoning and context that you get from studying a MadMap.
However, I don’t think it would take a huge amount of work by the smart boys and girls at Google Maps and Apple Maps to do something similar, but much more powerful. Imagine this prompt:
“Hey Siri, I’m looking to drive from Tucson to Colorado Springs. I’m traveling with my dog and I’d love to take my time, but I want to do the trip in two days. Can you recommend a route that takes in some beautiful scenery and some great places to eat and stop for good coffee? And by “good coffee” I mean good coffee, not brown water or chain coffee schlock. I’d obviously like to find good places to stop for walks to exercise the dog and I’d love to spend the night at some cute boutique hotel or motel close to some eclectic restaurants.”
If you try it today5 you will find what first appears to be a good answer, but on closer analysis it’s lacking in detail and is very vague in some places.
More importantly perhaps: it’s also just a text answer.
It’s not a detailed trip plan displayed on an interactive map that you can then tweak and edit. In other words, it’s only about 50% of the way there.
Switching gears, now let’s imagine a natural language interface to a complex geospatial analytics problem, this time applied to business.
As an example I’ll use the geospatial problem of something called “site selection”. This is a process of determining the best location for some object, some business or some facility. Traditionally this is performed with huge amounts of geospatial data about things like roads, neighborhoods, terrain, geology, climate, demographics, soils, zoning laws … the list goes on.
Organizations like Starbucks and Walmart have used these geospatial and geo-demographic analysis methods for decades to help determine the optimal location for their next store. Organizations like Verizon have used similar processes to help determine the best locations for cell phone towers based on where the population centers are and what the surrounding terrain looks like.
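To give a flavor of what is under the hood, here is a deliberately tiny sketch of one common flavor of site-selection analysis: a weighted overlay score. This is not how Starbucks, Walmart or any GIS vendor actually does it; the candidate sites, the factors and the weights below are all made up:
```python
# Toy weighted-overlay scoring for site selection. All sites, factors and
# weights are invented for illustration; a real analysis would draw on GIS
# layers for demographics, drive times, zoning, competitors and so on.

CANDIDATE_SITES = {
    # hypothetical candidates, each with factors already normalized to 0..1
    "Midtown":  {"demographic_match": 0.8, "traffic": 0.7, "competitor_gap": 0.4},
    "Suburb A": {"demographic_match": 0.6, "traffic": 0.5, "competitor_gap": 0.9},
    "Suburb B": {"demographic_match": 0.7, "traffic": 0.6, "competitor_gap": 0.7},
}

WEIGHTS = {"demographic_match": 0.5, "traffic": 0.2, "competitor_gap": 0.3}

def score(factors: dict) -> float:
    """Weighted sum of normalized factors: the simplest possible overlay model."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

# Rank the candidates from best to worst
for name, factors in sorted(CANDIDATE_SITES.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(factors):.2f}")
```
The hard part in practice isn’t the scoring loop, it’s everything before it: assembling and cleaning the data, and working out which factors actually drive store performance in the first place.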
This methodology has not been limited to commercial use cases.
A long time ago I remember someone performing a complex geospatial analysis on the location of Iran’s Natanz uranium enrichment facility. They looked at things like the geology, the climate, the topography, access to transportation and energy. Using this information they spent a significant amount of time, energy and brainpower to determine other locations in Iran that might have similar characteristics — in other words: where else Iran might be hiding another such facility? I think there were only one or two places that the algorithm found.
What’s common about all these enterprise use cases is the complexity of getting to the answer. You have to set up all the right databases, you have to invent, develop and test your algorithms. And just like with the design -> engineering -> manufacturing process that NASA faces with component design, there is a feedback loop — for example, one of the challenges for locating a Starbucks is determining exactly what factors are driving the success of its most profitable stores.
All of this is compounded by the horrible complexity of the user interfaces to these systems. To get the best results you not only need to be well educated in something called ‘GIS’ 6, but it also doesn’t hurt to be an accomplished data analyst. My good friend, Shawn Hanna, who also happens to be a super sharp data analyst, used to work on these site selection scenarios for Petco. He can attest to the complexity of the problem.
But imagine if instead data analysts could issue a prompt to a geospatial computational engine to help them find the optimal answers more quickly:
“I’m looking to figure out the best location to open a new Petco store in the Atlanta metropolitan area. I’d like you to take into account the locations of current Petco stores, their sales and profitability and the location of competitive stores. I’d also like you to take into account the demographics of each potential location and match that against the demographics of my best performing stores. Also take into account likely population growth and predicted trends in the respective local economies. And, of course, information on which households own pets. When you’ve derived some answers, match that against suitable available commercial properties in the area. Rank the results and explain why you chose each location” 7
The trick, as McClelland at NASA says, will be in good prompt engineering.
And of course, you’ll have to have the confidence that your chatty interface is connected to a reliable, dependable and knowledgeable computational engine.
It’s not going to eliminate your job, but it sure as hell is going to make you tons more productive.
We’re not there yet. But it’s coming.
Hell, we might even be able to do this:
Can you fly that helicopter?
Credit: The Matrix / Warner Bros. Entertainment Inc.
Footnotes:
1 For those of you that don’t remember, here is Clippy 1.0 in action:
2 By now many of you will have read Kevin Roose’s conversation with Bing in his New York Times article. If you don’t have access to the New York Times then you can see a reasonably good summary of the conversation in The Guardian.
3 If you live in the United States, that translates to ‘Math’. Why, I’m not sure. People generally don’t study ‘Mathematic’. Perhaps that’s why people from the US sometimes have a reputation for not being as good at mathematics as people in other countries? They don’t realize there’s a number bigger than one.
4 Here is one of my favorite MadMaps:
5 ChatGPT’s answer to a road trip challenge. It’s a reasonably good start, but the directions are pretty vague:
6 GIS stands for ‘Geographically Insidious System’
7 FWIW, here is ChatGPT’s answer to this prompt:
Acknowledgments:
- The folks at OpenAI for letting me highlight ChatGPT
- Stephen Wolfram for his article making the case to connect Wolfram|Alpha to ChatGPT: “Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT”
- Ted Chiang for his excellent New Yorker article: “ChatGPT is a Blurry JPEG of the Web”
- Jesus Diaz for his fascinating Fast Company article: “NASA’s new AI-designed parts look like they’re from an alien starship“
- The great folks at MadMaps!
- Shawn Hanna for teaching me a few things along the way
- All the folks who came up with the Matrix
- My good friend, Dr. Barry Glick, who is always an inspiration!
-
12 Map Happenings That Rocked Our World: Part 5
The Dawn of Tube Maps
So the astute readers among you1 will have realized by now that this series of posts on the 12 Map Happenings that Rocked Our World is slowly advancing through history:
- Part 1 was about The First Map which was probably invented about 45,000 years ago
- Part 2 was about The Birth of Coordinates, specifically latitude and longitude, which happened in about 245BC
- Part 3 was about the invention of Road Maps by the Romans somewhere around 20BC
- Part 4 was about The Epic Quest for Longitude and how it came to be measurable at sea in 1759
Today we move forward yet again, this time to the year 1933 and the invention of the ’Tube Map’.
First of all though, what the hell is a ‘Tube’?
Well, if you’re not familiar, please let me enlighten you.
The Tube refers to the London Underground, which in 2023 is celebrating its 160th anniversary.
‘Love the Tube’ Roundel – Celebrating the 160th Year of the London Underground
Credit: Transport for London
The first line opened on 10th January 1863 between Paddington and Farringdon Street. Initially the trains were powered by steam locomotives that hauled wooden carriages. It wasn’t until 1890 that the first deep-level electric line was opened:
London Electric Underground Train in 1890 — Credit: Wikimedia
The London Underground first became known as the ’Tube’ in 1900 when the then Prince of Wales, Prince Albert Edward (later Edward VII), opened the Central London Railway from Shepherd’s Bush to Bank. This line was nicknamed the ‘Twopenny Tube’2,3.
Many maps of the Tube were created, the first being in 1908:
London Underground Map from 1908 — Credit: Darien Graham-Smith
It wasn’t until 1949 that the Tube Map that we all know and love truly came into being4.
The map was created by one Henry Charles Beck (4 June 1902 – 18 September 1974), a.k.a. Harry Beck.
Beck’s map was first published in 1933:
Beck’s First Work, Published in 1933
But it wasn’t until 1949 that Beck was completely satisfied with the design:
Harry Beck’s Favorite Creation from 1949 — Credit: Darien Graham-Smith
Harry Beck (1902-74) — Credit: Wikimedia
Beck had created something of beauty and it was truly a game changer: eliminating all extraneous information — even topography — to create the most simple and easy-to-understand map you could possibly achieve. Jony Ive would have been proud.
The history of how this map came to be and Beck’s trials and tribulations to get it approved is a story that has been told many times and, I hasten to add, with great comedic wit and wisdom. I could never come close to doing these prior works justice. Instead please let me point you to some delightful muniments worthy of your time:
One of my favorites is by Darien Graham-Smith who wrote about the History of the Tube Map in his article for the Londonist. In this article you will see the progression from the messiness of the pre-Beck maps to Beck’s 1949 masterpiece.
Another of my favorite history lessons is given by the amazing Jay Foreman who created two delightful 10 minute videos. They are full of acerbic British wit and most definitely a ‘must watch’:
The Tube Map nearly looked very different — Credit: Jay Foreman
What went wrong with the Tube Map? — Credit: Jay Foreman
So how much did Beck’s map influence the rest of the world? You only have to take a look at the official subway maps from around the globe to see:
Subway Maps of Beijing, Delhi, Mexico City and Tokyo
Even the city of Venice has adopted Beck’s style for its official maps of Venice’s water taxi network:
Venice Water Taxi Network — Credit: Actv
Now while I prattle on about Harry Beck, I’m sure the map purists among you are probably whinging5 that the London Tube Map is not a map, it’s a schematic. Well, in the sense that topography was trounced by topology, that may strictly be the case. But the Tube Map accomplished what so many of today’s ‘maps’ fail to do — distilling the horrible complexity of the real world into the atomic essence of the information you really need. And, let’s not forget, they still depict space, albeit without the equal scale of a traditional map.
But where is it all going?
Well there is one land yet to be conquered — that fair city of the Big Apple, which so far has steadfastly refused to adopt Beck’s non-topographic mantra:
New York City Subway Map — Credit: MTA
And, I’m sorry to say that since Beck’s passing the London Tube Map itself has regressed. Somehow the attractive simplicity of Beck’s finest work in 1949 has now been lost to complexity and incoherence:
London Tube Map in 2023 — Credit: Transport for London
However, in my research I did come across one bright light. This is a map of the roads of the Roman Empire, in what is now a very familiar form:
Roman Roads in the Transit Map Form — Credit: Sasha Trubetskoy – sashamaps.net
So, perhaps it is the Romans we should thank after all? 😉
Footnotes:
1 By suffering through my blogs you have to be somewhat astute, or at the very least, patient and tenacious
2 Two things here:
- ‘Twopenny’ perhaps unsurprisingly means two pence. This was the initial cost of a ticket on this line
- To those unfamiliar with proper British pronunciation, ‘twopence’ is actually pronounced ’tuppence’ not ‘two pence’
3 The term ‘Tube’ could also have come from the fact that, well, the tube looks very much like a ‘tube’. It could also have come from the concept of London’s Victorian Hyperloop, run by the London Pneumatic Despatch Company between 1863 and 1874.
A London Tube train emerging from the Tube — Credit: Wikimedia
4 You could argue that Beck’s first map actually dated from his 1931 sketch, drawn in pencil and colored ink on squared paper in his exercise book:
Sketch for a new diagrammatic map of the London Underground network by Henry C. Beck in 1931
Credit: Transport for London and the Victoria & Albert Museum Collection
5 ‘Whinging’ — pronounced ‘winge-ing’ (like hinge-ing) — is British for whining in a particularly irritating way. In other words, it’s much worse than simply whining.
References and Acknowledgments:
- When Topology Trumped Topography: Celebrating 90 Years of Beck’s Underground Map, Alexander J. Kent, The Cartographic Journal, Volume 58, 2021 – Issue 1
- Sasha Trubetskoy – sashamaps.net — for his fantastic maps of the Roman Empire’s Roads in Transit Map form
- Jay Foreman for his deliciously witty mapping videos
- Darien Graham-Smith for his History of the Tube Map
- Wikimedia
-
Apple Business Connect: A cure for Apple Maps’ weak spot?
Last week Apple issued a press release for a new tool, something they call ‘Apple Business Connect’ and it’s tightly linked to Apple Maps.
Press releases about Apple Maps don’t come particularly frequently from Apple. If you include last week’s release there have been just four dedicated press releases about Apple Maps since 20161. The prior one was in September 2021, announcing their 3D city maps.
‘Apple Business Connect’ seems like a very specialized topic. Almost too much in the weeds for Apple to stoop so low and give it a press release.
So what’s the big deal?
Well, now businesses and organizations are being given the opportunity to “Put your business on the map.”
Put Yourself on the Map — Credit: Apple Huh, but weren’t all businesses on the map already?
Well, not always.
It turns out getting all those businesses on the map is hard — super hard. And it’s even harder to keep all the information about them current.
Having accurate, complete and up-to-date information about businesses is also absolutely crucial to the success of your map product: it doesn’t matter how pretty your map looks, it’s pretty much useless if you can’t find the organization you’re looking for.
The issue of how hard it is to keep the information up-to-date quickly became apparent with the onset of the pandemic. Restaurants and other businesses were suddenly closed or suddenly had very different operating hours. And it was extremely difficult to keep track of all the changes.
Keeping this information current is a constant struggle for all map makers, and Apple is far from immune.
So how does one even begin to address this challenge?
For you millennials in the audience, let me start with a little history:
Back in the old days we had something called the ‘Yellow Pages’. These were big printed books published by your national or regional telephone company. The yellow pages listed all the businesses in your city or region and complemented the ‘white pages’ which contained the residential listings2.
Yellow Pages were a big business: they generated a ton of advertising revenue for the phone companies. As a business you could buy a block of space — advertising your trade, your shop or perhaps your legal practice. If you really wanted to grab someone’s attention you bought a full page ad at great expense and renamed your business so it started with the letter ‘A’ — or indeed many As — so as to increase the likelihood that your listing was the first a prospective customer would see.
For you Millennials: This is what a Yellow Pages book looked like — Credit: Wikimedia
A Typical Yellow Pages Ad for a Lawyer
Credit: Movie Posters USA
Being big, heavy and expensive to produce, the phone books were published just once a year.
In the 1990s, with the advent of mobile phones and the quickly growing popularity of the internet, the business models of the phone companies began to change. The data started to move online. Suddenly the world became awash with something called “Internet Yellow Pages”. Back in their heyday, Internet Yellow Pages were a key feature of both America Online (AOL) and Yahoo! The legacy of this era lives on today, for example with “Pages Jaunes” in France, but I’m pretty certain almost nobody uses it.
The issue in the 1990s was the currency of the data. These digital yellow pages were updated using the same low cadence methodology as had been used for decades with the printed yellow pages. The publishers would proudly tell you: “We call every business once per year!” 😱
Moreover, as these companies were making money from advertising, they were far more concerned with getting another year’s revenue from the lawyers, locksmiths & plumbing companies than they were about deleting listings for organizations that were no longer in business. So not only was there a currency issue, there was also a quality issue.
Back in the heady days of the dot com boom in the late 1990s I was one of the people at MapQuest that had to deal with these companies. Let’s just say that they didn’t move at the speed of the internet.
I remember dealing with all the various companies operating in the US at that time — InfoUSA, Dun & Bradstreet and Database America to name just a few — trying to understand their processes and their data quality.
A quote from a salesman at Database America sticks with me still to this day:
“It’s not a question of how good these databases are, it’s a question of how bad they are!”
Monte Wasch, c. 1995
So what about today? How do mapping organizations like TomTom, HERE, Google and Apple Maps keep their own ‘business listings’ current?
If you dig a little you can quickly find out that they don’t do all the work by themselves. And that’s true even for Google. It’s a massive aggregation and collation of data from dozens and dozens of sources. To get an idea of what sources are used you simply have to find the ‘acknowledgments’ page for each product. For example, here is the acknowledgements page for Google Maps’ business listings and here is the same page for Apple Maps3. These pages don’t list all the organizations that contribute data, but they list many of them.
At its inception Apple Maps relied solely on third parties, the most prominent being Yelp. Unlike Google and unlike Facebook, Apple has never seriously been collecting data about businesses.
That is until fairly recently.
It all started a couple of years ago in the latter part of 2020. Apple Maps suddenly gave users the ability to rate businesses in Australia as well as upload photos. It wasn’t long before this ability was extended to many more countries. This didn’t mean Yelp and other partners were suddenly swept aside, but it was a telltale sign that Apple was beginning to shift towards a homegrown solution.
Of course Google had taken the same approach many years before. It started with Google Local in 2004 and, via a long, winding and horrendously convoluted road, to the launch of Google Business Profile in November 2021:
The Evolution of Google Business Profile
Credit: Bluetrain
Due to the enormous popularity of Google search and Google Maps, businesses knew that they had to be found on Google and that they needed to be visible on Google Maps. Google didn’t have to do much to encourage businesses to seek out the page on Google where they could provide the information. Today Google Business Profile offers a myriad of options to enable businesses to not only add or correct basic information, but enrich it with details to entice people to visit:
Google Business Profile Marketing Page — Credit: Google
So what is Apple Business Connect?
Well, it’s taken them a while — err, 19 years4 — but it’s actually Apple’s response to Google Business Profile.
Like Google Business Profile you can add your business if it’s not listed, correct information if it’s wrong and enrich your listing with things like official photos, menus, special announcements and offers. The information you provide doesn’t just make its way to Apple Maps, but it also gets shared across the Apple ecosystem to services like Siri. Similar to Google Business Profile, Apple Business Connect also provides access to an analytics dashboard so you can see how users are interacting with your listing.
But here’s the $64 million question: will businesses even realize that Apple Business Connect exists?
The problem — of course — is all about mindshare.
In most countries Google Maps is nearly always top of mind5. So much so that many iPhone users will swear to you that they use nothing but Google Maps, but when you ask them to point to the icon of the app they use it turns out it’s not Google Maps, it’s Apple Maps.
So will the owner of Joe’s pizza parlor even think about Apple Maps, let alone go on a hunt for Apple Business Connect?
I think we all know the answer.
‘No.’
Not unless Apple starts a major campaign to significantly increase the awareness of Apple Maps and Apple Business Connect.
But how?
It’s extremely unlikely Apple would start a massive billboard advertising campaign. Even if they could foist the costs of such a campaign on carriers, I don’t think this would ever happen.
A more logical approach might be to promote Apple Business Connect as part of Apple Business Essentials, a program which helps organizations optimize use of the Apple devices they use at work.
Or perhaps Apple Business Connect could become a more prominent feature of Apple Pay, for example in the promotional pages that help businesses learn more about Apple Pay and how to set it up:
Apple Pay Marketing Page — Credit: Apple
A conjecture that seems to me to be far more likely, however, is that Apple Business Connect is just the start. The rumor mill has been rumbling about the likelihood of ads coming to Apple Maps. While I have no information to substantiate or refute such rumors, I wouldn’t be at all surprised if Tim and Luca would salivate at the prospect of recouping some of their massive geospatial investments.
Then promoting Apple Business Connect in order to drive more accurate, more complete and more up-to-date business listings in Apple Maps would be easy. They could just make use of unsold inventory.
One thing is for sure, however: Apple Business Connect is not a case of “if you build it, they will come”.
Let’s all stay tuned, ‘cos Apple is going to have to do something big to make your average Joe aware.
Footnotes:
1 Links to Apple press releases about Apple Maps:
- Apple delivers a new redesigned Maps for all users in the United States [January 30, 2020]
- Apple Maps now displays COVID-19 vaccination locations [March 16, 2021]
- Apple Maps introduces new ways to explore major cities in 3D [September 27, 2021]
- Introducing Apple Business Connect [January 11, 2023]
2 In some cases there was also something called the ‘blue pages’ for government listings
3 To get to this page on iOS, open Maps, tap the ‘choose map type’ button, then tap on the link at the bottom of the screen: ‘(c) OpenStreetMap and other data providers’
4 Google Local launched in 2004. Apple Business Connect launched 2023.
5 With perhaps the exception of China, Russia and South Korea
Acknowledgments:
- Apple
- Bluetrain for their article The Evolution of Google My Business
- Monte Wasch
- Movie Posters USA
-
12 Map Happenings That Rocked Our World: Part 4
The Epic Quest for Longitude
Let me start by making one thing crystal clear: this story is about one of civilization’s most epic quests.
And, being the learned reader I’m sure you are, you may already be familiar with it. But even if you are, I encourage you to read on as I will attempt to provide a slightly different perspective to this amazing tale.
I’d like you to join me in a journey back to the early 1700s and, to be specific, the year 1707.
1707 was the year of the Scilly Naval Disaster which involved the wreckage of four British warships off the Isles of Scilly in severe weather. It is thought that up to 2,000 sailors may have lost their lives, making it one of the worst maritime disasters in British naval history1. The disaster was likely caused by a number of factors, but one of them was the ships’ navigators’ inability to accurately calculate their locations. Had they been able to determine their position they might not have run aground and disaster might have been averted.
But knowing your location at sea wasn’t just about avoiding catastrophes. It was also about money. At this time it was very clear that any nation that could solve this problem could rule the economies of the world.
The French and the British certainly knew this full well. Observatories were built in Paris and Greenwich2 with the primary purpose of determining whether it was possible to calculate your location at sea by viewing the location of stars and the moon.
So let’s start with the basic question: without the luxury of modern technology like GPS, just how could you determine your location in the year 1707?
As I’m sure you know, location is all about coordinates, and specifically about measuring your latitude (how far north or south you are) and your longitude (how far east or west you are).
Before the advent of satellite based positioning systems in late 20th century, the method for determining your latitude and longitude was horribly complex. Furthermore, it was far easier to determine your location on land than it was at sea.
At sea a process of ‘dead reckoning’ was commonly used. Starting from a known location on land, for example a port, one could measure your direction of travel and your speed. By plotting these movements on a map you could continually update your location. Your direction of travel was measured using a compass and your speed was measured using a device called a ‘log and line’:
A triangle of wood, called a log, was attached to a knotted line of rope. The knots in the rope were tied at intervals of 47 feet 3 inches. The log was thrown overboard and the speed of the ship was measured by counting the number of knots that passed through the sailor’s fingers in the time it took for a 28 second sandglass to empty. This would thus give the speed of the ship in knots — also known as nautical miles per hour3.
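Those oddly specific numbers are no accident. Here is a quick back-of-the-envelope check, a sketch using the modern value of the nautical mile (the 18th-century value differed slightly):
```python
# Why 47 feet 3 inches of line and a 28-second glass give you knots directly:
# one knot-spacing passing per glass works out to roughly one nautical mile
# per hour.

knot_spacing_ft = 47 + 3 / 12      # spacing between knots: 47 feet 3 inches
glass_seconds = 28                 # duration of the sandglass
nautical_mile_ft = 6076            # modern international nautical mile, in feet

speed_ft_per_hour = knot_spacing_ft * (3600 / glass_seconds)
print(f"{speed_ft_per_hour / nautical_mile_ft:.2f} nautical miles per hour per knot counted")
# prints ~1.00: count the knots slipping through your fingers and you have your speed in knots
```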
Log and Line — Credit: Royal Museums Greenwich
A 28 Second Sand Glass
Credit: Royal Museums Greenwich
However, the use of a log and line did not give an exact speed measure. The navigator had to take into account:
- The flow of the sea relative to the ship
- The effect of currents
- The stretch of the rope
- The inaccuracy in the time measurement as changes in temperature and humidity affected the accuracy of the sandglass
Even the compass had flaws — it measured magnetic north, not true north.
Therein lies the issue with dead reckoning. Errors in any dead reckoning system build up over time, and after a while you no longer have a good idea of your position. To eliminate the error you have to reestablish your precise location using some alternate method4.
In the early 1700s the method for determining your latitude wasn’t easy, but it was possible:
Navigators knew that the height of the sun above the horizon was different according to how far north you were. At noon on the equator the sun was very high (more or less directly overhead) whereas at noon in the more northerly latitudes the sun was much lower. By measuring the height of the sun at noon — not easy in a ship rolling on the ocean waves — you could determine your latitude. This was done using an instrument called a cross-staff5 — although at your peril. The instrument could easily bruise you in the eye as the ship rocked and you might also be blinded as you stared at the sun. But you could measure it. Just. To get a more accurate reading you also needed to refer to complex tables, also known as ‘almanacs’, that you used to adjust your measurements to take into account the tilt of the earth and the time of year6.
A Cross-Staff from 1804: Credit — The Mariners’ Museum and Park
At night, assuming it is cloudless, it is also possible to measure latitude by measuring the angle to the North Star.
Measuring latitude was difficult but possible — but in comparison to measuring longitude it was easy peasy. The issue at the time of the Scilly Naval Disaster in 1707 was that no reliable method to measure longitude had been devised.
Let’s pause for a moment and compare the problem of accurately measuring your longitude to the problems our world faces today. Perhaps it would be equivalent to the problem of creating a cheap and reliable fusion reactor? The impact of both solutions on society is equally huge.
Solving the longitude problem ultimately helped prevent disasters, enabled global travel and, more importantly, bootstrapped global trade. Solving the problem of generating energy using a fusion reactor would similarly help prevent (climate) disasters, boost global production efficiency (through cheap energy) and boost global trade.
Nations at the time knew how critical it was to solve the longitude problem. In the hopes of finding a solution they used the carrot of large prizes, not dissimilar to the Lunar XPRIZE that Google established in 2007.
Spain tried it first by offering one in the mid 1500s. In the 1600s Holland offered a similar prize. During this time many inventors tried to solve the longitude problem, but no one figured it out.
Such was the ongoing public outcry from the Scilly Naval Disaster and the continued concern of seafaring folk that in 1714 a petition was presented to the British parliament. They encouraged the British government to offer its own prize to solve the longitude problem.
The matter was referred to a group of esteemed experts, including Sir Isaac Newton. In July 1714, on the advice of these experts, parliament adopted “An Act for Providing a Publick Reward for such Person or Persons as shall Discover the Longitude at Sea.”
If you’d like to read the original text of this now famous act, commonly known as ‘The Longitude Act’, and learn more about its passage I suggest you visit this link on the Royal Observatory Greenwich’s website.
As part of the act, parliament created a committee to address the problem and consider any submissions. This committee became known as the ‘Longitude Board’.
The Longitude Board essentially became a VC and was authorized to fund research via grants. More importantly it was authorized to offer a prize to the first person to develop a method for determining longitude to within varying degrees of accuracy:
- £10,000 for an accuracy of one degree (60 nautical miles at the equator)
- £15,000 for an accuracy of two-thirds of a degree (40 nautical miles at the equator)
- £20,000 for an accuracy of one-half of a degree (30 nautical miles at the equator)
The full prize was a significant sum — well over $6,000,000 in today’s money and certainly in the same realm as the Google Lunar XPRIZE which was $30,000,000.
The race was on.
And it all came down to one thing: measuring time.
In principle calculating longitude was easy:
- The earth rotates 360 degrees everyday
- There are 360 degrees of longitude (from -180 degrees to +180 degrees)
- There are 24×60 = 1,440 minutes in a day
- Therefore the earth rotates one degree of longitude every 4 minutes (= 1,440 / 360)
To calculate your longitude all you need to do is determine what time it was at a known location when it’s noon at your current location.
So if you determine it’s 12.20pm in Greenwich when it’s noon at your location then you must be at 5 degrees west:
- Your clock is 20 minutes later than the time in Greenwich
- 20 minutes divided by 4 minutes per degree of rotation = 5 degrees.
Similarly if you determine it is 11.40am in Greenwich when it’s noon at your location then you must be at 5 degrees east.
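If you prefer it spelled out in code, here is the same arithmetic as a tiny sketch (the function name is mine, not anything standard):
```python
# Longitude from the difference between Greenwich time and local noon:
# the earth turns one degree of longitude every four minutes of time.

def longitude_from_time_offset(minutes_greenwich_is_ahead_of_local_noon: float) -> float:
    """Degrees of longitude; negative means west of Greenwich, positive means east."""
    return -minutes_greenwich_is_ahead_of_local_noon / 4.0

print(longitude_from_time_offset(20))   # Greenwich reads 12:20 at local noon -> -5.0 (5 degrees west)
print(longitude_from_time_offset(-20))  # Greenwich reads 11:40 at local noon ->  5.0 (5 degrees east)
```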
Simple, right?
Noon was easy to measure: it occurs when the sun is at its highest point in the sky. It’s the other part that turned out to be a bitch.
How do you know what time it is in, say, Greenwich, when it’s noon at your location? Give someone in Greenwich a call on your iPhone? Sorry — even if you did magically happen upon such a fruity device — there was still no cell coverage out at sea. Sucker!
So you needed another method to determine the current time at a remote known location.
In 1714 when the Longitude prize was enacted there were three main methods that became contenders for measuring longitude7:
- The ‘Moons of Jupiter’ method
- The ‘Lunar Distance’ method
- The ‘Chronometer’ method
The Moons of Jupiter method relied on a discovery by Galileo in 1612: the orbits of Jupiter’s moons were like clockwork. They always passed in front of Jupiter at a particular time of day at a particular location.
Galileo developed a somewhat complex mechanical instrument called a jovilabe to calculate the time from the positions of the moons. He also suggested the use of a rather scary helmet, called a celatone, to be used to measure the position of the moons:
Replica of Celatone created by Matthew “Attoparsec” Dockery — Photo Credit: David Bliss
Using a telescope or one of these celatones you could see the positions of Jupiter’s moons. Then referring to some predefined tables and charts you could determine what time this corresponded to at a particular location, for example, back in Greenwich.
Bingo! Problem solved!
Indeed this method proved to be very successful, but not for everyone:
A disciple of Galileo, the Italian astronomer Cassini, realized that the ‘Moons of Jupiter’ method could be used to make more accurate land maps. In 1671 King Louis XIV employed Cassini to revise the existing maps of France. As a result of Cassini’s intricate work the land mass of France in the new maps got reduced by about 20 percent. When the King first saw the maps he is said to have exclaimed “I have just lost more territory to my astronomers than to all my enemies!”
Unfortunately there were a few teeny, tiny problems that made this method somewhat challenging at sea:
- Clouds
- Jupiter isn’t always above the horizon
- Bright daylight!
- Err, ships tend to roll a bit
As a result the ‘Moons of Jupiter’ method was never seriously considered for use by mariners.
The ‘Lunar Distance’ method was first suggested by the Italian explorer, Amerigo Vespucci8, in 1499. The method depends on the motion of the moon relative to other celestial bodies.
The moon completes a circuit of the sky (360 degrees) in 27.3 days on average (a lunar month), giving a movement of just 0.55 degrees per hour. But it’s complicated:
- To be successful a very accurate measurement of the moon’s angular position was required — if your measurement was off by, say, 0.1 degrees, then your time measurement would be off by about 11 minutes. As a result your calculation for longitude would be off by almost 3 degrees9. In order to win the full £20,000 prize your measurement of longitude had to be accurate to 0.5 degrees. This in turn meant that your measurement of the moon’s position had to be accurate to within 0.018 degrees! (The arithmetic is worked through in the sketch after this list.)
- There’s the issue that you needed access to complex tables and astronomical charts that would tell you the expected position of the moon against the celestial background at some point in the future. To determine your longitude those tables and charts had to be accurate too.
- Having all the necessary accurate measurements, tables and charts didn’t provide an immediate answer. Further calculations were required and those calculations were long and laborious. Sometimes it would take up to four hours to perform them.
- Clouds
- The moon isn’t always above the horizon
- Err, yes, ships still rolled
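For those who like to check the numbers in that first bullet, here is the error budget worked through as a small sketch:
```python
# Error budget for the Lunar Distance method. The moon drifts about 0.55
# degrees per hour against the stars, and the earth turns one degree of
# longitude every four minutes of time.

MOON_DRIFT_DEG_PER_HOUR = 360 / (27.3 * 24)   # ~0.55 degrees per hour
MINUTES_OF_TIME_PER_DEGREE_OF_LONGITUDE = 4

def longitude_error_deg(moon_angle_error_deg: float) -> float:
    """Longitude error caused by a given error in the measured lunar angle."""
    time_error_minutes = (moon_angle_error_deg / MOON_DRIFT_DEG_PER_HOUR) * 60
    return time_error_minutes / MINUTES_OF_TIME_PER_DEGREE_OF_LONGITUDE

print(f"{longitude_error_deg(0.1):.1f} degrees")   # a 0.1 degree slip costs ~2.7 degrees of longitude

# Working backwards: to keep longitude within 0.5 degrees, the lunar angle
# has to be good to roughly 0.018 degrees.
required_angle_deg = 0.5 * MINUTES_OF_TIME_PER_DEGREE_OF_LONGITUDE / 60 * MOON_DRIFT_DEG_PER_HOUR
print(f"{required_angle_deg:.3f} degrees")
```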
You’d think that the Lunar Distance method would have been dismissed as quickly as the Moons of Jupiter method, but other forces were at work.
One of the main purposes of the observatories built by the French in Paris and the British in Greenwich was to develop the tables and charts required for measuring longitude using the Lunar Distance method. The investment in these observatories and developing these tables was therefore huge. On top of that many experts on the Longitude Board were astronomers, most notably Nevil Maskelyne, who was later to be appointed Astronomer Royal. So you might say that there was a vested interest in the Lunar Distance method and a natural bias towards it. As a consequence the Lunar Distance method was far from dismissed and continued to curry favor for many years to come.
So that leads to our last contender. The ‘Chronometer’ method.
The idea behind the Chronometer method was simple: before leaving your port, you set your clock or watch to the known time at that location. You then took that chronometer with you on your trip. Then, when you’re out in the middle of the ocean, simply refer to the time on this chronometer when it’s noon at your current location. From the difference in time you can quickly and simply calculate your longitude from the knowledge that the earth rotates one degree of longitude every four minutes.
The problem of course with the Chronometer method was that in early 1700s no clock or watch existed that could keep time accurately.
The detailed rules for winning the Longitude Prize stipulated that the method be able to determine longitude accurately after a voyage from Britain to the West Indies. This journey took six weeks by ship. So to win the full £20,000 using the Chronometer method your time measuring instrument needed to be accurate to 2 minutes after six weeks, which is less than 3 seconds a day. Even a good watch at that time might gain or lose as much as 15 minutes a day!
Needless to say the Longitude Board was extremely skeptical that the Chronometer method would ever be a viable solution. Sir Isaac Newton’s point of view didn’t help:
“I have told the world oftener than once that longitude is not to be found by watchmakers but by the ablest astronomers. I am unwilling to meddle with any other method than the right one.”
But it was then that our hero came into view.
His name was John Harrison.
Harrison was a carpenter and lived in the small village of Barrow-on-Humber in the north of England. He had neither been schooled at university nor had he ever gone to sea. But in 1714, at the young age of 20, clock making had become his passion. He was absolutely obsessed with accuracy and had never heard Newton’s doubtful words.
Harrison knew that many factors affected the accuracy of clocks, including humidity, temperature and changes in atmospheric pressure.
Even in his early days of clock making Harrison was a pioneer:
- He knew mechanical friction was the enemy of accuracy, but he also knew that 18th century lubricants were awful. So he invented an oil-free wooden clock by building the most critical parts of lignum vitae, a wood that contained natural lubricating oils.
- He invented a mechanism called the grasshopper escapement which eliminated sliding friction and gave the clock’s pendulum the periodic pushes it needed to keep it swinging.
- To measure the accuracy of his clocks, Harrison timed their ticks to the apparent movement of stars from the backdrop of his bedroom window frame to the chimney on his neighbor’s house.
- To develop a pendulum whose length would not be affected by temperature he invented the ‘gridiron pendulum’ which was made from wires of brass and iron. The difference in the thermal expansion rates of the two metals compensated for each other, so the pendulum stayed the same length regardless of temperature.
One of Harrison’s early clocks, ‘Precision Pendulum Clock Number 2’, built by Harrison and his brother in 1722 is still in use today. And it still keeps good time 300 years later: it is accurate to within a second a month. In the early 1700s that level of accuracy was simply unheard of.
Harrison’s ‘Precision Pendulum Clock Number 2’
On Permanent Display at Leeds City Museum in England — Credit: City of Leeds
While this clock could have easily met the requirements for measuring longitude it could only have done so on land. The rocking of the ship would have wreaked havoc on the pendulum.
But by 1730 Harrison thought he could solve all the issues associated with accurate timekeeping on a ship. For the first time in his life he ventured to London and managed to get a meeting with none other than Dr. Edmond Halley (famous for predicting the comet that bears his name). Being a member of the Board of Longitude Halley was a vital person to convince. Halley introduced Harrison to London’s most famous clockmaker, George Graham.
Harrison was less than impressed with Graham’s clocks:
“While Mr. Graham proved indeed a fine gentleman, if truth be told, I was taken aback by the poor little feeble motions of his pendulums … the small force they had like creatures sick and inactive. But I, um, commented not on the folly in his watches.”
But look at it this way: here was Harrison, some rural carpenter from the north of England, trying to convince London’s leading clockmaker he had something to show. Graham and Harrison debated for hours. It was Harrison’s work on the temperature compensated pendulum that became a turning point. Graham had struggled with this problem for years and failed.
With Graham convinced that Harrison was indeed someone deserving attention the money started to flow. Graham provided Harrison his ‘Angel round’ and lent him money so he could begin development of his sea clock in earnest. Harrison called it ‘H1’:
Harrison’s H1 Clock
Credit: Royal Museums Greenwich
H1 was a revolution for Harrison. It was the first time he worked with brass. To compensate for the ship’s rocking he switched from using a pendulum to a mechanism using two rocking balance arms. As you can see, it was an intricate instrument. But it worked. In 1736, on a stormy 5 week voyage from London to Lisbon and back, the clock is thought to have been accurate to within 5 to 10 seconds a day. Not enough to win the Longitude Prize, but a huge step forward over anything else.
The Board of Longitude was very impressed. So impressed in fact, that for the first time in its 23 years of existence it did something it had never done before: it held a meeting.
Harrison knew H1 was not capable of winning the prize. Instead he petitioned for a round of financing from the Board. The Board agreed and awarded him £500 (~$130,000 today) — but on the condition that H1 and his next development would become the property of the public. He was thus being asked to relinquish his intellectual property. Harrison reluctantly agreed and in 1737 Harrison embarked on his next development, H2:
Harrison’s H2 Clock
Credit: Royal Museums Greenwich
H2 was completed in just two years, but Harrison was never satisfied with its design and never allowed it to be tested at sea.
In the meantime the astronomers were not standing still. They continued their work on the Lunar Distance method. It was Nevil Maskelyne, on the Board of Longitude, that became Harrison’s nemesis. Maskelyne, who was educated at Cambridge and was also a priest, is thought to have been pompous and full of himself. Harrison was the uneducated outsider. Maskelyne was the university educated insider. Tough world.
And the Board of Longitude controlled the funds. Harrison became deeply frustrated:
“They said, a clock can be but a clock, and the performance of mine, though nearly to truth itself, must be altogether a deception. I say, for the love of money, these professors or priests have preferred their cumbersome lunar method over what may be had with ease, for certainly Parson Maskelyne would never concern himself in such a matter if money were not bottom … and yet, these university men must be my masters, knowing nothing at all of the matter, farther than that one wheel turns another; my mere clock being not only repugnant to their learning, but also the loss of a booty to them.”
Despite all this Harrison convinced the Board to provide the further funding he needed to continue his work. The Board granted it in 1741.
Harrison had promised the Board that his next device, H3, would be completed in two years. But it took him 19. During this time Harrison appeared before the Board nine times. Each time he appealed for more time and more money. He repeatedly missed his promised dates. To me it sounds like a classic VC story — Harrison could easily have been pitching Sand Hill Road. Over this period Harrison received a total of £3,000 (~$1M today).
Harrison’s H3 Clock
Credit: Royal Museums Greenwich
It was while Harrison was developing H3 that he also turned his attention to watches. He was always disparaging of them and was convinced “he could improve these dreadful things called pocket watches”. In 1753 he instructed a watch maker, John Jefferys, to make a watch to his own design. It turned out to be far more accurate than he expected. After 25 years of working on clocks Harrison came to the realization that watches were the way to go. It was a classic ‘pivot’.
At a Board meeting held on July 18, 1760, Harrison declared that his latest clock, H3, was ready for trial. But he also reported that his first watch, which was under construction, would serve as a longitude timekeeper in its own right. He called the watch H4.
H4 was finally completed in 1759. Harrison was now 66 years old and had worked on his timekeepers for more than 45 years.
Harrison’s H4 Watch
Credit: Royal Museums Greenwich
Harrison’s H4 Watch Mechanism
Credit: Royal Museums Greenwich
On February 26, 1761, Harrison contacted the Board with the request to test both H3 and H4 at sea. But a few months later, apparently dissatisfied with H3, he withdrew it from the test. It was down to H4.
But Harrison’s struggles were only just beginning.
Under the terms of the Longitude Act, the first trial was a voyage from Portsmouth, England to Jamaica.
Harrison claimed that H4 lost a mere 5.1 seconds during the eighty-one-day voyage — which was a stunning result and easily enough to meet the requirements for the full £20,000 prize. But his claim depended on an allowance being made for the watch’s natural fixed gain or loss per day, also known as its “rate of going”. The problem, however, was that Harrison neglected to specify H4’s rate of going before the trial. For this reason the Board declared the result of the trial to be non-conclusive. They did, however, agree that H4 had met the terms of Section V of the Longitude Act and that it was “of considerable Use to the Publick”. As such the Board awarded £2,500 (~$725,000 today).
A second trial was scheduled, this time to Barbados, but before it occurred Harrison appealed to Parliament for further monetary assistance, presumably because he had more than exhausted his funds in completing H4. In 1763 Parliament passed “An Act for the Encouragement of John Harrison” which stated that Harrison could receive a prize of £5,000 (~$1.5M today). This award did not require a second sea trial, but instead required that Harrison assign all of his trade secrets and intellectual property associated with the design and engineering of his watch to the public, to the satisfaction of a technical committee. He would not only have to supply detailed designs, but he would also have to dismantle the watch piece by piece before the committee and supervise workmen in making two or more copies of it, which would have to be tested. In other words: sell your soul and we’ll give you $1.5M. Harrison refused and never received any money from the Act.
For the second trial, as for the first trial, the precise longitude at the destination had to be measured prior to its start. It was Harrison’s nemesis, the Very Reverend Nevil Maskelyne, that was selected to take this measurement. He would do so using the ‘Moons of Jupiter’ method.
Much to Harrison’s chagrin, the Longitude Board had also decided that the Lunar Distance method should be tested simultaneously with Harrison’s chronometer. The 1763 Act by Parliament protected Harrison against competitors using the Chronometer method, but not against competitors using the Lunar Distance method. Harrison was therefore extremely troubled.
And the Lunar Distance method was gaining favor. The Board was sufficiently impressed with Maskelyne’s results that in 1763 it authorized him to produce The British Mariner’s Guide, a handbook for use of the lunar distance method.
As Harrison was getting old, it was Harrison’s son who traveled with H4 to Barbados. When he met Maskelyne he accused him of being “a most improper person”. Maskelyne was, not surprisingly, extremely offended.
Regardless of the animosity between the Harrisons and the Board, the Board declared H4’s second test to be a success. H4 was able to measure the longitude to Barbados to an accuracy of less than 10 miles — three times better than needed to win the full £20,000 prize.
On February 9, 1765 the Board considered Harrison’s claim to the prize.
Still they did not award it!
The problem, they explained, was in Section IV of the Longitude Act. This section instructed the Board that a method was deemed to have won when it had been “tried and found practicable and useful at Sea.”
The Board told Harrison that he had not explained how his watch had worked, nor had he explained how it could be manufactured at scale so it could be put into general use. The Board therefore decided that the watch was not “practicable and useful”.
These words turned out to be crucial, and it’s a textbook lesson in legalese.
Harrison pushed back and continued to claim the full prize. The Board and Harrison both escalated the issue to Parliament. The Board sought to codify its recommendation. Harrison fought back. But the Board won and Harrison lost.
In May 1765, Parliament passed “An Act for explaining and rendering more effectual” its previous Longitude Acts.
To be awarded the first half of the £20,000 prize Harrison would now have to:
- Explain the principles of his watch to the satisfaction of the Board
- Assign the property rights in all four of his timekeepers to the Board
- Hand over all four timekeepers to the Board
The second half of the £20,000 prize would be awarded when “other … Time Keepers of the same Kind shall be made,” and when these other timekeepers, “upon Trial,” were determined by the Board to be capable of finding the longitude within half a degree.
In interpreting the Act the Board decided that Harrison would need to dismantle his watch in front of a technical committee to the satisfaction of the Board. When they explained this to Harrison he declared he would never consent “so long as he had a drop of English blood in his body”.
The Board’s chairman responded thus:
“Sir, … you are the strangest and most obstinate creature that I have ever met with, and, would you do what we want you to do, and which is in your power, I will give you my word to give you the money, if you will but do it.”
Finally Harrison capitulated.
Over six days he dismantled H4 before the Board’s technical committee — and much to Harrison’s disgust Maskelyne was present for the event. But on August 22, 1765 the Board agreed that Harrison had completed the knowledge transfer to their satisfaction. The remaining condition to win the first £10,000 was to hand over his timekeepers. Harrison finally handed them over on October 28, 1765. On the same day he was awarded the first half of the prize.
But what of the second half of the prize?
According to the Act, Harrison would only receive the final payment when “Time Keepers” (plural) “of the same Kind shall be made”. That meant at least two copies of H4 needed to be made. The Board declined to provide funds to Harrison for him to make the copies, instead outsourcing the work to another watchmaker for a fee of £450 (~$120,000 today). For the required trial of the copies the Board introduced scope creep: no longer was it to be a six week voyage to the West Indies, but a 10 month trial at the Royal Observatory and an eight week voyage.
By this time Harrison was 74 years old and would have to wait at least another year to see the copies of his watch tested. And yet the Board still had not specified what would constitute a successful trial.
Harrison made his own copy of H4 which he called H5. Due to his failing eyesight it took him four and a half years to complete. By that time it was 1772 and Harrison was 79. Rather than submit H5 to the Board, Harrison’s son appealed to King George III. Soon after, father and son were granted an audience. The King was clearly taken by the Harrisons’ plea for he declared out loud: “By God, Harrison, I shall see you righted!”
Harrison’s H5 Watch
Credit: The Science Museum Group
Harrison’s H5 Watch
Credit: The Science Museum Group
While the King was an ally the Board still pushed back. Harrison gave up dealing with the Board and appealed again to Parliament for their benevolence — asking for an appropriate award for his lifetime of work and dedication. Parliament relented and awarded Harrison the sum of £8,750. While this was no doubt welcomed by Harrison it still did not represent the full award and he never received the remaining £1,250.
Even today there are differing and strong opinions on the Board’s decision.
The author Dava Sobel wrote an amazing best selling book on this epic story. It’s called “Longitude”. It is still the number one best seller for books on geography on Amazon. If you’re at all interested in learning more about this tale then this book is an absolute ‘must read’.
Dava takes the position that the Board slighted Harrison and accuses them of bias.
But others have taken a different point of view.
One such view is held by Professor Jonathan Siegel in his excellent paper ‘Law and Longitude’. Jonathan is a professor of law and takes the position that “The Commissioners of Longitude did their duty.” He looks at it from the Board’s point of view, focusing on the meaning of those crucial words in the Longitude Act: the question as to whether Harrison’s invention was “practicable and useful”.
Perhaps you can see the Board’s perspective: H4 and its subsequent copy, H5, took years to build. They had commissioned a third party to make duplicates of H4, but at a cost of ~$60,000 a copy in today’s money. And what would happen if your precious and expensive watch were to malfunction at sea? You would be totally lost and there would be no hope for recovery. At least the Lunar Distance method did not suffer this problem.
Was H4 “practicable and useful”?
As you’re considering this perhaps you should consider a present day analogy: let’s say some government had enacted a similar prize for landing humans on the moon and returning them safely to earth. And let’s say the rules for winning used the same benchmark as the 1714 Longitude Act — that it be “practicable and useful”.
Did Apollo 11 meet that criterion? After all it got people safely to the moon and back. But that mission alone cost $2.7B in today’s dollars10. Given that a very expensive new rocket had to be built for each mission, should it only win half the prize? Or should the full prize only be awarded for a completely reusable rocket?
As to Harrison’s epic quest and whether it was deserving of the full prize, I encourage you to read both Dava’s book and Jonathan’s paper and make your own decision.
In the meantime be thankful that you can simply look at your phone to determine your position — and that you don’t have to spend hours doing laborious calculations or wear one of those scary celatones.
I’ll leave you with one last tidbit:
Centuries after Harrison’s toils, at a dinner at 10 Downing Street, Neil Armstrong, the first man on the moon, proposed a toast to John Harrison, saying his invention enabled men to explore the earth, which gave them the courage to voyage to the Moon.
So was Harrison’s quest a Map Happening that Rocked Our World?
I’ll say so.
Footnotes:
1 This story is documented in “The Last Voyage of Sir Cloudesley Shovell” by W. E. May (1960).
2 England. Not Connecticut.
3 The distance between the knots, 47 feet 3 inches, wasn’t an arbitrary number. It is the distance traveled in 28 seconds if you are moving at a speed of one nautical mile per hour … thus it is your speed in knots. 🙂
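As a check: one nautical mile is about 6,076 feet, so at one knot you cover 6,076 × 28 / 3,600 ≈ 47.3 feet in 28 seconds, which is indeed about 47 feet 3 inches.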
4 The Etak Navigator — the pioneering in-vehicle navigation system released in 1985 had the same problem. But its brilliance was to eliminate dead reckoning errors by “map matching” — correcting its position to a topologically accurate map.
5 The sextant, still in use today, wasn’t invented until 1731.
6 At midwinter (21 December) you needed to deduct 23.45º from your reading, and at midsummer (21 June) to add 23.45º. Between those times you had to adjust the readings proportionally.
7 There were plenty of other wacky ideas too. Professor Jonathan Siegel describes some of them in his paper:
“In 1713, William Whiston and Humphrey Ditton, a pair of mathematicians, proposed that a fleet of ships be anchored across the ocean at 600-mile intervals and that each such ship fire off a cannon shell and flare every day at midnight. Navigators of other ships could then determine their position by timing the interval between seeing the flare and hearing the cannon. The difficulty of anchoring ships in mid-ocean and the vast expense that would be required to man the ships made this proposal impractical. Even more incredible was the plan, proposed in 1687, to send a wounded dog aboard every ship, and to leave behind a discarded bandage from the dog’s wound. Each day at noon, this bandage would be dipped in “powder of sympathy,” a substance which had the miraculous power to heal at a distance, although at the cost of giving some pain to the patient. The dog’s yelp when the bandage was dipped would give the ship’s navigator the necessary time cue. Like most plans that rely on magic, however, the powder of sympathy method failed to work in practice.”
8 Amerigo’s main claim to fame: America is named after him.
9 The moon makes a circuit across the sky in 27.3 days = 655.2 hours. A circuit is 360 degrees so that’s ~0.55 degrees per hour. If your measurement of the moon’s position is off by 0.1 degrees then that’s 0.1/0.55 = 0.182 hours = ~11 minutes. The earth rotates at one degree of longitude every four minutes, so 11 minutes is almost 3 degrees of longitude. If you’re at the latitude of London then 3 degrees of longitude is about 210km or about 130 miles. Ouch.
10 Source NASA: Expenditure on the Apollo missions 1968-1972 converted to today’s dollars using CPI Inflation calculator
Acknowledgments:
- Dava Sobel for her best selling book, ‘Longitude‘.
- Professor Jonathan R. Siegel for his superb paper ‘Law and Longitude‘ published in the Tulane Law Review, Vol 84, Number 1, July 2009
- PBS Nova – “Lost at Sea: The Search for Longitude” narrated by Richard Dreyfuss (1998). If you can stomach the incessant ads I found a copy of the video here
- The Royal Museums Greenwich
- The Royal Observatory
- The Mariner’s Museum and Park
- Wikimedia and its site, Wikipedia
- Britannica
- The City of Leeds
- The Science Museum Group
- CPI Inflation calculator
Other Reading:
- Open University: Measuring Latitude and Longitude
- Sea Museum: Why Latitude was Easier to Find than Longitude
- The Guardian: John Harrison Vindicated after 250 Years of Absurd Claims
- Guide to London: Where to See John Harrison’s H4
-
Why Geospatial Data is Stuck in the Year 1955
If we take a look back in history, just 66 years to 1956, it was a very different world.
It was in 1956 that the humble shipping container was invented by the American entrepreneur, Malcolm McLean1. Prior to this era ships used to be loaded with all manner of boxes, wooden crates, containers and sacks. It could take weeks to load or unload a ship. Fast forward to today and one of those giant container ships can be loaded or unloaded in as little as 24 hours.
Malcolm McLean in 1957 – “The Father of Containerization”
Credit: Wikimedia
The most significant advantage of McLean’s container was that it allowed cargo to be seamlessly transported by sea, road or rail. There was no need to unload the contents of a container from one vessel to another: you simply moved the container directly from the ship to the railroad car or to a truck, and bingo, you were on your way.
As a result of all this shipping costs plummeted and it was an easy decision for the industry to switch to using these containers. Use of the 20’x8’x8’ steel boxes2 quickly became a viral success and suddenly all new commercial ships were designed around these standard dimensions.
The icing on the cake came in 1968 when the International Organization for Standardization (ISO) adopted McLean’s specifications and it became an official global standard.
The Economist magazine has since called it “The Humble Hero”:
“Although only a simple metal box, it has transformed global trade. In fact, new research suggests that the container has been more of a driver of globalization than all trade agreements in the past 50 years taken together.”
A Modern Day Container Ship — Credit: Wikimedia
In summary, two very important things happened here:
- A very useful standard that was easy to adhere to was invented
- The standard was broadly adopted across the entire globe
So how does this story relate to the geospatial world?
Well, I have to say that this world is unfortunately very comparable to the world of international shipping prior to McLean’s 1956 invention. We are still mostly in the year 1955.
Like the contents inside the ~20,000,000 shipping containers in use around the world, there are mountains and mountains of geospatial data in use today. With the advent of ever cheaper data collection devices, ever more powerful compute and ever increasing storage capacity — all multiplied by the sheer demand for geospatial data — these mountains are growing exponentially.
But unlike the world of containers all these data are actually incredibly hard to use. The geospatial world is still stuck in the era of miscellaneous boxes, wooden crates, containers and sacks. So, to take a particular type of geospatial data from one system and use it in another system one has to go through a laborious process:
- First gain an understanding of what, exactly, is contained in the source data and how the information is organized
- Then go through a similar process of understanding how the information needs to be organized in the target system so that the target system can make use of it
- Finally, develop a process to take the data from the source system and translate it into a format the target system can understand.
It’s all pretty horrible.
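To make the pain concrete, here is a minimal sketch of that last translation step. It is written in Python against two entirely hypothetical schemas; every field name, unit and code value below is invented purely for illustration:

```python
# Hypothetical example: translating one street-centerline record from a source
# schema ("City A") into the schema a target system expects. In real projects the
# hard part is the first two steps above: reverse-engineering what these fields mean.

source_record = {            # invented source schema
    "STR_NAME": "Main St",
    "SPD_LIM_KPH": 50,       # speed limit in km/h
    "FUNC_CLASS": 3,         # functional road class, coded 1-3
}

def translate(rec):
    """Map the invented source schema onto an equally invented target schema."""
    return {
        "street_name": rec["STR_NAME"],
        "speed_limit_mph": round(rec["SPD_LIM_KPH"] * 0.621371),
        "road_class": {1: "motorway", 2: "arterial", 3: "collector"}[rec["FUNC_CLASS"]],
    }

print(translate(source_record))
# -> {'street_name': 'Main St', 'speed_limit_mph': 31, 'road_class': 'collector'}
```

Now multiply that little mapping by every attribute, every data type and every publisher, and you can see why standard ‘containers’ matter so much.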
Over the last few decades there have been some valiant attempts to solve this problem. For example, in the world of street map data in the 1990s the United States Geological Survey (USGS) forged ahead with a standard called the Spatial Data Transfer Standard (SDTS). In 1994 it became a federal standard. Meanwhile in Europe there was a similar effort, but here it revolved around a standard called the Geographic Data File (GDF) which was aimed mainly at map data for car navigation systems. Around this time I remember having to attend some excruciatingly tedious and boring international standard meetings where the two sides (SDTS vs. GDF) battled it out, trying to come up with a single standard that everyone could agree on. Needless to say it went nowhere. Since then neither SDTS nor GDF has really taken hold.
You might be thinking: well, what about a shapefile? Isn’t that a standard? Well, yes, it is. But while a shapefile dictates how the geospatial data are structured, it says nothing about what’s in the data or how it’s organized. A shapefile is more like a PDF document: a PDF allows you to exchange documents, but it says almost nothing about what’s in the document — for example whether it’s a PhD dissertation or a picture of a cute kitten.
The lack of common, broadly adopted geospatial data exchange standards is crippling the geospatial industry. It’s a bit like going to an EV charger with your shiny new electric vehicle and discovering you can’t charge it because your car has a different connector to the one used by the EV charger. The electricity is there and ready to be sucked up and used, but, sorry — your vehicle can’t consume it unless you miraculously come up with a magical adaptor that allows the energy to flow.
Back in the geospatial world there have been a few exceptions to this dilemma that have proven to be quite successful. I’ll provide two examples:
The first relates to Google Maps and public transit schedules. Back in December 2005 the city of Portland, Oregon became the launch city for Google’s Transit Trip Planner which helped people plan trips using public transit schedules and maps. This app was later folded into Google Maps. Having launched the feature in Portland, Google obviously wanted to roll it out to as many cities as they could worldwide. However, they quickly discovered that every city published their public transit schedules and routes in different ways. Many of them just published the data in text form buried somewhere in a PDF document or in a web page. It was therefore going to take Google a humongous amount of manual effort to get the data transformed into a format they could ingest and use. To combat this Google came up with a data exchange specification called the ‘Google Transit Feed Specification’ or GTFS for short. Google pushed the specification hard and eventually it became broadly adopted, so much so that in 2009 the name of the specification was changed from ‘Google Transit Feed Specification’ to ‘General Transit Feed Specification’. GTFS is now open and its evolution is facilitated by an independent organization called MobilityData.
The success of GTFS should not be underestimated. It’s like a little standardized shipping container for public transit schedule and route data, and it’s used by thousands of public transport providers around the globe — I think even Disney uses it to publish schedule data about their theme park shuttles.
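To give you a flavor of how simple this container is, here is a tiny, hypothetical fragment of a feed. The file and column names (stops.txt, stop_times.txt, stop_id and so on) come from the GTFS specification; the stop names, coordinates and times are invented:

```python
# A toy GTFS-style fragment: a feed is just a bundle of plain CSV files with
# agreed-upon names and columns. Any consumer that speaks GTFS can join them.
import csv, io

stops_txt = """stop_id,stop_name,stop_lat,stop_lon
S1,Pioneer Square,45.5190,-122.6790
S2,City Library,45.5215,-122.6830
"""

stop_times_txt = """trip_id,arrival_time,departure_time,stop_id,stop_sequence
T100,08:00:00,08:00:30,S1,1
T100,08:04:00,08:04:30,S2,2
"""

# Join stop_times to stops on stop_id, with no agency-specific knowledge required.
stops = {row["stop_id"]: row for row in csv.DictReader(io.StringIO(stops_txt))}
for stop_time in csv.DictReader(io.StringIO(stop_times_txt)):
    print(stop_time["arrival_time"], stops[stop_time["stop_id"]]["stop_name"])
# 08:00:00 Pioneer Square
# 08:04:00 City Library
```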
Another emerging success is in the world of indoor mapping, which as you may have guessed, is one of my favorite topics.
While working at Apple I was intimately involved with Apple Maps’ program to provide indoor maps of airports and shopping centers. This launched as a feature of Apple Maps in iOS 11 back in 2017. As with Google’s efforts to make public transit data available, we quickly discovered that making indoor maps was an arduous task. In fact the problem was arguably an order of magnitude more complex than dealing with public transport data, as indoor maps are far more intricate than a relatively simple transit schedule. The data for the indoor maps typically came in the format of computer aided design drawings or ‘CAD’ files. Even though these drawings were electronic, the information contained within them was all organized in different ways. Sometimes walls were indicated by a single line. Sometimes they were indicated by double lines. Some organizations called an area inside a building a ‘room’. Others called it a ‘unit’ or ‘space’. The names for specific spaces were also different. For example, was it a ‘toilet’, a ‘restroom’, a ‘W.C.’ or a ‘women’s room’? All in all it was ugly and precipitated the need for a huge amount of manual effort to extract the needed information out of the source data and convert it into a format usable in Apple Maps.
In an effort to make things easier we hunted around for a data exchange standard that might meet our needs. We found a few, but nothing fit our particular needs. So like Google, we embarked upon creating our own specification, initially called ‘Apple Venue Format’ (AVF). This later evolved into something we called ‘Indoor Mapping Data Format’ (IMDF). We realized that to be successful we not only had to create a specification but we also needed to convince organizations to adopt it. So we worked with the industry ‘tool makers’ — those who developed software to create, edit and transform geospatial data — to support the specification. This included Esri of course, but also many other people in the business both large and small, for example Autodesk, MappedIn and Safe Software. With that in place we could start to push building owners and specific vertical industries to publish data using IMDF.
The efforts proved worthwhile and eventually we were extremely honored to have the Open Geospatial Consortium adopt the IMDF specification as a ‘Community Standard’. Like the ISO adopting McLean’s specification for shipping containers this gave IMDF the endorsement it needed to be broadly adopted. No longer did organizations, particularly public entities, have to be concerned about publishing data using a specification that was tied to a single large corporation. With the specification endorsed by the leading international geospatial standards body the fear of showing favoritism to a corporation was effectively removed. Today IMDF is becoming the preferred shipping container for exchanging indoor map data and it’s saving organizations a whole lot of time and energy.
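To illustrate the idea (this is a deliberately simplified, IMDF-flavored sketch, not the exact IMDF schema), an indoor ‘unit’ in such an exchange file is essentially a GeoJSON feature whose category comes from a shared vocabulary, so the ‘toilet’ vs. ‘restroom’ vs. ‘W.C.’ problem disappears at the point of exchange:

```python
# A simplified, IMDF-flavored unit feature (property names are illustrative only).
# The key point: "category" is drawn from a fixed, shared vocabulary rather than
# whatever free text happened to be in the source CAD drawing.
unit_feature = {
    "type": "Feature",
    "feature_type": "unit",                      # the kind of indoor feature
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[-0.17750, 51.50100], [-0.17740, 51.50100],
                         [-0.17740, 51.50110], [-0.17750, 51.50110],
                         [-0.17750, 51.50100]]],
    },
    "properties": {
        "category": "restroom",                  # normalized category
        "level_id": "level-1",                   # hypothetical link to a level feature
        "name": {"en": "Restroom, Level 1"},     # localized display name
    },
}
```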
So if GTFS and IMDF are two success stories where is the geospatial industry still lacking?
Well unfortunately there’s a ton of work left to do and it won’t be a piece of cake. It will likely be several pieces of cake — or even many cakes.
Let me provide a shortlist focused on just a tiny subset of geospatial data: those used in vehicle navigation applications. This includes:
- Street addresses
- Street names
- Street centerlines
- Street classifications
- Street signage and traffic lights
- Vehicle restrictions
- Lane information
- Temporary restrictions
- Construction information
Talk to any city or regional government around the globe and they will probably have a geospatial database with at least some of this information contained within it. But while they may use the same tools to create and edit the data — typically Esri ArcGIS — you will find that none of them use the same specification for how they organize the data.
In the US alone there are about 600 cities with a population greater than 50,000. Looking at the entire globe this problem obviously multiplies to tens of thousands of cities.
So, imagine the scale of the problem if you’re a global mapping organization like Google Maps, Apple Maps, Waze, HERE or TomTom. You have the same ‘lack of a standard container’ issue that everybody in the shipping industry had back in 1955.
So what’s stopping progress?
Part of it is the natural inertia of standards bodies. While incredibly valuable, the standards developed by these bodies tend to emerge at a painfully slow pace. Just as importantly, the fact that a standard is developed is no guarantee that it will get used.
From my experience at least it’s much more efficient for a single organization to work on a specification and have that organization push adoption of the specification. Then, once some momentum is built, the organization works with the appropriate standards body to get that specification turned into a standard. This approach worked for shipping containers. It worked for GTFS. And it’s working for IMDF.
How could this be achieved?
Well we need an organization to take the lead in pushing standard data exchange formats. Preferably that organization would also be in a position to get various industries to adopt it.
Who could fulfill this role?
Well — here’s looking at you, Esri.
Esri could do it, but, frankly they’re not.
Yes, Esri has developed a number of standard data models for various industries but they are very few and far between. Furthermore they’re extremely hard to find on Esri’s website and when you do find references to them you’ll find they are thin and contain a number of broken links.3
The most important missing element: all of Esri’s data models are optional and suggestions at best. There seems to be little encouragement to use them and there is rarely a highly visible import/export option to a standard data model in the various renditions of ArcGIS.
But what about this ‘Living Atlas‘ that Esri has developed? Isn’t that a beautiful aggregation of all the data anyone would ever need? Unfortunately, no, this is not the case. While Esri has put a humongous and admirable effort into collating, normalizing and aggregating data, the latency involved in assembling it and keeping it maintained is too high for critical uses. Organizations still need access to the original source data.
But what about the great work of organizations like the US Department of Transportation on data standards like the Work Zone Data Exchange (WZDx)? The work of this group is extremely encouraging and hopefully they’ll make further progress. But it still needs to be broadly adopted, not only in the US but also around the globe.
Whether Esri develops the data models and publishes them is one question. We could all wait for standards like WZDx to become finalized. But Esri still has an extremely important role to play: encouraging, enabling and making it insanely easy for organizations to import and export data that adhere to these standard data models. This has to be part of the UI. It has to be part of the workflow. It has to be part of the standard data pipeline. It can’t just be an afterthought buried deep inside some data interoperability extension.
So, Esri, are you up to the challenge of becoming a Malcolm McLean, or will you forever be content with everyone having to live in a world of various sized boxes, wooden crates, containers and sacks?
______________________
1 To learn more about Malcolm McLean, watch this 3 minute video from the New York Times.
2 It’s interesting to note that the ISO specification for shipping containers continues to use imperial units as its main driver. Today’s standard ISO containers usually measure 8 ft. 6 in. high. ISO containers continue to be 8 ft. or 2,438mm wide. The most common lengths are 20 and 40 ft. Other lengths include 24, 28, 44, 45, 46, 53, and 56 ft. Because the standard is so widely adopted it’s doubtful that the standard dimensions would ever be changed to, say, 2,500mm wide — unless of course the French try to get involved.
3 Try visiting Esri.com. There’s no section on their site for standard data models. If you search on their site for ‘data model’ you get a number of random pages for various industries, but nothing concrete and well organized. Another example: if you search on ‘address data model’ the first search result is a blog post titled “New Release of Local Government Information Model Supports Upcoming Address and Facilities Maps and Apps” — this sounds promising but then you discover the article was written 11 years ago and in it is a link that invites you to “download the information model from ArcGIS.com”. When you click on the link you are forced to sign in to ArcGIS.com and are presented with this lovely user experience:
As the 45th President of the United States was apt to say: “Sad!”
-
Are Consumer Maps Dead?
Back in the old days1 we had to rely on our brains for navigation. Printed maps served that purpose: they provided the necessary information so you could locate your desired destination and plot an efficient route to get there. The maps were all about representing road hierarchies — the motorways and highways, the major thoroughfares and the arterial networks. The detailed city maps provided the information necessary for short journeys. For long journeys small scale maps enabled you to plan your route to the nearest freeway or motorway and from there the best exit to reach your destination.
In the US in 1996 MapQuest appeared out of nowhere. Now you could type your starting point and destination into Netscape Navigator on your desktop computer and get MapQuest to generate the directions. With those in hand you printed them out, put them in your briefcase and you were on your way. (But take a wrong turn at your peril — doing so would get you completely lost.)
MapQuest print out — Credit: MapQuest
A few years later the era of personal navigation devices or ‘PNDs’ arrived. One of these gadgets from Garmin, Magellan or TomTom would set you back many hundreds of dollars, but, boy, was it a stress reliever. No longer did you have to worry about missing that turn on your MapQuest print out and you never had the acute embarrassment of having to roll down your window and admit you were lost.
Garmin StreetPilot GPS – Launched 1998 for US$400 — Credit: Garmin
And for almost 16 years we’ve now had the delight of having a navigation device in our pocket. As a result our navigation anxiety has almost completely been eradicated.
For those of us that lived and worked in big cities or chose not to drive the situation was different but similar. Here you relied on public transportation. Your daily commutes were no different from those who drove: you knew where you were going and no map was needed. But if you had to get across town to an unfamiliar location you’d have to study a different kind of map — a map of bus routes, or more likely, the city’s subway map.
Made brilliant by the wisdom of Harry Beck, subway maps have become the ultimate navigation map, providing the minimum required information in its simplest and most understandable form. Harry’s unique ability was to abstract just the essential essence of what you needed out of a horribly complex real world. He even took out the geography, realizing that it was the topology that mattered most.
Extract from the London Tube Map — Credit: Transport for London
But as those navigation apps in our pocket have continued to evolve even subway maps have become less relevant.
I remember visiting Madrid shortly after Apple Maps launched public transport directions in 2018. It was a godsend. No longer did I have to worry about trying to remember directions and train changes — I could just let Apple Maps guide me — even to the best exit from the subway station. My stress level and navigation anxiety were completely dissipated.
My Trip across Madrid in 2018 using Apple Maps
Credit: Apple
So given these apps do all the calculations for you, my question to you is this: how often do you actually peruse a map? And a follow up question: if you do peruse maps, what, exactly, do you use them for?
I put it to you that a ‘consumer’ map’s utility today is far, far less than it was 30 years ago.
But is it dead?
Well to answer that question let’s start by looking at the definition of what a ‘map’ actually is. Perhaps we can agree that the source for a good definition might be the National Geographic Society. This illustrious organization defines a map as being “a symbolic representation of selected characteristics of a place, usually drawn on a flat surface”.
I think the important key words here are “symbolic representation”.
Today’s subway maps are “symbolic representation” in its rawest form. Printed road maps are another good example. They show the important information necessary to help you make the decision on which roads to take. If it’s been a while, as a refresher, take a look at this map from the Rand McNally road atlas below. It’s completely true to National Geographic’s definition of a map: it’s almost entirely symbolic:
Rand McNally Road Map — Credit: Rand McNally
But if you look at consumer mapping apps today you’ll notice a new trend. It’s a trend away from symbolic representation and a trend towards recreating reality.
It started with Google’s Street View which launched in 2007. Clearly not a map. It continued in 2012 with the launch of Apple Maps and its Flyover feature. Again, clearly not a map. More recently we’ve seen this trend accelerate. In the last year Apple Maps has been launching detailed street maps that include (some rather delectable) three dimensional renderings of the key buildings in each city:
Royal Albert Hall, London in Apple Maps — Credit: Apple
And earlier this year Google announced something called “Immersive view” which in their blog they characterize as “a more immersive, intuitive map”.
Google ‘Immersive view’ — Credit: Google
But, going back to National Geographic’s definition, is Google’s Immersive view a ‘map’? With the greatest respect2, I would say no, absolutely not.
It’s all part of a trend, a downward trend in my opinion, that will result in the demise of consumer maps. Contrary to Beck’s approach of distilling reality into its essential essence, we’re moving in the opposite direction.
We are instead on a path to the dreaded metaverse, a virtual world where we should all be thankful and glad to wander around as legless avatars with the aspirational goal of reaching social media nirvana. I don’t know about you, but, ugh.
But surely there must still be a case for a consumer map, a map in the true sense of the word, as defined by National Geographic?
We’ve seen the need for navigation maps decline, which is fine. There’s nothing wrong with that trend. As a result consumer mapping companies are desperately trying to backfill that need by hoping they can help people “explore and discover”. Read Google’s blog and you’ll clearly see this is their ambition. And Apple Maps is no different with their push to re-create reality with detailed and colorful 3D models. But I don’t think either is quite succeeding in this lofty goal.
Google tells us:
“With our new immersive view, you’ll be able to experience what a neighborhood, landmark, restaurant or popular venue is like — and even feel like you’re right there before you ever set foot inside”
Well that may be their desire, but I think they will find that after they’ve spent gazillions of dollars launching Immersive view (and gazillions more maintaining it) the general reaction will be “it looks pretty” and, after a while “meh” — and the young ones will promptly move on to a TikTok video promoting a new donut store.
But what could these organizations do to build a better, more informative map, in a true sense of the word, instead of just focusing on recreating reality?
Well I think Hoodmaps is one site that is showing the way. The serial entrepreneur Pieter Levels created the site back in 2017. Pieter has had a history of creating different and entertaining products3. Hoodmaps was Pieter’s project to crowdsource information about neighborhoods and put it on a map. Pieter built Hoodmaps on his own in just four days.
Look at Hoodmaps for any city and you quickly get a feel for the lay of the land (click on the image to visit the site for that city):
HoodMaps for London — Credit: Hoodmaps
Hoodmaps for San Francisco — Credit: Hoodmaps
I don’t know about you but I think Hoodmaps has more or less nailed the characteristics of neighborhoods in these two cities. I encourage you to explore your own cities and draw your own conclusion.
Similar to the work of Harry Beck, Hoodmaps successfully distills down valuable and immediately comprehensible information about an area into an easy to understand map. Yes, you can argue about the design and the cartography — but the idea is spot on.
Think what could be done if a mapping organization dedicated a (small!) team to soup up Hoodmaps’ concept and then brought in their huge audiences to make it really resonate.
Well it turns out that Google is attempting this with a new feature they plan to launch in the coming months called ‘neighborhood vibe’. The general goal is to help you understand what’s popular with the locals. As the feature has yet to launch it’s difficult to say if they’ll achieve their objective, but judging from the screenshots it doesn’t look like they’ll be as raw, punchy or informative as Hoodmaps. But we can all hope.
Forthcoming Google Maps ‘Vibe’ Feature — Credit: Google
If you’ve been in the mapping business as long as I have I’m guessing that you too will decry the lurch away from maps to this focus on recreating reality. Maps were invented for a reason — they reduce a complex world into something you can easily understand. A virtual reality can be very pretty, but it’s also just a photo on steroids — it doesn’t necessarily extract and present those golden nuggets of information you’re looking for. A map, however, can do that and it can do that extremely well.
So, are consumer maps dead?
For all of our sakes, I sincerely hope not.
1 Before in-vehicle navigation was pioneered by Etak in 1985, before consumer mapping on the web was invented by MapQuest in 1996 and before Google Maps launched in 2005
2 For a British to American translation please see this handy guide
3 Pieter’s latest foray has been to create AvatarAI.me. It allows you to quickly create 100s of AI generated Avatars of yourself. Pieter made US$100,000 from the project in the first 10 days. For more information on Pieter check out Levelsio on Twitter
-
12 Map Happenings That Rocked Our World: Part 3
Road Maps!
Before we delve into history1, let’s start with a multiple choice question:
When do you think the first road map was created? Was it:
- 1895?
- 1905?
- 1924?
The answer is, of course, none of the above.
The first road map of significance is arguably a map commissioned by the Emperor Augustus Caesar (63BC – 14AD). Augustus had his son-in-law, Marcus Agrippa, embark on a mapping project that took nearly 20 years to complete. The result was a map that stretched from the Middle East all the way back to Britain. Like many of the maps the Romans created at that time it had multiple purposes. Maps were used both to conquer lands and to administer their vast Empire. But like most maps today, they were also used for commerce.
What was interesting about Agrippa’s map was its sheer scale. It measured almost seven meters long (~22 feet) and 34 centimeters high (~1 foot): a linear scroll that somewhat conveniently rolled up for reference on long journeys. While it was a distorted format it still showed all the important details: the key settlements, the roads connecting them and the distances between each settlement. I should emphasize that the geographic scope of the map was vast: it covered the entire Roman Empire as well as the near east, India and Sri Lanka. It even indicated the location of China.
Alas the map is long gone, but a copy was made in c1250AD and still exists today. It is known as the ‘Tabula Peutingeriana’ and is preserved at the Austrian National Library. It is considered one of their greatest treasures:
Detail from the Tabula Peutingeriana showing Rome in the center, represented by a crowned figure on a throne
Credit: World History Encyclopedia
Rather than have me feebly attempt to describe this amazing work, I strongly encourage you to watch this 5 minute video from the BBC. I was dumbfounded — and I think you will be too:
The Tabula Peutingeriana
Video Credit: BBC
Impressive, right?
But what of more recent maps?
Well let’s fast forward to the year 1500AD. It was then that a map of Central Europe was developed by the compass maker and physician Erhard Etzlaub (1460–1532). It is the first known German road map. This was the era of the pilgrims and 1500AD was special — it was designated the ‘Holy Year’. In that year the pilgrims were expected to make their way to Rome and this map was specifically designed to help them find their way. It showed the routes to take and mountains to avoid. Perhaps, then, this is where we got the term “All Roads Lead to Rome”?
The First German Road Map: Erhard Etzlaub, This is the Road to Rome (c. 1500)
Credit: Bavarian State Library
Moving on another 200 years, yet another advance in mapping was made in Britain. It was there in the year 1675 that a chap called John Ogilby published a seminal work called ‘Britannia’ — ‘an illustration of the Kingdom of England and Dominion of Wales; by a geographical and historical description of the principal Roads thereof’.
Ogilby only started map making in the latter part of his life. Prior to that he was a dancer, then he became director of Dublin’s theater. Returning to England in the 1640s he went on to translate and publish Aesop’s fables. He set up a printing shop in London which he used to produce a number of works that included travel guides and traveller’s tales. But in 1671 King Charles II commissioned him to make ‘a particular survey of every county’. What’s interesting about Ogilby’s Britannia is that it takes the form of a strip map, rather like a precursor to the TripTik maps made popular by the American Automobile Association (AAA) in the 1930s. It was Ogilby’s atlas that set the standard for using 1760 yards for the mile, and a scale of one inch to the mile.
Ogilby’s Britannia: a map of part of the route from London to Flamborough Head in Yorkshire
Credit: The British Library
The first significant road atlas of the United States was “A Survey of the Roads of the United States of America” by Christopher Colles of New York in 1789:
Survey of the Roads of the United States of America
Credit: Library of Congress
Larry Printz describes the efforts of Colles in his excellent article for Hagerty: “Where the first automotive roadmaps came from”. He explains:
“Despite such distinguished customers as George Washington and Thomas Jefferson, the effort faltered because there was little use for road maps in the United States. Most trips were short, made by locals who already knew where they were going. And besides, inner-city roads were paved. Venturing any farther meant traversing unpaved, unmarked roads. Federal highways didn’t exist. Finding your way took time, patience and luck since most roads were originally trails carved out by wild animals or Native Americans.
It’s little wonder that until the early 20th century, traveling between cities was done mostly by rail, not by carriage.”
In 1901 a businessman and car enthusiast Charles Gillette from Connecticut created a series of maps called ‘The Automobile Blue Book’2 which covered the northeastern US from Boston to Washington DC. A few years later in 1906 the American Automobile Association (AAA) became the official sponsor of the Blue Book which dramatically increased its circulation. It wasn’t until 1911 that AAA produced its first interstate map, “Trail to Sunset,” a booklet of strip maps detailing a route from New York to Jacksonville, FL:
American Automobile Association (AAA) Strip Map from 1911
Credit: AAA
Now if you grew up in the US and were born before 1980 you might be wondering about Rand McNally. They actually got their start in 1868 producing railroad tickets and in 1872 railroad maps3. Their first road map wasn’t published until 1904. In 1907 they assumed publication of the Chapin Photo-Auto Guides — which were super cool — basically it was Google Street View or Apple Look Around about 100 years ahead of its time:
Rand McNally/Chapin Photo-Auto Maps and Guide Book
Credit: David Rumsey Map Collection
As automobiles and paved roads became more pervasive many other publishers got into the game. In Europe one of the most famous was Michelin. Their first publication, “Guides Michelin” for France, came out in 1900 which was several years before AAA and Rand McNally published their road maps.
What of today? Alas, most young ones can barely read a paper map, let alone know how to use one (street index anyone?). Paper road maps still do exist though! In case you don’t believe me… see the image of the latest Rand McNally road atlas below.
One really cool thing about Rand McNally’s road atlas is that it’s an atlas of the future — in this case for 2023! I’m not sure how the clever people at Rand do this, but perhaps the folks at Google Maps and Apple Maps could take note?
Rand McNally Road Atlas 2023
Credit: Rand McNally
Footnotes:
1 Warning to those of you born after 1990: smartphones have not always been ubiquitous. Before their invention one had the laborious task of having to refer to something called ‘printed maps’ to determine locations and routes to get there.
2 Not related to the Kelly Blue Book.
3 You can read more about Rand McNally’s history here
Acknowledgments:
- World History Encyclopedia
- Jeremy Norman: History of Information
- Guinness World Records
- British Broadcasting Corporation (BBC) — especially for this video
- German History Intersections
- The British Library
- Larry Printz for his article “Where the first automotive roadmaps came from” on Hagerty.
- The David Rumsey Map Collection
- Rand McNally
-
The Curse of Gerrymandering — & the Mapping Software Behind It
First, for those of you who are not accustomed to the ‘American Way’, a short introduction to gerrymandering:
The term is used to describe adjusting voting district boundaries to create an unfair advantage for a particular party or group. Basically it’s a way to enable a minority of the population to win control of government.
The term gerrymandering is named after the American politician Elbridge Gerry who was the 5th vice president of the USA under president James Madison from March 1813 until his death in 1814. Prior to becoming vice president Gerry was the governor of Massachusetts.
Elbridge Gerry – Credit: Wikimedia
In 1812, while Gerry was governor, the Jeffersonian Republicans forced a bill through the Massachusetts legislature to rearrange voting district lines to assure them an advantage in the upcoming senatorial elections. Apparently Governor Gerry only reluctantly signed the law. One of the districts was compared to the shape of a salamander, but when a particularly influential editor at the time saw it, he is said to have exclaimed: “Salamander! Call it a Gerrymander!”
As a result a cartoon-map depicting this district appeared in the Boston Gazette on March 26, 1812:
“The Gerrymander: a New Species of Monster” Boston Gazette, March 26, 1812. Credit: Library of Congress
Ever since the term has had a negative connotation — indicating corruption of the political process.
So how does this nefarious process of gerrymandering work? Well let’s look at a simple example:
When creating voting districts there are certain basic rules put in place that prevent overtly egregious boundaries. For example, it is common to require that each district have equal population, thereby preventing the voters in one district having more influence than another. For US Congressional Voting Districts the variation in population is generally held to less than 1%. Obviously populations change over time and so countries use information gained from a national census as input to redraw the boundaries. In the US this happens every 10 years.
Continuing with a hypothetical example, let’s imagine that the requirement is to create 5 voting districts from a set of 50 precincts. Let’s assume each precinct has equal population. More importantly let’s assume we know the voting characteristics of each precinct — i.e. whether the voters in each precinct would vote for the ‘Purple’ party or whether they would vote for the ‘Orange’ party. Given these assumptions here are two different ways to draw the boundaries that result in entirely different outcomes:
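Here is a toy sketch of that hypothetical in Python. The precinct leanings and the two district plans are entirely invented; the point is simply that the same 50 precincts can produce opposite outcomes depending on where the lines go:

```python
# 50 equal-population precincts: 30 lean Purple ("P"), 20 lean Orange ("O").
precincts = ["P"] * 30 + ["O"] * 20   # precinct IDs 0..49

def seats_won(plan):
    """Count districts won by each party for a plan (5 lists of 10 precinct IDs)."""
    wins = {"P": 0, "O": 0}
    for district in plan:
        purple_votes = sum(1 for i in district if precincts[i] == "P")
        wins["P" if purple_votes > len(district) / 2 else "O"] += 1
    return wins

# Plan A: Orange voters are "cracked" evenly across all five districts.
plan_a = [list(range(i, 50, 5)) for i in range(5)]          # 6 P + 4 O per district

# Plan B: Purple voters are "packed" into two districts, diluting them elsewhere.
purple_ids, orange_ids = list(range(30)), list(range(30, 50))
plan_b = [
    purple_ids[0:10],                       # 10 P, 0 O (packed)
    purple_ids[10:20],                      # 10 P, 0 O (packed)
    purple_ids[20:24] + orange_ids[0:6],    #  4 P, 6 O
    purple_ids[24:27] + orange_ids[6:13],   #  3 P, 7 O
    purple_ids[27:30] + orange_ids[13:20],  #  3 P, 7 O
]

print("Plan A:", seats_won(plan_a))   # {'P': 5, 'O': 0} -> Purple sweeps every district
print("Plan B:", seats_won(plan_b))   # {'P': 2, 'O': 3} -> Orange wins a majority of seats
```

Same voters, same precincts: a 60/40 Purple electorate wins every seat under one plan and only a minority of them under the other.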
If you think this is all rather academic and can’t possibly happen take a look at some of these contortions in 2022 US Congressional Districts. Now, I’m not saying that contortion implies gerrymandering, but it sure looks weird to me. And sorry @Texas — you win the prize for the most convoluted shapes:
[Huge credit to Alasdair Rae for providing the maps above. Alasdair is an internationally recognized mapmaker, data analyst, author and visual storyteller. Formerly a Professor of Urban Studies and Planning in the UK, he now runs Automatic Knowledge, a UK-based data, analysis and training company.]
What’s worse are attempts to create disproportionate outcomes based on race. The site All About Redistricting is a great resource to learn more about the whole redistricting process. It has this to say about the various ploys used to achieve discrimination:
In redistricting, one ploy is called “cracking“: splintering minority populations into small pieces across several districts, so that a big group ends up with a very little chance to impact any single election. Another tactic is called “packing“: pushing as many minority voters as possible into a few super-concentrated districts, and draining the population’s voting power from anywhere else.
In the US discrimination like this is in theory prevented by Section 2 of the 1965 Voting Rights Act. However, this may all be upended in a new case being considered by the current session of the US Supreme Court, Merrill vs. Milligan. The London Guardian wrote about the case just a few days ago:
Merrill v Milligan concerns Alabama, where Republican lawmakers want to draw up congressional district maps that would give Black voters the power to send just one African American member to Congress out of a total of seven representatives, even though Black Alabamans make up a quarter of the state’s population. The map was blocked by three federal judges who ruled that it was racially discriminatory and that the state had engaged in racial gerrymandering.
In its brief to the supreme court, Alabama effectively invites the conservative justices to make it virtually impossible to challenge racial gerrymandering. Should the state’s view prevail, challengers would have to show that racial discrimination was the primary intent behind how district lines were drawn.
“That’s a very hard standard to prove,” said Paul Smith, senior vice-president of the Campaign Legal Center. Should the supreme court side with Alabama, Smith added, “it would allow legislatures to undo Black and Latino-majority districts and do away with the opportunity of people to elect their own representatives”.
The case was argued before the Supreme Court on October 4. It will be some months before the outcome is known.
So, given all this, how exactly does one go about creating voting districts in the first place? It must have been incredibly laborious to do it all by hand back in the days of Elbridge Gerry. Today of course we have technology at hand — and not just any technology — we have mapping technology!
According to Ballotpedia there are six packages designed specifically for governments1:
Software Package | Developer | Backend Technology
- Autobound | Citygate GIS | Esri ArcGIS
- Auto-Redistrict | open source | open source
- DISTRICTSolv | ARCBridge | Esri ArcGIS
- Esri Redistricting | Esri | Esri ArcGIS
- iRedistrict | ZillionInfo | ZillionInfo
- Maptitude for Redistricting | Caliper | Caliper

Credit: Information from Ballotpedia

I was fully expecting the websites for these products to be emblazoned with colorful ads, perhaps something like this:
Alas — they are all quite boring and only talk about how they can be used to help in creating plans that “meet legislative requirements”. However, I have no doubt that any of these tools could be misused. One other note: none of them mention AI or ML. No doubt that’s coming though. I can only imagine what it will bring.
So is gerrymandering just a phenomenon limited to the US? And what can be done to prevent it?
Canada is one example where gerrymandering was rife until the 1960s. Andrew Prokop writes about it in his article on Vox: “How Canada ended gerrymandering”. Andrew explains what Canada did:
“Canadian reapportionment was highly partisan from the beginning until the 1960s,” writes Charles Paul Hoffman in the Manitoba Law Journal. This “led to frequent denunciations by the media and opposition parties. Every 10 years, editorial writers would condemn the crass gerrymanders that had resulted.
Eventually, in 1955, one province — Manitoba — decided to experiment, and handed over the redistricting process to an independent commission. Its members were the province’s chief justice, its chief electoral officer, and the University of Manitoba president. The new policy became popular, and within a decade, it was backed by both major national parties, and signed into law.
Independent commissions now handle the redistricting in every province. “Today, most Canadian ridings [districts] are simple and uncontroversial, chunky and geometric, and usually conform to the vague borders of some existing geographic / civic region knowable to the average citizen who lives there,” writes JJ McCullough.
“Of the many matters Canadians have cause to grieve their government for, corrupt redistricting is not one of them.” Hoffman concurs, writing, “The commissions have been largely successful since their implementation.”
Implementing independent, nonpartisan commissions in the US is more complex. The decision is made at the state level, not the federal level. And I guess certain states (both Democratic and Republican) are perfectly happy to have the fox guard the hen house.
Again from Andrew’s article:
”There are no truly nonpartisan redistricting commissions in the United States,” political scientist Bruce Cain of Stanford University told me in 2014. Iowa uses a nonpartisan agency that’s not permitted to take party registration into account, but it still gives final say to the governor and legislature.
If all this leaves you rather depressed there is one ray of hope. A recent report by David Leonhardt in the New York Times “finds that the House of Representatives has its fairest map in 40 years, despite recent gerrymandering”.
I’ll leave you with a quote from Bernard Grofman and German Feierherd in a Washington Post article from 2017:
However, in most other countries, legal challenges [to voting districts] are limited, and there is not the same concern for strict population equality.
So perhaps the problem all boils down to lawyers? Ah, but then that’s a whole other topic, isn’t it?
Footnotes:
1 There are other packages but they are designed more for the general public and educational purposes.
Acknowledgements:
- Alasdair Rae for providing the maps of the US 2022 Congressional Districts used in this post.
- The US Library of Congress for their image of the Gerry-mander in the Boston Gazette.
- Professor Justin Levitt and Professor Doug Spencer for their detailed and informative site: “All About Redistricting”
- US Department of Justice: “Section 2 of the Voting Rights Act”
- SCOTUSblog: “Merrill v. Milligan”
- Ed Pilkington, The Guardian: “US supreme court to decide cases with ‘monumental’ impact on democracy”
- Ballotpedia: “Redistricting apps and software available for the 2020 cycle”
- Andrew Prokop, Vox: “How Canada ended gerrymandering”.
- David Leonhardt, New York Times: “Gerrymandering, the Full Story”
- Bernard Grofman and German Feierherd, Washington Post: “The U.S. could be free of gerrymandering. Here’s how other countries do redistricting.”
- Wikimedia and all its contributors
-
The Religious Question of HD Maps: Tesla vs. Everybody Else
I grew up in a world of stick maps — street centerlines and roughly digitized curves. All topologically correct, but crude. This was the absolute minimum required to power the pioneering Etak Navigator back in 1985.
The Etak Navigator – 1985
At Etak we took extremely simplistic digital map data from the US Census Bureau, called GBF/DIME files1, and, using information from the paper maps published by the US Geological Survey, added shape, topology and any other missing data we could find. This was an extremely labor intensive process. At its peak we had about 36 workstations and ran 24×7 shifts. It took us years to get there, but eventually we digitized the whole of the US and much of Europe.
Etak Map Production Workstations – c. 1988
Fast forward to today's world and organizations that want to make a map have it much easier, but it's still really hard. If you want to own the map you can't just copy OpenStreetMap. Just like Etak did in 1985, you have to start from scratch. But thanks to Gordon Moore and his law you now have a night-and-day technology advantage. And in the US at least you can start with the US Census Bureau's TIGER files2, which are a tad more shapely than their GBF/DIME file predecessors.
You can lease a large fleet of vehicles, equip them with expensive cameras and LiDARs, drive all the roads and vacuum up all the data. But while this will get you a lot, it won't get you everything. You'll get lanes, street signs, traffic lights, speed limits and maybe, if you're lucky, some addresses or businesses. But you won't get post codes or administrative areas. Or rivers. Or golf courses. Or indoor maps. Or 3D building models.
And of course all this won’t come cheap. Plan on a budget that starts with a number greater than one and ends with a ‘B’.
At the end of it all you'll have a beautiful map. But that construction zone your vehicle passed when it was collecting data? Well, that was changing the intersection from a four-way stop sign to one controlled by traffic lights. Your beautiful map is now out-of-date. Sucker!
Maintaining a general purpose map that is used for finding locations and turn-by-turn navigation is hard. Really hard. Believe me — I lived through it at Etak, at MapQuest and at Apple Maps. Even something supposedly simple, like keeping speed limits up to date, is horribly hard. There are four million miles of drivable road in the US alone. Your private fleet is not going to be able to drive the entire network every day, no matter what the market cap of your company is. It's just not tenable.
So now let's switch gears. Let's up the ante. Let's talk about creating a map not just for finding stuff and getting there. Instead let's talk about a map to support autonomous vehicles. Now you really have to be on top of your game. Just about every company in the autonomous vehicle business will tell you that you need something called an 'HD Map', or high-definition map. It's like the general purpose map from Google Maps or Apple Maps, but with excruciating detail and centimeter-perfect accuracy.
HERE HD Map – Credit: HERE Global B.V.
There's a ton of money pouring into the HD Map business. According to a report from MarketsandMarkets, it's projected to reach US$16.9B by 2030. The theory is that you absolutely need an HD Map to support a Level 3+ autonomous driving system. The difficulty of producing an HD Map is illustrated by the fact that autonomous vehicles are commonly limited to specific geographic areas. That's partly due to climate — operators want to reduce the risk of sensors being obscured by road grime — but it's also because the vehicles need an HD Map, and HD Maps are so expensive to produce that their coverage is very limited.
I'm not an expert in autonomous systems, but I suspect many of them rely on a method of differentiation to operate. By that I mean the autonomous systems take what the vehicle sees and dynamically compare it to the HD Map as a reference. They use this comparison to deduce (1) where the vehicle is, (2) where it can go and (3) what's around the vehicle that is not part of the map, for example other vehicles and objects.
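To make that idea a little more concrete, here's a minimal sketch in Python of the comparison described above. This is purely my own illustration, not any vendor's actual pipeline; the landmarks, coordinates and match radius are all made up:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Landmark:
    x: float       # meters, map frame
    y: float
    kind: str      # e.g. "sign", "signal", "lane_edge"

# A tiny "HD Map": static landmarks with survey-grade positions (made up).
HD_MAP = [
    Landmark(10.0, 2.0, "sign"),
    Landmark(20.0, -1.5, "signal"),
    Landmark(30.0, 2.1, "sign"),
]

def localize_and_diff(observations, prior_x, prior_y, match_radius=1.0):
    """Compare what the sensors see against the HD Map.

    observations: landmarks the perception stack detected, placed in the map
    frame using an imperfect GNSS/odometry prior. Returns a refined position
    and the observations with no map counterpart (probably dynamic objects).
    """
    dx_sum = dy_sum = 0.0
    matches = 0
    unmatched = []
    for obs in observations:
        best, best_d = None, match_radius
        for lm in HD_MAP:
            d = hypot(obs.x - lm.x, obs.y - lm.y)
            if obs.kind == lm.kind and d < best_d:
                best, best_d = lm, d
        if best is None:
            unmatched.append(obs)        # not in the map: treat as dynamic
        else:
            dx_sum += best.x - obs.x     # how far off the prior pose was
            dy_sum += best.y - obs.y
            matches += 1
    if matches:
        refined = (prior_x + dx_sum / matches, prior_y + dy_sum / matches)
    else:
        refined = (prior_x, prior_y)     # nothing matched: fall back to odometry
    return refined, unmatched

# The cameras "see" two known signs (slightly shifted by pose error) plus a parked car.
seen = [Landmark(10.4, 2.1, "sign"),
        Landmark(30.5, 2.0, "sign"),
        Landmark(15.0, 0.0, "vehicle")]
pose, dynamic = localize_and_diff(seen, prior_x=0.0, prior_y=0.0)
print("refined pose:", pose)
print("dynamic objects:", [(o.x, o.y, o.kind) for o in dynamic])
```

The essence is that the HD Map acts as ground truth: whatever matches refines the vehicle's position, and whatever doesn't match is, by elimination, something to drive around.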
Back in 1985 when I was at Etak we used to joke that the Etak Navigator would be like the introduction of the electronic calculator. Just like calculators eliminated humanity's ability to perform arithmetic in their heads, the Etak Navigator would eliminate humanity's ability to remember how to get from A-to-B. Sure enough this prophecy has turned out to be completely true. My wife reminds me of it constantly — "Why do you need directions home? Don't you know where to go?"
Etak used a system of cassette tapes to store the map data. We imagined cars roaming around aimlessly at the edge of our map coverage — their owners completely lost due to having no EtakMap. The brains inside many autonomous vehicle systems are like these poor owners of the Etak Navigator — they’d be completely lost without an HD Map.
So the big question is this: if it takes billions of dollars to maintain a plain Jane general purpose map, how can organizations possibly build and maintain an HD Map?
The theory is that eventually your personal vehicle will collect data as well as navigate. So if you’ve bought that snazzy new Waymo Bubble Car it will vacuum up data while it’s driving you around — and that data will help keep the HD Map current.3
Clearly the issue is scale. Today no member of the public owns a Waymo. Waymo operates 25,000 vehicles, but how often do you see one drive down your cul-de-sac? Not as often as an Amazon van I suspect. There are simply not enough vehicles in these dedicated fleets to collect the necessary data for an HD Map and, more importantly, keep it current.
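A quick back-of-envelope calculation shows just how stark the scale gap is. The road-network and fleet figures are the ones quoted in this post (four million miles of drivable road, a dedicated fleet of roughly 25,000 vehicles versus the roughly two million consumer Teslas discussed further below); the daily mileage per vehicle is purely my own assumption:

```python
# Back-of-envelope only. Fleet sizes are the figures quoted in this post;
# the miles-per-vehicle-per-day number is purely my own assumption.
US_ROAD_MILES = 4_000_000         # drivable road miles in the US
DEDICATED_FLEET = 25_000          # a dedicated robotaxi-style fleet
CONSUMER_FLEET = 2_000_000        # consumer vehicles already on the road
MILES_PER_VEHICLE_PER_DAY = 30    # assumed average daily driving

for name, fleet in [("dedicated fleet", DEDICATED_FLEET),
                    ("consumer fleet", CONSUMER_FLEET)]:
    daily_miles = fleet * MILES_PER_VEHICLE_PER_DAY
    days_to_cover = US_ROAD_MILES / daily_miles
    print(f"{name}: ~{daily_miles:,} miles/day, "
          f"~{days_to_cover:.2f} days to touch every road mile once "
          f"(assuming, unrealistically, that no road is ever driven twice)")
```

And even that flatters the dedicated fleet, because its vehicles are concentrated in a handful of metro areas rather than spread evenly across the network.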
But there is one way out of this conundrum.
What if your system doesn’t rely on an HD Map?
As I said in the title of this post — this is very much a religious question — and it's the folks at Tesla who have a completely different religion. They don't use an HD Map. This means that, in theory at least, their vehicles aren't limited to driving in a particular area.
If you haven’t already watched the recent video from Tesla’s AI Day 2022, I strongly recommend you do so. It’s quite technical, but it will give you a very clear idea of how Tesla thinks — or to use Steve Jobs’ phrase: how Tesla “thinks different”.
If you distill everything Tesla talked about in their presentation — humanoid robots, full self driving methodologies and home-grown supercomputers — I think the takeaway is that Tesla is fundamentally about two things:
- Tesla is about scale
- Tesla is about efficiency
This is demonstrated by their effort to produce a humanoid robot called Optimus, which I predict is destined to become the Model ’T’ of robots. Yes, as predicted, the technical media immediately scoffed at Tesla’s efforts, saying it was nothing like what you see from Boston Dynamics. But Boston Dynamics started their dream 30+ years ago! 4 Tesla has been working on Optimus for just 13 months.
The Tesla Optimus Robot – Tesla AI Day 2022 – Credit: Tesla
So the reaction is a little like the initial reaction to iPhone. I would be somewhat cautious about immediately dismissing their efforts.
Tesla is focused on building Optimus out of cheap, readily available materials — no carbon fibre for example — and they plan to manufacture it using techniques they’ve learned from making Tesla vehicles. Elon Musk predicts they could ultimately produce millions of units with each one costing less than a car. This is an example of how Tesla focuses on scale.
Tesla is also leveraging everything they’ve learned from their other work to speed development. This will help them leapfrog everyone else in the industry. For example, they’re using their full self driving software to give Optimus the brains it needs to navigate indoor spaces. This is an example of how Tesla focuses on efficiency.
Video of Tesla Visual Navigation for Optimus Robot – Credit: Tesla
https://www.youtube.com/watch?v=ODSJsviD_SU&t=2911s
If you look at Boston Dynamics by comparison, they've done some very impressive work, but now they have a significant challenge ahead of them. They don't have Tesla's high volume manufacturing prowess, nor do they have Tesla's autonomous navigation expertise, nor do they have a high volume factory floor that they can use to test and refine their robots at scale.
But let’s get back to the question of HD Maps:
For full self driving, Tesla uses a map, but to use their words, it’s a “coarse road-level map … this map is not an HD Map”. This ‘coarse’ map is used in combination with vision components built from vehicle camera data to dynamically derive lane connectivity in real time.
Video of Tesla Neural Network for Deriving Lanes. It requires no HD Map – Credit: Tesla
https://www.youtube.com/watch?v=ODSJsviD_SU&t=5223s
By choosing not to rely on HD Maps, Tesla has undoubtedly chosen a much harder problem to solve, as their vehicles have no theoretical 'ground truth' to compare to. But assuming their approach is successful it should result in a much more capable, intelligent and independent system, one that doesn't carry the extreme cost burden of building and, more importantly, maintaining an HD Map.
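As a thought experiment, here's a tiny Python sketch of that division of labor. It is entirely my own illustration, not Tesla's architecture: a coarse road-level map supplies only topology (which roads meet where), while a stand-in for the vision network supplies the lanes and lane-to-lane connections it observes in the moment:

```python
# Illustrative only: a toy split between a coarse road-level map and
# vision-derived lane connectivity. All names and data are hypothetical.

# Coarse map: which roads meet at an intersection. No lanes, no geometry.
COARSE_MAP = {
    "intersection_42": {
        "incoming": ["main_st_eb"],
        "outgoing": ["main_st_eb", "oak_ave_nb"],
    },
}

def detect_lanes_from_cameras(frame):
    """Stand-in for a neural network that predicts lanes from camera images.
    Here it just returns canned detections for the example frame."""
    return {
        "connections": [
            ("main_st_eb/lane_1", "oak_ave_nb/lane_1"),   # observed left turn
            ("main_st_eb/lane_2", "main_st_eb/lane_2"),   # observed through lane
        ],
    }

def derive_lane_graph(intersection_id, frame):
    """Combine coarse topology with per-frame vision output into a lane graph."""
    topology = COARSE_MAP[intersection_id]
    vision = detect_lanes_from_cameras(frame)
    lane_graph = []
    for src, dst in vision["connections"]:
        src_road = src.split("/")[0]
        dst_road = dst.split("/")[0]
        # Keep only connections consistent with the coarse map's topology.
        if src_road in topology["incoming"] and dst_road in topology["outgoing"]:
            lane_graph.append((src, dst))
    return lane_graph

print(derive_lane_graph("intersection_42", frame=None))
```

The trade-off is clear: the map side stays cheap to maintain because it carries almost no detail, while all the hard, perishable detail is re-derived from the cameras on every drive.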
Tesla still spends an enormous amount of time, energy and money to process the information used to train their neural networks, particularly the neural networks used for what they call "auto labeling" 5. Where does all this training data come from? Well, of course, a lot of it comes from all those Teslas driving around — there are now about 2 million Teslas on the road. Given Tesla's fleet is orders of magnitude larger than anybody else's autonomous fleet, they can further accelerate away from their competition. (And now they're going to do the same in robotics.)
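Footnote 5 explains what Tesla means by auto labeling; conceptually it looks something like the sketch below. Again, this is my own simplification with hypothetical names: a large, slow model running in the data center labels raw fleet clips, and those machine-generated labels become training data for the much smaller network that has to run in the car:

```python
# Conceptual sketch of "auto labeling", with hypothetical names -- not any
# company's real pipeline.

def big_offline_model(clip):
    """Stand-in for a large, slow model run in the data center. It can afford
    to look at a whole clip at once, even in both time directions."""
    return [{"frame": i, "objects": ["lane_edge", "stop_sign"]}
            for i in range(len(clip))]

def auto_label(fleet_clips):
    """Turn raw, unlabeled fleet clips into labeled training examples."""
    dataset = []
    for clip in fleet_clips:
        labels = big_offline_model(clip)     # machine-generated labels
        dataset.extend(zip(clip, labels))    # (frame, label) pairs for training
    return dataset

# Hypothetical clips; in reality these would be camera frames uploaded by the fleet.
clips = [["frame_a", "frame_b"], ["frame_c"]]
training_data = auto_label(clips)
print(f"{len(training_data)} auto-labeled frames ready to train the in-car network")
```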
The cost of Tesla's differing approach is still significant. For example, it includes the development of Tesla's Dojo supercomputer to solve the massively parallel computing problem of processing petabytes of data. Their supercomputer architecture is quite different from conventional approaches that traditionally amass banks of GPUs, and as a result it provides significant efficiency gains. I don't see GM, Toyota or VW developing a brand new supercomputer architecture like this anytime soon. Perhaps NVIDIA? We shall see.
I suspect there will be other benefits to Tesla’s approach, some of which even Tesla has yet to anticipate or realize.
It’s ironic, but if anyone could build and maintain an HD Map that covers a large geographic area, like the USA or Europe — and keep it super current — it’s Tesla. They’re the only ones that have a large enough fleet to do it.
I pity other manufacturers. It’s going to be insanely hard to keep up.
In the meantime we’ll see who ultimately wins this religious battle.
Stay tuned.
Footnotes:
1 GBF = Geographic Base File; DIME = Devilishly Insidious Map Encoding. Credit: Marv White
2 TIGER = Topological Illusion Generating Extensive Rework. Credit: Marv White
3 In the interim, traditional OEMs hope that third parties like Intel's Mobileye, Toyota's CARMERA and Nvidia's DeepMap will help them collect data from the cars they already sell. However, the issue is that most of their vehicles just don't have the cameras or sensors to collect the required data at the quality that is needed.
4 See https://www.bostondynamics.com/about: “We began the pursuit of this dream over 30 years ago, first in academia and then as part of Boston Dynamics”
5 This is the process of automatically identifying and categorizing features and objects that the cameras in Tesla vehicles see as they drive.
Acknowledgments:
- Marv White, Chief Technology Officer at Sportvision. Marv is an amazing mathematician with a wicked sense of humor and was my mentor at Etak
- Tesla and everything they presented at AI Day 2022.
- MarketsandMarkets for their report on the HD Map market