Map Happenings

Mapping Industry Tidbits, Activity and Musings

  • OK Map Making Nerds — Where’s Your Revolution?

    In case you didn’t notice, we’re living in a world of revolutions. 

    Generative AIs are exploding and seemingly turning the world upside down. It feels very similar to the dot com boom of the 1990s, when your company wasn’t worth diddly squat unless its name ended in ‘.com’. Today your products had better include some form of ML or AI — preferably generative — to grab any attention. It’s getting so ridiculous that I wouldn’t be at all surprised if Cadbury’s somehow integrated ChatGPT into their next chocolate bar.

    In parallel with all this there’s another revolution happening in automotive manufacturing. 

    Here you are witnessing a wholesale switch in the methods for designing, engineering and manufacturing vehicles. It’s not just about replacing the internal combustion engine with an electric motor and lots of AA batteries; it’s about developing a completely new method of producing vehicles.

    The Mercedes ‘AA’ — Credit: SNL

    In the western world it’s Tesla that’s grabbing the limelight, spurring all the other auto OEMs to jump on the EV bandwagon. 

    The PC angle on the story for the OEMs is that it’s part of their mission to save the planet. However, it’s really about FOMO and increasing profits — there are orders of magnitude fewer parts in an EV vs. an ICE equivalent. The legacy OEMs’ strife and anxiety are being cranked up by Tesla’s use of its Giga Press1, which is resulting in yet another massive increase in manufacturing efficiency. No wonder Ford is sweating.

    But what about the map making world? 

    Has anyone done — or is anyone doing — something similar to revolutionize global map production?

    And I’m not just talking about building the map, I’m talking about keeping it maintained and up-to-date.

    Since organizations started the process of building global street maps in the mid-1980s, the approach has more or less been the same:

    • Get rights to some existing reference maps or aerial photography
    • Develop some map editing tools that will scale (sorry @Esri)
    • Throw bodies at the problem. Thousands of them. 

    In the 2000s organizations started adding fleets of vehicles to the mix, first equipped with cameras and later also with LiDAR. This enabled richer data collection — for example, things like speed limits and lane info — and made it easier to verify ‘ground truth’.

    But map production still took thousands of bodies. 

    Then around 2007 something magical happened. Smartphones came to be, first the iPhone and later other photocopies. Over time the proliferation of these devices has resulted in an abundance of new data. While this treasure trove of information has mainly been used by Location Harvesters, Personal Information Brokers & Assholes to turn you into a product, it has also proved useful for map making.

    The anonymized and aggregated data from mobile devices can be used to both derive new products and help maintain existing ones. 

    The prime example is real-time traffic information: those red and orange lines you see overlaid on consumer maps denoting traffic jams are nearly always derived from movements of mobile devices.

    But these movements can also be used to derive change signals (a rough sketch follows the list below). For example:

    • Where are devices traveling along a path where there is no road? Is that a new road?
    • Where are devices no longer traveling in a particular direction along an existing road? Is that a new one way?
    • Where are devices no longer turning left at an intersection? Is that a new turn restriction?
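
    To make that concrete, here is a minimal sketch (in Python, with hypothetical inputs) of how anonymized probe traces might be screened against the existing road network to produce the first kind of change signal. The distance function is assumed to come from whatever spatial index you have over the current map; nothing here is any particular vendor’s pipeline.

    ```python
    # Minimal sketch: flag probe traces that mostly travel where the map says
    # there is no road. A high off-road ratio is a change *signal* (possibly a
    # new road), not proof of one -- GPS error and parking lots exist.
    from typing import Callable, List, Tuple

    Point = Tuple[float, float]          # (latitude, longitude)
    DistFn = Callable[[Point], float]    # meters from a point to the nearest mapped road

    def offroad_ratio(trace: List[Point], dist_to_road_m: DistFn,
                      max_offset_m: float = 25.0) -> float:
        """Fraction of points farther than max_offset_m from any mapped road."""
        if not trace:
            return 0.0
        off = sum(1 for p in trace if dist_to_road_m(p) > max_offset_m)
        return off / len(trace)

    def change_candidates(traces: List[List[Point]], dist_to_road_m: DistFn,
                          threshold: float = 0.8) -> List[List[Point]]:
        """Keep only traces that mostly travel off the known road network."""
        return [t for t in traces if offroad_ratio(t, dist_to_road_m) > threshold]
    ```

    A real pipeline would cluster many such traces over weeks, weigh GPS error and only then hand a candidate to a human editor or an automated verifier.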

    So that’s progress.

    However, there’s a limit to its usefulness. 

    Even though the data is a large firehose, it’s fundamentally just movement data — and due to the current  limitations of GPS it’s not always accurate enough to derive even simple information, particularly in urban canyons. 

    So, in other words, while movement data definitely can be used as a change signal, it’s extremely difficult to use it to derive map edits automatically. 

    While it’s progress, it’s not exactly what you’d call a revolution.

    So what do you need for a real revolution?

    Well, I’d make the argument that it distills down to three things:

    • Eyes
    • Smart Processing
    • Streaming

    Eyes

    Let’s start with eyes. 

    By ‘eyes’ think of images like Google StreetView or Apple’s LookAround, but at a frequency that will make a difference. StreetView and LookAround images come from the dedicated fleets that the map makers employ, but the problem is they only drive the streets about once a year — if you’re lucky. Sorry guys, but that doesn’t cut it for revolutionizing map production. 

    Now there are other organizations collecting image data — examples include Mapillary2, Solera/SmartDrive, Lytx, Nexar, Hivemapper and Gaist, but again what’s missing is massive volume. 

    To really stay on top of things you need eyes on every road, every day.

    Where might that volume come from? Well, from the cameras built into vehicles, of course. There are two companies that I’ll highlight here that could bring the volume: Mobileye and Tesla.

    Mobileye, recently spun off from Intel and subsequently IPO’d, sells systems to auto OEMs to enable them to provide driver assistance systems like adaptive cruise control, collision avoidance and ultimately, they hope, completely autonomous driving. They claim that their most basic system — which includes a front facing camera — is already installed in millions of vehicles. 

    What about Tesla? Well as of April 2023, Tesla has sold a total of 4,061,776 electric vehicles3. Each of them has eight cameras. That’s a lot of eyes.

    Phew — sounds good right?

    Alas there is one small problem in the way: lawyers.

    Yes, dear readers, it turns out that the auto OEMs want to keep all that data to themselves, so it’s actually pretty hard to come by. 

    But it’s not just about eyes on the ground. You also need eyes in the sky. These eyes, used intelligently, can in theory collect data automatically and detect change.

    And the good news is there is an ever increasing plethora of eyes in the sky, not so much from drones (from which data is at best very limited and sporadic), but from birds. Small ones. They’re called earth observation satellites. 

    There’s a ton of activity going on in this space — volume, frequency of capture and resolution are increasing by leaps and bounds, sensors are evolving — and in the meantime costs are coming down by orders of magnitude. Pretty soon we’re going to be awash with data. For those of us in the map making business it’s going to be thrilling to watch because it’s going to change the way business is done.

    One of the companies I’m watching is Satellogic, who claims to have got the costs of data acquisition down to $0.46 per km2 — two orders of magnitude cheaper than their competition:

    Credit: Satellogic

    One way or another all these eyes will produce the volume of data needed. It will just be a matter of time.

    Smart Processing

    The question, of course, is: what are you going to do with all this data? You’re talking many, many petabytes at least.

    To make good use of it you need to be intelligent about extracting information. Of course this is where machine learning models come in, not so much targeted at creating maps, but instead at detecting change. 

    Change can be detected from the ground, for example by flagging construction zones through the automatic detection of traffic cones. Your machine learning models might also automatically pick up traffic lights, stop signs, speed limits, et cetera.
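
    To make the idea concrete, here is a hedged sketch of what ‘detecting change rather than creating maps’ can look like in practice: compare what a detector reports against what the map currently says, and only emit a candidate for review. The detector itself is out of scope here; the observed value and confidence are assumed to come from whatever vision model you run.

    ```python
    # Minimal sketch: turn a street-level detection into a change *candidate*
    # instead of writing it straight into the map.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ChangeCandidate:
        road_id: str
        attribute: str
        map_value: Optional[int]
        observed_value: int
        confidence: float

    def speed_limit_change(road_id: str, observed: int, confidence: float,
                           map_value: Optional[int],
                           min_confidence: float = 0.9) -> Optional[ChangeCandidate]:
        """observed/confidence come from a sign detector; map_value is what the
        map currently says for this stretch of road."""
        if confidence < min_confidence:
            return None          # too uncertain to act on
        if map_value == observed:
            return None          # the map already agrees, nothing to review
        return ChangeCandidate(road_id, "speed_limit", map_value, observed, confidence)

    # e.g. speed_limit_change("road-42", observed=65, confidence=0.93, map_value=55)
    # produces a candidate for a human editor or an automated verifier to confirm.
    ```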

    Detecting information from this street level imagery is already quite advanced. You’ll be familiar with it if you’ve ever ridden in a Tesla where the screen displays people and objects that the cameras see, and more importantly for map makers – traffic lights, speed limits and signs. 

    Ultimately all these eyes on the road might also be used to keep information about places and businesses up-to-date, including volatile information from signs on the windows indicating things like operating hours and ongoing sales. It’ll take some work, but it will get there.

    Using street level imagery to help keep business information current — Credit: Apple Maps LookAround

    Detecting change from the sky is more interesting. For example, there’s a homebuilding data company called Zonda which is using imagery to automatically detect phases of building construction, so you can tell when streets are in, framing has started or roofs are on. 

    Perhaps more interestingly, there’s a company called Blackshark.ai, which has a service called Orca that is able to perform automatic detection on global imagery, e.g. for vegetation classification and building detection. I’ve not seen them produce a specific workflow for detecting road changes at scale yet, but I wouldn’t be surprised if they have something in the works.

    Credit: Blackshark.ai

    Streaming

    Back in the heyday of printed road atlases everyone would be thrilled to get the annual update of their favorite road atlas from the likes of Rand McNally, Michelin or the Ordnance Survey.

    Then ‘Sat Nav’ systems came along, and after spending $2,000 or more for an ugly stick map you only had to pay a ransom of a few hundred dollars to get your refreshed map CDs or DVDs.

    The cadence was still pretty much annual however.

    Then, lo and behold, MapQuest came to be and suddenly you didn’t have to worry about DVDs or paying annual ransoms. The map updated itself!

    But little did most of you know that organizations like MapQuest were beholden to the map makers of the day like Navteq, GDT and TeleAtlas. At best they sent MapQuest an update every quarter. On top of that it took MapQuest several months to process all the data, so by the time it got to customers the map was at least six months out of date.

    It wasn’t until Google started making their own map that things really started to change. Because Google was developing the whole stack, they had the ability to control the release cycles.

    Eventually the cadence of map releases became more frequent, first monthly and then with the ability to splice in critical updates, e.g. for highway and motorway intersections. But still the pipeline was geared towards releasing all the data in one big glued-together multi-layered lump. 

    The advent of displaying real-time traffic on top of the roads forced a change in architecture as traffic conditions change by the minute. This precipitated the need to stream at least some of the data.

    The question is, is it possible to stream all layers of the data independently from one another — so that an update to, say, roads can be streamed in separately from updates to parks or indoor maps?

    Achieving such a goal might enable a true ‘living map’ with almost zero latency in updates.4
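
    As a toy illustration of the idea, and emphatically not any vendor’s actual architecture, per-layer streaming can be as simple as giving every layer its own independent update stream so that a road edit never waits on parks, buildings or indoor maps:

    ```python
    # Toy sketch of layer-independent streaming. Each layer gets its own queue;
    # publishing to one never blocks, or waits on, any other.
    import queue
    import time
    from dataclasses import dataclass, field

    @dataclass
    class LayerUpdate:
        layer: str            # e.g. "roads", "parks", "indoor"
        feature_id: str
        payload: dict
        timestamp: float = field(default_factory=time.time)

    class LayerStreams:
        def __init__(self, layers):
            self._streams = {layer: queue.Queue() for layer in layers}

        def publish(self, update: LayerUpdate) -> None:
            self._streams[update.layer].put(update)

        def poll(self, layer: str):
            """Drain whatever updates are currently waiting for one layer."""
            q = self._streams[layer]
            while not q.empty():
                yield q.get()

    streams = LayerStreams(["roads", "parks", "indoor"])
    streams.publish(LayerUpdate("roads", "road-42", {"speed_limit": 65}))
    for update in streams.poll("roads"):     # consumers subscribe per layer
        print(update.layer, update.feature_id, update.payload)
    ```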

    I’m not sure if any organization in the map data editing, processing and publishing business has truly achieved a layer independent, near zero latency streaming system yet. I’ve never seen the inner workings of Google Maps — perhaps they have, but I’d be surprised. 

    One organization that is certainly striving for such a system is HERE. That was the underlying story behind their recent announcement of Unimap at CES.

    For example, they want to be able to take speed limit data that is recognized by vehicles driving around, quickly and automatically verify it, and then immediately stream the newly updated information back out to their mapping services. HERE has an advantage in this space as their investors include BMW, Audi and Mercedes, so in theory at least those data could come in volume from the OEMs’ fleets.

    This approach may be a little nascent as there aren’t enough BMW, Audi and Mercedes vehicles on the road with the necessary ‘eyes’ yet, but hell, it won’t be too long before there is critical mass. So kudos to HERE for showing leadership.

    Tesla has a ton of ‘eyes’ and could in theory stream the signs and objects their vehicles recognize back to their map. But for navigation at least Tesla doesn’t have their own map. They rely on Google.

    Hmm — what does that particular data license agreement look like I wonder? Is there a quid pro quo that we don’t know about in place? Like Tesla’s object recognition in exchange for Google’s map data? Perhaps we should all ask Elon and find out.

    Regardless of the relationship it clearly doesn’t result in near instant map updates, so the streaming architecture is not in place yet.

    So Who’s Going to Drive the Revolution?

    So, net/net — nobody has quite cracked it yet. As far as I can tell a groundbreaking revolution in map making along the lines of what we’re seeing in generative AIs and auto manufacturing has yet to materialize.

    But in time — and probably not too much time — somebody will crack it. There will be enough eyes, there will be enough smart processing and organizations will re-architect their pipelines to enable near real-time updates of all layers of the map independently of one another.

    I can’t wait to see who will be first. 


    1 This 13 minute video is well worth a watch if you’re interested in the technical and financial details of how the Giga Press is benefiting Tesla:

    Credit: The Tesla Space

    2 Acquired by Meta in 2020.

    3 Source: Licarco.

    4 Yeah, I know, I know — the Esri groupies among you will extol the virtues of Esri’s ‘Living Map’, but Esri is not a global map maker. Also, like it or not, there is still significant latency between the time when a change happens on the ground and when it appears in the Esri offering.

  • Watch ChatGPT Plugins in Action. Now Imagine it with Geospatial

    As a follow-up to my post “ChatGPT Releases Brain Implants. GeoSpatial Intelligence is Nigh.” you might want to watch OpenAI co-founder Greg Brockman demo ChatGPT plugins at the recent TED 2023 conference.

    It’s a 15 minute talk with a follow up Q&A from TED’s Chris Anderson.

    Visit this page for more info.

  • Yes, it was an April Fool.

    Alas, there is no ArcGIS for macOS in sight.

    Apple ArcGIS on macOS ad
    Click for hi-res image

    It’s a shame for all those Mac lovers who use ArcGIS. They either have to suffer through using Parallels or, worse yet, endure the ignominy of using a PC.

    Perhaps all of you avid Mac fans could start a campaign to convince Esri to invest?

    For example, perhaps wear one of these rather delightful buttons at your next exciting GIS event?

    ArcGIS on macOS button
    Available for purchase today!

    But in the meantime I’m afraid you’ll have to make do with their breakfast cereal product:

    ArcGIS for Cornflakes
    Click for hi-res image
  • Finally!! About Time!! Announcing: ArcGIS for macOS. 😱

    If your day job is not in the mapping industry then you might find the title of this post a little yawn inducing. But bear with me, this is actually pretty momentous…

    ArcGIS is a product that comes from the largest enterprise mapping technology company on the planet. That company is the Environmental Systems Research Institute, now commonly known as ‘Esri’. People call it ‘ezz-ree’ although for the longest time it was known to employees as ‘E-S-R-I’ or sometimes just ‘The Institute’.

    Esri has both an impressive and illustrious pedigree. Started by Jack Dangermond and his wife, Laura, in 1969, they have built the company into an industry juggernaut. Through Esri’s work Jack has pioneered the geographic approach to technology, developing a foundation on something called ‘GIS’ or geographic information systems.

    While the term ‘GIS’ is meant to be a generic term, it has actually become synonymous with Esri. In other words ‘GIS’ means ‘Esri’ and there are no other significant GIS players in the market.

    Esri ArcGIS
    Credit Esri

    ‘What about Google?’ I can hear you exclaim. Well to Esri, Google is but a pittance. While Google has become a master of consumer maps and navigation they have done relatively little in the enterprise mapping market. Sure, I guess you could say they ‘dabble’, but it’s not a core focus.

    For Esri, mapping technology is central to everything they do. And as a result you will find it is used under the covers almost everywhere — national, regional and local governments, utilities, oil & gas, telecommunications, transportation, banking, insurance, retail, education — to name just a few.

    It’s used by organizations not only to create super detailed maps of places and infrastructure, but more importantly it’s used for geospatial analytics — or what I’ve always liked to call ‘location analytics’.

    You can use Esri’s software to map your cities — parcels, water and sewer lines, roads, bridges and parks. You can use it to figure out the optimal location for a store, a school, a cell tower or a wind farm. You can use it to assess risk or to plan for emergencies. You can use it to optimize emergency response. The list is essentially endless. Any time the question ‘where?’ comes up, Esri has a product for you.

    And since 1969 that list of products has grown.

    In the beginning Esri started with prefacing all their product names with the letters ‘Arc’ — as in ‘arc’, ‘line’ or ‘polygon’. This is much like Apple, who in the Steve Jobs days used to preface all their products with the letter ‘i’.

    First there was ‘ArcView’, ‘ArcMap’ and ‘ArcInfo’ and more recently they’ve settled on prefacing all product names with the word ‘ArcGIS’ 1, for example ‘ArcGIS Pro’, ‘ArcGIS Enterprise’ and ‘ArcGIS Online’.

    I’m actually quite astounded at how many ‘ArcGIS’ products there are now. When I last counted there were 110 of them — they even have a product for breakfast cereals (!!):

    Esri ArcGIS list of products  (1 of 2)
    Esri ArcGIS list of products  (2 of 2)

    But if you look through the list carefully you might notice there’s one product that’s not listed.

    Yes, there is no product for that operating system favored by the many millions of people who use computers from that large fruit company in Cupertino, California.

    It’s true, dear readers, you will not find ‘ArcGIS for macOS’.

    This is surprising, particularly given the popularity of the macOS ecosystem — not to mention its cool factor. It is also very surprising given Esri’s propensity and strategy to ‘get em while they’re young‘.

    I asked Perplexity about how much Macs are favored in universities. Even I was surprised at the number — some 71% of college students prefer Macs over PCs.

    Percentage of students that prefer Mac over PC
    Credit Perplexity.AI

    And real life backs this up. Here’s a screenshot of freshman students attending one of their first lectures at a well known US college:

    Err — I think I can detect just one or two Apple devices in the audience!

    But here’s the exciting news…

    Thanks to some little birdies that have graciously kept me in the loop I can now tell you that your long wait is now almost over:

    And I’m told this isn’t going to be some Windows lookalike hack either. No, it will be fully compliant with all the nitty gritty, pixel-perfect details of the Apple macOS Human Interface Guidelines. It’s also going to be built from the ground up on Metal so it can make full use of Apple’s latest M-series chips. All-in-all it’s going to be gorgeous.

    But wait, I hear you clamoring — just when is the exciting date?

    Well I’m told that in deference to Apple it will be on the anniversary of Apple’s founding.

    Go figure. 🙂


    1 Although when I worked at Esri I sometimes heard people call it ‘ArghhGIS’ in frustration at the complexity of its UI.

  • ChatGPT Releases Brain Implants. GeoSpatial Intelligence is Nigh.

    So, in case you missed it, yesterday there was a momentous announcement from OpenAI. They released “ChatGPT Plugins”.

    These are essentially brain implants that solve the woeful embarrassment that ChatGPT suffers from when trying to answer basic questions about, for example, mathematics or anything geospatial.

    If you want the background on its ineptitude I suggest you read my last post: “ChatGPT (et al) for Geospatial. You Ain’t Seen Nothing… Yet.”

    I can’t emphasize enough what a big deal this new plugin capability is: it’s just like the scene in the Matrix when the character Trinity is essentially given a plugin to learn how to fly a helicopter:

    Only in ChatGPT’s case you can now upload one of these many brains:

    ChatGPT Plugins

    So now ChatGPT can immediately be given the power of any one of these sites, for example Expedia and Kayak for booking travel or OpenTable for finding and booking restaurants.

    But the one I want to focus on is Wolfram.

    For those (few?) of you that might not be familiar, Stephen Wolfram built an amazing site, Wolfram|Alpha, that was released 14 years ago in 2009. One of its key original intents was to be able to answer mathematical questions using a natural language interface. It did so admirably.

    Alas, as Stephen Wolfram recently pointed out, this intelligence didn’t make its way into ChatGPT:

    ChatGPT Failure on Mathematics
    ChatGPT Failure to Compute
    Wolfram|Alpha at Work on mathematics
    The Correct Answer from Wolfram|Alpha

    Stephen Wolfram pointed out ChatGPT’s ineptitude in spades in his article back in January. When I read the article I reached out to Stephen to discuss this in more detail. He put me in touch with Peter Overmann on his team.

    Peter has worked at Wolfram|Alpha for many years, but he’s also worked at TomTom, so he knows geospatial. We had some great conversations. It was Peter who kindly gave me the scoop of what was really going on under the covers.

    It turned out the ideas that I raised in my post were already being worked on by brains exponentially smarter than mine.

    And, as of yesterday, ChatGPT is now also exponentially smarter than it was before:

    Wolfram|Alpha in ChatGPT

    But since 2009 Wolfram|Alpha has grown significantly. It can now answer questions on a whole host of subjects, not just mathematics:

    Wolfram|Alpha capabilities

    And now you can access all these capabilities in ChatGPT.

    A geospatial example:

    Wolfram|Alpha in ChatGPT
    Wolfram|Alpha in ChatGPT - Maps
    Wolfram|Alpha in ChatGPT - Heat Maps
    Wolfram|Alpha in ChatGPT - Map Projections

    To learn more please read Stephen’s extensive new article on this new capability: “ChatGPT Gets Its ‘Wolfram Superpowers’!”

    If you want to experience it yourself I’m afraid you’re going to have to join a waitlist. Alas I don’t have access yet, so I’ve yet to enjoy these new superpowers myself.

    Now while Wolfram|Alpha’s overall capabilities are outstanding, the geospatial capabilities are still, shall we say, somewhat rudimentary.

    But as we all know things are moving fast. Very fast.

    My question is: who will be the first to plug a truly powerful geospatial engine into ChatGPT? Will it be:

    • Wolfram|Alpha extending its geospatial chops?
    • Mapbox?
    • Esri?
    • Wolfram|Alpha wrapping Esri?
    • Some new upstart?

    It’ll be fun to see.

    Stay tuned. I’m sure you won’t have to hold your breath too long.

  • ChatGPT (et al) for Geospatial. You Ain’t Seen Nothing… Yet.

    So with all the recent froth about ChatGPT and Clippy 2.01, err, I mean the new Bing, I thought it might be fun to do a deeper dive and think about how all this might affect the geospatial industry.

    In other words, what does the future hold for ‘Map Happenings’ powered by generative AI?

    In order to write this article I started by doing a little research and investigation. I wanted to discover just how much these nascent assistants might be able to help in their current form. Now unfortunately I don’t yet have access to the new Clippy, so I had to resort to performing my tests on ChatGPT. However, while I suspect the new Bing might provide better answers, it might also decide that it loves me or wants to kill me or something2, so for now I’m happy to stay talking to ChatGPT.

    I picked a number of different geospatial scenarios — consumer based as well as enterprise based.

    The first scenario is based on a travel premise. 

    I imagined I was planning a trip to an unfamiliar city, in this case to Madrid. I was pleasantly surprised with the results — they weren’t too bad:

    ChatGPT - Results from prompt requesting things to do and where to eat

    But if you try using ChatGPT for something a little more taxing than searching all known written words in the universe, like, for example, calculating driving directions, you will quickly be underwhelmed. 

    Take this example of driving from Apple’s Infinite Loop campus to Apple Park. At first the directions look innocuous enough:

    ChatGPT - incredibly bad driving directions from Apple Infinite Loop campus to Apple Park

    However, digging in, you’ll find the directions are completely and utterly wrong.

    It turns out ChatGPT lives in an alternate maps universe. 

    Diagnosing each step:

    1. “Head east on Infinite Loop toward Homestead Rd”: Infinite Loop does not connect to Homestead Rd. Get your catapult!
    2. “Turn right onto Homestead Rd”:  so after catapulting from Infinite Loop over the freeway to Homestead you turn right. OK.
    3. “Use the left 2 lanes to turn left onto N Tantau Ave”: Err, you can’t turn left from Homestead to Tantau … unless the wind blows your balloon east of Tantau.
    4. “Use the left 2 lanes to turn left onto Pruneridge Ave”: Really? Hmm. Wrong direction!
    5. “Use the right lane to merge onto CA-280 S via the ramp to San Jose”: It’s actually I-280, but wait … Pruneridge doesn’t connect to the freeway… get out your catapult again!
    6. “Take the Wolfe Rd exit”: but if you took “CA-280” towards San Jose then you were traveling east, so now you’re suddenly west of Wolfe Rd. The winds must have blown your balloon again!
    7. “Keep right at the fork and merge onto Wolfe Rd”: Ok, I think. 
    8. “Turn left onto Tantau Ave”: You’ll be stumbling on this one. Wolfe and Tantau don’t connect. 
    9. “Turn right onto Apple Park Way”: wait, what? 
    Trying to make sense of ChatGPT’s incredibly bad driving directions.

    But wait, it gets worse:

    Another example of ChatGPT's incredibly bad driving directions.

    ChatGPT runs out of energy at step 47 somewhere in New Jersey, presumably completely befuddled and lost.

    Now this authoritative nonsense isn’t limited to directions.

    Let’s look at some maths3.

    First a simple multiplication:

    ChatGPT - Simple Mathematics

    So far, so good. But now let’s make it a little more challenging:

    ChatGPT - failure with large number

    ChatGPT certainly sounds confident. But is the answer correct?

    Well, here’s the answer you’ll get from your calculator, or in this example, Wolfram|Alpha:

    Wolfram|Alpha's answer to 3 to the power of 73
    Credit: Wolfram|Alpha

    Huh? It looks like ChatGPT not only lives in an alternate maps universe it also lives in an alternate maths universe. 
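
    For what it’s worth, exact arithmetic like this is trivial for a computational engine, or even for a couple of lines of Python, which uses arbitrary-precision integers:

    ```python
    # Python integers are arbitrary precision, so 3**73 is computed exactly
    # (roughly 6.76 x 10**34) rather than 'predicted' the way a language model does.
    print(3 ** 73)
    print(f"{float(3 ** 73):.2e}")   # scientific notation as a quick sanity check
    ```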

    Now the founder of Wolfram|Alpha, Stephen Wolfram, recently authored an excellent and fascinating article about this: “Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT”. In it he’s lobbying for ChatGPT to use Wolfram to solve its alternate maths universe woes.

    Architectural differences between ChatGPT vs. Wolfram|Alpha
    Credit: Wolfram|Alpha

    Stephen points out not only ChatGPT’s inability to do simple maths, but also its inability to calculate geographic distances, rank countries by size or determine which planets are above the horizon.

    ChatGPT vs. Wolfram|Alpha on a geographic distance question
    Credit: Stephen Wolfram

    Stephen’s big takeaway:

    In many ways, one might say that ChatGPT never “truly understands” things

    ChatGPT doesn’t understand maths. ChatGPT doesn’t understand geospatial. In fact all it understands is how to pull seemingly convincing answers out of what is essentially a large text database. You can sort of see this in its response to the question about what to do in Madrid — this is likely summarized from the numerous travel guides that have been written about Madrid.

    But even that is flawed. 

    In order to work efficiently the information store from which ChatGPT pulls its answers has to be compressed. And it’s not a lossless compression. It is therefore vulnerable to the same kinds of side effects as audio, video or images that use lossy compression.

    Ted Chiang covers this in his New Yorker article: “ChatGPT is a Blurry JPEG of the Web”

    Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.

    In other words, don’t let ChatGPT’s skills at forming sentences fool you.

    What’s missing?

    Clearly ChatGPT’s loquacious front-end needs to be able to connect to computational engines. That is what Stephen Wolfram argues for, in his case for a connection to his Wolfram|Alpha computational engine. 

    I can easily imagine a world where a natural language interface like ChatGPT could be connected to a wide variety of computational engines. 

    There might even be an internationally adopted standard for such interfaces. Let’s call that interface CENLI (“sen-ly”), short for “Computational Engine Natural Language Interface”. 

    I challenge folks like Stephen @ Wolfram-Alpha and Nadine @ OGC to push such a CENLI standard. In that way we could build natural language interfaces to all sorts of computational engines (a rough sketch of what such an interface might look like follows the list below). This might include:

    • All branches of Mathematics
    • Financial Modeling 
    • Architectural Design
    • Aeronautical Design
    • Component Design
    • … and — of course — all manner of Geospatial 
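
    To be clear, CENLI is pure speculation on my part, so treat the following as nothing more than a sketch of the shape such an interface might take. Every name and field below is invented for illustration: the language model extracts a structured request from the user’s words, dispatches it to a registered computational engine, and narrates the structured answer that comes back.

    ```python
    # Hypothetical sketch of a 'CENLI'-style exchange. Nothing here is a real
    # standard or a real product API.
    from dataclasses import dataclass
    from typing import Any, Callable, Dict

    @dataclass
    class CenliRequest:
        domain: str                  # e.g. "geospatial", "mathematics"
        operation: str               # e.g. "route", "distance", "power"
        parameters: Dict[str, Any]   # machine-readable, engine-specific inputs

    @dataclass
    class CenliResponse:
        result: Any
        explanation: str = ""        # something the language model can narrate

    # A registry of engines the natural-language front end can dispatch to.
    ENGINES: Dict[str, Callable[[CenliRequest], CenliResponse]] = {}

    def mathematics_engine(req: CenliRequest) -> CenliResponse:
        if req.operation == "power":
            base, exp = req.parameters["base"], req.parameters["exponent"]
            return CenliResponse(result=base ** exp,
                                 explanation=f"{base}^{exp} computed exactly")
        raise ValueError(f"unsupported operation: {req.operation}")

    ENGINES["mathematics"] = mathematics_engine

    req = CenliRequest("mathematics", "power", {"base": 3, "exponent": 73})
    print(ENGINES[req.domain](req).result)
    ```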

    It turns out making a connection between a generative AI and a computational engine has been done already — by NASA. A chap called Ryan McClelland, a research engineer at NASA’s Goddard Space Flight Center in Maryland, has been using generative AI for a few years now to design components for space hardware. The results look like something from an alien spaceship:

    NASA’s AI designed space hardware — Credit: NASA / Fast Company

    Jesus Diaz recently wrote a great article for Fast Company about Ryan’s work:

    NASA is taking generative AI to space. The organization just unveiled a series of spacecraft and mission hardware designed with the same kind of artificial intelligence that creates images, text, and music out of human prompts. Called Evolved Structures, these specialized parts are being implemented in equipment including astrophysics balloon observatories, Earth-atmosphere scanners, planetary instruments, and space telescopes.

    The components look as if they were extracted from an extraterrestrial ship secretly stored in an Area 51 hangar—appropriate given the engineer who started the project says he got the inspiration from watching sci-fi shows. “It happened during the pandemic. I had a lot of extra time and I was watching shows like The Expanse,” says Ryan McClelland, a research engineer at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. “They have these huge structures in space, and it got me thinking . . . we are not gonna get there the way we are doing things now.”

    As with most generative AI software, NASA’s design process begins with a prompt. “To get a good result you need a detailed prompt,” McClelland explains. “It’s kind of like prompt engineering.” Except that, in this case, he’s not typing a two-paragraph request hoping the AI will come up with something that doesn’t have an extra five more limbs. Rather, he uses geometric information and physical specifications as his inputs.

    NASA’s AI designed space hardware — Credit: Henry Dennis / NASA / Fast Company

    “So, for instance, I didn’t design any of this,” [McClelland] says, moving his hands over the intricate arms and curves. “I gave it these interfaces, which are just simple blocks [pointing at the little cube-like shapes you can see in the part], and said there’s a mass of five kilograms hanging off here, and it’s going to experience an acceleration of 60G.” After that, the generative AI comes up with the design. McClelland says that “getting the right prompt is sort of the skill set.”

    What’s really interesting about McClelland’s work is that it is streamlining the long cycle of design -> engineering -> manufacturing. No longer does he need to pass off the designs to an engineering team who then iterates on it and subsequently passes it on to a manufacturing team who iterates even further. No. Now the generative AI tool compresses that process: 

    It does all of it internally, on its own, coming up with the design, analyzing it, assessing it for manufacturability, doing 30 or 40 iterations in just an hour. “A human team might get a couple iterations in a week.”

    Jesus Diaz sums it up perfectly:

    Indeed, to me, it feels like we are the hominids who found the monolith in 2001: A Space Odyssey. Generative AI is our new obsidian block, opening a hyper-speed path to a completely new industrial future.

    So, given that a natural language interface to all sorts of computational engines is both possible and inevitable, what might a natural language interface to a geospatial computational engine look like and what might it be capable of doing?

    First, let’s start with a consumer example. 

    I don’t know about you, but I love road trips. But I abhor insanely boring freeways and much prefer two lane back roads.

    Many years ago when I lived in California I discovered the wonderful world of MadMaps4.

    MadMaps has developed a series of maps for people of my ilk. Originally they were designed for those strange people who for some reason like motorbikes, but for me, at the time when I had my trusty Subaru WRX, they were also perfect.

    You see MadMaps’ one goal was to tell you about the interesting routes from A to B. So, when I was driving back to Redlands from my annual pilgrimage to the Esri user conference in San Diego, I would be guided by MadMaps to take the winding back roads over the mountains. It would take me about twice as long, but it was hellish fun.

    Imagine if the knowledge of MadMaps was integrated into a geographic search engine or your favorite consumer mapping app. And imagine if it also happened to know something about your preferences and interests so that it could incorporate fun places to stop along the way.

    It turns out I’m not the first person to think of this.

    It was only recently that Porsche announced a revamped version of its ROADS driving app.

    Porsche ROADS driving app — Credit: Porsche

    ROADS is a valiant attempt to use AI to do what MadMaps does but in an interactive app. Unfortunately the generated routes are, well, pretty simplistic and not particularly enthralling. They lack the reasoning and context that you get from studying a MadMap.

    However, I don’t think it would take a huge amount of work by the smart boys and girls at Google Maps and Apple Maps to do something similar, but much more powerful. Imagine this prompt:

    “Hey Siri, I’m looking to drive from Tucson to Colorado Springs. I’m traveling with my dog and I’d love to take my time, but I want to do the trip in two days. Can you recommend a route that takes in some beautiful scenery and some great places to eat and stop for good coffee? And by “good coffee” I mean good coffee, not brown water or chain coffee schlock. I’d obviously like to find good places to stop for walks to exercise the dog and I’d love to spend the night at some cute boutique hotel or motel close to some eclectic restaurants.”

    If you try it today5 you will find what first appears to be a good answer, but on closer analysis it’s lacking in detail and is very vague in some places.

    More importantly perhaps: it’s also just a text answer.

    It’s not a detailed trip plan displayed on an interactive map that you can then tweak and edit. In other words, it’s only about 50% of the way there.

    Switching gears, now let’s imagine a natural language interface to a complex geospatial analytics problem, this time applied to business.

    As an example I’ll use the geospatial problem of something called “site selection”. This is a process of determining the best location for some object, some business or some facility. Traditionally this is performed with huge amounts of geospatial data about things like roads, neighborhoods, terrain, geology, climate, demographics, soils, zoning laws … the list goes on.

    Organizations like Starbucks and Walmart have used these geospatial and geo-demographic analysis methods for decades to help determine the optimal location for their next store. Organizations like Verizon have used similar processes to help determine the best locations for cell phone towers based on where the population centers are and what the surrounding terrain looks like.
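
    Under the covers, a lot of traditional site selection boils down to weighted multi-criteria scoring over candidate locations. Here is a deliberately simplified sketch; the criteria, numbers and weights are all made up, and real systems work with far richer geospatial data (drive times, trade areas, cannibalization, zoning and so on):

    ```python
    # Deliberately simplified sketch of multi-criteria site scoring with
    # made-up candidate sites, criteria values (normalized 0-1) and weights.
    candidate_sites = [
        {"name": "Midtown",  "pet_households": 0.72, "competitor_density": 0.40, "rent_index": 0.65},
        {"name": "Suburb A", "pet_households": 0.85, "competitor_density": 0.20, "rent_index": 0.35},
        {"name": "Suburb B", "pet_households": 0.60, "competitor_density": 0.10, "rent_index": 0.30},
    ]

    # Positive weights reward a criterion, negative weights penalize it.
    weights = {"pet_households": 0.6, "competitor_density": -0.3, "rent_index": -0.1}

    def score(site: dict) -> float:
        return sum(weight * site[criterion] for criterion, weight in weights.items())

    for site in sorted(candidate_sites, key=score, reverse=True):
        print(f"{site['name']}: {score(site):.3f}")
    ```

    The hard part, as the Starbucks example suggests, is choosing and validating those weights against the performance of existing stores, which is exactly the feedback loop described further down.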

    This methodology has not been limited to commercial use cases. 

    A long time ago I remember someone performing a complex geospatial analysis on the location of Iran’s Natanz uranium enrichment facility. They looked at things like the geology, the climate, the topography, access to transportation and energy. Using this information they spent a significant amount of time, energy and brainpower to determine other locations in Iran that might have similar characteristics — in other words: where else might Iran be hiding another such facility? I think there were only one or two places that the algorithm found.

    What’s common about all these enterprise use cases is the complexity of getting to the answer. You have to set up all the right databases, you have to invent, develop and test your algorithms. And just like with the design -> engineering -> manufacturing process that NASA faces with component design, there is a feedback loop — for example, one of the challenges for locating a Starbucks is determining exactly what factors are driving the success of its most profitable stores.

    All of this is compounded by the horrible complexity of the user interfaces to these systems. To get the best results you not only need to be well educated in something called ‘GIS’ 6, but it also doesn’t hurt to be an accomplished data analyst. My good friend, Shawn Hanna, who also happens to be a super sharp data analyst, used to work on these site selection scenarios for Petco. He can attest to the complexity of the problem.

    But imagine if instead data analysts could issue a prompt to a geospatial computational engine to help them find the optimal answers more quickly:

    “I’m looking to figure out the best location to open a new Petco store in the Atlanta metropolitan area. I’d like you to take into account the locations of current Petco stores, their sales and profitability and the location of competitive stores. I’d also like you to take into account the demographics of each potential location and match that against the demographics of my best performing stores. Also take into account likely population growth and predicted trends in the respective local economies. And, of course, information on which households own pets. When you’ve derived some answers, match that against suitable available commercial properties in the area. Rank the results and explain why you chose each location” 7

    The trick, as McClelland at NASA says, will be in good prompt engineering.

    And of course, you’ll have to have the confidence that your chatty interface is connected to a reliable, dependable and knowledgeable computational engine.

    It’s not going to eliminate your job, but it sure as hell is going to make you tons more productive.

    We’re not there yet. But it’s coming. 

    Hell, we might even be able to do this:

    Can you fly that helicopter?
    Credit: The Matrix / Warner Bros. Entertainment Inc.

    Footnotes:

    1 For those of you that don’t remember, here is Clippy 1.0 in action: 

    Clippy 1.0

    2 By now many of you will have read Kevin Roose’s conversation with Bing in his New York Times article. If you don’t have access to the New York Times then you can see a reasonably good summary of the conversation in The Guardian.

    3 If you live in the United States, that translates to ‘Math’. Why, I’m not sure. People generally don’t study ‘Mathematic’. Perhaps that’s why people from the US sometimes have a reputation for not being as good at mathematics as people in other countries? They don’t realize there’s a number bigger than one.

    4 Here is one of my favorite MadMaps:

    One of the brilliant back roads maps by MadMaps

    5 ChatGPT’s answer to a road trip challenge. It’s a reasonably good start, but the directions are pretty vague:

    ChatGPT trip from Tucson to Colorado Springs

    6 GIS stands for ‘Geographically Insidious System’

    7 FWIW, here is ChatGPT’s answer to this prompt:

    ChatGPT's answer to a site selection prompt


  • 12 Map Happenings That Rocked Our World: Part 5

    The Dawn of Tube Maps

    So the astute readers among you1 will have realized by now that this series of posts on the 12 Map Happenings that Rocked Our World is slowly advancing through history:

    • Part 1 was about The First Map which was probably invented about 45,000 years ago
    • Part 2 was about The Birth of Coordinates, specifically latitude and longitude, which happened in about 245BC
    • Part 3 was about the invention of Road Maps by the Romans somewhere around 20BC
    • Part 4 was about The Epic Quest for Longitude and how it came to be measurable at sea in 1759

    Today we move forward yet again, this time to the year 1933 and the invention of the ‘Tube Map’.

    First of all though, what the hell is a ‘Tube’?

    Well, if you’re not familiar, please let me enlighten you. 

    The Tube refers to the London Underground, which in 2023 is celebrating its 160th anniversary. 

    ‘Love the Tube’ Roundel – Celebrating the 160th Year of the London Underground
    Credit: Transport for London

    The first line opened on 10th January 1863 between Paddington and Farringdon Street. Initially the trains were powered by steam locomotives that hauled wooden carriages. It wasn’t until 1890 that the first deep-level electric line was opened:

    The first electric London Underground train in 1890 — Credit: Wikimedia

    The London Underground first became known as the ‘Tube’ in 1900 when the then Prince of Wales, Prince Albert Edward (later Edward VII), opened the Central London Railway from Shepherd’s Bush to Bank. This line was nicknamed the ‘Twopenny Tube’2,3.

    Many maps of the Tube were created, the first being in 1908:

    London Underground Map from 1908 — Credit: Darien Graham-Smith

    It wasn’t until 1949 that the Tube Map that we all know and love truly came into being4.

    The map was created by one Henry Charles Beck (4 June 1902 – 18 September 1974), a.k.a. Harry Beck.

    Beck’s map was first published in 1933:

    Beck’s First Work, Published in 1933

    But it wasn’t until 1949 that Beck was completely satisfied with the design:

    Harry Beck’s Masterpiece — Map of the London Underground in 1949 — Credit: Darien Graham-Smith
    Harry Beck (1902-74) — Credit: Wikimedia

    Beck had created something of beauty and it was truly a game changer: eliminating all extraneous information — even topography — to create the most simple and easy-to-understand map you could possibly achieve. Jony Ive would have been proud.

    The history of how this map came to be and Beck’s trials and tribulations to get it approved is a story that has been told many times and, I hasten to add, with great comedic wit and wisdom. I could never come close to doing these prior works justice. Instead please let me point you to some delightful muniments worthy of your time:

    One of my favorites is by Darien Graham-Smith who wrote about the History of the Tube Map in his article for the Londonist. In this article you will see the progression from the messiness of the pre-Beck maps to Beck’s 1949 masterpiece.

    Another of my favorite history lessons is given by the amazing Jay Foreman who created two delightful 10 minute videos. They are full of acerbic British wit and most definitely a ‘must watch’:

    The Tube Map nearly looked very different — Credit: Jay Foreman

    What went wrong with the Tube Map? — Credit: Jay Foreman

    So how much did Beck’s map influence the rest of the world? You only have to take a look at the official subway maps from around the globe to see.

    Even the city of Venice has adopted Beck’s style for its official maps of Venice’s water taxi network:

    Venice Water Taxi Network — Credit: Actv

    Now while I prattle on about Harry Beck, I’m sure the map purists among you are probably whinging5 that the London Tube Map is not a map, it’s a schematic. Well, in the sense that the topography was trounced by topology that may strictly be the case. But the Tube Map accomplished what so many of today’s ‘maps’ fail to do — distilling the horrible complexity of the real world into the atomic essence of the information you really need. And, let’s not forget, they still depict space, albeit without the equal scale of a traditional map.

    But where is it all going?

    Well there is one land yet to be conquered — that fair city of the Big Apple, which so far has steadfastly refused to adopt Beck’s non-topographic mantra:

    New York City Subway Map — Credit: MTA

    And, I’m sorry to say that since Beck’s passing the London Tube Map itself has regressed. Somehow the attractive simplicity of Beck’s finest work in 1949 has now been lost to complexity and incoherence:

    London Tube Map in 2023 — Credit: Transport for London

    However, in my research I did come across one bright light. This is a map of the roads of the Roman Empire, in what is now a very familiar form:

    Roman Roads in the Style of a Transit Map — Credit: Sasha Trubetskoy / sashamaps.net

    So, perhaps it is the Romans we should thank after all? 😉


    Footnotes:

    1 By suffering through my blogs you have to be somewhat astute, or at the very least, patient and tenacious

    2 Two things here:

    • ‘Twopenny’ perhaps unsurprisingly means two pence. This was the initial cost of a ticket on this line
    • To those unfamiliar with proper British pronunciation, ‘twopence’ is actually pronounced ’tuppence’ not ‘two pence’

    3 The term ‘Tube’ could also have come from the fact that, well, the tube looks very much like a ‘tube’. It could also have come from the concept of London’s Victorian Hyperloop, run by the London Pneumatic Despatch Company between 1863 and 1874.

    A London Tube train emerging from the Tube — Credit: Wikimedia

    4 You could argue that Beck’s first map actually dated from his 1931 sketch, drawn in pencil and colored ink on squared paper in his exercise book:

    Sketch for a new diagrammatic map of the London Underground network by Henry C. Beck in 1931
    Credit: Transport for London and the Victoria & Albert Museum Collection

    5 ‘Whinging’ — pronounced ‘winge-ing’ (like hinge-ing) — is British for whining in a particularly irritating way. In other words, it’s much worse than simply whining. 


  • Apple Business Connect: A cure for Apple Maps’ weak spot?

    Last week Apple issued a press release for a new tool, something they call ‘Apple Business Connect’ and it’s tightly linked to Apple Maps. 

    Press releases about Apple Maps don’t come particularly frequently from Apple. If you include last week’s release there have been just four dedicated press releases about Apple Maps since 20161. The prior one was in September 2021, announcing their 3D city maps.

    ‘Apple Business Connect’ seems like a very specialized topic. Almost too much in the weeds for Apple to stoop so low and give it a press release.

    So what’s the big deal?

    Well, now businesses and organizations are being given the opportunity to “Put your business on the map.”

    Apple Business Connect - Put Yourself on the Map — Credit: Apple
    Put Yourself on the Map — Credit: Apple

    Huh, but weren’t all businesses on the map already?

    Well, not always. 

    It turns out getting all those businesses on the map is hard — super hard. And it’s even harder to keep all the information about them current. 

    Having accurate, complete and up-to-date information about businesses is also absolutely crucial to the success of your map product: it doesn’t matter how pretty your map looks, it’s pretty much useless if you can’t find the organization you’re looking for.

    The issue of how hard it is to keep the information up-to-date quickly became apparent with the onset of the pandemic. Restaurants and other businesses were suddenly closed or suddenly had very different operating hours. And it was extremely difficult to keep track of all the changes. 

    Keeping this information current is a constant struggle for all map makers, and Apple is far from immune. 

    So how does one even begin to address this challenge?

    For you millennials in the audience, let me start with a little history:

    Back in the old days we had something called the ‘Yellow Pages’. These were big printed books published by your national or regional telephone company. The yellow pages listed all the businesses in your city or region and complemented the ‘white pages’ which contained the residential listings2.

    Yellow Pages were a big business: they generated a ton of advertising revenue for the phone companies. As a business you could buy a block of space — advertising your trade, your shop or perhaps your legal practice. If you really wanted to grab someone’s attention you bought a full page ad at great expense and renamed your business so it started with the letter ‘A’ — or indeed many As — so as to increase the likelihood that your listing was the first a prospective customer would see.

    For you Millennials: This is what a Yellow Pages book looked like — Credit: Wikimedia
    A Typical Yellow Pages Ad for a Lawyer — Credit: Movie Posters USA

    Being big, heavy and expensive to produce, the phone books were published just once a year.

    In the 1990s, with the advent of mobile phones and the quickly growing popularity of the internet, the business models of the phone companies began to change. The data started to move online. Suddenly the world became awash with something called “Internet Yellow Pages”. Back in their heyday Internet Yellow Pages were a key feature of both America Online (AOL) and Yahoo! The legacy of this era lives on today, for example with “Pages Jaunes” in France, but I’m pretty certain almost nobody uses it.

    The issue in the 1990s was the currency of the data. These digital yellow pages were updated using the same low cadence methodology as had been used for decades with the printed yellow pages. The publishers would proudly tell you: “We call every business once per year!”  😱

    Moreover, as these companies were making money from advertising, they were far more concerned with getting another year’s revenue from the lawyers, locksmiths & plumbing companies than they were about deleting listings for organizations that were no longer in business. So not only was there a currency issue, there was also a quality issue.

    Back in the heady days of the dot com boom in the late 1990s I was one of the people at MapQuest that had to deal with these companies. Let’s just say that they didn’t move at the speed of the internet. 

    I remember dealing with all the various companies operating in the US at that time — InfoUSA, Dun & Bradstreet and Database America to name just a few — trying to understand their processes and their data quality. 

    A quote from a salesman at Database America sticks with me still to this day: 

    “It’s not a question of how good these databases are, it’s a question of how bad they are!”

    Monte Wasch, c. 1995

    So what about today? How do mapping organizations like TomTom, HERE, Google and Apple Maps keep their own ‘business listings’ current?

    If you dig a little you can quickly find out that they don’t do all the work by themselves. And that’s true even for Google. It’s a massive aggregation and collation of data from dozens and dozens of sources. To get an idea of what sources are used you simply have to find the ‘acknowledgments’ page for each product. For example, here is the acknowledgements page for Google Maps’ business listings and here is the same page for Apple Maps3. These pages don’t list all the organizations that contribute data, but they list many of them.

    At its inception Apple Maps relied solely on third parties, the most prominent being Yelp. Unlike Google and unlike Facebook, Apple has never seriously been collecting data about businesses. 

    That is until fairly recently. 

    It all started a couple of years ago in the latter part of 2020. Apple Maps suddenly gave users the ability to rate businesses in Australia as well as upload photos. It wasn’t long before this ability was extended to many more countries. This didn’t mean Yelp and other partners were suddenly swept aside, but it was a telltale sign that Apple was beginning to shift towards a homegrown solution. 

    Of course Google had taken the same approach many years before. It started with Google Local in 2004 and, via a long, winding and horrendously convoluted road, to the launch of Google Business Profile in November 2021:

    The Evolution of Google Business Profile — Credit: Bluetrain

    Due to the enormous popularity of Google search and Google Maps, businesses knew that they had to be found on Google and that they needed to be visible on Google Maps. Google didn’t have to do much to encourage businesses to seek out the page on Google where they could provide the information. Today Google Business Profile offers a myriad of options to enable businesses to not only add or correct basic information, but enrich it with details to entice people to visit:

    Google Business Profile Marketing Page — Credit: Google

    So what is Apple Business Connect? 

    Well, it’s taken them a while — err, 19 years4 — but it’s actually Apple’s response to Google Business Profile. 

    Like Google Business Profile you can add your business if it’s not listed, correct information if it’s wrong and enrich your listing with things like official photos, menus, special announcements and offers.  The information you provide doesn’t just make its way to Apple Maps, but it also gets shared across the Apple ecosystem to services like Siri. Similar to Google Business Profile, Apple Business Connect also provides access to an analytics dashboard so you can see how users are interacting with your listing. 

    But here’s the $64 million question: will businesses even realize that Apple Business Connect exists?

    The problem — of course — is all about mindshare. 

    In most countries Google Maps is nearly always top of mind5. So much so that many iPhone users will swear to you that they use nothing but Google Maps, but when you ask them to point to the icon of the app they use it turns out it’s not Google Maps, it’s Apple Maps. 

    So will the owner of Joe’s pizza parlor even think about Apple Maps, let alone go on a hunt for Apple Business Connect? 

    I think we all know the answer.

    ‘No.’

    Not unless Apple starts a major campaign to significantly increase the awareness of Apple Maps and Apple Business Connect. 

    But how?

    It’s extremely unlikely Apple would start a massive billboard advertising campaign. Even if they could foist the costs of such a campaign on carriers, I don’t think this would ever happen.

    A more logical approach might be to promote Apple Business Connect as part of Apple Business Essentials, a program that helps organizations optimize their use of the Apple devices they deploy at work.

    Or perhaps Apple Business Connect could become a more prominent feature of Apple Pay, for example in the promotional pages that help businesses learn more about Apple Pay and how to set it up:

    Apple Pay Marketing Page — Credit: Apple

    A conjecture that seems to me to be far more likely, however, is that Apple Business Connect is just the start. The rumor mill has been rumbling about the likelihood of ads coming to Apple Maps. While I have no information to substantiate or refute such rumors, I wouldn’t be at all surprised if Tim and Luca would salivate at the prospect of recouping some of their massive geospatial investments.

    Then promoting Apple Business Connect in order to drive more accurate, more complete and more up-to-date business listings in Apple Maps would be easy. They could just make use of unsold ad inventory.

    One thing is for sure, however: Apple Business Connect is not a case of “if you build it, they will come”.

    Let’s all stay tuned, ‘cos Apple is going to have to do something big to make your average Joe aware.


    Footnotes:

    1 Links to Apple press releases about Apple Maps:

    2 In some cases there was also something called the ‘blue pages’ for government listings

    3 To get to this page on iOS, open Maps, tap the ‘choose map type’ button, then tap on the link at the bottom of the screen: ‘(c) OpenStreetMap and other data providers’

    4 Google Local launched in 2004. Apple Business Connect launched 2023. 

    5 With perhaps the exception of China, Russia and South Korea

    Acknowledgments:

  • The Overture Maps Foundation:  Yet Another Global Map. But Will it Fly?

    In case you didn’t realize, we live in a multiverse1 of global street maps.

    It all started back in the early 1980s in the offices of two startups who were both based in Sunnyvale, California. One was Etak, the original pioneer of in-vehicle navigation systems. The other was a little company called Karlin & Collins. 

    In the case of Etak, founder Stan Honey and angel investor Nolan Bushnell had the vision of building a ground breaking navigation system. Stan knew they had a large number of hard problems to solve in order for the system to be successful, and the need for a digital street map was only one of them. In the very early days Etak somewhat naively thought, “Maps? That’s the easy part — we’ll just get those from the government!”

    The Etak Navigator in 1985

    Their assumption wasn’t totally lacking in judgement. 

    It turns out that in 1965, almost twenty years before Etak was founded, the US Census Bureau had made the case for building a digital street map of the USA in support of the 1970 census. They called it GBF-DIME. The Bureau was a visionary of its time, realizing that such a map could not only be used in support of tabulating the national census, but it could also be used in many other areas, including education, transportation planning, emergency services and urban planning2.

    Extract from US Bureau of the Census Paper Extolling the Virtues of a Digital Map
    Credit: US Bureau of the Census (and Google for digitizing)

    Unfortunately Etak quickly came to realize that these US Census Bureau ‘stick’ maps didn’t quite meet the requirements of a navigation system. The data contained little information about the curvature of the roads, and highways were barely digitized. The quality of road connectivity — technically known as its topological correctness — left a lot to be desired. It was this hard reality that became the catalyst for Etak to get into the digital map business. 

    The other Sunnyvale start-up, Karlin & Collins, got its start in 1985 not because they’d invented a James Bond–like navigation system like Etak, but because one of their founders, Galen Collins, had got lost driving in the Bay Area. Collins also saw the value in navigation, but focused on a much harder problem: developing a system that could provide turn-by-turn directions in addition to the map-based guidance that the Etak Navigator provided. Collins’ desire for turn-by-turn directions added a whole new level to the requirements — not only did you need to collect all the road geometry and street addresses, but now you also had to collect information about turn restrictions and one-way streets. GBF-DIME definitely didn’t have that!

    While Etak and Karlin & Collins discussed cooperating a number of times they rapidly became competitors.

    Both companies realized ‘data was king’ and both companies started digitizing — the same cities, the same neighborhoods, the same streets, the same addresses. Not only in North America, but in Europe and Asia. All at huge expense. 

    Time moved on. 

    Etak was sold to Rupert Murdoch’s News Corporation who later sold it to Sony, who sold it to Tele Atlas. Tele Atlas was acquired by TomTom. 

    Karlin & Collins went through a series of rebrands, first to Navigation Technologies, then to NAVTEQ and later to HERE. HERE is now privately held by a number of corporate investors including BMW, Audi, Mercedes, Mitsubishi, Intel, Bosch and NTT. 

    The Genesis of TomTom and HERE from Etak and Karlin & Collins

    Both TomTom and HERE built a global map database. Both developed a successful business licensing map data to automotive OEMs. With the advent of the internet, they also licensed their data for use on the web, initially to a fledgling web mapping company called MapQuest3. The MapQuest site relied solely on map data licensed from third parties. This ultimately became an opportunity for Google. They swooped in, launching Google Maps in 2005. MapQuest was left to atrophy by its new parent, AOL, who remained completely distracted by its acquisition of TimeWarner.

    But even Google had to rely on third parties for digital map data. Google initially chose NAVTEQ as their primary source and later switched to Tele Atlas. But on October 7, 2009 Google made what was to be a very significant change. Hidden in their announcement of their new “report a problem” feature for Google Maps was a move that would send shock waves across the mapping industry. Google had dropped the use of Tele Atlas data in the USA and had replaced it with their own map. This became the genesis for a third global street map. Today Google Maps continues to maintain a map of the entire planet, albeit relying on the foundation of many third party datasets.

    Meanwhile in 2004, around the same time that Google Maps got its start, an English gentleman named Steve Coast became frustrated with the UK’s national mapping agency, the Ordnance Survey. Unlike the US government, who released all their geospatial data for free with no license restrictions, the Ordnance Survey insisted on (significant) license fees. In response Steve launched the OpenStreetMap (OSM) project and two years later established the OSM Foundation. It got off to a slow start, but a few key catalysts gave it the momentum it needed:

    • Contributions from organizations like AND (now Geojunxion), who provided some basic road networks; integration of US Census Bureau street maps
    • Access to aerial imagery, providing the necessary backdrop for map editing, initially provided by Yahoo! and later Microsoft Bing Maps 
    • Money. In 2012 Google made the decision to start charging for access to its Google Maps API. The original catalyst for OSM was the lack of free access to good map data. Google’s move to add a paywall to its APIs added fuel to the fire. It precipitated moves by the likes of Foursquare, Wikipedia and AllTrails to switch from Google Maps to OSM. Many others followed.
    • ‘Paid Editing’ — whereby corporations funded enormous numbers of edits to OSM. This began in 2017 and started to mushroom in 2019:
    Growth of Paid Editing Teams in OSM — Credit: Jennings Anderson

    I’m not sure if Steve Coast’s original aspiration was to develop the ‘Wikipedia of Maps’, but it certainly turned out that way. Thanks to all the hard work of its millions of contributors, today OSM provides a beautifully rich global map:

    OSM Map of the Berlin Zoo — Credit: Best of OSM

    Some nine years after Google launched its own map of the US in 2009, Apple Maps embarked on a similar journey, launching its home grown map of northern California in September 2018. Today Apple Maps has extended its coverage to many countries, including the US, Canada, most of western Europe, Australia, New Zealand, Israel and Saudi Arabia4. Filling in the gaps with OSM and other third party data, Apple effectively has its own global street map5.

    But it doesn’t stop there. In the enterprise mapping world Esri has not been standing still. Esri created a global atlas which they call the “ArcGIS Living Atlas of the World”. Just like a paper atlas it contains many maps. And included in the list of maps is a highly curated global street map built from collating and aggregating numerous sources from around the world. The purpose behind this map is a little different from the other players. It’s designed to help Esri users get more value out of their investment in ArcGIS. If you want to be cynical — it’s to keep their users in the ArcGIS ecosystem.

    So let’s review our multiverse of global maps. In no particular order:

    • TomTom
    • HERE
    • Google Maps
    • OSM
    • Apple Maps6
    • Esri ArcGIS Atlas of the World7

    So now we have half a dozen organizations, all creating essentially the same thing. A global street map of the planet.

    It’s as though there are six different organizations creating six separate sets of identical roads for you to drive on. Or perhaps it’s like having six different electrical companies, each creating an entirely separate electrical supply network to your house. Is that crazy or what?

    But wait — there’s more!

    Ladies and Gentlemen, now we also have the Overture Maps Foundation!

    The Overture Maps Foundation
    Credit: The Linux Foundation

    The Overture Maps Foundation (OMF) was officially announced by the Linux Foundation on December 15, 2022 and has Amazon Web Services, Meta, Microsoft and TomTom as founding members. Clearly some heavy hitters. Together they will be “Powering current and next-generation map products by creating reliable, easy-to-use, and interoperable open map data”. 

    Err, so how and why did this happen?

    Well, having talked to a few people in-the-know, I think I can distill it down to three things:

    • Money
    • Control
    • Interoperability   

    Let’s Start with Money

    Building a high quality global street map from scratch is expensive. Super expensive. As I’ve said in prior posts: you had better start with a number greater than 1 that ends with the letter ‘B’. The hard work only starts after you’ve built the map. Now you’ve committed yourself to spending beaucoup bucks to maintain it. Only a very few companies on this planet have the financial means to do this and even they are under extreme pressure.

    And as it stands in today’s economic climate pressures are now much, much greater. 

    Stock prices of all companies in this business have dropped — and for some, precipitously:

    Stock price drop of Apple, Microsoft, Alphabet, Amazon, Meta, TomTom since all time highs
    See Footnote 8 for Details

    For a comparative benchmark consider the fact that the NASDAQ composite has dropped about 35% since its all-time high in November 2021. Alphabet, Apple and Microsoft are in this ballpark, but Amazon, Meta and TomTom have performed decidedly worse.

    And there have been some hard realities for each of the OMF founding members:

    • Microsoft: Back in the Ballmer days Microsoft was fairly bullish about mapping (remember ‘Bing Maps’?9). They even started going down the path of building their own map. It wasn’t until Satya took over that reality hit: the bulk of Bing Maps’ assets were sold to Uber. Uber got serious about maps for a while, but then their own reality hit. Let’s just call that one ‘Travis’.  
    • TomTom: Since the heyday of a $2,000 navigation option for your shiny new vehicle, TomTom’s world has been shrinking inexorably. They got some solace from licensing their data to Google Maps and later to Apple Maps, but now the Google revenue is gone. And Apple Maps is continuing to expand its coverage, so TomTom’s revenue from Apple has got to be shrinking fast.
    • Meta: For Meta, well, we all know it’s not been easy. 13% of their staff got laid off in 2022. I’m sure that just like Satya, Zuckerberg has no appetite for investing heavily in a global map. 
    • Amazon: In 2022 Amazon began implementing the largest cuts in its 28 year history, and Jassy has just announced there will be 18,000 layoffs. Enough said.

    So clearly each founding member must see OMF as a way to combine efforts and reduce costs. 

    Second Topic: Control

    Clearly Google Maps is a factor. It was a big factor for OSM getting its initial traction. But what’s wrong with OSM? Doesn’t that give these players what they need? 

    Well apparently not.

    The founding members see specific challenges with OSM and they all relate to strategy and control. 

    While they can have some influence, no one company or group of companies that works with OSM has the ability to:

    • Direct OSM’s strategy: what information is mapped, where it is mapped and in what order 
    • Define how information is represented in OSM: each country or region can effectively define their own data models independently of other countries
    • Set QA processes: OSM leans toward manual and ground-truth verification processes; using input from massive sensor networks (e.g. vehicle sensors) to detect change or map errors is not widely endorsed
    • Prioritize internationalization: not surprisingly local map editors tend to favor their local language (thus all the German labels in the OSM map of the Berlin Zoo above)
    • Prevent vandalism: whereby malicious edits make their way into a widely published product

    The last one is interesting. It caused a number of companies to get together to launch the Daylight Map Distribution organization, which essentially puts OSM data through a data scrubber. Every day millions of contributions are made to OSM by thousands of people and it’s impossible to check everything in real time. To quote Daylight:

    “Some of these contributions may have intentional and unintentional edits that are incompatible with our use cases.”

    In other words: they’d cause a major PR headache for any company that used the data.

    My favorite piece of map vandalism actually took place not in OSM but in Google Maps, back in 2015. Some enterprising chap contributed the following edit:

    Award for the Most Outstanding Piece of Map Vandalism — Credit: Google Maps

    Last Topic: Interoperability… 

    I whined about this in my post ‘Why Geospatial Data is Stuck in the Year 1955’. Given OMF’s recent announcement and its planned working group on data schema it turned out to be rather prescient. They clearly see the same issue.

    The thing is it’s still very difficult to use data built by one mapping organization in another mapping system. 

    So, for example, the way in which the city of Los Angeles defines, say, their data for building addresses is very different from the way that the city of Turin or Osaka might do it. So somebody building a global map has to deal with all these differences. My analogy is there is no equivalent to a standard shipping container in the geospatial world. As a result everyone is hurting — it’s incredibly inefficient and it’s a huge impediment to progress. 
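
    To make the pain a little more concrete, here’s a minimal sketch of what every global map maker ends up writing, over and over, for every source they ingest. The city labels and field names below are entirely made up for illustration; real municipal schemas vary even more wildly:

    ```python
    # Hypothetical address records -- the field names are invented for
    # illustration, not the real schemas published by any city.
    city_a_record = {"HOUSE_NO": "221", "ST_NAME": "BAKER", "ST_TYPE": "ST", "ZIP": "90001"}
    city_b_record = {"civico": 221, "via": "Via Baker", "cap": "10100"}

    def normalize_city_a(rec):
        """Adapter #1: map one source schema into a common address model."""
        return {
            "number": rec["HOUSE_NO"],
            "street": f'{rec["ST_NAME"].title()} {rec["ST_TYPE"].title()}',
            "postcode": rec["ZIP"],
        }

    def normalize_city_b(rec):
        """Adapter #2: a completely different schema, same target model."""
        return {
            "number": str(rec["civico"]),
            "street": rec["via"],
            "postcode": rec["cap"],
        }

    # Every global map maker writes -- and forever maintains -- one of these
    # adapters per source. Multiply by thousands of sources and you can see
    # why there's no 'standard shipping container' in geospatial.
    print(normalize_city_a(city_a_record))
    print(normalize_city_b(city_b_record))
    ```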

    In my mind map-editing tool makers should enforce standard data models on their users — or at least very strongly encourage it — and make it insanely easy for their users to adopt.

    But alas, that is not the case.

    The resulting pain is thus felt by any organization that wants to use a high quality global map in their product.

    So, in summary, the reasons for the birth of OMF seem to be valid and defensible. 

    But the Question is, Will it fly?

    Let me start by pointing out that the ask of OMF is high. 

    If you want a proper seat at the table — by that I mean a seat on the steering committee — get ready to cough up $3M per year and to dedicate at least 20 full time engineers. If you assume the fully loaded cost of an engineer is, say, $250,000 per year, then you’re essentially being asked to contribute $8M per year. No small chunk of change even for a wealthy organization.

    At the same time, OMF is keeping its focus very narrow: streets, building outlines and basic information about places (or ‘POIs’). As far as I can tell they’re not even focused on street names or addresses at the outset. This narrow focus will increase the chances of success, but will this shallow foundation be enough to be useful? I guess you have to start somewhere.

    There’s also the complex question of how the data will be licensed and what effect it will have on potential data contributors. This is too big a topic to cover in this post, but I suspect there will be some practical and very real challenges, particularly around “share alike” clauses.

    To succeed OMF is going to need a lot more participation. 

    I think to increase the chances of success governments will need to contribute data at a frequent cadence and in volume. The good news is there is no membership fee for governments. But this isn’t a case of “if you build it they will come”. Additional incentives will be required as, like everyone else, governments are strapped for resources. Do they have the time or the money to contribute? What’s in it for them? So far OMF has not made any grand announcements about sharing back with the communities it serves.

    What about the other global map makers? Will they join?

    So my final question is this:

    OMF — is it ‘OMFG’ or just a big ‘MEH’?

    Grab a drink. Get the popcorn. Let’s all watch and see.


    Footnotes:

    1 Note: I said ‘multiverse’, not ‘metaverse’

    2 See ‘The GBF/DIME System’ published by US Bureau of the Census in 1978. Digitization courtesy Google. 

    3 At this time HERE was known as NAVTEQ and Tele Atlas was yet to be acquired by TomTom. There was also a third company in the mix that licensed map data, Geographic Data Technology a.k.a. GDT. It was acquired by Tele Atlas in 2004. 

    4 See Apple Maps expansions, courtesy of Justin O’Beirne:

    Apple Maps Expansions — Credit: Justin O’Beirne

    5 It is interesting to note that Apple is one of the top ‘Paid Editors’ on OSM. For more details see Jennings Anderson’s 2021 Update on Paid Editing in OSM.

    6 I know, I know — you could argue that Apple Maps is really a hybrid map and not a true global map of its own making.

    7 Unlike the other players Esri is not creating any map data from scratch.

    8 Stock table:

    Stock price drop of Apple, Microsoft, Alphabet, Amazon, Meta, TomTom since all time highs

    9 OMG — it still exists!

    Acknowledgements:

  • 12 Map Happenings That Rocked Our World: Part 4

    The Epic Quest for Longitude

    Let me start by making one thing crystal clear: this story is about one of civilization’s most epic quests. 

    And, being the learned reader I’m sure you are, you may already be familiar with it. But even if you are, I encourage you to read on as I will attempt to provide a slightly different perspective to this amazing tale.

    I’d like you to join me in a journey back to the early 1700s and, to be specific, the year 1707.

    1707 was the year of the Scilly Naval Disaster, which involved the wreckage of four British warships off the Isles of Scilly in severe weather. It is thought that up to 2,000 sailors may have lost their lives, making it one of the worst maritime disasters in British naval history1. The disaster was likely caused by a number of factors, but one of them was the ships’ navigators’ inability to accurately calculate their locations. Had they been able to determine their position they might not have run aground and disaster might have been averted.

    But knowing your location at sea wasn’t just about avoiding catastrophes. It was also about money. At this time it was very clear that any nation that could solve this problem could rule the economies of the world.  

    The French and the British certainly knew this full well. Observatories were built in Paris and Greenwich2 with the primary purpose of determining whether it was possible to calculate your location at sea by viewing the location of stars and the moon. 

    So let’s start with the basic question: without the luxury of modern technology like GPS, just how could you determine your location in the year 1707?

    As I’m sure you know, location is all about coordinates, and specifically about measuring your latitude (how far north or south you are) and your longitude (how far east or west you are). 

    Before the advent of satellite-based positioning systems in the late 20th century, the method for determining your latitude and longitude was horribly complex. Furthermore, it was far easier to determine your location on land than it was at sea. 

    At sea a process of ‘dead reckoning’ was commonly used. Starting from a known location on land, for example a port, you could measure your direction of travel and your speed. By plotting these movements on a map you could continually update your location. Your direction of travel was measured using a compass and your speed was measured using a device called a ‘log and line’:

    A triangle of wood, called a log, was attached to a knotted line of rope. The knots in the rope were tied 47 feet 3 inches apart. The log was thrown overboard and the speed of the ship was measured by counting the number of knots that passed through the sailor’s fingers in the time it took for a 28 second sandglass to empty. This would thus give the speed of the ship in knots — also known as nautical miles per hour3.

    Log and Line — Credit: Royal Museums Greenwich
    A 28 Second Sand Glass — Credit: Royal Museums Greenwich
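
    If you’d like to check the arithmetic on that knot spacing, here’s a quick back-of-the-envelope calculation. It assumes the modern nautical mile of 6,076 feet; the 18th century value was slightly different:

    ```python
    # Quick check of the 47 ft 3 in knot spacing on a log line.
    NAUTICAL_MILE_FT  = 6076   # modern definition, in feet
    SANDGLASS_SECONDS = 28

    # Distance a ship travels in 28 seconds at exactly 1 knot (1 nautical mile per hour):
    spacing_ft = NAUTICAL_MILE_FT / 3600 * SANDGLASS_SECONDS
    print(f"{spacing_ft:.1f} feet")   # ~47.3 ft, i.e. roughly 47 feet 3 inches
    ```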

    However, the use of a log and line did not give an exact speed measure. The navigator had to take into account:

    • The flow of the sea relative to the ship
    • The effect of currents
    • The stretch of the rope
    • The inaccuracy in the time measurement as changes in temperature and humidity affected the accuracy of the sandglass

    Even the compass had flaws — it measured magnetic north, not true north. 

    Therein lies the issue with dead reckoning. Errors in any dead reckoning system build up over time, and after a while you no longer have a good idea of your position. To eliminate the error you have to reestablish your precise location using some alternate method4.
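
    For the programmatically inclined, here’s a minimal sketch of the bookkeeping a navigator was doing by hand. It’s a crude flat-earth approximation for illustration only, not how positions were actually plotted on a chart:

    ```python
    import math

    def dead_reckon(lat, lon, heading_deg, speed_knots, hours):
        """Advance an estimated position by one dead-reckoning leg.

        A simplified flat-earth approximation; a real navigator also had to
        allow for currents, leeway, rope stretch and magnetic variation.
        """
        distance_nm = speed_knots * hours
        d_lat = distance_nm * math.cos(math.radians(heading_deg)) / 60.0
        d_lon = (distance_nm * math.sin(math.radians(heading_deg))
                 / (60.0 * math.cos(math.radians(lat))))
        return lat + d_lat, lon + d_lon

    # Start from a known port and keep updating. Any error in the measured
    # heading or speed is carried forward into every subsequent estimate --
    # which is exactly why the position had to be re-established by other means.
    lat, lon = 50.1, -5.0                  # somewhere off Cornwall
    for _ in range(10):                    # ten four-hour legs heading due west
        lat, lon = dead_reckon(lat, lon, heading_deg=270, speed_knots=6, hours=4)
    print(round(lat, 2), round(lon, 2))
    ```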

    In the early 1700s the method for determining your latitude wasn’t easy, but it was possible:

    Navigators knew that the height of the sun above the horizon was different according to how far north you were. At noon on the equator the sun was very high (more or less directly overhead) whereas at noon in the more northerly latitudes the sun was much lower. By measuring the height of the sun at noon — not easy in a ship rolling on the ocean waves — you could determine your latitude. This was done using an instrument called a cross-staff5 — although at your peril. The instrument could easily bruise you in the eye as the ship rocked and you might also be blinded as you stared at the sun. But you could measure it. Just. To get a more accurate reading you also needed to refer to complex tables, also known as ‘almanacs’, that you used to adjust your measurements to take into account the tilt of the earth and the time of year6.

    A Cross-Staff from 1804 — Credit: The Mariners’ Museum and Park

    At night, assuming it is cloudless, it is also possible to measure latitude by measuring the angle to the North star. 
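
    In modern terms the noon sight boils down to a one line formula. Here’s a simplified sketch that ignores all the finer corrections a real almanac provided and assumes a northern hemisphere observer with the sun to the south:

    ```python
    def latitude_from_noon_sun(sun_altitude_deg, solar_declination_deg):
        """Estimate latitude from the sun's altitude at local noon.

        solar_declination_deg is what the almanac tables provided:
        +23.45 at midsummer, -23.45 at midwinter, in between otherwise.
        """
        return 90.0 - sun_altitude_deg + solar_declination_deg

    # A noon sight of 62 degrees taken at midsummer puts you at about 51.5 degrees north:
    print(latitude_from_noon_sun(62.0, 23.45))   # -> 51.45
    ```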

    Measuring latitude was difficult but possible — but in comparison to measuring longitude it was easy peasy. The issue at the time of the Scilly Naval Disaster in 1707 was that no reliable method to measure longitude had been devised. 

    Let’s pause for a moment and compare the problem of accurately measuring your longitude to the problems our world faces today. Perhaps it would be equivalent to the problem of creating a cheap and reliable fusion reactor? The impact of both solutions on society is equally huge.

    Solving the longitude problem ultimately helped prevent disasters, enabled global travel and, more importantly, bootstrapped global trade. Solving the problem of generating energy using a fusion reactor would similarly help prevent (climate) disasters, boost global production efficiency (through cheap energy) and boost global trade. 

    Nations at the time knew how critical it was to solve the longitude problem. In the hopes of finding a solution they used the carrot of large prizes, not dissimilar to the Lunar XPRIZE that Google established in 2007.

    Spain tried it first by offering one in the mid 1500s. In the 1600s Holland offered a similar prize. During this time many inventors tried to solve the longitude problem, but no one figured it out.

    Such was the ongoing public outcry from the Scilly Naval Disaster and the continued concern of seafaring folk that in 1714 a petition was presented to the British parliament. They encouraged the British government to offer its own prize to solve the longitude problem.

    The matter was referred to a group of esteemed experts, including Sir Isaac Newton. In July 1714, on the advice of these experts, parliament adopted “An Act for Providing a Publick Reward for such Person or Persons as shall Discover the Longitude at Sea.”

    If you’d like to read the original text of this now famous act, commonly known as ‘The Longitude Act’, and learn more about its passage I suggest you visit this link on the Royal Observatory Greenwich’s website.

    As part of the act, parliament created a committee to address the problem and consider any submissions. This committee became known as the ‘Longitude Board’.

    The Longitude Board essentially became a VC and was authorized to fund research via grants. More importantly it was authorized to offer a prize to the first person to develop a method for determining longitude to within varying degrees of accuracy:

    • £10,000 for an accuracy of one degree (60 nautical miles at the equator)
    • £15,000 for an accuracy of two-thirds of a degree (40 nautical miles at the equator)
    • £20,000 for an accuracy of one-half of a degree (30 nautical miles at the equator)

    The full prize was a significant sum — well over $6,000,000 in today’s money and certainly in the same realm as the Google Lunar XPRIZE which was $30,000,000. 

    The race was on.

    And it all came down to one thing: measuring time.

    In principle calculating longitude was easy: 

    • The earth rotates 360 degrees every day
    • There are 360 degrees of longitude (from -180 degrees to +180 degrees)
    • There are 24×60 = 1,440 minutes in a day
    • Therefore the earth rotates one degree of longitude every 4 minutes (= 1,440 / 360)

    To calculate your longitude all you need to do is determine what time it is at a known location when it’s noon at your current location. 

    So if you determine it’s 12.20pm in Greenwich when it’s noon at your location then you must be at 5 degrees west: 

    • The time in Greenwich is 20 minutes ahead of your local time
    • 20 minutes divided by 4 minutes of rotation per degree = 5 degrees. 

    Similarly if you determine it is 11.40am in Greenwich when it’s noon at your location then you must be at 5 degrees east.
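
    In code the whole principle fits in a few lines. Here’s a small sketch using the two worked examples above:

    ```python
    def longitude_from_greenwich_time(greenwich_time_hours):
        """Longitude in degrees (+ east / - west), given the Greenwich time
        observed at local noon. The earth turns one degree of longitude
        every four minutes of time."""
        minutes_ahead = (greenwich_time_hours - 12.0) * 60.0
        return -minutes_ahead / 4.0

    print(longitude_from_greenwich_time(12 + 20/60))   # 12:20 in Greenwich -> -5.0 (5 degrees west)
    print(longitude_from_greenwich_time(11 + 40/60))   # 11:40 in Greenwich -> +5.0 (5 degrees east)
    ```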

    Simple right? 

    Noon was easy to measure: it occurs when the sun is at its highest point in the sky. It’s the other part that turned out to be a bitch. 

    How do you know what time it is in, say, Greenwich, when it’s noon at your location? Give someone in Greenwich a call on your iPhone? Sorry — even if you did magically happen upon such a fruity device — there was still no cell coverage out at sea. Sucker!

    So you needed another method to determine the current time at a remote known location. 

    In 1714 when the Longitude prize was enacted there were three main methods that became contenders for measuring longitude7:

    • The ‘Moons of Jupiter’ method
    • The ‘Lunar Distance’ method
    • The ‘Chronometer’ method

    The Moons of Jupiter method relied on a discovery by Galileo in 1612: the orbits of Jupiter’s moons were like clockwork. They always passed in front of Jupiter at a particular time of day at a particular location. 

    Galileo developed a somewhat complex mechanical instrument called a jovilabe to calculate the time from the positions of the moons. He also suggested the use of a rather scary helmet, called a celatone, to be used to measure the position of the moons:

    Replica of a Celatone created by Matthew “Attoparsec” Dockery — Photo Credit: David Bliss

    Using a telescope or one of these celatones you could see the positions of Jupiter’s moons. Then referring to some predefined tables and charts you could determine what time this corresponded to at a particular location, for example, back in Greenwich. 

    Bingo! Problem solved!

    Indeed this method proved to be very successful, but not for everyone:

    A disciple of Galileo, the Italian astronomer Cassini, realized that the ‘Moons of Jupiter’ method could be used to make more accurate land maps. In 1671 King Louis XIV employed Cassini to revise the existing maps of France. As a result of Cassini’s intricate work the land mass of France in the new maps got reduced by about 20 percent. When the King first saw the maps he is said to have exclaimed “I have just lost more territory to my astronomers than to all my enemies!” 

    Unfortunately there were a few teeny, tiny problems that made this method somewhat challenging at sea: 

    1. Clouds
    2. Jupiter isn’t always above the horizon
    3. Bright daylight!
    4. Err, ships tend to roll a bit

    As a result the ‘Moons of Jupiter’ method was never seriously considered for use by mariners. 

    The ‘Lunar Distance’ method was first suggested by the Italian explorer, Amerigo Vespucci8, in 1499. The method depends on the motion of the moon relative to other celestial bodies. 

    The moon completes a circuit of the sky (360 degrees) in 27.3 days on average (a lunar month), giving a movement of just 0.55 degrees per hour. But it’s complicated:

    1. To be successful a very accurate measurement of the moon’s position was required — if your measurement was off by, say, 0.1 degrees, then your time measurement would be off by about 11 minutes. As a result your calculation for longitude would be off by almost 3 degrees9 (see the short sketch after this list). In order to win the full £20,000 prize your measurement of longitude had to be accurate to 0.5 degrees. This in turn meant that your measurement of the moon’s position had to be accurate to within 0.018 degrees!
    2. There’s the issue that you needed access to complex tables and astronomical charts that would tell you the expected position of the moon against the celestial background at some point in the future. To determine your longitude those tables and charts had to be accurate too.
    3. Having all the necessary accurate measurements, tables and charts didn’t provide an immediate answer. Further calculations were required and those calculations were long and laborious. Sometimes it would take up to four hours to perform them. 
    4. Clouds
    5. The moon isn’t always above the horizon
    6. Err, yes, ships still rolled
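
    Here’s the short sketch promised above, showing just how unforgiving the arithmetic was:

    ```python
    MOON_DEG_PER_HOUR = 360 / (27.3 * 24)          # ~0.55 degrees per hour

    def longitude_error_deg(lunar_angle_error_deg):
        """Longitude error caused by an error in the measured lunar distance."""
        time_error_minutes = lunar_angle_error_deg / MOON_DEG_PER_HOUR * 60
        return time_error_minutes / 4               # four minutes of time per degree of longitude

    print(round(longitude_error_deg(0.1), 2))       # ~2.73 degrees -- almost 3 degrees off
    print(round(longitude_error_deg(0.018), 2))     # ~0.49 degrees -- the full-prize threshold
    ```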

    You’d think that the Lunar Distance method would have been dismissed as quickly as the Moons of Jupiter method, but other forces were at work. 

    One of the main purposes of the observatories built by the French in Paris and the British in Greenwich was to develop the tables and charts required for measuring longitude using the Lunar Distance method. The investment in these observatories and in developing these tables was therefore huge. On top of that many experts on the Longitude Board were astronomers, most notably Nevil Maskelyne, who was later to be appointed Astronomer Royal. So you might say that there was a vested interest in the Lunar Distance method and a natural bias towards it. As a consequence the Lunar Distance method was far from dismissed and continued to find favor for many years to come.

    So that leads to our last contender. The ‘Chronometer’ method.  

    The idea behind the Chronometer method was simple: before leaving your port, you set your clock or watch to the known time at that location. You then took that chronometer with you on your trip. Then, when you’re out in the middle of the ocean, simply refer to the time on this chronometer when it’s noon at your current location. From the difference in time you can quickly and simply calculate your longitude from the knowledge that the earth rotates one degree of longitude every four minutes.

    The problem of course with the Chronometer method was that in the early 1700s no clock or watch existed that could keep time accurately. 

    The detailed rules for winning the Longitude Prize stipulated that the method be able to determine longitude accurately after a voyage from Britain to the West Indies. This journey took six weeks by ship. So to win the full £20,000 using the Chronometer method your time measuring instrument needed to be accurate to within 2 minutes after six weeks, which works out to less than 3 seconds a day. Even a good watch at that time might gain or lose as much as 15 minutes a day! 
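
    Again, the arithmetic is simple enough to check:

    ```python
    # How good did a sea clock have to be to win the full prize?
    voyage_days           = 6 * 7          # roughly six weeks from Britain to the West Indies
    allowed_error_seconds = 2 * 60         # 2 minutes of time ~ half a degree of longitude
    print(round(allowed_error_seconds / voyage_days, 1))   # ~2.9 seconds of drift per day
    ```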

    Needless to say the Longitude Board was extremely skeptical that the Chronometer method would ever be a viable solution. Sir Isaac Newton’s point of view didn’t help:

    “I have told the world oftener than once that longitude is not to be found by watchmakers but by the ablest astronomers. I am unwilling to meddle with any other method than the right one.”

    But it was then that our hero came into view.

    His name was John Harrison.

    Harrison was a carpenter and lived in the small village of Barrow-on-Humber in the north of England. He had neither been schooled at university nor had he ever gone to sea. But by 1714, at the young age of 20, clock making had become his passion. He was absolutely obsessed with accuracy and had never heard Newton’s doubtful words. 

    Harrison knew that many factors affected the accuracy of clocks, including humidity, temperature and changes in atmospheric pressure. 

    Even in his early days of clock making Harrison was a pioneer:

    • He knew mechanical friction was the enemy of accuracy, but he also knew that 18th century lubricants were awful. So he invented an oil-free wooden clock by building the most critical parts of lignum vitae, a wood that contained natural lubricating oils. 
    • He invented a mechanism called the grasshopper escapement which eliminated sliding friction and gave the clock’s pendulum the periodic pushes it needed to keep swinging.
    • To measure the accuracy of his clocks, Harrison timed their ticks to the apparent movement of stars from the backdrop of his bedroom window frame to the chimney on his neighbor’s house.
    • To develop a pendulum whose length would not be affected by temperature he invented the ‘gridiron pendulum’, which was made from wires of brass and iron. The difference in the thermal expansion rates of the two metals compensated for each other, so the pendulum stayed the same length regardless of temperature (see the sketch after this list). 
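
    Here’s the sketch promised above: a back-of-the-envelope version of the gridiron idea. The expansion coefficients are approximate modern values, not numbers Harrison would have used:

    ```python
    # Rough sketch of the gridiron pendulum idea.
    # Thermal expansion coefficients (per degree C), approximate modern values:
    ALPHA_IRON  = 12e-6
    ALPHA_BRASS = 19e-6

    # The iron rods expand downward; the brass rods are arranged to expand
    # upward. The effective pendulum length stays constant when
    #   L_iron * ALPHA_IRON == L_brass * ALPHA_BRASS.
    effective_length_m = 1.0                          # roughly a seconds pendulum
    l_iron  = effective_length_m * ALPHA_BRASS / (ALPHA_BRASS - ALPHA_IRON)
    l_brass = l_iron - effective_length_m

    print(round(l_iron, 2), round(l_brass, 2))          # ~2.71 m of iron vs ~1.71 m of brass
    print(l_iron * ALPHA_IRON - l_brass * ALPHA_BRASS)  # ~0: the expansions cancel
    ```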

    One of Harrison’s early clocks, ‘Precision Pendulum Clock Number 2’, built by Harrison and his brother in 1722 is still in use today. And it still keeps good time 300 years later: it is accurate to within a second a month. In the early 1700s that level of accuracy was simply unheard of.

    Harrison’s ‘Precision Pendulum Clock Number 2’
    On Permanent Display at Leeds City Museum in England — Credit: City of Leeds

    While this clock could have easily met the requirements for measuring longitude it could only have done so on land. The rocking of the ship would have wreaked havoc on the pendulum.

    But by 1730 Harrison thought he could solve all the issues associated with accurate timekeeping on a ship. For the first time in his life he ventured to London and managed to get a meeting with none other than Dr. Edmond Halley (famous for predicting the comet that bears his name). Being a member of the Board of Longitude Halley was a vital person to convince. Halley introduced Harrison to London’s most famous clockmaker, George Graham.

    Harrison was less than impressed with Graham’s clocks:

    “While Mr. Graham proved indeed a fine gentleman, if truth be told, I was taken aback by the poor little feeble motions of his pendulums … the small force they had like creatures sick and inactive. But I, um, commented not on the folly in his watches.”

    But look at it this way: here was Harrison, some rural carpenter from the north of England, trying to convince London’s leading clockmaker he had something to show. Graham and Harrison debated for hours. It was Harrison’s work on the temperature compensated pendulum that became a turning point. Graham had struggled with this problem for years and failed.

    With Graham convinced that Harrison was indeed someone deserving attention the money started to flow. Graham provided Harrison his ‘Angel round’ and lent him money so he could begin development of his sea clock in earnest. Harrison called it ‘H1’:

    Harrison’s H1 Clock
    Credit: Royal Museums Greenwich

    H1 was a revolution for Harrison. It was the first time he worked with brass. To compensate for the ship’s rocking he switched from using a pendulum to a mechanism using two rocking balance arms. As you can see, it was an intricate instrument. But it worked. In 1736, on a stormy 5 week voyage from London to Lisbon and back, the clock is thought to have been accurate to within 5 to 10 seconds a day. Not enough to win the Longitude Prize, but a huge step forward over anything else.

    The Board of Longitude was very impressed. So impressed, in fact, that for the first time in its 23 years of existence it did something it had never done before: it held a meeting. 

    Harrison knew H1 was not capable of winning the prize. Instead he petitioned for a round of financing from the Board. The Board agreed and awarded him £500 (~$130,000 today) — but on the condition that H1 and his next development would become the property of the public. He was thus being asked to relinquish his intellectual property. Harrison reluctantly agreed and in 1737 Harrison embarked on his next development, H2:

    Harrison’s H2 Clock
    Credit: Royal Museums Greenwich

    H2 was completed in just two years, but Harrison was never satisfied with its design and never allowed it to be tested at sea.

    In the meantime the astronomers were not standing still. They continued their work on the Lunar Distance method. It was Nevil Maskelyne, on the Board of Longitude, that became Harrison’s nemesis. Maskelyne, who was educated at Cambridge and was also a priest, is thought to have been pompous and full of himself. Harrison was the uneducated outsider. Maskelyne was the university educated insider. Tough world. 

    And the Board of Longitude controlled the funds. Harrison became deeply frustrated:

    “They said, a clock can be but a clock, and the performance of mine, though nearly to truth itself, must be altogether a deception. I say, for the love of money, these professors or priests have preferred their cumbersome lunar method over what may be had with ease, for certainly Parson Maskelyne would never concern himself in such a matter if money were not bottom … and yet, these university men must be my masters, knowing nothing at all of the matter, farther than that one wheel turns another; my mere clock being not only repugnant to their learning, but also the loss of a booty to them.”

    Despite all this Harrison convinced the Board to provide the further funding he needed to continue his work. The Board granted it in 1741. 

    Harrison had promised the Board that his next device, H3, would be completed in two years. But it took him 19. During this time Harrison appeared before the Board nine times. Each time he appealed for more time and more money. He repeatedly missed his promised dates. To me it sounds like a classic VC story — Harrison could easily have been pitching Sand Hill Road. Over this period Harrison received a total of £3,000 (~$1M today).

    Harrison’s H3 Clock
    Credit: Royal Museums Greenwich

    It was while Harrison was developing H3 that Harrison also turned his attention to watches. He was always disparaging of them and was convinced “he could improve these dreadful things called pocket watches”. In 1753 he instructed a watch maker, John Jefferys, to make a watch to his own design. It turned out to be far more accurate than he expected. After 25 years of working on clocks Harrison came to the realization that watches were the way to go. It was a classic ‘pivot’.

    At a Board meeting held on July 18, 1760, Harrison declared that his latest clock, H3, was ready for trial. But he also reported that his first watch, which was under construction, would serve as a longitude timekeeper in its own right. He called the watch H4.

    H4 was finally completed in 1759. Harrison was now 66 years old and had worked on his timekeepers for more than 45 years.

    Harrison’s H4 Watch
    Credit: Royal Museums Greenwich
    Harrison’s H4 Watch Mechanism
    Credit: Royal Museums Greenwich

    On February 26, 1761, Harrison contacted the Board with the request to test both H3 and H4 at sea. But a few months later, apparently dissatisfied with H3, he withdrew it from the test. It was down to H4. 

    But Harrison’s struggles were only just beginning. 

    Under the terms of the Longitude Act, the first trial was a voyage from Portsmouth, England to Jamaica. 

    Harrison claimed that H4 lost a mere 5.1 seconds during the eighty-one-day voyage — which was a stunning result and easily enough to meet the requirements for the full £20,000 prize. But his claim depended on an allowance being made for the watch’s natural fixed gain or loss per day, also known as its “rate of going”. The problem, however, was that Harrison neglected to specify H4’s rate of going before the trial. For this reason the Board declared the result of the trial to be non-conclusive. They did, however, agree that H4 had met the terms of Section V of the Longitude Act and that it was “of considerable Use to the Publick”. As such the Board awarded £2,500 (~$725,000 today).

    A second trial was scheduled, this time to Barbados, but before it occurred Harrison appealed to Parliament for further monetary assistance, presumably because he had more than exhausted his funds in completing H4. In 1763 Parliament passed “An Act for the Encouragement of John Harrison” which stated that Harrison could receive a prize of £5,000 (~$1.5M today). This award did not require a second sea trial, but instead required that Harrison assign all of his trade secrets and intellectual property associated with the design and engineering of his watch to the public, to the satisfaction of a technical committee. He would not only have to supply detailed designs, but he would also have to dismantle the watch piece by piece before the committee and supervise workmen in making two or more copies of it, which would have to be tested. In other words: sell your soul and we’ll give you $1.5M. Harrison refused and never received any money from the Act.

    For the second trial, as for the first trial, the precise longitude at the destination had to be measured prior to its start. It was Harrison’s nemesis, the Very Reverend Nevil Maskelyne, that was selected to take this measurement. He would do so using the ‘Moons of Jupiter’ method.

    Much to Harrison’s chagrin, the Longitude Board had also decided that the Lunar Distance method should be tested simultaneously with Harrison’s chronometer. The 1763 Act by Parliament protected Harrison against competitors using the Chronometer method, but not against competitors using the Lunar Distance method. Harrison was therefore extremely troubled. 

    And the Lunar Distance method was gaining favor. The Board was sufficiently impressed with Maskelyne’s results that in 1763 it authorized him to produce The British Mariner’s Guide, a handbook for use of the lunar distance method.

    As Harrison was getting old, it was Harrison’s son who traveled with H4 to Barbados. When he met Maskelyne he accused him of being “a most improper person”. Maskelyne was, not surprisingly, extremely offended. 

    Regardless of the animosity between the Harrisons and the Board, the Board declared H4’s second test to be a success. H4 was able to measure the longitude to Barbados to an accuracy of less than 10 miles — three times better than needed to win the full £20,000 prize.  

    On February 9, 1765 the Board considered Harrison’s claim to the prize. 

    Still they did not award it!

    The problem, they explained, was in Section IV of the Longitude Act. This section instructed the Board that a method was deemed to have won when it had been “tried and found practicable and useful at Sea.” 

    The Board told Harrison that he had not explained how his watch had worked, nor had he explained how it could be manufactured at scale so it could be put into general use. The Board therefore decided that the watch was not “practicable and useful”. 

    These words turned out to be crucial and it’s a textbook lesson in legalese. 

    Harrison pushed back and continued to claim the full prize. The Board and Harrison both escalated the issue to Parliament. The Board sought to codify its recommendation. Harrison fought back. But the Board won and Harrison lost. 

    In May 1765, Parliament passed “An Act for explaining and rendering more effectual” its previous Longitude Acts. 

    To be awarded the first half of the £20,000 prize Harrison would now have to:

    • Explain the principles of his watch to the satisfaction of the Board
    • Assign the property rights in all four of his timekeepers to the Board
    • Hand over all four timekeepers to the Board

    The second half of the £20,000 prize would be awarded when “other … Time Keepers of the same Kind shall be made,” and when these other timekeepers, “upon Trial,” were determined by the Board to be capable of finding the longitude within half a degree.

    In interpreting the Act the Board decided that Harrison would need to dismantle his watch in front of a technical committee to the satisfaction of the Board. When they explained this to Harrison he declared he would never consent “so long as he had a drop of English blood in his body”. 

    The Board’s chairman responded thus:

    “Sir, … you are the strangest and most obstinate creature that I have ever met with, and, would you do what we want you to do, and which is in your power, I will give you my word to give you the money, if you will but do it.”

    Finally Harrison capitulated. 

    Over six days he dismantled H4 before the Board’s technical committee — and much to Harrison’s disgust Maskelyne was present for the event. But on August 22, 1765 the Board agreed that Harrison had completed the knowledge transfer to their satisfaction. The remaining condition to win the first £10,000 was to hand over his timekeepers. Harrison finally handed them over on October 28, 1765. On the same day he was awarded the first half of the prize. 

    But what of the second half of the prize?

    According to the Act, Harrison would only receive the final payment when “Time Keepers” (plural) “of the same Kind shall be made”. That meant at least two copies of H4 needed to be made. The Board declined to provide funds to Harrison for him to make the copies, instead outsourcing the work to another watchmaker for a fee of £450 (~$120,000 today). For the required trial of the copies the Board introduced scope creep: no longer was it to be a six week voyage to the West Indies, but a 10 month trial at the Royal Observatory and an eight week voyage. 

    By this time Harrison was 74 years old and would have to wait at least another year to see the copies of his watch tested. And yet the Board still had not specified what would constitute a successful trial. 

    Harrison made his own copy of H4 which he called H5. Due to his failing eyesight it took him four and a half years to complete. By that time it was 1772 and Harrison was 79. Rather than submit H5 to the Board, Harrison’s son appealed to King George III. Soon after, father and son were granted an audience. The King was clearly taken by the Harrisons’ plea for he declared out loud: “By God, Harrison, I shall see you righted!” 

    Harrison’s H5 Watch
    Credit: The Science Museum Group
    Harrison’s H5 Watch Mechanism
    Credit: The Science Museum Group

    While the King was an ally the Board still pushed back. Harrison gave up dealing with the Board and appealed again to Parliament for their benevolence — asking for an appropriate award for his lifetime of work and dedication. Parliament relented and awarded Harrison the sum of £8,750. While this was no doubt welcomed by Harrison it still did not represent the full award and he never received the remaining £1,250.

    Even today there are differing and strong opinions on the Board’s decision. 

    The author Dava Sobel wrote an amazing best selling book on this epic story. It’s called “Longitude”. It is still the number one best seller for books on geography on Amazon. If you’re at all interested in learning more about this tale then this book is an absolute ‘must read’.

    Dava takes the position that the Board slighted Harrison and accuses them of bias.

    But others have taken a different point of view.

    One such view is held by Professor Jonathan Siegel in his excellent paper ‘Law and Longitude’.  Jonathan is a professor of law and takes the position that “The Commissioners of Longitude did their duty.” He looks at it from the Board’s point of view, focusing on the meaning of those crucial words in the Longitude Act: the question as to whether Harrison’s invention was “practicable and useful”.

    Perhaps you can see the Board’s perspective: H4 and its subsequent copy, H5, took years to build. They had commissioned a third party to make duplicates of H4, but at a cost of ~$60,000 a copy in today’s money. And what would happen if your precious and expensive watch were to malfunction at sea? You would be totally lost and there would be no hope for recovery. At least the Lunar Distance method did not suffer this problem. 

    Was H4 “practicable and useful”?  

    As you’re considering this perhaps you should consider a present day analogy: let’s say some government had enacted a similar prize for landing humans on the moon and returning them safely to earth. And let’s say the rules for winning used the same benchmark as the 1714 Longitude Act — that it be “practicable and useful”.

    Did Apollo 11 meet that criteria? After all it got people safely to the moon and back. But that mission alone cost $2.7B in today’s dollars10. Given a very expensive new rocket had to be built for each mission should it only win half the prize? Or should the full prize only be awarded for a completely reusable rocket?

    As to Harrison’s epic quest and whether it was deserving of the full prize, I encourage you to read both Dava’s book and Jonathan’s paper and make your own decision.

    In the meantime be thankful that you can simply look at your phone to determine your position — and that you don’t have to spend hours doing laborious calculations or wear one of those scary celatones.

    I’ll leave you with one last tidbit:

    Centuries after Harrison’s toils, at a dinner at 10 Downing Street, Neil Armstrong, the first man on the moon, proposed a toast to John Harrison, saying his invention enabled men to explore the earth, which gave them the courage to voyage to the Moon.

    So was Harrison’s quest a Map Happening that Rocked Our World? 

    I’ll say so.


    Footnotes:

    1 This story is documented in “The Last Voyage of Sir Cloudesley Shovell” by W. E. May (1960).

    2 England. Not Connecticut. 

    3 The distance between the knots, 47 feet 3 inches, wasn’t an arbitrary number. It is the distance traveled in 28 seconds if you are moving at a speed of one nautical mile per hour … thus it is your speed in knots. 🙂

    4 The Etak Navigator — the pioneering in-vehicle navigation system released in 1985 had the same problem. But its brilliance was to eliminate dead reckoning errors by “map matching” — correcting its position to a topologically accurate map.

    5 The sextant, still in use today, wasn’t invented until 1731.

    6 At midwinter (21 December) you needed to deduct 23.45º from your reading, and at midsummer (21 June) to add 23.45º. Between those times you had to adjust the readings proportionally. 

    7 There were plenty of other wacky ideas too. Professor Jonathan Siegel describes some of them in his paper:

    “In 1713, William Whiston and Humphrey Ditton, a pair of mathematicians, proposed that a fleet of ships be anchored across the ocean at 600-mile intervals and that each such ship fire off a cannon shell and flare every day at midnight. Navigators of other ships could then determine their position by timing the interval between seeing the flare and hearing the cannon. The difficulty of anchoring ships in mid-ocean and the vast expense that would be required to man the ships made this proposal impractical. Even more incredible was the plan, proposed in 1687, to send a wounded dog aboard every ship, and to leave behind a discarded bandage from the dog’s wound. Each day at noon, this bandage would be dipped in “powder of sympathy,” a substance which had the miraculous power to heal at a distance, although at the cost of giving some pain to the patient. The dog’s yelp when the bandage was dipped would give the ship’s navigator the necessary time cue. Like most plans that rely on magic, however, the powder of sympathy method failed to work in practice.”

    8 Amerigo’s main claim to fame: America is named after him

    9 The moon makes a circuit across the sky in 27.3 days = 655.2 hours. A circuit is 360 degrees so that’s ~0.55 degrees per hour. If your measurement of the moon’s position is off by 0.1 degrees then that’s 0.1/0.55 = 0.182 hours = ~11 minutes. The earth rotates at one degree of longitude every four minutes, so 11 minutes is almost 3 degrees of longitude. If you’re at the latitude of London then 3 degrees of longitude is about 210km or about 130 miles. Ouch.

    10 Source NASA: Expenditure on the Apollo missions 1968-1972 converted to today’s dollars using CPI Inflation calculator

    Acknowledgments:

    Other Reading:
