In case you didn’t notice, we’re living in a world of revolutions.
Generative AIs are exploding and seemingly turning the world upside down. It feels very similar to the dot-com boom of the 1990s, when your company wasn’t worth diddly squat unless its name ended in ‘.com’. Today your products had better include some form of ML or AI — preferably generative — to grab any attention. It’s getting so ridiculous that I wouldn’t be at all surprised if Cadbury’s somehow integrated ChatGPT into their next chocolate bar.
In parallel with all this there’s another revolution happening in automotive manufacturing.
Here you are witnessing a wholesale switch in the methods for designing, engineering and manufacturing vehicles. It’s not just about replacing the internal combustion engine with an electric motor and lots of AA batteries, it’s about developing a completely new method of producing vehicles.
In the western world it’s Tesla that’s grabbing the limelight, spurring all the other auto OEMs to jump on the EV bandwagon.
The PC angle on the story for the OEMs is that it’s part of their mission to save the planet. However, it’s really about FOMO and increasing profits — there are orders of magnitude fewer parts in an EV than in an ICE equivalent. The legacy OEMs’ strife and anxiety are being cranked up by Tesla’s use of its Giga Press1, which is resulting in yet another massive increase in manufacturing efficiency. No wonder Ford is sweating.
But what about the map making world?
Has anyone — or is anyone — doing something similar to revolutionize global map production?
And I’m not just talking about building the map, I’m talking about keeping it maintained and up-to-date.
Since organizations started building global street maps in the mid-1980s, the approach has more or less been the same:
- Get rights to some existing reference maps or aerial photography
- Develop some map editing tools that will scale (sorry @Esri)
- Throw bodies at the problem. Thousands of them.
In the 2000s organizations started adding fleets of vehicles to the mix, first equipped with cameras and later also with LiDAR. This enabled richer data collection — for example, things like speed limits and lane info — and made it easier to verify ‘ground truth’.
But map production still took thousands of bodies.
Then around 2007 something magical happened. Smartphones came to be, first the iPhone and later other photocopies. Over time the proliferation of these devices has resulted in an abundance of new data. While this treasure trove of information has mainly been used by Location Harvesters, Personal Information Brokers & Assholes to turn you into a product, it has also proved useful for map making.
The anonymized and aggregated data from mobile devices can be used to both derive new products and help maintain existing ones.
The prime example is real-time traffic information: those red and orange lines you see overlaid on consumer maps denoting traffic jams are nearly always derived from movements of mobile devices.
But these movements can also be used to derive change signals. For example:
- Where are devices traveling along a path where there is no road? Is that a new road?
- Where are devices no longer traveling in a particular direction along an existing road? Is that a new one-way?
- Where are devices no longer turning left at an intersection? Is that a new turn restriction?
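To make the idea concrete, here is a minimal sketch of the one-way heuristic above. The segment IDs, observation counts and thresholds are all invented for illustration; a real pipeline would work on map-matched, anonymized traces at far larger scale.

```python
from collections import Counter

# Hypothetical data: per-road-segment counts of observed travel directions,
# aggregated from anonymized device traces over two time windows.
before = {"seg_42": Counter(eastbound=980, westbound=1020)}
after = {"seg_42": Counter(eastbound=1150, westbound=3)}

def one_way_signals(before, after, min_obs=100, drop_ratio=0.01):
    """Flag segments where one travel direction has all but vanished:
    a possible sign of a new one-way restriction."""
    signals = []
    for seg, counts in after.items():
        total = sum(counts.values())
        if total < min_obs:
            continue  # too little data to say anything
        for direction, n in counts.items():
            prev = before.get(seg, Counter())[direction]
            # Direction used to be well observed, now nearly absent
            if prev >= min_obs and n / total < drop_ratio:
                signals.append((seg, direction))
    return signals

print(one_way_signals(before, after))  # -> [('seg_42', 'westbound')]
```

Note this yields a *signal* to investigate, not an automatic map edit; as discussed below, GPS noise makes the latter far harder.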
So that’s progress.
However, there’s a limit to its usefulness.
Even though the data arrives in a firehose, it’s fundamentally just movement data — and given the current limitations of GPS it’s not always accurate enough to derive even simple information, particularly in urban canyons.
So, in other words, while movement data definitely can be used as a change signal, it’s extremely difficult to use it to derive map edits automatically.
While it’s progress it’s not exactly what you’d call a revolution.
So what do you need for a real revolution?
Well, I’d make the argument that it distills down to three things:
- Eyes
- Smart Processing
- A Streaming Architecture
Let’s start with eyes.
By ‘eyes’ think of images like Google StreetView or Apple’s LookAround, but at a frequency that will make a difference. StreetView and LookAround images come from the dedicated fleets that the map makers employ, but the problem is they only drive the streets about once a year — if you’re lucky. Sorry guys, but that doesn’t cut it for revolutionizing map production.
Now there are other organizations collecting image data — examples include Mapillary2, Solera/SmartDrive, Lytx, Nexar, Hivemapper and Gaist, but again what’s missing is massive volume.
To really stay on top of things you need eyes on every road, every day.
Where might that volume come from? Well, from the cameras built into vehicles, of course. There are two companies that I’ll highlight here that could bring the volume: Mobileye and Tesla.
Mobileye, recently spun off from Intel and subsequently IPO’d, sells systems to auto OEMs to enable them to provide driver assistance features like adaptive cruise control, collision avoidance and ultimately, they hope, completely autonomous driving. They claim that their most basic system — which includes a front-facing camera — is already installed in millions of vehicles.
What about Tesla? Well as of April 2023, Tesla has sold a total of 4,061,776 electric vehicles3. Each of them has eight cameras. That’s a lot of eyes.
Phew — sounds good right?
Alas there is one small problem in the way: lawyers.
Yes, dear readers, it turns out that the auto OEMs want to keep all that data to themselves, so it’s actually pretty hard to come by.
But it’s not just about eyes on the ground. You also need eyes in the sky. These eyes, used intelligently, can in theory be used to collect data automatically and can be used to detect change.
And the good news is there is an ever increasing plethora of eyes in the sky, not so much from drones (from which data is at best very limited and sporadic), but from birds. Small ones. They’re called earth observation satellites.
There’s a ton of activity going on in this space — volume, frequency of capture and resolution are increasing by leaps and bounds, sensors are evolving — and in the meantime costs are coming down by orders of magnitude. Pretty soon we’re going to be awash with data. For those of us in the map making business it’s going to be thrilling to watch because it’s going to change the way business is done.
One of the companies I’m watching is Satellogic, which claims to have got the cost of data acquisition down to $0.46 per km2 — two orders of magnitude cheaper than its competition.
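To put that figure in perspective, here’s a back-of-the-envelope calculation. The land-area number is a rough public estimate; only the $0.46/km2 price comes from the text above.

```python
# Quoted acquisition cost and an approximate figure for Earth's land surface
cost_per_km2 = 0.46          # USD, per the Satellogic claim
earth_land_km2 = 149_000_000 # total land area, rough estimate

one_pass = cost_per_km2 * earth_land_km2
print(f"One full land capture: ${one_pass / 1e6:.0f}M")       # ~$69M
print(f"Capturing weekly for a year: ${52 * one_pass / 1e9:.1f}B")
```

At that price, imaging every square kilometer of land once costs on the order of tens of millions of dollars, which starts to look plausible as a recurring expense for a global map maker.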
One way or another all these eyes will produce the volume of data needed. It will just be a matter of time.
The question is of course, what are you going to do with all this data? You’re talking many many petabytes at least.
To make good use of it you need to be intelligent about extracting information. Of course this is where machine learning models come in, not so much targeted at creating maps, but instead at detecting change.
Change can be detected from the ground, for example detecting construction zones by automatically detecting traffic cones. Or your machine learning models might also automatically pick up traffic lights, stop signs, speed limits, et cetera.
Detecting information from this street level imagery is already quite advanced. You’ll be familiar with it if you’ve ever ridden in a Tesla, where the screen displays the people and objects the cameras see — and, more importantly for map makers, traffic lights, speed limits and signs.
Ultimately all these eyes on the road might also be used to keep information about places and businesses up-to-date, including volatile information from signs on the windows indicating things like operating hours and ongoing sales. It’ll take some work, but it will get there.
Detecting change from the sky is more interesting. For example, there’s a homebuilding data company called Zonda which is using imagery to automatically detect phases of building construction, so you can tell when streets are in, framing has started or roofs are on.
Perhaps more interestingly, there’s a company called Blackshark.ai whose service, Orca, is able to perform automatic detection on global imagery, e.g. for vegetation classification and building detection. I’ve not seen them produce a specific workflow for detecting road changes at scale yet, but I wouldn’t be surprised if they have something in the works.
Back in the heyday of printed road atlases, everyone would be thrilled to get the annual update of their favorite road atlas from the likes of Rand McNally, Michelin or the Ordnance Survey.
Then ‘Sat Nav’ systems came along, and after spending $2,000 or more for an ugly stick map you only had to pay a ransom of a few hundred dollars to get your refreshed map CDs or DVDs.
The cadence was still pretty much annual however.
Then, lo and behold, MapQuest came to be and suddenly you didn’t have to worry about DVDs or paying annual ransoms. The map updated itself!
But little did most of you know that organizations like MapQuest were beholden to the map makers of the day like Navteq, GDT and TeleAtlas. At best they sent MapQuest an update every quarter. On top of that it took MapQuest several months to process all the data, so by the time it got to customers the map was at least six months out of date.
It wasn’t until Google started making their own map that things really started to change. Because Google was developing the whole stack it gave them the ability to be in control of the release cycles.
Eventually the cadence of map releases became more frequent, first monthly and then with the ability to splice in critical updates, e.g. for highway and motorway intersections. But still the pipeline was geared towards releasing all the data in one big glued-together multi-layered lump.
The advent of displaying real-time traffic on top of the roads forced a change in architecture as traffic conditions change by the minute. This precipitated the need to stream at least some of the data.
The question is, is it possible to stream all layers of the data independently from one another — so that an update to, say, roads can be streamed in separately from updates to parks or indoor maps?
Achieving such a goal might enable a true ‘living map’ with almost zero latency in updates.4
I’m not sure if any organization in the map data editing, processing and publishing business has truly achieved a layer independent, near zero latency streaming system yet. I’ve never seen the inner workings of Google Maps — perhaps they have, but I’d be surprised.
One organization that is certainly striving for such a system is HERE. That was the underlying story behind their recent announcement of Unimap at CES.
For example, they want to be able to take speed limit data recognized by vehicles driving around, quickly and automatically verify it, and then immediately stream the newly updated information back out to their mapping services. HERE has an advantage in this space as their investors include BMW, Audi and Mercedes, so in theory at least that data could come in volume from the OEMs’ fleets.
This approach may be a little nascent as there aren’t enough BMW, Audi and Mercedes vehicles on the road with the necessary ‘eyes’ yet, but hell, it won’t be too long before there is critical mass. So kudos to HERE for showing leadership.
Tesla has a ton of ‘eyes’ and could in theory stream the signs and objects their vehicles recognize back to their map. But for navigation at least Tesla doesn’t have their own map. They rely on Google.
Hmm — what does that particular data license agreement look like I wonder? Is there a quid pro quo that we don’t know about in place? Like Tesla’s object recognition in exchange for Google’s map data? Perhaps we should all ask Elon and find out.
Regardless of the relationship it clearly doesn’t result in near instant map updates, so the streaming architecture is not in place yet.
So Who’s Going to Drive the Revolution?
So, net/net — nobody has quite cracked it yet. As far as I can tell a ground breaking revolution in map making along the lines of what we’re seeing in generative AIs and auto manufacturing has yet to materialize.
But in time — and probably not too much time — somebody will crack it. There will be enough eyes, there will be enough smart processing, and organizations will re-architect their pipelines to enable near real time updates of all layers of the map independently of one another.
I can’t wait to see who will be first.
1 This 13-minute video is well worth a watch if you’re interested in the technical and financial details of how the Giga Press is benefiting Tesla.
2 Acquired by Meta in 2020.
3 Source: Licarco.
4 Yeah, I know, I know — the Esri groupies among you will exclaim the virtues of Esri’s ‘Living Map’, but Esri is not a global map maker. Also, like it or not, there is still significant latency between the time when a change happened on the ground and when it appears in the Esri offering.