Traffic simulation systems of the future: a special kind of supercomputer?

The worst-ever MRT metro line disruption in Singapore affected hundreds of thousands; can custom supercomputer simulation systems cushion the impact of such disasters in real time?

This month's massive disruption of Singapore's two main – and oldest – metro lines, the North-South and East-West MRT, was the worst in the city's metro system history. The compounding of smaller disruptions on the other two lines in the same week didn't help either. Whether the two old lines should be dug up and replaced with an underground maglev metro – where the wear and tear on parts like the track 'claws' and 'sleepers' stops happening – or whether the signalling systems need changing too, is another issue, beyond our scope here.



But is there a way for high-performance computing to help manage such disruptions in real time, and to analyse the possible passenger impact in minute detail?



Here's a simple scenario: in a city of 5 million like Singapore, the two main metro lines suddenly – really suddenly – stop working. Say half of the population, those living and working along the two lines' corridors, is affected. How do you avert the chaos and deploy the extra buses or other resources exactly where the traffic needs them, before the whole transport network collapses?



Let's go back to that figure of 5 million inhabitants. Say each inhabitant is an entry in a giant database – the Big Brother datacentres of Google or Facebook already do this on a global scale anyway. In the case of Singapore's EZ-Link, or Malaysia's Touch 'n Go, that database can easily be filled with the transport history of each card holder over a long period of time, and the same goes for car registration plates tracked via the electronic road pricing gantries and the like. Add to that the person's age, income, car ownership and home and work/study locations, and you can estimate their traffic movements on any given weekday or weekend, at any time of day. That draws a sort of 'movement diagram' for each of the 5 million people on a given day.
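To make the idea concrete, here is a minimal sketch of what such a per-person record and its daily 'movement diagram' could look like. All the field names (PersonRecord, Trip, tap_history and so on) and the simple 'most frequent observed trips' rule are purely illustrative assumptions for this article, not any real EZ-Link or ERP schema.

```python
# Illustrative sketch only: hypothetical per-person record built from
# fare-card / road-pricing history plus basic demographics.
from collections import Counter
from dataclasses import dataclass, field
from typing import List

@dataclass
class Trip:
    depart_min: int          # minutes after midnight
    origin: str              # station / stop / gantry id
    destination: str
    mode: str                # 'mrt', 'bus', 'car', ...

@dataclass
class PersonRecord:
    person_id: int
    age: int
    income_band: int
    owns_car: bool
    home_zone: str
    work_zone: str
    tap_history: List[Trip] = field(default_factory=list)   # observed trips

def movement_diagram(p: PersonRecord) -> List[Trip]:
    """Estimate a typical day's trips for one person.

    Here we simply take the most frequently observed trips; a real model
    would blend history with demographics (age, income, car ownership,
    home/work location) to predict the day's movements.
    """
    key = lambda t: (t.depart_min // 30, t.origin, t.destination, t.mode)
    counts = Counter(key(t) for t in p.tap_history)
    typical = [k for k, _ in counts.most_common(4)]          # top habitual trips
    return [Trip(slot * 30, o, d, m) for (slot, o, d, m) in typical]
```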



Such a database entry for each person can grow to around 100 kilobytes in size and, since we have to keep all of it in memory for full-speed analysis and modelling, that means roughly 500 GB of main memory – times two for the space to create alternate models – so we are talking about a terabyte-RAM machine with as many cores as you can throw at it, since this is highly parallel work. That little slim 4-CPU terabyte Xeon E5 mainboard from yesterday's story comes to mind here…
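The sizing is simple arithmetic; a quick sketch using only the numbers quoted above (5 million records of roughly 100 KB each, doubled for alternate models) shows how it lands squarely in terabyte territory:

```python
# Back-of-the-envelope memory sizing, using the article's own figures.
population     = 5_000_000
record_bytes   = 100 * 1024      # ~100 KB per person
working_copies = 2               # live model + alternate scenario

total_bytes = population * record_bytes * working_copies
print(f"{total_bytes / 2**40:.2f} TiB of main memory needed")
# -> about 0.93 TiB, i.e. a terabyte-class machine
```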



Now, with such a system containing the movement data of the entire population and their vehicles, and where they want to go next, we can automatically identify the hot spots created by any specific malfunction or shutdown and, in real time, divert extra resources exactly to these 'hot spots of the moment'. No, it doesn't replace the extra metro lines that, maybe, Singapore should have built much earlier, when they were far cheaper to construct – rather than pouring the money into Western investment banks or assets – but, anyway, a fast computer for a quick fix, when problems arise, is always welcome.
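As a rough illustration of the 'hot spots of the moment' idea – a hypothetical sketch, not any actual transport authority system – the snippet below takes everyone's planned trips, masks out the disrupted stations, and ranks the origins with the most stranded passengers in the next hour. The station names, time window and simple counting rule are all assumed for the example.

```python
# Hypothetical hot-spot detection: count stranded MRT demand per origin
# for the next time window, given a set of disrupted stations.
from collections import Counter
from typing import Iterable, List, Set, Tuple

def hot_spots(planned_trips: Iterable[Tuple[int, str, str, str]],
              disrupted_stations: Set[str],
              now_min: int,
              window_min: int = 60,
              top_n: int = 10) -> List[Tuple[str, int]]:
    """Rank origins by the number of planned MRT trips that can no longer run.

    planned_trips: (depart_min, origin, destination, mode) for every person.
    """
    stranded = Counter()
    for depart_min, origin, destination, mode in planned_trips:
        if mode != 'mrt':
            continue
        if not (now_min <= depart_min < now_min + window_min):
            continue
        if origin in disrupted_stations or destination in disrupted_stations:
            stranded[origin] += 1
    return stranded.most_common(top_n)

# Example: stations on both main lines down, peak hour starting 08:00.
trips = [(8 * 60 + 5,  'Jurong East', 'Raffles Place', 'mrt'),
         (8 * 60 + 20, 'Tampines',    'City Hall',     'mrt'),
         (8 * 60 + 10, 'Jurong East', 'City Hall',     'mrt')]
print(hot_spots(trips, {'Jurong East', 'Raffles Place', 'City Hall', 'Tampines'},
                now_min=8 * 60))
# -> [('Jurong East', 2), ('Tampines', 1)]
```

Each pass over the 5 million records is embarrassingly parallel – every person's trips can be checked independently – which is exactly why the many-core, terabyte-RAM machine described above fits the job.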