## Synoptic first!

So, you’re a transit agency (or vendor, consultant, system integrator, etc.), and you’ve decided to develop an API to expose your real-time data. Perhaps you’ve gotten queries from developers like “I want to be able to query an API and get next bus arrivals at a stop…”.

It’s hard to say “no”, but fulfilling that developer’s request may not be the best way to go. If you have limited resources available to expose your data, there are better approaches available, which will in the long term enable the development of more advanced applications.

## Traction motor, HVAC unit, AVL system?

If a transit agency runs its own motor shop for rebuilding traction motors, its own electronics shop for component-level repair of circuit boards, and its own axle shop for rebuilding axles, why shouldn’t it be able to do the same for the software which is just as vital to everyday operation as axles and traction motors?

I recently came across a very interesting paper describing the successes of SEPTA’s Woodland Electronic Repair Shop. At SEPTA, the justification for in-house electronics repair is twofold. First, many components which come into the shop are not actually defective; had they been sent to an outside shop for repair, time and money would have been wasted only for the outside shop to return the same “no trouble found” verdict. Second, sending equipment to an outside shop is expensive—by SEPTA’s analysis, more than double the cost of operating an in-house electronics shop.

Transit agencies may not think of themselves as being technology-oriented, but the reality is that software systems are at the heart of so many things transit agencies do—from scheduling to passenger information to signals and communications. Agencies almost universally rely on vendor support for large software packages that perform a wide range of functions: scheduling, trip planning, real-time passenger information, and even safety-critical tasks in signalling and communications.

Yet in comparison to the nuts and bolts which keep a transit system moving, most transit agencies have shockingly little control over their vital software. Because that software is closed-source and proprietary, the agency is unable to develop its own bug fixes, patches, and new features, and may not even be able to export its data in an open format. By controlling the availability of support and new features, the vendor dictates when the agency upgrades—and by using proprietary data formats and interfaces, the vendor all but guarantees that the agency will return to them instead of shopping around. This is the very same risk that SEPTA’s electronics shop seeks to mitigate:

> At some point the vendor will no longer support their particular system and since you have always relied upon them for their parts you will have no choice but to go out for bid to get a new system or an alternately designed part to perform the same function.

When procuring new equipment, SEPTA demands access to schematics and test equipment, so that their repair shop can do its work. Without this access, the results are predictably poor. SEPTA found that costs for one class of parts had increased 94% over two years—an “astronomical” price increase at an agency used to inexpensive in-house repair. The explanation, from SEPTA’s engineering department, is depressing:

> These are so expensive because SEPTA has no alternative but to purchase these parts from the OEM.
>
> This is why our equipment specifications have a requirement that the Vendor provide SEPTA with all test equipment, documentation and training to allow us to repair the circuit boards in our electronic repair shop at Woodland. The CBTC project did not have a specification from Engineering, but rather was supplied for liquidated damages from the M4 program. It was understood from the beginning that SEPTA would not have the capability to repair the circuit boards.
>
> The complexity and safety aspect of these boards prevents SEPTA from creating drawings and specifications that would allow an alternate supplier to produce these boards.

So, what is the parallel for a software project? Where an electronics shop has schematics, where a mechanical shop has blueprints, a software shop has source code and supporting tools. When a transit agency has access to the source code for a software system, they can perform their own in-house work on the system, on their own schedule, and with their own staff. New features are developed to meet the agency’s needs, not according to a vendor’s whims. Even if the agency elects to bring in contracted support to develop bug fixes or new features, they retain complete control over the process—and, more importantly, they own the end product.

Transit agencies may feel ill-at-ease at the prospect of getting into the business of software development, but the reality is that by bringing software skills in-house, they can realize the same gains as when they bring mechanical and electronic repair and overhaul in-house. In fact, the potential gains are even greater for software, when agencies use open-source software and actively participate in the surrounding community. Many of the fundamental problems of mass transit are the same from agency to agency, and software developed to solve a problem at one agency is very likely to be usable (at least in part) at other agencies.

## Bringing OpenTripPlanner and OneBusAway to DC to improve rider experience

I want to bring OpenTripPlanner and OneBusAway to DC. Why? Simply put, because they’re a lot better than what we’ve got now.

WMATA’s trip planner has no API for developers, returns illogical and nonsensical results for some trips (which can be due in part to data quality problems), is based on a costly, proprietary product, and has a clunky, outdated-looking interface. As for leading-edge features like real-time and intermodal trip planning (including bicycling and bike sharing)? Dream on.

## For San Francisco, and governments everywhere, technology startups are the perfect partners

A recent article in the New York Times describes SMART Muni, “an Apple iPad app that uses Global Positioning System technology to track all of the city’s buses in real time, allowing transit managers and passengers to monitor problems and delays”. It sounds like the perfect success story: civic coders taking open data (Muni tracks its buses and trains with NextBus, which provides an XML data feed) and using that data to improve operations and create real value for the agency.

Unfortunately, it’s not a success story: the app has never been used in production. As the article explains, “Muni hopes to put the app to good use some day, but the agency is \$29 million over budget and cannot afford to buy the iPads required to run the software…[nor] is the city willing to invest \$100,000 to run a pilot program.”

The costs involved here—a few hundred dollars each for some iPads, perhaps a few thousand dollars to fund a stipend for a civic coder, even \$100,000 for a pilot—pale in comparison to the costs associated with the big-name IT consulting firms that governments are used to dealing with.

In addition, startups, teams of civic coders, and open source projects can often deliver a working prototype or even a completed project much faster than conventional development teams. As the New York Times describes, “a small team of volunteers took just 10 days last summer to create [the app].”

Unfortunately, the City of San Francisco is out of touch with the realities of technology: “‘Start-ups fail at a high rate,’ said Jay Nath, chief innovation officer of San Francisco. ‘As stewards of taxpayer dollars, we need to be thoughtful of using that money wisely and not absorbing too much risk.'” Nath is right about one thing: start-ups do fail at an alarming rate. But that’s not the risk you might think it is, because startups aren’t like conventional development projects.

Unlike conventional projects, startups fail fast. Instead of wasting years and millions of dollars, when a startup has an idea that isn’t going anywhere, it winds up quickly. Maybe it was a bad (or even outright infeasible) idea to begin with, or the startup had the wrong team, or they tried to do too much at once. Maybe their idea’s been superseded by a newer, even better technology. Whatever the reason may be, the startup doesn’t just grind away for years, running up a million-dollar bill. Instead, they admit that they can’t deliver, and get out gracefully.

Consider, for example, the FBI’s Virtual Case File, a five-year, \$170-million development effort that never actually delivered any working software. Imagine if the VCF project had failed after three or six months, not five years. Imagine if it had spent less than a million dollars before failing, not \$170 million. Of course, the project still wouldn’t be done—but we’d have known that something was wrong up front, instead of finding out five years later, after millions of taxpayer dollars had been wasted on a doomed development effort.

More importantly, startups do have the agility necessary to keep up with the ever-changing technology marketplace. A development effort that takes five or ten years is bound to deliver a product that is obsolete as soon as it arrives, unless major changes are made along the way.

The conventional development practices used by many government agencies and their contractors don’t incorporate that kind of agility. Specifications and requirements are written early in the project’s life, perhaps even before a development team has been selected (if the project must be put out for bids). Even if the requirements are found to be lacking—or flat-out wrong—development marches on. In the end, the team will deliver a product that meets the requirements (thus satisfying the bean-counters) but which is already out-of-date and which doesn’t actually do what users need it to do.

I alluded to these problems in my recent coverage of WMATA’s initiative to install real-time information displays at bus stops. By only considering bids from vendors with “standard, proven products” and “successful existing and fully operational implementations, in multiple transit agencies”, they potentially shut out innovative startups (or even teams of civic coders, like the Mobility Lab).

It’s entirely possible that the first team to tackle a thorny problem may fail—but rather than casting them as “failures that burn holes in the city’s budget”, we’ve got to communicate to governments and taxpayers alike that not all failures are the same. There’s a big difference between a project that runs for years, spends millions of dollars, and has nothing to show for it in the end, and a project that fails after just a few months, has spent well less than a million dollars, and can identify what went wrong, so the next project will be more successful.

When it comes to technology, the best way for governments to be good ‘stewards of taxpayer dollars’ is to adopt successful development practices: small, agile, competent teams, that build inexpensive, flexible products, and fail quickly if they can’t get the job done. The old way—forking over millions and millions to high-priced contractors until they finally declare defeat, then taking it up in a years-long legal battle—just doesn’t look like good stewardship anymore. Sure, established companies may have a long track record that startups don’t, but what’s it a record of? We don’t need any more million-dollar failures. We need smart civic coders developing next-generation solutions like SMART Muni, and we need governments to accept, embrace, and support them.

## Automating transit alert selection using fare collection data

Last week, WMATA launched its new MetroAlerts service, which greatly extends and improves the previous alert system, and adds alerts for bus routes. With the addition of bus alerts, the service provides real benefits for riders, allowing them to get targeted updates on the routes they use.

But this service, and others like it, still require riders to manually designate the rail and bus routes and rail stations they use, in order to receive targeted alerts. Some systems also allow riders to further customize their selection of alerts by time period. The end result is that riders are presented with a screenful of choices, when all they really want to know is if they’re going to get to work on time.

So, how can we simplify the process? One approach, which I’ve been considering recently, is to use data from a transit agency’s fare collection system to infer a rider’s travel patterns, and automatically select alerts which would affect their usual trips.
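As a rough sketch of that idea (the tap records, the service names, and the `infer_alert_subscriptions` helper are all hypothetical; real fare-collection records would be far richer), the inference could start as simply as counting which services a rider uses repeatedly:

```python
from collections import Counter
from datetime import datetime

# Hypothetical fare taps: (service identifier, tap time).
taps = [
    ("Red Line", datetime(2012, 6, 4, 8, 10)),
    ("Route 42", datetime(2012, 6, 4, 8, 45)),
    ("Red Line", datetime(2012, 6, 5, 8, 12)),
    ("Route 42", datetime(2012, 6, 5, 8, 47)),
    ("Route 5A", datetime(2012, 6, 9, 14, 0)),   # a one-off trip
    ("Red Line", datetime(2012, 6, 6, 8, 9)),
]

def infer_alert_subscriptions(taps, min_uses=2):
    """Subscribe the rider to alerts for any service they used
    at least `min_uses` times in the sample period."""
    counts = Counter(service for service, _ in taps)
    return sorted(s for s, n in counts.items() if n >= min_uses)

print(infer_alert_subscriptions(taps))  # ['Red Line', 'Route 42']
```

A real implementation would also look at the tap timestamps, so that a rider who only uses the Red Line on weekday mornings isn’t alerted about weekend track work.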

## Reconstructing train positions from prediction data

Recently, I’ve been investigating techniques for independently gathering data in order to be able to analyze performance on the Metrorail system. As I’ve previously lamented, the agency releases only summary performance statistics, which makes it impossible to conduct more detailed analyses. Therefore, we must begin with data collection. If WMATA made all of the data captured by AIM available to developers, this would be a much easier task. But, as I’ve noted, only train predictions are released, obscuring the actual number of trains in the system and their positions.

So, we must first sample the prediction data. We know that the predictions are updated by AIM roughly every 20 seconds. It is not known how much delay Mashery introduces, so for simplicity we will just assume that new predictions are made available every 20 seconds. Application of the Shannon-Nyquist sampling theorem therefore tells us that we must sample the data every 10 seconds.

Don’t trust Claude Shannon? Here’s an example to illustrate why we have to sample so frequently:

Suppose that we’re polling the PIDS at Metro Center once per minute. In the peaks, sometimes the interval between trains is less than 60 seconds. So, at $T=0$, we might sample the PIDS and find an 8-car train to Glenmont boarding. If we sample again at $T=60$, and once again we see that an 8-car train to Glenmont is boarding, has one train serviced the platform, or two?

We might be able to say with some certainty that two distinct trains had serviced the platform if the observed trains were on different lines, or travelling to different destinations, or if they were different lengths. But if all of the observed characteristics are identical, then we have no way to tell if we saw one train or two, unless we were to have observed, in between the two trains, that the platform was empty (that is, that no train was boarding).
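That matching rule can be made concrete in a short sketch (the observation tuples and the `count_boardings` helper are invented for illustration): two successive identical samples are conservatively merged into one train, unless an empty-platform sample separates them.

```python
def count_boardings(samples):
    """Count distinct trains boarding, given a time-ordered list of
    samples. Each sample is either None (platform empty) or a tuple of
    observable attributes (line, destination, cars). Identical
    consecutive samples are conservatively merged into one train."""
    trains = 0
    previous = None  # start from an empty platform
    for sample in samples:
        if sample is not None and sample != previous:
            trains += 1
        previous = sample
    return trains

# Sampling once per minute, two identical samples merge into one train...
print(count_boardings([("Red", "Glenmont", 8), ("Red", "Glenmont", 8)]))  # 1
# ...but an observed empty platform in between proves there were two.
print(count_boardings([("Red", "Glenmont", 8), None, ("Red", "Glenmont", 8)]))  # 2
```

This is exactly why the sampling rate matters: the `None` observation only exists if we sampled fast enough to catch the platform between trains.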

Once we accept the need to sample at a particular rate in order to avoid missing a train, how often do we sample the predictions? This is where Claude Shannon comes in. As previously introduced, the sampling theorem states that:

> If a function $f(t)$ contains no frequencies higher than $W$ cps, it is completely determined by giving its ordinates at a series of points spaced $1/(2W)$ seconds apart.

The PIDS update every 20 seconds, or at a rate of 0.05 Hertz. Accordingly, we must sample the predictions every 10 seconds. But then what? We’ll have a database of predictions; the sampling rate ensures that we will not miss any. But how do we go from predictions to trains? This remains an open question for me.
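Building that database of predictions could be as simple as a loop like the following sketch, where `fetch_predictions` and `store` are hypothetical callables standing in for the API wrapper and the database layer:

```python
import time

UPDATE_PERIOD_S = 20                    # AIM refreshes predictions every ~20 s
SAMPLE_PERIOD_S = UPDATE_PERIOD_S / 2   # Nyquist: sample twice per update

def collect(fetch_predictions, store, n_samples, sleep=time.sleep):
    """Poll the prediction feed at the Nyquist rate, persisting each
    sample with the wall-clock time at which it was taken."""
    for _ in range(n_samples):
        store(time.time(), fetch_predictions())
        sleep(SAMPLE_PERIOD_S)
```

In practice the loop would run indefinitely and would need to handle API errors and timeouts; the fixed `n_samples` here just keeps the sketch testable.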

Obviously, any time we have a prediction indicating that a train is boarding, we know that there is a train physically at the platform. That’s the only time we don’t have to guess. In all other cases, we have to start guessing. One of the more substantial problems is that it’s hard to figure out where a train physically is, given only its predicted arrival time at a station. The WMATA GTFS feed can be used to find the average travel time between two adjacent stations, and the WMATA API can be used to get the distance between those stations. Using that data, you can estimate how many feet away from the station a train is, given the arrival time. But it’s only an estimate, and almost certainly a bad one.
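As a sketch of that estimate (the link numbers are made up; in practice the average travel time would come from the GTFS schedule and the link distance from the WMATA API), linear interpolation gives a crude position:

```python
def estimate_distance_from_station(arrival_s, avg_travel_time_s, link_distance_ft):
    """Crudely estimate how far (in feet) a train is from the next
    station, assuming it covers the link between adjacent stations at a
    constant average speed."""
    if arrival_s <= 0:
        return 0.0  # boarding or arriving now
    fraction = min(arrival_s / avg_travel_time_s, 1.0)
    return fraction * link_distance_ft

# Hypothetical link: 180 s average running time over 9,000 feet.
print(estimate_distance_from_station(90, 180, 9000))  # 4500.0
```

The constant-speed assumption is the weak point: trains accelerate, brake, and dwell, so the true position could be well off from this interpolation—hence “almost certainly a bad one.”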

Have I mentioned how much easier this would be if there were an API call that would return every train being tracked by AIM and the track circuit being occupied by the head of the train? And have I mentioned the inconsistency inherent in the fact that the API will readily return the position of every Metrobus on the road, straight from OrbCAD, but all we can get from AIM is predictions?

Anyway, suppose we can get an accurate picture of where the trains are, then what can we do with that data? When you can see all of the trains at once, you can detect bunching and gaps. In addition, the PIDS only show predictions for trains arriving in the next 20 minutes, and tend to fail miserably when trains are single-tracking. A real feed of train positions might make it possible to offer better information to passengers during track work and disruptions, when the PIDS are often blank or give bad information.

Finally, with the right data, it should be possible to correlate real-time data with the GTFS schedule, and compute on-time performance—not just as the summary metric that WMATA provides, but along a variety of dimensions: by line, by time of day, by day of week, etc. Many questions have been asked about the performance of Metrorail, and ultimately, more data is the only way to answer those questions.
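Once each observed arrival has been matched to a scheduled trip, slicing along those dimensions is straightforward. Here is a toy sketch (the matched observations are fabricated) computing on-time percentage per line, with “on time” meaning within five minutes of schedule:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Fabricated matched observations: (line, scheduled arrival, actual arrival).
observations = [
    ("Red",    datetime(2012, 6, 4, 8, 0), datetime(2012, 6, 4, 8, 1)),
    ("Red",    datetime(2012, 6, 4, 8, 6), datetime(2012, 6, 4, 8, 14)),
    ("Orange", datetime(2012, 6, 4, 8, 3), datetime(2012, 6, 4, 8, 4)),
]

def on_time_by_line(observations, threshold=timedelta(minutes=5)):
    """Fraction of arrivals within `threshold` of schedule, per line."""
    totals, on_time = defaultdict(int), defaultdict(int)
    for line, scheduled, actual in observations:
        totals[line] += 1
        if abs(actual - scheduled) <= threshold:
            on_time[line] += 1
    return {line: on_time[line] / totals[line] for line in totals}

print(on_time_by_line(observations))  # {'Red': 0.5, 'Orange': 1.0}
```

The same grouping works for any dimension—swap the line for the hour of day, the day of week, or the station—which is exactly the kind of analysis a summary metric can’t support.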

## Context-free trip planning

Jarrett Walker was in town last week, and among other points he emphasized the value of grids, and the value of high-frequency transit services—“frequency is freedom”, as he says. While a regular grid of frequent services makes it easier to get around without having to consult an online trip planner before every trip, many riders still rely on Google Transit and local trip planners to figure out how to get around.

Matt Johnson argues that trip planners should show riders a wider range of options, illustrating how the schedules of connecting services (like bus and rail) mesh.

I’d argue, though, that for a transit system where most destinations are within reach of a high-frequency grid, that the best thing we can do to improve the usability of transit is show fewer times, not more.

If a person is travelling between two points that are served by the high-frequency grid, then what does it matter when they are leaving? When you provide a rider with a rigid itinerary—“here’s how to get there if you leave at exactly 5:17 PM”—you give them the impression that if their departure time changes, then they have to re-plan their entire trip. When high-frequency routes are used, that simply isn’t the case.

When a trip can be taken entirely using high-frequency routes, doesn’t it seem so much more liberating to tell the rider to “show up any time and arrive within 45 minutes”? Simplifying directions like this helps riders internalize the route network, and encourages spontaneity. Instead of having the sense that every transit trip starts with a visit to Google Transit, riders gain the sense that they can travel whenever they want. Once again, “frequency is freedom”.
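One way a trip planner could implement this (sketched here with a hypothetical itinerary structure and headway threshold) is as a post-processing rule: if every leg of an itinerary runs at high frequency, collapse the rigid departure times into a context-free summary.

```python
def summarize_itinerary(legs, max_headway_min=15):
    """Each leg is (route, headway in minutes, ride time in minutes).
    If every leg is frequent, return a context-free summary; otherwise
    fall back to a conventional timed itinerary."""
    if all(headway <= max_headway_min for _, headway, _ in legs):
        # Worst case: just miss every vehicle, then ride every leg.
        worst = sum(headway + ride for _, headway, ride in legs)
        return f"Show up any time and arrive within {worst} minutes."
    return "Timed itinerary required (includes an infrequent route)."

grid_trip = [("Line 720", 10, 20), ("Line 757", 12, 15)]
print(summarize_itinerary(grid_trip))  # Show up any time and arrive within 57 minutes.
```

The worst-case figure assumes the rider just misses every vehicle; a real planner would also account for transfer walking time and for headways that vary by time of day.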

In fact, the worst thing a trip planner can do is recommend that a rider take an infrequent, irregular service just because it happens to be there when the rider is starting their trip. A great example of this is the Route 305 bus in Los Angeles; as Jarrett Walker explains, it’s a low-frequency service which runs through a high-frequency grid:

> That means that the 305 is the fastest path between two points on the line only if it happens to be coming soon. If you just miss one, there’s another way to get there faster, via the much more frequent lines that flow north-south and east-west across this entire area.

So, why should a trip planner ever recommend that a rider take a bus like the Route 305? Doesn’t it make more sense to show them to how to use the high-frequency grid to their advantage?

Our hapless, misdirected rider will doggedly wait for that infrequent route to come along, because it’s what is on their itinerary. But if they’d been given an itinerary which sent them along the high-frequency grid, they’d be on their way a lot sooner.