Legacy AVL system? It’s okay, join the club.

If you work with real-time transit data, you’ve probably heard the steadily increasing call for data producers to release their data in open, standardized formats like GTFS-realtime and SIRI. But how do you actually make your data available in those formats? Some AVL vendors are beginning to include standards-compliant APIs in their products, and that’s great for agencies considering a new system or major upgrade. But what about the massive installed base of legacy AVL systems that have few open interfaces, if any?

Fortunately, there are ways to get data out of almost any AVL system, whether or not it was explicitly designed with open interfaces. Some of these techniques are more technologically sound than others, and some require relatively tricky programming, but with the right software developer, almost any problem is soluble.

Here are five key strategies for extracting information from an AVL system. The first three are strongly recommended, while the last two should only be undertaken if no better interface is available, and if you have adequate technical support to implement a more complex solution.

  • Transform a proprietary API to GTFS-realtime or SIRI: Many AVL systems (both COTS and agency-homegrown) include non-standard APIs which can, with a bit of programming, be transformed into a modern, standards-compliant API. This is the approach I took with wmata-gtfsrealtime, to produce a GTFS-realtime feed from WMATA’s real-time bus data, septa-gtfsrealtime to produce a GTFS-realtime feed from SEPTA’s real-time bus and rail data, and ctatt-gtfsrealtime to produce a GTFS-realtime feed from CTA’s Train Tracker data. This is also the approach taken by onebusaway-gtfs-realtime-from-nextbus-cli, which converts from the NextBus API, and bullrunner-gtfs-realtime-generator, which converts from the Syncromatics API.
  • Query a reporting database: Some AVL systems can be configured to log vehicle positions, predicted arrival times, and other information to a database. Ostensibly these databases are meant to be used for after-the-fact incident analysis, performance reporting, etc., but there’s nothing stopping an application from polling the database every 15-30 seconds to get the latest vehicle positions and predicted arrival times (see the sketch after this list). Many GTFS-realtime feed producers take this approach, including ddot-avl, built by Code for America to extract real-time information from DDOT’s TransitMaster installation, HART-GTFS-realtimeGenerator, built by CUTR to extract real-time information from HART’s OrbCAD installation, and live_transit_event_trigger, built by Greenhorne & O’Mara (now part of Stantec) to produce a GTFS-realtime feed from Ride On’s OrbCAD installation.
  • Parse a published text file: Similar to the database approach, some AVL systems can be configured to dump the current state of the transit network to a simple text file (like this file from Hampton Roads Transit). This text file can be read and parsed by a translator which then generates a standards-compliant feed, which is the approach taken by hrt-bus-api, built by Code for Hampton Roads, and onebusaway-sound-transit-realtime.
  • Screen-scrape a passenger-facing Web interface: This is where we get into the less technologically-sound options. While the first three options focused on acquiring data from machine-readable sources, screen scraping involves consuming data from a human-readable source and transforming it back into machine-readable data. In this case, that might mean accessing a passenger-facing Web site with predicted arrival times, extracting the arrival times, and using that to produce a standards-compliant feed. This is the approach taken by this project, which screen-scrapes KCATA’s TransitMaster WebWatch installation to produce a GTFS-realtime feed. Compared to options which involve machine-readable data sources, screen-scraping is more brittle, and may make it more challenging to produce a robust feed, but it can be made to work.
  • Intercept internal AVL system communications: This is the last resort, but if an AVL system has no open interfaces, it may be possible to intercept communications between the components of the AVL system (such as a central server and a dispatch console or system driving signage at transit stops), decode those communications, and use them as the basis for a standards-compliant feed. This is a last resort because it will often require reverse-engineering undocumented protocols, and results in solutions which are brittle and will tend to break in unpredictable ways. But, it can be done, and if it’s the only way to get data out of an AVL system, then go for it. This is the approach taken by onebusaway-king-county-metro-legacy-avl-to-siri.
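
To make the first two strategies concrete, here’s a minimal sketch of a translator that polls a reporting database and republishes the latest vehicle positions as a GTFS-realtime feed. It uses the official gtfs-realtime-bindings package for Python; the vehicle_positions table and its columns are hypothetical stand-ins for whatever your AVL system actually logs.

# pip install gtfs-realtime-bindings
import sqlite3
import time

from google.transit import gtfs_realtime_pb2

def build_feed(db_path):
    """Poll the AVL reporting database and build a GTFS-realtime FeedMessage."""
    feed = gtfs_realtime_pb2.FeedMessage()
    feed.header.gtfs_realtime_version = "2.0"
    feed.header.incrementality = gtfs_realtime_pb2.FeedHeader.FULL_DATASET
    feed.header.timestamp = int(time.time())

    conn = sqlite3.connect(db_path)
    # Hypothetical schema; substitute whatever your AVL system logs.
    rows = conn.execute(
        "SELECT vehicle_id, trip_id, lat, lon, updated_at FROM vehicle_positions")
    for vehicle_id, trip_id, lat, lon, updated_at in rows:
        entity = feed.entity.add()
        entity.id = str(vehicle_id)
        vp = entity.vehicle
        vp.vehicle.id = str(vehicle_id)
        vp.trip.trip_id = trip_id  # must match trips.txt in the static GTFS
        vp.position.latitude = lat
        vp.position.longitude = lon
        vp.timestamp = int(updated_at)
    conn.close()
    return feed

if __name__ == "__main__":
    while True:
        with open("vehicle_positions.pb", "wb") as f:
            f.write(build_feed("avl_reports.db").SerializeToString())
        time.sleep(15)  # poll every 15-30 seconds, as described above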

As evidenced by the example links, every one of the strategies mentioned above has been implemented in at least one real-world application. No matter how old your AVL system is, no matter how far out of warranty or how unsupported it is, no matter how obsolete the technology is, some enterprising civic hacker has probably already figured out a way to get data out of the system (or is eager and ready to do so!). Every one of the tools linked in this post is open-source, and if it closely approximates your needs, you can download it today and start hacking (or find a local civic hacker and have them adapt it to meet your needs). And if none of the tools look close? Don’t head for your procurement department and have them issue an RFP—instead, post on the Transit Developers Google Group; chances are your post will make its way to someone who can help, whether a local Code for America brigade, or an independent civic hacker, or another transit agency that has already solved the same problem.

Finally, I’d like to thank the participants in the Disrupting Legacy Transit Ops Software (Moving Beyond Trapeze) session at Transportation Camp DC 2015, who inspired me to write this post.

Reprogramming a u-blox MAX-7Q in-situ on a Raspberry Pi

Suppose you have a u-blox MAX-7Q GPS module connected to a Raspberry Pi, and you need to reprogram the module (for example, to enable/disable certain NMEA strings, change the baud rate, etc.). You could manually construct the various binary UBX strings and send them through gpsd or straight out the serial port, but that’s needlessly complex.
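
For a sense of what that entails: every UBX message is framed by two sync bytes, a class/ID pair, a little-endian payload length, and an 8-bit Fletcher checksum over everything in between. Here’s a rough sketch in Python; the example is a UBX-CFG-MSG set that disables the NMEA GLL sentence on the current port (class/ID constants per the u-blox protocol specification).

def ubx_frame(msg_class, msg_id, payload):
    # Class, ID, little-endian 16-bit length, then the payload...
    body = bytes([msg_class, msg_id, len(payload) & 0xFF, len(payload) >> 8]) + payload
    # ...checksummed with the 8-bit Fletcher algorithm.
    ck_a = ck_b = 0
    for b in body:
        ck_a = (ck_a + b) & 0xFF
        ck_b = (ck_b + ck_a) & 0xFF
    return b"\xb5\x62" + body + bytes([ck_a, ck_b])

# UBX-CFG-MSG (class 0x06, ID 0x01), three-byte payload: target message
# class/ID and output rate. 0xF0/0x01 is NMEA GLL; rate 0 disables it.
frame = ubx_frame(0x06, 0x01, bytes([0xF0, 0x01, 0x00]))
print(frame.hex())  # b5 62 06 01 03 00 f0 01 00 fb 11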

You could also connect the module to a Windows PC and use u-center to reprogram it, but that’s a bit of a nuisance too. If the module is conveniently packaged for connection to a Raspberry Pi, then you aren’t going to have a readily accessible USB port, nor an RS-232 serial port that you could connect directly to a PC. Sure, you could cobble together a USB-serial interface (or a real hardware serial port, rare as they are nowadays) and an RS-232–3.3 volt level shifter like the MAX3232CPE, or just use this convenient cable from Adafruit, which provides a +5 volt supply and 3.3 volt serial interface from USB, but it’s still not quite plug-and-play.

But, there’s an easier way that avoids all of those hassles (although it still requires you to have a Windows PC)—enter socat!

$ sudo socat tcp-l:2000,reuseaddr,fork file:/dev/ttyAMA0,nonblock,waitlock=/var/run/ttyAMA0.lock,b9600,iexten=0,raw

This exposes the Raspberry Pi's serial port at TCP port 2000, so you can connect to it over the network from a Windows PC running u-center. You might think you'd then have to use extra software on the Windows side to get a TCP socket to appear as a virtual COM port, but u-center has support for network interfaces built in. Just enter the Raspberry Pi's IP address and the port number, and it will happily connect to the module via socat.

How should transit agencies make their GTFS available?

To many techies, the question of how transit agencies should make their GTFS available might seem like a silly one. They’d reply that obviously the agency should simply post its GTFS to its Web site at a reasonable URL, and make that URL readily available from the agency’s developer resources page.

Unfortunately, it isn’t nearly so simple in the real world. Instead, many agencies hide their GTFS behind a “clickwrap” license, or even require a login to download the feed. In a few particularly bad cases, developers even have to sign an agreement and return it (on paper) to get access to a feed. Some agencies don’t host their own feeds at all, instead depending on sites like the GTFS Data Exchange.

So, what are some best practices for hosting GTFS feeds?

  • Don’t rely on third parties: Think of this in terms of paper maps and schedules. How would riders feel if a transit agency told them to pick up transit maps and timetables not at the agency’s offices or stations, but from some unrelated third party? If a transit agency has a Web site (as almost all do), then it should be capable of hosting its own GTFS feed. Sure, some agencies will complain about what their content management system “won’t let them do”, or about the arduous process for uploading new content, but in 2014 running a Web site is a basic competency for almost any organization. Depending on a third-party site introduces additional risk and additional points of failure.
  • Help developers discover feeds: Developers shouldn’t have to hunt for GTFS feeds–there should be a prominent link on every agency’s homepage. Bonus points for participating in any applicable data catalogs, like these operated by ODOT and MassDOT for agencies in their respective states.
  • No login, no clickwrap: GTFS feeds should be downloadable by any Internet user, without having to log in or accept a license agreement. This is a must-have for being able to automate downloads of updated GTFS feeds, an essential part of any large-scale passenger information system. Don’t make it needlessly hard for developers to use your GTFS feed – if you can’t download it with wget, then you’re just making work for feed users. The only piece of information a developer should need to know to use an agency’s GTFS feed is the URL—a clean, simple URL like http://www.bart.gov/dev/schedules/google_transit.zip.
  • Support conditional HTTP GET: GTFS feeds don’t change every day, but it’s still important to pick up updates as soon as they’re available, and downloading a large feed (some are 20 MB or more) every day just in case is wasteful. So how can feed consumers stay current without wasting bandwidth? Feed producers should support conditional HTTP GET, using either the ETag or Last-Modified headers (see the sketch after this list).
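
Here’s the consumer side of that handshake, as a minimal sketch using Python’s requests library and the BART feed URL from above. On a 304 Not Modified response, nothing is transferred beyond the headers.

import requests

FEED_URL = "http://www.bart.gov/dev/schedules/google_transit.zip"

def fetch_if_changed(etag=None, last_modified=None):
    # Send the validators from the previous fetch, if we have them.
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    r = requests.get(FEED_URL, headers=headers)
    if r.status_code == 304:
        return None, etag, last_modified  # feed unchanged; skip the download
    r.raise_for_status()
    return r.content, r.headers.get("ETag"), r.headers.get("Last-Modified")

# The first call downloads the feed; subsequent calls return quickly
# unless the feed has actually changed.
content, etag, last_modified = fetch_if_changed()
content, etag, last_modified = fetch_if_changed(etag, last_modified)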

Agencies may balk at some of these recommendations—“But we have to track usage of the feed! But we have to have a signed license agreement!”—but the simple fact is that plenty of agencies get it right: they use a simple, reasonable license and host their GTFS at a stable URL that supports automated downloads. If you demand a signed license agreement, or make developers log in to access the feed, you make it harder for developers to use your data. And when you make it hard for developers to use your data in their apps, you make it harder for transit riders to get service information, because many riders’ first stop when they need transit information is a third-party smartphone app.

Synoptic first!

So, you’re a transit agency (or vendor, consultant, system integrator, etc.), and you’ve decided to develop an API to expose your real-time data. Perhaps you’ve gotten queries from developers like “I want to be able to query an API and get next bus arrivals at a stop…”.

It’s hard to say “no”, but fulfilling that developer’s request may not be the best way to go. If you have limited resources available to expose your data, there are better approaches available, which will in the long term enable the development of more advanced applications.

What’s wrong with the NextBus API?

When it comes to real-time transit data, one of the common refrains is “just use NextBus!”—but while NextBus may be a common name, that doesn’t make them the best choice for providing real-time transit data through a robust open data API. It’s true that NextBus provides an API for developers, but there are problems that hamper or even entirely prevent its use in certain applications.

What are these problems? Some are organizational, and some are technical:

  • API not enabled for all agencies: While NextBus provides service for more than a hundred agencies, only a fraction of those agencies make their data available through the NextBus API.
  • API not standards-compliant: NextBus provides data to developers in their own custom format, rather than using the industry-standard SIRI or GTFS-realtime formats. While NextBus’s API has its advantages for certain types of apps (principally simple mobile apps), for developers working on large-scale passenger information systems, and developers seeking to solve complex problems like real-time routing, there are deficiencies in the NextBus API which could be remedied by using a standardized format. In particular, NextBus makes it exceedingly difficult to get the status of an entire transit system at once. Retrieving data stop-by-stop makes sense for mobile apps, but not for transit data integration platforms like OneBusAway, which benefit from being able to update from a feed containing status updates for all of an agency’s vehicles and trips (see the sketch after this list).
  • Commonality of identifiers: When NextBus agencies also publish a GTFS feed containing their static route and schedule data (which they should), the route, stop, and trip identifiers in the NextBus data should match those in the GTFS feed. When they don’t, it becomes onerous to use the real-time data—developers must expend additional engineering effort to map identifiers between the static and real-time data.
  • Data quality and completeness: Though the NextBus API documentation defines the data elements which developers can expect to find in the API responses, the actual availability of these data varies considerably between agencies. For example, many agencies do not include the tripTag element, which is essential for linking predictions between stops and then to the static schedule. Similarly, some agencies don’t actually provide useful values for the block element. NextBus must impress upon its customers (that is, the transit agencies) the value of supplying high-quality configuration data so that the NextBus API works as intended.
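
To see why the stop-by-stop, route-by-route orientation matters at system scale, here’s roughly what assembling a full-system snapshot from the public NextBus XML feed entails: one request to enumerate the routes, then one vehicleLocations request per route. The agency tag below is illustrative.

import urllib.request
import xml.etree.ElementTree as ET

BASE = "http://webservices.nextbus.com/service/publicXMLFeed"
AGENCY = "example-agency"  # illustrative agency tag

def get_xml(**params):
    query = "&".join(f"{k}={v}" for k, v in params.items())
    with urllib.request.urlopen(f"{BASE}?{query}") as resp:
        return ET.parse(resp).getroot()

# One request just to learn the route list...
routes = [r.get("tag")
          for r in get_xml(command="routeList", a=AGENCY).findall("route")]

# ...then one request per route to cover the whole system.
vehicles = []
for route in routes:
    root = get_xml(command="vehicleLocations", a=AGENCY, r=route, t="0")
    for v in root.findall("vehicle"):
        vehicles.append((v.get("id"), route, v.get("lat"), v.get("lon")))

print(f"{len(vehicles)} vehicles, at a cost of {1 + len(routes)} requests")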

Though the present NextBus API is far from ideal, it is possible to transform the data into standards-compliant GTFS-realtime, which can be fed into any app which uses GTFS-realtime data—but only if the feed has been configured correctly, that is, with meaningful trip IDs, identifiers which match those in the agency’s GTFS feed, and so on. Of all the agencies which use NextBus, the fraction which have both enabled the NextBus API and provided NextBus with the configuration data the GTFS-realtime translator needs is frustratingly small.

NextBus can—and should—do better. Their customers, more than 100 transit agencies in North America, would all benefit from standards-compliant APIs that would allow developers to build apps that work with data produced by AVL systems from all vendors, not just one. This is the essence of open data, and it’s time for NextBus to get on board.

Passively open, actively closed

What do you think of when you hear “open data”? Do you think of hackathons, APIs, data catalogs, perhaps partnerships with Socrata or Mashery, etc.? Do you think of clean data in well-defined formats with ample developer documentation?

Not all open data looks like that. Take Amtrak’s new “interactive train locator map”, for example. You might not know it, but that map is powered by a public dataset stored in Google Maps Engine. As Google’s documentation explains:

There’s an ever-growing number of public datasets available in Google Maps Engine for use by developers in their map or data visualization applications. You may retrieve this data with a simple HTTP request; no authorization is required, and authentication is accomplished through the use of an APIs Console key.

These data, then, are passively open. They are, on a technical level, available for creative reuse, innovation, and incorporation into new transformative projects. But there’s no fancy developer portal, no hackathon, no documentation. The openness of the dataset is more a side effect of having elected to host it in Google Maps Engine than a conscious decision. Once you get the map data, it’s up to you to figure out how to use it—and as for a developer community, well, you’re on your own. It’s not the end of the world, though—in the case of this dataset, it’s mostly self-documenting, and it’s not too hard to build transformative applications with the data.
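
What does that “simple HTTP request” look like? Something like the sketch below, assuming the Maps Engine v1 tables/features endpoint; the table ID and API key are placeholders, and the real asset ID behind the Amtrak map would have to be found by watching the map page’s network traffic.

import json
import urllib.request

TABLE_ID = "EXAMPLE_TABLE_ID"  # placeholder; not the real Amtrak asset ID
API_KEY = "YOUR_APIS_CONSOLE_KEY"

url = ("https://www.googleapis.com/mapsengine/v1/tables/"
       f"{TABLE_ID}/features?version=published&key={API_KEY}")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Features come back GeoJSON-style: a geometry plus a bag of properties.
for feature in data.get("features", []):
    print(feature["geometry"]["type"], feature.get("properties"))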

Unfortunately, sometimes datasets which could easily be treated as passively open are instead made actively closed. Take, for example, GO Transit’s GO Tracker application. The Web application is powered by an XML data feed containing the real-time train data, which would make a great example of a passively open dataset. Instead, it is actively closed to innovation, development, and creative reuse. Try accessing the underlying XML feed outside of the GO Tracker application, and you’ll see that they employ technical measures to control access to the feed. While you could spoof the necessary HTTP headers to gain access, that’s not the sort of thing that comports with open data.

Open data doesn’t necessarily require any special effort. Where there are already APIs and data feeds powering Web applications, all that is required is to allow outside developers to access those same resources. In fact, as in the case of GO Transit, it often takes more effort to shut out developers, building access controls around what would otherwise be easily-reusable open data.

GTFS-realtime for WMATA buses

I’ve posted many times about the considerable value of open standards for real-time transit data. While it’s always best if a transit authority offers its own feeds using open standards like GTFS-realtime or SIRI, converting available real-time data from a proprietary API into an open format still gets the job done. After a few months of kicking the problem around, I’ve finally written a tool to produce GTFS-realtime StopTimeUpdate, VehiclePosition, and Alert messages for Metrobus, as well as GTFS-realtime Alert messages for Metrorail.

The tool, wmata-gtfsrealtime, isn’t nearly as straightforward as it might be, because while the WMATA API appears to provide all of the information you’d need to create a GTFS-realtime feed, you’ll quickly discover that the route, stop, and trip identifiers returned by the API bear no relation to those used in WMATA’s GTFS feed.

One of the basic tenets of GTFS-realtime is that it is designed to directly integrate with GTFS, and for that reason identifiers must be shared across GTFS and GTFS-realtime feeds.

In WMATA’s case, this means that it is necessary to first map routes in the API to their counterparts in the GTFS feed, and then, for each vehicle, map its trip to the corresponding trip in the GTFS feed. This is done by querying a OneBusAway TransitDataService (via Hessian remoting) for active trips for the mapped route, then finding the active trip which most closely matches the vehicle’s trip.

Matching is done by constructing a metric space in which each stoptime, in both the API data and the GTFS feed, is treated as an (x, y, t) point—that is, our notion of “distance” spans both space and time. The spatial components fed into the metric are halved, in order to bias the scores towards matching based on time, while still allowing some leeway for stops which are wrongly located in either the GTFS or real-time data.
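
Here’s a loose sketch of what that metric might look like; the units, weighting, and data structures are illustrative, and the real implementation in wmata-gtfsrealtime may differ in detail.

import math
from collections import namedtuple

Stoptime = namedtuple("Stoptime", ["x", "y", "t"])  # meters, meters, seconds
Trip = namedtuple("Trip", ["trip_id", "stoptimes"])

def stoptime_distance(api_st, gtfs_st):
    # Halving the spatial components biases the match toward agreement
    # in time, while tolerating stops that are mislocated in either feed.
    dx = (api_st.x - gtfs_st.x) / 2.0
    dy = (api_st.y - gtfs_st.y) / 2.0
    dt = api_st.t - gtfs_st.t
    return math.sqrt(dx * dx + dy * dy + dt * dt)

def match_trip(vehicle_stoptimes, candidate_trips):
    # The candidate trip whose stoptimes are nearest, in aggregate, wins.
    def score(trip):
        return sum(stoptime_distance(a, g)
                   for a, g in zip(vehicle_stoptimes, trip.stoptimes))
    return min(candidate_trips, key=score)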

The resulting algorithm will map all but one or two of the 900-odd vehicles on the road during peak hours. Spot-checking arrivals for stops in OneBusAway against arrivals for the same stops in NextBus shows relatively good agreement; of course, considering that NextBus is a “black box”, unexplained variances in NextBus arrival times are to be expected.

You may wonder why we can’t provide better data for Metrorail; the answer is simple: the API is deficient. As I’ve previously discussed, the rail API only provides the same data you get from looking at the PIDS in stations. Unfortunately, that’s not what we need to produce a GTFS-realtime feed. At a minimum, we would need to be able to get a list of all revenue trains in the system, including their current schedule deviation, and a trip ID which would either match a trip ID in the GTFS feed, or be something we could easily map to a trip ID in the GTFS feed.

This isn’t how it’s supposed to be. Look at this diagram, then, for a reality check, look at this one (both are from a presentation by Jamey Harvey, WMATA’s former Enterprise Architect). WMATA’s data management practices are, to say the least, sorely lacking. For most data, there’s no single source of truth. The problem is particularly acute for bus stops; one database might have the stop in one location and identified with one ID, while another database might have the same physical stop identified with a different number, and coordinates that place it in an entirely different location.

Better data management practices would make it easier for developers to develop innovative applications which increase the usability of transit services and, ultimately, improve mobility for the entire region. Isn’t that what it’s supposed to be about, at the end of the day?