There is no more Perl at JP Morgan, the largest investment bank in the US. The last Perl code was decommissioned in 2019.
Disclaimer: I am writing this to illustrate discovering and dealing with ancient critical systems. Please do not interpret it as bashing Perl.
There were a few Perl scripts in the bank circa 2018, totaling on the order of 10k LOC and originally written up to two decades earlier.
That is everything that could be discovered. The search covered repositories totaling 50M lines of code and thousands of servers, everything that could be accessed and accounted for.
I was working there at the time and got the unfortunate task of looking into it. (Tip: if you see emails passing by about some old systems, don't reply, or people might take the opportunity to get you to do the work.)
I say “look into it” and not upgrade or rewrite or decommission because these words are not accurate. The first step in dealing with old potentially-critical systems is to understand them, before considering whether anything can or should be done.
The second step, if any, is to add logging to capture what is actually being used. Whatever turns out to be unused can be decommissioned. Whatever became trivial or obsolete can be worked away (e.g. a call to getcorecount.pl can be replaced by a built-in API call).
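A minimal sketch of that logging step, assuming a Unix host (the log path, function names, and the renamed original script are all illustrative, not the bank's actual setup): a thin shim takes the place of the legacy script, records every invocation, then hands off to the untouched original so callers notice nothing.

```python
import os
import tempfile
import time

def log_invocation(log_path, argv):
    # Append one line per call: timestamp, uid, and the arguments used.
    # After a few weeks, the log reveals who (if anyone) still calls this.
    with open(log_path, "a") as fh:
        fh.write("%s uid=%d argv=%r\n" % (
            time.strftime("%Y-%m-%dT%H:%M:%S"), os.getuid(), " ".join(argv)))

def run_shim(log_path, real_script, argv):
    # Record the call, then exec the renamed original so callers see
    # no change in behavior whatsoever.
    log_invocation(log_path, argv)
    os.execv(real_script, [real_script] + list(argv))

# Demonstration against a throwaway log file (a real shim would use a
# fixed path and then exec the renamed original, e.g. getcorecount.pl.real):
demo_log = os.path.join(tempfile.mkdtemp(), "legacy_usage.log")
log_invocation(demo_log, ["--host", "prod42"])
print(open(demo_log).read().strip())
```

The point of the exec hand-off is that the shim is invisible: the original script's behavior, exit code, and output are preserved exactly, while the log accumulates evidence for the decommissioning decision.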
More than half of the scripts turned out to be unused (or close enough) and could be safely decommissioned, leaving a few scripts to deal with, totaling a couple thousand lines.
What was the last Perl system to die?
The last bits of Perl active in production were written circa 2006 and practically untouched since, as per the source control history.
They were building pieces of a storage subsystem. It is similar to AWS S3, to summarize it in a word.
Perl was known in its era as a glue language, used to write short scripts that glue things together, like a bash script but one notch above in complexity. It shouldn't come as a surprise that Perl code was gluing some low-level things together here.
While the Perl bits were left mostly pristine since inception, the rest of "the whole" was not.
Storage is a fundamental need of many projects. What started as a building block hacked together over a few weeks quickly grew in usage across the business. Additional integrations and APIs were created across the company to interface with it (not in Perl and not necessarily in the master repo). It got used extensively and built upon some more.
Over the decade, it became the go-to solution to store all kinds of data: daily reports, risk analysis results, market data, past and future transactions. Consider any use case that could be covered by a csv/xlsx/pdf (and drive billions of dollars in a bank).
[Note: this means both human and programmatic usage, the latter being the most sensitive to deal with. Systems depended upon by humans are inherently limited in criticality and impact (humans can only click as fast as physically possible, and would raise the alarm on things not working as expected). Automated systems, on the other hand, have an infinite potential for catastrophe: they can fail endlessly and repeatedly at "machine speed" and drive other dependent systems to fail, sometimes silently.]
Of course, out of the thousands of applications and developers relying on it, none suspected they were ultimately depending on a bunch of Perl. There are maybe two people who will recognize the topic. (In any organization, no matter how large, there are only 1-3 people who are familiar with any given system.)
At the end of the analysis, it was clear that the Perl was driving billions of dollars per day, possibly trillions. It is a contender for the top 3 most critical systems in the bank, as its usage is ubiquitous across JP Morgan. It may be one of the most critical systems on the planet, as the bank drives a good chunk of the US economy and the world's.
The conclusion is plain and simple: this can't be decommissioned, because it's used.
There was a long list of reasons to take action (anything you can imagine most likely held true) that I won't enumerate; the bank doesn't have a culture of full disclosure, and I agree that some things are better left undisclosed. I can maybe point out that the last nail in the coffin involved severe vulnerabilities (emphasis on the plural) that couldn't be fixed. This gave initial grounds to investigate, quickly followed by a dreadful sense of urgency as more issues came to light.
It would have to be rewritten instead, into something actively maintained and understood by developers.
That something is python.
Before you ask… python 3???
Nope, not python 3. Think 2018-2019 in an enterprise environment, so no python 3 in sight.
The rewrite would run on python 2.6, because python 2.6 is great and stable… and more importantly, it's the version shipped with RHEL 6, where this was running.
[While the replacement runs on python 2.6, it also works on python 2.7 and python 3.7, in anticipation of the servers being upgraded to RHEL 7 and then RHEL 8. It's possible to make small software compatible with all of these, with extreme care and scrupulous management of dependencies. The original project worked well for over a decade; the rewrite is meant to live equally long. The next maintenance stop could be in 2030, if adjustments are required to run on RHEL 9 and future python.]
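To give a flavor of what 2.6-through-3.x compatibility looks like in practice (my illustration, not the actual rewrite): with `__future__` imports and a couple of guarded aliases, a single file can behave identically on every interpreter in that range.

```python
from __future__ import print_function, division  # both available since 2.6

import sys

# sys.version_info has no named fields on python 2.6 (they arrived in 2.7),
# so index the tuple instead of using .major
PY2 = sys.version_info[0] == 2

if PY2:
    text_type = unicode  # noqa: F821 -- only evaluated on python 2
else:
    text_type = str

def to_text(value, encoding="utf-8"):
    # Return a text string on any supported interpreter, whether the
    # input is bytes ("str" on 2.x) or already text.
    if isinstance(value, bytes):
        return value.decode(encoding)
    return text_type(value)

print(to_text(b"hello"))   # prints hello on 2.6, 2.7 and 3.x alike
print(7 / 2)               # prints 3.5 everywhere, thanks to the division import
```

The discipline is less about clever idioms than about restraint: sticking to the intersection of the language versions and avoiding dependencies that might drop support mid-decade.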
Long story short, the scripts were rewritten and tested extensively. (I'm skipping the narrative on testing extremely critical software of that magnitude, or the article would be 50 pages longer.)
The rewrite went live in production…
And it worked fine on the first try.
End of story.
- Perl is dead
- The world runs on Python (2.6)