The Graph of Life in a Post-Relational World

We now exist in a “post-relational world,” wrote Alex Williams and
Mark Boyd in 2015. We won’t actually understand the full virtues of the graph database model first brought to light by Neo4j, they proposed in The New Stack, until we’ve fully plumbed the changes it has made to the way we live and work. We’ll notice the connections between the objects and the people in our lives once we’ve taken a walk through the graph model of our respective realities, like traversing a city street as tourists.
Stacks, such as they are, tend to be new for the same reasons smartphones tend to be new: They adapt to support new functions and new capabilities. But just like communications technology, software stacks are built upon systems. And systems, like fossils, don’t change.
We build so much dependency upon our systems being just functional enough that we’re afraid to change them, or even to think about what such changes might entail. Our infrastructure is “infra-“ for a reason: we don’t really want to know about it all that much.
The technology that the stability of our world arguably relies upon most of all lies beneath such streets. The island of Manhattan — still the very symbol of American ingenuity, so much so that its destruction in thousands of movies with variable budgets since 1933 has become the icon of our demise as a species — rests precariously balanced atop a brick-and-mortar sewer system, parts of which have been unaltered since 1853.
Folks who believe technology changes every day haven’t spent a great deal of time walking beneath city streets. More often than not, they’re referring to “tech” — the products and by-products of our evolving need as a people to devise better, more efficient ways of working. We just love changing version numbers for things, which is why Samsung has a Galaxy S22 when it didn’t have an S19, and why there was a 6G telecommunications conference in October. One of the thrills of our lives growing up (guilty as charged) was watching our parents’ odometers roll over to some colossal power of 10. Small events carried such thunder, back before our worlds scaled up.
Today beneath the streets, along with the cables, tunnels and decaying sewers, it would appear we would prefer “tech” never to change at all.
Sure, we portray information technology as bright, innovative, enlightening, revolutionary, often granting a fourth or fifth go-round for the fourth industrial revolution. We think because we look out upon the landscape of IT and witness natural bouquets of revolutions, like wellsprings in the desert, that change is some kind of self-compelled, perpetual force that rampantly makes over our lives and our work.
We’re forgetting the obvious: Revolutions happen when evolution doesn’t.
You Say You Want a Revolution?
The original Greek meaning for the term “technologia” had nothing to do with products, devices or even programs. Its two roots equate with “artisanship” and “science” — the fusion of what we create and how we think. In an era before touchscreens and product generations, the Greeks were referring to methods.
Methods do evolve. If we’re being honest, the “new” in “new stack” refers to our continually evolving methods. Moore’s Law, when it worked, was a method of managing two trends simultaneously: how quickly a method improves against how soon the tools that use that method become outmoded. Once managed effectively, you could calculate a price for those tools, and then estimate their rate of depreciation. Moore’s Law wasn’t about driving change. It was about reining it in.
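To see how that pencils out, here’s a toy depreciation model (my own illustration, with made-up numbers; nothing here comes from Moore or the sources above): if the density of affordable compute doubles every couple of years, a tool priced on compute-per-dollar loses roughly half its value each doubling period.

```python
def estimated_value(purchase_price: float, months_owned: float,
                    doubling_months: float = 24.0) -> float:
    """Toy model: a compute tool's value halves once per
    density-doubling period (assumed here to be 24 months)."""
    return purchase_price * 0.5 ** (months_owned / doubling_months)

# A $10,000 machine, two years into a 24-month doubling cadence:
print(f"${estimated_value(10_000, 24):,.2f}")  # -> $5,000.00
```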
There are new stacks in our world, not because our technology changes by itself, but because our methods must evolve, and stacks enable them to do so. New stacks are all about swapping out old technologies with newer ones, while maintaining interoperability. Yes, we can create new functions, new applications, new services with information technology that’s 50 or 60 years old. We certainly keep trying. But we can only keep doing so for so long. Once the need for new methodologies outweighs the capacity for old stacks to fulfill them, the impetus for revolution reaches critical mass.
What happens from that point onward is far from predetermined. Sometimes it’s nothing. Occasionally, it’s astonishing.
Infrastructural shifts are the last things our industries, our economy and our society ever want to do. At the core of our IT infrastructure are systems that were designed to make information easier to manage and maintain, back when memory was stored on ceramic drums. Down here, in what I like to call “the weeds,” someone appears to have missed the revolution. The new stacks being created today are easily five or six generations ahead of the weeds, in terms of architecture, efficiency and performance. Graph databases, for example, are astoundingly functional. Graph methodology is profoundly enabling. That graph methods are not an everyday part of our lives already is an indication of just how much effort will yet be required to finally shut our past technologies off.
Ex Post Facto
Think of how long it will take for 5G telecommunications to finally supplant 4G, even when the latter’s obsolescence goes according to plan (which it already isn’t). Now imagine what it will take for the fundamental structure of the world’s data to shift from a first-generation framework established six decades ago to a second-generation framework that’s been waiting in the wings for over a decade already.
Two phenomena appear to be working against this easy generational shift, and we are repeatedly reminded of their existence by those same folks who try to sell us on the idea that technology evolves under its own power:
- We are drowning in our own data. Evidently our planet generates 2.5 quintillion new bytes of data per day (according to this 2017 infographic). What that visualization misses is how many of those bytes have accumulated over years, even decades.
- We’re unaware of all the data we’re drowning in, specifically because we’re not properly managing it all. Data is the silent, invisible, insidious threat to our well-being, and because we can’t acknowledge it, we can’t eliminate it.
Propping up both of these phenomena is a limited degree of observable truth. For example, we don’t talk very much about the actual data itself, nor the engineering behind the data residing in the cloud and in our data centers. And that’s odd, since we’re really into data centers as a topic, though our excitement tends to focus more on the “center” part than the “data” part.
There’s a handful of possible reasons. First — and there’s substantive evidence to back this up — we’re somewhat embarrassed about the data part of our infrastructure. It’s the older, unseen, camouflaged area we don’t take guests around to see, like the back of our garage or the back of our closet.
Second, there’s the theory subscribed to by Ted Dunning, creator of the Ezmeral (MapR) data fabric and now CTO for data fabric at HPE: Data technology has become so efficient, he explains, that it makes for dull dinner conversation. (I’ve tried discussing this theory with my wife, but she doesn’t seem to pay attention to me.) “Boring is good,” Dunning writes for The New Stack, “if you want to get on with your life by using technology rather than inventing it.”
That sounds hopeful enough for folks who perceive technology as tech: as the product of innovation, as opposed to the act of it. The management of our data requires us to think differently, to stop perceiving infrastructure as boring and to risk being perceived as geeks in mixed company. The corollary of Dunning’s assertion is this: Boring lets us get on with our life, up until we suddenly discover we’re losing the planet upon which that life depends — at which time boredom will become a scarce, beloved commodity.
Here’s my belief: The way we manage our data is not, in and of itself, a threat to the stability of our economy, the well-being of our society or the survival of our species. However, the inefficiencies of our present methods, our technologia, do constrain us from adequately and meaningfully addressing the genuine threats to our society: climate change, resource utilization, class struggles, political polarization, human inequities. We lose track of our tech; thus, we lose sight of our methods.
The Last Full Measure of Devotion
Graph databases have been at the core of a new stack of sorts for well over a decade already. Yet although graph architecture does address fundamental requirements of modern data, especially when that data is sharded across data centers and the cloud, it’s still a novel concept for most of us. Its principal idea is this: Data may be made more informational, and thus more useful to people, by imbuing it at the outset with the types of relationships and properties that otherwise would require significant processing power (and therefore time) just to infer.
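To make that idea concrete, here’s a minimal sketch using Neo4j’s official Python driver (the connection URI, credentials and the little WORKS_FOR model are all hypothetical). The point to notice is that the relationship is written into the database as a first-class record with its own properties, rather than left for a query engine to infer later by matching keys:

```python
from neo4j import GraphDatabase

# Hypothetical AuraDB connection details; substitute your own.
driver = GraphDatabase.driver(
    "neo4j+s://<your-instance>.databases.neo4j.io",
    auth=("neo4j", "<password>"),
)

with driver.session() as session:
    # The WORKS_FOR relationship is stored explicitly, properties and all,
    # at write time; no join or inference is needed to recover it later.
    session.run(
        "MERGE (p:Person {name: $person}) "
        "MERGE (c:Company {name: $company}) "
        "MERGE (p)-[:WORKS_FOR {since: $since}]->(c)",
        person="Ada", company="Acme", since=2019,
    )

driver.close()
```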
Put another way: Solving serious, critical-needs problems with data typically requires systems to infer relationships from vast data stores. The accuracy of those inferences can increase, but mainly by expanding the volume of those data stores. Knowledgeable folks spend valuable time and effort developing artificial intelligence to enhance the power of systems to draw more accurate inferences from smaller and smaller volumes of data.
“Analysis,” I wrote for a college-level textbook published two decades ago, “is the act of extracting more information from a set of data than that data naturally has. . . A computer simply cannot absorb the data from items in inventory in their entirety, and produce a complete analysis automatically. It doesn’t know how. Someone has to tell it.”
If actual intelligence — the human variety — were tasked at the start with giving data the relationships that a database manager or engine would otherwise have to infer, most, if not all, of that AI would become instantly unnecessary. This assumes, of course, that we have tools available to us to make that job both functional and bearable, which is the goal graph databases are working to bring to fruition.
Relationships between the things in our lives, including the random items we encounter while walking city streets, have patterns. If we can reproduce those patterns visually — or rather, graphically — most of the job of translating relationships into databases can be reduced to a matter of drawing circles and arrows.
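In Cypher, Neo4j’s query language, that’s barely even a metaphor: the ASCII syntax literally draws the picture, with parentheses as the circles (nodes) and bracketed arrows as the arrows (relationships). A hedged sketch, reusing the hypothetical WORKS_FOR data from the previous example:

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    "neo4j+s://<your-instance>.databases.neo4j.io",
    auth=("neo4j", "<password>"),
)

# The MATCH pattern reads the way you would draw it on a whiteboard:
#   (person circle) -[WORKS_FOR arrow]-> (company circle)
with driver.session() as session:
    result = session.run(
        "MATCH (p:Person)-[w:WORKS_FOR]->(c:Company) "
        "RETURN p.name AS person, c.name AS company, w.since AS since"
    )
    for record in result:
        print(record["person"], "->", record["company"], "since", record["since"])

driver.close()
```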
This is what Neo4j began demonstrating, way back, on the technological evolutionary scale, at the dawn of time.
We know we should be living in Williams and Boyd’s “post-relational world.” But in recent years, our expectations for living in a post-anything world (in which humans still play a role) rely upon us having been devastated somehow by whatever preceded the “anything”: greenhouse gasses, soil erosion, nuclear meltdown, political upheaval, social uprising, the mass cancellation of superhero movie projects. We walk our city streets harvesting new connections for our collection, like playing Pokémon Go on a Sunday afternoon, yet we conveniently ignore the most obvious connections of all because they’re 1) outside our direct line of vision, 2) huge and 3) decaying.
New stacks do come to fruition, but never on their own. They require effort, risk and patience.
Where to Begin with Graph Databases
- Get started free with Neo4j AuraDB native graph database
- The 3 Underrated Strengths of a Native Graph Database
- Register now for Neo4j NODES 2022 Online Developer Education Summit Nov. 16