
2019-10-08 | Meta | Resilience is a Disposition

This article introduces my perspective on, and interest in, the resilience of socio-economic-ecological systems. I will explain how resilience is grounded in the present moment and within a system that is experiencing stress. I do not address specific kinds of stressors or speculate about the future collapse of systems, but rather provide a flexible modeling system that facilitates resilience.

I am a typical engineering type. I like to design systems that are stable and adaptable. I'm not a real engineer; I work in IT. Oh, I suppose I have my MCSE, which is a certification with "engineer" in the name, but I don't think it counts, because it is a vendor certification rather than a license from a professional body. I work on some of the same kinds of problems, though, as far as systems go.

For the last six years, I have moved more and more into systems analysis. I interview the people who have an interest in a system: stakeholders, operations, engineering, and sponsors. The system might be used to create public bonds, or it might be used to manage an entire workflow for a company. I help those I interview understand how the entire system works now (as-is), down to all of the little bits of data that flow around. I act as a mirror for them, showing them what I understand, and refine until we all agree. We then try to understand where they would like to be in the future (to-be). Once I understand as-is and to-be, I am frequently responsible for designing a system that fulfills the requirements, capturing the design decisions in a document called a solution description and creating visualizations (diagrams) of how the components fit together.

I've noticed more and more that there is a tendency in organizations to replace an entire system before they understand the as-is. There is also a slightly different problem where people assume they can iterate incrementally toward a system that works for them, before understanding the as-is and to-be. The problem with both approaches is that at some point most of the old ways the system was used will come up again as needs, or as pain that also existed in the previous system. On the other hand, proper analysis takes a significant amount of time. There is a derogatory term for too much analysis and not enough action, analysis paralysis, and I have seen both sides of that trade-off. There needs to be a balance, and over time the balance has shifted away from analysis and towards "Fuck it. Ship it." Part of this is confusion about the focus of analysis. For a UI, it is true that you need to get it in front of a user quickly to find out what works and what doesn't. The underlying machinery, though, the way data flows, how it is secured, how it is used, is something that needs to be understood before you toss the analysis and ship.

One of the most time-consuming parts of analysis, after the interviewing process, is creating the models used to represent the as-is. Usually the to-be is easier, since by then everybody is used to the models and their current process, thanks to the exercise of establishing the as-is. Typically a diagramming tool is used to create the various diagrams that serve as models for an IT effort. Often this is a mix of beige box blocks representing system components and text drawn on lines to signify the kinds of data flowing. Sometimes networking configuration is added to the diagram. I prefer to start with the Gane and Sarson modeling technique, as it has only three symbols, and its convention of zooming in on a process facilitates broad and deep diagrams without making any one diagram too confusing to read. The interviewing part is difficult for me, as there is a lot of information coming in. I limit interviews to an hour and apply the knowledge to the model before the next interview. Rarely do I need more than three interviews for a particular area. Even so, because of this cycle, the analysis of a quickly changing system can be out of date before it is done.

What can we do to make the modeling part of the process faster? Consider Graphviz. Graphviz does all of the routing of the lines automatically, and it turns out that it can handle large data flow diagrams with multiple levels just fine. But it is also a window into something even bigger: the idea of capturing various domains of a system as an ontology. A domain, in this sense, is a perspective related to a set of knowledge. A data flow diagram has groups of people (entities) that serve as a source or sink of data that is transformed by a process before being stored or retrieved via data storage at rest. This model can be built out quite large, but it still exists within the domain of entities, processes, and data stores. A system component diagram of a network might have devices, servers, cloud providers, routers, firewalls, WiFi access points, etc. This would be another domain. Another domain might be docker hosts, individual images, and failover between regions of a cloud provider. Each domain has particular characteristics, a vocabulary of what the objects are and how they are connected. As I mentioned earlier, these domains are often mixed in order to create one diagram. In some cases this can work; however, data flow diagrams can get very large, and mixing in other domains can make the visualization confusing.
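To make the idea concrete, here is a minimal sketch of letting Graphviz do the routing: describe a small data flow as plain data, emit the DOT language, and let the tool handle layout. The process, entity, and flow names here are invented for illustration, and the node shapes are only rough stand-ins for the three Gane and Sarson symbols.

```python
# Sketch: emit Graphviz DOT text for a tiny data flow diagram.
# Rough shape mapping: external entity -> square, process -> box,
# data store -> tab. All names below are hypothetical.

flows = [
    ("Customer", "1.0 Take Order", "order details"),
    ("1.0 Take Order", "Orders", "order record"),
    ("Orders", "2.0 Ship Order", "pending order"),
    ("2.0 Ship Order", "Customer", "shipment notice"),
]

shapes = {
    "Customer": "square",      # external entity
    "1.0 Take Order": "box",   # process
    "2.0 Ship Order": "box",   # process
    "Orders": "tab",           # data store
}

def to_dot(flows, shapes):
    """Build a DOT digraph; Graphviz handles all line routing."""
    lines = ["digraph dfd {", "  rankdir=LR;"]
    for node, shape in shapes.items():
        lines.append(f'  "{node}" [shape={shape}];')
    for src, dst, label in flows:
        lines.append(f'  "{src}" -> "{dst}" [label="{label}"];')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(flows, shapes))
```

Piping this output through `dot -Tsvg` renders the diagram; the point is that the model lives in the data structures, not in hand-placed boxes.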

Many tools exist for storing and managing ontologies; however, they were originally developed for bioinformatics, the world-wide web, and telecommunications. Only relatively recently have these tools been used for IT. One thing you get with an ontology is meaning, and this is another reason to stay within a domain. As an example, if X is a process, Y is a subprocess of X, and Z is a subprocess of Y, then it can be inferred that Z is a subprocess of X. There is a rich language for expressing these kinds of relationships, and software that can infer meaning from these models. Now, if you add a network component X' that X runs over, the implication that Z uses that network component is not valid. What might be valid is if Y' is a subnet of X', and some inferences could be made in that relationship. That is why mixing domains within models is problematic. It is possible to cross domains with references. That is, I can relate a part of a data flow to a network, server, or application component yet still maintain the value of relationships within each domain. The models themselves form what used to be called an "Expert System". My guess is that the value will be 10X over the more labor-intensive manual methods, because of the inference and extensibility of the models.
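The subprocess inference above is just transitive closure over one predicate, which is small enough to sketch directly. Real ontology toolchains (OWL reasoners and the like) do far more, but this toy version, using the X, Y, Z names from the text, shows the mechanism:

```python
# Sketch: transitive inference over subprocess triples.
# Starting facts match the example in the text: Y under X, Z under Y.

triples = {
    ("Y", "is_subprocess_of", "X"),
    ("Z", "is_subprocess_of", "Y"),
}

def infer_transitive(triples, predicate="is_subprocess_of"):
    """Keep adding (a, p, c) whenever (a, p, b) and (b, p, c) both hold."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(inferred):
            for (b2, p2, c) in list(inferred):
                if p1 == p2 == predicate and b == b2:
                    if (a, predicate, c) not in inferred:
                        inferred.add((a, predicate, c))
                        changed = True
    return inferred

closed = infer_transitive(triples)
# (Z, is_subprocess_of, X) is now derivable, as the text describes.
```

Note that a network containment predicate like `is_subnet_of` would get its own closure; the code never mixes predicates, which mirrors the point about keeping inference within a domain.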

One other advantage of using ontological methods to build these models is that it is easier to build models collaboratively. Ontologies come down to just three things that build out the models: subject, predicate, and object. X is_a_subprocess_of Y, for instance. These are called "triples". It is much easier to divide up a system and pin down relationships at this level than it is to have somebody modify an existing diagram. Simply inserting a component or process into an existing diagram is usually labor-intensive. With ontological methods, though, all you need to do is add some triples and redraw the entire diagram automatically. This should improve both velocity in a collaborative setting and raw speed for a single individual. My past experience with interviews and back-and-forth diagrams across different domains makes me think this can be done 10X faster than the methods I've used before.
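The "add triples and redraw" workflow can be sketched as well: with triples as the source of truth, inserting a process between two others is a three-line edit to the facts, and the diagram is regenerated rather than hand-modified. The predicate and process names here are hypothetical.

```python
# Sketch: editing triples instead of editing a diagram.
# "sends_data_to" and all process names are made up for illustration.

triples = [
    ("Intake", "sends_data_to", "Review"),
    ("Review", "sends_data_to", "Archive"),
]

def redraw(triples):
    """Regenerate the whole DOT diagram from the triples."""
    edges = [f'  "{s}" -> "{o}";'
             for (s, p, o) in triples if p == "sends_data_to"]
    return "digraph flow {\n" + "\n".join(edges) + "\n}"

before = redraw(triples)

# Insert a new Approval process between Review and Archive
# by changing only the facts:
triples.remove(("Review", "sends_data_to", "Archive"))
triples.append(("Review", "sends_data_to", "Approval"))
triples.append(("Approval", "sends_data_to", "Archive"))

after = redraw(triples)  # layout is Graphviz's problem, not ours
```

Two people can work on disjoint sets of triples and merge them like any other text, which is where the collaborative velocity comes from.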

The 10X value and 10X velocity advantages mean that the historical friction over analysis paralysis may now be moot. Users still need to be interviewed, but even that could be aided if visualization took seconds instead of minutes. Imagine sitting down with users and having process diagrams come together instantly in the interview. This is why alternative methods involving sticky notes are so popular: they bring a form of visualization and collaboration to the group, and it is useful to see the analysis form in real time. It is still a bit of a waste, though, because you end up having to capture all of those tiny squares somehow, usually by transcribing them into a diagram afterward. It makes more sense to capture immediately.

Another principle that I want to apply to all of this, and it seems to be working across domains, is the principle of reasonable constraints for the sake of simplicity (convention over configuration). What if I constrained the graph of each domain to a filesystem tree? For one, this makes the mechanism visual, kind of like the sticky notes. It also keeps the system from becoming too complicated. This is IT. There is already concern about too much analysis, to the point where analysis work and BA roles are tossed in favor of just putting something, anything, in front of the user and figuring it all out later. I am continuing to develop this constraint using the ideas of ontology. So far I have it working for data flow. I suspect that keeping this constraint across the domains I'm familiar with will work. It also means that it is possible to use pen and paper, as this diagram from 1855 demonstrates. Like sticky notes, these techniques are visual enough that they can be performed without computers. Certainly they would be compatible with Collapse OS and the like. (Tip o' the hat to you, Virgil.)
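One way to read the filesystem-tree constraint: plain paths already encode a tree, so each path segment can become a node and each level of nesting a containment triple. This is a sketch of that convention under my own assumptions; the paths and the `is_subprocess_of` predicate are invented for illustration.

```python
# Sketch: deriving containment triples from filesystem-style paths.
# Each directory level is a node; nesting is the relationship.
# The billing/invoicing paths below are hypothetical.

paths = [
    "billing/invoice_entry/validate_totals",
    "billing/invoice_entry/post_to_ledger",
    "billing/reporting",
]

def tree_to_triples(paths, predicate="is_subprocess_of"):
    """Turn path nesting into (child, predicate, parent) triples."""
    triples = set()
    for path in paths:
        parts = path.split("/")
        # pair each segment with its parent segment
        for child, parent in zip(parts[1:], parts[:-1]):
            triples.add((child, predicate, parent))
    return triples

t = tree_to_triples(paths)
# e.g. validate_totals is_subprocess_of invoice_entry,
#      invoice_entry is_subprocess_of billing
```

Because the model is just directories (or indented lines on paper), it can be maintained with `mkdir` and `ls`, or with pen and paper, which is what makes the constraint compatible with minimal tooling.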

What does this have to do with resilience of socio-economic-ecological systems? Well... As I was learning about ontologies, I ran into this article by Desiree Daniel called Resilience as a Disposition:

"In domains concerned with global change, achieving resilience in socio-ecological systems is highly desired; however making this concept operational in reality has been a struggle partly due to the conflation of the term by these domains. Although resilience is vastly researched in sustainability science, climate change and disaster management for some reason this concept is not dealt with from an ontological perspective. In this paper, the foundation for a formal theory of resilience is laid out. I propose that the common view of resilience as ‘the ability of a system to cope with a disturbance’ is a disposition that is realized through processes since resilience cannot exist without its bearer i.e. a system and can only be discerned over a period of time when a potential disturbance is identified. To this end, the constructs of the Basic Formal Ontology are applied to ground the proposed categorization of resilience. In so doing, I adhere to the notion of semantic reference frames by employing a top-level ontology to anchor the notion of resilience."

What this means is that we can't really design for resilience. Resilience happens at the time the system is faced with stressors or threats. We can design for specific threats, but this is not directly related to resilience. This makes sense if you think of it from a psychological perspective. You can't prepare for every single stressor you might face. You prepare by building the understanding and practices that facilitate resilience as stressors appear. My efforts, porting the various domains of IT analysis and tools from my 30-year career into simple, file-based ontological models, will be useful as we modify our processes in response to threats. Further, over the long term, it will help us answer questions like "Where are we now?" and "Where do we want to be?" in ways that have not been possible before at a small scale. I think that providing answers to those questions quickly and flexibly will help us make better choices in the future.

Now, I have said all of this, but I am just a career IT person with some ideas on making this work better. My ideas will be used by others and improved. This is just what I have to offer, a tribute to her. I know some domains. I am going to capture those using a tree model that I think will be useful to domains outside of IT. For those within IT who want to do this, I can help by mapping their particular combination of IT into this model. I am an IT docent.