We don’t need a healthcare platform

This text was triggered by discussions on Twitter in the wake of a Norwegian blog post I published about health platforms.
I stated that we need neither Epic, nor OpenEHR, nor any other platform to solve our healthcare needs in the Norwegian public sector. The Epic people have not responded, but the OpenEHR crowd have been actively contesting my statements ever since. Since many of them don’t read Norwegian, I’m writing this for them and for any other English speakers out there who might be interested. I will try to explain what I believe would be a better approach to solving our needs within the Norwegian public healthcare system.

The Norwegian health department is planning several gigantic software platform projects to solve our health IT problems. And while I know very little about the healthcare domain, I do know a bit about software. I know, for instance, that the larger the IT project, the larger the chance that it will fail. This is not an opinion; it has been demonstrated repeatedly. To increase the chances of success, one should break projects into smaller bits, each defined by specific user needs, and then solve one problem at a time.

I have been told that this cannot be done with healthcare, because it is so interconnected and complex. I’d say it’s the other way around. It’s precisely because it is so complex that we need to break it into smaller pieces. That’s how one solves complex problems.

Health IT professionals keep telling me that my knowledge of software from other domains is simply not transferable, as health IT is not like other forms of IT. Well, let’s just pretend it is comparable to other forms of IT for a bit. Let’s pretend we could use the best tools and lessons learned from the rest of the software world within healthcare. How could that work?

The fundamental problem with healthcare seems to be that we want all our data to be accessible to us in a format that matches whatever “health context” we happen to be in at any point in time. I want my blood test results to be available to me personally, and to any doctor, nurse or specialist who needs them. Yet what the clinicians need to see from my test results and what I personally get to see will most likely differ a lot. We have different contexts, yet the views will need to be based on much of the same data. How does one store common data in a way that enables its use in multiple specific contexts like this? The fact that so many applications will need access to the same data points is possibly the largest driver towards this idea that we need A PLATFORM where all this data is hosted together.

In OpenEHR there is a separation between a semantic modelling layer and a content-agnostic persistence layer. All data points can be stored in the same database(s) – even in the same tables/collections within those databases. The user can then query these databases and get any kind of data structure out, based on the OpenEHR archetype definitions defined in the layer on top. So they provide one platform with all health data stored together in one place – yet the user can access data in the format they need given their context. I can see the appeal of this. It solves the problem.
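
To make the two-level idea concrete, here is a loose sketch in plain Python – not actual openEHR code, and not how any real openEHR server stores data – of generic rows being turned into a context-specific structure by a separate semantic definition. The paths are borrowed from the blood pressure query further down; everything else is made up:

from dataclasses import dataclass

# Content-agnostic persistence: every data point is just (record id, path, value).
# The paths are opaque to the storage layer; only the semantic layer knows what they mean.
generic_rows = [
    ("ehr-123", "data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude", 142),
    ("ehr-123", "data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude", 91),
]

# Semantic layer: an archetype-like definition maps the opaque paths to named fields.
BLOOD_PRESSURE_PATHS = {
    "systolic": "data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude",
    "diastolic": "data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude",
}

@dataclass
class BloodPressureReading:
    ehr_id: str
    systolic: float
    diastolic: float

def project_blood_pressure(ehr_id: str) -> BloodPressureReading:
    """Build a context-specific view from the generic rows."""
    values = {path: value for rid, path, value in generic_rows if rid == ehr_id}
    return BloodPressureReading(
        ehr_id=ehr_id,
        systolic=values[BLOOD_PRESSURE_PATHS["systolic"]],
        diastolic=values[BLOOD_PRESSURE_PATHS["diastolic"]],
    )

print(project_blood_pressure("ehr-123"))

The same generic rows could be projected into completely different structures for other contexts – that is the two-level trick.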

However, there are many reasons not to want a common platform. I have mentioned one already – size itself is problematic. A platform encompassing “healthcare” will be enormous. Healthcare contains everything from nurses in the dementia ward, to cancer patients, to women giving birth, orthopaedic surgeons, and families of children with special needs… the list goes on endlessly. If we succeed in building a platform encompassing all of this and the platform needs an update – can we go ahead with the update? We’d need to re-test the entire portfolio before daring to make any changes. And what happens if there is a problem with the platform (maybe after an upgrade)? Then everything goes down. The more things are connected, the riskier it is to make changes. And in an ever-changing world, both within healthcare and IT, we need to be able to make changes safely. There can be no improvement without change. Large platforms quickly become outdated and hated.

In the OpenEHR case, the fact that the persistence layer has no semantic structure necessarily adds complexity when optimising for context-specific queries. Looking through the database for debugging purposes will be very challenging, as everything is stored in generic constructs like “data” and “event”. Writing queries is complex enough that the recommendation is not to do it by hand, but to create the queries with a dedicated query-builder UI. Here, for instance, is an example of a query for blood pressure:

let $systolic_bp="data[at0001]/events[at0006]/data[at0003]/items[at0004]/value/magnitude"
let $diastolic_bp="data[at0001]/events[at0006]/data[at0003]/items[at0005]/value/magnitude"
SELECT
    obs/$systolic_bp, obs/$diastolic_bp
FROM
    EHR [ehr_id/value=$ehrUid] CONTAINS COMPOSITION [openEHR-EHR-COMPOSITION.encounter.v1]
    CONTAINS OBSERVATION obs [openEHR-EHR-OBSERVATION.blood_pressure.v1]
WHERE
    obs/$systolic_bp >= 140 OR obs/$diastolic_bp >= 90

This is, needless to say, a big turn-off for any experienced programmer.
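
For contrast, here is roughly what the same question looks like if blood pressure readings live in a small, purpose-built store owned by a single application – a purely illustrative Python sketch, not a claim about how any existing system works:

readings = [
    {"patient_id": "p-1", "systolic": 142, "diastolic": 91},
    {"patient_id": "p-2", "systolic": 118, "diastolic": 76},
]

# Find readings indicating hypertension (same thresholds as the AQL example above).
hypertensive = [r for r in readings if r["systolic"] >= 140 or r["diastolic"] >= 90]
print(hypertensive)

The comparison is of course simplified – the AQL version runs against a general-purpose store – but it illustrates the difference in day-to-day ergonomics.
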
The good news, though, is that I don’t think we need a platform at all. We don’t need to store everything together. We don’t need services to provide our data in all sorts of context-dependent formats. We can split health up into smaller bits and simultaneously have access to every data point in any kind of contextual structure we want. We can have it all. Without the platform.

Let me explain my thoughts.

Health data has the advantage of naturally lending itself to being represented as immutable data. A blood test will be taken at a particular point in time. Its details will never change after that. Same with the test results. They do not change. One might take a new blood test of the same type, but this is another event entirely with its own attributes attached. Immutable data can be shared easily and safely between applications.
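
As a sketch, an immutable blood test record could look something like this – the field names are mine and purely illustrative, not a proposed standard:

from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)  # frozen: the fields can never be changed after creation
class BloodTestResult:
    patient_id: str
    test_type: str      # e.g. "HbA1c"
    value: float
    unit: str           # e.g. "mmol/mol"
    taken_at: datetime  # when the sample was taken
    lab_id: str

# A new test of the same type is a new event with its own attributes – never an update:
first = BloodTestResult("p-1", "HbA1c", 48.0, "mmol/mol", datetime(2020, 3, 2, 9, 15), "lab-7")
second = BloodTestResult("p-1", "HbA1c", 44.0, "mmol/mol", datetime(2020, 9, 14, 8, 40), "lab-7")

Nothing about such a record ever needs an update; new information only ever means new records.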

Let’s say we start with blood tests. What if we created a public registry for blood test results? Whenever someone takes a blood test, the results are sent to this registry. From there, any application with access can query for the results, or subscribe to test results of a given type. Some might subscribe to data for a given patient, others to tests of a certain type. Any app that is interested in blood test results can receive a continuous stream of relevant events. Upon receipt of an event, it can apply any context-specific rules and store the data in whatever format is relevant for that application. Every app can have its own context-specific data store.
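
A rough sketch of how this could look from an application’s point of view – the registry client and its methods are hypothetical, and a real registry would be a network service; the point is only the shape of the interaction:

from typing import Callable, Dict, List

# Hypothetical in-process stand-in for the public blood test registry.
class BloodTestRegistryClient:
    def __init__(self) -> None:
        self._handlers: List[Callable[[Dict], None]] = []

    def subscribe(self, handler: Callable[[Dict], None]) -> None:
        """Register a handler that is called for every new result event."""
        self._handlers.append(handler)

    def publish(self, result: Dict) -> None:
        """Called when a new immutable result arrives in the registry."""
        for handler in self._handlers:
            handler(result)

# An application-specific store: it keeps only what this app cares about,
# in whatever shape this app finds convenient.
dementia_ward_store: Dict[str, List[Dict]] = {}

def on_result(result: Dict) -> None:
    # Context-specific rule: this app only cares about a couple of test types.
    if result["test_type"] in {"B12", "folate"}:
        dementia_ward_store.setdefault(result["patient_id"], []).append(
            {"taken_at": result["taken_at"], "value": result["value"], "unit": result["unit"]}
        )

registry = BloodTestRegistryClient()
registry.subscribe(on_result)
registry.publish({"patient_id": "p-1", "test_type": "B12", "value": 310,
                  "unit": "pmol/L", "taken_at": "2020-03-02T09:15:00"})
print(dementia_ward_store)

In a real system the registry would of course need authentication, an event log and so on, but the division of responsibility stays the same.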

Repeat for all other types of health data.

The beauty of an approach like this is that it enables endless functionality and can solve enormously complex problems, without anyone needing to deal with the “total complexity”.
The blood test registry will still have plenty of complexity in it. There are many types of blood tests and many attributes that need to be handled properly, but it is still a relatively well-defined, concrete solution. It has only one responsibility, namely to be the “owner” of blood test results and to provide that data safely to interested parties.

Each application, in turn, only needs to concern itself with its own context. It can subscribe to data from any registry it needs access to, and then store it in exactly the format that makes it maximally effective for whatever use case it is there to support. The data model in use for nurses in the dementia ward does not need to be linked in any way to the data model in use for brain surgeons. The data store for each application will only contain the data that application needs, which in itself improves performance, since the stores are much smaller. In addition, they will be much easier to work with, debug and performance-tune, since each is completely dedicated to a specific purpose.

Someone asked me how I would build an application for
“Cancer histopathology reporting, where every single cancer needs its own information model? and where imaging needs a different information model for each cancer and for each kind of image (CT, MRI, X-ray) +where genomics is going to explode that further”

Well, I have no idea what kind of data is involved here. I know very little about cancer treatment. But from the description given, I would say one would create information models for each cancer, for each type of image, and so on. The application would get whatever cancer data it needs from the appropriate registries, then transform the data into the appropriate structures for this context and visualise it in a way that is useful to the clinician.
We don’t need to optimise for storage space anymore; storage is plentiful and cheap, so the fact that the same information is stored in many places in different formats is not a problem. As long as we, in our applications, can safely know that “we have all the necessary data available to us at this point in time”, we’re fine. Having common registries for the various types of data solves this. But these registries don’t need to be connected to each other. They can be developed and maintained separately.

Healthcare is an enormous field, with enormous complexity. But this does not mean we need enormous and complex solutions. Quite the contrary. We can create complex solutions, without ever having to deal with the total complexity.

The most important thing to optimise for when building software is the user experience. The reason we’re making the software is to help people do their job, or to help them achieve some goal. Software is never an end in itself. In healthcare, there are no jobs that involve dealing with the entirety of “healthcare”, so we don’t need to create systems or platforms that do either. Nobody needs them.

Another problem in healthcare is that people have gotten used to the idea that software development takes forever. If you need an update to your application, you’ll have to wait years, maybe decades, to see it implemented. Platforms like OpenEHR deal with this by letting the users configure the platform continually: as the semantic information is decoupled from the underlying code and storage, users can reconfigure the platform without needing to get developers involved. While I can see the appeal of this too, I think it’s better to solve the underlying problem. Software should not take years to update. With DevOps becoming more and more mainstream, I see no reason we can’t use this approach for health software as well. We need dedicated cross-functional teams of developers, UXers, designers and clinicians working together on solutions for specific user groups. They need to write well-tested (automatically tested) code that can be pushed to production continuously, with changes and updates based on real feedback from the users on what they need to get their jobs done. This is possible. It is becoming more and more mainstream, and we now have more and more hard data showing that this approach not only gives better user experiences, but also reduces bugs and increases productivity.

The Norwegian public sector is planning to spend more than 11 billion NOK on new health platform software over the next decade. We can save billions AND dramatically increase our chances of success by changing our focus – away from platforms and onto concrete user needs and simply making our health data accessible in safe ways. We can do this one step at a time. We don’t need a platform.


25 Responses to We don’t need a healthcare platform

  1. qristin says:

    Comment from Philippe Ameline:

    Short read: you certainly don’t need a platform but unfortunately will actually get one, and openEHR is your best take in this context.
    The technical reason is that health is now more about chronic conditions than acute ones. The result is that, in a domain where information has always been handled in silos (since 1) an acute trajectory is by essence short (healed or dead) and 2) each specialist has a very specific view angle), there is a need for a global vision. The main motivation for a “country wide platform” is always about “continuity of care”: to switch from silos scattered information to a global record.
    Unfortunately, it is a wrong idea (that failed everywhere).
    The global principle that leads most countries to go for a platform is the flawed concept that “a record of records creates a continuity of care record”. It is pretty easy to demonstrate with proper knowledge management concepts: when people with very different view angle have to contribute to a common project, they never share their own, specialized record, they build a common “artefact” (for example, in mechanical fields, a 3D model of what is to be commonly designed).
    Another way to put it is that a “record” is a specialized information repository where each practitioner manages the smallest set of information that enables her own decision making process – hence locally optimizing the signal over noise ratio. Hence a “record of records” where specialized records pile up with no guiding principle eventually becomes the place where signal over noise ratios go to die.
    As a consequence, what should be shared is not a record, but an artefact (as the concept that represents a common ground for the various view angles that should team up inside a multidisciplinary team)… but what is a proper artefact in health, and why wouldn’t it legitimate a platform?
    I have been working on this matter for more than twenty years, and I will give you my two cents, but before, I will tell you why it is useless to talk about it… since you will end up with a platform, whatever the validity of any counter-argument.
    Let’s start with a concept that you nailed in your text: complexity (“because [healthcare] is so interconnected and complex”). I am used to saying that what separates “complicated” from “complex” is that experts tend to converge in the first domain while they naturally diverge in the second one – at large in the left part of the Cynefin framework (https://en.wikipedia.org/wi….
    The framework defines “complex” as the domain where the behavior should be “probe-sense-respond”, that’s to say “experiment and learn from errors”. As well demonstrated by Yaneer Bar-Yam (http://philippe.ameline.fre…, the consequence is that the more our societies become complex and the less they can be hierarchically governed. The top of the hierarchy is by essence located at the largest distance from the field (where people are in contact with the core business). In complicated domains, it is the best location for decision making because it is both far from the everyday noise and converging place for experts’ opinions, but in complex domain, it is, on the contrary, both the farthest point from where “probe-sense-respond” occurs and where diverging experts’ opinions pile up.
    As Yaneer Bar-Yam puts it, we need to switch from the pyramidal organization inherited from the industrial revolution to a meshed (networked) society. My own take is that the current “epidemic of stupid leaders” in our democracies comes from our current inability to jump from our old paradigm to a modern one (the famous monsters nailed by Antonio Gramsci).
    But we are stuck in the ancient paradigm, with people making pure political decisions “from above”, and they will always go for a platform because it is a demonstration of power… and also because their biggest current fear is that Google could build one such system before they have the opportunity to deploy theirs.
    My own take for what is the proper “artifact” is in line with the concept of meshed society: this is the individual “health project” that one can operate in her own “bio-psycho-social bubble”. If there is a platform, it should be your personal one, the place where your own goals get transformed in technical workflows (the core principle of shared decision making). To sum it up, your own “personal platform” should provide the guideline to federate silos instead of relying on a platform designed as a “countrywide hospital information system”. A kind of Copernican revolution.
    OK, so, you don’t need a platform, but you will eventually get one.
    At this point, your best take is by far to go for openEHR. Not only because the alternative is some dysfunctional crap from the US, not only because openEHR is open source, hence plainly auditable, but mainly because openEHR is a flexible system that allows for agility and the kind of “probe-sense-respond” behavior that you will need when the platform will be confronted with the true complexity of an aging population with several chronic episodes of care.
    To end this (too long) comment with a technical argument, I don’t agree with you when you say that a “two level system” is harder to request. It may seem to be, but in real life, you know what the crucial information is and can easily build indexes to query for it.
    For example, when you imagine things as “blood results repositories”, the very information silo “modern IT” is made to eradicate, it perfectly translates as “blood result set of index”. Something like building a set of index specifically targeted for a given view angle instead of building a silo of information for the same purpose.

  2. wolandscat says:

    Your post is predicated on the idea that a ‘platform’ is a big hosting solution and/or a product. That is not what we mean by ‘platform’ in openEHR … at all. For us, a platform is a *public specification* whose implementation, hosting etc, can be distributed, centralised, or anything in between; it can be implemented by (ideally) multiple companies / orgs, and the implementations can be delivered incrementally. See here: https://wolandscat.net/2014/05/07/what-is-an-open-platform/

    BTW, you do need to optimise for storage. It might be plentiful, but it’s not cheap. Try getting a quote for 10TB @ RAID 10 @ 99.99% availability, fully managed – the sort of space you are going to need for e.g. GP data for a small country, not counting any PACS storage. Now, consider that certain choices can cut your persistence needs in half. And look at the savings. Also, bad persistence design is almost always bad for performance.

    With respect to openEHR, no-one goes looking directly through the DB and trying to match AQL queries to raw storage of data. It seems complicated to you because you have not learned about the architecture and you are presumably used to relational systems, or other single-level systems. I recommend you get some experience with a real openEHR system (they have been around for 10 years), then you will have a better understanding. This is not to say they are perfect. Like any advanced technology, it is an R&D process, not a fixed product.

    The biggest problem in healthcare IT is semantic scalability. There is no hope of satisfying healthcare computing needs with naive architectures where the DB schema or UML model is a model of all the medical data in your system. There are systems built like this – most of their data is text, and not computable, and still their DBs are unmanageable. To make healthcare work, you have to have at a minimum an architecture that accommodates huge numbers of data items, terminological concepts, and never-ending change – an adaptive architecture, that can constantly absorb these changes while continuing to run at deployed sites. As soon as you make the semantic definitions part of the DB schema or the software, you can’t succeed.

    App developers can make all the shiny apps they want, but without a standard semantic backbone architecture to plug into, their efforts (and data) will go nowhere. There’s a reason MS HealthVault and Google Health failed, despite unlimited resources – they were naive architectures.

    The architecture of openEHR can certainly be critiqued, but its innate level of complexity is a response to the complexity of the problem, developed over nearly 20 years, not some academic idea invented in a lab detached from reality. So you can go for ‘different’, but you probably can’t go for ‘simpler’. Alternatives will be just as complex, just in different ways.

    • qristin says:

      There are plenty of good things about openEHR and I am obviously not an expert on it like you. And despite the issues I have raised, I would have no problems with teams of developers using it if they feel it brings value to their development efforts. I oppose the idea that to work within healthcare YOU MUST USE THIS TECH/PLATFORM. As mentioned, as soon as you mention “platform” it all too often leads to gigantic projects that are doomed. Secondly, it prevents people from taking advantage of other solutions that might come along in the future, or are already present.
      From your response it also seems like you’ve misunderstood how I propose to store data. I propose that the applications developed be fairly small/narrow in what problems they solve. They should be made for particular use cases. A human can only process a certain amount of data at a time. There is never any reason to present a user interface with zillions of different data points in detail. If you want a good user interface, you need to limit the data to what the user actually needs to see, which won’t be that much. The data storage needs for each use case should therefore not be enormous, and the data types should be tailored for that particular domain. Large databases with non-computable text data are (obviously) not what I’m proposing.
      Another thing: in your application you need some way of knowing that “I have all the data I need”. If health data is distributed – you either need some “master node” that you can query for everything – that all the other nodes synchronize with – OR you need different specific sites for different specific data, so you know where to query for different kinds of data. What I’m arguing for is the latter, that we split responsibility up for different kinds of data. If the team responsible for one type of data chooses OpenEHR as their solution – that’s fine by me.
      Finally – I have yet to hear WHY my approach is not viable. I keep hearing “It’ll never work”, “We need consistent architecture and domain model”. But why though? Which particular use case requires the domain models to be consistent across the entire healthcare system? If we’re always working in teams with close contact with the end users, we should automatically be creating data models that are useful to them. If we’re receiving immutable data from reliable registries to base our application’s models on – why would we need to consider all the other use cases?

      • qristin says:

        It just struck me that my solution requires contact with and feedback from users in order to get the models right.
        With OpenEHR you don’t need users to get useful consistency; the applications will follow the design from up top.
        I think all development should be focused on the end user needs, and my approach would fail if you don’t get input from users.

      • wolandscat says:

        If you focus *only* on user experience and don’t create a library of domain model elements, such as vital signs, all the labs, and the other 35,000 data points in medicine, what will (does) happen is that each application implementer creates their own representation of these things, which doesn’t connect with anyone else’s. The result is thousands of little data silos and applications that are useless in healthcare, because they can’t be part of the overall picture of the patient over time. Dozens of private versions of blood pressure measurement, allergic reaction, and all those 35k data points are clearly not a viable approach – because none of the applications can share data; and no analytics or reporting can compute over the top of all the data. Creating data models useful to the end-user only without reference to all the other users and data requirements is a guaranteed recipe for chaos. It’s how most health IT has been done for the last 35 years, which is why we know it doesn’t work.

      • qristin says:

        Again, I have explained how I’d solve that in my approach. You’d have to have common registries for “base data” like blood pressure, blood test results etc. All apps then get data from these common registries. And _then_, based on these common, well defined data points, they can store data in whatever format they need for the use case they are serving.

  3. wolandscat says:

    “You’d have to have common registries for “base data” like blood pressure, blood test results etc”
    … that’s a platform. A shared public specification. You will discover it needs an agreed type system, APIs etc. And then you will have re-invented openEHR.

    • qristin says:

      Well, not quite. The various base data registries do not have to have any connection with each other or even know about each other. The various user-facing applications only need to relate to the registries with relevant data for what they are doing. My main purpose here is to break things up into smaller pieces that can work independently. Even though few components will be completely isolated, very few will need to connect to EVERYTHING. So we don’t need a “common platform for everything”.

      • wolandscat says:

        If the ‘registries’ are independently developed, you are back to chaos. How much of the models one application uses is irrelevant. Indeed, I agree, it’s common for one app to just be about, say, vital signs. However, when you get to peri-natal care, in-house nursing, oncology, etc. you’ll discover that half the registry is implicated. Ultimately, everything is connected – the data from some simple app, e.g. one counting just foetal heartbeats, will need to be combined in some other system with other pregnancy data. If it isn’t coherent, then nothing will work.

        Many people have studied this for decades. You must surely wonder why your easy solution doesn’t already exist.

  4. qristin says:

    Yes, each application will need to connect to a variety of data sources. Yet the systems interested in foetal heartbeats will probably not be interested in data about dementia or cancer. Health is way too large a field to try to solve as one, IMO. Even though every data point will be interesting in a whole variety of settings.
    The fact that something isn’t commonplace doesn’t mean it’s not a good idea. Every new good idea starts off as something that has not yet been done after all.

    • wolandscat says:

      On the contrary, the ‘nice simple’ approach you are proposing has been tried numerous times. It never works, and indeed is the origin of most interoperability problems we have today in e-health. All the evidence is there. That’s why there are so many people working hard on more sophisticated approaches. These approaches don’t try to solve health as one product, but as one generalised platform, i.e. set of type definitions, models, query language, APIs, and so on. Things that are developed outside any such platform are doomed to be islands.

      • qristin says:

        Ok, since this has been tried so many times, can you send me some information on which public health service has had public registries serving immutable health data over well defined APIs, and then had dedicated cross functional DevOps teams working closely with their respective user groups?

  5. wolandscat says:

    Well, the DevOps, UI/UX and ‘well-defined’ APIs don’t make any difference one way or the other if your info models and semantic definitions are all local. You’re just shipping around incoherent data. That’s the current situation, for which all the poor-quality standards in health IT were invented (but don’t really help, and are continually being replaced by new attempts). User experience is important, but will tell you close to nothing about a coherent model of semantics.

    • qristin says:

      Who said anything about shipping around incoherent data?
      I’m suggesting we have well defined public APIs for specific sets of data. Like blood test results for instance. Then we can have another service that stores and provides blood pressure and EKG measurements? One for MRI results? I don’t know. But the idea is that we have services that provide specific types of data – in a well defined format. Then application developers can subscribe to or query for the data they need, and from them create whatever data structures they need to display to the users.
      From there, they _can_ choose to expose their data via an API too – if so, they would have to define it well so any consumers know what they are getting. But the main idea is that you’d gather the “basic building blocks” in use, and have them stored in well known services.

      • wolandscat says:

        Well either you support the idea of a coherent information model and models of content and process – co-developed, in which case, you are half-way to a platform concept (the rest is mainly around querying, the service view and APIs), or else you think all the models and semantics are locally developed, in which case you will get incoherence – i.e. silos of information all designed differently, that is extremely painful to interoperate. There’s really no escape from this. Most health data that can’t be combined with other kinds of health data is useless. Even if some apps only deal with that one kind of data. The reason is that the view of care providers isn’t isolated BP or heart-rate, but a virtual view of the patient overall – problems, allergies, medications, family history, previous procedures, patient preferences, social situation, recent vitals, recent labs… etc. This is all basic info required to just do basic care. If the models of all this are not coherently designed, you are back to silos. If your ‘basic building blocks’ are not developed according to some common theory and methodological basis, then you have silos. So how these ‘building blocks’ are developed is crucial.

    • qristin says:

      Also, DevOps and user involvement are HUGELY important. The ability to deliver the functionality that users want quickly is VERY important if you want a decent user experience – which is the whole point really. Providing decent user experiences and easing people’s lives within healthcare.

    • qristin says:

      In the original Norwegian post, I briefly described how I once helped make an application form for housing benefits. You logged in with the public SSO, then we fetched your email and phone number from one register, and your name, address and who you lived with from another. We got your income from the tax office, your benefits from the benefits office, information about the building you lived in from another service, and finally we got existing housing allowance information from the housing allowance service. The end result was that you got a pre-filled form, plus an estimate of what you could expect to receive in housing benefits, almost without having to click or enter any extra data.
      None of these services ran on the same platform, but that didn’t matter. And the data we used is also used by tons of other applications, but we didn’t have to concern ourselves with them. It worked very well.
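
      To give a feel for the pattern, here is a self-contained sketch. The “services” are just in-memory stubs with made-up names and numbers; in the real solution each of them was an independent service we called over HTTP:

      def fetch_contact_info(person_id):   # stub for the contact registry
          return {"email": "ola@example.com", "phone": "+47 999 99 999"}

      def fetch_person(person_id):         # stub for the population registry
          return {"name": "Ola Nordmann", "address": "Storgata 1", "household": ["Kari Nordmann"]}

      def fetch_income(person_id):         # stub for the tax office
          return 310_000

      def fetch_benefits(person_id):       # stub for the benefits office
          return 0

      def estimate_allowance(income, benefits, household):
          return max(0, 60_000 - income * 0.1)   # illustrative calculation, not the real rules

      def prefill_application(person_id):
          contact = fetch_contact_info(person_id)
          person = fetch_person(person_id)
          income = fetch_income(person_id)
          benefits = fetch_benefits(person_id)
          return {**person, **contact,
                  "income": income,
                  "current_benefits": benefits,
                  "estimated_allowance": estimate_allowance(income, benefits, person["household"])}

      print(prefill_application("01010112345"))

      The registries I’m proposing for health data would play the same role as the tax office and the population register do here.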

    • qristin says:

      The archetypes you have defined in OpenEHR are probably an excellent starting point for defining the types of data that could be returned from my proposed registries. It makes sense that a considerable amount of thought goes into those. And as I said, if various teams of developers choose to implement their applications using OpenEHR, that’s fine.
      I’m just opposed to the idea of “forcing” everyone onto _A_ platform, be it technological or semantic.

    • qristin says:

      You mention that one will need information about allergies, medications, family history, previous procedures, patient preferences, social situation, recent vitals, recent labs. Absolutely. Just like in the housing allowance example we needed
      email, phone, address, family, building info, income and benefits. Then we took the parts we needed, generated our own data structures, and stored and worked with those. It works fine.

      • wolandscat says:

        If the parts you ‘took’ were a formal and semantically consistent view of the original information structures, you are just talking about local use of those same structures. To communicate interoperably with other systems, DBs, etc, you need to maintain a bidirectional transformation capability. If you have that, you are using the platform. All of this is independent of platform implementations, as I said in my first reply, which you seem to have missed.

    • qristin says:

      I think communication should happen via the passing of immutable messages as much as possible. You receive input in the form of well defined immutable messages from well known services. If your system generates input, you would need to format that information as an immutable data point in whatever format the appropriate service requires, and then send it to that service/registry.

  6. Pingback: God dag Akson! Økseskaft | Hello world

  7. Johannes Brodwall says:

    I know I’m piling on at the edges of your main point, but I can’t help myself: relational databases have a meta model (e.g. on Oracle: select column_name, table_name from all_tab_columns where owner = 'HEALTH'). Why does OpenEHR want to make a meta model on top of a meta model??

    • qristin says:

      Because they need to be able to change the schemas without involving any programmers. That’s the selling point, AFAIK: they can configure the whole thing via GUIs, so they don’t need to wait a decade for a new software update.
