It's been a fantastic ride -- I'll continue to be involved in the Direct Project and the S&I Framework, but on the community side now.
An overdue update on pilot production usage of Direct Project specifications:
As of February 25, 2011, Heartland Regional Medical Center (St. Joseph, MO) is sending production clinical information to Living Community of St. Joseph (a long term care facility). Direct is replacing traditional ad hoc fax communications, carrying primarily lab results and discharge summaries to facilitate transitions of care for the residents of that facility.
OK, it seems I'm a bit behind the times. Yesterday, I mused about a capability based on Direct Project specifications for EHRs to send information to a personally controlled Direct Address, and I immediately learned that it's already happening. That's, incidentally, one of the "emergent" properties of standards: people and organizations independently do innovative things around them. We intentionally designed Direct to be a specification for "any-to-any" secure, universal directed exchange based on a universal address to enable that kind of emergent behavior.
Dr. Paolo Andre yesterday blogged about how he is using the Care360 EHR to send information to a patient's personally controlled health record on HealthVault. This illustrates a number of interesting things:
To recap across the set of announcements the last couple of weeks:
We are almost playing production bingo with the Direct Project User Stories.
I'm pleased to echo a couple of blog posts about our newest Direct Project production environment, the HealthVault Message Center.
A number of our implementation geographies will be opening up the ability to send data to the HealthVault Message Center. Individuals in Minnesota will be able to designate that their immunization data be sent to their HealthVault account in conjunction with the pilot activity with Hennepin County Medical Center, and we'll see more of this as more pilots move to production. What's particularly elegant is that the same transport mechanism supports sending both to the Minnesota Department of Health and to HealthVault, and will support sending to any other PCHR that supports Direct, without any additional work by Hennepin County Medical Center.
My belief is that as we see more EHRs supporting Direct, we'll open up a world where individuals can designate a certain Direct Address as their consolidation point for their health data. That might be a PCHR, like HealthVault, a point at my medical home, or the regional HIO that serves my community. Or, it could be a combination; perhaps I want my primary care provider (my medical home) and my PCHR to each get a copy of my encounter summary, laboratory results (if allowed in the state where I live), discharge summary, etc.
Universal addressing and universal transport open up a wide set of options.
I am incredibly pleased to announce today the first two production networks using Direct Project specifications.
VisionShare has enabled Hennepin County Medical Center to send immunization information to the Minnesota Department of Health. Testing of immunization (or syndromic surveillance) communication to a public health agency is a requirement for Meaningful Use incentives.
Rhode Island Quality Institute has implemented provider-to-provider health information exchange supporting Meaningful Use objectives with Dr. Al Puerini and members of the Rhode Island Primary Care Physicians Corporation.
These are the first two of many to go live: the Direct Project homepage now has an interactive map listing all of the implementation geographies that will be going live later this year. This milestone is a significant accomplishment by all the members of the Direct Project community, and I would like to thank you all for your dedication, hard work and sweat equity in the critical mission we are engaged in and the significant progress we have made to date.
In conjunction with the go-live, there was an HHS event in the Humphrey Building Great Hall to spotlight the implementation pilots. The event was keynoted by Dr. David Blumenthal, National Coordinator for Health Information Technology, and Aneesh Chopra, US CTO; closed by Todd Park, HHS CTO; and emceed by Dr. Farzad Mostashari, Deputy National Coordinator at ONC. It presented the stories of some of the members of the community who helped make this happen, including VisionShare, Allscripts, Microsoft and Dr. Puerini. I wish the stage and event were big enough to tell the stories of everyone who has been involved in the Direct Project -- there are so many stories to tell. One of those key stories is that of the "spiritual father" of the Direct Project: Wes Rishel, in his Simple Interop series of blog posts.
In addition, Steve Lohr wrote about the events for the New York Times and Mary Mosquera wrote about it for Government Health IT News. Peter Neupert at Microsoft and VisionShare also made some significant announcements in conjunction with this event.
We'll update this blog post with more information about the event as it comes in.
Many of you have already received invitations to participate in one of the Standards and Interoperability Framework Initiatives that the ONC has launched. If you haven't received the call for participation, or if you are confused about what is being asked of you:
The Standards and Interoperability (S&I) Framework was launched on January 7th and is currently undergoing a call for participation during the month of January. ONC is looking for volunteers to collaborate on interoperability challenges critical to meeting Meaningful Use objectives for 2011.
The two main initiatives are:
Transition of Care, to address interoperability of a set of core information that needs to be exchanged at transitions of care
Lab Interface, to reduce the cost and time to implement new laboratory results interfaces in ambulatory settings
In addition, we are working on a focused initiative with HL7, IHE, HealthStory and VLER:
Consolidation Project, to consolidate and harmonize the information needed to create compliant templated CDA documents, including C32 constrained CCDs
You may also want to be aware of the calendar of events associated with the initiatives.
This post will hopefully sum up a good deal of what is taken for granted within the Direct Project implementation group but causes confusion outside it.
It touches a bit on John Halamka's latest post and on the feedback from the HITSC Privacy and Security Workgroup on XDR.
I started the work on the Direct Project with my neighborhood in mind. In the SF Bay Area, there are a large set of HIOs that share information within clinical networks, but there are also large numbers of real-world clinical transactions that cross clinical networks on an hourly basis. The combination of Universal Addressing and Universal Transport was a powerful idea that could help coordinate transitions of care across organizational boundaries.
Each of those clinical networks, however, uses proprietary mechanisms for information sharing within the network. Is it the goal of the Direct Project to replace those networks?
The definition of "Direct Project Compliant" that we have been using (at least informally) in the project is pretty simple: a participant can send and receive from any other participant with a Direct Address, given a common trust fabric. That does not mean that the "last mile" connection necessarily needs to be SMTP + S/MIME. It may mean that there is an XDS or XDR or a proprietary connection to an HIO, and the HIO has a Direct gateway that allows for sending and receiving from other Direct Project Compliant users. It may also mean that two HIOs mutually agree to use XDR or REST or another protocol to bridge between them. The semantics of the directed message, however, need to be preserved (this is what the sometimes misunderstood specification on using XDR and XDM for Direct messaging is all about: specifying a well-defined mechanism to bridge between SMTP + S/MIME and XDR while preserving directed messaging semantics).
For a provider or hospital that already has the capability and the trust fabric to share laboratory, discharge, referral, and consult information robustly across organizational boundaries, that is using the capability to improve quality and health outcomes in both a patient-centered and population-centered way, and that does so with SOAP or REST transport (such as the NEHEN example John Halamka provided), the Direct Project has no interest in replacing those mechanisms of transport with S/MIME and SMTP.
That being said, there will, over time, be a benefit for using Direct Project specifications for the last mile. It will often be easier to connect EHRs that have built-in support for Direct Project specifications, and have built-in workflow for receiving structured data over that transport, particularly if, as John Halamka notes, the HITSC recommends that support for Direct Project specifications and common X.509 certificates become a certification criterion.
If this seems familiar, it parallels the evolution of the common e-mail network. In the beginning, e-mail was fragmented into proprietary networks (e.g., AOL, CompuServe). Over time, those proprietary networks added SMTP gateways so their members could send and receive from other e-mail users. Those gateways did not replace the existing transports for sending and receiving from other CompuServe or AOL members, and even now BlackBerry users receive encrypted content over HTTP connections, and a Gmail user sending to a Gmail user will use a different transport mechanism from a Gmail user sending to a Hotmail user. However, the SMTP standard allows any device to plug into any standard network, and any e-mail user can send and receive from any other e-mail user. Seems like a good pattern for health information exchange.
John Halamka today posted on a topic that has been a huge item of historical discussion in the Direct Project: whether we can enable simple, direct, scalable and secure transport in support of meaningful use based just on TLS.
Our conclusion was that you couldn't, at least not simply, and we felt that S/MIME was the natural proven fallback.
To explain why, I'm going to go moderately deep on bits and bytes in a rather long post. I apologize in advance. If you want to get to the meat, go to the paragraph that starts "so why is this a problem?"
First, I want to distinguish server authenticated TLS from client (or mutually authenticated) TLS. Server authenticated TLS is the form you are likely most familiar with. You connect to a website using SSL (https), and your browser's lock turns green (on some sites that use Extended Validation (EV) certificates, the lock also shows the name of the organization you are connecting to). Your browser is checking that the server's certificate chains up to a root CA the browser trusts, and that the certificate matches the domain you are connecting to.
This gives you, the shopper, assurance that the channel is encrypted and that the server is not spoofing the organization it claims to be (for example, it ensures that when you shop at Amazon, you aren't actually shopping at BadHacker.com, which is siphoning off your credit card and CVV numbers and not shipping you any books).
Your browser has a list of root certificates (for example, this is the list of roots approved by the Mozilla Foundation for Firefox) that it trusts. As I mentioned above, there are two kinds of trusted SSL certificates in common use: the ordinary kind, that verify that the certificate holder actually controls the domain in question, and the EV kind, that verify that the certificate holder is the actual legal entity it purports to be.
So far, so good. This approach works well, and promotes a good ecosystem of certificate issuers who compete on cost without lowering overall quality.
The only problem is that this approach authenticates the server, but not the client. In the context of information exchange, it would give assurance that the receiver of information was the individual or organization it purported to be, but would give no assurance whatsoever about the sender. Oops.
If you want to authenticate the client and the server in a TLS-encrypted channel, you can use mutual authentication. In this mode, the server presents its certificate and encrypts the channel, as before, and then requests that the client present its certificate. If the client trusts the server and the server trusts the client, the transaction can proceed. Brilliant!
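As a concrete sketch of the difference between the two modes, here is how each is configured with Python's standard ssl module (the certificate file paths are hypothetical and left commented out, since they depend on deployment):

```python
import ssl

# Server-authenticated TLS (the common https case): the client verifies
# the server's certificate; the server asks nothing of the client.
client_ctx = ssl.create_default_context()           # trusts the platform root bundle
assert client_ctx.verify_mode == ssl.CERT_REQUIRED  # client always checks the server
assert client_ctx.check_hostname                    # ...and that the name matches

# Mutually authenticated TLS: the server additionally demands a
# certificate from the client and verifies it against a CA it trusts.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED          # reject clients without a trusted cert
# server_ctx.load_cert_chain("server.pem", "server.key")   # hypothetical paths
# server_ctx.load_verify_locations("client_ca.pem")        # the one trusted client CA
```

Note that the commented-out `load_verify_locations` line is exactly where the single-root constraint discussed next enters the picture.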
Here was the problem we ran into: in the TLS negotiation process, the server presents one and only one certificate path. In typical implementations*, the server also verifies the client certificate against one and only one CA root. That means that for every client-server connection, the server and the client have to share the same root CA.
So why is this a problem? Well, if everyone lives in the same exchange, it's not a problem. The exchange hands client certificates to everyone, and everything works. If people use different exchanges, it falls apart. Consider a typical practicing physician:
This is not a crazy situation; in fact, the real world will likely have more exchanges in the mix (many EHR vendors are building exchange capabilities into their products). Given the way TLS currently works, for this model of real world exchange to work, every exchange would have to use the same CA. If StateHISP got its certificate from Verizon, GreatEHRCo got its certificate from Verisign, and GeneralIDN got its certificate from Thawte, they couldn't connect using the mutually authenticated TLS of today.
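The constraint can be modeled in a few lines (a toy sketch, not real PKI): each party presents a certificate issued by one CA and trusts exactly one CA root, and mutual authentication succeeds only when each side's certificate chains to the root the other side trusts. The organization and CA names come from the example above; the clinic is a hypothetical participant inside StateHISP's trust community.

```python
# Toy model of the single-root constraint in typical mutual-TLS setups.
def can_connect(client, server):
    """Mutual TLS succeeds only if each side's certificate was
    issued by the one CA root the other side trusts."""
    return (client["issued_by"] == server["trusts"]
            and server["issued_by"] == client["trusts"])

state_hisp  = {"issued_by": "Verizon",  "trusts": "Verizon"}
clinic      = {"issued_by": "Verizon",  "trusts": "Verizon"}   # hypothetical member
great_ehrco = {"issued_by": "Verisign", "trusts": "Verisign"}

print(can_connect(clinic, state_hisp))       # True: same root everywhere
print(can_connect(great_ehrco, state_hisp))  # False: different roots, no connection
```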
If you take anything away from this long post, take this: using TLS with mutual authentication would require central coordination and configuration. The browser bundle I mentioned above provides a market-based approach: anyone can stand up a CA and, assuming they pass validation by WebTrust or a similar credentialing organization, play on an equal footing with the established CAs. In the single-rooted world, by contrast, there would be only one master CA. Even assuming ONC provided that single master CA root or bundle, getting the operational and governance mechanisms in place to make that work, rolling it out nationwide, and enabling a robust market for identity assurance would take us a ways out into the future.
In addition, authentication in this approach is explicitly machine to machine, not organization to organization. To get organization to organization authentication, the SNI extension would be needed because TLS servers provide a single server certificate for the IP address on which the server runs. In cases where the same machine is used for multiple organizations (e.g., an HIE or a HISP), SNI allows virtual hosts to be used on the same IP address. Unfortunately, SNI is not well supported across web servers or SMTP servers. Without SNI, authentication is exchange to exchange, not organization to organization, which bumps the policy requirement for identity assurance to the intermediary.
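A sketch of what SNI-based certificate selection looks like in Python's ssl module may make this concrete (the hostnames and certificate file names are hypothetical; a real server would load a full certificate chain per organization rather than record a filename):

```python
import ssl

# One server socket, multiple organizations: with SNI the server can
# pick a per-organization certificate based on the hostname the
# client asked for during the handshake.
certs = {
    "hospital-a.example.net": "hospital_a.pem",  # hypothetical cert files
    "clinic-b.example.net":   "clinic_b.pem",
}

def choose_cert(ssl_socket, server_name, initial_ctx):
    # A real callback would build a context and call load_cert_chain;
    # here we only record which organization's cert would be served.
    ssl_socket.chosen = certs.get(server_name)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.sni_callback = choose_cert  # invoked with the SNI name mid-handshake
```

Without this hook (or on stacks that lack SNI support), the server can only ever present one certificate per IP address, which is exactly the exchange-level rather than organization-level authentication described above.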
There are inventive ways around this problem: using TLS extensions like SNI and aggressive (and by aggressive, I mean aggressive) use of DNS RRs like SRV and NAPTR and CERT to allow discovery of the certificate in use by a particular exchange participant, or using server-auth TLS and client-signed OAuth bearer tokens. But it ends up being pretty knotty and relies on a technology stack that is not well supported. We tried. We really wanted TLS to work.
S/MIME, by contrast, works today, is well supported, has multiple interoperable implementations, runs on the PKI infrastructure in common use today, is well described by the IETF, and is forward compatible with a future where we have authentication and digital signatures at the individual provider level. That's why we went that direction in the Direct Project, and I feel just as good about that decision today as when we made it.
* The TLS spec supports multiple CAs for client authentication but only one cert chain for the server. Multiple client CAs are supported in Apache and IIS, but not in Nginx.
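Since S/MIME rides on ordinary MIME messages, the sketch below assembles the kind of message a Direct sender would then sign and encrypt. The addresses and payload are hypothetical, and the cryptographic wrapping itself (producing the outer application/pkcs7-mime entity) needs a crypto library beyond the Python stdlib used here.

```python
from email.message import EmailMessage

# The message a Direct sender would hand to the S/MIME layer:
# a plain MIME message with a clinical document attached.
msg = EmailMessage()
msg["From"] = "drsmith@direct.examplehospital.org"  # hypothetical Direct Address
msg["To"] = "jane.doe@direct.examplepchr.org"       # hypothetical recipient
msg["Subject"] = "Discharge summary"
msg.set_content("Discharge summary attached.")
msg.add_attachment(b"<ClinicalDocument/>",          # hypothetical CDA payload
                   maintype="application", subtype="xml",
                   filename="summary.xml")

print(msg.get_content_type())  # multipart/mixed: text body plus attachment
```

Because the structure is just MIME, any S/MIME toolkit that can sign and encrypt a message body can carry it, which is the interoperability point being made above.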
On Friday, the HITSC Privacy and Security Workgroup reviewed the Direct Project specifications. Their findings are on the HITSC website. I would like to thank the Workgroup for the time and thought they put into the review, and would like to thank Dixie in particular for leading the Workgroup through the review process.
The Workgroup found that the core Direct Project SMTP + S/MIME specification met the goals of a secure transport and is scalable for the purpose intended (although it should be marked as less applicable for large (GB and higher) file size transport). In addition to that finding, the Workgroup also noted:
The core specification is "messy"
We agree -- the original version of the SMTP + S/MIME specification was a mixture of design advice and specification that grew out of the creation of the reference implementations. On review, if you do not already know what it says, it is hard to read it to find out. We are in the process of revising the specification for editorial cleanup and readability.
The specification implies a key policy decision -- that it is OK for a receiver to reject unstructured content.
The discussion on this point was interesting. John Halamka has a good summary of the discussion and some e-mail based follow-on. The key point is that the policy decision (for instance, that sophisticated receivers should be open to accepting electronic data from senders with less sophisticated technology) should be separated from the technology question (for instance, that in some cases receivers expecting particular healthcare formats should reject content that is not what is expected).
The use of DNS for certificate discovery should not be mandated
The HIT Policy Committee just published recommendations (link forthcoming) on provider directories, which call for certificate discovery as a key directory attribute. We expect a great deal of innovation here. DNS may be an appropriate option for the short term; long term, the strategy should align with the standards for provider and organizational directories.
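For context on what the DNS option looks like: as I recall the Direct certificate discovery convention (treat the details as an assumption and check the specification), an address-bound certificate is published as a DNS CERT record at a name formed by replacing the '@' in the Direct Address with '.', with a domain-bound certificate at the domain itself as the fallback. A sketch of the name derivation:

```python
# Derive the DNS names to query for a participant's certificate,
# per the Direct DNS certificate discovery convention (as recalled;
# verify against the current specification before relying on this).
def cert_lookup_names(direct_address):
    local, _, domain = direct_address.partition("@")
    return [f"{local}.{domain}",  # address-bound CERT record name
            domain]               # domain-bound fallback

print(cert_lookup_names("drsmith@direct.examplehospital.org"))
# → ['drsmith.direct.examplehospital.org', 'direct.examplehospital.org']
```

The same derived names could just as easily be resolved against a provider directory rather than DNS, which is why the discovery mechanism and the directory strategy can evolve together.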
TLS and S/MIME wrapping should be removed as optional components for already encrypted S/MIME content
The TLS issue is, I think, not an issue. The SMTP infrastructure already has the ability to upgrade to TLS on the fly so encouraging TLS support does not impede interoperability. The recommendation on message wrapping is that the complexity of wrapping the full message does not justify the benefit (protecting headers) and that the risk of subject and other headers containing PHI should be mitigated by policy.
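The opportunistic upgrade can be sketched in a few lines. This is a hedged illustration, not part of any Direct reference implementation: `deliver` and its acceptance of any smtplib.SMTP-like connection object are my own framing.

```python
import smtplib  # a real connection object would be smtplib.SMTP(host)

# Opportunistic TLS on an SMTP hop: upgrade when the peer advertises
# STARTTLS, deliver in the clear otherwise. Either way the S/MIME
# payload stays end-to-end protected; TLS only adds channel privacy
# for routing headers. `smtp` is any smtplib.SMTP-like object.
def deliver(smtp, msg):
    smtp.ehlo()
    if smtp.has_extn("starttls"):  # peer advertises the upgrade
        smtp.starttls()
        smtp.ehlo()                # re-greet over the now-encrypted channel
    smtp.send_message(msg)
```

Because the upgrade is negotiated per hop, a sender needs no prior knowledge of the receiver's TLS support, which is why encouraging TLS does not impede interoperability.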
There were some important recommendations regarding XDM and XDR that I will address in a separate post.
Newly posted to the FACA blog (the blog of the HIT Policy Committee and HIT Standards Committee) is a request for comment on Standards and Interoperability Framework Initiatives and priorities.