[OpenID] Selectively Redirecting OpenID Traffic To HTTPS

Peter Williams pwilliams at rapattoni.com
Sun Jan 13 18:25:46 UTC 2008


Focusing on the composition of trusted systems/networks so as to determine the overall trustworthiness of the networked system doesn't seem that useful in this community - I agree. Vertical segmentation of trust in the internet/web by most-secret -> public partial orders doesn't really fit the way the web is actually evolving. NSA types such as Bell can rattle their cage bars all they want (insisting that the world has to go back to praying to the pre-1994 gods of A1 systems/NICs and MLS doctrine), but I really don't think anyone is going to listen - no matter how loudly one engages in cage rattling.
 
What we do have in the two philosophies I mention (MISSI vs. Rushby separation) is two reasonable (modern) ways of rationalizing the protection mechanisms, neither being focused on 1970s-era conceptions of how information assurance should be attained by strict adherence to the Bell/LaPadula model's notions of dominance (having subjects and objects with security labels and caveats). If I summarize (viciously), MISSI puts the focus of assurance into the only bit of hardware you can reasonably trust - your crypto HSM - where excellent (hardware-only) key management provides, as a side effect of having engineered high-assurance ciphering, generally repurposable control systems (plural). The doctrine is largely reflected in the Real ID driving licenses that all Americans will have to adopt in a couple of years. The notion of the separation kernel, on the other hand, argues that control systems for wide-area networked applications should derive from the properties of a trusted kernel, designed to emulate in its box the assurances of the wider trusted distributed system of which the box is a part. Though quite an old doctrine, Rushby separation is only today coming into vogue (as reflected by that PP I linked to), sponsored by those searching for models that the general-purpose world could actually adopt (rather than waiting another 30 years for the world to finally recognize the second coming of the MLS prophet).
 
The focus of the issue here is, of course, an OP (sitting on an SSL-capable host or a load-balanced SSL cluster) responding to specifically "https" openid discovery messages aimed at different users (since openid pursues a user-centric control doctrine), or acting as a set of virtual OPs. Any use of openid for commercial reliance will inevitably cause the bigger providers to repeat what happened in the PKI space: support multiple assurance levels and private-label communities, where impacts during reliance (inevitable at the lower levels) must not contaminate the higher levels of robustness/integrity, and where a given OP's system operates at multiple levels sharing the same physical host/cluster.
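 
To make the discovery step concrete, here is a minimal sketch (Python; the identifier and host are hypothetical, and this is only my reading of the Yadis exchange, not anyone's production code) of an RP fetching the XRDS service document for an "https" claimed identifier - the very request a virtual-OP host has to route to the right per-user or per-community service description:

    # Hedged sketch of Yadis/XRDS discovery over https.
    # The claimed identifier below is hypothetical.
    import urllib.request

    claimed_id = "https://op.example.com/users/alice"

    req = urllib.request.Request(
        claimed_id,
        headers={"Accept": "application/xrds+xml"},
    )
    # urlopen verifies the server's certificate chain by default -
    # this is precisely the TTP-PKI dependency under discussion.
    with urllib.request.urlopen(req) as resp:
        xrds_location = resp.headers.get("X-XRDS-Location")
        body = resp.read()

    if xrds_location:
        # Per Yadis, the first response may only point at the XRDS;
        # fetch the document itself from the advertised location.
        with urllib.request.urlopen(xrds_location) as resp:
            body = resp.read()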
 
Ok ok - this is all a bit "out there" compared to the thrust of the argument: should we mandate https addressing, how should delegate-mode https/http handoff work, how should RPs handle the many PKI trust anchors, etc. But recognize that we inconclusively addressed the first round of the https-during-discovery argument some while ago... and yet here it is again (with no breakthrough, yet). It's unlikely we will make progress on the puzzle (ideally in a really simple way) unless we draw some additional information into the equations! If we think out of the box for a bit, perhaps we will see what's sitting there directly in front of our eyes.
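 
On the trust-anchor question at least, the mechanics are easy to state even if the policy is not: an RP can pin discovery to an explicit anchor set rather than accepting the platform's full CA bundle. A hedged sketch (the anchor file name and the endpoint are hypothetical):

    import ssl
    import urllib.request

    # Only OPs whose https chains terminate at one of the anchors in
    # this (hypothetical) PEM bundle will pass discovery; any other
    # chain fails the TLS handshake.
    ctx = ssl.create_default_context(cafile="rp-trusted-op-anchors.pem")

    opener = urllib.request.build_opener(
        urllib.request.HTTPSHandler(context=ctx)
    )
    response = opener.open("https://op.example.com/users/alice")

The hard part is not that code; it is deciding who picks the anchors - which is the UCI question all over again.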
 
It's a fascinating question for me: for a UCI-centric notion like openid (which critically depends upon name discovery for its assurance properties) to now depend on https controls during the time-critical act of name discovery would seem to imply that the means by which https is itself managed by TTP PKIs must have no impact on the UCI property of OpenID. On the face of it, we have an apparent contradiction: how can TTP-centric https be a dependency of UCI-centric openid?
 
I suppose I should start blogging this, rather than using email. 

________________________________

From: Cameron King [mailto:cameron at uniquekings.com]
Sent: Sun 1/13/2008 6:36 AM
To: Peter Williams
Cc: Eddy Nigg (StartCom Ltd.); general at openid.net
Subject: Re: [OpenID] Selectively Redirecting OpenID Traffic To HTTPS



Ok, so let's say that I work at a company that has a network of some
security level (a) and I am logging into a secure service of a different
security level (b) with an OpenID.  If we want OpenID to fit into this
model, then it would need to be provable that a given provider is able
to operate at these two levels.

I think we wouldn't want to make this a requirement for everyone,
because it sounds to me like it would defeat the decentralized,
low-barrier-to-entry aspect of OpenID... But it looks like something
that might be interesting for specialized providers - particularly if
companies start implementing OpenID internally.

Actually, in the face of so many providers, if one provider could
implement the things you mention here, provide sufficient
documentation, and be willing to submit to audits, there might be a
market for a paid service here.  There might be a niche willing to pay
a fee for the privilege of having their provider meet these
requirements - I don't know.

Peter Williams wrote:
>
> I'm thinking more in terms of standards evolution than today's deployment.
>
> Let's assume openid takes off as an SSO standard for the web -- and starts to protect even semi-critical infrastructure. We have to be able to rationalize the protection mechanisms of openid in standard terms. In the case of the issue you noted, that rationalization will need to address how one would methodically demonstrate that "discovery over https" upholds the policy of the parties relying on one of the given virtual OPs in the multi-policy-domain OP server.
>
>
> ________________________________
>
> From: Cameron King [mailto:cameron at uniquekings.com]
>
> ...
> While I'm not sure that either of these are in the realm of possibility
> for your average net user trying to setup his vhosted blog to delegate
> an OpenID, these options might be very appealing to a larger company who
> wants to SSL enable all their OpenIDs.
>
> Peter Williams wrote:
>> There would seem to be two obvious architectural approaches to the underlying assurance issues for multiple OPs operating in a virtual https context (wildcard SSL certs vs. load-balanced clusters relying on SSL session de/multiplexing):
>>
>> 1. A MISSI-based, B3-grade general-purpose server system, in which the crypto of each openid OP is keyed/controlled by a distinct policy authority associated with one PKI (controlling via certs how reliance is performed in that partitioned distributed community), with the B3 principles on the virtual host ensuring that protection domains and security-domain design principles properly separate the OPs from each other when handling messages.
>>
>> 2. An OpenID server supporting virtual OPs addresses the requirements of http://www.niap-ccevs.org/pp/draft_pps/pp_draft_skpp_hr_v0.621.pdf - where crypto is just a bit of signing/verification/key exchange rather than a control system providing for separation. The assurance comes from the design of the kernel, upon which one must ultimately rely.
>>
>> Depending on which side of the assurance wars you fall (NSA crypto-based assurance, or DARPA/US Navy high-assurance trusted-OS assurance), you can choose your own poison.
>>
>> ________________________________
>>
>> From: general-bounces at openid.net on behalf of Cameron King
>>
>> By vhosts I mean name-based virtual hosting.  Most hosting providers do
>> not give each user their own IP address and certificate.  Often the
>> added cost of an SSL-enabled hosting plan (especially a wildcard
>> certificate) can be substantial compared to the otherwise low cost of
>> hosting.  We wouldn't want to make SSL a requirement if we are shooting
>> for high adoption rates.
>> ...


--
Cameron King




