Draft OpenID v.Next Discovery working group charter

John Bradley john.bradley at wingaa.com
Fri Apr 16 18:40:54 UTC 2010


Phillip,

OpenID 2.0 discovery is based on the principle that you discover the user's identifier and then, based on metadata, decide what protocol you are going to use.

That was to accommodate LID and perhaps to leave the door open for SAML and other authentication protocols.

Once OpenID gained popularity, the need to be inclusive diminished.  (Sorry, Johannes.)

I think you are taking this trend further by saying that the RP knows in advance that it only wants OpenID, and can discover the OpenID service for an identifier directly via DNS.

An understandable, but perhaps not universally accepted, position.

I think we agree that NAPTR has issues.

Given that the browser redirect to the service needs to be over https:, there are issues with virtual hosts sharing the same port on https:.  To support older servers and browsers, different port numbers can be used.

To keep things clear, let's refer to the existing identifiers as http:, WebFinger's as acct:, and yours as dns:, just to keep track of things.

It would be helpful to understand your proposed flow when a user presents a dns: identifier from domain A and has IdP B as an OpenID service provider.

So in my case, I input dns:ve7jtb.com as my personal domain.
The RP finds the OpenID SRV record (I have them for XMPP now) and gets a port and target.
It then uses those to construct an https: OP endpoint URL to make an association with and formulate a redirect authn request to?
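If I am reading you correctly, the RP-side construction would look something like this sketch (the record data and names here are invented for illustration; a real RP would query DNS for the _openid._tcp SRV record rather than hard-code the answer):

```python
# Hypothetical sketch of the SRV-based flow described above.
# An SRV answer is (priority, weight, port, target); lower priority
# values take precedence.  Weight-based load balancing is ignored here.

def op_endpoint_from_srv(records):
    """Build an https: OP endpoint URL from a list of SRV records."""
    priority, weight, port, target = min(records, key=lambda r: r[0])
    host = target.rstrip(".")  # drop the trailing root-zone dot
    # Omit the port when it is the https default.
    if port == 443:
        return f"https://{host}/"
    return f"https://{host}:{port}/"

# A made-up answer for _openid._tcp.ve7jtb.com:
records = [(0, 0, 8080, "openid.example.com.")]
print(op_endpoint_from_srv(records))  # https://openid.example.com:8080/
```

The RP would then use that URL as the OP endpoint for association and the redirect.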

I don't understand how you are protecting against someone poisoning the RP's DNS to point it at a different OP.

I don't think we can sort all of this out on this list during the charter discussion.

If you want to propose a standard or even submit a draft spec I think it should be considered.

I am happy to review what you produce.  

Regards
John B.

On 2010-04-16, at 1:09 PM, Phillip Hallam-Baker wrote:

> What I am suggesting is actually very similar.
> 
> The conventional assumption is that a Web Service is going to be
> running on a general purpose Web server that does lots of other things
> and in particular is going to be running in a http namespace that is
> being used for multiple purposes.
> 
> The well-known approach is to say 'OK to make that work we will carve
> out a private bit of URI space'. This then creates the security
> problem that anyone with access to a Web server that does not already
> have an OpenID scheme running can start one of their own. That is of
> course what is attractive to some people, but it is not just making
> things easy for legitimate users, it is making it easy for attackers.
> 
> The NAPTR approach was to say 'let's throw some regular expressions in
> here'. Which sounds great until you start thinking that perl is
> nothing more than REs and a little bit of logic. REs are a very very
> powerful tool. They are also a fairly subtle tool that allow a great
> deal of complexity to be hidden away. I really do not feel at all
> comfortable using such tools at such a low level in the network stack.
> 
> 
> The observation I make is that if we are using SRV, the question of
> the web service end point becomes moot. Once we insert an SRV lookup
> for a Web Service into the discovery chain we can take ownership of as
> much or as little of the HTTP URI space as we wish.
> 
> 
> What we are doing here is trying to make it easy for someone to locate
> a Web Service by domain name. The web service endpoint clutter is just
> that - clutter that we do not need.
> 
> Now, attaching metadata to a service is also a great idea; well, we can
> take over the method space too. If we want meta information on the
> service, then why not use META as the HTTP request method?
> 
> Again, these are features that are exposed in Apache and IIS and most
> other Web servers and are used in protocols such as WebDav.
> 
> 
> What we are talking about here is not really discovery of OpenID
> subjects but of generalized Web Services. The payoff here is that if
> we are looking for (say) the XYZ mapping service provided by google we
> can find it by simply doing a lookup for _XYZ.google.com.
> 
> This was what UDDI was meant to be. But the problem (as if there were
> only one) there was that the group of UDDI die-hards had this bizarre
> notion that their success was pre-ordained. The fact that people
> wanted to use Web Services would force them to use UDDI, so all they
> needed to do was keep talking excitedly to each other about how great
> this directory was going to be; they never thought about how to
> recruit users, or how potential users might view the fact that they
> were being asked to become locked into a proprietary infrastructure
> with an unstated business model. Nor could they get their minds around
> the fact that using an open protocol does not mean that an
> infrastructure built on that protocol is open.
> 
> 
> I keep trying to find the LRDD 'group' where is the mailing list?
> 
> 
> On Fri, Apr 16, 2010 at 12:40 PM, John Bradley <john.bradley at wingaa.com> wrote:
>> One of the reasons the DNS proposal was originally rejected by LRDD, as I understand it, is that SRV alone is not sufficient to eliminate the need for a well-known location, unless we invent a new metadata service that runs on a separate port from the web server.
>> 
>> The SRV record allows you to delegate from one domain to another but that alone is not super useful.
>> 
>> The more likely approach was to use NAPTR to point to an http: URI where the host-meta XRD could be retrieved.
>> 
>> NAPTR could also directly contain the mapping regex to turn an email address or other identifier into a URI from which to retrieve the desired metadata.
>> 
>> It was felt that, without a mechanism to sign the regex, doing the template directly in DNS posed security risks.
>> 
>> That leaves using NAPTR to point to a host-meta for each protocol.  The host-meta can be signed to assure integrity.
>> 
>> Given that the premise of LRDD was to build off of the link infrastructure, using the link header to point to a resource's metadata needs to be supported.
>> 
>> Given the lack of support for NAPTR, it was felt that, while theoretically better, it could not be relied on to be universally available.  That, and it didn't fit the link model they had in mind as well.
>> 
>> This left the well-known location option as the one that people would have the easiest time supporting.
>> 
>> There were other options considered as well, as I recall.  Most of them are listed on Eran's blog, and I think they were discussed on the openid list at the time too.
>> 
>> Could LRDD support NAPTR?  Yes, I think it could.
>> 
>> There is, however, a tradeoff in complexity if, every time you want the metadata for a URI, you need to try three different ways.
>> 
>> I think there is a debate that needs to take place around the order in which OpenID should look for the metadata for an http: URL.
>> 
>> Should the person controlling the URL be allowed to override the site's mapping by adding headers to their page?
>> 
>> I am sympathetic to the DNS approach; however, unless it can completely eliminate the need for a well-known location, I don't think the community is likely to accept it.
>> 
>> We could do something entirely in DNS if DNSSEC were widely available.  However, I don't see that happening anytime soon.
>> 
>> As long as we are relying on SSL for security, having a metadata file for a DNS authority that you retrieve from a well-known location via https: is perhaps the best compromise.
>> 
>> I understand that will not make everyone happy.
>> 
>> People with other proposals should document them and submit them to the Work Group.
>> 
>> John B.
>> 
>> On 2010-04-16, at 11:06 AM, SitG Admin wrote:
>> 
>>>> Let's look at the complete SRV record:
>>>> 
>>>> _openid._tcp            IN      SRV     0 0 8080 openid.example.com.
>>>> 
>>>> We have a machine name, but what is the URL to the endpoint for logging in?
>>>> What is the user's OpenID URI?
>>> 
>>> I think Phillip is proposing a discovery chain - more opportunities for other parties to step in (at their layer) and take control, more points of failure if vulnerabilities are discovered in each protocol - and to be fair, DNS is *already* such a layer. OpenID relies on it.
>>> 
>>> -Shade
>>> _______________________________________________
>>> specs mailing list
>>> specs at lists.openid.net
>>> http://lists.openid.net/mailman/listinfo/openid-specs
>> 
>> 
> 
> 
> 
> -- 
> New Website: http://hallambaker.com/
> View Quantum of Stupid podcasts, Tuesday and Thursday each week,
> http://quantumofstupid.com/


