Subject: Re: [OpenID] [OpenID board] Members Login broken

> Within the OpenID framework, for now we could just ensure that, by
> standardized AX processes, users can register a CTL of *their*
> trusted CAs at each consumer - to aid _subsequent_
> recognition/discovery of the user's synonyms that delegate to the
> CTL-introducing OP.

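For concreteness, the registration step might look like this on the
wire. A sketch only: the attribute type URI below is invented, nothing
like it exists in axschema.org today.

    # AX 1.0 fetch_request parameters asking the OP for the user's CA
    # trust list. The type URI is hypothetical - there is no such
    # attribute in axschema.org; treat it as a placeholder.
    TRUSTED_CA_LIST = "http://axschema.org/x/trustedCAList"  # invented

    def ax_fetch_params(alias="calist"):
        """Query parameters to append to an OpenID checkid_* request."""
        return {
            "openid.ns.ax": "http://openid.net/srv/ax/1.0",
            "openid.ax.mode": "fetch_request",
            "openid.ax.type." + alias: TRUSTED_CA_LIST,
            "openid.ax.required": alias,
            "openid.ax.count." + alias: "unlimited",  # all CAs, not one
        }

The OP would answer with an openid.ax.mode=fetch_response carrying the
list, which the RP stores against the claimed URL.
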
This is the Achilles' heel of the URL scheme (as opposed to, what,
XRI?): it requires that *first* contact establish a trusted CA. All an
attacker needs to do is spoof *one* user at a domain/site that hasn't
been to that RP yet (this implicitly requires spoofing that
domain/site too), and the REAL site with the same URL but a different
"trusted CA" will then encounter problems using HTTPS - does it fall
back to non-secure?

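To make the failure mode concrete, here is a toy trust-on-first-use
check in Python - the in-memory PINS dict stands in for whatever
persistent store a real RP would use:

    import hashlib, socket, ssl

    PINS = {}  # host -> sha256 of the DER cert seen on first contact

    def check_pin(host, port=443):
        """Trust-on-first-use: pin whatever cert appears first."""
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False      # pure TOFU: no CA validation
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        digest = hashlib.sha256(der).hexdigest()
        if host not in PINS:
            PINS[host] = digest   # first contact: first arrival wins
            return True
        return PINS[host] == digest  # later visits: compare to the pin

Whoever reaches check_pin() first owns the pin; the legitimate site
then fails the comparison on every subsequent visit.
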
The risk is nothing new; it's essentially an exchange of "secure"
information (certificates and their associated data, protected
mathematically by cryptography) over an insecure line (URLs). We know
attackers effectively can't crack the former; centralized PKI tries to
keep them from simply substituting their *own* data by compromising
the latter.

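The countermeasure centralized PKI offers is chain validation - in
Python's ssl module, roughly this (with cafile being the user's
registered CTL):

    import socket, ssl

    def fetch_verified_cert(host, cafile, port=443):
        """Handshake that succeeds only if the peer's chain leads to a
        root in `cafile` - an attacker can substitute bytes on the
        wire, but can't forge a signature chaining to a trusted CA."""
        ctx = ssl.create_default_context(cafile=cafile)
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert()  # handshake already verified
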
XRI *might* be able to solve this problem by assigning different
URL/cert pairs to different entries in the global registry, to
distinguish "site.com with this cert" from "site.com with that cert" -
allowing a secure fallback for sites that arrive late at an RP. I
don't know how many libraries would have to be rewritten to accept
multiple certs per URL, though - trivial if they already accept a cert
file as one of the arguments, I suppose.

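In that world the "multiple certs per URL" case reduces to trying each
registered pair in turn - reusing fetch_verified_cert() from the
sketch above:

    import ssl

    def verify_against_any(host, cafiles):
        """Accept the site if its chain validates against ANY of the
        CA files registered for that URL."""
        for cafile in cafiles:
            try:
                return fetch_verified_cert(host, cafile)
            except ssl.SSLError:
                continue  # wrong URL/cert pair for this entry
        raise ssl.SSLError("no registered CA validates " + host)
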
-Shade