XAuth critiques

John Panzer jpanzer at google.com
Tue Jun 8 05:09:17 UTC 2010


On Mon, Jun 7, 2010 at 7:51 PM, Peter Watkins <peterw at tux.org> wrote:

> On Mon, Jun 07, 2010 at 04:22:15PM -0700, John Panzer wrote:
>
> > Specifically, I haven't seen a privacy issue which is simply 'solved' by
> > moving responsibility into the browser.  I believe browsers are in the
> > best position to do certain things (like not rely on a central DNS name,
> > remove SPOFs, and help implement anti-phishing) but these don't
> > specifically address 'privacy'.  Is there a specific privacy attack /
> > leak you're worried about that we could discuss?
>
> You haven't seen any privacy issues at all that are solved in browser?
> None?
>
> The current XAuth implementation has sites using IFRAME elements to
> access the XAuth service/JS code. Web browsers send Referer headers with
> IFRAME, so whoever runs xauth.org is in a position to see information
> about what Extender and Receiver sites a user accesses. Currently
> xauth.org has pretty good settings -- cache control headers telling
> browsers they can cache the page for a week. But that could change. Move
> responsibility into the browser and that problem is solved.
>

Yes.  The problem is also pretty much solved by serving the content with a
year-long expiration and putting xauth.org under the control of a neutral,
trusted third party, which you want to do in any case.  See my blog post for
details.
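
To make that concrete, here is a rough sketch of what a year-long
expiration on the iframe page could look like (Node.js/TypeScript, purely
illustrative -- the handler and the exact header values are mine, not what
xauth.org actually serves).  Once a browser has a fresh cached copy, the
iframe load never goes back to the central host at all, so there is no
Referer for anyone there to see.

    // Sketch: serve the page an XAuth-style iframe loads with a one-year
    // cache policy, so browsers re-fetch it (and send a Referer to the
    // central host) as rarely as possible.  Illustrative values only.
    import * as http from "http";

    const IFRAME_PAGE =
      "<!DOCTYPE html><html><body><script>/* XAuth JS */</script></body></html>";

    const server = http.createServer((req, res) => {
      res.writeHead(200, {
        "Content-Type": "text/html; charset=utf-8",
        // Publicly cacheable for one year; repeat page views are served
        // from the browser's cache and never touch the central host.
        "Cache-Control": "public, max-age=31536000",
        "Expires": new Date(Date.now() + 365 * 24 * 3600 * 1000).toUTCString(),
      });
      res.end(IFRAME_PAGE);
    });

    server.listen(8080);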


>
> Also, xauth.org could start delivering JS code that reports information to
> the xauth.org mothership in addition to simply "working".


If the entity wants to destroy its reputation by doing this, prompting
everyone to immediately switch to an altxauth.org set up by another party,
among other unwelcome consequences, then yes, it could do this.


> Say the local government tries to compel xauth.org to deliver additional
> code to specific IP addresses (not that Google has *ever* had any trouble
> with any government legal pressure, right?). xauth.org could deliver
> pristine, trustworthy JS to everyone else. How would the government
> targets, let's say political activists maybe, be able to tell their
> privacy was being subverted? Move the function to the browser and that
> hole is closed.
>

Given truly edge-cached JS, this would actually be a pretty difficult thing
to undertake, for very minimal results.  Wouldn't it be much easier
for the government to simply compel the ISP to hand over router logs?

In general, if a government with significant resources is after you, you
probably want to take additional measures.  I'd start with a Tor proxy and
do everything over TLS, with a restricted subset of CAs for my browser to
trust.  I think the promiscuous browser CA list is a much bigger problem for
privacy at this level of paranoia, actually.
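
To illustrate the CA point (a sketch only -- it uses Node.js/TypeScript
rather than a browser, and the CA file and URL are placeholders I made up):
trusting a single explicit root instead of the default bundle means only
certificates chaining to that root are accepted, which narrows the set of
parties able to transparently intercept your TLS sessions.

    // Sketch: an HTTPS request that trusts one explicitly chosen CA
    // instead of the default root bundle.  "trusted-ca.pem" and the URL
    // are placeholders.
    import * as fs from "fs";
    import * as https from "https";

    const trustedCa = fs.readFileSync("trusted-ca.pem");

    https
      .get(
        "https://example.org/",
        {
          // Overrides the default CA list: certificates chaining to any
          // other root are rejected.
          ca: [trustedCa],
        },
        (res) => {
          console.log("status:", res.statusCode);
          res.resume(); // drain and discard the body
        }
      )
      .on("error", (err) => {
        console.error("request failed:", err.message);
      });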


>
> This is by no means a comprehensive list (shoot, it's been less than 24
> hours since I started reading up on XAuth), but I think it's enough that
> you can't say going to an in-browser model would solve no privacy issues.
>

The choice is not between an in-browser model and an out-of-browser model.
 It's between the status quo and an iterative solution that eventually leads
to an in-browser model.  Even the early incremental steps are _no worse_
than the security regular users enjoy on the Internet today, where they
basically use the same username and password at every site, so that
harvesting their passwords is nearly trivial.


> -Peter
>
>