[Openid-specs-ab] Issue #2087: Problems with mandatory merge strategy for path processing of metadata (openid/connect)
Stefan Santesson
issues-reply at bitbucket.org
Mon Nov 6 23:27:39 UTC 2023
New issue 2087: Problems with mandatory merge strategy for path processing of metadata
https://bitbucket.org/openid/connect/issues/2087/problems-with-mandatory-merge-strategy-for
Stefan Santesson:
During actual implementation of the data and metadata processing model of OpenID Federation, we have encountered problems with the mandatory merge logic for metadata policy processing.
In summary, the basic problems are the following:
1. Change of semantics of policy operators
2. Loss of control over policy rules
3. Optimization deficiencies
4. Complex logic and unpredictable results
**Change of semantics**
One of the most obvious examples is the “value” modifier. Its intended function is not to make any subordinate metadata invalid; its sole purpose is to enforce a value. The value of a parameter, no matter what it was, is simply replaced.
Under merge, the function of the value modifier is altered: instead of just setting a value, it invalidates the metadata altogether whenever a superior policy carries a different value modifier for the same parameter.
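A minimal sketch of the difference, assuming the draft’s rule that two “value” operators for the same parameter must be equal for the merge to succeed (the parameter name and values below are made up for illustration):

```python
# Illustrative only, not spec text. Assumes the merge rule that two "value"
# operators for the same parameter must be equal, otherwise the merged policy
# is invalid.

def apply_value(metadata: dict, param: str, operators: dict) -> dict:
    """Applying one policy directly: "value" simply replaces the parameter."""
    result = dict(metadata)
    if "value" in operators:
        result[param] = operators["value"]
    return result

def merge_value(superior_ops: dict, subordinate_ops: dict) -> dict:
    """Merging two policies first: differing "value" operators become an error."""
    if "value" in superior_ops and "value" in subordinate_ops:
        if superior_ops["value"] != subordinate_ops["value"]:
            raise ValueError("policy error: conflicting 'value' operators")
    merged = dict(subordinate_ops)
    merged.update(superior_ops)   # simplified; the real merge is per operator
    return merged

leaf = {"require_signed_request_object": False}
intermediate_ops = {"value": False}   # the leaf's immediate superior
trust_anchor_ops = {"value": True}    # the Trust Anchor

# Applied one policy at a time, the superior's "value" simply wins:
step1 = apply_value(leaf, "require_signed_request_object", intermediate_ops)
step2 = apply_value(step1, "require_signed_request_object", trust_anchor_ops)
print(step2)   # {'require_signed_request_object': True}

# Under mandatory merge, the same two policies invalidate the chain instead:
merge_value(trust_anchor_ops, intermediate_ops)   # raises ValueError
```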
**Loss of control**
An Entity Statement by a federation entity that includes a “metadata_policy” is supposed to define explicit rules that must be applied to its subordinate entities. But in fact, this metadata policy is only applied as written on paths of length 1. For longer paths, the policy in the Entity Statement will be altered through the merge process, creating an unpredictable result.
It is hard even for a human brain to figure out exactly what the total merged policy may turn into under different setups, and it is even harder to write code that ensures a certain predictable outcome. One complicating factor is that metadata can be modified in several ways by a federation entity: one way is by metadata_policy, the other is by entering explicit values as “metadata”. It is very hard to foresee how such direct changes may impact processing against the combined policy of a path.
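As a small illustration with made-up policy content, assuming the draft’s rule that two “subset_of” operators are combined by intersection, the rule the Intermediate actually published is not the rule that ends up being applied:

```python
# Illustrative only. The Intermediate publishes one rule, but after merging
# with the Trust Anchor's policy a different rule is applied on its behalf.

trust_anchor_ops = {"subset_of": ["openid", "email"]}
intermediate_ops = {"subset_of": ["openid", "profile"]}   # the rule as written

merged_subset = [s for s in intermediate_ops["subset_of"]
                 if s in trust_anchor_ops["subset_of"]]
print({"subset_of": merged_subset})   # {'subset_of': ['openid']}
```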
**Optimization deficiencies**
OpenID Federation allows alternatives for resolving information about entities. One possibility is to traverse a path, but a much more effective option is offered through the resolve endpoint. This could be used by a superior entity to learn information about all leaf entities under an Intermediary. The problem is that the resolve endpoint delivers metadata that has already been processed against policy, and as such it is not compatible with the mandatory merge process. We see this as a big lost opportunity.
**Complex logic and unpredictable results**
We have gone to great efforts to try to model our national environment using policy merge. To make this work, we need to impose severe restrictions on how subordinate federations may construct their metadata policies. But even with a complex ruleset, we fail to create a generic model that ensures predictable results.
## **Alternative paths and proposals**
We see two different possible logics that could be used to process metadata through a chain of policies. One is the defined merge approach. The other is sequential processing, where the metadata from the leaf is processed against one policy at a time, and the result of each policy processing is fed into the policy of the next superior entity, i.e.:
**Metadata --> policy 1 --> policy 2 --> Result**
Our conclusion is that this model has none of the problems raised above. Sequential processing makes it possible to connect local federations to national federations of different kinds with a predictable result. Local federations can apply any local rules regarding scopes and security policies, and we can still ensure that a service from such a federation is represented by suitable metadata in one of the national federations, just by applying a suitable policy. With merge, we cannot see how to achieve this in a predictable way.
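For concreteness, a minimal sketch of the sequential processing we have in mind, covering only the “value” and “subset_of” operators and using made-up policy content (this is our reading, not spec text):

```python
# Sequential processing sketch: each policy in the chain is applied as written,
# from the leaf's immediate superior up to the Trust Anchor, and the output of
# one step is the input to the next. Only "value" and "subset_of" are handled.
from functools import reduce

def apply_policy(metadata: dict, policy: dict) -> dict:
    result = dict(metadata)
    for param, ops in policy.items():
        if "value" in ops:
            result[param] = ops["value"]
        if "subset_of" in ops and param in result:
            result[param] = [v for v in result[param] if v in ops["subset_of"]]
    return result

def resolve_sequentially(leaf_metadata: dict, policies: list) -> dict:
    # policies is ordered from the leaf's immediate superior up to the Trust Anchor
    return reduce(apply_policy, policies, leaf_metadata)

leaf = {"scope": ["openid", "profile", "email"],
        "require_signed_request_object": False}
local_federation_policy = {"scope": {"subset_of": ["openid", "profile"]}}
national_federation_policy = {"require_signed_request_object": {"value": True}}

print(resolve_sequentially(leaf, [local_federation_policy,
                                  national_federation_policy]))
# {'scope': ['openid', 'profile'], 'require_signed_request_object': True}
```

Each step here is exactly the processing against a single policy that a resolve endpoint already performs, which is why metadata obtained from a subordinate's resolve endpoint could be fed directly into the next superior policy under this model.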
We would propose one of two possible solutions:
1. Replace the merge logic in the current specification with sequential processing
2. Allow a metadata policy to be flagged for sequential processing
We would prefer to remove merge, as we have not found any reasonable use for it. But if that is not possible, we could solve the issue with a flag. When such a flag is encountered, it means that the policy must be processed in unaltered form. The metadata fed into this processing must be the processed metadata in the form it would have been received from the resolve endpoint of the subordinate Intermediate entity (this logic is already defined).
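Purely as an illustration of how a resolver might honour such a flag (the flag name “sequential” below is our own placeholder, not proposed spec text, and only the “value” operator is sketched):

```python
# Illustrative only; "sequential" is a placeholder flag name.

def apply_unaltered(metadata: dict, policy: dict) -> dict:
    """Apply a policy exactly as written."""
    result = dict(metadata)
    for param, ops in policy.items():
        if isinstance(ops, dict) and "value" in ops:
            result[param] = ops["value"]
    return result

def process(metadata_from_resolve: dict, policy: dict) -> dict:
    if policy.get("sequential", False):
        # Flagged: do not merge. Apply the policy in unaltered form to the
        # metadata as delivered by the subordinate Intermediate's resolve endpoint.
        operators = {k: v for k, v in policy.items() if k != "sequential"}
        return apply_unaltered(metadata_from_resolve, operators)
    # Not flagged: fall back to the merge logic of the current specification
    # (not sketched here).
    raise NotImplementedError("merge path not sketched")

flagged_policy = {"sequential": True,
                  "require_signed_request_object": {"value": True}}
print(process({"require_signed_request_object": False}, flagged_policy))
# {'require_signed_request_object': True}
```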
We are really stuck on this issue and we hope for a favourable resolution or enlightenment on how we should solve our problems.