Level of Confidence of What, When, Where and Who?

Last week's blog post by Dr. Peter Alterman on "Why LOA for Attributes Don’t Really Exist" has generated a good bit of conversation on this topic within FICAM working groups, in the Twitter-verse (@Steve_Lockstep, @independentid, @TimW2JIG, @dak3...) and in many other places.  I also wanted to call out the recent release of the Kantara Initiative's "Attribute Management Discussion Group - Final Report and Recommendations" (via @IDMachines) as being relevant to this conversation as well.

One challenge in this type of discussion is making sure that, at any specific point in the conversation, we are all discussing the same topic from the same perspective. So before attempting to go further, I wanted to put together a simple framework, and hopefully a common frame of reference, to hang this discussion on:


  • Separate out the discussion on Attribute Providers from the discussion on individual Attributes
  • Separate out the discussion on making a decision (to trust/rely-upon/use) based on inputs provided vs. making a decision based on a "score" that has been provided
  • Separate out "design time" decisions from "run time" decisions
  • Where is the calculation done (local or remote)?
  • Where is the decision (to trust/rely-upon/use) done?
  • Party relying on attributes to make a calculation, a decision and/or use in a transaction
  • Provider, aggregator and/or re-seller of attributes
  • Value added service that takes in attributes and other information to provide results/judgements/scores based on those inputs

Given the above, some common themes and points that surfaced across these conversations are:
  1. Don't blur the conversations on governance/policy and score/criteria; i.e. the conversation around "This is how you will do this within a community of interest" is distinct and separate from "The criteria for evaluating an Attribute/AP is x, y and z"
  2. Decisions/Choices regarding Attributes and Attribute Providers, while related, need to be addressed separately ["What"]
  3. Decision to trust/rely-upon/use is always local ["Where"], whether it is for attributes or attribute providers
  4. The decision to trust/rely-upon/use an Attribute Provider is typically a design time decision ["When"]
    1. The criteria that feed this decision (i.e. inputs to a confidence-in-AP calculation) are typically more business/process centric, e.g. security practices, data quality practices, auditing, etc.
    2. There is value in standardizing the above, but it is unknown at present if this standardization can extend beyond a community of interest 
  5. Given that the decision to trust/rely-upon/use an Attribute Provider is typically made out-of-band and at design time, it is hard to envision a use case for a run-time evaluation, based on a confidence score, of whether to do business with an Attribute Provider ["When"]
  6. The decision to trust/rely-upon/use an Attribute is typically a local decision at the Relying Party ["Where"]
  7. The decision to trust/rely-upon/use an Attribute is typically a run-time decision ["When"], given that some of the potential properties associated with an attribute (e.g. unique, authoritative or self-reported, time since last verified, last time changed, last time accessed, last time consented or others) may change in real time
    1. There is value in standardizing these 'attributes of an attribute'
    2. It is currently unknown if these 'attributes of an attribute' can scale beyond a specific community of interest
  8. A Relying Party may choose to directly make the calculation about an Attribute (i.e. a local confidence calculation using the 'attributes of an attribute' as input) or depend on an externally provided confidence "score" ["What"]
    1. The "score" calculation may be outsourced to an external service/capability ["Where"]
    2. This choice of doing it yourself or outsourcing should be left up to the discretion of the RP based on their capabilities and risk profile ["Who"]
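To make points 7 and 8 concrete, here is a minimal sketch of what a local, run-time confidence calculation at a Relying Party might look like. All of the field names, weights, and the freshness window below are hypothetical illustrations of the 'attributes of an attribute' idea, not a standard schema or a recommended scoring formula; a real RP would choose its own inputs and weights based on its capabilities and risk profile.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AttributeMetadata:
    """Hypothetical 'attributes of an attribute' record (illustrative only)."""
    name: str
    value: str
    authoritative: bool      # authoritative source vs. self-reported
    unique: bool
    last_verified: datetime  # may change in real time, hence run-time evaluation

def local_confidence(meta: AttributeMetadata, now: datetime) -> float:
    """Toy run-time confidence calculation at the Relying Party.

    Weights are arbitrary: authoritativeness and uniqueness contribute
    fixed amounts, and the verification-freshness component decays
    linearly to zero over a year.
    """
    score = 0.0
    if meta.authoritative:
        score += 0.5
    if meta.unique:
        score += 0.2
    days_stale = (now - meta.last_verified).days
    score += 0.3 * max(0.0, 1.0 - days_stale / 365)
    return round(min(score, 1.0), 2)

meta = AttributeMetadata(
    name="email",
    value="user@example.com",
    authoritative=True,
    unique=True,
    last_verified=datetime(2013, 1, 1, tzinfo=timezone.utc),
)
print(local_confidence(meta, datetime(2013, 1, 1, tzinfo=timezone.utc)))  # prints 1.0
```

The same inputs could instead be shipped to an external service that returns the "score", which is exactly the outsourcing choice described in point 8.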
Given that we have to evaluate both Attribute Providers and Attributes, it is probably in our shared interest to come up with a common terminology for what we call these evaluation criteria. A recommendation, taking into account many of the conversations in this space to date:
  • Attribute Provider Practice Statement (APPS) for Attribute Providers, Aggregators, Re-Sellers
  • Level of Confidence Criteria (LOCC) for Attributes

As always, this conversation is just starting... 

:- by Anil John