Trust and reputation models among agents
Nick Bassiliades
Intelligent Systems group, Software Engineering, Web and Intelligent Systems Lab,
Dept. of Informatics, Aristotle University of Thessaloniki, Greece
Invited talk at the 6th International Conference on Integrated Information (IC-ININFO), Sep 19-22, 2016, Athens, Greece
N. Bassiliades - Trust and reputation models among agents 2
TALK OVERVIEW
Introduction
Review of State-of-the-Art on Trust / Reputation Models: centralized approaches, distributed approaches, hybrid approaches
Presenter's work on Trust / Reputation Models: T-REX, HARM, DISARM
Summary / Conclusions
IC-ININFO, Sep 19-22, 2016
TRUST AND REPUTATION
Agents are supposed to act in open and risky environments (e.g. the Web) with limited or no human intervention
Making the appropriate decision about whom to trust in order to interact is necessary but challenging
Trust and reputation are key elements in the design and implementation of multi-agent systems
Trust is the expectation or belief that a party will act benignly and cooperatively with the trusting party
Reputation is the opinion of the public towards an agent, based on past experiences of interacting with the agent
Reputation is used to quantify trust
TRUST / REPUTATION MODELS
Interaction trust: the agent's own direct experience from past interactions (aka reliability)
- Requires a long time to reach a satisfying estimation level
- Unavailable when there is no history of interactions with other agents, e.g. in a new environment
Witness reputation: reports of witnesses about an agent's behavior, provided by other agents
- Does not guarantee a reliable estimation: are self-interested agents willing to share information? How much can you trust the informer?
IMPLEMENTING TRUST / REPUTATION MODELS
Centralized approach: one or more centralized trust authorities keep agent interaction references (ratings) and give trust estimations
Convenient for witness reputation models (e.g. eBay, SPORAS, etc.)
+ Simpler to implement; better and faster trust estimations
- Less reliable; unrealistic: hard to enforce central controlling authorities in open environments
Decentralized (distributed) approach: each agent keeps its own interaction references with other agents and must estimate on its own the trust upon another agent
Convenient for interaction trust models
+ Robustness: no single point of failure; more realistic
- Needs more complex interaction protocols
OTHER TRUST / REPUTATION MODELS
Hybrid models (centralized / distributed): combination of Interaction Trust and Witness Reputation
- Regret / Social Regret, FIRE, RRAF / TRR, CRM
- T-REX / HARM / DISARM
Certified reputation: third-party references provided by the agent itself
- Distributed approach for witness reputation
SOME TERMINOLOGY: WITNESS REPUTATION
Agent A wants a service from agent B
Agent A asks agent C if agent B can be trusted
Agent C trusts agent B and replies yes to A
Agent A now trusts B and asks B to perform the service on A's behalf
A = truster / beneficiary, C = trustor / broker / consultant, B = trustee
[Diagram: A (Truster / Beneficiary) queries C (Trustor / Broker / Consultant) about B (Trustee)]
SOME TERMINOLOGY: INTERACTION TRUST
Agent A wants a service from agent B
Agent A judges if B is to be trusted from personal experience
Agent A trusts B and asks B to perform the service on A's behalf
A = trustor / truster / beneficiary, B = trustee
[Diagram: A (Trustor / Truster / Beneficiary) judges B (Trustee) from personal experience]
SOME TERMINOLOGY: REFERENCES
Agent A wants a service from agent B
Agent A asks agent B for proof of trust
Agent B provides some agents R that can guarantee that B can be trusted
Agent A now trusts B and asks B to perform the service on A's behalf
A = trustor / truster / beneficiary, B = trustee, R = referee
[Diagram: A (Truster / Beneficiary) asks B (Trustee) for proof of trust; referees R vouch for B]
CENTRALIZED TRUST MODELS
EBAY-LIKE REPUTATION MODEL
Centralised rating system: after an interaction, eBay users rate their partner (−1, 0, +1): negative, neutral, positive
Ratings are stored centrally
Reputation value is computed as the sum of ratings over 6 months
Reputation is a single global value
- Too simple for applications in MAS (1-D)
All ratings count equally
- Cannot adapt well to changes in a user's performance: e.g. a user may cheat in a few interactions after obtaining a high reputation value, but still retains a positive reputation
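The windowed rating sum and the adaptation weakness noted above can be sketched in Python (an illustrative toy, not eBay's actual system; the 182-day window and the sample history are made up):

```python
from datetime import datetime, timedelta

def ebay_reputation(ratings, now, window_days=182):
    """Sum of -1/0/+1 ratings received within the last ~6 months."""
    cutoff = now - timedelta(days=window_days)
    return sum(r for (ts, r) in ratings if ts >= cutoff)

now = datetime(2016, 9, 19)
# 50 honest sales, then 3 cheats: the score stays clearly positive.
history = [(now - timedelta(days=d), +1) for d in range(10, 60)]
history += [(now - timedelta(days=d), -1) for d in range(1, 4)]
print(ebay_reputation(history, now))  # 47: cheating barely dents the score
```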
SPORAS
Zacharia G. & Maes P.: Trust Management Through Reputation Mechanisms. Applied Artificial Intelligence, 14(9):881-908 (2000)
Centralized repository and reputation / reliability estimation
Ratings are not kept – reputation is updated after each transaction
New agents start with a minimum reputation value and build up reputation
The reputation value of an agent never falls below the reputation of a newcomer
+ Prevents agents with a bad reputation from leaving and re-entering with a fresh reputation
- May discourage newcomers
Users with very high reputation values experience much smaller rating changes after each update
Ratings are discounted over time so that the most recent ratings have more weight in the evaluation of an agent's reputation
+ More adaptable to changes in behavior over time
Reliability measure = ratings deviation; a high deviation means either:
- the agent has not been active enough for an accurate reputation prediction, or
- the agent's behaviour has a high degree of variation
DISTRIBUTED TRUST / REPUTATION MODELS
REGRET
Sabater, J., & Sierra, C. (2001). Regret: A Reputation Model for Gregarious Societies. Proc. 4th Workshop on Deception, Fraud and Trust in Agent Societies, 61-69.
Decentralized reputation model: each agent is able to evaluate the reputation of others by itself
Keeps its own ratings about other partners in a local database
Direct trust is calculated as the weighted average of all ratings
Each rating is weighted according to its recency: more recent ratings are weighted more
- Time granularity control does not actually reflect a rating's recency
Reliability: how many ratings are taken into account; deviation of ratings
SOCIAL REGRET
Sabater, J., & Sierra, C. (2002). Social Regret, A Reputation Model Based on Social Relations. SIGecom Exchanges, 3(1):44-56.
An attempt to locate witnesses' ratings through a social graph
Agent groups with frequent interactions among them; each group is a single source of reputation values
Only the most representative agent within each group is asked for information
Heuristics are used to find groups and to select the best agent to ask
- Social Regret does not reflect the actual social relations among agents
Attempts to heuristically reduce the number of queries to be asked in order to locate ratings
- Considers the opinion of only one agent of each group: most agents are marginalized, distorting reality
REFERRAL SYSTEM
Yu, B., & Singh, M. P. (2002). An Evidential Model of Distributed Reputation Management. Proc. 1st International Joint Conference on Autonomous Agents and Multi-agent Systems, Vol. 1, pp. 294–301.
Decentralized, witness reputation model
Agents cooperate by giving, pursuing, and evaluating referrals
Each agent maintains a list of acquaintances (other agents it knows) and their expertise
When looking for certain information, an agent queries its acquaintances
They will try to answer the query, if possible; if not, they will send back referrals to other agents they believe are likely to have the desired information
An agent's expertise is used to determine how likely it is to know witnesses of the target agent
JURCA & FALTINGS REPUTATION MODEL
Jurca, R. & Faltings, B.: Towards Incentive-compatible Reputation Management. In Trust, Reputation and Security: Theories and Practice, LNAI 2631, pp. 138–147, Springer (2003)
Agents are incentivized to truthfully report the results of their interactions
Broker agents buy and aggregate (average) reports from other agents, and sell the needed information to agents
+ The mechanism guarantees that agents who report incorrectly will gradually lose money, while honest agents will not
± Brokers are distributed, but each one collects reputation values centrally
- Reputation values are limited to {0, 1}
CERTIFIED REPUTATION (CR)
Huynh, D., Jennings, N. R., & Shadbolt, N. R. (2006). Certified Reputation: How an Agent Can Trust a Stranger. AAMAS '06: Proc. 5th International Joint Conference on Autonomous Agents and Multiagent Systems, Hokkaido, Japan.
Decentralized witness reputation model
Each agent keeps references given to it by other agents
After every transaction, each agent asks its partners to provide their certified ratings about its performance; it chooses the best ratings to store in its (local) database
When an agent A contacts B to use service C, it asks B to provide references about its past performance with respect to C
Agent A receives certified ratings of B from B and calculates the CR of B
+ Agents do not have costs involved in locating witness reports
- Ratings might be misquoted: each agent only provides the best ratings about itself
HYBRID TRUST / REPUTATION MODELS
FIRE
Huynh, D., Jennings, N. R., & Shadbolt, N. R. (2006). An Integrated Trust and Reputation Model for Open Multi-agent Systems. Journal of Autonomous Agents and Multi-agent Systems (AAMAS), 13(2):119-154.
Distributed, hybrid model
Integrates 4 types of trust / reputation: interaction trust, role-based trust, witness reputation and certified reputation
Role-based trust: defined by various role-based relationships between the agents; provides means for domain-specific customization of trust, using rules
All values are combined into a single measure by weighted average
- Weak computation model for the combined reputation estimation
- Does not take into consideration the problem of lying and inaccuracy
TRR (TRUST–RELIABILITY–REPUTATION)
Rosaci, D., Sarne, G., Garruzzo, S. (2012). Integrating Trust Measures in Multiagent Systems. International Journal of Intelligent Systems, 27: 1–15.
Hybrid reputation model
Combines interaction trust and witness reputation into a single value, using a weighted sum
Dynamically computes the weights (the previous RRAF model used static weights assigned by the user)
Each weight depends on: the number of interactions between two agents; the expertise that an agent has in evaluating other agents' capabilities
Reputation is based on the global trust that the community has in an agent
+ More accurate reputation prediction
- Difficult to implement in a distributed environment
CRM (COMPREHENSIVE REPUTATION MODEL)
Khosravifar, B., Bentahar, J., Gomrokchi, M., & Alam, R. (2012). CRM: An Efficient Trust and Reputation Model for Agent Computing. Knowledge-based Systems, 30:1-16.
Distributed, hybrid, probabilistic reputation model
On-line trust estimation:
- Direct trust evaluation, using own interaction history
- Consulting reports: indirect trust estimation from consulting agents: trustworthy agents and referee agents (introduced by the trustee agent as recommenders)
Off-line trust estimation:
- Off-line interaction inspection: after an interaction, the trustor assigns a (useful/useless) flag to each consulting agent, based on the rating & confidence of the provided information and the number and recency of interactions with the trustee agent
- Maintenance: update information about consulting agents, initiated at different intervals of time
PRESENTER'S TRUST / REPUTATION MODELS
PRESENTED TRUST / REPUTATION MODELS
Centralized: T-REX (hybrid); HARM (hybrid, knowledge-based, temporal defeasible logic)
Distributed: DISARM (hybrid, knowledge-based, defeasible logic, social relationships)
T-REX
Kravari, K., Malliarakis, C., & Bassiliades, N.: T-REX: A Hybrid Agent Trust Model Based on Witness Reputation and Personal Experience. Proc. 11th International Conference on Electronic Commerce and Web Technologies (EC-Web 2010). Springer, LNBIP, Vol. 61, Part 3, pp. 107-118 (2010)
T-REX OVERVIEW
Centralized, hybrid reputation model
Combines Witness Reputation (ratings) & Direct Trust (personal experience) via a weighted average (user-defined weights)
Ratings decay over time: all past ratings are taken into account, time-stamps are used as weights, and past ratings lose importance over time through a linear extinguishing function
+ Low bandwidth, computational and storage cost
T-REX INTERACTION MODEL
Central ratings repository: Trustor
- A special agent responsible for collecting, storing, and retrieving ratings and for calculating trust values; considered certified/reliable
Interacting agents
- Truster / Beneficiary: an agent that wants to interact with another agent that offers a service
- Trustee: the agent that offers the service
Role of the Trustor
- Before the interaction, the Truster asks the Trustor to calculate the Trustee's trust value
- After the interaction, the Truster submits a rating of the Trustee's performance to the Trustor
T-REX RATING MECHANISM (I)
[Diagram: before interacting with the Trustee, the Truster asks the Trustor for the Trustee's reputation, giving personalized weights for each rating criterion; the Trustor calculates the reputation from the stored ratings and the agent's weights, and returns the final reputation value]
T-REX RATING MECHANISM (II)
[Diagram: the Truster and Trustee interact; the Truster then submits an evaluation report to the Trustor. Evaluation criteria: correctness, completeness, response time]
T-REX RATING MECHANISM (III)
Rating Vector: r_x ∈ [0.1, 10], where 0.1 = terrible, 10 = perfect
Normalized Ratings: ratings are normalized logarithmically to cross out extreme values: log(r_x) ∈ [−1, 1], where −1 = terrible, 1 = perfect
Weighted Normalized Ratings: agents customize the importance of criteria using weights
Notation: a = truster, b = trustee, Corr = correctness, Resp = response time, Comp = completeness, t = time stamp, p_x = weight of rating r_x

$$ r_{ab}(t) = \left|\vec{R}_{ab}(t)\right| = \frac{\sum_{x=1}^{3} p_x \cdot \log\left(r_x^{ab}(t)\right)}{\sum_{x=1}^{3} p_x} $$
T-REX RATING MECHANISM (IV)
The final reputation value (TR) combines:
- transactions between a and b (interaction trust)
- transactions of b with all other agents (witness report)
- social trust weights (π_p, π_o)
Time is important: more recent ratings "weigh" more

$$ TR_{ab}(t) = \frac{\pi_p}{\pi_p+\pi_o} \cdot \frac{\sum_{\forall t_i<t} r_{ab}(t_i)\cdot t_i}{\sum_{\forall t_i<t} t_i} + \frac{\pi_o}{\pi_p+\pi_o} \cdot \sum_{\forall j\neq a, j\neq b} \frac{\sum_{\forall t_i<t} r_{jb}(t_i)\cdot t_i}{\sum_{\forall t_i<t} t_i} $$
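A minimal Python sketch of this combination, assuming base-10 logarithms for the normalization and using raw timestamps as weights, as in the formulas above (all function names and data are hypothetical):

```python
import math

def rating_value(r, p):
    """r_ab(t): sum_x p_x * log10(r_x) / sum_x p_x, with each r_x in [0.1, 10]."""
    return sum(px * math.log10(rx) for px, rx in zip(p, r)) / sum(p)

def time_weighted(pairs):
    """sum_i r(t_i) * t_i / sum_i t_i over (timestamp, value) pairs."""
    den = sum(t for t, _ in pairs)
    return sum(v * t for t, v in pairs) / den if den else 0.0

def t_rex(own, witness, pi_p, pi_o):
    """TR_ab = pi_p/(pi_p+pi_o) * interaction trust + pi_o/(pi_p+pi_o) * witness sum."""
    s = pi_p + pi_o
    return (pi_p / s) * time_weighted(own) + \
           (pi_o / s) * sum(time_weighted(pj) for pj in witness)

p = [1.0, 1.0, 1.0]                         # equal weights for the 3 criteria
own = [(1, rating_value([10, 10, 10], p)),
       (2, rating_value([10, 10, 10], p))]  # two perfect direct ratings
print(t_rex(own, [], 1.0, 1.0))             # 0.5: perfect direct trust, no witnesses
```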
T-REX ADVANTAGES
Dynamicity: trust depends on time
Flexibility: can be customized from completely witness-based to completely personal-experience-based
Reliability: the centralized repository (Trustor) is assumed honest
Low bandwidth: trusters report in one communication step; trustees do not communicate at all
Low storage cost: trusters and trustees do not store anything; the Trustor stores everything
Computational cost?
REDUCING COMPUTATIONAL COMPLEXITY (I)
Original formula (with the ratings expanded): computational complexity O(n·t), where n = number of agents and t = number of transactions

$$ TR_{ab}(t) = \frac{\pi_p}{\pi_p+\pi_o} \cdot \frac{\sum_{\forall t_i<t}\left[\frac{\sum_{x=1}^{3} p_x \cdot \log\left(r_x^{ab}(t_i)\right)}{\sum_{x=1}^{3} p_x} \cdot t_i\right]}{\sum_{\forall t_i<t} t_i} + \frac{\pi_o}{\pi_p+\pi_o} \cdot \sum_{\forall j\neq a, j\neq b} \frac{\sum_{\forall t_i<t}\left[\frac{\sum_{x=1}^{3} p_x \cdot \log\left(r_x^{jb}(t_i)\right)}{\sum_{x=1}^{3} p_x} \cdot t_i\right]}{\sum_{\forall t_i<t} t_i} $$

Swap the order of the sums over t_i and x; the inner sums can then be calculated incrementally, when a new report arrives:

$$ TR_{ab}(t) = \frac{\pi_p}{\pi_p+\pi_o} \cdot \frac{\sum_{x=1}^{3} p_x \cdot \sum_{\forall t_i<t} \log\left(r_x^{ab}(t_i)\right)\cdot t_i}{\sum_{x=1}^{3} p_x \cdot \sum_{\forall t_i<t} t_i} + \frac{\pi_o}{\pi_p+\pi_o} \cdot \sum_{\forall j\neq a, j\neq b} \frac{\sum_{x=1}^{3} p_x \cdot \sum_{\forall t_i<t} \log\left(r_x^{jb}(t_i)\right)\cdot t_i}{\sum_{x=1}^{3} p_x \cdot \sum_{\forall t_i<t} t_i} $$

with the incrementally maintained sums

$$ \sigma_x^{qy}(t) = \sum_{\forall t_i<t} \log\left(r_x^{qy}(t_i)\right)\cdot t_i, \qquad T(t) = \sum_{\forall t_i<t} t_i $$
REDUCING COMPUTATIONAL COMPLEXITY (II)
Computational complexity O(n), where n = number of agents; storage complexity O(n²)
Swap the order of the sums over j and x in the witness part:

$$ TR_{ab}(t) = \frac{\pi_p}{\pi_p+\pi_o} \cdot \frac{\sum_{x=1}^{3} p_x \cdot \sigma_x^{ab}(t)}{T(t)\cdot\sum_{x=1}^{3} p_x} + \frac{\pi_o}{\pi_p+\pi_o} \cdot \frac{\sum_{x=1}^{3} p_x \cdot \sum_{\forall j\neq a, j\neq b}\sigma_x^{jb}(t)}{T(t)\cdot\sum_{x=1}^{3} p_x} $$

The sum over j can also be calculated incrementally:

$$ S_x^{ab}(t) = \sum_{\forall j\neq a, j\neq b} \sigma_x^{jb}(t) $$
REDUCING COMPUTATIONAL COMPLEXITY (III)
Computational complexity is reduced to O(1); storage complexity remains O(n²)

$$ TR_{ab}(t) = \frac{\pi_p}{\pi_p+\pi_o} \cdot \frac{\sum_{x=1}^{3} p_x \cdot \sigma_x^{ab}(t)}{T(t)\cdot\sum_{x=1}^{3} p_x} + \frac{\pi_o}{\pi_p+\pi_o} \cdot \frac{\sum_{x=1}^{3} p_x \cdot S_x^{ab}(t)}{T(t)\cdot\sum_{x=1}^{3} p_x} $$
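The incremental bookkeeping can be sketched as follows (a simplified Python sketch: it assumes a single global timestamp sum T rather than per-pair sums, and base-10 logarithms; all names are hypothetical):

```python
import math
from collections import defaultdict

class TrustorStore:
    """Incremental T-REX sums: on each report, update
    sigma[(q, y, x)] += log10(r_x) * t_i and T += t_i,
    so a TR query needs no pass over the rating history."""
    def __init__(self, weights):
        self.p = weights                  # criterion weights p_x
        self.sigma = defaultdict(float)   # (truster, trustee, x) -> sum log10(r_x)*t_i
        self.S = defaultdict(float)       # (trustee, x) -> sum over all trusters
        self.T = 0.0                      # global sum of timestamps (simplification)
    def report(self, truster, trustee, t, ratings):
        self.T += t
        for x, rx in enumerate(ratings):
            v = math.log10(rx) * t
            self.sigma[(truster, trustee, x)] += v
            self.S[(trustee, x)] += v
    def tr(self, a, b, pi_p, pi_o):
        ps = sum(self.p)
        own = sum(px * self.sigma[(a, b, x)] for x, px in enumerate(self.p))
        # witness part: subtract a's own contribution to enforce j != a
        wit = sum(px * (self.S[(b, x)] - self.sigma[(a, b, x)])
                  for x, px in enumerate(self.p))
        s = pi_p + pi_o
        return (pi_p / s) * own / (self.T * ps) + (pi_o / s) * wit / (self.T * ps)

store = TrustorStore([1.0, 1.0, 1.0])
store.report("a", "b", 1.0, [10, 10, 10])   # direct experience of a with b
store.report("c", "b", 1.0, [10, 10, 10])   # witness report from c
print(store.tr("a", "b", 1.0, 1.0))         # 0.5 under this simplified global T
```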
HARM
Kravari, K., & Bassiliades, N. (2012). HARM: A Hybrid Rule-based Agent Reputation Model Based on Temporal Defeasible Logic. 6th International Symposium on Rules: Research Based and Industry Focused (RuleML-2012). Springer, LNCS 7438: 193-207.
HARM OVERVIEW
Centralized hybrid reputation model: combines Interaction Trust and Witness Reputation
Rule-based approach: temporal defeasible logic (non-monotonic reasoning)
Ratings have a time offset, indicating when they become active, i.e. considered for trust assessment
Intuitive method for assessing trust, related to traditional human reasoning
(TEMPORAL) DEFEASIBLE LOGIC
Temporal defeasible logic (TDL) is an extension of defeasible logic (DL); DL is a kind of non-monotonic reasoning
Why defeasible logic?
- Rule-based, deterministic (without disjunction)
- Enhanced representational capabilities: classical negation used in rule heads and bodies; negation-as-failure can be emulated
- Rules may support conflicting conclusions
- Skeptical: conflicting rules do not fire
- Priorities on rules resolve conflicts among rules
- Low computational complexity
DEFEASIBLE LOGIC
Facts: e.g. student(Sofia)
Strict Rules: e.g. student(X) → person(X)
Defeasible Rules: e.g. r: person(X) ⇒ works(X), r': student(X) ⇒ ¬works(X)
Priority relation between rules, e.g. r' > r
Proof theory example: a literal q is defeasibly provable if:
- it is supported by a rule whose premises are all defeasibly provable, AND
- ¬q is not definitely provable, AND
- each attacking rule is non-applicable or defeated by a superior counter-attacking rule
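A toy skeptical check of this proof theory on the student/works example above (a simplification: ground literals only, the facts stand in for defeasibly provable premises, and the definite-provability check is omitted):

```python
# r: person(X) => works(X);  r2: student(X) => ~works(X);  r2 > r
facts = {"person(sofia)", "student(sofia)"}   # student(X) -> person(X) applied
rules = [
    ("r",  {"person(sofia)"},  "works(sofia)"),
    ("r2", {"student(sofia)"}, "~works(sofia)"),
]
priority = {("r2", "r")}  # r2 is superior to r

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def defeasibly_provable(q):
    for name, body, head in rules:
        if head != q or not body <= facts:
            continue  # rule does not support q, or is not applicable
        attackers = [n for n, b, h in rules if h == negate(q) and b <= facts]
        # skeptical: q holds only if every applicable attacker is defeated
        if all((name, n) in priority for n in attackers):
            return True
    return False

print(defeasibly_provable("~works(sofia)"))  # True: r2 defeats r
print(defeasibly_provable("works(sofia)"))   # False: r is defeated by r2
```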
TEMPORAL DEFEASIBLE LOGIC
Temporal literals:
- Expiring temporal literals l:t — literal l is valid for t time instances
- Persistent temporal literals l@t — literal l is active after t time instances have passed and is valid thereafter
Temporal rules: a1:t1, ..., an:tn ⇒d b:tb, where d is the delay between the cause and the effect
Example:
(r1) ⇒ a@1 — literal a is created by r1 and becomes active at time offset 1
(r2) a@1 ⇒7 b:3 — r2 causes its head to fire at time 8 (delay 7); the result b lasts only until time 10; thereafter, only the fact a remains
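The example's timeline can be checked with a few lines of Python (assuming the semantics stated above: activation offset plus delay, then a fixed duration):

```python
# (r1) => a@1      : persistent literal a becomes active at time offset 1
# (r2) a@1 =>7 b:3 : delay 7, so b fires at 1 + 7 = 8 and, being b:3,
#                    stays valid for 3 time instances (8, 9, 10)
a_active_at = 1
delay, duration = 7, 3
b_start = a_active_at + delay            # 8

def holds_a(t):
    return t >= a_active_at              # persistent: valid thereafter

def holds_b(t):
    return b_start <= t < b_start + duration  # expiring: 3 instances

print([t for t in range(12) if holds_b(t)])   # [8, 9, 10]
```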
HARM – AGENT EVALUATED ABILITIES
Validity: an agent is valid if it is both sincere and credible (sincere: believes what it says; credible: what it believes is true in the world)
Completeness: an agent is complete if it is both cooperative and vigilant (cooperative: says what it believes; vigilant: believes what is true in the world)
Correctness: an agent is correct if its provided service is correct with respect to a specification
Response time: the time that an agent needs to complete the transaction
HARM - RATINGS
Agent A establishes an interaction with agent B: (A) the Truster is the evaluating agent, (B) the Trustee is the evaluated agent
The Truster's rating value has 8 coefficients:
- 2 IDs: Truster, Trustee
- 4 abilities: Validity, Completeness, Correctness, Response time
- 2 weights (how much attention should an agent pay to each rating?):
  - Confidence: how confident the agent is about the rating (ratings of confident trusters are more likely to be right)
  - Transaction value: how important the transaction was for the agent (trusters are more likely to report truthful ratings on important transactions)
Example (defeasible RuleML / d-POSL syntax):
rating(id→1,truster→A,trustee→B,validity→5,completeness→6,correctness→6,resp_time→8,confidence→0.8,transaction_val→0.9).
HARM – EXPERIENCE TYPES
Direct Experience (PR_AX)
Indirect Experience: reports provided by strangers (SR_AX); reports provided by known agents (e.g. friends) due to previous interactions (KR_AX)
Final reputation value of an agent X, required by an agent A: R_AX = {PR_AX, KR_AX, SR_AX}
HARM – EXPERIENCE TYPES
One or more rating categories may be missing, e.g. a newcomer has no personal experience
A user is much more likely to believe statements from a trusted acquaintance than from a stranger: personal opinion (AX) is more valuable than strangers' opinion (SX) and known partners' opinion (KX)
Superiority relationships among rating categories
[Diagram: lattice of the category combinations AX, KX, SX, {AX, KX}, {AX, SX}, {KX, SX}, {AX, KX, SX}]
HARM – FINAL REPUTATION VALUE
R_AX is a function that combines each available category: personal opinion (AX), strangers' opinion (SX), previously trusted partners (KX)
HARM allows agents to define the weights of the ratings' coefficients (personal preferences)
$$ R_{AX} = f\left(PR_{AX},\; KR_{AX},\; SR_{AX}\right) $$

$$ PR_{AX} = \mathrm{AVG}_i\left[\frac{\sum_{coefficient=1}^{4} w_{coefficient} \cdot \log\left(pr_{i,AX}^{coefficient}\right)}{\sum_{coefficient=1}^{4} w_{coefficient}}\right] $$

and analogously for KR_AX (over the kr_{i,AX} ratings) and SR_AX (over the sr_{i,AX} ratings), where coefficient ∈ {validity, completeness, correctness, response_time}.
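A sketch of one category value under this scheme, assuming equal weights w and base-10 logarithms (the weights and rating data are made up for illustration):

```python
import math

# Hypothetical equal weights for the 4 ability coefficients
W = {"validity": 1.0, "completeness": 1.0, "correctness": 1.0, "resp_time": 1.0}

def category_value(ratings):
    """AVG_i of (sum_c w_c * log10(r_i[c])) / sum_c w_c over one experience category."""
    if not ratings:
        return None  # category missing, e.g. a newcomer has no personal experience
    vals = [sum(W[c] * math.log10(r[c]) for c in W) / sum(W.values())
            for r in ratings]
    return sum(vals) / len(vals)

pr = [{"validity": 10, "completeness": 10, "correctness": 10, "resp_time": 10}]
print(category_value(pr))  # 1.0 for a single perfect direct-experience rating
```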
HARM - WHICH RATINGS "COUNT"?
r1: count_rating(rating→?idx, truster→?a, trustee→?x) := confidence_threshold(?conf), transaction_value_threshold(?tran), rating(id→?idx, confidence→?confx, transaction_val→?tranx), ?confx >= ?conf, ?tranx >= ?tran.
r2: count_rating(…) := … ?confx >= ?conf.
r3: count_rating(…) := … ?tranx >= ?tran.
r1 > r2 > r3
• if both confidence and transaction importance are high, then rating will be used for estimation
• if transaction value is lower than the threshold, but confidence is high, then use rating
• if there are only ratings with high transaction value, then they should be used
• In any other case, omit the rating
HARM - CONFLICTING LITERALS
All the previous rules conclude positive literals; these literals conflict with each other for the same pair of agents (truster and trustee)
We want, in the presence of e.g. personal experience, to omit strangers' ratings; that is why there is also a superiority relationship between the rules
The conflict set is formally determined as follows:
C[count_rating(truster→?a, trustee→?x)] =
{ ¬count_rating(truster→?a, trustee→?x) } ∪ { count_rating(truster→?a1, trustee→?x1) | ?a ≠ ?a1 ∧ ?x ≠ ?x1 }
HARM - DETERMINING EXPERIENCE TYPES
Which agents are considered known?
known(agent1→?a, agent2→?y) :- count_rating(rating→?id, truster→?a, trustee→?y).
Rating categories:
count_pr(agent→?a, truster→?a, trustee→?x, rating→?id) :- count_rating(rating→?id, truster→?a, trustee→?x).
count_kr(agent→?a, truster→?k, trustee→?x, rating→?id) :- known(agent1→?a, agent2→?k), count_rating(rating→?id, truster→?k, trustee→?x).
count_sr(agent→?a, truster→?s, trustee→?x, rating→?id) :- count_rating(rating→?id, truster→?s, trustee→?x), not(known(agent1→?a, agent2→?s)).
HARM – SELECTING EXPERIENCES
The final step is to decide whose experience will "count": direct, indirect (witness), or both.
The decision for R_AX is based on a relationship theory, e.g. Theory #1: all categories count equally.
r8: participate(agent→?a, trustee→?x, rating→?id_ratingAX) := count_pr(agent→?a, trustee→?x, rating→?id_ratingAX).
r9: participate(agent→?a, trustee→?x, rating→?id_ratingKX) := count_kr(agent→?a, trustee→?x, rating→?id_ratingKX).
r10: participate(agent→?a, trustee→?x, rating→?id_ratingSX) := count_sr(agent→?a, trustee→?x, rating→?id_ratingSX).
SELECTING EXPERIENCES - ALL CATEGORIES COUNT EQUALLY
[Diagram: all nodes of the category-combination lattice are selected]
SELECTING EXPERIENCES - THEORY #2: PERSONAL EXPERIENCE IS PREFERRED TO FRIENDS' OPINION, WHICH IS PREFERRED TO STRANGERS' OPINION
r8: participate(agent→?a, trustee→?x, rating→?id_ratingAX) := count_pr(agent→?a, trustee→?x, rating→?id_ratingAX).
r9: participate(agent→?a, trustee→?x, rating→?id_ratingKX) := count_kr(agent→?a, trustee→?x, rating→?id_ratingKX).
r10: participate(agent→?a, trustee→?x, rating→?id_ratingSX) := count_sr(agent→?a, trustee→?x, rating→?id_ratingSX).
r8 > r9 > r10
SELECTING EXPERIENCES - PERSONAL EXPERIENCE IS PREFERRED TO FRIENDS' OPINION, WHICH IS PREFERRED TO STRANGERS' OPINION
[Diagram: the category-combination lattice with the superiority ordering AX > KX > SX highlighted]
SELECTING EXPERIENCES - THEORY #3: PERSONAL EXPERIENCE AND FRIENDS' OPINION ARE PREFERRED TO STRANGERS' OPINION
r8: participate(agent→?a, trustee→?x, rating→?id_ratingAX) := count_pr(agent→?a, trustee→?x, rating→?id_ratingAX).
r9: participate(agent→?a, trustee→?x, rating→?id_ratingKX) := count_kr(agent→?a, trustee→?x, rating→?id_ratingKX).
r10: participate(agent→?a, trustee→?x, rating→?id_ratingSX) := count_sr(agent→?a, trustee→?x, rating→?id_ratingSX).
r8 > r10, r9 > r10
SELECTING EXPERIENCES - PERSONAL EXPERIENCE AND FRIENDS' OPINION ARE PREFERRED TO STRANGERS' OPINION
[Diagram: the category-combination lattice with AX > SX and KX > SX highlighted]
HARM - TEMPORAL DEFEASIBLE LOGIC EXTENSION
Agents may change their behavior / objectives at any time, so the evolution of trust over time should be taken into account: only the latest ratings participate in the reputation estimation
In the temporal extension of HARM: each rating is a persistent temporal literal of TDL; each rule conclusion is an expiring temporal literal of TDL
The Truster's rating is active after time_offset time instances have passed and is valid thereafter:
rating(id→val1, truster→val2, trustee→val3, validity→val4, completeness→val5, correctness→val6, resp_time→val7, confidence→val8, transaction_val→val9)@time_offset.
HARM - TEMPORAL DEFEASIBLE LOGIC EXTENSION
Rules are modified accordingly: each rating is active after t time instances have passed; each conclusion has a duration for which it holds; each rule has a delay between the cause and the effect
count_rating(rating→?idx, truster→?a, trustee→?x):duration :=delay
confidence_threshold(?conf), transaction_value_threshold(?tran),
rating(id→?idx, confidence→?confx, transaction_value→?tranx)@t,
?confx >= ?conf, ?tranx >= ?tran.
DISARM
K. Kravari, N. Bassiliades, “DISARM: A Social Distributed Agent Reputation Model based on Defeasible Logic”, Journal of Systems and Software, Vol. 117, pp. 130–152, July 2016
DISARM OVERVIEW
Distributed extension of HARM: a distributed hybrid reputation model combining Interaction Trust and Witness Reputation
Ratings are located through the agent's social relationships
Rule-based approach: defeasible logic (non-monotonic reasoning)
Time is directly used in: decision-making rules about the recency of ratings; the calculation of the reputation estimation (similar to T-REX)
Intuitive method for assessing trust, related to traditional human reasoning
DISARM - SOCIAL RELATIONSHIPS
Social relationships of trust among agents: if an agent is satisfied with a partner, it is more likely to interact with it again in the future; if dissatisfied, it will not interact again
Each agent maintains 2 relationship lists: a white-list (trusted agents) and a black-list (non-trusted agents); all other agents are indifferent (neutral zone)
Each agent decides which agents are added to / removed from each list, using rules
Personal social network
DISARM - RATINGS
The Truster's rating value has 11 coefficients (3 more than HARM):
- 2 IDs: Truster, Trustee
- 4 abilities: Validity, Completeness, Correctness, Response time
- 2 weights: Confidence, Transaction value
- Timestamp
- Cooperation: willingness to do what is asked for (important in distributed social environments)
- Outcome feeling: (dis)satisfaction with the transaction outcome; degree of request fulfillment
Example (defeasible RuleML / d-POSL syntax):
rating(id→1, truster→A, trustee→X, t→140630105632, resp_time→9, validity→7, completeness→6, correctness→6, cooperation→8, outcome_feeling→7, confidence→0.9, transaction_val→0.8)
DISARM MODEL
1. Agent A (Truster) requests ratings about Agent X (Trustee) from white-list agents
2. The request is propagated
3. A receives the ratings
4. A evaluates X's reputation (DISARM rules + ratings)
5. A chooses agent X and sends a service request
6. A receives the service
DISARM - BEHAVIOR CHARACTERIZATION
good_behavior(time→?t, truster→?a, trustee→?x, reason→all) :-
  resp_time_thrshld(?resp), valid_thrshld(?val), …, trans_val_thrshld(?trval),
  rating(id→?idx, time→?t, truster→?a, trustee→?x, resp_time→?respx, validity→?valx, transaction_val→?trvalx, completeness→?comx, correctness→?corx, cooperation→?coopx, outcome_feeling→?outfx),
  ?respx<?resp, ?valx>?val, ?comx>?com, ?corx>?cor, ?coopx>?coop, ?outfx>?outf.
bad_behavior(time→?t, truster→?a, trustee→?x, reason→response_time) :-
  rating(id→?idx, time→?t, truster→?a, trustee→?x, resp_time→?respx),
  resp_time_thrshld(?resp), ?respx>?resp.
Any combination of parameters can be used with any defeasible theory.
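The threshold tests in the rules above can be sketched procedurally; the concrete threshold values below are hypothetical (the rules read them from facts such as resp_time_thrshld), and this Python is only an illustration of the rule logic, not the DISARM implementation:

```python
# Hypothetical thresholds; the d-POSL rules obtain them from threshold facts.
THRESHOLDS = {"resp_time": 10, "validity": 5, "completeness": 5,
              "correctness": 5, "cooperation": 5, "outcome_feeling": 5}

def good_behavior(rating: dict, thr=THRESHOLDS) -> bool:
    """Good for reason 'all': every ability above its threshold,
    response time below its threshold."""
    return (rating["resp_time"] < thr["resp_time"]
            and all(rating[k] > thr[k]
                    for k in ("validity", "completeness", "correctness",
                              "cooperation", "outcome_feeling")))

def bad_behavior_response_time(rating: dict, thr=THRESHOLDS) -> bool:
    """Bad for reason 'response_time': the trustee responded too slowly."""
    return rating["resp_time"] > thr["resp_time"]

# The slide's example rating satisfies the good-behavior rule:
r = {"resp_time": 9, "validity": 7, "completeness": 6, "correctness": 6,
     "cooperation": 8, "outcome_feeling": 7}
good_behavior(r)  # → True
```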
DISARM - DECIDING WHO TO TRUST

Has been good twice for the same reason:
add_whitelist(trustee→?x, time→?t2) :=
    good_behavior(time→?t1, truster→?self, trustee→?x, reason→?r),
    good_behavior(time→?t2, truster→?self, trustee→?x, reason→?r),
    ?t2 > ?t1.

Has been bad thrice for the same reason:
add_blacklist(trustee→?x, time→?t3) :=
    bad_behavior(time→?t1, truster→?self, trustee→?x, reason→?r),
    bad_behavior(time→?t2, truster→?self, trustee→?x, reason→?r),
    bad_behavior(time→?t3, truster→?self, trustee→?x, reason→?r),
    ?t2 > ?t1, ?t3 > ?t2.
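The "good twice / bad thrice for the same reason" decisions can be sketched as follows; this is an illustrative Python rendering under the assumption that an agent keeps a per-trustee history of characterized behaviors (DISARM itself derives these conclusions with defeasible rules):

```python
from collections import Counter

def decide_lists(history):
    """history: list of (time, reason, 'good'|'bad') tuples for one trustee.
    Whitelist after 2 good behaviors for the same reason,
    blacklist after 3 bad behaviors for the same reason."""
    good = Counter(reason for _, reason, kind in history if kind == "good")
    bad = Counter(reason for _, reason, kind in history if kind == "bad")
    add_whitelist = any(n >= 2 for n in good.values())
    add_blacklist = any(n >= 3 for n in bad.values())
    return add_whitelist, add_blacklist

h = [(1, "response_time", "good"), (2, "response_time", "good"),
     (3, "validity", "bad")]
decide_lists(h)  # → (True, False): good twice for the same reason
```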
DISARM - MAINTAINING RELATIONSHIP LISTS

Add to the blacklist:
blacklist(trustee→?x, time→?t) :=
    ¬whitelist(trustee→?x, time→?t1),
    add_blacklist(trustee→?x, time→?t2), ?t2 > ?t1.

Remove from the blacklist:
¬blacklist(trustee→?x, time→?t2) :=
    blacklist(trustee→?x, time→?t1),
    add_whitelist(trustee→?x, time→?t2), ?t2 > ?t1.

Add to the whitelist:
whitelist(trustee→?x, time→?t) :=
    ¬blacklist(trustee→?x, time→?t1),
    add_whitelist(trustee→?x, time→?t2), ?t2 > ?t1.
…
DISARM - LOCATING RATINGS
Ask for ratings about an agent by sending request messages. To whom, and how?
- To everybody
- To direct "neighbors" of the agent's "social network"
- To indirect "neighbors" of the "social network", through message propagation for a predefined number of hops (Time-to-Live, as in P2P networks)

"Neighbors" are the agents in the whitelist.

Original request:
send_message(sender→?self, receiver→?r,
             msg→request_reputation(about→?x, ttl→?t)) :=
    ttl_limit(?t), whitelist(?r), locate_ratings(about→?x).
DISARM - HANDLING RATINGS REQUEST
Upon receiving a request, return the rating to the sender:
send_message(sender→?self, receiver→?s,
             msg→rating(id→?idx, truster→?self, trustee→?x, …)) :=
    receive_message(sender→?s, receiver→?self, msg→request_rating(about→?x)),
    rating(id→?idx, truster→?self, trustee→?x, …).

If the time-to-live has not expired, propagate the request to all friends:
send_message(sender→?s, receiver→?r,
             msg→request_reputation(about→?x, ttl→?t1)) :=
    receive_message(sender→?s, receiver→?self,
                    msg→request_rating(about→?x, ttl→?t)),
    ?t > 0, whitelist(?r), ?t1 is ?t - 1.
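The effect of the TTL-based propagation can be sketched as a bounded breadth-first traversal over whitelist links; this is only an illustration of the hop-limited scheme (in DISARM each agent decides locally by rules, and the `network` map below is a bird's-eye-view assumption of this sketch):

```python
def propagate(network, start, ttl):
    """Hop-limited (TTL) propagation of a rating request over whitelist
    links, as in P2P flooding. network: dict agent -> list of 'friends'
    (its whitelist). Returns the set of agents asked for ratings."""
    reached = {start}
    frontier = [(start, ttl)]
    while frontier:
        agent, t = frontier.pop(0)
        if t <= 0:          # TTL expired: do not propagate further
            continue
        for friend in network.get(agent, []):
            if friend not in reached:
                reached.add(friend)
                frontier.append((friend, t - 1))
    return reached - {start}

net = {"A": ["B", "C"], "B": ["D"], "D": ["E"]}
propagate(net, "A", 2)  # → {'B', 'C', 'D'}: E is 3 hops away, out of reach
```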
DISARM - RATING CATEGORIES
- Direct experience (PRX)
- Indirect experience (reports provided by other agents):
  - "Friends" (WRX): agents in the whitelist
  - Known agents from previous interactions (KRX)
  - Complete strangers (SRX)

Final reputation value: RX = {PRX, WRX, KRX, SRX}

[Diagram: lattice of all possible combinations of the PR, WR, KR, and SR rating categories]

New compared to HARM.
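The four categories can be sketched as a simple classification of each rating's source; this Python rendering is illustrative, with set-membership semantics for the whitelist and the known-agents list assumed by this sketch:

```python
def rating_category(rater, self_id, whitelist, known):
    """Assign a rating to one of DISARM's four categories, based on
    who produced it. whitelist and known are sets of agent IDs."""
    if rater == self_id:
        return "PR"   # direct (personal) experience
    if rater in whitelist:
        return "WR"   # "friends": agents in the whitelist
    if rater in known:
        return "KR"   # known from previous interactions
    return "SR"       # complete strangers

wl, kn = {"B"}, {"B", "C"}
[rating_category(a, "A", wl, kn) for a in ("A", "B", "C", "D")]
# → ['PR', 'WR', 'KR', 'SR']
```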
DISARM - SELECTING RATINGS

According to the user's preferences:
eligible_rating(rating→?idx, truster→?a, trustee→?x, reason→cnf_imp) :=
    conf_thrshld(?conf), trans_val_thrshld(?tr),
    rating(id→?idx, truster→?a, trustee→?x, conf→?confx, trans_val→?trx),
    ?confx >= ?conf, ?trx >= ?tr.

According to temporal restrictions:
count_rating(rating→?idx, truster→?a, trustee→?x) :=
    time_from_thrshld(?ftime), time_to_thrshld(?ttime),
    rating(id→?idx, t→?tx, truster→?a, trustee→?x),
    ?ftime <= ?tx, ?tx <= ?ttime.
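Both selection rules amount to filtering the pool of ratings; a minimal sketch, with threshold values passed as parameters (in DISARM they come from user-preference facts):

```python
def eligible(ratings, conf_thr, trans_thr, t_from, t_to):
    """Keep ratings that satisfy the user's confidence/transaction-value
    preferences AND fall inside the temporal window."""
    return [r for r in ratings
            if r["confidence"] >= conf_thr
            and r["transaction_val"] >= trans_thr
            and t_from <= r["t"] <= t_to]

rs = [{"id": 1, "t": 100, "confidence": 0.9, "transaction_val": 0.8},
      {"id": 2, "t": 50,  "confidence": 0.9, "transaction_val": 0.8},
      {"id": 3, "t": 120, "confidence": 0.3, "transaction_val": 0.8}]
[r["id"] for r in eligible(rs, 0.5, 0.5, 90, 150)]
# → [1]: rating 2 is too old, rating 3 has too low confidence
```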
DISARM – DETERMINING RATING CATEGORIES

count_wr(rating→?idx, trustee→?x) :-
    eligible_rating(rating→?idx, cat→?c, truster→?k, trustee→?x),
    count_rating(rating→?idx, truster→?k, trustee→?x),
    known(agent→?k), whitelist(trustee→?k).

count_kr(rating→?idx, trustee→?x) :-
    eligible_rating(rating→?idx, cat→?c, truster→?k, trustee→?x),
    count_rating(rating→?idx, truster→?k, trustee→?x),
    known(agent→?k), not(whitelist(trustee→?k)), not(blacklist(trustee→?k)).
DISARM - FACING DISHONESTY
When the ratings provided by an agent fall outside the standard deviation of all received ratings, the agent might be behaving dishonestly:

bad_assessment(time→?t, truster→?y, trustee→?x) :-
    standard_deviation_value(?t, ?y, ?x, ?stdevy),
    standard_deviation_value(?t, _, ?x, ?stdev),
    ?stdevy > ?stdev.

When two bad assessments for the same agent are given within a certain time window, trust is lost:

remove_whitelist(agent→?y, time→?t2) :=
    whitelist(truster→?y), time_window(?wtime),
    bad_assessment(time→?t1, truster→?y, trustee→?x),
    bad_assessment(time→?t2, truster→?y, trustee→?x),
    ?t2 <= ?t1 + ?wtime.
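One simple reading of the outlier test can be sketched as follows; note this is an assumption-laden illustration (flagging ratings that deviate from the overall mean by more than one standard deviation), not the exact DISARM formula, which the slide does not spell out:

```python
from statistics import mean, stdev

def bad_assessment(ratings_by_agent, suspect):
    """Flag `suspect` when any of its ratings for a trustee deviates from
    the overall mean of all received ratings by more than one standard
    deviation (a simple reading of the slide's dishonesty rule)."""
    all_vals = [v for vals in ratings_by_agent.values() for v in vals]
    mu, sd = mean(all_vals), stdev(all_vals)
    return any(abs(v - mu) > sd for v in ratings_by_agent[suspect])

# B and C agree that the trustee is good; Y reports it as very bad.
reports = {"B": [7, 7, 8], "C": [7, 6, 7], "Y": [1, 1]}
bad_assessment(reports, "Y")  # → True: Y's ratings are outliers
```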
EVALUATION OF TRUST / REPUTATION MODELS
EVALUATION ENVIRONMENT
- Simulation in the EMERALD* multi-agent system
- Service provider agents: all provide the same service
- Service consumer agents: choose the provider with the highest reputation value
- Performance metric: Utility Gain
*K. Kravari, E. Kontopoulos, N. Bassiliades, “EMERALD: A Multi-Agent System for Knowledge-based Reasoning Interoperability in the Semantic Web”, 6th Hellenic Conference on Artificial Intelligence (SETN 2010), Springer, LNCS 6040, pp. 173-182, 2010.
Number of simulations: 500
Number of providers: 100
- Good providers: 10
- Ordinary providers: 40
- Intermittent providers: 5
- Bad providers: 45
T-REX PERFORMANCE OVER TIME
HARM VS. T-REX VS. STATE-OF-THE-ART

Model:         HARM   T-REX   No Trust   CR     SPORAS
Utility Gain:  5.73   5.57    0.16       5.48   4.65
DISARM VS. HARM VS. STATE-OF-THE-ART: MEAN UTILITY GAIN

[Chart: mean Utility Gain (UG) over time for DISARM, Social Regret, Certified Reputation, CRM, FIRE, HARM, and a no-trust baseline]
DISARM - ALONE
Better performance when alone, due to more social relationships.
DISARM VS. HARM VS. STATE-OF-THE-ART: STORAGE SPACE

[Chart: memory space (%) over time for DISARM, Social Regret, Certified Reputation, CRM, and FIRE]
EVALUATING DISHONESTY HANDLING
CONCLUDING…
TRUST / REPUTATION MODELS FOR MULTIAGENT SYSTEMS
- Interaction trust (personal experience) vs. witness reputation (experience of others); hybrid models
- Centralized (easy to locate ratings) vs. distributed (more robust)
- Several state-of-the-art models have been presented: SPORAS, Regret, Certified Reputation, Referral, FIRE, TRR, CRM, …
- Presenter's and associates' trust / reputation models:
  - T-REX (centralized, hybrid, time decay, computationally optimized)
  - HARM (centralized, hybrid, knowledge-based, temporal defeasible logic)
  - DISARM (distributed, hybrid, knowledge-based, defeasible logic, time decay, social relationships, manages dishonesty)
CONCLUSIONS
Centralized models:
+ Achieve higher performance because they have access to more information
+ Simple interaction protocols, easy to locate ratings
+ Both interaction trust and witness reputation can be easily implemented
- Single point of failure
- Cannot scale well (bottleneck, storage & computational complexity)
- Central authority hard to enforce in open multiagent systems

Distributed models:
- Less accurate trust predictions, due to limited information
- Complex interaction protocols, difficult to locate ratings
- More appropriate for interaction trust
+ Robust: no single point of failure
+ Can scale well (no bottlenecks, less complexity)
+ More realistic in open multiagent systems
CONCLUSIONS
Interaction trust:
+ More trustful
- Requires a long time to reach a satisfying estimation level

Witness reputation:
- Does not guarantee reliable estimation
+ Estimation is available from the moment an agent enters a community

Hybrid models:
+ Combine interaction trust and witness reputation
- Combined trust metrics are usually based only on arbitrary / experimentally-optimized weights
CONCLUSIONS – PRESENTED MODELS
Centralized models:
- Cannot scale well (bottleneck, storage & computational complexity)
+ T-REX reduces computational complexity at the expense of storage complexity
+ HARM reduces computational complexity by reducing the considered ratings, through rating selection based on the user's domain-specific knowledge

Distributed models:
- Less accurate trust predictions, due to limited information
- Complex interaction protocols, difficult to locate ratings
+ DISARM finds ratings through agent social relationships and increases accuracy by using only known-to-be-trustful agents

Hybrid models:
- Combined trust metrics are usually based only on arbitrary weights
+ HARM & DISARM employ a knowledge-based, highly customizable (both to user preferences and to time) approach, using non-monotonic defeasible reasoning
A FEW WORDS ABOUT US…
- Aristotle University of Thessaloniki, Greece: largest university in Greece and South-East Europe; since 1925, 41 departments, ~2K faculty, ~80K students
- Dept. of Informatics: since 1992, 28 faculty, 5 research labs, ~1100 undergraduate students, ~200 MSc students, ~80 PhD students, ~120 PhD graduates, >3500 publications
- Software Engineering, Web and Intelligent Systems Lab: 7 faculty, 20 PhD students, 9 post-doctorate affiliates
- Intelligent Systems group (http://intelligence.csd.auth.gr): 4 faculty, 9 PhD students, 16 PhD graduates
  - Research on Artificial Intelligence, Machine Learning / Data Mining, Knowledge Representation & Reasoning / Semantic Web, Planning, Multi-Agent Systems
  - 425 publications, 35 projects
ACKNOWLEDGMENTS
The work described in this talk has been performed in cooperation with:
- Dr. Kalliopi Kravari (former PhD student, currently post-doctorate affiliate)
Occasional contributors:
- Dr. Christos Malliarakis (former MSc student, co-author)
- Dr. Efstratios Kontopoulos (former PhD student, co-author)
- Dr. Antonios Bikakis (Lecturer, University College London, PhD examiner)
RELEVANT PUBLICATIONS
K. Kravari, C. Malliarakis, N. Bassiliades, “T-REX: A hybrid agent trust
model based on witness reputation and personal experience”, Proc. 11th International Conference on Electronic Commerce and Web Technologies (EC-Web 2010), Bilbao, Spain, Lecture Notes in Business Information Processing, Vol. 61, Part 3, Springer, pp. 107-118, 2010.
K. Kravari, E. Kontopoulos, N. Bassiliades, “EMERALD: A Multi-Agent System for Knowledge-based Reasoning Interoperability in the Semantic Web”, 6th Hellenic Conference on Artificial Intelligence (SETN 2010), Springer, LNCS 6040, pp. 173-182, 2010.
K. Kravari, N. Bassiliades, “HARM: A Hybrid Rule-based Agent Reputation Model based on Temporal Defeasible Logic”, 6th International Symposium on Rules: Research Based and Industry Focused (RuleML-2012). Springer Berlin/Heidelberg, LNCS, Vol. 7438, pp. 193-207, 2012.
K. Kravari, N. Bassiliades, “DISARM: A Social Distributed Agent Reputation Model based on Defeasible Logic”, Journal of Systems and Software, Vol. 117, pp. 130–152, July 2016.