Using the EigenTrust algorithm for Profile Ranking and Scoring
Karma3 Labs currently uses EigenTrust, an algorithm that lets networked peers calculate how much trust each should place in other peers in the ecosystem. This is the first algorithm we're employing, and we're open to adding new algorithms in the future.
Stepping through How It Works
The following are the high-level steps of how the algorithm is implemented, in an openly verifiable manner.
Step 1: Data Source synchronization
Since Lens BigQuery was released, we have used it as our source of truth. A subset of the data is retrieved from BigQuery into our local datastore, where we run post-processing to determine the number of followers, posts, comments, mirrors, NFT collects, and so on. For example, at the time of this writing, over 112,000 unique profiles have at least one follower. This data is refreshed every few minutes so that we don't fall too far behind the latest events.
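The post-processing step can be sketched as a simple aggregation over synced events. This is a minimal illustration, not the actual pipeline: the event records and profile names below are hypothetical, and the real system pulls them from BigQuery into a local datastore first.

```python
from collections import Counter

# Hypothetical local snapshot of follow events synced from Lens BigQuery;
# the real pipeline refreshes these every few minutes.
follow_events = [
    {"follower": "0xaaa", "profile": "profile1.lens"},
    {"follower": "0xbbb", "profile": "profile1.lens"},
    {"follower": "0xccc", "profile": "profile2.lens"},
]

# Post-processing: count followers per profile.
followers = Counter(event["profile"] for event in follow_events)

# Profiles with at least one follower (the text cites 112,000+ of these).
profiles_with_followers = sum(1 for count in followers.values() if count >= 1)
```

The same pattern extends to posts, comments, mirrors, and NFT collects by counting the corresponding event types.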
Step 2: Setting up the Computation
We then run a computation over this data set to produce a set of localized interactions (or attestations) per profile, whether those attestations are following one another, having a post mirrored by someone else, commenting on one another's posts, and so on. The interactions between each pair of profiles are placed into a matrix, and we keep only the non-zero cells so that the EigenTrust algorithm can determine how strong the relationship is from one profile to another. The algorithm is now ready.
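The sparse-matrix setup above can be sketched as follows. The interaction counts are hypothetical placeholders; the point is keeping only non-zero cells and row-normalizing each profile's outgoing trust, which EigenTrust expects as input.

```python
from collections import defaultdict

# Hypothetical pairwise interaction counts (truster, trustee, count);
# in practice these come from the post-processed Lens data.
interactions = [
    ("alice", "bob", 3),
    ("alice", "carol", 2),
    ("bob", "carol", 1),
    ("bob", "dave", 0),  # zero cell: dropped below
]

# Sparse matrix as nested dicts: only non-zero, non-self cells are kept.
local_trust = defaultdict(dict)
for src, dst, count in interactions:
    if count > 0 and src != dst:
        local_trust[src][dst] = count

# Row-normalize so each profile's outgoing trust sums to 1.
normalized = {
    src: {dst: w / sum(row.values()) for dst, w in row.items()}
    for src, row in local_trust.items()
}
```

Storing only the non-zero cells keeps the matrix tractable even with hundreds of thousands of profiles, since most pairs never interact.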
Step 3: Defining the Strategy
To define a strategy, we combine several types of attestations and assign each a weight. An example of an attestation type is a mirror. The types at Lens Protocol are "follows" (F), "mirrors" (M), and "comments" (C). If a developer wishes to weight "follows" higher than "comments" but lower than "mirrors", the strategy would assign F a weight (between 0 and 10) higher than C's but lower than M's. For example, the engagement strategy has F=6, C=3, and M=8. An example is available in our open-sourced GitHub repository.
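A strategy like this can be sketched as a weighted sum over attestation counts. The weights below are the F=6, C=3, M=8 engagement example from the text; the function name and count format are illustrative assumptions, not the repository's actual API.

```python
# Engagement-strategy weights (0-10) from the text: F=6, C=3, M=8.
WEIGHTS = {"follow": 6, "comment": 3, "mirror": 8}

def attestation_score(counts):
    """Combine per-type attestation counts into one weighted edge score."""
    return sum(WEIGHTS.get(kind, 0) * n for kind, n in counts.items())

# e.g. alice -> bob: 1 follow, 2 comments, 1 mirror
score = attestation_score({"follow": 1, "comment": 2, "mirror": 1})
# 6*1 + 3*2 + 8*1 = 20
```

Swapping in a different weight table yields a different strategy without touching the rest of the pipeline, which is what makes strategies composable.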
Step 4: Running the Computation
The computation is then executed: the localized interactions are first cached, then used to calculate results for global ranks or personalized recommendations. Once the computation begins, each iteration is cached and compared with the previous iteration's results. The computation runs repeatedly until it converges, that is, until there is nearly no difference between the results of iteration n-1 and iteration n. We have optimized this to converge in near real time in some cases, such as personalized recommendations.
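The iterate-and-compare loop described above corresponds to EigenTrust's power iteration. Here is a minimal sketch over the standard formulation t ← (1-α)·Cᵀt + α·p, where C is the row-normalized local trust matrix and p a pre-trust vector; the toy matrix, α value, and tolerance are assumptions for illustration.

```python
def eigentrust(local_trust, pretrust, alpha=0.1, eps=1e-8, max_iter=100):
    """Power-iterate t <- (1 - alpha) * C^T t + alpha * p until stable."""
    peers = list(pretrust)
    t = dict(pretrust)  # start from the pre-trust vector
    for _ in range(max_iter):
        nxt = {
            j: (1 - alpha) * sum(local_trust.get(i, {}).get(j, 0.0) * t[i]
                                 for i in peers)
               + alpha * pretrust[j]
            for j in peers
        }
        # Compare iteration n with iteration n-1; stop when nearly identical.
        if max(abs(nxt[p] - t[p]) for p in peers) < eps:
            return nxt
        t = nxt
    return t

# Toy row-normalized local trust matrix and uniform pre-trust.
C = {"a": {"b": 0.5, "c": 0.5}, "b": {"c": 1.0}, "c": {"a": 1.0}}
p = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}
scores = eigentrust(C, p)
```

In production the iteration runs over the sparse matrix and caches each pass, which is what makes near-real-time convergence feasible for personalized runs.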
Step 5: Humanizing the Results
Once developers have access to the results, it's a matter of translating them into something real users can interact with. Whether Global Profile Rankings are used by newly onboarded users to find engaging profiles to follow, or Personalized Profile Recommendations are selected as a re-engagement mechanism, it is up to developers to present the results in the way that makes the most sense to their users.
The best part is that the algorithm and data are all open source; developers can eventually turn the controls over to their users to choose what types of views they'd like, whether they wish to remain polarized in their own world views, or choose to explore new frontiers to expand their understanding and become better judges of truth.