Introducing Pyth Publisher Metrics

TL;DR

  • The Pyth network has released Publisher Metrics, a public dashboard of its data publishers’ performance metrics
  • Developers gain granular insight into Pyth’s data performance and publishers’ track records
  • Publishers gain new analytics and monitoring capabilities
  • In the future, delegators will benefit from increased transparency into publisher data quality, which can better inform their staking decisions
  • This release is part of Pyth’s broader commitment to transparency and verifiability — the fundamental values of DeFi

Take fate into your own hands! Use the Publisher Metrics!

Today, Publisher Metrics, a new analytics feature on the Pyth website, went live. The feature empowers developers, publishers, and delegators with insight into the historical performance of the network’s data sources. This powerful tool reflects our commitment to transparency and to delivering timely, accurate, and valuable first-party data for everyone.

Pyth is unique in that all of its data is verifiable; any output can be traced back to its sources. Publishers in the network individually submit their prices and confidence intervals to Pyth’s on-chain program, which then aggregates them. There are no off-chain components in the Pyth data distribution process, which guards against potential forms of manipulation found in off-chain designs. This level of transparency is one among many benefits of Pyth’s publisher network model.
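To make the aggregation step concrete, here is a simplified off-chain sketch of a Pyth-style confidence-weighted median. The actual on-chain program is written in Rust and also applies stake weighting and other details omitted here; the percentile-based confidence below is an illustrative choice, not the exact on-chain formula.

```python
from statistics import median

def aggregate(quotes):
    """Simplified sketch of Pyth-style price aggregation.

    `quotes` is a list of (price, confidence) pairs, one per publisher.
    Each publisher casts three votes: price - conf, price, price + conf.
    The aggregate price is the median of all votes.
    """
    votes = []
    for price, conf in quotes:
        votes.extend([price - conf, price, price + conf])
    votes.sort()
    agg_price = median(votes)
    # A simple spread estimate: distance from the aggregate to the
    # 25th/75th-percentile votes (illustrative only).
    q1 = votes[len(votes) // 4]
    q3 = votes[(3 * len(votes)) // 4]
    agg_conf = max(agg_price - q1, q3 - agg_price)
    return agg_price, agg_conf

price, conf = aggregate([(100.0, 1.0), (101.0, 0.5), (99.5, 2.0)])
```

Because the median of the votes drives the output, a single publisher quoting far from the rest of the market shifts the aggregate very little — one reason an on-chain, median-based design resists manipulation.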

Why do we need Publisher Metrics?

The Pyth network has outlined a novel oracle network design to incentivize publishers to provide data, consumers to hedge their oracle risk, and delegators to stake PYTH tokens to secure the network.

To participate effectively, network participants need data: publishers’ historical performance and insight into the state of the network itself. This is where Publisher Metrics comes into play.

Publishers

Publishers are incentivized to provide timely, valuable data. Publishers can use the new metrics feature to track their performance and benchmark themselves against one another. These performance metrics will become even more important when publishers are required to stake PYTH to publish and are able to participate in Pyth’s on-chain rewards program.

End-Users

End-users — the protocols (“consumers”) utilizing Pyth’s data — can pay data fees to hedge against oracle risk. The Publisher Metrics will allow end-users to evaluate the individual components (sources) of Pyth’s aggregate prices and adjust their hedging patterns accordingly.

Delegators

Delegators, who are interested in earning data fees (and avoiding getting slashed), may want to update how they delegate stake based on new information about publisher or feed performance. Publisher Metrics play a critical role in how delegators will choose to stake and contribute to price feed robustness.

The Pyth network is taking steps towards realizing the vision outlined in the whitepaper. Publisher Metrics are a critical first step toward providing participants with the information necessary to make thoughtful, informed decisions within the network.

How to Use the Publisher Metrics

Anyone can access the Publisher Metrics and monitor publishers’ performance on a per-product basis. Performance metrics include uptime, price quality, and confidence interval accuracy. These metrics are in line with the variables that will ultimately determine publisher scoring and rewards once the full system is deployed.

Accessing the Metrics

You can start at the Pyth Markets page, which will list every market symbol (“product”). Clicking on any product will bring you to the corresponding product page. We will use the SOL/USD product page as our example.

Each product page has a list of price components (representing each publisher by their publisher keys). Each component will link to the corresponding Publisher Metrics page.

Once you are on a product page, you can click on a publisher key (highlighted in yellow) under “Price Components” to access the Publisher Metrics page.

On the Metrics Page for a specific publisher, you will find the metrics graphs and the option to download a Conformance Report (TSV file).
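The downloaded Conformance Report is a plain tab-separated file, so it can be loaded with standard tooling. The column names in the sample below are hypothetical — inspect a real downloaded report and adjust accordingly:

```python
import csv
import io

# Hypothetical TSV layout; the real Conformance Report's column names
# may differ -- check the header row of a downloaded file.
SAMPLE = (
    "time\tpublisher_price\taggregate_price\n"
    "1\t100.1\t100.0\n"
    "2\t99.9\t100.0\n"
)

def load_report(fileobj):
    """Parse a tab-separated conformance report into a list of dicts."""
    return list(csv.DictReader(fileobj, delimiter="\t"))

rows = load_report(io.StringIO(SAMPLE))
```

From there the rows can be fed into a spreadsheet or a pandas DataFrame for your own per-publisher analysis.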

To open the metrics for another publisher (of that same product), you can click the “Back to the [SOL/USD] market” link.

If you want to review the Publisher Metrics of another product (e.g. ETH/USD), you will need to access the relevant product page. As mentioned, the Pyth Markets page has the full list of products.

Interpreting the Metrics

The Metrics Page shows four graphs derived from the publisher’s prices over a chosen time interval (24H, 48H, 72H, 1W, and 1M) for a single product (price feed):

  • The price graph shows how a publisher’s price compares to the aggregate price, illustrating how closely the two prices track each other, and whether there were any periods where the publisher deviated significantly from the rest of the market.
  • The uptime graph shows when the publisher was actively contributing prices. The x-axis subdivides the time interval into bins, and the y-axis is the % of slots in that bin where the publisher’s price was recent enough to be included in the aggregate. This graph lets you determine the regularity and reliability of a publisher.
  • The quality graph shows the dataset used in the regression model for computing the quality score described in section 4.1.1 of the whitepaper. The quality score measures how well a publisher’s price series predicts future changes in the aggregate price. A smooth color gradient (from blue on the bottom left to pink on the top right) indicates a high quality score.
  • The calibration graph shows how closely the publisher’s prices and confidences match the expected Laplace distribution. The closer the fit between the two distributions, the higher the calibration score (described in section 4.1.2 of the whitepaper). In other words, a perfect publisher should produce a uniform histogram. As a reminder, the calibration score does not reward publishers for producing tighter confidence intervals; rather, the score captures whether the reported confidence interval corresponds to the publisher’s “true” confidence.
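The calibration check above can be sketched in a few lines. Assuming the whitepaper’s model that the normalized residual (publisher price minus aggregate price, divided by the reported confidence) follows a unit-scale Laplace distribution, binning the Laplace CDF of each residual should yield a roughly uniform histogram for a well-calibrated publisher (this is an illustrative reconstruction, not the exact scoring code):

```python
import math

def laplace_cdf(z):
    """CDF of a unit-scale Laplace distribution centered at 0."""
    return 0.5 * math.exp(z) if z < 0 else 1.0 - 0.5 * math.exp(-z)

def calibration_histogram(samples, bins=10):
    """samples: (publisher_price, aggregate_price, confidence) triples.

    Maps each normalized residual through the Laplace CDF and bins the
    result; a well-calibrated publisher yields a roughly uniform
    histogram, matching the graph described above.
    """
    counts = [0] * bins
    for pub, agg, conf in samples:
        u = laplace_cdf((pub - agg) / conf)  # u lies in (0, 1)
        counts[min(int(u * bins), bins - 1)] += 1
    return counts
```

Note that this score is distribution-matching, not interval-tightening: quoting artificially narrow confidences pushes residuals into the extreme bins and makes the histogram less uniform, which is exactly why the calibration score does not reward overly tight intervals.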

We cannot wait to hear what you think! Feel free to join us on the Pyth Discord server, follow Pyth on Twitter and Telegram to learn more, and ask any questions you may have.

--

Smarter data for smarter contracts. Pyth is designed to bring real-world data on-chain on a sub-second timescale.
