🎨 Staking + Burning Token Economy

Modeling a Staking + Burning Token Economy for a DEX

In the following example, we are going to see how to model a simple token economy with two utilities: staking and burning. This is the sort of design a DEX might implement.

The resulting template can be seen here.

Introduction

We aim to analyse different scenarios for our token dynamics, predict the demand for the token, and understand how the vesting schedule impacts the token's robustness.

To do this, we will create a digital twin of a DEX (Decentralised Exchange). In this model, there is a fee on every trade. This fee gets distributed between:

  • AMM Liquidity Providers. This is needed to attract liquidity and create an effective exchange.

  • Stakers of the protocol token. This allows them to share in the protocol earnings while contributing to the token stability by reducing the token velocity through locking.

  • Treasury, which uses the fees to buy back tokens from the market and burn them. This puts positive buying pressure on the token economy while also reducing the supply.

We will be able to adjust the fee percentages and how they are split between the staking and burning utilities.

The modelling process consists of two steps:

  1. Define the logic of the tokenomics and protocol with mathematical formulas

  2. Fill in the spreadsheet with those formulas, which will be read and interpreted by Cenit's software in order to create the simulation and dashboard

Building scenarios in tokenomics: testing parameters

To test various scenarios, we will examine different parameters.

  • From the tokenomics design perspective, we aim to model the impact on staking demand from the fees that we are charging to the end user, as well as the redistribution of those fees to stakers.

  • At the same time, we want to assess the token demand generated by the burning mechanism, aiming to understand the effect that the fees allocated to burning have on the economy.

  • We will also examine different selling behaviours of the agents participating in the token allocation to understand the robustness of the economy.

  • Finally, we want to explore different hypotheses. These hypotheses represent the uncertainties in the token economy. Examples include the user demand for the protocol, measured as trading volume, and the APY that stakers expect from the markets.

How to model the economy mathematically

We need to capture the following token interactions in the economy:

  • Agents with token allocations have different motivations and will sell their vested allocations at varying rates.

  • A portion of the fees goes to stakers, leading them to purchase the tokens to access that revenue.

  • Some fees are used to buy tokens, which are then burnt.

  • Some of the protocol token is being used to provide liquidity to its own markets.

Let’s dive into how these interactions translate into formulas for each simulation timestep.

Fees generated are the product of the protocol fee percentage and the trading volume. Since the simulation timesteps might not be daily, a normalization factor is required for each timestep.
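Written out with illustrative symbol names (these are ours, not the template's), this could read:

$$\text{Fees}_t = \text{fee\%} \times \text{daily volume}_t \times \text{days per timestep}$$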

Fees going to liquidity providers are a percentage of the total fees generated.

Fees that go to the burn mechanism represent a percentage of the fees not given to liquidity providers.

Fees given to stakers are the remainder after the burn amount has been subtracted.
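Putting the three splits together, one consistent reading of the fee waterfall (the share parameter names are illustrative) is:

$$\begin{aligned} \text{Fees}_{\text{LP},t} &= \text{lp\_share} \times \text{Fees}_t \\ \text{Fees}_{\text{burn},t} &= \text{burn\_share} \times \left(\text{Fees}_t - \text{Fees}_{\text{LP},t}\right) \\ \text{Fees}_{\text{stakers},t} &= \text{Fees}_t - \text{Fees}_{\text{LP},t} - \text{Fees}_{\text{burn},t} \end{aligned}$$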

Amount of tokens that should be staked for an optimally efficient token economy. If we expect an APY of, let's say, 10%, the protocol should generate one-tenth of the staked tokens' value in fees for stakers annually to meet those expectations.

Consequently, if we know the annual fees distributed to stakers and the token price, we can determine the number of tokens that should be staked to achieve ideal market conditions. Here's the formula:
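In our own notation, consistent with the reasoning above:

$$\text{VA\_competitive\_staking}_t = \frac{\text{annual fees to stakers}_t}{\text{expected APY} \times \text{token price}_t}$$

For example, with 1,000,000 USD of annual fees going to stakers, a 10% expected APY, and a token price of 0.02 USD, the staked value should be 10,000,000 USD, i.e. 500 million tokens.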

As a result, if the fees increase, more tokens will be staked and new tokens will be bought. If, on the contrary, the fees decrease or the token price rises sharply, stakers will sell their tokens, since staking no longer delivers the expected yield.

Amount of tokens each vesting agent sells. For agents receiving an allocation, we aim to model the percentage of tokens they'll sell and their selling speed. We represent the selling speed with the variable "Mean time to sell".

The estimated percentage of tokens each agent will sell within a period of N months, given a "Mean time to sell" (and assuming they will sell all of their tokens), is as follows:
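A common functional form consistent with a "mean time to sell" is exponential decay of the unsold balance; treating this exact curve as the template's is our assumption:

$$\text{sold}(N) = 1 - e^{-N / \text{mean time to sell}}$$

With a mean time to sell of 6 months, roughly 63% of the allocation is sold after 6 months and about 86% after 12.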

Based on those numbers, our software computes internally how many tokens each agent will sell per time step.
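As a rough sketch of what that internal computation might look like (our own illustrative Python, not Cenit's code; the exponential sell queue is an assumption):

```python
import numpy as np

def tokens_sold_per_step(vested: np.ndarray, sell_pct: float,
                         mean_time_to_sell: float, dt: float = 1.0) -> np.ndarray:
    """Spread each agent's sellable allocation over time using an
    exponential "mean time to sell".

    vested            -- tokens newly vested at each timestep
    sell_pct          -- fraction of the allocation the agent will sell (0..1)
    mean_time_to_sell -- in the same time unit as the timestep
    dt                -- timestep length
    """
    # Per-step sell fraction implied by the mean time to sell
    rate = 1.0 - np.exp(-dt / mean_time_to_sell)
    unsold = 0.0
    sold = np.zeros(len(vested))
    for t, v in enumerate(vested):
        unsold += v * sell_pct   # newly vested tokens join the sell queue
        sold[t] = unsold * rate  # a fixed fraction of the queue is sold
        unsold -= sold[t]
    return sold

# Example: 1,000,000 tokens vest at t=0; the agent sells 80% of them
# with a 6-month mean time to sell, over a 24-month simulation.
vested = np.zeros(24)
vested[0] = 1_000_000
schedule = tokens_sold_per_step(vested, sell_pct=0.8, mean_time_to_sell=6.0)
```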

Amount of tokens added as liquidity to the markets. This could be a constant percentage of the tokens in the treasury, for instance, 5-10%. If there is a "Market Maker" agent, we would assign it 100%.


Once we've set up the token economy model, we'll have this dashboard. Here, we can experiment with the hypotheses mentioned by adjusting values using sliders on the left. The results of the simulation for each scenario will then be displayed in the charts on the right.

Users can interact with this dashboard. When they adjust the parameters, it triggers a complete simulation based on those inputs. Next, we'll explore how to build the structure of this simulation using our templates.

Translating the economy to the templates

It's a good idea to familiarize yourself with the structure of the spreadsheet before going into detail. If you've already done that, let's proceed!

Simulation definition

Base parameters

These are the standard settings for every project. They cover things like total supply, the token's initial price, and the simulation duration in months. By setting default, minimum, and maximum values for each parameter, we set the range users of the dashboard can experiment with.

Here we are specifying, as defaults, a token economy with a total supply of 1 billion tokens and a token price at TGE of 0.02 USD.

Static Parameters

The Static parameters are all the parameters that we wish to test in the dashboard and that play a role in the previously mentioned formulas.

For each of these parameters, we set a Default value for the simulations and define the minimum and maximum values that users can test. As an example, here, multiple Sales mean time parameters exist for the vesting agents. We can either give each agent a distinct strategy or handle them all equally.

Time Dependent Parameters

Any parameter that isn't constant can be introduced as a time-dependent parameter.

In this case, the time-dependent parameter is the daily trading volume, representing the protocol's demand KPI.

Imagine for a second that the fees of the protocol were supposed to change dynamically over time. Then, instead of a static parameter for those fees, we would define a time-dependent one, TDP_service_fee_percent.

Vesting

Time to define the vesting schedule. We should specify the agent that each entity in the vesting represents in our simulation. In the example above, although six entities exist in the vesting, we assume four of them act similarly and thus represent the same agent, A_stakeholders. Alternatively, each entity can represent a distinct agent for added granularity.

Besides stakeholders, we also have unique agents like the Treasury and the Market Maker.

In addition to the usual parameters seen in a token allocation, we have the column Vesting sales percentage, which in this case determines that everyone will sell their entire allocation except the Market Maker (later we will see that it uses 100% of its allocation to provide liquidity).

Each agent's vested tokens are sold at a different speed, determined by the Vesting sales mean time static parameters.

Agents

Now it is time to define each of the agents that participate in the simulation. We need to incorporate all of the vesting agents as well as the new ones, such as the Stakers agent.

Beyond their definition, this is also where we specify whether they are going to supply liquidity or participate in staking mechanics.

  • Liquidity mechanics are managed through the liquidity provision variable. If an agent, like the Market Maker here, has a "1", that means 100% of the tokens the agent owns will be used to supply liquidity to the markets.

  • Staking mechanics are stipulated through a ratio of the tokens that the agents hold. Since the "Staker" agent only stakes, we define that 100% of the tokens they have will be staked.

  • Holding mechanics determine how many tokens each agent will hold at each time step. Here we are introducing the following formula:
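The exact formula lives in the spreadsheet; a reconstruction consistent with the explanation below (the damping factor $\alpha$ and the symbol $H$ are our own notation) is:

$$H_t = H_{t-1} + \alpha \left(\text{VA\_competitive\_staking}_t - H_{t-1}\right), \qquad 0 < \alpha \le 1$$

where $H_t$ is the number of tokens the Stakers agent holds at timestep $t$.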

We can divide the formula into two parts. VA_competitive_staking is the amount of tokens that should be staked to guarantee the expected APY. If, for instance, the amount of fees increases, the amount of tokens required increases as well; this translates into buying pressure equal to the difference between the amount of tokens required in the previous timestep and the amount required now.

The additional part is a way of ensuring that the amount of tokens bought by stakers does not swing wildly from one time step to the next. This is especially important to dampen instabilities such as: high price → fewer tokens needed to stake → more tokens sold → lower price → more tokens needed to stake → more tokens bought → high price.
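A tiny sketch of this damping idea (the value of alpha and the setup are assumptions for demonstration only):

```python
def staker_holdings(targets, alpha=0.3):
    """Move holdings only a fraction `alpha` of the way toward the
    competitive-staking target each step; alpha = 1.0 would jump straight
    to the target and reintroduce the oscillation described above."""
    holdings = [targets[0]]
    for target in targets[1:]:
        holdings.append(holdings[-1] + alpha * (target - holdings[-1]))
    return holdings
```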

Variables

Here we need to define each of the variables that are going to be used in the flows, in the staking amount, and for every KPI that we want to track.

In addition to the formulas described earlier, we have the variable VA_eavg_token_price. This variable is a weighted average of the token price, used to smooth some of the agents' decisions: instead of using a price that changes all the time to calculate the amount of tokens required for staking, we use the weighted average.
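An exponentially weighted moving average is one standard way to write such a smoothing; the exact weighting used in the template is not shown here, so take $\lambda$ as an assumption:

$$\text{VA\_eavg\_token\_price}_t = \lambda \, p_t + (1 - \lambda) \, \text{VA\_eavg\_token\_price}_{t-1}, \qquad 0 < \lambda < 1$$

where $p_t$ is the token price at timestep $t$.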

Flows

Finally, we have the flows. In this economy, besides the token demand created through staking, there is only one flow that generates token demand: the buyback + burn.

The tokens purchased and the tokens burnt are the same amount, which is why the value of the token flow F_burn_fees equals F_burn_fees_purchase_token. In addition, to calculate the amount of tokens that should be purchased, we have to translate the amount of USD generated as fees for burning into tokens, hence the division by the token price.
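In symbols, reusing the flow names from the spreadsheet (the fee term follows the split defined earlier):

$$\texttt{F\_burn\_fees\_purchase\_token}_t = \frac{\text{Fees}_{\text{burn},t}}{\text{token price}_t}, \qquad \texttt{F\_burn\_fees}_t = \texttt{F\_burn\_fees\_purchase\_token}_t$$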


We are now done with the simulation setup. With the information filled in so far, we could already recreate the whole simulation.

Now it is time to define the custom metrics/KPIs and dashboards that we want, and the order in which they appear.

Visualization definition

Metrics

Here in Metrics, we mainly focus on transforming the variables that generate token demand at each timestep into a meaningful timeframe for reporting, in this case, months.

There are some predefined variables that we do not need to define here; this section is only for metrics that are not required by the simulation itself and exist purely for reporting.

Graphs

As with Metrics, our tool has some predefined graphs that appear in the simulation by default. For this simulation, however, we also want to follow the KPIs of daily volume, the fees, the amount of staked tokens, and the inflation. That's why we add them here.

Dashboard configuration

We put all the static parameters in order. Here, we grouped them into "Design Parameters" vs "Hypotheses", but this grouping is entirely subjective.

Same thing for the graphs: choose the right order for your storytelling and you are ready to go.

KPIs panel configuration

For those simulations that should have a KPI dashboard panel, this last sheet is where it should be defined.

And that's it. We are ready to simulate!
