This article is written by the CoinEx Chain lab. CoinEx Chain is the world’s first public chain exclusively designed for DEX, and will also include a Smart Chain supporting smart contracts and a Privacy Chain protecting users’ privacy.
longcpp @ 20200618
This is Part 1 of the serialized articles aimed to explain the Tendermint consensus protocol in detail.
Part 1. Preliminary of the consensus protocol: security model and PBFT protocol
Part 2. Tendermint consensus protocol illustrated: two-phase voting protocol and the locking and unlocking mechanism
Part 3. Weighted round-robin proposer selection algorithm used in Tendermint project
Any consensus that is ultimately reached is a general agreement, that is, the majority opinion. The consensus protocol on which a blockchain system operates is no exception. As a distributed system, a blockchain aims to maintain the validity of the system. Intuitively, this validity has two meanings: first, there is no ambiguity in the system status, and second, the system can keep processing requests to update its status. The former corresponds to the safety requirement of distributed systems, the latter to the liveness requirement. Validity is maintained mainly by the consensus protocol; since such systems involve multiple nodes and possibly unstable network communication, this brings huge challenges to the design of consensus protocols.
The semi-synchronous network model and Byzantine fault tolerance

Researchers of distributed systems characterize the problems that may occur in nodes and network communication using node failure models and network models. The fail-stop failure in node failure models refers to the situation where a node stops running due to configuration errors or other reasons and thus cannot continue with the consensus protocol. This type of failure has no side effects on other parts of the distributed system beyond the node's own halt. However, for distributed systems such as public blockchains, the design of a consensus protocol must also consider deliberate misbehavior by nodes, not just their failure. All such incidents fall under the Byzantine failure model, which covers every unexpected situation that may occur on a node, from passive downtime to arbitrary deviations from the consensus protocol. For clarity, in the following, downtime failure refers to a node's passive halt, and Byzantine failure to any arbitrary deviation of a node from the consensus protocol.
Compared with the node failure model, which can be roughly divided into passive and active failures, modeling network communication is more difficult. The network itself suffers from instability and communication delay. Moreover, since all network communication is ultimately carried out by nodes, which may themselves suffer downtime or Byzantine failures, it is usually hard to tell, when a node does not receive another node's message, whether the failure arises from the node or from the network. Although network communication is affected by many factors, researchers found that network models can be classified by communication delay. For example, a node may fail to send packets due to a fail-stop failure, in which case the corresponding communication delay is unknown and can take any value. By communication delay, network models can be divided into three categories: the synchronous model, where the delay is bounded by a known constant; the asynchronous model, where the delay is unbounded; and the semi-synchronous model, where the delay is bounded by an unknown constant after an unknown Global Stabilization Time (GST).
The design and selection of consensus protocols for public chain networks that allow nodes to join and leave dynamically must consider possible Byzantine failures. Therefore, the consensus protocol of a public chain network is designed to guarantee the safety and liveness of the network under the semi-synchronous network model on the premise of possible Byzantine failures. Researchers of distributed systems point out that to ensure the safety and liveness of the system, the consensus protocol itself needs to meet three requirements: validity, agreement, and termination.
The CAP theorem and the Byzantine Generals Problem

In a semi-synchronous network, is it possible to design a Byzantine fault-tolerant consensus protocol that satisfies validity, agreement, and termination? How many Byzantine nodes can a system tolerate? The CAP theorem and the Byzantine Generals Problem answer these two questions and have thus become the basic guidelines for the design of Byzantine fault-tolerant consensus protocols.
In 1982, Lamport, Shostak, and Pease abstracted the design of consensus mechanisms in distributed systems as the Byzantine Generals Problem, which describes the following situation: several generals each lead an army in a war, and their troops are stationed in different places. The generals must formulate a unified action plan to win. However, since the camps are far away from each other, they can only communicate through messengers; in other words, they cannot gather in the same place to reach a consensus. Unfortunately, among the generals there are one or two traitors who intend to undermine the unified actions of the loyal generals by sending wrong information, while the messengers only deliver messages and do not produce messages themselves. It is assumed that each messenger can prove that the message he carries comes from a certain general, just as in a real BFT consensus protocol each node has a public and private key pair to establish encrypted communication channels, ensuring that messages cannot be tampered with in transit and that the receiver can verify the sender of each message. As already mentioned, any consensus ultimately reached represents the opinion of the majority. In the process of communicating about an attack or a retreat, each general likewise makes his decision based on the majority opinion among the information he has collected.
According to the research of Lamport et al., if 1/3 or more of the generals are traitors, the generals cannot reach a unified decision. For example, in the following figure, assume there are 3 generals and only 1 traitor. In the figure on the left, suppose General C is the traitor while A and B are loyal. A wants to launch an attack and informs B and C of this intention, yet the traitor C sends a message to B claiming that what he received from A was a retreat. In this case B cannot decide: he does not know who the traitor is, and the information he has received is insufficient. If instead A is the traitor, he can send different messages to B and C, and C faithfully reports to B what he received; B again receives conflicting information and cannot make any decision. In both cases, even if B had received consistent information, it would be impossible for him to spot the traitor between A and C. Therefore, in both situations shown in the figure below, the honest General B cannot make a choice.
According to this conclusion, when there are $n$ generals with at most $f$ traitors, the generals cannot reach a consensus if $n \leq 3f$, while with $n > 3f$ a consensus can be reached. In other words, when the number of Byzantine nodes $f$ reaches 1/3 of the total number of nodes $n$ in the system, i.e., $f \geq n/3$, no consensus protocol can bring all honest nodes to agreement; consensus is possible only when $f < n/3$. Without loss of generality, the subsequent discussion of consensus protocols assumes $n \geq 3f + 1$ by default.
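The bound above can be checked with a short sketch. This is illustrative only; the function names are assumptions for this example, not taken from any real library.

```python
# Sketch of the n >= 3f + 1 fault-tolerance bound discussed above.

def max_tolerable_faults(n: int) -> int:
    """Largest number of Byzantine nodes an n-node system can tolerate."""
    return (n - 1) // 3

def can_reach_consensus(n: int, f: int) -> bool:
    """Consensus among honest nodes is possible only when n >= 3f + 1."""
    return n >= 3 * f + 1

assert not can_reach_consensus(3, 1)   # 3 generals, 1 traitor: impossible
assert can_reach_consensus(4, 1)       # 4 generals, 1 traitor: possible
assert max_tolerable_faults(100) == 33
```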
The conclusion reached by Lamport et al. on the Byzantine Generals Problem draws a line between the possible and the impossible in the design of Byzantine fault-tolerant consensus protocols. Within the possible range, how should a consensus protocol be designed? Can both the safety and liveness of a distributed system be fully guaranteed? Brewer provided an answer with his CAP theorem in 2000: a distributed system involves the following three basic attributes, but any distributed system can meet only two of the three at the same time.
A distributed system aims to provide consistent services. The consistency attribute therefore requires that no two nodes in the system provide conflicting or expired status information, which ensures the safety of the distributed system. The availability attribute requires that the system can continuously update its status, guaranteeing the availability of the distributed system. The partition tolerance attribute is related to network communication delay: under the semi-synchronous network model, it corresponds to the period before GST, when the network is asynchronous and the communication delay is unknown. In this condition, communicating nodes may not receive each other's messages, and the network is considered to be in a partitioned status. Partition tolerance requires the distributed system to function normally even under network partitions.
The proof of the CAP theorem can be demonstrated with the following diagram. The curve represents the network partition, and each network has four nodes, distinguished by the numbers 1, 2, 3, and 4. The distributed system stores color information, and all the status information stored by all nodes is blue at first.
The discovery of the CAP theorem seems to declare the aforementioned goals of the consensus protocol impossible. However, a careful look shows that those are all extreme cases, such as network partitions that block information transmission, which are rare, especially in a P2P network. In the second case, a system rarely returns exactly the possibly stale information held by node 2; the general practice is to query other nodes and, after a while, return what it believes is the latest status, regardless of whether it has received the requested information from other nodes. Therefore, although the CAP theorem points out that no distributed system can satisfy all three attributes at once, it is not a binary choice: the designer of a consensus protocol can weigh the three attributes according to the needs of the distributed system. However, since communication delay is inherent to distributed systems, one always has to choose between availability and consistency while ensuring a certain degree of partition tolerance. Concretely, in the second case it comes down to the value node 2 returns: a possibly outdated value or no value. Returning the possibly outdated value may violate consistency but preserves availability; returning no value sacrifices availability but preserves consistency. The Tendermint consensus protocol introduced later chooses consistency in this trade-off; in other words, it loses availability in some cases.
The genius of Satoshi Nakamoto is that, within the constraints of the CAP theorem, he managed to reach a reliable Byzantine consensus in a distributed network by combining the PoW mechanism, the Satoshi Nakamoto consensus, and economic incentives with an appropriate parameter configuration. Whether Bitcoin's mechanism design solves the Byzantine Generals Problem has remained a dispute among academics. Garay, Kiayias, and Leonardos analyzed the link between Bitcoin's mechanism design and Byzantine consensus in detail in their paper The Bitcoin Backbone Protocol: Analysis and Applications. In simple terms, the Satoshi consensus is a probabilistic Byzantine fault-tolerant consensus protocol whose guarantees depend on conditions such as the network communication environment and the proportion of malicious hashrate. When the proportion of malicious hashrate does not exceed 1/2 in a good network communication environment, the Satoshi consensus can reliably solve the Byzantine consensus problem in a distributed environment. However, when the environment turns bad, even with the proportion within 1/2, the Satoshi consensus may still fail to reach a reliable conclusion on the Byzantine consensus problem. It is worth noting that the quality of the network environment is relative to Bitcoin's block interval: the 10-minute block generation interval ensures that the system is in a good network communication environment in most cases, given that broadcasting a block through the distributed network usually takes just several seconds. In addition, economic incentives motivate most nodes to actively comply with the protocol. It is thus considered that, with the current network parameter configuration and mechanism design, Bitcoin has reliably solved the Byzantine consensus problem in the current network environment.
Practical Byzantine Fault Tolerance (PBFT)

It is not an easy task to design a Byzantine fault-tolerant consensus protocol in a semi-synchronous network. The first practically usable one is Practical Byzantine Fault Tolerance (PBFT), designed by Castro and Liskov in 1999 and the first of its kind with polynomial complexity. For a distributed system with $n$ nodes, the communication complexity is $O(n^2)$. Castro and Liskov showed in the paper that transforming a centralized file system into a distributed one with the PBFT protocol slowed overall performance by only 3%. In this section we briefly introduce the PBFT protocol, paving the way for the detailed explanation of the Tendermint protocol and its improvements over PBFT.
A PBFT deployment with $n = 3f + 1$ nodes can tolerate up to $f$ Byzantine nodes. The original PBFT paper requires full connection among all $n$ nodes, that is, any two of the $n$ nodes must be connected. All nodes jointly maintain the system status through network communication. Unlike the Bitcoin network, where a node can join or exit the consensus process through hashrate mining at any time, the set of participating nodes in PBFT is managed by an administrator and must be determined before the protocol starts. All nodes in PBFT are divided into two categories: a master node and slave nodes. There is only one master node at any time, and all nodes take turns being the master. Nodes run in rotation periods called views, and in each view the master node is reelected. The master selection algorithm in PBFT is very simple: nodes become the master in turn by index number. In each view, all nodes try to reach a consensus on the system status. It is worth mentioning that each node in PBFT has its own digital signature key pair, and all sent messages (including request messages from the client) must be signed to ensure the integrity of the message in transit and its traceability (the digital signature reveals who sent a message).
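As a minimal sketch of the round-robin master selection just described, assuming nodes indexed 0 to n-1 and an integer view number (both illustrative conventions, not mandated by the PBFT paper):

```python
# In view v, the master (primary) is simply the node with index v mod n.

def master_for_view(view: int, n: int) -> int:
    """Round-robin master selection over n nodes indexed 0..n-1."""
    return view % n

n = 4
assert master_for_view(0, n) == 0
assert master_for_view(5, n) == 1  # views wrap around the node indices
```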
The following figure shows the basic flow of the PBFT consensus protocol. Assume that the master node of the current view is node 0. Client C initiates a request to master node 0; after receiving the request, the master broadcasts it to all slave nodes. The nodes process the request of client C and return their results to the client. Once the client receives $f+1$ identical results from different nodes (distinguished by their signature values), it can take that result as the final result of the entire operation. Since the system has at most $f$ Byzantine nodes, at least one of the $f+1$ results comes from an honest node, and the safety of the consensus protocol guarantees that all honest nodes reach consensus on the same status. Feedback from one honest node is therefore enough to confirm that the request has been processed by the system.
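The client-side acceptance rule above can be sketched as follows. The reply format and helper name are assumptions for illustration, not part of the PBFT specification:

```python
from collections import Counter
from typing import Optional

def accept_reply(replies: dict[int, str], f: int) -> Optional[str]:
    """replies maps node id -> result reported by that node.

    Returns the accepted result once some result is reported by at
    least f+1 distinct nodes (so at least one is honest), else None.
    """
    if not replies:
        return None
    result, count = Counter(replies.values()).most_common(1)[0]
    return result if count >= f + 1 else None

f = 1  # up to one Byzantine node in a 4-node system
assert accept_reply({0: "ok", 1: "ok"}, f) == "ok"        # f+1 matching replies
assert accept_reply({0: "ok", 3: "tampered"}, f) is None  # no result has f+1 votes
```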
For the status synchronization of all honest nodes, the PBFT protocol imposes two constraints on each node: first, all nodes must start from the same status; second, the status transition of all nodes must be deterministic, that is, given the same status and request, the results of the operation must be the same. Under these two constraints, as long as the entire system agrees on the processing order of all transactions, the status of all honest nodes will be consistent. This is also the main purpose of the PBFT protocol: to reach a consensus among all nodes on the order of transactions, thereby ensuring the safety of the entire distributed system. In terms of availability, the PBFT consensus protocol relies on a timeout mechanism to find anomalies in the consensus process and start the View Change protocol in time to try to reach a consensus again.
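The deterministic state-transition constraint can be illustrated with a toy replicated state machine; the transition function here is a made-up example, not PBFT's actual state model:

```python
# Starting from the same state and applying the same ordered requests
# must always yield the same state.

def apply(state: tuple, request: str) -> tuple:
    """A deterministic transition: append the request to the state."""
    return state + (request,)

def replay(initial: tuple, requests: list[str]) -> tuple:
    s = initial
    for r in requests:
        s = apply(s, r)
    return s

# Two honest nodes that agree on the request order end in the same state.
assert replay((), ["a", "b"]) == replay((), ["a", "b"])
# A different order yields a different state, which is why PBFT must
# agree on transaction order, not just on the set of transactions.
assert replay((), ["a", "b"]) != replay((), ["b", "a"])
```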
The figure above shows a simplified workflow of the PBFT protocol, where C is the client and 0, 1, 2, and 3 represent the four nodes. Specifically, 0 is the master node of the current view, 1, 2, and 3 are slave nodes, and node 3 is faulty. Under normal circumstances, the PBFT consensus protocol reaches consensus on the order of transactions between nodes through a three-phase protocol. The three phases are Pre-Prepare, Prepare, and Commit.
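Although the full three-phase state machine is more involved, the quorum rule at its core can be sketched as follows. This is a simplification assuming $n = 3f + 1$ and counting only matching votes:

```python
# A node advances past the Prepare or Commit phase only after collecting
# 2f + 1 matching votes; any two such quorums intersect in at least one
# honest node, which is what makes the agreed order safe.

def quorum(n: int) -> int:
    f = (n - 1) // 3
    return 2 * f + 1

class Phase:
    """Vote collector for a single phase (Prepare or Commit)."""

    def __init__(self, n: int):
        self.n = n
        self.votes: set[int] = set()

    def add_vote(self, node_id: int) -> bool:
        """Record a vote; return True once the 2f+1 quorum is reached."""
        self.votes.add(node_id)
        return len(self.votes) >= quorum(self.n)

prepare = Phase(n=4)          # f = 1, quorum = 3
assert not prepare.add_vote(0)
assert not prepare.add_vote(1)
assert prepare.add_vote(2)    # third vote reaches the 2f+1 quorum
```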
In the three-phase protocol execution of the PBFT protocol, besides maintaining the status information of the distributed system, each node also needs to log all kinds of consensus messages it receives. The gradual accumulation of logs consumes considerable system resources, so the PBFT protocol additionally defines checkpoints to help nodes with garbage collection. A checkpoint can be set every 100 or 1000 sequence numbers according to the request sequence number. After the client request at a checkpoint is executed, the node broadcasts a CHECKPOINT message carrying the sequence number and a digest of its status; once a node has collected $2f+1$ matching CHECKPOINT messages, the checkpoint becomes stable and the log entries before it can be safely discarded.
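The checkpoint-based garbage collection can be sketched as follows, assuming an interval of 100 sequence numbers (the interval and helper names are illustrative):

```python
CHECKPOINT_INTERVAL = 100

def is_checkpoint(seq: int) -> bool:
    """A checkpoint is taken every CHECKPOINT_INTERVAL sequence numbers."""
    return seq % CHECKPOINT_INTERVAL == 0

def collect_garbage(log: dict[int, str], stable_checkpoint: int) -> dict[int, str]:
    """Drop logged messages at or below the latest stable checkpoint."""
    return {seq: msg for seq, msg in log.items() if seq > stable_checkpoint}

log = {99: "m1", 100: "m2", 101: "m3"}
# After checkpoint 100 becomes stable, only later entries are kept.
assert collect_garbage(log, 100) == {101: "m3"}
```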
The three-phase protocol of PBFT ensures consistency in the processing order of client requests, and the checkpoint mechanism helps nodes perform garbage collection while further ensuring the status consistency of the distributed system; both guarantee the safety of the distributed system as discussed above. How, then, is the availability of the distributed system guaranteed? In the semi-synchronous network model, a timeout mechanism related to the delays in the network environment is usually introduced: it is assumed that the network delay has a known upper bound after GST. In practice, an initial timeout value is set according to the network conditions where the system is deployed, and when a timeout event occurs, besides triggering the corresponding processing flow, additional mechanisms readjust the waiting time. For example, an algorithm like TCP's exponential back-off can be adopted to adjust the waiting time after a timeout event.
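A minimal sketch of the TCP-style exponential back-off mentioned above: after each timeout the waiting window doubles, up to a cap (the initial value and cap here are arbitrary examples):

```python
def next_timeout(current: float, cap: float = 60.0) -> float:
    """Double the waiting time after a timeout event, bounded by cap."""
    return min(current * 2, cap)

t = 1.0
for _ in range(3):          # three consecutive timeout events
    t = next_timeout(t)
assert t == 8.0             # 1 -> 2 -> 4 -> 8 seconds
assert next_timeout(50.0) == 60.0  # further doubling is capped
```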
To ensure the availability of the system, the PBFT protocol also introduces a timeout mechanism. In addition, since the master node itself may suffer a Byzantine failure, PBFT must ensure the safety and availability of the system in that case as well. When the master node fails, for example when a slave node does not receive the PRE-PREPARE message from the master within the time window, or receives a PRE-PREPARE message it determines to be illegitimate, the slave node can broadcast a VIEW-CHANGE message.
The VIEW-CHANGE message contains a lot of information: for example, C contains $2f+1$ pieces of signature information, and P contains several signature sets, each with $2f+1$ signatures. At least $2f+1$ nodes need to send VIEW-CHANGE messages to prompt the system into the next view, which means that, in addition to the complex logic of constructing the VIEW-CHANGE and NEW-VIEW messages, the communication complexity of the view change protocol is $O(n^2)$. Such complexity limits the PBFT protocol to only a few nodes; with 100 nodes, PBFT is usually too complex to deploy in practice. It is worth noting that some materials inappropriately attribute the communication complexity of PBFT to the full connection among the $n$ nodes. Replacing the fully connected topology with the P2P topology based on distributed hash tables commonly used in blockchain projects conveniently removes the cost of full connection, but it remains difficult to reduce the communication complexity of the view change process. In recent years, researchers have proposed reducing the communication in this step with aggregate signature schemes: with this technology, $2f+1$ pieces of signature information can be compressed into one, reducing the communication volume during view change.
With that as a starting point, it’s now becoming increasingly evident that Bitcoin MAY be a creation of the NSA and was rolled out as a “normalization” experiment to get the public familiar with digital currency. Once this is established, the world’s fiat currencies will be obliterated in an engineered debt collapse, then replaced with a government-approved cryptocurrency with tracking of all transactions and digital wallets by the world’s western governments.
With the onset of the Information Age, our nation is becoming increasingly dependent on network communications. Computer-based technology is significantly impacting our ability to access, store, and distribute information.
An electronic payment protocol involves a series of transactions, resulting in a payment being made using a token issued by a third party.
The most common example is the electronic approval process used to complete a credit card transaction; neither payer nor payee issues the token in an electronic payment.
The untraceability property of electronic cash creates problems in detecting money laundering and tax evasion because there is no way to link the payer and payee. To counter this problem, it is possible to design a system that has an option to restore traceability using an escrow mechanism. They're talking about blockchain and tracing money in fucken 1997.
Three divisible off-line cash schemes have been proposed, but at the cost of longer transaction time and additional storage: Eng and Okamoto's scheme, Okamoto's scheme, and Okamoto and Ohta's scheme.
Eng and Okamoto's divisible scheme is based on the "cut and choose" method.
Okamoto's scheme is much more efficient; it is based on Brands' scheme but will also work on Ferguson's scheme.
Okamoto and Ohta's scheme is the most efficient of the three, but also the most complicated.
It relies on the difficulty of factoring and on the difficulty of computing discrete logarithms.
It is evident that SHA-256, the algorithm Satoshi used to secure Bitcoin, was not available because it came about in 2001. However, SHA-1 would have been available to them, having been published in 1993.
On top of the fact that the NSA authored a technical paper on cryptocurrency long before the arrival of Bitcoin, the agency is also the creator of the SHA-256 hash upon which every Bitcoin transaction in the world depends. “The integrity of Bitcoin depends on a hash function called SHA-256, which was designed by the NSA and published by the National Institute for Standards and Technology (NIST).”
“If you assume that the NSA did something to SHA-256, which no outside researcher has detected, what you get is the ability, with credible and detectable action, they would be able to forge transactions. The really scary thing is somebody finds a way to find collisions in SHA-256 really fast without brute-forcing it or using lots of hardware and then they take control of the network,” said cryptography researcher Matthew D. Green of Johns Hopkins University.
Chaum developed ecash way back in 1983, long before the large-scale propagation of the World Wide Web. Chaum was a proponent of anonymity in transactions, with the express demand that banks and governments would have no way of knowing who had purchased what.
Although Bitcoin adds mining and a shared, peer-to-peer blockchain transaction authentication system to this structure, it’s clear that the NSA was researching cryptocurrencies long before everyday users had ever heard of the term.
‘I wouldn’t be surprised if he is actually an American working for the NSA specializing in cryptography. Then he got sick of the government’s monetary policies and decided to create Bitcoin.’ The Vitalik Buterin account then replied: ‘Or the NSA itself decided to create Bitcoin.’ Here you see a Freedom of Information Act letter to the NSA asking about their involvement in BTC, and they say it’s classified lol
Open Financial System✔︎ Pretty easy
Open financial system is defined as being available to everyone and not controlled by a single entity.
Innovation or Efficiency Gains✔︎ Again, pretty easy, Nimiq is bringing a huge leap forward in terms of accessibility and integration of cryptocurrencies.
New or improved technology which helps solve a problem, creates a new market, addresses an unmet market need, or creates value for network participants.
Economic Freedom✔︎ Basic requirement of any real cryptocurrency, easily fulfilled by Nimiq.
A measure of how easy it is for members of a society to participate in the economy. The technology enables individuals to have more control over their own wealth and property, or the freedom to consume, produce, invest, or work as they choose.
Equality of Opportunity✔︎ Nimiq is the most accessible crypto on the market right now, you don't even have to install something to begin using it or mining it.
This technology is accessible to use by anyone with a smartphone or access to the internet. It contributes to the broader mission of building the on-ramps to Finance 2.0.
Decentralization〜 The architecture of Nimiq is decentralized however the hashrate is clearly not right now.
The network is public, decentralized, and enables trustless consensus.
Security & Code✔︎ Nimiq team has done everything it could to ensure the quality control of the code.
Assessment of engineering and product quality.
Source Code〜 Of course Nimiq is open-source but the documentation is still weak, the good thing is that it's being redone.
Open-source code, well-documented peer-review, and testing by contributors separate from the initial development team on GitHub, etc.
Prototype✔︎ Well, the Nimiq Network is live.
There is a working alpha or beta product on a testnet or mainnet.
Security & Code✔︎ Nimiq team has set a bug bounty program and has been very transparent on the issue of the 25th.
Demonstrable record of responding to and improving the code after a disclosure of vulnerability, and a robust bug bounty program or third party security audit.
Team✔︎ You can even see them on video hehe.
Assessment of short-term operating expectations and decision making.
Founders and Leadership✔︎ The profiles of the team are all known and easily checked.
Able to articulate vision, strategy, use cases or drive developmental progress. Has a track record of demonstrable success or experience. If information is available, Coinbase will apply "know your client" standards to publicly visible founders or leaders.
Engineering✔︎ They released the product which is a damn good track record in a sector full of vaporwares.
Assessment of the engineering team and their track record of setting and achieving deadlines.
Business & Operations✔︎ There have been some "lean" periods in terms of communication, but overall the team has never stopped interacting with us. When it comes to cash management, the dev team should be a model for everyone else with its last transparency report.
History of interacting with the community, setting a reasonable budget and managing funds, and achieving project milestones. Thoughtful cash management is a key driver of the project's long term viability.
Specialized Knowledge and Key People〜 Let's be honest: it is right now, that said the project protocol isn't even 6 months old.
The project leadership is not highly centralized or dependent on a small number of key persons. Specialized knowledge in this field is not limited to a small group of people.
Governance✔︎ Nimiq has a foundation.
Assessment of long-term operating expectations and decision making.
Consensus Process✔︎ Well it's like Bitcoin, node operators decide whether they want or not to follow an update.
There is a structured process to propose and implement major updates to the code, or there is a system or voting process for conflict resolution.
Future Development Funding✔︎ Yes, see the intended use of funds.
There is a plan or built-in mechanism for raising, rewarding, or allocating funds to future development, beyond the funds raised from the ICO or traditional investors.
White Paper〜 There is the "high level" whitepaper of the ICO however it doesn't really explain in detail how Nimiq works.
Justifies the use case for a decentralized network and outlines project goals from a business and technology perspective. While a white paper is important for understanding the project, it is not a requirement.
Scalability✔︎ Like pretty much every project, that's what Robin is currently working on by the way.
Assessment of a network's potential barriers to scaling and ability to grow and handle user adoption.
Roadmap✔︎ We should have the roadmap soon™️.
Clear timeline with stages of development, reasonable project milestones, or built-in development incentives.
Network Operating Costs✔︎ Yes, the team has been considering second layer solutions like Lightning Network or Liquidity Network.
The barriers to scaling the network have been identified, or solutions have been proposed or discussed. The resource consumption costs for validators and miners are not the main deterrents to participation.
Practical Applications✔︎ The new Nimiq shop is a great example of it.
There are examples of real-world implementation or future practical applications.
The asset is a separate blockchain with a new architecture system and network, or it leverages an existing blockchain for synergies and network effects
Regulation✔︎ I'm not a lawyer but I guess it can
Can Coinbase legally offer this asset?
US Securities Law〜 Hard to say, they have this checklist and the fact that some NIM were given against NET which were distributed through an ICO makes it kind of blurry
The asset is not classified as a security using Coinbase's Securities Law Framework.
Compliance Obligations✔︎ Conversion from NET to NIM went through a KYC specifically for that.
The asset would not affect Coinbase or Coinbase's ability to meet compliance obligations, which include Compliance Obligations, Anti-Money Laundering (AML) program and obligations under government licenses in any jurisdiction (e.g. Money Transmitter Licenses).
Integrity & Reputational Risk✔︎ I don't see why.
Would listing the asset be inconsistent with Coinbase policy?
User Agreement✔︎ I read it and it doesn't.
The asset, network, application or fundamental nature of the project does not constitute a Prohibited Business under Appendix 1 of the user Agreement.
Liquidity Standards〜 Weak liquidity right now.
How liquid is this asset?
Global Market Capitalization〜 Weak capitalization.
How does the market capitalization compare to the total market capitalizations of other assets?
Asset Velocity〜 Again, weak velocity.
Trade velocity, or turnover, is a significant part of market capitalization. This is a measure of how easily the asset can be converted to another asset.
Circulation✔︎ It's available.
For service or work tokens, new supply is created through consensus protocols. If the supply is capped, then a material amount of the total tokens should be available to the public.
Global Distribution✔︎ HitBTC/Tradesatoshi/LAtoken/BTC-alpha/Nimex.
Where is this asset available to trade?
Total # of Exchanges✔︎ 5.
The number of exchanges that support the asset.
Geographic Distribution✔︎ It's tradable everywhere and I guess you can count Agoras as a DEX.
The asset is not limited to a single geographic region and is available to trade on decentralized exchanges.
Fiat and Crypto Pairs〜 Crypto pairs exist, but fiat pairs don't.
Fiat and crypto trading pairs exist.
Exchange Volume Distribution✔︎ It is.
If secondary markets exist, then volume should be relatively distributed across exchanges.
Demand✔︎ The Nimiq community, I guess, and of course it does.
What is driving demand for this asset and does it lead to stronger network effects?
Consumer Demand〜 It would be presumptuous to say there is customer demand for Nimiq right now.
Customer demand is carefully considered, however, any asset which is created from a fork, airdrop, or automated token distribution is subject to a separate set of criteria.
Developers and Contributors✔︎ Nimiq already has a flourishing developer base.
Growing developer base and measured progress as defined by the number of repositories, commits, and contributors.
Community Activity✔︎ Yes it has.
Dedicated forums are available where developers, supporters, users, and founders can interact and build a community and offer transparency into the project. The team provides regular updates or is responsive to feedback.
External Stakeholders〜 There aren't any as far as I know.
There are investments from venture firms or hedge funds which have experience working with crypto companies or projects. The project has corporate partnerships, joint ventures, or dedicated consortiums.
Change in Market Capitalization〜 Sadly not.
The market capitalization has grown after the network has activated, demonstrating increased demand for the asset after the project's launch.
Nodes✔︎ You can even check them on a map on https://miner.nimiq.com/
Growing # of nodes on the underlying blockchain. The project has a globally distributed node network, meaning operating nodes are not contained in a single country or geographic region.
Transactions, Fees & Addresses✔︎ Check the stats
Growing # of transactions and fees paid over time. Growing # of asset or token holders, which is an indicator of asset distribution.
Economic Incentives✔︎ It's a PoW coin so yes.
Are the economic structures designed to incentivize all parties to act in the best interest of the network?
Type of Token✔︎ It's not backed by anything but the work done to generate it.
It is a service, work, or hybrid token. Tokens backed by fiat or other physical assets are categorized as US securities and will not be considered at this time.
Token Utility✔︎ Nimiq is a general payment protocol.
There is utility from obtaining, holding, participating, or spending the token. The team identifies a clear and compelling reason for the native digital asset to exist (i.e. the main purpose is not fundraising).
Inflation (Money Supply)✔︎ You can check the inflation curve here.
There is an algorithmically programmed inflation rate which incentivizes security and network effects. Or, if the total supply is capped, then a majority of the tokens should be available for trade when the network launches.
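For a sense of what an "algorithmically programmed inflation rate" looks like in practice, here is a minimal sketch using a Bitcoin-style geometric halving schedule. This is a generic illustration with Bitcoin's parameters, not Nimiq's actual emission curve (which is linked above):

```python
# Hedged sketch: a Bitcoin-style halving schedule, NOT Nimiq's actual curve.

def cumulative_supply(height: int,
                      initial_reward: float = 50.0,
                      halving_interval: int = 210_000) -> float:
    """Total coins emitted after `height` blocks under geometric decay:
    the per-block reward halves every `halving_interval` blocks."""
    supply, reward = 0.0, initial_reward
    remaining = height
    while remaining > 0 and reward > 0:
        blocks = min(remaining, halving_interval)
        supply += blocks * reward
        remaining -= blocks
        reward /= 2
    return supply

# The emission asymptotically approaches a hard cap of
# initial_reward * halving_interval * 2 coins (21M with these parameters).
print(cumulative_supply(210_000))   # 10500000.0 -- after the first era
print(cumulative_supply(630_000))   # 18375000.0 -- after three eras
```

The point of such a schedule is that early rewards bootstrap security while the capped total supply keeps long-run inflation predictable.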
Rewards and Penalties✔︎ Yes
There are mechanisms (such as transaction fees) which incentivize miners, validators, and other participants to exhibit 'good' behavior. Conversely, there are mechanisms which deter 'bad' behavior.
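The incentive logic of a PoW coin like Nimiq reduces to a simple model: miners collect the block subsidy plus transaction fees only when the block they produce is valid and accepted, while an invalid or rejected block earns nothing. A toy sketch (function names and numbers are illustrative, not Nimiq's actual reward rules):

```python
# Toy incentive model, not Nimiq's actual reward logic.

def miner_revenue(block_subsidy: float, fees: list[float],
                  block_valid: bool) -> float:
    """Miners earn subsidy + fees only for valid, accepted blocks;
    an invalid block is orphaned and earns nothing, so the hash power
    spent on it is the implicit penalty for bad behavior."""
    if not block_valid:
        return 0.0
    return block_subsidy + sum(fees)

print(miner_revenue(4000.0, [2.0, 1.0, 0.5], block_valid=True))   # 4003.5
print(miner_revenue(4000.0, [2.0, 1.0, 0.5], block_valid=False))  # 0.0
```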
Security✔︎ The ICO smart contract was audited, and they haven't lost the funds yet, so I guess it's secure haha.
There is a focus on stringent security protocols and best practices to limit scams, hacks, and theft of funds.
Participation Equality✔︎ The number of NIM distributed through NET is only 7% in any case.
Best efforts by the team to allow a fair distribution of tokens (i.e. setting initial individual purchase caps to limit the risk of a small number of investors taking a majority of the supply).
Team Ownership✔︎ See the vesting schedule
The ownership stake retained by the team is a minority stake. There should be a lock-up period and reasonable vesting schedule to ensure the team is economically incentivized to improve the network into the future.
Transparency✔︎ See the transparency report.
The team should be available and responsive to questions or feedback about the product, token sale, or use of funds across multiple forums.
Total Supply✔︎ All information is freely available online.
The team should sell a fixed percentage of the total supply, and participants should know the percentage of total supply that their purchase represents, or have a clear understanding of the inflation rate.
Ethics or Code of Conduct✔︎ Check it here
White paper or project website should have an ethical or professional code of conduct.